This commit is contained in:
parent 5a71a9d5dd
commit 81f49f87d3
16 changed files with 267 additions and 202 deletions
@@ -21,11 +21,11 @@ The observation of the success or failure of these analyses enables us to answer
 / RQ3: Does the reusability of tools change when analyzing goodware compared to malware? <rq-3>

 /*
-As a summary, the contributions of this paper are the following:
+As a summary, the contributions of this chapter are the following:

 - We provide containers with a compiled version of all studied analysis tools, which ensures the reproducibility of our experiments and gives other researchers an easy way to analyze applications. Additionally, recipes for rebuilding these containers are provided.
 - We provide a recent dataset of #NBTOTALSTRING applications balanced over the time interval 2010-2023.
-- We point out which static analysis tools of Li #etal SLR paper@Li2017 can safely be used, and we show that #resultunusable of the evaluated tools are unusable (considering that a tool that fails more than 50% of the time is unusable). In total, the success rate of the tools we could run is #resultratio on our dataset.
+- We point out which static analysis tools of Li #etal SLR~@Li2017 can safely be used, and we show that #resultunusable of the evaluated tools are unusable (considering that a tool that fails more than 50% of the time is unusable). In total, the success rate of the tools we could run is #resultratio on our dataset.
 - We discuss the effect of application features (date, size, SDK version, goodware/malware) on static analysis tools, and the nature of the issues we found, by studying statistics on the errors captured during our experiments.
 */