In this chapter, we study the reusability of open-source static analysis tools that appeared between 2011 and 2017 on a recent Android dataset.
The scope of our study is *not* to quantify whether the output results are accurate in order to ensure reproducibility, because the studied static analysis tools all pursue different goals.
Instead, we take as a hypothesis that the provided tools compute the intended result, but may crash or fail to produce that result due to the evolution of the internals of Android applications, raising unexpected bugs during an analysis.
This chapter intends to show that sharing the software artefacts of a paper may not be sufficient to ensure that the provided software will be reusable.

Thus, our contributions are the following.
We carefully retrieved static analysis tools for Android applications that were selected by Li #etal~@Li2017 between 2011 and 2017.
#jm-note[Many of those tools were presented in @sec:bg-static.][Yes but not really, @sec:bg-static does not present the contributions in detail \ FIX: develop @sec:bg-static]
We contacted the authors whenever possible to select the best candidate versions and to confirm the correct usage of the tools.
We rebuilt the tools in their original environment and share the resulting Docker images.#footnote[on Docker Hub as `histausse/rasta-<toolname>:icsr2024`]
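As an illustration, the images can be retrieved directly from Docker Hub; the following minimal sketch uses the docker-py SDK, and `flowdroid` is only a hypothetical value for the `<toolname>` placeholder, not a confirmed image name:

```python
# Minimal sketch: pull one of the shared images from Docker Hub.
# Assumptions: the docker-py package is installed and a Docker daemon is
# running; "flowdroid" is a hypothetical value for the <toolname> placeholder.
import docker

client = docker.from_env()
image = client.images.pull("histausse/rasta-flowdroid", tag="icsr2024")
print(image.tags)
```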
We evaluated the reusability of the tools by measuring the number of successful analyses of applications taken from the Drebin dataset~@Arp2014 and from a custom dataset that contains more recent applications (#NBTOTALSTRING in total).
The observation of the success or failure of these analyses enables us to answer the following research questions:

/ RQ1: Which Android static analysis tools that are more than 5 years old are still available and can be reused without crashing with a reasonable effort? <rq-1>
/ RQ2: How has the reusability of tools evolved over time, especially when analysing applications that are more than 5 years away from the publication of the tool? <rq-2>
/ RQ3: Does the reusability of tools change when analysing goodware compared to malware? <rq-3>

/*
As a summary, the contributions of this chapter are the following:
*/

The chapter is structured as follows.
@sec:rasta-methodology presents the methodology employed to build our evaluation process, and @sec:rasta-xp gives the associated experimental results.
@sec:rasta-failure-analysis investigates the reasons behind the observed failures of some of the tools.
We then compare, in @sec:rasta-soa-comp, our results with the contributions presented in @sec:bg.
In @sec:rasta-reco, we give recommendations for tool development that we drew from our experience running our experiment.
Finally, @sec:rasta-limit lists the limits of our approach, @sec:rasta-futur presents further avenues that we did not have time to pursue, and @sec:rasta-conclusion concludes the chapter.