some work on rasta

This commit is contained in:
Jean-Marie Mineau 2025-08-07 00:06:29 +02:00
parent 2e52599a7c
commit 4ad17d2484
Signed by: histausse
GPG key ID: B66AEEDA9B645AD2
5 changed files with 56 additions and 13 deletions

@@ -1,17 +1,18 @@
-#import "../lib.typ": etal
+#import "../lib.typ": etal, jfl-note, jm-note
#import "X_var.typ": *
== Introduction
In this chapter, we study, on a recent Android dataset, the reusability of open source static analysis tools that appeared between 2011 and 2017.
-The scope of our study is *not* to quantify if the output results are accurate for ensuring reproducibility, because all the studied static analysis tools have different goals in the end.
+The scope of our study is *not* to quantify whether the output results are accurate enough to ensure reproducibility, because the studied static analysis tools ultimately pursue different goals.
Instead, we hypothesize that the provided tools compute the intended result but may crash or fail to produce a result because of the evolution of the internals of an Android application, raising unexpected bugs during an analysis.
This chapter intends to show that sharing the software artifacts of a paper may not be sufficient to ensure that the provided software is reusable.
Thus, our contributions are the following.
We carefully retrieved static analysis tools for Android applications that were selected by Li #etal~@Li2017 between 2011 and 2017.
+#jm-note[Many of those tools were presented in @sec:bg-static.][Yes but not really, @sec:bg-static does not present the contributions in detail \ FIX: develop @sec:bg-static]
We contacted the authors, whenever possible, to select the best candidate versions and to confirm the correct usage of the tools.
-We rebuild the tools in their original environment and we plan to share our Docker images with this paper.
+We rebuilt the tools in their original environment and #jm-note[share our Docker images.][ref]
We evaluated the reusability of the tools by measuring the number of successful analyses of applications taken from the Drebin dataset~@Arp2014 and from a custom dataset that contains more recent applications (#NBTOTALSTRING in total).
Observing the success or failure of these analyses enables us to answer the following research questions:
@@ -33,6 +34,6 @@ The chapter is structured as follows.
@sec:rasta-methodology presents the methodology employed to build our evaluation process and @sec:rasta-xp gives the associated experimental results.
// @sec:rasta-discussion investigates the reasons behind the observed failures of some of the tools.
@sec:rasta-discussion discusses the limitations of this work and gives some takeaways for future contributions.
-@sec:rasta-conclusion concludes the paper.
+@sec:rasta-conclusion concludes the chapter.