develop rasta methodology section
All checks were successful
/ test_checkout (push) Successful in 1m16s
This commit is contained in:
parent 01ce20ffda
commit af1187f041
4 changed files with 82 additions and 9 deletions
@@ -12,7 +12,7 @@ Thus, our contributions are the following.
We carefully retrieved static analysis tools for Android applications that were selected by Li #etal~@Li2017 between 2011 and 2017.
#jm-note[Many of those tools were presented in @sec:bg-static.][Yes but not really, @sec:bg-static does not present the contributions in detail \ FIX: develop @sec:bg-static]
We contacted the authors, whenever possible, to select the best candidate versions and to confirm the proper usage of the tools.
We rebuilt the tools in their original environment and #jm-note[share our Docker images.][ref]
We rebuilt the tools in their original environment and share our Docker images.#footnote[on Docker Hub as `histausse/rasta-<toolname>:icsr2024`]
We evaluated the reusability of the tools by measuring the number of successful analyses of applications taken from the Drebin dataset~@Arp2014 and from a custom dataset that contains more recent applications (#NBTOTALSTRING in total).
The observation of the success or failure of these analyses enables us to answer the following research questions:
@@ -1,3 +1,4 @@
#import "@preview/diagraph:0.3.5": raw-render
#import "../lib.typ": etal, eg, MWE, HPC, SDK, SDKs, APKs, DEX
#import "../lib.typ": todo, jfl-note
#import "X_var.typ": *
@@ -5,8 +6,11 @@
== Methodology <sec:rasta-methodology>

#todo[small intro: approach summary + diagram?]
#jfl-note[Add diagram: Li etal -> [tool selection] -> drop/ - selected -> [select source version] -> [packaging] -> docker / -> singularity -> [exp]]
In this section, we describe our methodology to evaluate the reusability of Android static analysis tools.
@fig:rasta-methodo-collection and @fig:rasta-overview summarize our approach.
We collected the tools listed as open source by Li #etal, checked that the tools only use static analysis techniques, and selected the most recent version of each tool.
We then packaged the tools inside containers and checked our choices with the authors.
We then ran those tools on a large dataset that we sampled and collected the exit status of each run (whether the tool completed the analysis or not).
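To make this last step concrete, below is a minimal sketch of how one such run could be automated and its exit status recorded. It is only an illustration, not the actual experiment harness: the image name follows the `histausse/rasta-<toolname>:icsr2024` naming scheme given above, but the example tool name, the container's command-line interface, and the timeout are assumptions.

```python
import subprocess
from pathlib import Path

# Illustrative sketch, not the actual harness.
# Assumptions: the image follows the published naming scheme, the container
# takes the APK path as its only argument, and one hour is the per-run budget.
TOOL = "androguard"  # hypothetical tool name
IMAGE = f"histausse/rasta-{TOOL}:icsr2024"
TIMEOUT = 3600  # seconds

def run_analysis(apk: Path) -> str:
    """Run the tool on one APK and classify the outcome by exit status."""
    try:
        result = subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{apk.parent.resolve()}:/data:ro",
             IMAGE, f"/data/{apk.name}"],
            capture_output=True,
            timeout=TIMEOUT,
        )
    except subprocess.TimeoutExpired:
        return "timeout"
    return "success" if result.returncode == 0 else f"failed ({result.returncode})"

statuses = {apk.name: run_analysis(apk) for apk in Path("apks").glob("*.apk")}
```

In the actual experiment, the exit status is complemented by the captured logs (see @fig:rasta-overview).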
=== Collecting Tools
@@ -203,6 +207,73 @@ To guarantee reproducibility we published the results, datasets, Dockerfiles and
- on Docker Hub as `histausse/rasta-<toolname>:icsr2024`.
]
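As a usage note, the published images can be fetched directly from Docker Hub. The sketch below only illustrates the naming scheme; the tool name is a hypothetical example.

```python
import subprocess

# Fetch one of the published images; "flowdroid" is a hypothetical tool name,
# the tag pattern is the one documented above.
subprocess.run(["docker", "pull", "histausse/rasta-flowdroid:icsr2024"], check=True)
```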
#todo[alt text for @fig:rasta-methodo-collection]
#figure(
  raw-render(```
    digraph {
      rankdir=TB
      node [shape=none]

      {
        rank=same

        Li
        ST
        TS
        SV
        Pack
        Dock
      }
      {
        rank=same

        Drop0
        Drop1
        Drop2
      }

      Li -> ST
      ST -> TS
      TS -> SV
      SV -> Pack
      Pack -> Dock
      ST -> Drop0
      TS -> Drop1
      Pack -> Drop2
    }
    ```,
    labels: (
      "Li": align(center)[Tools from\ Li #etal],
      "ST": block(stroke: black, inset: 1em)[Search Tools],
      "TS": block(stroke: black, inset: 1em)[Select Tools],
      "Drop0": "Drop",
      "Drop1": "Drop",
      "Drop2": [Not Reusable],
      "SV": block(stroke: black, inset: 1em)[Select Source Version],
      "Pack": block(stroke: black, inset: 1em)[Package],
      "Dock": [Docker\ Images],
    ),
    edges: (
      //"ST": ("Drop0": align(center, block(inset: 1em)[Tool no longer\ available])),
      "TS": ("Drop1": align(center, block(inset: 1em)[Uses Dynamic\ Analysis])),
      "Pack": ("Drop2": align(center, block(inset: 1em)[Could Not Set Up\ in 4 days])),
    ),
    width: 100%,
    alt: "",
  ),
  caption: [Tool selection methodology overview],
) <fig:rasta-methodo-collection>
@fig:rasta-methodo-collection summarizes our tool selection process.
We first looked for the tools listed as open source by Li #etal.
For the tools still available, we checked whether they used dynamic analysis and removed those that did.
We then checked whether more recent updates of the tools existed and selected the most relevant version.
Finally, we marked as non-reusable the tools that we could not set up within a period of 4 days, even with the help of the authors.
=== Runtime Conditions
#figure(
@@ -211,7 +282,7 @@ To guarantee reproducibility we published the results, datasets, Dockerfiles and
width: 100%,
alt: "A diagram representing the methodology. The word 'Tool' is linked to a box labeled 'Docker image' by an arrow labeled 'building'. The box 'Docker image' is linked to a box labeled 'Singularity image' by an arrow labeled 'conversion'. The box 'Singularity image' is linked to a box labeled 'Execution monitoring' by a dotted arrow labeled 'Manual tests' and to an image of a server labeled 'Singularity cluster' by an arrow labeled 'deployment'. An image of three Android logos labeled 'apks' is also linked to the 'Singularity cluster' by an arrow labeled 'running the tool analysis'. The 'Singularity cluster' image is linked to the 'Execution monitoring' box by an arrow labeled 'log capture'. The 'Execution monitoring' box is linked to the words 'Exit status' by an unlabeled arrow.",
),
caption: [Methodology overview],
caption: [Experiment methodology overview],
) <fig:rasta-overview>
As shown in @fig:rasta-overview, before benchmarking the tools, we built and installed them in Docker containers to facilitate reuse by other researchers.
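The Docker-to-Singularity conversion shown in @fig:rasta-overview can be sketched as follows. This assumes Singularity (or Apptainer) is installed on the build host and that the image is pulled from Docker Hub; it is not necessarily the exact invocation used for the cluster deployment.

```python
import subprocess

# Build a Singularity image directly from the published Docker image.
# Assumptions: the `singularity` CLI is available; "androguard" is a
# hypothetical tool name following the published naming scheme.
tool = "androguard"
subprocess.run(
    ["singularity", "build",
     f"rasta-{tool}.sif",
     f"docker://histausse/rasta-{tool}:icsr2024"],
    check=True,
)
```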