== Introduction
Android has been the most used mobile operating system since 2014, and since 2017 it has even surpassed Windows, all platforms combined#footnote[https://gs.statcounter.com/os-market-share#monthly-200901-202304].
The public adoption of Android is confirmed by application developers, with 1.3 million apps available in the Google Play Store in 2014 and 3.5 million in 2017#footnote[https://www.statista.com/statistics/266210].
Its popularity makes Android a prime target for malware developers. // For example, various applications have been shown to steal personal information@shanSelfhidingBehaviorAndroid2018.
Consequently, Android has also been an important subject for security research.
In the past fifteen years, the research community has released many tools to detect or analyze malicious behaviors in applications. Two main approaches can be distinguished: static and dynamic analysis@Li2017.
Dynamic analysis requires running the application in a controlled environment to observe runtime values and/or interactions with the operating system.
For example, an Android emulator with a patched kernel can capture these interactions, but applying such modifications is not a trivial task.
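As an illustration of what such a controlled execution involves, the following sketch drives an application in an already running emulator with `adb` and the `monkey` event generator, then dumps the log where an instrumented system would record the observed interactions; the APK path and package name are hypothetical placeholders.

```python
import subprocess

APK = "app-under-test.apk"      # hypothetical APK path
PACKAGE = "com.example.target"  # hypothetical package name

# Install the application on the (already running) emulator.
subprocess.run(["adb", "install", "-r", APK], check=True)

# Exercise the application with 500 pseudo-random UI events.
subprocess.run(["adb", "shell", "monkey", "-p", PACKAGE, "-v", "500"], check=True)

# Dump the log buffer, where an instrumented kernel or framework
# would have recorded the interactions to analyze.
log = subprocess.run(["adb", "logcat", "-d"], capture_output=True, text=True).stdout
print(log[:1000])
```

Even this minimal setup only observes the code paths that the generated events actually trigger, with no guarantee on the obtained coverage.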
// Such approach is limited by the required time to execute a limited part of the application with no guarantee on the obtained code coverage.
// For malware, dynamic analysis is also limited by evading techniques that may prevent the execution of malicious parts of the code. // To explain better if we restore these sentences about malware + evading.
As a consequence, a lot of effort has been put into static approaches, which are the focus of this chapter.
The usual goal of a static analysis is to compute data flows to detect potential information leaks@weiAmandroidPreciseGeneral2014 @titzeAppareciumRevealingData2015 @bosuCollusiveDataLeak2017 @klieberAndroidTaintFlow2014 @DBLPconfndssGordonKPGNR15 @octeauCompositeConstantPropagation2015 @liIccTADetectingInterComponent2015 by analyzing the bytecode of an Android application.
The associated tools should support the Dalvik bytecode format, the multiplicity of entry points, the event-driven architecture of Android applications, the interleaving of native code and bytecode (possibly loaded dynamically), and the use of reflection, to name a few.
All these obstacles threaten the research efforts.
When using a more recent version of Android or a recent set of applications, the results previously obtained may become outdated and the developed tools may not work correctly anymore.
In this chapter, we study the reusability of open source static analysis tools that appeared between 2011 and 2017, on a recent Android dataset.
The scope of our study is *not* to quantify whether the output results are accurate in order to ensure reproducibility, because the studied static analysis tools all have different end goals.
On the contrary, we take as a hypothesis that the provided tools compute the intended result, but may crash or fail to compute a result due to the evolution of the internals of Android applications, raising unexpected bugs during an analysis.
This chapter intends to show that sharing the software artifacts of a paper may not be sufficient to ensure that the provided software remains reusable.
Thus, our contributions are the following.
We carefully retrieved static analysis tools for Android applications that were selected by Li #etal@Li2017 between 2011 and 2017.
We contacted the authors, whenever possible, to select the best candidate versions and to confirm the proper usage of the tools.
We rebuilt the tools in their original environments and plan to share our Docker images alongside this chapter.
We evaluated the reusability of the tools by measuring the number of successful analyses of applications taken from the Drebin dataset@Arp2014 and from a custom dataset that contains more recent applications (#NBTOTALSTRING in total).
Observing the success or failure of these analyses enables us to answer the following research questions:
/ RQ1: Which Android static analysis tools more than five years old are still available and can be reused, with a reasonable effort, without crashing?
- We discuss the effect of applications features (date, size, SDK version, goodware/malware) on static analysis tools and the nature of the issues we found by studying statistics on the errors captured during our experiments.
*/
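To make this evaluation protocol concrete, a minimal sketch of the measurement loop is given below: each dockerized tool is run on each application with a timeout, and the outcome (success, failure, or timeout) is recorded. The image names, APK paths, and timeout value are hypothetical placeholders, not the actual experimental settings.

```python
import csv
import os
import subprocess

TOOLS = ["rasta/flowdroid", "rasta/amandroid"]   # hypothetical Docker image names
APKS = ["apks/sample1.apk", "apks/sample2.apk"]  # hypothetical APK paths
TIMEOUT = 3600                                   # seconds, hypothetical value

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["tool", "apk", "status"])
    for tool in TOOLS:
        for apk in APKS:
            try:
                # Mount the APK read-only and run the image's default
                # analysis entry point on it.
                proc = subprocess.run(
                    ["docker", "run", "--rm",
                     "-v", f"{os.path.abspath(apk)}:/input.apk:ro",
                     tool, "/input.apk"],
                    capture_output=True,
                    timeout=TIMEOUT,
                )
                status = "success" if proc.returncode == 0 else "failure"
            except subprocess.TimeoutExpired:
                status = "timeout"
            writer.writerow([tool, apk, status])
```

A real harness would additionally kill the container on timeout and keep the captured output for error classification; the point here is only the shape of the success/failure measurement.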
The chapter is structured as follows.
@sec:rasta-soa presents a summary of previous works dedicated to Android static analysis tools.
@sec:rasta-methodology presents the methodology employed to build our evaluation process and @sec:rasta-xp gives the associated experimental results.
// @sec:rasta-discussion investigates the reasons behind the observed failures of some of the tools.

// For example, taint analysis datasets should provide the source and expected sink of a taint.
// In some cases, the datasets are provided with additional software for automatizing part of the analysis.
// Thus,
We review in this section the datasets previously provided by the community and the papers related to the reusability of static analysis tools.
=== Application Datasets
Computing whether an application contains a possible information flow is an example of a static analysis goal.
Some datasets have been built especially for evaluating tools that are computing information flows inside Android applications.
One of the first well-known datasets is DroidBench, which was released with the tool FlowDroid@Arzt2014a.
Later, the dataset ICC-Bench was introduced with the tool Amandroid@weiAmandroidPreciseGeneral2014 to complement DroidBench by introducing applications using Inter-Component data flows.
These datasets contain carefully crafted applications exhibiting flows that the tools should be able to detect.
These hand-crafted applications can also be used for testing purposes or to detect any regression when the software code evolves.
Contrary to real-world applications, the behavior of these hand-crafted applications is known in advance, thus providing the ground truth that the tools try to compute.
However, these datasets are not representative of real-world applications@Pendlebury2018 and the obtained results can be misleading.
//, especially for performance or reliability evaluation.
Contrary to DroidBench and ICC-Bench, some approaches use real-world applications.
Bosu #etal@bosuCollusiveDataLeak2017 used DIALDroid to perform a threat analysis of Inter-Application communication and published DIALDroid-Bench, an associated dataset.
Similarly, Luo #etal released TaintBench@luoTaintBenchAutomaticRealworld2022, a real-world dataset, together with the associated recommendations to build such a dataset.
These datasets are useful for carefully spotting missing taint flows, but contain only a few dozen applications.
=== Static Analysis Tools Reusability
We now review the past contributions related to the reusability of static analysis tools.
Several papers have reviewed Android analysis tools produced by researchers.
Li #etal@Li2017 published a systematic literature review of Android static analysis approaches published before May 2015.
In particular, they listed 27 approaches with an open-source implementation available.
Nevertheless, experiments to evaluate the reusability of the software they pointed out were not performed.
We believe that the effort of reviewing the literature to build a comprehensive overview of available approaches should be pushed further: a published approach whose software cannot be used for technical reasons endangers both the reproducibility and the reusability of research.
As we saw in @sec:bg-datasets, the need for a ground truth to test analysis tools leads test datasets to often be handcrafted.
The few datasets composed of real-world applications confirmed that some tools such as Amandroid@weiAmandroidPreciseGeneral2014 and FlowDroid@Arzt2014a are less efficient on real-world applications@bosuCollusiveDataLeak2017 @luoTaintBenchAutomaticRealworld2022.
Unfortunately, those real-world application datasets are rather small, and a larger number of applications would be more suitable for our goal, #ie evaluating the reusability of a variety of static analysis tools.
Pauck #etal@pauckAndroidTaintAnalysis2018 used DroidBench@Arzt2014a, ICC-Bench@weiAmandroidPreciseGeneral2014 and DIALDroid-Bench@bosuCollusiveDataLeak2017 to compare Amandroid@weiAmandroidPreciseGeneral2014, DIAL-Droid@bosuCollusiveDataLeak2017, DidFail@klieberAndroidTaintFlow2014, DroidSafe@DBLPconfndssGordonKPGNR15, FlowDroid@Arzt2014a and IccTA@liIccTADetectingInterComponent2015 -- all these tools will also be compared in this chapter.
To perform their comparison, they introduced the AQL (Android App Analysis Query Language) format.
AQL can be used as a common language to describe the computed taint flows as well as the expected results for the datasets.
It is interesting to note that all the tested tools timed out at least once on real-world applications, and that Amandroid@weiAmandroidPreciseGeneral2014, DidFail@klieberAndroidTaintFlow2014, DroidSafe@DBLPconfndssGordonKPGNR15, IccTA@liIccTADetectingInterComponent2015 and ApkCombiner@liApkCombinerCombiningMultiple2015 (a tool used to combine applications) all failed to run on applications built for Android API 26.
These results suggest that a more thorough study of the link between application characteristics (#eg date, size) should be conducted.
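Studying such a link requires extracting these characteristics from the applications themselves. As a minimal sketch, the snippet below reads the size and SDK versions of an APK with Androguard (assuming its 3.x module layout); the APK path is a hypothetical placeholder.

```python
import os

from androguard.core.bytecodes.apk import APK  # androguard 3.x module layout

def characteristics(path: str) -> dict:
    """Extract basic APK features to correlate with analysis outcomes."""
    apk = APK(path)
    return {
        "apk": os.path.basename(path),
        "size_bytes": os.path.getsize(path),
        "min_sdk": apk.get_min_sdk_version(),
        "target_sdk": apk.get_target_sdk_version(),
    }

print(characteristics("sample.apk"))  # hypothetical APK path
```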
Luo #etal@luoTaintBenchAutomaticRealworld2022 used the framework introduced by Pauck #etal to compare Amandroid@weiAmandroidPreciseGeneral2014 and FlowDroid@Arzt2014a on DroidBench and on their own dataset TaintBench, composed of real-world Android malware.
They found that those tools have a low recall on real-world malware and are thus over-adapted to micro-datasets.
Unfortunately, because AQL focuses only on taint flows, we cannot use it to evaluate tools performing more generic analyses.
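Although we do not reproduce AQL's actual syntax here, the information such a common format carries can be pictured as a record pairing a source statement with a sink statement; the Python modelling below is a hypothetical simplification for illustration, not AQL itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One taint flow: data read at `source` reaches `sink`."""
    app: str     # APK identifier
    source: str  # e.g. a call to TelephonyManager.getDeviceId()
    sink: str    # e.g. a call to SmsManager.sendTextMessage()

def recall(reported: set[Flow], expected: set[Flow]) -> float:
    """Comparing reported flows against a dataset's ground truth
    reduces to set operations on such records."""
    return len(reported & expected) / len(expected) if expected else 1.0
```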
A first work quantifying the reusability of static analysis tools was proposed by Reaves #etal@reaves_droid_2016.
Seven Android analysis tools (Amandroid@weiAmandroidPreciseGeneral2014, AppAudit@xiaEffectiveRealTimeAndroid2015, DroidSafe@DBLPconfndssGordonKPGNR15, Epicc@octeau2013effective, FlowDroid@Arzt2014a, MalloDroid@fahlWhyEveMallory2012 and TaintDroid@Enck2010) were selected to check if they were still readily usable.
For each tool, both the usability and the results were evaluated by asking auditors to install the tool and use it on DroidBench and on 16 real-world applications.
The auditors reported that most of the tools require a significant amount of time to install and get running.
Reaves #etal propose to solve these issues by distributing a Virtual Machine with a functional build of the tool in addition to the source code.
Regrettably, these Virtual Machines were not made available, preventing future researchers from taking advantage of the work done by the auditors.
Reaves #etal also report that real-world applications are more challenging to analyze: the tools obtain poorer results and take more time and memory to run, sometimes to the point of not being able to complete the analysis.
We will confirm and expand this result in this chapter with a much larger dataset than these 16 real-world applications.
// Indeed, a more diverse dataset would assess the results and give more insight about the factors impacting the performances of the tools.
Finally, our approach is similar to the methodology employed by Mauthe #etal for decompilers@mauthe_large-scale_2021.