wip
parent 2d5cb2459e
commit 590b446f15
5 changed files with 54 additions and 18 deletions
@@ -31,7 +31,7 @@ We tested our method on a subset of recent applications from the dataset of our
 The results of our dynamic analysis suggest that we failed to correctly explore many applications, hinting at weaknesses in our experimental setup.
 Nonetheless, we did obtain some dynamic data, allowing us to pursue our experiment.
 We compared the finishing rate of tools on the original application and the instrumented application using the same experiment as in our first contribution, and found that, in general, the instrumentation only slightly reduces the finishing rate of analysis tools.
-We also confirmed that the instrumentation does improve the result of analysis tools, allowing them to compute more comprehensive call graphs of the applications, or to detect new data flows.
+We also confirmed that the instrumentation improves the results of analysis tools, allowing them to compute more comprehensive call graphs of the applications, or to detect new data flows.
 
 /*
 *
|
@@ -5,25 +5,26 @@
 
 In this section, we present what, in light of this thesis, we believe to be worthwhile avenues of work to improve the Android reverse engineering ecosystem.
 
-The main issue that appeared in all our work appears to be engineering one.
-The error we analysed in @sec:rasta showed that even something that should be basic, reading the content of an application, can be challenging.
+The main issues that appeared in all our work appear to be engineering ones.
+The errors we analysed in @sec:rasta showed that even something that should be basic, reading the content of an application, can be challenging.
 @sec:cl also showed that reproducing the exact behaviour of Android is more difficult than it seems (in our specific case, it was the class loading algorithm, but we can expect other features to have similar edge cases).
 As long as those issues are not solved, we cannot build robust analysis tools.
-One avenue we believe should be investigated would be to reuse the code actually used by Android.
+
+One avenue that is more research-oriented and that should be investigated would be to reuse, for analysis purposes, the code actually used by Android.
 For instance, the parsing of #DEX, #APK, and resource files could be done using the same code as the #ART.
-This is possible thanks to #AOSP being open-source, and is already partially done by some Android build tools.
-However, this is not an easy solution.
+This is possible thanks to #AOSP being open-source.
+However, this is not straightforward.
 Dynamic analysis relying on patched versions of the #AOSP showed that it is difficult to maintain this kind of software over time.
 Doing this would require limiting the modifications to the actual source code of Android to minimise the changes needed at each Android update.
 Another obstacle to overcome is to decouple the compilation of the tool from the rest of #AOSP: it is a massive dependency that needs a lot of resources to build.
 Having such a dependency would be a barrier to entry, preventing others from modifying or improving the tool.
-Should those issues be solved, directly using the code from #AOSP would allow such a tool to keep up with each new version of Android and limit invalid assumptions about Android behaviour.
+Should those issues be solved, directly using the code from #AOSP would allow such a tool to stay up to date with Android and limit discrepancies between what Android does and what the tool sees.
 
-An orthogonal solution to this problem is to create a new benchmark to test the capacity of a tool to handle real-life applications.
+An orthogonal solution to this problem of not being able to analyse edge cases is to create a new benchmark to test the capacity of a tool to handle real-life applications.
 Benchmarks are usually targeted at some specific technique (#eg taint tracking), and accordingly, test for issues specific to the targeted technique (#eg accurately tracking data that passes through an array).
 We suggest using a similar method to what we did in @sec:rasta to keep the benchmark independent from the tested tools.
 Instead of checking the correctness of the tools, this benchmark should test if the tool is able to finish its analysis.
-Applications in this benchmark could either be real-life applications that proved difficult to analyse (for instance, applications that crashed many of the tested tools in @sec:rasta), or hand-crafted applications reproducing corner cases or anti-reverse techniques encountered while analysing obfuscated applications (for instance, an application with gibberish binary file names inside `META-INF/` that can crash Jadx zip reader).
+Applications in this benchmark could either be real-life applications that were proven difficult to analyse (for instance, applications that crashed many of the tested tools in @sec:rasta), or hand-crafted applications reproducing corner cases or anti-reverse techniques encountered while analysing obfuscated applications (for instance, an application with gibberish binary file names inside `META-INF/` that can crash the Jadx zip reader).
 The main challenge with such a benchmark is that it would need frequent updates to follow Android evolutions, and be diverse enough to encompass a large spectrum of possible issues.
 
 #todo[web-base? flutter? wasm?]
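As an aside, the kind of hand-crafted corner case described above (an archive with gibberish binary file names inside `META-INF/`) is cheap to construct. The following is a minimal sketch, not part of the thesis itself: all member names and contents are hypothetical placeholders, the result is not an installable APK, and no claim is made here that it crashes any particular tool.

```python
# Sketch: build a ZIP archive shaped like an APK whose META-INF/ directory
# contains a file with a gibberish binary name (characters outside ASCII).
# All names and contents are hypothetical placeholders for illustration.
import io
import zipfile

def build_test_apk() -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        # Minimal plausible APK members (placeholders, not a valid app).
        zf.writestr("AndroidManifest.xml", b"<placeholder/>")
        zf.writestr("classes.dex", b"dex\n035\x00")
        # Gibberish name inside META-INF/: characters U+0080..U+008F,
        # which zipfile stores with the UTF-8 name flag set.
        weird_name = "META-INF/" + bytes(range(0x80, 0x90)).decode("latin-1")
        zf.writestr(weird_name, b"\x00\xff\x00\xff")
    return buf.getvalue()

apk = build_test_apk()
names = zipfile.ZipFile(io.BytesIO(apk)).namelist()
print(len(names))  # 3 entries, one with a non-ASCII binary name
```

A benchmark corpus could collect many such archives, each targeting one parsing corner case, and simply record whether each tool finishes on them.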