typos in ch 3

This commit is contained in:
Jean-Marie 'Histausse' Mineau 2025-09-29 16:36:54 +02:00
parent 2df810c3bd
commit 4e38131df5
Signed by: histausse
GPG key ID: B66AEEDA9B645AD2
5 changed files with 65 additions and 65 deletions

@@ -4,22 +4,22 @@
 == Futur Works <sec:rasta-futur>
-A first extention to this work would obviously be to studdy more tools.
-We restricted ourself to the tools listed by Li #etal, but it would interesting to compare our result to the finishing rate of recently released tools.
+A first extension to this work would obviously be to study more tools.
+We restricted ourselves to the tools listed by Li #etal, but it would be interesting to compare our result to the finishing rate of recently released tools.
 It would be interesting to see if they are better at handling large #APKs, but also to see if older applications are more challenging for them due to discontinued features.
 Another avenue would be to define a benchmark to check the ability of tools to handle real-world applications.
-Our dataset is much to large for a simple benchmark, and is sampled to have a variety of application size and year of publication.
+Our dataset is too large for a simple benchmark and is sampled to have a variety of application sizes and years of publication.
 Hence, the first step would be to sample a dataset for this benchmark.
-Current benchmark datasets focus on accuracy of the tested tools, with difficult to analyse applications.
-It could be instesting to extract from our result some of applications that the most tools failed to analyse, and either use them directly or studdy them to craft simpler applications reproducing the same challenged as those applications.
-Such dataset would need to be updated regularly: we saw that there is a trend for newer applications to be harder to analyse, a frozen dataset would ignore this factor.
+Current benchmark datasets focus on the accuracy of the tested tools, with difficult-to-analyse applications.
+It could be interesting to extract from our results some of the applications that the most tools failed to analyse, and either use them directly or study them to craft simpler applications reproducing the same challenges as those applications.
+Such datasets would need to be updated regularly: we saw that there is a trend for newer applications to be harder to analyse, a frozen dataset would ignore this factor.
-In addition to the finishing rate, it would be both interesting and usefull to have reference value.
-@tab:rasta-rec-deps list common Android related dependencies we encontered when packaging the tools.
+In addition to the finishing rate, it would be both interesting and useful to have reference values.
+@tab:rasta-rec-deps list common Android-related dependencies we encountered when packaging the tools.
 We can see that each tools use at least one of those dependencies.
-It would be resonnable to consider the best finishing ratio a tool can have to be the finishing ratio of a tool that would perfom an "empty analysis" using the same dependencies.
-Considering the prevalence of those dependencies, having those theoritical minimum could also guide future tool developers when choosing their dependencies.
+It would be reasonable to consider the best finishing ratio a tool can have to be the finishing ratio of a tool that would perform an "empty analysis" using the same dependencies.
+Considering the prevalence of those dependencies, having those theoretical minimums could also guide future tool developers when choosing their dependencies.
 #figure({
 //show table: set text(size: 0.80em)
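
Editor's note on the "empty analysis" baseline added in this hunk: a minimal sketch of what such a reference tool could look like, assuming androguard is one of the shared dependencies listed in @tab:rasta-rec-deps and that "finishing" simply means loading the APK without an exception (both are assumptions for illustration, not taken from the commit or the thesis).

# Hypothetical "empty analysis" baseline, not part of the commit: load each APK
# with androguard, do nothing else, and report the resulting finishing ratio,
# which would serve as the reference value discussed in the added lines.
import sys
from androguard.misc import AnalyzeAPK  # assumes androguard is installed

def empty_analysis(apk_path: str) -> bool:
    """Return True if the APK and its DEX code can be parsed without error."""
    try:
        a, d, dx = AnalyzeAPK(apk_path)  # parse APK, DEX files, and build the analysis object
        return a is not None
    except Exception:
        return False

if __name__ == "__main__":
    apks = sys.argv[1:]
    finished = sum(empty_analysis(p) for p in apks)
    if apks:
        print(f"finishing ratio: {finished}/{len(apks)} = {finished / len(apks):.1%}")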