
Mythril - Public Benchmark Suite - Test Runner for Manticore


Challenge Overview

In this challenge you will update the Solidity Benchmark Suites for Evaluating Ethereum Virtual Machine (EVM) Code-Analysis Tools, and the code to run them, so that the Manticore analyzer can be benchmarked.

Project Background

The project aims to provide up-to-date public benchmarks for EVM code-analysis tools. In the current version, only the Mythril tool is benchmarked. In this challenge you will expand the suite to benchmark a competing tool, Manticore. Afterwards we expect a similar follow-up challenge to benchmark the Oyente analyzer.

Technology Stack

The benchmark suite is written in Python. Generated benchmark reports are in HTML.

Code Access

All links to the repositories related to this challenge were given above. The actual work will be done in the benchmark suite repository, starting from commit 550c6e1e4751d4db04f9f35e589a7379ae785ac4.

Logic Behind the Current Codebase

The provided benchmark suite repository imports the actual benchmark suites (first and second) as Git submodules. It also contains two Python programs that run the benchmarks and generate an HTML report for each benchmark suite.

The first program, runner/run.py, takes a number of command-line arguments; one of them is the name of a benchmark suite. From that it reads two YAML configuration files. The first describes the benchmark suite itself: the names of the files in the benchmarks, whether each benchmark is supposed to succeed or fail with a vulnerability, and so on. An example of such a file is benchconf/Suhabe.yaml; you won't have to change it. The other YAML configuration file is specific to the analyzer. For Mythril on the Suhabe benchmark, it is called benchconf/Suhabe-Mythril.yaml. You will need to create the corresponding configuration file for Manticore, or adjust the report so that these configuration files can be empty.
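For illustration only, a per-analyzer configuration file for Manticore on the Suhabe suite might look like the sketch below. The file name follows the Suhabe-Mythril.yaml pattern, but every key shown here is hypothetical; mirror whatever schema benchconf/Suhabe-Mythril.yaml actually uses.

```yaml
# benchconf/Suhabe-Manticore.yaml -- hypothetical sketch; copy the real
# schema from benchconf/Suhabe-Mythril.yaml.
assert_minimal:
  timeout: 120       # hypothetical: give this benchmark extra time
  skip_reason: null  # hypothetical: set to a string to skip the benchmark
```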

The output of runner/run.py is a YAML data file stored in the benchdata folder, in a subfolder named after the benchmark suite. For example, the output of run.py for the Suhabe benchmark suite will be a file called benchdata/Suhabe/Mythril.yaml.
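The output layout described above can be sketched as a small path helper; the function name is hypothetical, but the benchdata/&lt;suite&gt;/&lt;analyzer&gt;.yaml structure follows the example given.

```python
# Sketch of run.py's output layout: results for one analyzer on one suite
# land in benchdata/<suite>/<analyzer>.yaml. Helper name is hypothetical.
import os

def output_path(suite: str, analyzer: str) -> str:
    """Return the path where run.py stores results for one analyzer."""
    return os.path.join("benchdata", suite, analyzer + ".yaml")
```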

The second program, runner/report.py, takes the aforementioned data YAML file and creates a report from it. Currently it reads the data of a single analyzer; it needs to be extended to read the data of all analyzers.
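One way the extension could work, given the output layout above, is to discover every analyzer's results file under the suite's benchdata subfolder instead of opening a single hard-coded one. This is a sketch under that assumption; the helper name is made up.

```python
# Sketch (hypothetical helper): an extended report.py could collect every
# analyzer's YAML results under benchdata/<suite>/ and render one report
# column per analyzer, rather than reading a single analyzer's file.
from pathlib import Path

def analyzer_data_files(suite: str, root: str = "benchdata") -> dict:
    """Map analyzer name (file stem) to its results file for one suite."""
    return {p.stem: p for p in sorted(Path(root, suite).glob("*.yaml"))}
```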

Python module dependencies are listed in requirements.txt, so you can install them with:

$ pip install -r requirements.txt

from the GitHub project root.

Challenge Requirements

Update runner/run.py to be able to run benchmarks on Manticore. Add a command-line argument that selects which tool(s) to benchmark in the run (benchmark all tools if the argument is omitted).
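The requested argument could be implemented with argparse along these lines; the flag name `--tools` and the tool list are assumptions, not part of the challenge specification.

```python
# Sketch of the tool-selection argument (flag name is hypothetical):
# when --tools is omitted, every known analyzer is benchmarked.
import argparse

KNOWN_TOOLS = ["Mythril", "Manticore"]  # assumption: the two tools in scope

def parse_tools(argv=None):
    parser = argparse.ArgumentParser(description="Run EVM analyzer benchmarks")
    parser.add_argument("--tools", nargs="+", choices=KNOWN_TOOLS,
                        default=KNOWN_TOOLS,
                        help="analyzer(s) to benchmark (default: all)")
    return parser.parse_args(argv).tools
```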

Update runner/report.py to include results for all benchmarked tools in the generated reports. Currently the result HTML tables have a single result column for Mythril; after this challenge they should also include an additional column for Manticore whenever it was run.

Do not forget to update the benchmark suite's documentation as necessary.

In case of any doubts, do not hesitate to ask questions in the challenge forum.

Final Submission Guidelines

Submit a Git patch for the benchmark suite (against the commit specified earlier), a brief verification video, and any notes you may have for the reviewers.

Reliability Rating and Bonus

For challenges that have a reliability bonus, the bonus depends on the reliability rating at the moment of registration for that project. A participant with no previous projects is considered to have no reliability rating, and therefore gets no bonus. The reliability bonus does not apply to Digital Run winnings. Since the reliability rating is based on the past 15 projects, it can take only 15 discrete values.

ELIGIBLE EVENTS:

Topcoder Open 2019

REVIEW STYLE:

Final Review:

Community Review Board

Approval:

User Sign-Off

CHALLENGE LINKS:

Review Scorecard
