In what environment does the submission run?
The submission runs in a container with a fresh install of Debian 10.
You may customize the container through the setup script (e.g., install packages, build software from source, etc.).
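For instance, a minimal setup-script fragment that pulls in a few extra Debian packages could look like the following (the package list is purely illustrative, not part of the official demo):
echo "# Installing extra Debian packages..."
apt-get update
apt-get -y install git python3-numpy python3-scipy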
What versions of Python can I use?
The Python versions available through the package manager are python2.7 and python3.7. If you need other versions, you have to install them from source.
This can be done by adding the following snippet to the setup
script:
echo -n "# Installing debian packages..." apt-get -y install curl unzip build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libsqlite3-dev libreadline-dev libffi-dev curl libbz2-dev cd /tmp curl -O https://www.python.org/ftp/python/3.6.10/Python-3.6.10.tar.xz tar -xf Python-3.6.10.tar.xz cd Python-3.6.10 ./configure make -j 4 make altinstall # Use as "python3.6" # Pip is available: "python3.6 -m pip install ..."
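Assuming the snippet above has run in your setup script, you can then check the interpreter and install Python packages for it, e.g.:
python3.6 --version
python3.6 -m pip install numpy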
What base should be used for the logarithms in the scores?
Any base can be used: the ranking only sums and compares the logarithms of the scores, and changing the base rescales all of them by the same constant factor, so the comparison is unaffected.
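As a quick check (this is just the standard change-of-base identity, not from the original text): for any base b,
\[ \log_b x = \frac{\log_2 x}{\log_2 b} \quad\Rightarrow\quad \sum_i \log_b r_i = \frac{1}{\log_2 b} \sum_i \log_2 r_i , \]
so switching bases multiplies every sum of logarithms by the same positive constant and cannot change the ordering.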
How to use the evaluation framework?
The usage of the evaluation framework is detailed in its README. Example usage is provided below.
The following shell script downloads the evaluation framework and the demo submission, then evaluates the demo submission. You first need to have downloaded the traces dataset (at least for sw3 and hw2 if you want to run the full demo; otherwise you can edit the submission.json). The CTF_TRACES_TRAIN_DIR and CTF_CHALLENGE_DIR variables must be set to the directories containing the dataset. Do not hesitate to change the parameters (e.g., set CTF_N_ATTACK_SETS to a higher value (max. 5) for more accurate evaluation).
#!/bin/bash
echo "Fetching framework ..."
wget https://ctf.spook.dev/static/core/spook_ctf_eval.zip
echo "Unzip framework ..."
unzip spook_ctf_eval.zip
cd spook_ctf_eval_scripts
echo "Fetching demo_submission"
wget https://ctf.spook.dev/static/core/demo_submission.zip
echo "Set environment"
export CTF_SUB_FILE=demo_submission.zip
export CTF_TRACES_TRAIN_DIR=/TODO/
export CTF_CHALLENGE_DIR=/TODO/
export CTF_CHALLENGE_TMP=test/links
export CTF_N_ATTACK_SETS=1
export CTF_BASE_DIR=test
echo "Setup containers (needs to be run only once)"
./setup.sh
echo "Run all"
./main.sh all
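For the more accurate evaluation mentioned above, one possible follow-up run is sketched below; this assumes main.sh picks up the environment variables on each invocation (not verified here):
export CTF_N_ATTACK_SETS=5
./main.sh all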
How to get the estimated rank for my attack?
After running the above script, the ranks are in test/ranking/summary/[target], in the following format (one line per number of attack traces):
[number of traces] [lower bound rank key0],[estimate rank key0],[upper bound rank key0];[lower bound rank key1],[estimate rank key1],[upper bound rank key1];...
...
The average rank is in the file test/ranking/summary/[target]_avg, with the format:
[number of traces] [avg lower bound rank],[avg estimate rank],[avg upper bound rank]
...
All values are base-2 logarithms of ranks, and the average estimate rank is the value considered for the score.
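As an example of reading these files, the following one-liner prints the number of traces and the average rank estimate for one target (a sketch only: sw3 is just an example target name, and the field layout is assumed to match the format described above):
awk '{ split($2, r, ","); print "traces=" $1, "avg log2 rank estimate=" r[2] }' test/ranking/summary/sw3_avg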
How to debug my submission?
The outputs of the scripts are stored in the following files (assuming the configuration given above):
setup: test/setup/reports/setup.log (stdout and stderr)
profile: test/profile/[target]/reports/profile.log (stdout and stderr)
attack: test/attack/[target]/[number of traces]/[key nb]/reports/attack.[res|log] (.res for stdout and .log for stderr)
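For example, to inspect these logs from the shell (a sketch under the configuration above; sw3 and the wildcards are illustrative):
cat test/setup/reports/setup.log
tail -n 20 test/attack/sw3/*/*/reports/attack.log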
How to choose the number of traces to put in the submission.json?
It's up to you! The final grade only depends on the smallest given number of traces that satisfies the rank condition; the others are ignored. For instance, if you list 1000, 2000 and 5000 traces and the rank condition is first met at 2000 traces, only the 2000-trace attack counts towards the grade.