Spook SCA CTF Contest rules
Description of Targets
Seven different targets are proposed in parallel: four in software (corresponding to masked implementations with 3, 4, 6 and 8 shares),
three in hardware/FPGA (corresponding to masked implementations with 2, 3 and 4 shares). The software masking scheme
considered is the bitslice variation of Ishai et al.’s private circuits proposed by
Goudarzi and Rivain. The hardware
masking scheme considered is the glitch-resistant variation of Ishai et al.’s private circuits proposed by
Cassiers et al. Our software implementations are on
an ARM Cortex-M0. Our hardware implementations are on a Xilinx Spartan-6 FPGA.
Challengers are provided with the source code of the implementations (C in software and Verilog in hardware/FPGA),
a tool to predict the intermediate values of the hardware implementation, profiling sets of traces including the nonces,
(random) keys, (random) plaintexts and the randomness used for masking, test sets of traces corresponding to a few
fixed keys (without the masking randomness), and finally prototype attacks against a single byte of the secret key for
the 3-share software implementation and the 2-share hardware implementation.
The generation of the randomness for the masking countermeasure is based on a cryptographic
PRG and excluded from the traces in the software case, and it is based on an LFSR that runs
in parallel to our designs in the hardware case. The motivation for these choices is to limit the
size of the software traces and to enable attacks against such a simple randomness generation mechanism
in the hardware case.
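For intuition on the hardware case, a Fibonacci LFSR of the kind mentioned above can be sketched in a few lines. The width and tap positions below are arbitrary illustrations chosen for this sketch; the actual generator used in the designs is specified in the provided Verilog sources.

```python
def lfsr_step(state, taps=(0, 1, 21, 31), width=32):
    """One clock cycle of a generic Fibonacci LFSR (illustrative taps).

    The feedback bit is the XOR of the tapped state bits; the register
    shifts left by one and the feedback bit enters at position 0.
    """
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

# Such a linear generator is cheap in hardware but, unlike a
# cryptographic PRG, its state can be recovered from a few observed
# output bits, which is what makes attacks against this simple
# randomness generation mechanism conceivable.
state = 0xDEADBEEF
stream = []
for _ in range(8):
    state = lfsr_step(state)
    stream.append(state & 1)  # one keystream bit per cycle
```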
Cautionary note. The targets were not selected to optimize the security vs.
performance tradeoff, but to enable an interesting challenge where advanced attack techniques can be applied.
The goal of the challenge is to modify and improve the prototype attacks. A valid attack has to output 16
lists of probabilities corresponding to the 16 bytes of the secret key. The submitted attacks will be rated
based on the number of measurements needed to reduce the rank of the master key below 2^32 using a
rank estimation algorithm.
Precisely, we will use the mean of the estimated rank (over 5 independent data sets) in our ratings.
Each submission must come with at most 8 data complexities (chosen by the challengers) for which the key rank will
be evaluated on independent test sets. The complexity of a target, N(target), is defined
as the smallest complexity (among the ones proposed) for which the average rank is below 2^32.
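As an illustration of the rating criterion, the position of the true key among all 256^16 full keys can be approximated from the 16 per-byte probability lists with a histogram-convolution rank estimator, in the spirit of published rank estimation algorithms. Everything below (function name, bin count, input shapes) is a hypothetical sketch, not the evaluation code used by the organizers.

```python
import numpy as np

def estimate_rank(probs, key, nbins=1024):
    """Histogram-convolution key-rank estimate (sketch).

    probs: (16, 256) array, probs[i, v] = attack's probability that
           key byte i equals v.  key: the 16 true key bytes.
    Returns an estimate of how many full 128-bit keys have a total
    log-likelihood at least as high as the true key's.
    """
    logp = np.log(probs)
    lo, hi = logp.min(), logp.max()
    width = (hi - lo) / nbins
    # Map each candidate's log-likelihood to a histogram bin index.
    idx = np.minimum(((logp - lo) / width).astype(int), nbins - 1)
    hist = np.bincount(idx[0], minlength=nbins).astype(float)
    for row in idx[1:]:
        h = np.bincount(row, minlength=nbins).astype(float)
        # Convolving the count histograms adds the per-byte bin
        # indices, i.e. (approximately) the per-byte log-likelihoods.
        hist = np.convolve(hist, h)
    key_bin = int(sum(idx[i, key[i]] for i in range(len(key))))
    # Count the keys falling in the true key's bin or above.
    return hist[key_bin:].sum()
```

An attack would then be run at increasing data complexities until such an estimate drops below 2^32.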
A team can only propose one submission (made of up to one attack per target) for
evaluation per deadline (however, updating the submission before the
deadline is allowed and can be done by re-submitting with the same
submission ID in the submission form).
An individual can only be part of one team.
The challenge has multiple deadlines. At time zero,
we consider that the state-of-the-art complexity Nb(target) corresponds to the test set size (Nb standing for "best N"). After each deadline:
- The submitted attacks will be evaluated (against independent test sets).
- The submitted attacks will be made public under a free software license.
- The state-of-the-art complexity Nb(target) will be updated based on the best published attack against each target.
- Every team that proposed an attack against a target with an improved complexity N(target) will be awarded log_2(Nb(target)/N(target)) points for that target.
The number of points awarded to a team for a deadline is the sum of the points awarded for each of the targets.
The score of a team is the sum of the points awarded for all the deadlines.
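The scoring rule above can be summarized in a few lines of code; the target names and trace counts below are purely illustrative.

```python
import math

def target_points(nb, n):
    """Points for one target: log2 improvement over the state of the art.

    nb: current best published complexity Nb(target); n: the team's
    complexity N(target).  No points if the attack does not improve on Nb.
    """
    return math.log2(nb / n) if n < nb else 0.0

def deadline_points(nb, submitted):
    """Sum of per-target points for one deadline.

    nb, submitted: dicts mapping target name -> complexity in traces.
    """
    return sum(target_points(nb[t], n) for t, n in submitted.items())

# Hypothetical example: halving the best known data complexity on one
# target and quartering it on another yields 1 + 2 = 3 points.
nb = {"sw3": 100_000, "hw2": 40_000}
pts = deadline_points(nb, {"sw3": 50_000, "hw2": 10_000})  # -> 3.0
```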
The winner of the challenge will be the team with the most points after the third deadline
and will be awarded the "main prize".
A Hall of Fame will be maintained with the scores of each challenger and the best identified attacks per
target. These best attacks will be awarded "target prizes" at the end of the challenge.
The 3 deadlines for the CHES 2020 CTF are June 17 2020, July 24 2020 and August 31 2020.
The Spook SCA CTF continues after CHES 2020: points are reset to zero
and only attacks against the hardware targets award points.
Submissions for the software targets are nevertheless still encouraged;
they will be evaluated and the results will be published.
Submission and evaluation
Anyone may submit a submission package (the SUBMISSION) through the challenge homepage.
The format required for the SUBMISSION is described in the README file
of the evaluation scripts package (available on the challenge homepage).
Any SUBMISSION that does not conform to this format or fails
to run under the evaluation scripts will not be evaluated.
The time complexity of an attack evaluated with the maximum data (provided by the challengers) is bounded
to 2 hours, using at most 60 GB of disk storage, 48 GB of RAM and 16 CPU cores.
The time and memory complexity of the profiling phase is left unbounded if performed offline by the challengers.
Optionally, the profiling phase can be performed online by the challenge organizers, in which case it is bounded
to 2 hours, using at most 100 GB of disk storage, 48 GB of RAM and 16 CPU cores.
The SUBMISSION must include the profiling code, even if it is run offline by the challengers, so that it is
reproducible by other teams after publication (a non-reproducible profiling phase may lead to the attack not being awarded points).
Furthermore, any profiling data included in the SUBMISSION must be licensed under an open data license.
The submitters must license the SUBMISSION under an open source software license
(GPLv3 or alternatives).
We commit to do our best to keep each SUBMISSION private until the next deadline of
the challenge (when it becomes public according to the rules).
Submitters may cancel their SUBMISSION before the deadline for which
they submitted by uploading a new (possibly empty) SUBMISSION with the
same submission ID.
We commit to do our best not to publish any cancelled SUBMISSION and to delete it.
The SUBMISSION may only attack the designated targets, that is, analyze
the provided traces and return the result to the evaluation framework.
Including code with any other purpose (including, but not limited to,
exfiltrating data or attacking the evaluation environment) is not
allowed, and will result in disqualification of the submitters.
Attacks from the SUBMISSION must not exploit any bug in the evaluation
framework, and if any such exploitable bug is found, submitters are
asked to report it to the organisers.
Submitters are invited to provide a description and documentation of
their attack in the SUBMISSION.
Submissions must contain the list of authors and their email addresses.
The organisers reserve the right to change in any way the contest rules or to reject any submission,
including with retroactive effect.