This project aims to provide a way to automatically benchmark software implementations of the algorithms submitted to the NIST lightweight cryptography (LWC) standardization process on real hardware microcontrollers.
Currently, our testbench includes five microcontroller boards of various architectures:
- Arduino UNO
- STM32F103C8T6
- STM32 NUCLEO-F746ZG
- Sipeed Maixduino Kendryte K210
- Espressif ESP32
However, the framework is designed to be easy to extend with support for additional platforms.
## Hardware setup
The hardware requirements for building your own test bench are the following:
- One Linux computer - we use Debian and Arch Linux but any distribution should work
- One Logic Analyzer with one channel for each tested microcontroller
- Multiple UART-2-USB adapters - one for each tested microcontroller
- If necessary, a debugging interface for the tested microcontrollers
If your tested device supports programming over UART (e.g. ESP32 or STM32) and you don't need to perform RAM utilization testing, you will not need a debugging interface.
In our testbench, we make use of the FTDI FT2232H Mini Module to have both a debugger and a UART-2-USB interface on the same USB port.
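On Linux, the individual UART-2-USB adapters can be told apart by serial number using the stable symlinks that udev provides; for example:

```bash
# List the connected USB serial adapters by their unique IDs
ls -l /dev/serial/by-id/
```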
Each tested device has one GPIO configured as an output, called the "CRYPTO_BUSY" pin, which is normally HIGH but is pulled LOW while a cryptographic operation is in progress. This pin is probed by one of the logic analyzer channels and is used to measure the time each cryptographic operation takes to complete.
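As an aside, you can sanity-check the CRYPTO_BUSY pin manually with a plain sigrok-cli capture; this sketch assumes an fx2lafw-compatible analyzer with the pin wired to channel D0:

```bash
# Record 10 seconds at 1 MHz on channel D0 and save the capture
sigrok-cli -d fx2lafw -C D0 -c samplerate=1m --time 10s -o crypto_busy.sr
```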
To let tests on all platforms run in parallel, we use [sigrok-mux](https://github.com/epozzobon/sigrok-mux) to allow multiple processes to share the same Logic Analyzer. With some minor modifications, this should make the framework compatible with any Logic Analyzer supported by the sigrok library.
This figure shows our test setup:
![Our LWC test bench, not to be confused with a plate of spaghetti](https://epozzobon.it/images/lwc-test-bench.jpg)
When working with many microcontrollers at the same time, it is helpful to work on a properly grounded surface. Since we were using a non-powered USB hub, we connected the 5V rails of all the microcontrollers together and fed them from a 5V bench supply.
## Templates
The templates directory contains all the code that is specific to each tested device. A template is composed of:
- Code for the firmware which will interface with the AEAD algorithm under test
- Scripts for compiling the firmware
- A script for flashing the firmware and communicating with it
As an example, let's look at the f1-libopencm3 template. This template is used to build test firmware images for the STM32F103C8T6 "bluepill" microcontroller board using the libopencm3 HAL, flashing them with openocd over an FTDI FT2232H Mini Module.
Here is a description of the files contained in templates/f1-libopencm3:
<dl>
<dt>test.py</dt>
<dd>This is an executable Python file which imports and makes use of the functions in test_common.py. It implements, among other things, `flash()` for flashing the microcontroller over openocd, `get_serial()` for getting the correct USB-to-UART device by its serial number, and `dump_ram()` to get a snapshot of the RAM.</dd>
<dt>uartp.c uartp.h</dt>
<dd>These files implement a simple protocol used over UART which is shared across all the microcontrollers. If you don't wish to use it, or you are using another interface, you can write your own protocol and implement the client side in your test.py file.</dd>
<dt>crypto_aead.h</dt>
<dd>This header defines the `crypto_aead_encrypt` and `crypto_aead_decrypt` function signatures, which will be implemented by the different ciphers.</dd>
<dt>main.c</dt>
<dd>The implementation of the benchmark target. It takes care of receiving the plaintext, key, and nonce, toggling the CRYPTO_BUSY pin, and starting the encryption/decryption operation.</dd>
<dt>Makefile</dt>
<dd>Makefile copied from libopencm3 - every template needs a Makefile to produce a firmware image.</dd>
<dt>stm32f103c8t6_128k.ld</dt>
<dd>Linker script copied from libopencm3</dd>
<dt>empty_ram.bin</dt>
<dd>A binary file containing random data used to fill the microcontroller's RAM before the tests start. It is then compared against a RAM dump taken after the tests run to check the total RAM utilization (see the sketch after this list).</dd>
<dt>configure, cleanup</dt>
<dd>A pair of optional scripts executed respectively before and after the compilation, to compensate for quirks of the specific platform.</dd>
</dl>
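To illustrate how the RAM utilization check can work, here is a sketch under assumptions: 20 KiB is the SRAM size of the STM32F103C8, and `ram_dump.bin` stands in for the dump produced by `dump_ram()`:

```bash
# Create 20 KiB of random filler to preload the microcontroller's SRAM
dd if=/dev/urandom of=empty_ram.bin bs=1024 count=20

# After the tests, count how many bytes of the RAM dump differ from the
# filler; this approximates the total RAM touched by the firmware
cmp -l empty_ram.bin ram_dump.bin | wc -l
```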
## Compilation of the test firmware
The compile_all.py script takes care of compiling all the LWC candidates in the given directory with the given template:
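A typical invocation might look like the following; the `-t` flag for selecting the template is an assumption here, so check `./compile_all.py --help` for the actual interface:

```bash
# Compile every candidate in the submissions tree for one target template
./compile_all.py -s example-submissions -t templates/f1-libopencm3 -b build
```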
The submissions directory passed with the -s argument should be in the format described by the NIST LWC submission guidelines. See the example-submissions directory for an example of how the submissions directory should look.
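For illustration, a single submission inside that directory might be laid out as follows (names borrowed from the examples in this document; the exact file set varies per submission):

```
example-submissions/
└── romulus/
    └── Implementations/
        └── crypto_aead/
            └── romulusn1/
                ├── rhys/
                │   ├── api.h
                │   └── encrypt.c
                └── armsrc_NEC/
```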
You can optionally specify a list of algorithms to compile by using the -i argument:
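For instance (the exact format of the `-i` list is a sketch; consult the script's help for the real syntax):

```bash
# Build only the romulusn1 algorithm, limited to two implementations
./compile_all.py -s example-submissions -t templates/f1-libopencm3 -b build \
    -i romulusn1 rhys armsrc_NEC
```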
In this case, only the implementations named "rhys" and "armsrc_NEC" will be compiled, and only for the romulusn1 algorithm.
After the compilation is finished, you will find the results in the build directory specified with the -b argument. For each compiled implementation, a subdirectory will be present containing the firmware.elf file and the logs of the stdout and stderr of the make command.
The compile_all.py script will also include the test vectors text file for each compiled algorithm from the test_vectors directory, so make sure you have the test vectors for every algorithm you want to compile.
## Running the tests
Once the algorithms you plan to test are compiled, you can start the benchmarks.
Make sure your Logic Analyzer is capturing. If you plan on using sigrok-mux, start it now with:
```bash
./sigrok-mux $XDG_RUNTIME_DIR/lwc-logic-socket
```
This will create a UNIX domain socket at `$XDG_RUNTIME_DIR/lwc-logic-socket` which the `LogicMultiplexerTimeMeasurements` class from `test_common.py` will connect to.
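A quick way to verify that the multiplexer is ready before launching any tests is to check that the socket exists:

```bash
# The socket should appear as soon as sigrok-mux starts
test -S "$XDG_RUNTIME_DIR/lwc-logic-socket" && echo "socket is up"
```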
Now, you can start the benchmark of an individual compiled algorithm by calling the `test.py` script of the appropriate template:
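For example (the build path below is illustrative; it depends on the names `compile_all.py` produced in your build directory):

```bash
# Flash and benchmark one compiled implementation on the bluepill
./templates/f1-libopencm3/test.py build/f1-libopencm3/romulusn1_rhys
```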
Alternatively, tests can be run in batch through the test scheduler, which opens a web UI on TCP port 5002.
Then, for each submitted zip, you can run:
```bash
./process_zip.sh example_submissions/example.zip
```
This will unzip the submission, execute `compile_all.py` with each template, collect the firmware images in the `email-submissions` directory, and hand the tests over to the test scheduler.
The results of the benchmarks are stored as JSON files and can be downloaded from the test scheduler's web UI at http://localhost:5002.