Scalebench is a Python script in tools/harness/scalebench.py in the Barrelfish source repo. It contains a number of tests and benchmarks and can be used by anyone, but needs extra configuration when used outside of ETH. In particular, it needs to know about the configuration of the machines it is going to execute tests on, how to boot these machines and how to read their output. There are a number of QEMU-emulated machines preconfigured, which should work for everyone without further configuration.
Running a test
The tool of interest is tools/harness/scalebench.py. If this script is run with -L, it lists the machines and tests that are configured. Some tests, such as buildall, do not need a machine: they do not actually run anything but only compile it.
By default, harness rebuilds Barrelfish, which can take quite some time. To speed up the test, one can specify an existing build directory using the -e option. The following command executes memtest (one of the most basic tests) using an existing build directory on qemu, configured to provide two cores:
./tools/harness/scalebench.py -v --debug -t memtest -m qemu2 -e ./build . ./results
The -v and --debug options increase verbosity, which is very handy for interactive use: the machine's output is displayed in your shell.
Writing a new Scalebench ("harness") test
A test class has two main tasks: describing which binaries should be included in the menu.lst to start the test, and detecting whether a test run succeeded or failed.
The function get_modules determines the contents of menu.lst. It should return an instance of BootModules (defined in barrelfish.py). As tests usually inherit from TestCommon, get_modules should fetch the default modules from the superclass and then use the following functions to make the necessary changes:
modules.add_module(modulename, [arguments])
modules.add_module_arg(modulename, argument)
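To illustrate how these two calls shape the module list, here is a minimal stand-in for BootModules (the real class in barrelfish.py does much more); the module name and arguments used below are made up for the example:

```python
# Simplified stand-in for BootModules (the real class is in barrelfish.py);
# it only illustrates how add_module/add_module_arg shape the module list.
class BootModules:
    def __init__(self):
        self.modules = []  # list of (module name, argument list) pairs

    def add_module(self, name, args=None):
        """Append a module with an optional list of arguments."""
        self.modules.append((name, list(args or [])))

    def add_module_arg(self, name, arg):
        """Append one argument to an already-listed module."""
        self.modules = [(n, a + [arg]) if n == name else (n, a)
                        for (n, a) in self.modules]

modules = BootModules()
modules.add_module("mytest", ["core=0"])       # hypothetical module/argument
modules.add_module_arg("mytest", "--loglevel=4")
print(modules.modules)
# prints [('mytest', ['core=0', '--loglevel=4'])]
```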
To determine if a test is finished, the function process_data is called repeatedly. See below for details.
Create a new Python class that inherits from TestCommon in a file in tools/harness/tests and implement at least the following methods:
import tests
from common import TestCommon
from results import PassFailResult

# need this decorator for our test to show up in list of available tests
@tests.add_test
class MyTest(TestCommon):
    '''Give a short description of your test here'''

    # this is the name that you will later use to run your test:
    # scalebench.py -t mytest ...
    name = "mytest"

    def get_modules(self, build, machine):
        '''Here you can provide one or more modules (binaries) that
        should be run for your test'''
        # get default set of modules from our superclass
        modules = super(MyTest, self).get_modules(build, machine)
        # add our test binary, this needs to be buildable as `make mytest`
        modules.add_module("mytest")
        return modules

    def process_data(self, testdir, rawiter):
        '''Here you can process the output of your test to determine
        pass/fail.

        `rawiter` is a raw iterator over the output of the test
        (e.g. qemu or console).
        `testdir` is a directory where you can store additional
        processed output, if desired.
        '''
        # iterate over all lines of output
        for line in rawiter:
            if line.startswith("mytest passed"):
                # found the line that `mytest` prints when the test
                # passes, return PASS
                return PassFailResult(True)
        # didn't find line of test passing at all, return FAIL
        return PassFailResult(False)
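The example above does not use testdir. As a sketch of how one might use it, the same scan over rawiter can also archive the raw output to a file there; the helper name scan_and_archive, the file name output.log, and the standalone-function form are all illustrative, not part of the harness API:

```python
import os

def scan_and_archive(testdir, rawiter, marker="mytest passed"):
    """Write every output line to testdir/output.log while scanning
    for the pass marker; return True iff the marker was seen."""
    passed = False
    with open(os.path.join(testdir, "output.log"), "w") as f:
        for line in rawiter:
            f.write(line)
            if line.startswith(marker):
                passed = True
    return passed
```

Inside process_data, one could then simply return PassFailResult(scan_and_archive(testdir, rawiter)).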