Scalebench is a Python script in tools/harness/scalebench.py in the Barrelfish source repo. It contains a number of tests and benchmarks and can be used by anyone, but needs extra configuration when used outside of ETH. In particular, it needs to know about the configuration of the machines it is going to execute tests on, how to boot these machines and how to read their output. There are a number of QEMU-emulated machines preconfigured, which should work for everyone without further configuration.
Scalebench logs the test output into a log directory. If the gitpython Python package is installed, it also records useful information such as the current revision, the branch, and a diff of the worktree alongside the log. On Ubuntu, gitpython is not managed by apt, so you need to install the package with pip:
pip install gitpython
If you have access to the ETH rack-machines, make sure you read the NetOS wiki page.
Running a test
The tool of interest is tools/harness/scalebench.py. Running this script with -L lists the machines and tests that are configured. Some tests, such as buildall, do not need a machine: they do not actually run the test, but only compile it.
By default, harness rebuilds Barrelfish, which can take quite some time. To speed up the test, you can point scalebench at an existing build directory with the -e option. The following command executes memtest (one of the most basic tests) using an existing build directory on qemu, configured to provide two cores:
# If you do not have an existing build directory:
mkdir build && cd build && ../hake/hake.sh -s .. -a x86_64
# Execute scalebench
./tools/harness/scalebench.py -v --debug -t memtest -m qemu2 -e ./build . ./results
The -v and --debug flags add verbosity, which is very handy for interactive use (the machine output will be displayed in your shell).
It is also possible to run the tests on a locally connected PandaBoard:
# If you do not have an existing build directory:
mkdir build && cd build && ../hake/hake.sh -s .. -a armv7
# Execute scalebench
./tools/harness/scalebench.py -v --debug -t memtest -m panda_local -e ./build . ./results
Write a new Scalebench ("harness") test
The test class has two main tasks: describe which binaries should be included in menu.lst to start the test, and detect whether a test run succeeded or failed.
The function get_modules is used to determine the contents of menu.lst. It should return an instance of BootModules (defined in barrelfish.py). As tests usually inherit from TestCommon, they should get the default modules from the parent class and then use the following functions to make the necessary changes:
modules.add_module(modulename, [arguments])
modules.add_module_arg(modulename, argument)
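The real BootModules class lives in tools/harness/barrelfish.py and does much more (paths, per-machine defaults, serialization). The toy class below is a hypothetical, simplified stand-in, not the actual harness class; it only illustrates the effect these two calls have on the generated menu.lst:

```python
# Hypothetical, simplified stand-in for harness's BootModules,
# for illustration only.
class ToyBootModules:
    def __init__(self):
        # list of (module name, list of arguments)
        self.modules = []

    def add_module(self, name, args=None):
        # append a module entry with optional arguments
        self.modules.append((name, list(args or [])))

    def add_module_arg(self, name, arg):
        # append an argument to an already-listed module
        for mod_name, mod_args in self.modules:
            if mod_name == name:
                mod_args.append(arg)

    def menu_lst(self):
        # render the module entries as menu.lst-style lines
        return ["module /%s %s" % (n, " ".join(a)) if a
                else "module /%s" % n
                for n, a in self.modules]

modules = ToyBootModules()
modules.add_module("mytest", ["core=1"])
modules.add_module_arg("mytest", "verbose")
print("\n".join(modules.menu_lst()))
# prints: module /mytest core=1 verbose
```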
To determine if a test is finished, the function process_data is called repeatedly. See below for details.
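In most tests the pass/fail logic reduces to scanning the output line by line. The sketch below isolates that pattern as a plain function (a plain list stands in for the raw output iterator that harness passes to process_data; the marker string is just an example):

```python
def scan_for_pass(rawiter, pass_marker="mytest passed"):
    # mimic the core of a typical process_data implementation:
    # walk the raw output iterator once and look for the line
    # the test binary prints on success
    for line in rawiter:
        if line.startswith(pass_marker):
            return True
    return False

# in the real harness, rawiter iterates over qemu/console output;
# here a list of lines stands in for it
output = ["booting...", "mytest passed", "TEST DONE"]
print(scan_for_pass(output))
# prints: True
```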
Create a new Python class that inherits from TestCommon in a file in tools/harness/tests and implement at least the following methods:
import tests
from common import TestCommon
from results import PassFailResult

# need this decorator for our test to show up in the list of available tests
@tests.add_test
class MyTest(TestCommon):
    '''Give a short description of your test here'''

    # this is the name that you will later use to run your test:
    # scalebench.py -t mytest ...
    name = "mytest"

    def get_modules(self, build, machine):
        '''Here you can provide one or more modules (binaries) that
        should be run for your test'''
        # get default set of modules from our superclass
        modules = super(MyTest, self).get_modules(build, machine)
        # add our test binary, this needs to be buildable as `make mytest`
        modules.add_module("mytest")
        return modules

    def get_finish_string(self):
        '''Once this string is seen, harness declares the test as
        finished and will run process_data'''
        return "TEST DONE"

    def process_data(self, testdir, rawiter):
        '''Here you can process the output of your test to determine
        pass/fail.

        `rawiter` is a raw iterator over the output of the test
        (e.g. qemu or console). `testdir` is a directory where you
        can store additional processed output, if desired.
        '''
        # iterate over all lines of output
        for line in rawiter:
            if line.startswith("mytest passed"):
                # found the line that `mytest` prints when the test
                # passes, return PASS
                return PassFailResult(True)
        # didn't find the line indicating the test passed, return FAIL
        return PassFailResult(False)