1 Overview

Running a benchmark in Auto-pilot requires the following four steps:

  1. First, the benchmarker writes a configuration file that describes which tests to run and how many times. The configuration file does not describe the benchmark itself; instead, it points at another executable, which is usually a small wrapper shell script (a sketch appears after this list).
  2. Next, the benchmarker creates the benchmark script itself. The script is usually small: it supplies arguments to a program such as Postmark or a compile benchmark, and it is also responsible for measurement (a hypothetical wrapper is sketched below). We provide sample configuration files and shell scripts for benchmarking file systems; they can be run directly on common file systems or adapted for other types of tests.
  3. Given the configuration file and the scripts, the next step is to run Auto-pilot on the configuration file. Auto-pilot parses the configuration and runs the tests, producing two types of logs. The first is simply the output of the benchmark programs, which can be used to verify that the benchmarks executed correctly and to investigate any anomalies. The second is a more structured results file that contains a snapshot of the system and the measurements that were collected.
  4. The results file is then passed through our analysis program, Getstats, to produce a tabular report. Optionally, the report can be fed to our plotting tools to generate a bar or line graph (a sample end-to-end session appears at the end of this chapter).
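
To make the first step concrete, here is a minimal sketch of what a configuration file might contain. The directive names, comment syntax, and file names below are assumptions for illustration only, not the authoritative syntax; see Configurations for the actual language.

     # postmark.ap -- hypothetical configuration sketch (assumed syntax)
     VAR FS=ext2              # a variable the helper scripts can consume
     TEST postmark 10         # run the "postmark" test ten times
     SETUP fs-setup.sh %FS%   # prepare the file system under test
     EXEC postmark.sh         # the wrapper script that runs and measures
     CLEANUP fs-cleanup.sh    # unmount and clean up after each run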

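The wrapper script that the configuration file points at is ordinarily a few lines of shell. The following is only a sketch under assumed names: the workload commands are real Postmark commands, but the script itself, the TESTDIR variable, and the use of /usr/bin/time as a measurement stand-in are illustrative; the distributed scripts use Auto-pilot's own measurement hooks (see Scripts).

     #!/bin/sh
     # postmark.sh -- hypothetical wrapper sketch, not a distributed script.
     # Build a small Postmark workload and time a single run.
     TESTDIR=${TESTDIR:-/mnt/test}        # where to exercise the file system
     cat > /tmp/pm.cfg <<EOF
     set location $TESTDIR
     set number 10000
     set transactions 20000
     run
     quit
     EOF
     /usr/bin/time -p postmark /tmp/pm.cfg
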
The rest of this manual is roughly divided into sections that correspond to these steps. Configurations describes the Auto-pilot configuration language. Scripts describes the scripts and hooks that are included in the Auto-pilot distribution, and Custom Scripts describes how to write your own scripts. Getstats describes how to use and customize Getstats, and Graphit describes how to use our plotting tool.
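
Putting the steps together, an end-to-end session might look like the following. The command and file names here are assumptions for illustration; the chapters named above give the authoritative usage.

     auto-pilot postmark.ap    # step 3: parse the configuration, run the tests
     getstats postmark.res     # step 4: summarize the structured results file
     # The tabular report from Getstats can then be passed to Graphit
     # to produce a bar or line graph (see Graphit for its options).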