BerkeleyGW ships with a comprehensive testsuite, which is run automatically every time a developer changes the code, on a number of different architectures and with a number of different compilers and math libraries. Whenever BerkeleyGW is ported to a new architecture, we strongly recommend running the testsuite to verify the correctness of the results produced by the code.
Below are the typical steps to run the BerkeleyGW testsuite (assuming that you are in the root BerkeleyGW directory):
Build the BerkeleyGW binaries with `make all-flavors` or `make -j all-flavors`. The testsuite checks both the real and complex versions of the binaries.
In serial: type `make check` in the `testsuite/` directory or in the root BerkeleyGW directory. `make check-save` will do the same, but retains the working directories for debugging.
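A minimal serial session might look like the following (assuming you start from the BerkeleyGW root directory):

```shell
# Build both real and complex flavors, then run the testsuite serially.
make all-flavors
cd testsuite
make check-save   # like "make check", but keeps working directories for debugging
```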
In parallel without a scheduler: type `make check-parallel`. Set the environment variable `SAVETESTDIRS=yes` to save the working directories. The tests use 4 cores, so be sure that at least that many are available, or use `BGW_TEST_MPI_NPROCS` (see below). See, for instance, the `interactive*.sh` scripts in the `testsuite/` directory, which are executed directly, without a scheduler.
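For example, a parallel run without a scheduler might look like this (the core count of 8 is illustrative):

```shell
cd testsuite
export SAVETESTDIRS=yes        # keep working directories for debugging
export BGW_TEST_MPI_NPROCS=8   # optional: override the default of 4 cores
make check-parallel
```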
In parallel with a scheduler (e.g., SLURM, PBS, etc.): type `make check-jobscript-save`, which will execute the job script for the machine you are using, as specified in `arch.mk` by the line `TESTSCRIPT`. See example job scripts for various machines in this directory, which typically have a `.scr` extension. For instance, to run the testsuite on the KNL nodes of the Cori supercomputer at NERSC, go to the `testsuite` directory and type `sbatch cori2.scr`; run `sbatch cori2_haswell.scr` if you want to run the testsuite on the Haswell nodes instead.
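As an illustration, on a SLURM machine the `TESTSCRIPT` line in `arch.mk` might read (the script name here is just an example):

```
TESTSCRIPT = sbatch cori2.scr
```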
The following environment variables control how the testsuite gets executed, and are typically put into a submission script:
`BGW_TEST_MPI_NPROCS`: set to overrule the number of processors listed in the test files. Useful for testing the correctness of the parallelization, if you don't have 4 cores, or to run the testsuite faster with more cores.
`OMP_NUM_THREADS`: if you build BGW with the `-DOMP` flag, this environment variable sets the number of OpenMP threads used when running the testsuite.
`TEMPDIRPATH`: sets the scratch directory where temporary working directories will be created. The default is `/tmp`, but on some machines you may not have permission to write there.
`MPIEXEC`: sets the command for parallel runs. The default is the result of `which mpiexec`. Note that `mpiexec` and `mpirun` are generally equivalent (except for Intel MPI, for which `mpirun` is recommended). Set this if you are using a different command such as `ibrun` (SGE parallel environment) or `aprun` (Cray), if you need to use a different `mpiexec` than the one in your path, or if some options need to be appended to the command. Depending on the value of this variable, three styles of command line are used.
`BGW_TEST_NPROCS`: deprecated; same as `BGW_TEST_MPI_NPROCS`, provided for compatibility with versions 1.0.x.
`MACHINELIST`: if set, will be added to the command line for MPI runs. This is needed by some MPI implementations.
`EXEC`: if set, will be prepended to the name of the executable. Used to run the code through another tool, such as a debugger or `valgrind`.
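Putting these together, the top of a submission script might contain something like the following (all values are illustrative, not defaults):

```shell
export BGW_TEST_MPI_NPROCS=8     # override the processor counts in the test files
export OMP_NUM_THREADS=2         # only meaningful for -DOMP builds
export TEMPDIRPATH="$HOME/tmp"   # a writable scratch area, if /tmp is not writable
export MPIEXEC="srun"            # e.g., on SLURM machines without mpiexec
make check-parallel
```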
If you want to run a subset of the tests, you can enable/disable individual tests with the `toggle_tests.py` script, described below.
How to update the testsuite manually
If you have updated one of the input files in the testsuite (e.g., Si-Wire-EPM/plotxct.inp), or you have fixed a bug in the code that affects the result in a specific output file (e.g., plotxct.out), set the `BGW_CHECK_FLAGS` environment variable (e.g., `export BGW_CHECK_FLAGS=" -d Si-Wire-EPM/ "`) in the corresponding testsuite script and run it (e.g., `sbatch cori2_haswell.scr`). Check the file `test.out` in the testsuite directory and locate the failed tests by searching for the keyword "FAIL"; the "Calculated value" and "Reference value" are printed just before each failed test. Then go to the corresponding directory and open the `.test` file (e.g., Si-Wire-EPM/Si_wire.test). Find the reference value and replace it with the new value. Rerun the testsuite script to double-check that all tests in the corresponding directories pass.
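As a sketch, the update cycle described above might look like this on a SLURM machine (directory and file names follow the example in the text):

```shell
# 1. In the job script (e.g., cori2_haswell.scr), restrict checking to one directory:
export BGW_CHECK_FLAGS=" -d Si-Wire-EPM/ "
# 2. Submit the script and, after it finishes, locate the failures:
#      sbatch cori2_haswell.scr
#      grep FAIL test.out     # calculated/reference values are printed just above
# 3. Replace the stale reference value in Si-Wire-EPM/Si_wire.test and resubmit
#    to confirm that all tests in the directory now pass.
```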
The following are the scripts shipped with the testsuite:
`run_testsuite.sh`: runs all enabled tests from files ending in `.test` that can be found under the current folder.
`toggle_tests.py`: enables or disables a particular test. This script is meant to be run interactively.
`run_regression_test.pl`: called from `run_testsuite.sh` for each individual test. The syntax is: `../run_regression_test.pl [-s] [-p] -D ../../bin -f [testname].test`. Include the `-s` flag to run in serial; otherwise the test runs in parallel. Include the `-p` flag to preserve the working directory.
`*.scr`: job scripts for running the testsuite on various parallel machines.
`interactive*.sh`: scripts to run in parallel, in interactive mode. Follow the instructions in the script; do not just run it.
`queue_monitor.pl`: a Perl script for parallel tests with a SLURM or PBS scheduler, which submits a job and monitors the queue to see what happens. Formerly used by the BuildBot (deprecated).
`fix_testsuite.py`: (for developers) can be used to create or update reference values for matches in test files.
`no_hires.sh`: some clusters do not have the Perl package `Time::HiRes` installed, which is needed for timing in `run_regression_test.pl`. This script deactivates its usage.
`buildbot_query.pl`: (for developers, deprecated) can be used to find the range of values obtained on the buildbot workers for a particular match, in order to set values in the test files.
If you are interested in modifying the testsuite and/or writing new tests, please refer to the developer's manual.