3.6 Software Testing Approaches

3.6.1 Unit Tests

Unit testing, sometimes called “module testing” or “element testing”, is a software testing method by which individual “units” of source code are tested to determine whether they are fit for use [44]. Unit tests are added to Nektar++ through the CMake system and implemented using the Boost test framework. As an example, the set of linear algebra unit tests is listed in this file:

.../library/UnitTests/LibUtilities/LinearAlgebra/CMakeLists.txt

and the actual tests are implemented in this file:

.../library/UnitTests/LibUtilities/LinearAlgebra/TestBandedMatrixOperations.cpp

To register a new test, use BOOST_AUTO_TEST_CASE( TestName ), implement the unit test, and check the result using BOOST_CHECK_CLOSE(...), BOOST_CHECK_EQUAL(...), etc. Unit tests are invaluable for maintaining the integrity of the code base and for localizing, isolating, and debugging errors introduced into the code. It is important to remember that a unit test should exercise very specific functionality of the code - in the best case, a single function should be tested per unit test.
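
As a minimal sketch of this workflow - the test name and the quantity being tested are hypothetical, not taken from the Nektar++ code base, and the header-only variant of Boost.Test is used here so that the example is self-contained - a unit test might look as follows:

#define BOOST_TEST_MODULE LinearAlgebraExample
#include <boost/test/included/unit_test.hpp>

// Register a new test case with the Boost test framework.
BOOST_AUTO_TEST_CASE(TestDotProduct)
{
    double a[] = {1.0, 2.0, 3.0};
    double b[] = {4.0, 5.0, 6.0};

    // The "unit" under test: a simple dot product.
    double dot = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        dot += a[i] * b[i];
    }

    // Floating-point results are compared within a (percentage) tolerance ...
    BOOST_CHECK_CLOSE(dot, 32.0, 1e-10);

    // ... while exact quantities can be checked for equality.
    BOOST_CHECK_EQUAL(sizeof(a) / sizeof(a[0]), 3u);
}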

While it is beyond the scope of this document to go into more detail on writing unit tests, a good summary of the Boost test system can be found here:

http://www.boost.org/doc/libs/1_63_0/libs/test/doc/html/

3.6.2 Integration, System and Regression Tests

Integration testing involves testing ecosystems of components and their interoperability. System testing tests complete applications, and regression testing focuses on ensuring that previously fixed bugs do not resurface. In Nektar++, all of these are often colloquially referred to as regression testing. This style of testing is not white-box: it does not examine how the code arrives at a particular answer, but instead tests, in a black-box fashion, whether the code yields the predicted response when operating on given data [44].
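
Regression tests are defined using the same .tst file format as the performance tests described in the next section. As an illustrative sketch only - the file names, solution variable, and values here are hypothetical - such a definition might compare the L2 error of a solution field against a stored value within a tolerance:

<?xml version="1.0" encoding="utf-8"?>
<test>
    <description>2D channel flow (hypothetical example)</description>
    <executable>IncNavierStokesSolver</executable>
    <parameters>ChanFlow2D.xml</parameters>
    <files>
        <file description="Session File">ChanFlow2D.xml</file>
    </files>
    <metrics>
        <metric type="L2" id="1">
            <value variable="u" tolerance="1e-6">0.000000</value>
        </metric>
    </metrics>
</test>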

3.6.3 Performance Tests

To avoid sudden regressions in performance, Nektar++ has a series of performance tests that measure the execution times of some sample scenarios using various solvers. These are run automatically by the continuous integration system before merging, always using the same runner so that timings are comparable. Each scenario is run multiple times, and the average execution time is compared to a baseline figure within a given tolerance. If a test fails, the developer can investigate the cause of the regression and, if the change in performance is justified, update the baseline figure.

To register a new performance test, use ADD_NEKTAR_PERFORMANCE_TEST( TestName ) in the solver’s CMakeLists.txt file; a concrete registration line is sketched after the file list below. As an example, the performance tests for the incompressible Navier-Stokes solver are listed in

.../solvers/IncNavierStokesSolver/CMakeLists.txt

and one of the tests is defined by the following files:

.../solvers/IncNavierStokesSolver/Tests/Perf_ChanFlow_3DH1D_pRef.tst

.../solvers/IncNavierStokesSolver/Tests/Perf_ChanFlow_3DH1D_pRef.xml
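
Following the macro form given above, this test would be registered in the solver’s CMakeLists.txt with a line such as:

# Register the performance test defined in Tests/Perf_ChanFlow_3DH1D_pRef.tst
ADD_NEKTAR_PERFORMANCE_TEST(Perf_ChanFlow_3DH1D_pRef)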

The .tst file contains the test definition and metrics, and the XML file contains the test parameters. Perf_ChanFlow_3DH1D_pRef.tst contains information specific to performance tests, most notably the ExecutionTime metric.

By default, the ExecutionTime metric searches the test output for the regular expression "^.*Total Computation Time\s*=\s*(\d+\.?\d*).*", which covers most cases. If the timing line in the output differs, the <regex> element may be used to specify a different expression, as in this implementation of the metric:

<metric type="ExecutionTime" id="1">
    <regex>^.*Execute\s*(\d+\.?\d*).*</regex>
    <value tolerance="0.5" hostname="42.debian-bullseye-performance-build-and-test">60.4946</value>
</metric>
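
Here the <value> element records the baseline execution time measured on the runner identified by the hostname attribute, and the tolerance attribute controls how far the measured average may deviate from this baseline before the test fails.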

3.6.4 Continuous Integration

Nektar++ uses GitLab continuous integration (CI) to test the code across multiple operating systems. Builds are triggered automatically when merge requests are opened, and subsequently whenever the associated branches receive additional commits.
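
As an illustrative sketch only - this is not Nektar++’s actual CI configuration, and the job name and commands are hypothetical - a GitLab CI job that builds the code and runs the test suite for merge requests might look like:

# Hypothetical GitLab CI job: build the code and run the test suite
# whenever a merge request pipeline is triggered.
build-and-test:
  stage: test
  script:
    - cmake -B build -S .
    - cmake --build build -j 4
    - ctest --test-dir build --output-on-failure
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"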

For more information, go to:

https://gitlab.nektar.info