4.11 SIMDLib

This directory contains the tinysimd library, a header-only library. It is a lightweight, zero-overhead wrapper, based on template meta-programming, that automatically selects the SIMD intrinsics from the most specialized x86-64 instruction set extension available among SSE2 (limited support), AVX2 and AVX512 (not tested), or SVE for the ARM AArch64 architecture. The library is designed to be easily extended to other architectures.

To use the library one needs to include the tinysimd.hpp header. The type-traits routines needed for templated programming are available in the traits.hpp header. Performing I/O with vector types is strongly discouraged; if I/O is needed for debugging, include the io.hpp header.
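A minimal usage sketch is shown below; it assumes that traits.hpp and io.hpp sit in the same SimdLib directory as tinysimd.hpp and that io.hpp provides stream output for the vector types (debugging only).

#include <LibUtilities/SimdLib/tinysimd.hpp>
#include <LibUtilities/SimdLib/traits.hpp>
#include <LibUtilities/SimdLib/io.hpp> // debugging only

#include <iostream>

int main()
{
    using vec_t = tinysimd::simd<double>;

    // vec_t::width is the number of doubles held in one vector register
    std::cout << "vector width: " << vec_t::width << std::endl;

    vec_t a = 2.0;               // broadcast 2.0 to all lanes
    std::cout << a << std::endl; // stream output assumed provided by io.hpp
    return 0;
}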

To enable the vector types in Nektar++, set the desired extension option to ON (for instance NEKTAR_ENABLE_SIMD_AVX2). This automatically sets the appropriate compiler flags. Note, however, that these flags are currently set correctly only for gcc, and that you might need to delete the cached CMAKE_CXX_FLAGS variable before reconfiguring CMake. You can check that the desired vector extension was compiled in by running VecDataUnitTests, which prints the extension in use.

SVE (enabled with NEKTAR_ENABLE_SIMD_SVE) is a vector-length-agnostic ISA extension. However, in order to wrap the SVE intrinsic types with C++ classes, the vector size is fixed at compile time. You therefore need to set an appropriate vector size (NEKTAR_SVE_BITS) for your target machine.

Note that the extensions are advanced options and that only the options relevant to the architecture of the compiling machine are made available (see NektarSIMD.cmake for more details).

Vector types are largely used with the same semantics as built-in C++ types.

A simple example: if AVX2 is available, then this scalar code

#include <array>
#include <cmath>

std::array<double, 4> a = {-1.0, -1.0, -1.0, -1.0};
std::array<double, 4> b;
for (int i = 0; i < 4; ++i)
{
    b[i] = std::abs(a[i]);
}

is equivalent to this vector computation

#include <LibUtilities/SimdLib/tinysimd.hpp>

using vec_t = tinysimd::simd<double>;
vec_t a = -1.0;
vec_t b = abs(a);

which the compiler translates to the corresponding intrinsics

#include <immintrin.h>

__m256d a = _mm256_set1_pd(-1.0);
__m256d sign_mask = _mm256_set1_pd(-0.0);   // only the sign bit is set
__m256d b = _mm256_andnot_pd(sign_mask, a); // clear the sign bit, i.e. abs
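To move a vector result back to ordinary memory, the value is written out with store, mirroring the load/store pattern of the Vmath example below. A minimal sketch follows (the is_not_aligned flag is passed because a std::vector is not guaranteed to be aligned to the vector width):

#include <LibUtilities/SimdLib/tinysimd.hpp>
#include <vector>

int main()
{
    using namespace tinysimd;
    using vec_t = simd<double>;

    vec_t a = -1.0;
    vec_t b = abs(a);

    // copy the vector lanes back to ordinary memory
    std::vector<double> out(vec_t::width);
    b.store(out.data(), is_not_aligned); // every entry of out is now 1.0
    return 0;
}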

A realistic example: a more realistic usage can be found in the SIMD version of the Vmath routines:

 
#include <LibUtilities/SimdLib/tinysimd.hpp>

#include <cstddef>
#include <type_traits>

/// \brief Add vector z = x + y
template <class T, typename = typename std::enable_if<
                       std::is_floating_point<T>::value>::type>
void Vadd(const size_t n, const T *x, const T *y, T *z)
{
    using namespace tinysimd;
    using vec_t = simd<T>;

    size_t cnt = n;
    // Vectorized loop unroll 4x
    while (cnt >= 4 * vec_t::width)
    {
        // load
        vec_t yChunk0, yChunk1, yChunk2, yChunk3;
        yChunk0.load(y, is_not_aligned);
        yChunk1.load(y + vec_t::width, is_not_aligned);
        yChunk2.load(y + 2 * vec_t::width, is_not_aligned);
        yChunk3.load(y + 3 * vec_t::width, is_not_aligned);

        vec_t xChunk0, xChunk1, xChunk2, xChunk3;
        xChunk0.load(x, is_not_aligned);
        xChunk1.load(x + vec_t::width, is_not_aligned);
        xChunk2.load(x + 2 * vec_t::width, is_not_aligned);
        xChunk3.load(x + 3 * vec_t::width, is_not_aligned);

        // z = x + y
        vec_t zChunk0 = xChunk0 + yChunk0;
        vec_t zChunk1 = xChunk1 + yChunk1;
        vec_t zChunk2 = xChunk2 + yChunk2;
        vec_t zChunk3 = xChunk3 + yChunk3;

        // store
        zChunk0.store(z, is_not_aligned);
        zChunk1.store(z + vec_t::width, is_not_aligned);
        zChunk2.store(z + 2 * vec_t::width, is_not_aligned);
        zChunk3.store(z + 3 * vec_t::width, is_not_aligned);

        // update pointers and remaining count
        x += 4 * vec_t::width;
        y += 4 * vec_t::width;
        z += 4 * vec_t::width;
        cnt -= 4 * vec_t::width;
    }

    // spillover loop: handle the remaining entries one scalar at a time
    while (cnt)
    {
        *z++ = *x++ + *y++;
        --cnt;
    }
}

Note that there are two loops: a vectorized loop and a spillover loop (used when the input array size is not a multiple of the vector width). For more complex methods the core of the loop is replaced by a call to a kernel that can accept either a vector type or a scalar type. In general the loops consist of three sections: a load from the input arrays to local variables, a call to one or more kernels, and a store from the local variables to the output arrays. The load and store operations need to specify the is_not_aligned flag if the referenced memory is not guaranteed to be aligned to vector-width boundaries. Otherwise a segmentation fault is just waiting to happen!
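As a rough sketch of such a kernel (the name and operation below are invented for illustration and are not part of the library), the same template can serve both the vectorized loop and the scalar spillover loop because the vector types overload the usual arithmetic operators:

#include <LibUtilities/SimdLib/tinysimd.hpp>

// Hypothetical kernel computing d = a * b + c. The template instantiates for
// a scalar type (T = double, spillover loop) as well as for a vector type
// (T = tinysimd::simd<double>, vectorized loop).
template <typename T>
inline T FmaKernel(const T &a, const T &b, const T &c)
{
    return a * b + c;
}

// scalar instantiation, as used in a spillover loop
double s = FmaKernel(1.0, 2.0, 3.0);

// vector instantiation, as used in a vectorized loop
using vec_t = tinysimd::simd<double>;
vec_t v = FmaKernel(vec_t(1.0), vec_t(2.0), vec_t(3.0));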

For an example of a method with a complex body that calls multiple kernels, refer to RoeSolverSIMD.cpp.

Usage with matrix-free operators: the usage of the tinysimd library in the matrix-free operators differs from the above due to the interleaving of the degrees of freedom of n elements (where n is the vector width) in a contiguous chunk of memory. You can refer to [57] for more details.
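The matrix-free data structures themselves are beyond the scope of this section, but the following sketch illustrates the idea of the interleaved layout; the function and variable names are invented for illustration and do not correspond to the actual Nektar++ implementation.

#include <LibUtilities/SimdLib/tinysimd.hpp>
#include <cstddef>
#include <vector>

using vec_t = tinysimd::simd<double>;

// Pack the j-th degree of freedom of vec_t::width elements next to each
// other, so that a single SIMD load fetches dof j of the whole group:
// interleaved[j * width + e] = dof j of element e (within the group).
void InterleaveGroup(const std::vector<std::vector<double>> &elmtDofs,
                     std::vector<double> &interleaved)
{
    const std::size_t width = vec_t::width;
    const std::size_t nDof  = elmtDofs[0].size();
    interleaved.resize(nDof * width);
    for (std::size_t j = 0; j < nDof; ++j)
    {
        for (std::size_t e = 0; e < width; ++e)
        {
            interleaved[j * width + e] = elmtDofs[e][j];
        }
    }
    // A kernel can then process dof j of all elements in the group with
    //   vec_t chunk; chunk.load(&interleaved[j * width], is_not_aligned);
}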

General optimization guidelines: a key factor in improving performance on modern architectures is to limit as much as possible the data transfer from DRAM to cache.