This directory contains tinysimd, a header-only library. It is a light, zero-overhead wrapper, based on template metaprogramming, that automatically selects SIMD intrinsics from the most specialized instruction set extension available: SSE2 (limited support), AVX2, or AVX-512 (not tested) on x86-64, and SVE on the ARM AArch64 architecture. The library is designed to be easily extended to other architectures.
To use the library, include the tinysimd.hpp header. The type-traits routines needed for templated programming are available in the traits.hpp header. Performing IO with vector types is highly discouraged; if IO is needed for debugging, include the io.hpp header.
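For reference, the includes might look as follows; the path matches the example further down, and traits.hpp and io.hpp are assumed to live in the same directory:

```cpp
#include <LibUtilities/SimdLib/tinysimd.hpp> // vector types and operations
#include <LibUtilities/SimdLib/traits.hpp>   // type traits for templated code
// #include <LibUtilities/SimdLib/io.hpp>    // IO for vector types; debugging only
```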
To enable the vector types in Nektar++, set the desired extension to ON (for instance NEKTAR_ENABLE_SIMD_AVX2). This automatically sets the appropriate compiler flags. Note, however, that these flags are currently set correctly only for GCC, and that you might need to delete the cached variable CMAKE_CXX_FLAGS before configuring CMake. You can check that the desired vector extension was compiled properly by running the VecDataUnitTests, which print out the extension in use.
SVE (for instance NEKTAR_ENABLE_SIMD_SVE) is a vector-length-agnostic ISA extension. However, in order to wrap the SVE intrinsic types with C++ classes, the size is fixed at compile time. You therefore need to set an appropriate vector size (NEKTAR_SVE_BITS) according to your target machine.
Note that the extensions are advanced options, and only the options relevant to the architecture of the compiling machine are made available (see NektarSIMD.cmake for more details).
Vector types are largely used with the same semantics as built-in C++ types.
A simple example: if AVX2 is available, then this scalar code
```cpp
#include <array>
#include <cmath>

std::array<double, 4> a = {-1.0, -1.0, -1.0, -1.0};
std::array<double, 4> b;
for (int i = 0; i < 4; ++i)
{
    b[i] = std::abs(a[i]);
}
```
is equivalent to this vector computation
```cpp
#include <LibUtilities/SimdLib/tinysimd.hpp>

using vec_t = tinysimd::simd<double>;
vec_t a = -1.0;
vec_t b = abs(a);
```
which the compiler translates to the corresponding intrinsics
```cpp
#include <immintrin.h>

__m256d a = _mm256_set1_pd(-1.0);
__m256d sign_mask = _mm256_set1_pd(-0.0);   // only the sign bit (0x8000000000000000) set
__m256d b = _mm256_andnot_pd(sign_mask, a); // clear the sign bit, i.e. abs
```
A realistic example: a more realistic usage can be found in the SIMD version of the Vmath routines:
```cpp
template <class T>
void Vadd(const size_t n, const T *x, const T *y, T *z)
{
    using namespace tinysimd;
    using vec_t = simd<T>;

    size_t cnt = n;
    // Vectorized loop unroll 4x
    while (cnt >= 4 * vec_t::width)
    {
        // load
        vec_t yChunk0, yChunk1, yChunk2, yChunk3;
        yChunk0.load(y, is_not_aligned);
        yChunk1.load(y + vec_t::width, is_not_aligned);
        yChunk2.load(y + 2 * vec_t::width, is_not_aligned);
        yChunk3.load(y + 3 * vec_t::width, is_not_aligned);

        vec_t xChunk0, xChunk1, xChunk2, xChunk3;
        xChunk0.load(x, is_not_aligned);
        xChunk1.load(x + vec_t::width, is_not_aligned);
        xChunk2.load(x + 2 * vec_t::width, is_not_aligned);
        xChunk3.load(x + 3 * vec_t::width, is_not_aligned);

        // z = x + y
        vec_t zChunk0 = xChunk0 + yChunk0;
        vec_t zChunk1 = xChunk1 + yChunk1;
        vec_t zChunk2 = xChunk2 + yChunk2;
        vec_t zChunk3 = xChunk3 + yChunk3;

        // store
        zChunk0.store(z, is_not_aligned);
        zChunk1.store(z + vec_t::width, is_not_aligned);
        zChunk2.store(z + 2 * vec_t::width, is_not_aligned);
        zChunk3.store(z + 3 * vec_t::width, is_not_aligned);

        // update pointers
        x += 4 * vec_t::width;
        y += 4 * vec_t::width;
        z += 4 * vec_t::width;
        cnt -= 4 * vec_t::width;
    }

    // Vectorized loop unroll 2x
    // ... (2x-unrolled loop and scalar spillover loop follow)
}
```
Note that there are two kinds of loops: vectorized loops and a spillover loop (used when the input array size is not a multiple of the vector width). For more complex methods, the core of the loop is replaced by a call to a kernel that can accept both a vector type and a scalar type. In general the loops are characterized by three sections: a load from the input arrays to local variables, a call to one or more kernels, and a store from the local variables to the output arrays. The load and store operations need to specify the flag is_not_aligned if the referenced memory is not guaranteed to be aligned to the vector-width boundaries. Otherwise a segmentation fault is just waiting to happen!
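To make the pattern concrete, here is a hedged sketch (not code from the library; the kernel name is invented) of how a single templated kernel can serve both the vectorized loop and the scalar spillover loop:

```cpp
#include <LibUtilities/SimdLib/tinysimd.hpp>
#include <cstddef>

// Invented example kernel: because the vector types overload the usual
// arithmetic operators, the same body compiles for T = double (spillover
// loop) and T = tinysimd::simd<double> (vectorized loop).
template <class T>
inline T ScaleShiftKernel(const T &x)
{
    return T(2.0) * x + T(1.0);
}

void ScaleShift(const size_t n, const double *in, double *out)
{
    using namespace tinysimd;
    using vec_t = simd<double>;

    size_t cnt = n;
    // vectorized loop: load -> kernel -> store
    while (cnt >= vec_t::width)
    {
        vec_t chunk;
        chunk.load(in, is_not_aligned);
        chunk = ScaleShiftKernel(chunk);
        chunk.store(out, is_not_aligned);
        in += vec_t::width;
        out += vec_t::width;
        cnt -= vec_t::width;
    }
    // spillover loop: same kernel, scalar type
    for (; cnt; --cnt)
    {
        *out++ = ScaleShiftKernel(*in++);
    }
}
```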
For an example of a method with a complex body and calls to multiple kernels, refer to RoeSolverSIMD.cpp.
Usage with matrix-free operators: the usage of the tinysimd library in the matrix-free operators differs from the above due to the interleaving of the degrees of freedom of n elements (where n is the vector width) in a contiguous chunk of memory. You can refer to [59] for more details.
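As a hedged illustration of what such an interleaved layout means (the helper below is invented for this README and is not the library's implementation), the k-th degree of freedom of vec_t::width consecutive elements is stored contiguously, so a single vector load fetches the same degree of freedom across all lanes:

```cpp
#include <LibUtilities/SimdLib/tinysimd.hpp>
#include <cstddef>

using vec_t = tinysimd::simd<double>;

// Invented helper: repack the nDof degrees of freedom of vec_t::width
// elements from an element-major layout (e0d0 e0d1 ... | e1d0 e1d1 ... | ...)
// into a lane-interleaved layout (e0d0 e1d0 e2d0 e3d0 | e0d1 e1d1 ... | ...).
void Interleave(const size_t nDof, const double *in, double *out)
{
    for (size_t d = 0; d < nDof; ++d)
    {
        for (size_t e = 0; e < vec_t::width; ++e)
        {
            out[d * vec_t::width + e] = in[e * nDof + d];
        }
    }
}
```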
General optimization guidelines: a key factor in improving performance on modern architectures is to limit as much as possible the data transfer from DRAM to cache:
- use local temporary variables to store intermediate values;
- do not call Vmath functions more than once; make a loop over the points instead (see the sketch after this list);
- if you do call a Vmath function, call the VmathArray version (it might be optimized via a VmathSIMD call).
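As a hedged sketch of the first two guidelines (the function name is invented; this is an illustration, not library code), two back-to-back Vmath-style calls, e.g. a Vmul followed by a Vadd, stream every array through the cache twice, while a single fused loop with local temporaries touches each array only once:

```cpp
#include <LibUtilities/SimdLib/tinysimd.hpp>
#include <cstddef>

// Invented example: z = a * x + y in one pass, instead of
//   Vmath::Vmul(n, a, 1, x, 1, tmp, 1); // tmp = a * x
//   Vmath::Vadd(n, tmp, 1, y, 1, z, 1); // z   = tmp + y
void FusedMulAdd(const size_t n, const double *a, const double *x,
                 const double *y, double *z)
{
    using namespace tinysimd;
    using vec_t = simd<double>;

    size_t cnt = n;
    while (cnt >= vec_t::width)
    {
        // local temporaries stay in registers
        vec_t aChunk, xChunk, yChunk;
        aChunk.load(a, is_not_aligned);
        xChunk.load(x, is_not_aligned);
        yChunk.load(y, is_not_aligned);
        vec_t zChunk = aChunk * xChunk + yChunk; // single pass over the data
        zChunk.store(z, is_not_aligned);
        a += vec_t::width;
        x += vec_t::width;
        y += vec_t::width;
        z += vec_t::width;
        cnt -= vec_t::width;
    }
    // scalar spillover
    for (; cnt; --cnt)
    {
        *z++ = *a++ * *x++ + *y++;
    }
}
```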