This section discusses particulars related to analytic expressions appearing in Nektar++. Analytic expressions in Nektar++ are used to describe spatially or temporally varying properties, for example boundary conditions, initial conditions and forcing terms, which can be retrieved in the solver code.
Analytic expressions appear as the content of the VALUE attribute of boundary condition and function declaration tags. Such tags declare an analytic expression and link it to one of the field variables defined in the <EXPANSIONS> section. For example, a Dirichlet boundary declaration with VALUE="sin(PI*x)*cos(PI*y)" registers the expression sin(πx)cos(πy) as a Dirichlet boundary constraint associated with the field variable named in its VAR attribute.
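Such a declaration can be written inside the <BOUNDARYCONDITIONS> section, for instance (a sketch following the standard Nektar++ session-file syntax; the region reference and variable name are placeholders):

```xml
<REGION REF="0">
    <D VAR="u" VALUE="sin(PI*x)*cos(PI*y)" />
</REGION>
```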
Enforcing the same velocity profile at multiple boundary regions and/or for several field variables results in repeated declarations of the corresponding analytic expression. Currently one cannot directly link a boundary condition declaration to an analytic expression uniquely specified elsewhere, e.g. in a <FUNCTION> subsection. However, this duplication does not affect overall computational performance.
Declarations of analytic expressions are formulated in terms of problem space-time coordinates. The library code makes a number of assumptions about variable names and their order of appearance in the declarations. This section describes these assumptions.
Internally, the library uses a 3D global coordinate space regardless of the problem dimension. The internal global coordinate system has the natural basis (1,0,0), (0,1,0), (0,0,1) with coordinates x, y and z. In other words, the variables x, y and z are considered to be the first, second and third coordinates of a point (x, y, z).
Declarations of problem spatial variables do not exist in the current XML file format. Even though field variables are declarable (e.g. via the <VARIABLES> subsection of the <CONDITIONS> section), there are no analogous tags for space variables. However, an attribute of the <GEOMETRY> section tag declares the dimension of the problem space; for example, a geometry declared with DIM="1" and SPACE="2" specifies a 1D flow within a 2D problem space. The number of spatial variables present in an expression declaration should match the space dimension declared via this attribute. The library assumes the problem space also has a natural basis and that the spatial coordinates are named x, y and z.
The problem space is naturally embedded into the global coordinate space: each point of a 1D problem space lies on the x axis of the global space, and each point of a 2D problem space lies in its xy plane. Currently, there is no way to describe rotations and translations of the problem space relative to the global coordinate system.
The list of variables allowed in analytic expressions depends on the problem dimension:

- 1D problems: x (and t for time-dependent expressions);
- 2D problems: x, y (and t);
- 3D problems: x, y, z (and t).
Violation of these constraints yields unpredictable results of expression evaluation. The current implementation assigns the magic value -9999 to each dimensionally excessive spatial variable appearing in analytic expressions. For example, declaring a boundary condition with VALUE="x + y + z" in a 2D problem results in the expression x + y + z being evaluated at spatial points (xi, yi, -9999), where xi and yi are the spatial coordinates of the boundary degrees of freedom. However, the library behaviour under this constraint violation may change at later stages of development (e.g., the magic constant 0 may be chosen instead) and should be considered unpredictable.
Another example of unpredictable behaviour corresponds to the wrong choice of variables, e.g. declaring a boundary condition with VALUE="sin(y)" for a 1D problem. Since the problem is one-dimensional, the Nektar++ library assumes the spatial variable is x. The expression sin(y) is perfectly valid on its own, but since it does not depend on x, it will be evaluated to the constant sin(-9999) regardless of the degree of freedom under consideration.
The variable t represents time dependence within analytic expressions. Boundary condition declarations additionally need the attribute USERDEFINEDTYPE="TimeDependent" in order to flag the time dependency to the library.
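A time-dependent Dirichlet condition might then look as follows (a sketch, assuming the standard session-file syntax; region reference and variable name are placeholders):

```xml
<REGION REF="0">
    <D VAR="u" USERDEFINEDTYPE="TimeDependent" VALUE="sin(PI*(x-t))" />
</REGION>
```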
Analytic expressions are formed of numbers, parameters, space-time variables, predefined constants, operators and functions:

- Parameters are alphanumeric names, possibly containing underscores; for example, _gamma123 is a perfectly acceptable parameter name. However, a parameter name cannot start with a numeral. Parameters must be defined within the section <PARAMETERS>...</PARAMETERS>. Parameters play the role of constants that may change their values between expression evaluations.
- The space-time variables x, y, z and t, as described above.
- The arithmetic operators +, -, *, /, ^. The power operator allows the use of real exponents (it is implemented with the std::pow() function).
- The comparison operators <, <=, >, >=, == evaluate their sub-expressions to the real values 0.0 or 1.0.
| Function | Meaning |
| --- | --- |
| abs(x) | absolute value \|x\| |
| asin(x) | inverse sine arcsin x |
| acos(x) | inverse cosine arccos x |
| ang(x,y) | computes polar coordinate θ = arctan(y/x) from (x,y) |
| atan(x) | inverse tangent arctan x |
| atan2(y,x) | inverse tangent function (used in polar transformations) |
| ceil(x) | round up to nearest integer ⌈x⌉ |
| cos(x) | cosine cos x |
| cosh(x) | hyperbolic cosine cosh x |
| exp(x) | exponential e^x |
| fabs(x) | absolute value (equivalent to abs(x)) |
| floor(x) | rounding down ⌊x⌋ |
| log(x) | logarithm base e, ln x = log x |
| log10(x) | logarithm base 10, log₁₀ x |
| rad(x,y) | computes polar coordinate r = √(x²+y²) from (x,y) |
| sin(x) | sine sin x |
| sinh(x) | hyperbolic sine sinh x |
| sqrt(x) | square root √x |
| tan(x) | tangent tan x |
| tanh(x) | hyperbolic tangent tanh x |
These functions are implemented by means of the cmath library:
http://www.cplusplus.com/reference/clibrary/cmath/. The underlying data type is double at each stage of expression evaluation. As a consequence, complex-valued expressions (e.g. (-2)^0.123) evaluate to nan (not a number). The power operator ^ is implemented via a call to the std::pow() function and accepts arbitrary real exponents.
awgn(sigma) is a Gaussian noise generator, where σ is the variance of a normal distribution with zero mean. It is implemented using the boost::mt19937 random number generator together with Boost variate generators (see http://www.boost.org/libs/random).
A straightforward example is the piecewise-defined expression
(y<0)*sin(y) + (y>=0)*y
which uses comparison operators to select a branch.
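Since comparison operators evaluate to the real values 0.0 or 1.0, the expression above selects a branch by multiplication. As a rough illustration of these semantics (plain Python, not Nektar++ code):

```python
import math

def piecewise(y):
    # Comparisons evaluate to 0.0 or 1.0, so the expression
    # (y<0)*sin(y) + (y>=0)*y picks exactly one branch.
    return (1.0 if y < 0 else 0.0) * math.sin(y) \
         + (1.0 if y >= 0 else 0.0) * y

print(piecewise(-math.pi / 2))  # sin branch: approximately -1.0
print(piecewise(2.0))           # linear branch: 2.0
```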
Processing analytic expressions is split into two stages:

- parsing, with partial evaluation of constant sub-expressions, and
- evaluation at given space-time points.

Parsing of analytic expressions, together with their partial evaluation, takes place at the time of setting up the run (reading the XML file). Each analytic expression, after being pre-processed, is stored internally and quickly retrieved when it comes to evaluation at given space-time point(s). This allows expressions to be evaluated at a large number of spatial points with minimal setup cost.
Due to the partial evaluation of all constant sub-expressions, there is no performance benefit in using derived constants from the table above: whether one makes use of a pre-defined constant or of an equivalent constant expression, e.g. log10(2)^2, the result is a single constant value stored internally after pre-processing. The rules of pre-evaluation are as follows: constant sub-expressions, i.e. numbers, predefined constants and functions or operators applied to them, are evaluated once at parse time, while the appearance of a variable or parameter at any level stops pre-evaluation of all enclosing operations.
For example, the declaration VALUE="exp(-x*sin(PI*(sqrt(2)+sqrt(3))/2)*y)" results in the expression
exp(-x*(-0.97372300937516503167)*y) being stored internally:
sin(PI*(sqrt(2)+sqrt(3))/2) is evaluated to a constant, but the appearance of the variables x and y stops further pre-evaluation.
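The pre-evaluated constant can be checked with a few lines of plain Python (an illustration, not library code):

```python
import math

# Constant sub-expression from the example above, evaluated once:
c = math.sin(math.pi * (math.sqrt(2) + math.sqrt(3)) / 2)
print(c)  # approximately -0.9737230093751650

# The internally stored expression is then equivalent to:
def stored(x, y):
    return math.exp(-x * c * y)
```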
Grouping predefined constants and numbers together helps. It is useful to add brackets to make sure your constant sub-expressions stay together, rather than ending up as factors of some variables or parameters, which stops their pre-evaluation.

The expression evaluator does not perform any clever simplification of input expressions, as is clear from the example above (there is no point in the double negation). The following subsection addresses the simplification strategy.

The total evaluation cost depends on the overall number of operations. Since the evaluator performs no simplifications, it is worth trying to minimise the total number of operations in input expressions manually.
Some operations are more computationally expensive than others. In order of increasing complexity:

- +, -, <, >, <=, >=, ==;
- *, /, abs, fabs, ceil, floor;
- ^, sqrt, exp, log, log10, sin, cos, tan, sinh, cosh, tanh, asin, acos, atan.
- x*x is faster than x^2: it is a single double multiplication vs a generic calculation of an arbitrary power with real exponents.
- (x+sin(y))^2 is faster than (x+sin(y))*(x+sin(y)): sine is an expensive operation, and it is cheaper to square a complicated expression than to compute it twice and add one multiplication.
- exp(-41*( (x+(0.3*cos(2*PI*t)))^2 + (0.3*sin(2*PI*t))^2 )) makes use of 5 expensive operations (cos, sin, exp, and ^ twice), while the equivalent expression exp(-41*( x*x+0.6*x*cos(2*PI*t) + 0.09 )) uses only 2 expensive operations.
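The equivalence in the last example follows from expanding the square and using cos²+sin² = 1: (x+0.3 cos)² + (0.3 sin)² = x² + 0.6 x cos + 0.09. A quick numerical check (plain Python, for illustration only):

```python
import math

def original(x, t):
    # 5 expensive operations: cos, sin, exp, and ^ twice
    return math.exp(-41 * ((x + 0.3 * math.cos(2 * math.pi * t)) ** 2
                           + (0.3 * math.sin(2 * math.pi * t)) ** 2))

def simplified(x, t):
    # 2 expensive operations: cos and exp
    return math.exp(-41 * (x * x + 0.6 * x * math.cos(2 * math.pi * t) + 0.09))

for x, t in [(0.0, 0.0), (0.5, 0.25), (-1.2, 0.7)]:
    assert abs(original(x, t) - simplified(x, t)) < 1e-12
```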
If any simplifying identity applies to an input expression, it may be worth applying it, provided it reduces the cost of evaluation. Computer algebra systems may help.
The expression evaluator is able to calculate an expression either for a given point (its space-time coordinates) or for a given array of points (arrays of their space-time coordinates, in structure-of-arrays layout). Vectorised evaluation is faster than sequential evaluation due to a better data access pattern; some expressions show a measurable speedup factor of 4.6. Therefore, if you are creating your own solver, it is worth making vectorised calls.
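The structure-of-arrays calling pattern can be sketched in plain Python; this illustrates only the idea of passing one array per coordinate, not the actual Nektar++ API, and the expression sin(PI*x)*cos(PI*y) is simply the example used earlier:

```python
import math

def evaluate_soa(xs, ys, zs, ts):
    # Structure-of-arrays: one array per coordinate, one result array.
    # The hypothetical expression sin(PI*x)*cos(PI*y) uses only x and y;
    # zs and ts are accepted to mirror the full space-time signature.
    return [math.sin(math.pi * x) * math.cos(math.pi * y)
            for x, y in zip(xs, ys)]

vals = evaluate_soa([0.5, 1.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0])
print(vals)  # first point: sin(pi/2)*cos(0), approximately 1.0
```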