Nektar++
Nektar::MultiRegions::GlobalLinSysStaticCond Class Reference

A global linear system.

#include <GlobalLinSysStaticCond.h>

Inheritance diagram for Nektar::MultiRegions::GlobalLinSysStaticCond [diagram not shown].
Collaboration diagram for Nektar::MultiRegions::GlobalLinSysStaticCond [diagram not shown].

Public Member Functions

 GlobalLinSysStaticCond (const GlobalLinSysKey &mkey, const boost::weak_ptr< ExpList > &pExpList, const boost::shared_ptr< AssemblyMap > &locToGloMap)
 Constructor for full direct matrix solve.
virtual ~GlobalLinSysStaticCond ()
- Public Member Functions inherited from Nektar::MultiRegions::GlobalLinSys
 GlobalLinSys (const GlobalLinSysKey &pKey, const boost::weak_ptr< ExpList > &pExpList, const boost::shared_ptr< AssemblyMap > &pLocToGloMap)
 Constructor for full direct matrix solve.
virtual ~GlobalLinSys ()
const GlobalLinSysKey & GetKey (void) const
 Returns the key associated with the system.
const boost::weak_ptr< ExpList > & GetLocMat (void) const
void InitObject ()
void Initialise (const boost::shared_ptr< AssemblyMap > &pLocToGloMap)
void Solve (const Array< OneD, const NekDouble > &in, Array< OneD, NekDouble > &out, const AssemblyMapSharedPtr &locToGloMap, const Array< OneD, const NekDouble > &dirForcing=NullNekDouble1DArray)
 Solve the linear system for given input and output vectors using a specified local to global map.
boost::shared_ptr< GlobalLinSys > GetSharedThisPtr ()
 Returns a shared pointer to the current object.
int GetNumBlocks ()
DNekScalMatSharedPtr GetBlock (unsigned int n)
DNekScalBlkMatSharedPtr GetStaticCondBlock (unsigned int n)
void DropStaticCondBlock (unsigned int n)
void SolveLinearSystem (const int pNumRows, const Array< OneD, const NekDouble > &pInput, Array< OneD, NekDouble > &pOutput, const AssemblyMapSharedPtr &locToGloMap, const int pNumDir=0)
 Solve the linear system for given input and output vectors.

Protected Member Functions

virtual DNekScalBlkMatSharedPtr v_PreSolve (int scLevel, NekVector< NekDouble > &F_GlobBnd)
virtual void v_BasisTransform (Array< OneD, NekDouble > &pInOut, int offset)
virtual void v_BasisInvTransform (Array< OneD, NekDouble > &pInOut)
virtual void v_AssembleSchurComplement (boost::shared_ptr< AssemblyMap > pLoctoGloMap)
virtual int v_GetNumBlocks ()
 Get the number of blocks in this system.
virtual GlobalLinSysStaticCondSharedPtr v_Recurse (const GlobalLinSysKey &mkey, const boost::weak_ptr< ExpList > &pExpList, const DNekScalBlkMatSharedPtr pSchurCompl, const DNekScalBlkMatSharedPtr pBinvD, const DNekScalBlkMatSharedPtr pC, const DNekScalBlkMatSharedPtr pInvD, const boost::shared_ptr< AssemblyMap > &locToGloMap)=0
virtual void v_Solve (const Array< OneD, const NekDouble > &in, Array< OneD, NekDouble > &out, const AssemblyMapSharedPtr &locToGloMap, const Array< OneD, const NekDouble > &dirForcing=NullNekDouble1DArray)
 Solve the linear system for given input and output vectors using a specified local to global map.
virtual void v_InitObject ()
virtual void v_Initialise (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 Initialise this object.
void SetupTopLevel (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 Set up the storage for the Schur complement or the top level of the multi-level Schur complement.
void ConstructNextLevelCondensedSystem (const boost::shared_ptr< AssemblyMap > &locToGloMap)
- Protected Member Functions inherited from Nektar::MultiRegions::GlobalLinSys
virtual DNekScalMatSharedPtr v_GetBlock (unsigned int n)
 Retrieves the block matrix from the n-th expansion using the matrix key provided by m_linSysKey.
virtual DNekScalBlkMatSharedPtr v_GetStaticCondBlock (unsigned int n)
 Retrieves the static condensation block matrices from the n-th expansion using the matrix key provided by m_linSysKey.
virtual void v_DropStaticCondBlock (unsigned int n)
 Releases the static condensation block matrices of the n-th expansion from the NekManager, using the matrix key provided by m_linSysKey.

Protected Attributes

GlobalLinSysStaticCondSharedPtr m_recursiveSchurCompl
 Schur complement for Direct Static Condensation.
DNekScalBlkMatSharedPtr m_schurCompl
 Block Schur complement matrix.
DNekScalBlkMatSharedPtr m_BinvD
 Block $ BD^{-1} $ matrix.
DNekScalBlkMatSharedPtr m_C
 Block $ C $ matrix.
DNekScalBlkMatSharedPtr m_invD
 Block $ D^{-1} $ matrix.
boost::shared_ptr< AssemblyMap > m_locToGloMap
 Local to global map.
Array< OneD, NekDouble > m_wsp
 Workspace array for matrix multiplication.
- Protected Attributes inherited from Nektar::MultiRegions::GlobalLinSys
const GlobalLinSysKey m_linSysKey
 Key associated with this linear system.
const boost::weak_ptr< ExpList > m_expList
 Local Matrix System.
const map< int, RobinBCInfoSharedPtr > m_robinBCInfo
 Robin boundary info.

Detailed Description

A global linear system.

Solves a linear system using single- or multi-level static condensation.

Definition at line 55 of file GlobalLinSysStaticCond.h.

Constructor & Destructor Documentation

Nektar::MultiRegions::GlobalLinSysStaticCond::GlobalLinSysStaticCond ( const GlobalLinSysKey &  pKey,
const boost::weak_ptr< ExpList > &  pExpList,
const boost::shared_ptr< AssemblyMap > &  pLocToGloMap 
)

Constructor for full direct matrix solve.

For a matrix system of the form

\[ \left[ \begin{array}{cc} \boldsymbol{A} & \boldsymbol{B}\\ \boldsymbol{C} & \boldsymbol{D} \end{array} \right] \left[ \begin{array}{c} \boldsymbol{x_1}\\ \boldsymbol{x_2} \end{array}\right] = \left[ \begin{array}{c} \boldsymbol{y_1}\\ \boldsymbol{y_2} \end{array}\right], \]

where $\boldsymbol{D}$ and $(\boldsymbol{A-BD^{-1}C})$ are invertible, store and assemble a static condensation system, according to a given local to global mapping. #m_linSys is constructed by AssembleSchurComplement().
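To make the elimination explicit, solving the second block row for $\boldsymbol{x_2}$ and substituting it into the first row gives the condensed boundary system and the interior back-substitution (a standard derivation added here for orientation; it is not quoted from the source):

\[ \left(\boldsymbol{A} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}\right)\boldsymbol{x_1} = \boldsymbol{y_1} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{y_2}, \qquad \boldsymbol{x_2} = \boldsymbol{D}^{-1}\left(\boldsymbol{y_2} - \boldsymbol{C}\boldsymbol{x_1}\right). \]

These blocks map directly onto the protected members of this class: the Schur complement $\boldsymbol{A}-\boldsymbol{BD^{-1}C}$ is stored in m_schurCompl, $\boldsymbol{BD^{-1}}$ in m_BinvD, $\boldsymbol{C}$ in m_C and $\boldsymbol{D^{-1}}$ in m_invD.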

Parameters
mKey    Associated matrix key.
pLocMatSys    LocalMatrixSystem.
locToGloMap    Local to global mapping.

Definition at line 74 of file GlobalLinSysStaticCond.cpp.

: GlobalLinSys(pKey, pExpList, pLocToGloMap),
m_locToGloMap (pLocToGloMap)
{
}
Nektar::MultiRegions::GlobalLinSysStaticCond::~GlobalLinSysStaticCond ( )
virtual

Definition at line 95 of file GlobalLinSysStaticCond.cpp.

{
}

Member Function Documentation

void Nektar::MultiRegions::GlobalLinSysStaticCond::ConstructNextLevelCondensedSystem ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)
protected

Definition at line 345 of file GlobalLinSysStaticCond.cpp.

References ASSERTL0, Nektar::eDIAGONAL, Nektar::eWrapper, Nektar::MultiRegions::GlobalLinSys::m_expList, Nektar::MultiRegions::GlobalLinSys::m_linSysKey, m_recursiveSchurCompl, m_schurCompl, sign, and v_Recurse().

Referenced by v_Initialise().

{
int i,j,n,cnt;
DNekScalBlkMatSharedPtr blkMatrices[4];
// Create temporary matrices within an inner-local scope to ensure
// any references to the intermediate storage are lost before
// the recursive step, rather than at the end of the routine.
// This allows the Schur complement matrix from this level to be
// disposed of in the next level after use without being retained
// due to lingering shared pointers.
{
const Array<OneD,const unsigned int>& nBndDofsPerPatch
= pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
const Array<OneD,const unsigned int>& nIntDofsPerPatch
= pLocToGloMap->GetNumLocalIntCoeffsPerPatch();
// STEP 1:
// Based upon the Schur complement of the current level we
// will substructure this matrix in the form
//   --      --
//   |  A   B |
//   |  C   D |
//   --      --
// All matrices A, B, C and D are (diagonal) block matrices.
// However, as a start we will use an array of DNekMatrices as
// it is too hard to change the individual entries of a
// DNekScalBlkMatSharedPtr.
// In addition, we will also try to ensure that the memory of
// the block matrices is contiguous. This will probably
// enhance the efficiency.
// - Calculate the total number of entries in the block matrices
int nPatches = pLocToGloMap->GetNumPatches();
int nEntriesA = 0; int nEntriesB = 0;
int nEntriesC = 0; int nEntriesD = 0;
for(i = 0; i < nPatches; i++)
{
nEntriesA += nBndDofsPerPatch[i]*nBndDofsPerPatch[i];
nEntriesB += nBndDofsPerPatch[i]*nIntDofsPerPatch[i];
nEntriesC += nIntDofsPerPatch[i]*nBndDofsPerPatch[i];
nEntriesD += nIntDofsPerPatch[i]*nIntDofsPerPatch[i];
}
// Now create the DNekMatrices and link them to the memory
// allocated above
Array<OneD, DNekMatSharedPtr> substructuredMat[4]
= {Array<OneD, DNekMatSharedPtr>(nPatches), //Matrix A
Array<OneD, DNekMatSharedPtr>(nPatches), //Matrix B
Array<OneD, DNekMatSharedPtr>(nPatches), //Matrix C
Array<OneD, DNekMatSharedPtr>(nPatches)}; //Matrix D
// Initialise storage for the matrices. We do this separately
// for each matrix so the matrices may be independently
// deallocated when no longer required.
Array<OneD, NekDouble> storageA(nEntriesA,0.0);
Array<OneD, NekDouble> storageB(nEntriesB,0.0);
Array<OneD, NekDouble> storageC(nEntriesC,0.0);
Array<OneD, NekDouble> storageD(nEntriesD,0.0);
Array<OneD, NekDouble> tmparray;
// the matrices below wrap (rather than copy) the storage arrays
PointerWrapper wType = eWrapper;
int cntA = 0;
int cntB = 0;
int cntC = 0;
int cntD = 0;
for(i = 0; i < nPatches; i++)
{
// Matrix A
tmparray = storageA+cntA;
substructuredMat[0][i] = MemoryManager<DNekMat>
::AllocateSharedPtr(nBndDofsPerPatch[i],
nBndDofsPerPatch[i],
tmparray, wType);
// Matrix B
tmparray = storageB+cntB;
substructuredMat[1][i] = MemoryManager<DNekMat>
::AllocateSharedPtr(nBndDofsPerPatch[i],
nIntDofsPerPatch[i],
tmparray, wType);
// Matrix C
tmparray = storageC+cntC;
substructuredMat[2][i] = MemoryManager<DNekMat>
::AllocateSharedPtr(nIntDofsPerPatch[i],
nBndDofsPerPatch[i],
tmparray, wType);
// Matrix D
tmparray = storageD+cntD;
substructuredMat[3][i] = MemoryManager<DNekMat>
::AllocateSharedPtr(nIntDofsPerPatch[i],
nIntDofsPerPatch[i],
tmparray, wType);
cntA += nBndDofsPerPatch[i] * nBndDofsPerPatch[i];
cntB += nBndDofsPerPatch[i] * nIntDofsPerPatch[i];
cntC += nIntDofsPerPatch[i] * nBndDofsPerPatch[i];
cntD += nIntDofsPerPatch[i] * nIntDofsPerPatch[i];
}
// Then, project SchurComplement onto
// the substructured matrices of the next level
DNekScalMatSharedPtr schurComplSubMat;
int schurComplSubMatnRows;
Array<OneD, const int> patchId, dofId;
Array<OneD, const unsigned int> isBndDof;
Array<OneD, const NekDouble> sign;
NekDouble scale;
// local handle on the Schur complement assembled at this level
DNekScalBlkMatSharedPtr SchurCompl = m_schurCompl;
for(n = cnt = 0; n < SchurCompl->GetNumberOfBlockRows(); ++n)
{
schurComplSubMat = SchurCompl->GetBlock(n,n);
schurComplSubMatnRows = schurComplSubMat->GetRows();
scale = SchurCompl->GetBlock(n,n)->Scale();
Array<OneD, NekDouble> schurSubMat
= SchurCompl->GetBlock(n,n)->GetOwnedMatrix()->GetPtr();
patchId = pLocToGloMap->GetPatchMapFromPrevLevel()
->GetPatchId() + cnt;
dofId = pLocToGloMap->GetPatchMapFromPrevLevel()
->GetDofId() + cnt;
isBndDof = pLocToGloMap->GetPatchMapFromPrevLevel()
->IsBndDof() + cnt;
sign = pLocToGloMap->GetPatchMapFromPrevLevel()
->GetSign() + cnt;
// Set up Matrix;
for(i = 0; i < schurComplSubMatnRows; ++i)
{
int pId = patchId[i];
Array<OneD, NekDouble> subMat0
= substructuredMat[0][pId]->GetPtr();
Array<OneD, NekDouble> subMat1
= substructuredMat[1][patchId[i]]->GetPtr();
Array<OneD, NekDouble> subMat2
= substructuredMat[2][patchId[i]]->GetPtr();
Array<OneD, NekDouble> subMat3
= substructuredMat[3][patchId[i]]->GetPtr();
int subMat0rows = substructuredMat[0][pId]->GetRows();
int subMat1rows = substructuredMat[1][pId]->GetRows();
int subMat2rows = substructuredMat[2][pId]->GetRows();
int subMat3rows = substructuredMat[3][pId]->GetRows();
if(isBndDof[i])
{
for(j = 0; j < schurComplSubMatnRows; ++j)
{
ASSERTL0(patchId[i]==patchId[j],
"These values should be equal");
if(isBndDof[j])
{
subMat0[dofId[i]+dofId[j]*subMat0rows] +=
sign[i]*sign[j]*(
scale*schurSubMat[
i+j*schurComplSubMatnRows]);
}
else
{
subMat1[dofId[i]+dofId[j]*subMat1rows] +=
sign[i]*sign[j]*(
scale*schurSubMat[
i+j*schurComplSubMatnRows]);
}
}
}
else
{
for(j = 0; j < schurComplSubMatnRows; ++j)
{
ASSERTL0(patchId[i]==patchId[j],
"These values should be equal");
if(isBndDof[j])
{
subMat2[dofId[i]+dofId[j]*subMat2rows] +=
sign[i]*sign[j]*(
scale*schurSubMat[
i+j*schurComplSubMatnRows]);
}
else
{
subMat3[dofId[i]+dofId[j]*subMat3rows] +=
sign[i]*sign[j]*(
scale*schurSubMat[
i+j*schurComplSubMatnRows]);
}
}
}
}
cnt += schurComplSubMatnRows;
}
// STEP 2: condense the system
// This can be done elementally (i.e. patch per patch)
for(i = 0; i < nPatches; i++)
{
if(nIntDofsPerPatch[i])
{
Array<OneD, NekDouble> subMat0
= substructuredMat[0][i]->GetPtr();
Array<OneD, NekDouble> subMat1
= substructuredMat[1][i]->GetPtr();
Array<OneD, NekDouble> subMat2
= substructuredMat[2][i]->GetPtr();
int subMat0rows = substructuredMat[0][i]->GetRows();
int subMat1rows = substructuredMat[1][i]->GetRows();
int subMat2rows = substructuredMat[2][i]->GetRows();
int subMat2cols = substructuredMat[2][i]->GetColumns();
// 1. D -> InvD
substructuredMat[3][i]->Invert();
// 2. B -> BInvD
(*substructuredMat[1][i]) = (*substructuredMat[1][i])*
(*substructuredMat[3][i]);
// 3. A -> A - BInvD*C (= schurcomplement)
// (*substructuredMat[0][i]) =(*substructuredMat[0][i])-
// (*substructuredMat[1][i])*(*substructuredMat[2][i]);
// Note: faster to use blas directly
Blas::Dgemm('N','N', subMat1rows, subMat2cols,
subMat2rows, -1.0, &subMat1[0], subMat1rows,
&subMat2[0], subMat2rows, 1.0, &subMat0[0],
subMat0rows);
}
}
// STEP 3: fill the block matrices. However, do note that we
// first have to convert them to a DNekScalMat in order to be
// compatible with the first level of static condensation
const Array<OneD,const unsigned int>& nbdry_size
= pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
const Array<OneD,const unsigned int>& nint_size
= pLocToGloMap->GetNumLocalIntCoeffsPerPatch();
MatrixStorage blkmatStorage = eDIAGONAL;
blkMatrices[0] = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nbdry_size, nbdry_size, blkmatStorage);
blkMatrices[1] = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nbdry_size, nint_size , blkmatStorage);
blkMatrices[2] = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nint_size , nbdry_size, blkmatStorage);
blkMatrices[3] = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nint_size , nint_size , blkmatStorage);
for(i = 0; i < nPatches; i++)
{
for(j = 0; j < 4; j++)
{
DNekScalMatSharedPtr tmpscalmat = MemoryManager<DNekScalMat>
::AllocateSharedPtr(1.0,substructuredMat[j][i]);
blkMatrices[j]->SetBlock(i,i,tmpscalmat);
}
}
}
// We've finished with the Schur complement matrix passed to this
// level, so return the memory to the system. The Schur complement
// matrix need only be retained at the last level. Save the other
// matrices at this level though.
m_schurCompl.reset();
m_recursiveSchurCompl = v_Recurse(
m_linSysKey, m_expList, blkMatrices[0], blkMatrices[1],
blkMatrices[2], blkMatrices[3], pLocToGloMap);
}
void Nektar::MultiRegions::GlobalLinSysStaticCond::SetupTopLevel ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)
protected

Set up the storage for the Schur complement or the top level of the multi-level Schur complement.

For the first level in multi-level static condensation, or the only level in the case of single-level static condensation, allocate the condensed matrices and populate them with the local matrices retrieved from the expansion list.

Parameters
pLocToGloMap    Local to global mapping.

Definition at line 296 of file GlobalLinSysStaticCond.cpp.

References Nektar::eDIAGONAL, Nektar::StdRegions::eHybridDGHelmBndLam, Nektar::MultiRegions::GlobalMatrixKey::GetMatrixType(), m_BinvD, m_C, Nektar::MultiRegions::GlobalLinSys::m_expList, m_invD, Nektar::MultiRegions::GlobalLinSys::m_linSysKey, m_schurCompl, Nektar::MultiRegions::GlobalLinSys::v_GetBlock(), and Nektar::MultiRegions::GlobalLinSys::v_GetStaticCondBlock().

Referenced by v_InitObject(), and Nektar::MultiRegions::GlobalLinSysIterativeStaticCond::v_InitObject().

{
int n;
int n_exp = m_expList.lock()->GetNumElmts();
const Array<OneD,const unsigned int>& nbdry_size
= pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
const Array<OneD,const unsigned int>& nint_size
= pLocToGloMap->GetNumLocalIntCoeffsPerPatch();
// Setup Block Matrix systems
MatrixStorage blkmatStorage = eDIAGONAL;
m_schurCompl = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nbdry_size, nbdry_size, blkmatStorage);
m_BinvD = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nbdry_size, nint_size , blkmatStorage);
m_C = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nint_size , nbdry_size, blkmatStorage);
m_invD = MemoryManager<DNekScalBlkMat>
::AllocateSharedPtr(nint_size , nint_size , blkmatStorage);
for(n = 0; n < n_exp; ++n)
{
if (m_linSysKey.GetMatrixType() == StdRegions::eHybridDGHelmBndLam)
{
DNekScalMatSharedPtr loc_mat = v_GetBlock(
m_expList.lock()->GetOffset_Elmt_Id(n));
m_schurCompl->SetBlock(n,n,loc_mat);
}
else
{
DNekScalBlkMatSharedPtr loc_schur = v_GetStaticCondBlock(
m_expList.lock()->GetOffset_Elmt_Id(n));
DNekScalMatSharedPtr t;
m_schurCompl->SetBlock(n, n, t = loc_schur->GetBlock(0,0));
m_BinvD ->SetBlock(n, n, t = loc_schur->GetBlock(0,1));
m_C ->SetBlock(n, n, t = loc_schur->GetBlock(1,0));
m_invD ->SetBlock(n, n, t = loc_schur->GetBlock(1,1));
}
}
}
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_AssembleSchurComplement ( boost::shared_ptr< AssemblyMap >  pLoctoGloMap)
inline protected virtual
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_BasisInvTransform ( Array< OneD, NekDouble > &  pInOut)
inline protected virtual

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond.

Definition at line 81 of file GlobalLinSysStaticCond.h.

Referenced by v_Solve().

{
}
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_BasisTransform ( Array< OneD, NekDouble > &  pInOut,
int  offset 
)
inline protected virtual

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond.

Definition at line 74 of file GlobalLinSysStaticCond.h.

Referenced by v_Solve().

{
}
int Nektar::MultiRegions::GlobalLinSysStaticCond::v_GetNumBlocks ( )
protected virtual

Get the number of blocks in this system.

At the top level this corresponds to the number of elements in the expansion list.

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 284 of file GlobalLinSysStaticCond.cpp.

References m_schurCompl.

{
return m_schurCompl->GetNumberOfBlockRows();
}
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_Initialise ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)
protected virtual

Initialise this object.

If at the last level of recursion (or the only level in the case of single-level static condensation), assemble the Schur complement. For other levels, in the case of multi-level static condensation, the next level of the condensed system is computed.

Parameters
pLocToGloMap    Local to global mapping.

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 266 of file GlobalLinSysStaticCond.cpp.

References ConstructNextLevelCondensedSystem(), m_locToGloMap, m_wsp, and v_AssembleSchurComplement().

{
int nLocalBnd = m_locToGloMap->GetNumLocalBndCoeffs();
int nGlobal = m_locToGloMap->GetNumGlobalCoeffs();
m_wsp = Array<OneD, NekDouble>(2*nLocalBnd + nGlobal);
if (pLocToGloMap->AtLastLevel())
{
v_AssembleSchurComplement(pLocToGloMap);
}
else
{
ConstructNextLevelCondensedSystem(
pLocToGloMap->GetNextLevelLocalToGlobalMap());
}
}
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_InitObject ( )
protected virtual

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 83 of file GlobalLinSysStaticCond.cpp.

References Nektar::MultiRegions::GlobalLinSys::Initialise(), m_locToGloMap, and SetupTopLevel().

{
// Allocate memory for top-level structure
SetupTopLevel(m_locToGloMap);
// Construct this level
Initialise(m_locToGloMap);
}
virtual DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::v_PreSolve ( int  scLevel,
NekVector< NekDouble > &  F_GlobBnd 
)
inline protected virtual

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond.

Definition at line 67 of file GlobalLinSysStaticCond.h.

References m_schurCompl.

Referenced by v_Solve().

{
return m_schurCompl;
}
virtual GlobalLinSysStaticCondSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::v_Recurse ( const GlobalLinSysKey &  mkey,
const boost::weak_ptr< ExpList > &  pExpList,
const DNekScalBlkMatSharedPtr  pSchurCompl,
const DNekScalBlkMatSharedPtr  pBinvD,
const DNekScalBlkMatSharedPtr  pC,
const DNekScalBlkMatSharedPtr  pInvD,
const boost::shared_ptr< AssemblyMap > &  locToGloMap 
)
protected pure virtual
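v_Recurse is the factory hook through which ConstructNextLevelCondensedSystem() obtains the linear system for the next condensation level: a concrete subclass returns an instance of its own type built from the substructured blocks. The sketch below illustrates the expected shape of such an override; the class GlobalLinSysMyStaticCond and its constructor taking the condensed blocks are hypothetical stand-ins (they are not documented on this page), and only the facilities visible elsewhere here (MemoryManager::AllocateSharedPtr, the public Initialise member) are relied on.

// Hypothetical subclass override, sketching the v_Recurse contract only.
GlobalLinSysStaticCondSharedPtr GlobalLinSysMyStaticCond::v_Recurse(
    const GlobalLinSysKey                &mkey,
    const boost::weak_ptr<ExpList>       &pExpList,
    const DNekScalBlkMatSharedPtr         pSchurCompl,
    const DNekScalBlkMatSharedPtr         pBinvD,
    const DNekScalBlkMatSharedPtr         pC,
    const DNekScalBlkMatSharedPtr         pInvD,
    const boost::shared_ptr<AssemblyMap> &locToGloMap)
{
    // Allocate a solver of the same (derived) type for the coarser level,
    // handing it the blocks condensed at the current level. The
    // block-taking constructor is assumed here for illustration.
    boost::shared_ptr<GlobalLinSysMyStaticCond> sys =
        MemoryManager<GlobalLinSysMyStaticCond>::AllocateSharedPtr(
            mkey, pExpList, pSchurCompl, pBinvD, pC, pInvD, locToGloMap);

    // Initialise either assembles the Schur complement (last level) or
    // recurses again via ConstructNextLevelCondensedSystem().
    sys->Initialise(locToGloMap);
    return sys;
}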
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_Solve ( const Array< OneD, const NekDouble > &  in,
Array< OneD, NekDouble > &  out,
const AssemblyMapSharedPtr &  pLocToGloMap,
const Array< OneD, const NekDouble > &  dirForcing = NullNekDouble1DArray 
)
protected virtual

Solve the linear system for given input and output vectors using a specified local to global map.

Implements Nektar::MultiRegions::GlobalLinSys.

Definition at line 104 of file GlobalLinSysStaticCond.cpp.

References Nektar::eWrapper, Vmath::Fill(), Nektar::NekVector< DataType >::GetPtr(), m_BinvD, m_C, m_invD, m_recursiveSchurCompl, m_wsp, Nektar::MultiRegions::GlobalLinSys::SolveLinearSystem(), v_BasisInvTransform(), v_BasisTransform(), v_PreSolve(), Vmath::Vadd(), Vmath::Vcopy(), and Vmath::Vsub().

{
bool dirForcCalculated = (bool) dirForcing.num_elements();
bool atLastLevel = pLocToGloMap->AtLastLevel();
int scLevel = pLocToGloMap->GetStaticCondLevel();
int nGlobDofs = pLocToGloMap->GetNumGlobalCoeffs();
int nGlobBndDofs = pLocToGloMap->GetNumGlobalBndCoeffs();
int nDirBndDofs = pLocToGloMap->GetNumGlobalDirBndCoeffs();
int nGlobHomBndDofs = nGlobBndDofs - nDirBndDofs;
int nLocBndDofs = pLocToGloMap->GetNumLocalBndCoeffs();
int nIntDofs = pLocToGloMap->GetNumGlobalCoeffs()
- nGlobBndDofs;
Array<OneD, NekDouble> F = m_wsp + 2*nLocBndDofs;
Array<OneD, NekDouble> tmp;
if(nDirBndDofs && dirForcCalculated)
{
Vmath::Vsub(nGlobDofs,in.get(),1,dirForcing.get(),1,F.get(),1);
}
else
{
Vmath::Vcopy(nGlobDofs,in.get(),1,F.get(),1);
}
NekVector<NekDouble> F_HomBnd(nGlobHomBndDofs,tmp=F+nDirBndDofs,eWrapper);
NekVector<NekDouble> F_GlobBnd(nGlobBndDofs,F,eWrapper);
NekVector<NekDouble> F_LocBnd(nLocBndDofs,0.0);
NekVector<NekDouble> F_Int(nIntDofs,tmp=F+nGlobBndDofs,eWrapper);
NekVector<NekDouble> V_GlobBnd(nGlobBndDofs,out,eWrapper);
NekVector<NekDouble> V_GlobHomBnd(nGlobHomBndDofs, tmp=out+nDirBndDofs, eWrapper);
NekVector<NekDouble> V_Int(nIntDofs,tmp=out+nGlobBndDofs,eWrapper);
NekVector<NekDouble> V_LocBnd(nLocBndDofs,m_wsp,eWrapper);
NekVector<NekDouble> V_GlobHomBndTmp(nGlobHomBndDofs,0.0);
// set up normalisation factor for right hand side on first SC level
DNekScalBlkMatSharedPtr sc = v_PreSolve(scLevel, F_GlobBnd);
if(nGlobHomBndDofs)
{
// construct boundary forcing
if( nIntDofs && ((!dirForcCalculated) && (atLastLevel)) )
{
DNekScalBlkMat &BinvD = *m_BinvD;
DNekScalBlkMat &SchurCompl = *sc;
// include dirichlet boundary forcing
pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
V_LocBnd = BinvD*F_Int + SchurCompl*V_LocBnd;
}
else if((!dirForcCalculated) && (atLastLevel))
{
// include dirichlet boundary forcing
DNekScalBlkMat &SchurCompl = *sc;
pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
V_LocBnd = SchurCompl*V_LocBnd;
}
else
{
DNekScalBlkMat &BinvD = *m_BinvD;
V_LocBnd = BinvD*F_Int;
}
pLocToGloMap->AssembleBnd(V_LocBnd,V_GlobHomBndTmp,
nDirBndDofs);
F_HomBnd = F_HomBnd - V_GlobHomBndTmp;
// Transform from original basis to low energy
v_BasisTransform(F, nDirBndDofs);
// For parallel multi-level static condensation some
// processors may have different levels to others. This
// routine receives contributions to partition vertices from
// those lower levels, whilst not sending anything to the
// other partitions, and includes them in the modified right
// hand side vector.
int lcLevel = pLocToGloMap->GetLowestStaticCondLevel();
if(atLastLevel && scLevel < lcLevel)
{
// If this level is not the lowest level across all
// processes, we must do dummy communication for the
// remaining levels
Array<OneD, NekDouble> tmp(nGlobBndDofs);
for (int i = scLevel; i < lcLevel; ++i)
{
Vmath::Fill(nGlobBndDofs, 0.0, tmp, 1);
pLocToGloMap->UniversalAssembleBnd(tmp);
Vmath::Vcopy(nGlobHomBndDofs,
tmp.get()+nDirBndDofs, 1,
V_GlobHomBndTmp.GetPtr().get(), 1);
F_HomBnd = F_HomBnd - V_GlobHomBndTmp;
}
}
// solve boundary system
if(atLastLevel)
{
Array<OneD, NekDouble> pert(nGlobBndDofs,0.0);
NekVector<NekDouble> Pert(nGlobBndDofs,pert,eWrapper);
// Solve for difference from initial solution given inout;
SolveLinearSystem(
nGlobBndDofs, F, pert, pLocToGloMap, nDirBndDofs);
// Transform back to original basis
v_BasisInvTransform(pert);
// Add back initial conditions onto difference
Vmath::Vadd(nGlobHomBndDofs,&out[nDirBndDofs],1,
&pert[nDirBndDofs],1,&out[nDirBndDofs],1);
}
else
{
m_recursiveSchurCompl->Solve(F,
V_GlobBnd.GetPtr(),
pLocToGloMap->GetNextLevelLocalToGlobalMap());
}
}
// solve interior system
if(nIntDofs)
{
DNekScalBlkMat &invD = *m_invD;
if(nGlobHomBndDofs || nDirBndDofs)
{
DNekScalBlkMat &C = *m_C;
if(dirForcCalculated && nDirBndDofs)
{
pLocToGloMap->GlobalToLocalBnd(V_GlobHomBnd,V_LocBnd,
nDirBndDofs);
}
else
{
pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
}
F_Int = F_Int - C*V_LocBnd;
}
V_Int = invD*F_Int;
}
}

Member Data Documentation

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_BinvD
protected
DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_C
protected
DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_invD
protected
boost::shared_ptr<AssemblyMap> Nektar::MultiRegions::GlobalLinSysStaticCond::m_locToGloMap
protected
GlobalLinSysStaticCondSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_recursiveSchurCompl
protected

Schur complement for Direct Static Condensation.

Definition at line 105 of file GlobalLinSysStaticCond.h.

Referenced by ConstructNextLevelCondensedSystem(), and v_Solve().

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_schurCompl
protected
Array<OneD, NekDouble> Nektar::MultiRegions::GlobalLinSysStaticCond::m_wsp
protected

Workspace array for matrix multiplication.

Definition at line 117 of file GlobalLinSysStaticCond.h.

Referenced by Nektar::MultiRegions::GlobalLinSysIterativeStaticCond::v_DoMatrixMultiply(), v_Initialise(), and v_Solve().