Nektar++
Nektar::MultiRegions::GlobalLinSysStaticCond Class Reference [abstract]

A global linear system.

#include <GlobalLinSysStaticCond.h>

Inheritance diagram for Nektar::MultiRegions::GlobalLinSysStaticCond:
Collaboration diagram for Nektar::MultiRegions::GlobalLinSysStaticCond:

Public Member Functions

 GlobalLinSysStaticCond (const GlobalLinSysKey &mkey, const boost::weak_ptr< ExpList > &pExpList, const boost::shared_ptr< AssemblyMap > &locToGloMap)
 	Constructor for full direct matrix solve.
 
virtual ~GlobalLinSysStaticCond ()
 
- Public Member Functions inherited from Nektar::MultiRegions::GlobalLinSys
 GlobalLinSys (const GlobalLinSysKey &pKey, const boost::weak_ptr< ExpList > &pExpList, const boost::shared_ptr< AssemblyMap > &pLocToGloMap)
 	Constructor for full direct matrix solve.
 
virtual ~GlobalLinSys ()
 
const GlobalLinSysKey & GetKey (void) const
 	Returns the key associated with the system.
 
const boost::weak_ptr< ExpList > & GetLocMat (void) const
 
void InitObject ()
 
void Initialise (const boost::shared_ptr< AssemblyMap > &pLocToGloMap)
 
void Solve (const Array< OneD, const NekDouble > &in, Array< OneD, NekDouble > &out, const AssemblyMapSharedPtr &locToGloMap, const Array< OneD, const NekDouble > &dirForcing=NullNekDouble1DArray)
 	Solve the linear system for given input and output vectors using a specified local to global map.
 
boost::shared_ptr< GlobalLinSys > GetSharedThisPtr ()
 	Returns a shared pointer to the current object.
 
int GetNumBlocks ()
 
DNekScalMatSharedPtr GetBlock (unsigned int n)
 
DNekScalBlkMatSharedPtr GetStaticCondBlock (unsigned int n)
 
void DropStaticCondBlock (unsigned int n)
 
void SolveLinearSystem (const int pNumRows, const Array< OneD, const NekDouble > &pInput, Array< OneD, NekDouble > &pOutput, const AssemblyMapSharedPtr &locToGloMap, const int pNumDir=0)
 	Solve the linear system for given input and output vectors.
 
Protected Member Functions

virtual DNekScalBlkMatSharedPtr v_PreSolve (int scLevel, NekVector< NekDouble > &F_GlobBnd)
 
virtual void v_BasisTransform (Array< OneD, NekDouble > &pInOut, int offset)
 
virtual void v_BasisInvTransform (Array< OneD, NekDouble > &pInOut)
 
virtual void v_AssembleSchurComplement (boost::shared_ptr< AssemblyMap > pLoctoGloMap)
 
virtual int v_GetNumBlocks ()
 	Get the number of blocks in this system.
 
virtual GlobalLinSysStaticCondSharedPtr v_Recurse (const GlobalLinSysKey &mkey, const boost::weak_ptr< ExpList > &pExpList, const DNekScalBlkMatSharedPtr pSchurCompl, const DNekScalBlkMatSharedPtr pBinvD, const DNekScalBlkMatSharedPtr pC, const DNekScalBlkMatSharedPtr pInvD, const boost::shared_ptr< AssemblyMap > &locToGloMap)=0
 
virtual void v_Solve (const Array< OneD, const NekDouble > &in, Array< OneD, NekDouble > &out, const AssemblyMapSharedPtr &locToGloMap, const Array< OneD, const NekDouble > &dirForcing=NullNekDouble1DArray)
 	Solve the linear system for given input and output vectors using a specified local to global map.
 
virtual void v_InitObject ()
 
virtual void v_Initialise (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 	Initialise this object.
 
void SetupTopLevel (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 	Set up the storage for the Schur complement or the top level of the multi-level Schur complement.
 
void ConstructNextLevelCondensedSystem (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 
- Protected Member Functions inherited from Nektar::MultiRegions::GlobalLinSys
virtual DNekScalMatSharedPtr v_GetBlock (unsigned int n)
 	Retrieves the block matrix from the n-th expansion using the matrix key provided by m_linSysKey.
 
virtual DNekScalBlkMatSharedPtr v_GetStaticCondBlock (unsigned int n)
 	Retrieves the static condensation block matrices from the n-th expansion using the matrix key provided by m_linSysKey.
 
virtual void v_DropStaticCondBlock (unsigned int n)
 	Releases the static condensation block matrices of the n-th expansion from the NekManager, using the matrix key provided by m_linSysKey.
 
PreconditionerSharedPtr CreatePrecon (AssemblyMapSharedPtr asmMap)
 	Create a preconditioner object from the parameters defined in the supplied assembly map.
 
Protected Attributes

GlobalLinSysStaticCondSharedPtr m_recursiveSchurCompl
 	Schur complement for Direct Static Condensation.
 
DNekScalBlkMatSharedPtr m_schurCompl
 	Block Schur complement matrix.
 
DNekScalBlkMatSharedPtr m_BinvD
 	Block $BD^{-1}$ matrix.
 
DNekScalBlkMatSharedPtr m_C
 	Block $C$ matrix.
 
DNekScalBlkMatSharedPtr m_invD
 	Block $D^{-1}$ matrix.
 
boost::shared_ptr< AssemblyMap > m_locToGloMap
 	Local to global map.
 
Array< OneD, NekDouble > m_wsp
 	Workspace array for matrix multiplication.
 
- Protected Attributes inherited from Nektar::MultiRegions::GlobalLinSys
const GlobalLinSysKey m_linSysKey
 	Key associated with this linear system.
 
const boost::weak_ptr< ExpList > m_expList
 	Local Matrix System.
 
const std::map< int, RobinBCInfoSharedPtr > m_robinBCInfo
 	Robin boundary info.
 
bool m_verbose

Detailed Description

A global linear system.

Solves a linear system using single- or multi-level static condensation.

Definition at line 55 of file GlobalLinSysStaticCond.h.

Constructor & Destructor Documentation

Nektar::MultiRegions::GlobalLinSysStaticCond::GlobalLinSysStaticCond ( const GlobalLinSysKey &  pKey,
const boost::weak_ptr< ExpList > &  pExpList,
const boost::shared_ptr< AssemblyMap > &  pLocToGloMap 
)

Constructor for full direct matrix solve.

For a matrix system of the form

\[ \left[ \begin{array}{cc} \boldsymbol{A} & \boldsymbol{B}\\ \boldsymbol{C} & \boldsymbol{D} \end{array} \right] \left[ \begin{array}{c} \boldsymbol{x_1}\\ \boldsymbol{x_2} \end{array}\right] = \left[ \begin{array}{c} \boldsymbol{y_1}\\ \boldsymbol{y_2} \end{array}\right], \]

where $\boldsymbol{D}$ and $(\boldsymbol{A-BD^{-1}C})$ are invertible, store and assemble a static condensation system, according to a given local to global mapping. m_linSys is constructed by AssembleSchurComplement().
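
Eliminating the interior unknowns from the system above gives the condensed equations that this class stores and solves; this is a restatement of the block system, not additional API:

\[ \left(\boldsymbol{A} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}\right) \boldsymbol{x_1} = \boldsymbol{y_1} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{y_2}, \qquad \boldsymbol{x_2} = \boldsymbol{D}^{-1}\left(\boldsymbol{y_2} - \boldsymbol{C}\boldsymbol{x_1}\right). \]

The factors $\boldsymbol{B}\boldsymbol{D}^{-1}$, $\boldsymbol{C}$ and $\boldsymbol{D}^{-1}$ correspond to the members m_BinvD, m_C and m_invD, while the Schur complement $\boldsymbol{A}-\boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}$ is held in m_schurCompl.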

Parameters
	mKey	Associated matrix key.
	pLocMatSys	Local matrix system.
	locToGloMap	Local to global mapping.

Definition at line 76 of file GlobalLinSysStaticCond.cpp.

    : GlobalLinSys(pKey, pExpList, pLocToGloMap),
      m_locToGloMap(pLocToGloMap)
{
}
Nektar::MultiRegions::GlobalLinSysStaticCond::~GlobalLinSysStaticCond ( )
virtual

Definition at line 97 of file GlobalLinSysStaticCond.cpp.

{
}

Member Function Documentation

void Nektar::MultiRegions::GlobalLinSysStaticCond::ConstructNextLevelCondensedSystem ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)
protected

Definition at line 346 of file GlobalLinSysStaticCond.cpp.

References Nektar::MemoryManager< DataType >::AllocateSharedPtr(), ASSERTL0, Nektar::eDIAGONAL, Nektar::eFULL, Nektar::StdRegions::eHelmholtz, Nektar::StdRegions::eLaplacian, Nektar::StdRegions::eMass, Nektar::eSYMMETRIC, Nektar::eWrapper, Nektar::MultiRegions::GlobalMatrixKey::GetMatrixType(), Nektar::MultiRegions::GlobalLinSys::m_expList, Nektar::MultiRegions::GlobalLinSys::m_linSysKey, m_recursiveSchurCompl, m_schurCompl, sign, and v_Recurse().

Referenced by v_Initialise().

{
    int i,j,n,cnt;
    DNekScalBlkMatSharedPtr blkMatrices[4];

    // Create temporary matrices within an inner-local scope to ensure
    // any references to the intermediate storage are lost before the
    // recursive step, rather than at the end of the routine. This
    // allows the Schur complement matrix from this level to be
    // disposed of in the next level after use, without being retained
    // by lingering shared pointers.
    {
        const Array<OneD,const unsigned int>& nBndDofsPerPatch
            = pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
        const Array<OneD,const unsigned int>& nIntDofsPerPatch
            = pLocToGloMap->GetNumLocalIntCoeffsPerPatch();

        // STEP 1:
        // Based upon the Schur complement of the current level we
        // will substructure this matrix in the form
        //      --      --
        //      |  A   B |
        //      |  C   D |
        //      --      --
        // All matrices A, B, C and D are (diagonal) block matrices.
        // However, as a start we will use an array of DNekMatrices as
        // it is too hard to change the individual entries of a
        // DNekScalBlkMatSharedPtr.

        // In addition, we will also try to ensure that the memory of
        // the block matrices is contiguous, which will probably
        // enhance efficiency.
        // - Calculate the total number of entries in the block matrices
        int nPatches  = pLocToGloMap->GetNumPatches();
        int nEntriesA = 0; int nEntriesB = 0;
        int nEntriesC = 0; int nEntriesD = 0;

        for(i = 0; i < nPatches; i++)
        {
            nEntriesA += nBndDofsPerPatch[i]*nBndDofsPerPatch[i];
            nEntriesB += nBndDofsPerPatch[i]*nIntDofsPerPatch[i];
            nEntriesC += nIntDofsPerPatch[i]*nBndDofsPerPatch[i];
            nEntriesD += nIntDofsPerPatch[i]*nIntDofsPerPatch[i];
        }

        // Now create the DNekMatrices and link them to the memory
        // allocated above
        Array<OneD, DNekMatSharedPtr> substructuredMat[4]
            = {Array<OneD, DNekMatSharedPtr>(nPatches),  // Matrix A
               Array<OneD, DNekMatSharedPtr>(nPatches),  // Matrix B
               Array<OneD, DNekMatSharedPtr>(nPatches),  // Matrix C
               Array<OneD, DNekMatSharedPtr>(nPatches)}; // Matrix D

        // Initialise storage for the matrices. We do this separately
        // for each matrix so the matrices may be independently
        // deallocated when no longer required.
        Array<OneD, NekDouble> storageA(nEntriesA,0.0);
        Array<OneD, NekDouble> storageB(nEntriesB,0.0);
        Array<OneD, NekDouble> storageC(nEntriesC,0.0);
        Array<OneD, NekDouble> storageD(nEntriesD,0.0);

        Array<OneD, NekDouble> tmparray;
        PointerWrapper wType = eWrapper;
        int cntA = 0;
        int cntB = 0;
        int cntC = 0;
        int cntD = 0;

        // Use symmetric storage for invD if possible
        MatrixStorage storageTypeD = eFULL;
        if ((m_linSysKey.GetMatrixType() == StdRegions::eMass)      ||
            (m_linSysKey.GetMatrixType() == StdRegions::eLaplacian) ||
            (m_linSysKey.GetMatrixType() == StdRegions::eHelmholtz))
        {
            storageTypeD = eSYMMETRIC;
        }

        for(i = 0; i < nPatches; i++)
        {
            // Matrix A
            tmparray = storageA+cntA;
            substructuredMat[0][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nBndDofsPerPatch[i],
                                    nBndDofsPerPatch[i],
                                    tmparray, wType);
            // Matrix B
            tmparray = storageB+cntB;
            substructuredMat[1][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nBndDofsPerPatch[i],
                                    nIntDofsPerPatch[i],
                                    tmparray, wType);
            // Matrix C
            tmparray = storageC+cntC;
            substructuredMat[2][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nIntDofsPerPatch[i],
                                    nBndDofsPerPatch[i],
                                    tmparray, wType);
            // Matrix D
            tmparray = storageD+cntD;
            substructuredMat[3][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nIntDofsPerPatch[i],
                                    nIntDofsPerPatch[i],
                                    tmparray, wType,
                                    storageTypeD);

            cntA += nBndDofsPerPatch[i] * nBndDofsPerPatch[i];
            cntB += nBndDofsPerPatch[i] * nIntDofsPerPatch[i];
            cntC += nIntDofsPerPatch[i] * nBndDofsPerPatch[i];
            cntD += nIntDofsPerPatch[i] * nIntDofsPerPatch[i];
        }

        // Then project the Schur complement onto the substructured
        // matrices of the next level
        DNekScalBlkMatSharedPtr SchurCompl = m_schurCompl;
        DNekScalMatSharedPtr schurComplSubMat;
        int schurComplSubMatnRows;
        Array<OneD, const int> patchId, dofId;
        Array<OneD, const unsigned int> isBndDof;
        Array<OneD, const NekDouble> sign;

        for(n = cnt = 0; n < SchurCompl->GetNumberOfBlockRows(); ++n)
        {
            schurComplSubMat      = SchurCompl->GetBlock(n,n);
            schurComplSubMatnRows = schurComplSubMat->GetRows();

            patchId  = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->GetPatchId() + cnt;
            dofId    = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->GetDofId() + cnt;
            isBndDof = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->IsBndDof() + cnt;
            sign     = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->GetSign() + cnt;

            // Set up the matrices
            for(i = 0; i < schurComplSubMatnRows; ++i)
            {
                int pId = patchId[i];
                Array<OneD, NekDouble> subMat0
                    = substructuredMat[0][pId]->GetPtr();
                Array<OneD, NekDouble> subMat1
                    = substructuredMat[1][patchId[i]]->GetPtr();
                Array<OneD, NekDouble> subMat2
                    = substructuredMat[2][patchId[i]]->GetPtr();
                DNekMatSharedPtr subMat3
                    = substructuredMat[3][patchId[i]];
                int subMat0rows = substructuredMat[0][pId]->GetRows();
                int subMat1rows = substructuredMat[1][pId]->GetRows();
                int subMat2rows = substructuredMat[2][pId]->GetRows();

                if(isBndDof[i])
                {
                    for(j = 0; j < schurComplSubMatnRows; ++j)
                    {
                        ASSERTL0(patchId[i]==patchId[j],
                                 "These values should be equal");

                        if(isBndDof[j])
                        {
                            subMat0[dofId[i]+dofId[j]*subMat0rows] +=
                                sign[i]*sign[j]*
                                (*schurComplSubMat)(i,j);
                        }
                        else
                        {
                            subMat1[dofId[i]+dofId[j]*subMat1rows] +=
                                sign[i]*sign[j]*
                                (*schurComplSubMat)(i,j);
                        }
                    }
                }
                else
                {
                    for(j = 0; j < schurComplSubMatnRows; ++j)
                    {
                        ASSERTL0(patchId[i]==patchId[j],
                                 "These values should be equal");

                        if(isBndDof[j])
                        {
                            subMat2[dofId[i]+dofId[j]*subMat2rows] +=
                                sign[i]*sign[j]*
                                (*schurComplSubMat)(i,j);
                        }
                        else
                        {
                            if (storageTypeD == eSYMMETRIC)
                            {
                                if (dofId[i] <= dofId[j])
                                {
                                    (*subMat3)(dofId[i],dofId[j]) +=
                                        sign[i]*sign[j]*
                                        (*schurComplSubMat)(i,j);
                                }
                            }
                            else
                            {
                                (*subMat3)(dofId[i],dofId[j]) +=
                                    sign[i]*sign[j]*
                                    (*schurComplSubMat)(i,j);
                            }
                        }
                    }
                }
            }
            cnt += schurComplSubMatnRows;
        }

        // STEP 2: condense the system
        // This can be done elementally (i.e. patch per patch)
        for(i = 0; i < nPatches; i++)
        {
            if(nIntDofsPerPatch[i])
            {
                Array<OneD, NekDouble> subMat0
                    = substructuredMat[0][i]->GetPtr();
                Array<OneD, NekDouble> subMat1
                    = substructuredMat[1][i]->GetPtr();
                Array<OneD, NekDouble> subMat2
                    = substructuredMat[2][i]->GetPtr();
                int subMat0rows = substructuredMat[0][i]->GetRows();
                int subMat1rows = substructuredMat[1][i]->GetRows();
                int subMat2rows = substructuredMat[2][i]->GetRows();
                int subMat2cols = substructuredMat[2][i]->GetColumns();

                // 1. D -> InvD
                substructuredMat[3][i]->Invert();
                // 2. B -> BInvD
                (*substructuredMat[1][i]) = (*substructuredMat[1][i])*
                    (*substructuredMat[3][i]);
                // 3. A -> A - BInvD*C (= Schur complement)
                // (*substructuredMat[0][i]) = (*substructuredMat[0][i]) -
                //     (*substructuredMat[1][i])*(*substructuredMat[2][i]);
                // Note: faster to use BLAS directly
                Blas::Dgemm('N','N', subMat1rows, subMat2cols,
                            subMat2rows, -1.0, &subMat1[0], subMat1rows,
                            &subMat2[0], subMat2rows, 1.0, &subMat0[0],
                            subMat0rows);
            }
        }

        // STEP 3: fill the block matrices. Note that we first have to
        // convert them to DNekScalMat in order to be compatible with
        // the first level of static condensation.
        const Array<OneD,const unsigned int>& nbdry_size
            = pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
        const Array<OneD,const unsigned int>& nint_size
            = pLocToGloMap->GetNumLocalIntCoeffsPerPatch();
        MatrixStorage blkmatStorage = eDIAGONAL;

        blkMatrices[0] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nbdry_size, nbdry_size, blkmatStorage);
        blkMatrices[1] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nbdry_size, nint_size , blkmatStorage);
        blkMatrices[2] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nint_size , nbdry_size, blkmatStorage);
        blkMatrices[3] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nint_size , nint_size , blkmatStorage);

        DNekScalMatSharedPtr tmpscalmat;
        for(i = 0; i < nPatches; i++)
        {
            for(j = 0; j < 4; j++)
            {
                tmpscalmat = MemoryManager<DNekScalMat>
                    ::AllocateSharedPtr(1.0,substructuredMat[j][i]);
                blkMatrices[j]->SetBlock(i,i,tmpscalmat);
            }
        }
    }

    // We've finished with the Schur complement matrix passed to this
    // level, so return the memory to the system. The Schur complement
    // matrix need only be retained at the last level; the other
    // matrices at this level are saved.
    m_schurCompl.reset();

    m_recursiveSchurCompl = v_Recurse(
        m_linSysKey, m_expList, blkMatrices[0], blkMatrices[1],
        blkMatrices[2], blkMatrices[3], pLocToGloMap);
}
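
Two idioms in the routine above are worth isolating: each per-patch matrix wraps an offset into one contiguous storage array (PointerWrapper eWrapper), so no data is copied, and the Schur update A -> A - BInvD*C is performed in place by a single BLAS call. The following is a minimal sketch of that pattern for one hypothetical patch with two boundary and two interior degrees of freedom; the sizes and names are illustrative, and only calls already used above are assumed.

    // One contiguous buffer per substructured matrix, as in STEP 1.
    const int nb = 2, ni = 2; // illustrative patch sizes
    Array<OneD, NekDouble> storageA(nb*nb, 0.0);
    Array<OneD, NekDouble> storageB(nb*ni, 0.0);
    Array<OneD, NekDouble> storageC(ni*nb, 0.0);

    // Offset arithmetic on an Array yields a view; eWrapper makes the
    // matrix use that memory directly instead of copying it.
    Array<OneD, NekDouble> tmparray = storageA + 0;
    DNekMatSharedPtr A = MemoryManager<DNekMat>
        ::AllocateSharedPtr(nb, nb, tmparray, eWrapper);

    // In-place update A := 1.0*A + (-1.0)*B*C on the raw column-major
    // buffers, the same call shape as in STEP 2 above. Because A wraps
    // storageA, the wrapped matrix sees the result immediately.
    Blas::Dgemm('N', 'N', nb, nb, ni,
                -1.0, storageB.get(), nb,
                storageC.get(), ni,
                1.0, storageA.get(), nb);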
void Nektar::MultiRegions::GlobalLinSysStaticCond::SetupTopLevel ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)
protected

Set up the storage for the Schur complement or the top level of the multi-level Schur complement.

For the first level in multi-level static condensation, or the only level in the case of single-level static condensation, allocate the condensed matrices and populate them with the local matrices retrieved from the expansion list.

Parameters
	pLocToGloMap	Local to global mapping.
Definition at line 299 of file GlobalLinSysStaticCond.cpp.

References Nektar::MemoryManager< DataType >::AllocateSharedPtr(), Nektar::eDIAGONAL, Nektar::StdRegions::eHybridDGHelmBndLam, Nektar::MultiRegions::GlobalMatrixKey::GetMatrixType(), m_BinvD, m_C, Nektar::MultiRegions::GlobalLinSys::m_expList, m_invD, Nektar::MultiRegions::GlobalLinSys::m_linSysKey, m_schurCompl, Nektar::MultiRegions::GlobalLinSys::v_GetBlock(), and Nektar::MultiRegions::GlobalLinSys::v_GetStaticCondBlock().

Referenced by Nektar::MultiRegions::GlobalLinSysPETScStaticCond::v_InitObject(), v_InitObject(), and Nektar::MultiRegions::GlobalLinSysIterativeStaticCond::v_InitObject().

{
    int n;
    int n_exp = m_expList.lock()->GetNumElmts();

    const Array<OneD,const unsigned int>& nbdry_size
        = pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
    const Array<OneD,const unsigned int>& nint_size
        = pLocToGloMap->GetNumLocalIntCoeffsPerPatch();

    // Setup Block Matrix systems
    MatrixStorage blkmatStorage = eDIAGONAL;
    m_schurCompl = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nbdry_size, nbdry_size, blkmatStorage);
    m_BinvD      = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nbdry_size, nint_size , blkmatStorage);
    m_C          = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nint_size , nbdry_size, blkmatStorage);
    m_invD       = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nint_size , nint_size , blkmatStorage);

    for(n = 0; n < n_exp; ++n)
    {
        if (m_linSysKey.GetMatrixType() ==
                StdRegions::eHybridDGHelmBndLam)
        {
            DNekScalMatSharedPtr loc_mat = GetBlock(n);
            m_schurCompl->SetBlock(n,n,loc_mat);
        }
        else
        {
            DNekScalBlkMatSharedPtr loc_schur = GetStaticCondBlock(n);
            DNekScalMatSharedPtr t;
            m_schurCompl->SetBlock(n, n, t = loc_schur->GetBlock(0,0));
            m_BinvD     ->SetBlock(n, n, t = loc_schur->GetBlock(0,1));
            m_C         ->SetBlock(n, n, t = loc_schur->GetBlock(1,0));
            m_invD      ->SetBlock(n, n, t = loc_schur->GetBlock(1,1));
        }
    }
}
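
For the statically condensed case, the element block loc_schur retrieved above is a 2 x 2 block matrix whose sub-blocks, judging by the members they populate, hold the element-local pieces of the condensed system described in the constructor documentation:

\[ \left[ \begin{array}{cc} \boldsymbol{A}-\boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C} & \boldsymbol{B}\boldsymbol{D}^{-1} \\ \boldsymbol{C} & \boldsymbol{D}^{-1} \end{array} \right], \]

so block (0,0) fills m_schurCompl, (0,1) fills m_BinvD, (1,0) fills m_C, and (1,1) fills m_invD.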
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_AssembleSchurComplement ( boost::shared_ptr< AssemblyMap >  pLoctoGloMap)
inline protected virtual
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_BasisInvTransform ( Array< OneD, NekDouble > &  pInOut)
inline protected virtual
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_BasisTransform ( Array< OneD, NekDouble > &  pInOut,
int  offset 
)
inline protected virtual
int Nektar::MultiRegions::GlobalLinSysStaticCond::v_GetNumBlocks ( )
protected virtual

Get the number of blocks in this system.

At the top level this corresponds to the number of elements in the expansion list.

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 287 of file GlobalLinSysStaticCond.cpp.

References m_schurCompl.

{
    return m_schurCompl->GetNumberOfBlockRows();
}
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_Initialise ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)
protected virtual

Initialise this object.

If at the last level of recursion (or the only level in the case of single-level static condensation), assemble the Schur complement. For other levels, in the case of multi-level static condensation, the next level of the condensed system is computed.

Parameters
	pLocToGloMap	Local to global mapping.

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 265 of file GlobalLinSysStaticCond.cpp.

References ConstructNextLevelCondensedSystem(), m_locToGloMap, m_wsp, and v_AssembleSchurComplement().

{
    int nLocalBnd = m_locToGloMap->GetNumLocalBndCoeffs();
    int nGlobal = m_locToGloMap->GetNumGlobalCoeffs();
    int nGlobBndDofs = pLocToGloMap->GetNumGlobalBndCoeffs();
    int nDirBndDofs = pLocToGloMap->GetNumGlobalDirBndCoeffs();
    int nGlobHomBndDofs = nGlobBndDofs - nDirBndDofs;
    m_wsp = Array<OneD, NekDouble>
        (2*nLocalBnd + nGlobal + nGlobHomBndDofs, 0.0);

    if (pLocToGloMap->AtLastLevel())
    {
        v_AssembleSchurComplement(pLocToGloMap);
    }
    else
    {
        ConstructNextLevelCondensedSystem(
            pLocToGloMap->GetNextLevelLocalToGlobalMap());
    }
}
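
Reading the allocation above together with the offsets used in v_Solve below, the single workspace m_wsp of size 2*nLocalBnd + nGlobal + nGlobHomBndDofs is partitioned as follows (a summary of the code, not a separate API):

    // [0, nLocBndDofs)                    V_LocBnd, local boundary vector
    // [nLocBndDofs, 2*nLocBndDofs)        second local-boundary scratch
    //                                     block, used by derived classes
    //                                     (cf. GlobalLinSysIterativeStaticCond
    //                                     ::v_DoMatrixMultiply)
    // [2*nLocBndDofs,
    //  2*nLocBndDofs + nGlobHomBndDofs)   V_GlobHomBndTmp
    // [2*nLocBndDofs + nGlobHomBndDofs,
    //  end)                               F, the modified global forcing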
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_InitObject ( )
protected virtual

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond, and Nektar::MultiRegions::GlobalLinSysPETScStaticCond.

Definition at line 85 of file GlobalLinSysStaticCond.cpp.

References Nektar::MultiRegions::GlobalLinSys::Initialise(), m_locToGloMap, and SetupTopLevel().

{
    // Allocate memory for the top-level structure
    SetupTopLevel(m_locToGloMap);

    // Construct this level
    Initialise(m_locToGloMap);
}
virtual DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::v_PreSolve ( int  scLevel,
NekVector< NekDouble > &  F_GlobBnd 
)
inline protected virtual

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond, and Nektar::MultiRegions::GlobalLinSysPETScStaticCond.

Definition at line 67 of file GlobalLinSysStaticCond.h.

References m_schurCompl.

Referenced by v_Solve().

{
    return m_schurCompl;
}
virtual GlobalLinSysStaticCondSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::v_Recurse ( const GlobalLinSysKey mkey,
const boost::weak_ptr< ExpList > &  pExpList,
const DNekScalBlkMatSharedPtr  pSchurCompl,
const DNekScalBlkMatSharedPtr  pBinvD,
const DNekScalBlkMatSharedPtr  pC,
const DNekScalBlkMatSharedPtr  pInvD,
const boost::shared_ptr< AssemblyMap > &  locToGloMap 
)
protected pure virtual
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_Solve ( const Array< OneD, const NekDouble > &  in,
Array< OneD, NekDouble > &  out,
const AssemblyMapSharedPtr &  pLocToGloMap,
const Array< OneD, const NekDouble > &  dirForcing = NullNekDouble1DArray 
)
protected virtual

Solve the linear system for given input and output vectors using a specified local to global map.

Implements Nektar::MultiRegions::GlobalLinSys.

Definition at line 106 of file GlobalLinSysStaticCond.cpp.

References Nektar::DiagonalBlockFullScalMatrixMultiply(), Nektar::eWrapper, Vmath::Fill(), Nektar::NekVector< DataType >::GetPtr(), m_BinvD, m_C, m_invD, m_recursiveSchurCompl, m_wsp, Nektar::Multiply(), Nektar::MultiRegions::GlobalLinSys::SolveLinearSystem(), Nektar::Subtract(), v_BasisInvTransform(), v_BasisTransform(), v_PreSolve(), Vmath::Vadd(), Vmath::Vcopy(), and Vmath::Vsub().

{
    bool dirForcCalculated = (bool) dirForcing.num_elements();
    bool atLastLevel       = pLocToGloMap->AtLastLevel();
    int  scLevel           = pLocToGloMap->GetStaticCondLevel();

    int nGlobDofs       = pLocToGloMap->GetNumGlobalCoeffs();
    int nGlobBndDofs    = pLocToGloMap->GetNumGlobalBndCoeffs();
    int nDirBndDofs     = pLocToGloMap->GetNumGlobalDirBndCoeffs();
    int nGlobHomBndDofs = nGlobBndDofs - nDirBndDofs;
    int nLocBndDofs     = pLocToGloMap->GetNumLocalBndCoeffs();
    int nIntDofs        = pLocToGloMap->GetNumGlobalCoeffs()
                            - nGlobBndDofs;

    Array<OneD, NekDouble> F = m_wsp + 2*nLocBndDofs + nGlobHomBndDofs;
    Array<OneD, NekDouble> tmp;
    if(nDirBndDofs && dirForcCalculated)
    {
        Vmath::Vsub(nGlobDofs,in.get(),1,dirForcing.get(),1,F.get(),1);
    }
    else
    {
        Vmath::Vcopy(nGlobDofs,in.get(),1,F.get(),1);
    }

    NekVector<NekDouble> F_HomBnd(nGlobHomBndDofs,tmp=F+nDirBndDofs,
                                  eWrapper);
    NekVector<NekDouble> F_GlobBnd(nGlobBndDofs,F,eWrapper);
    NekVector<NekDouble> F_Int(nIntDofs,tmp=F+nGlobBndDofs,eWrapper);

    NekVector<NekDouble> V_GlobBnd(nGlobBndDofs,out,eWrapper);
    NekVector<NekDouble> V_GlobHomBnd(nGlobHomBndDofs,
                                      tmp=out+nDirBndDofs,
                                      eWrapper);
    NekVector<NekDouble> V_Int(nIntDofs,tmp=out+nGlobBndDofs,eWrapper);
    NekVector<NekDouble> V_LocBnd(nLocBndDofs,m_wsp,eWrapper);

    NekVector<NekDouble> V_GlobHomBndTmp(
        nGlobHomBndDofs,tmp = m_wsp + 2*nLocBndDofs,eWrapper);

    // set up normalisation factor for right hand side on first SC level
    DNekScalBlkMatSharedPtr sc = v_PreSolve(scLevel, F_GlobBnd);

    if(nGlobHomBndDofs)
    {
        // construct boundary forcing
        if( nIntDofs && ((!dirForcCalculated) && (atLastLevel)) )
        {
            DNekScalBlkMat &BinvD      = *m_BinvD;
            DNekScalBlkMat &SchurCompl = *sc;

            // include Dirichlet boundary forcing
            pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
            V_LocBnd = BinvD*F_Int + SchurCompl*V_LocBnd;
        }
        else if((!dirForcCalculated) && (atLastLevel))
        {
            // include Dirichlet boundary forcing
            DNekScalBlkMat &SchurCompl = *sc;
            pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
            V_LocBnd = SchurCompl*V_LocBnd;
        }
        else
        {
            DNekScalBlkMat &BinvD = *m_BinvD;
            DiagonalBlockFullScalMatrixMultiply( V_LocBnd, BinvD, F_Int);
        }

        pLocToGloMap->AssembleBnd(V_LocBnd,V_GlobHomBndTmp,
                                  nDirBndDofs);
        Subtract(F_HomBnd, F_HomBnd, V_GlobHomBndTmp);

        // Transform from original basis to low energy
        v_BasisTransform(F, nDirBndDofs);

        // For parallel multi-level static condensation some
        // processors may have different levels to others. This
        // routine receives contributions to partition vertices from
        // those lower levels, whilst not sending anything to the
        // other partitions, and includes them in the modified right
        // hand side vector.
        int lcLevel = pLocToGloMap->GetLowestStaticCondLevel();
        if(atLastLevel && scLevel < lcLevel)
        {
            // If this level is not the lowest level across all
            // processes, we must do dummy communication for the
            // remaining levels
            Array<OneD, NekDouble> tmp(nGlobBndDofs);
            for (int i = scLevel; i < lcLevel; ++i)
            {
                Vmath::Fill(nGlobBndDofs, 0.0, tmp, 1);
                pLocToGloMap->UniversalAssembleBnd(tmp);
                Vmath::Vcopy(nGlobHomBndDofs,
                             tmp.get()+nDirBndDofs, 1,
                             V_GlobHomBndTmp.GetPtr().get(), 1);
                Subtract( F_HomBnd, F_HomBnd, V_GlobHomBndTmp);
            }
        }

        // solve boundary system
        if(atLastLevel)
        {
            Array<OneD, NekDouble> pert(nGlobBndDofs,0.0);

            // Solve for difference from initial solution given inout;
            SolveLinearSystem(
                nGlobBndDofs, F, pert, pLocToGloMap, nDirBndDofs);

            // Transform back to original basis
            v_BasisInvTransform(pert);

            // Add back initial conditions onto difference
            Vmath::Vadd(nGlobHomBndDofs,&out[nDirBndDofs],1,
                        &pert[nDirBndDofs],1,&out[nDirBndDofs],1);
        }
        else
        {
            m_recursiveSchurCompl->Solve(F,
                V_GlobBnd.GetPtr(),
                pLocToGloMap->GetNextLevelLocalToGlobalMap());
        }
    }

    // solve interior system
    if(nIntDofs)
    {
        DNekScalBlkMat &invD = *m_invD;

        if(nGlobHomBndDofs || nDirBndDofs)
        {
            DNekScalBlkMat &C = *m_C;

            if(dirForcCalculated && nDirBndDofs)
            {
                pLocToGloMap->GlobalToLocalBnd(V_GlobHomBnd,V_LocBnd,
                                               nDirBndDofs);
            }
            else
            {
                pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
            }
            F_Int = F_Int - C*V_LocBnd;
        }
        Multiply( V_Int, invD, F_Int);
    }
}
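
In terms of the condensed equations given in the constructor documentation, the branches above first build the modified boundary forcing $\boldsymbol{y_1} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{y_2}$ (the Subtract into F_HomBnd), then solve the Schur complement system for the boundary values, either directly via SolveLinearSystem at the last level or by recursing into m_recursiveSchurCompl->Solve, and finally recover the interior values as

\[ \boldsymbol{x_2} = \boldsymbol{D}^{-1}\left(\boldsymbol{y_2} - \boldsymbol{C}\boldsymbol{x_1}\right), \]

which is the closing Multiply(V_Int, invD, F_Int).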

Member Data Documentation

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_BinvD
protected
DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_C
protected
DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_invD
protected
boost::shared_ptr<AssemblyMap> Nektar::MultiRegions::GlobalLinSysStaticCond::m_locToGloMap
protected
GlobalLinSysStaticCondSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_recursiveSchurCompl
protected

Schur complement for Direct Static Condensation.

Definition at line 105 of file GlobalLinSysStaticCond.h.

Referenced by ConstructNextLevelCondensedSystem(), and v_Solve().

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_schurCompl
protected
Array<OneD, NekDouble> Nektar::MultiRegions::GlobalLinSysStaticCond::m_wsp
protected

Workspace array for matrix multiplication.

Definition at line 117 of file GlobalLinSysStaticCond.h.

Referenced by Nektar::MultiRegions::GlobalLinSysIterativeStaticCond::v_DoMatrixMultiply(), v_Initialise(), and v_Solve().