Nektar++
Nektar::MultiRegions::GlobalLinSysStaticCond Class Reference [abstract]

A global linear system. More...

#include <GlobalLinSysStaticCond.h>

Inheritance diagram for Nektar::MultiRegions::GlobalLinSysStaticCond:
[inheritance graph]

Collaboration diagram for Nektar::MultiRegions::GlobalLinSysStaticCond:
[collaboration graph]

Public Member Functions

 GlobalLinSysStaticCond (const GlobalLinSysKey &mkey, const boost::weak_ptr< ExpList > &pExpList, const boost::shared_ptr< AssemblyMap > &locToGloMap)
 Constructor for full direct matrix solve. More...
 
virtual ~GlobalLinSysStaticCond ()
 
- Public Member Functions inherited from Nektar::MultiRegions::GlobalLinSys
 GlobalLinSys (const GlobalLinSysKey &pKey, const boost::weak_ptr< ExpList > &pExpList, const boost::shared_ptr< AssemblyMap > &pLocToGloMap)
 Constructor for full direct matrix solve. More...
 
virtual ~GlobalLinSys ()
 
const GlobalLinSysKey & GetKey (void) const
 Returns the key associated with the system. More...
 
const boost::weak_ptr< ExpList > & GetLocMat (void) const
 
void InitObject ()
 
void Initialise (const boost::shared_ptr< AssemblyMap > &pLocToGloMap)
 
void Solve (const Array< OneD, const NekDouble > &in, Array< OneD, NekDouble > &out, const AssemblyMapSharedPtr &locToGloMap, const Array< OneD, const NekDouble > &dirForcing=NullNekDouble1DArray)
 Solve the linear system for given input and output vectors using a specified local to global map. More...
 
boost::shared_ptr< GlobalLinSys > GetSharedThisPtr ()
 Returns a shared pointer to the current object. More...
 
int GetNumBlocks ()
 
DNekScalMatSharedPtr GetBlock (unsigned int n)
 
DNekScalBlkMatSharedPtr GetStaticCondBlock (unsigned int n)
 
void DropStaticCondBlock (unsigned int n)
 
void SolveLinearSystem (const int pNumRows, const Array< OneD, const NekDouble > &pInput, Array< OneD, NekDouble > &pOutput, const AssemblyMapSharedPtr &locToGloMap, const int pNumDir=0)
 Solve the linear system for given input and output vectors. More...
 

Protected Member Functions

virtual DNekScalBlkMatSharedPtr v_PreSolve (int scLevel, NekVector< NekDouble > &F_GlobBnd)
 
virtual void v_BasisTransform (Array< OneD, NekDouble > &pInOut, int offset)
 
virtual void v_BasisInvTransform (Array< OneD, NekDouble > &pInOut)
 
virtual void v_AssembleSchurComplement (boost::shared_ptr< AssemblyMap > pLoctoGloMap)
 
virtual int v_GetNumBlocks ()
 Get the number of blocks in this system. More...
 
virtual GlobalLinSysStaticCondSharedPtr v_Recurse (const GlobalLinSysKey &mkey, const boost::weak_ptr< ExpList > &pExpList, const DNekScalBlkMatSharedPtr pSchurCompl, const DNekScalBlkMatSharedPtr pBinvD, const DNekScalBlkMatSharedPtr pC, const DNekScalBlkMatSharedPtr pInvD, const boost::shared_ptr< AssemblyMap > &locToGloMap)=0
 
virtual void v_Solve (const Array< OneD, const NekDouble > &in, Array< OneD, NekDouble > &out, const AssemblyMapSharedPtr &locToGloMap, const Array< OneD, const NekDouble > &dirForcing=NullNekDouble1DArray)
 Solve the linear system for given input and output vectors using a specified local to global map. More...
 
virtual void v_InitObject ()
 
virtual void v_Initialise (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 Initialise this object. More...
 
void SetupTopLevel (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 Set up the storage for the Schur complement or the top level of the multi-level Schur complement. More...
 
void ConstructNextLevelCondensedSystem (const boost::shared_ptr< AssemblyMap > &locToGloMap)
 
- Protected Member Functions inherited from Nektar::MultiRegions::GlobalLinSys
virtual DNekScalMatSharedPtr v_GetBlock (unsigned int n)
 Retrieves the block matrix from the n-th expansion using the matrix key provided by m_linSysKey. More...
 
virtual DNekScalBlkMatSharedPtr v_GetStaticCondBlock (unsigned int n)
 Retrieves the static condensation block matrices from the n-th expansion using the matrix key provided by m_linSysKey. More...
 
virtual void v_DropStaticCondBlock (unsigned int n)
 Releases the static condensation block matrices of the n-th expansion from the NekManager, using the matrix key provided by m_linSysKey. More...
 
PreconditionerSharedPtr CreatePrecon (AssemblyMapSharedPtr asmMap)
 Create a preconditioner object from the parameters defined in the supplied assembly map. More...
 

Protected Attributes

GlobalLinSysStaticCondSharedPtr m_recursiveSchurCompl
 Schur complement for Direct Static Condensation. More...
 
DNekScalBlkMatSharedPtr m_schurCompl
 Block Schur complement matrix. More...
 
DNekScalBlkMatSharedPtr m_BinvD
 Block $ BD^{-1} $ matrix. More...
 
DNekScalBlkMatSharedPtr m_C
 Block $ C $ matrix. More...
 
DNekScalBlkMatSharedPtr m_invD
 Block $ D^{-1} $ matrix. More...
 
boost::shared_ptr< AssemblyMap > m_locToGloMap
 Local to global map. More...
 
Array< OneD, NekDouble > m_wsp
 Workspace array for matrix multiplication. More...
 
- Protected Attributes inherited from Nektar::MultiRegions::GlobalLinSys
const GlobalLinSysKey m_linSysKey
 Key associated with this linear system. More...
 
const boost::weak_ptr< ExpList > m_expList
 Local Matrix System. More...
 
const std::map< int, RobinBCInfoSharedPtr > m_robinBCInfo
 Robin boundary info. More...
 
bool m_verbose
 

Detailed Description

A global linear system.

Solves a linear system using single- or multi-level static condensation.

Definition at line 55 of file GlobalLinSysStaticCond.h.
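
The two modes differ only in how often the condensation step is applied: single-level condensation eliminates element-interior degrees of freedom once, while multi-level condensation re-partitions each resulting boundary system into patches and condenses again. As an illustrative summary (standard block-elimination notation, assumed here rather than quoted from the source):

\[ S^{(\ell)} = \left[ \begin{array}{cc} \boldsymbol{A}^{(\ell)} & \boldsymbol{B}^{(\ell)}\\ \boldsymbol{C}^{(\ell)} & \boldsymbol{D}^{(\ell)} \end{array} \right], \qquad S^{(\ell+1)} = \boldsymbol{A}^{(\ell)} - \boldsymbol{B}^{(\ell)} \left(\boldsymbol{D}^{(\ell)}\right)^{-1} \boldsymbol{C}^{(\ell)}, \]

where the top-level system \( S^{(0)} \) is assembled by SetupTopLevel() and each further level is built by ConstructNextLevelCondensedSystem().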

Constructor & Destructor Documentation

Nektar::MultiRegions::GlobalLinSysStaticCond::GlobalLinSysStaticCond ( const GlobalLinSysKey &  pKey,
const boost::weak_ptr< ExpList > &  pExpList,
const boost::shared_ptr< AssemblyMap > &  pLocToGloMap 
)

Constructor for full direct matrix solve.

For a matrix system of the form

\[ \left[ \begin{array}{cc} \boldsymbol{A} & \boldsymbol{B}\\ \boldsymbol{C} & \boldsymbol{D} \end{array} \right] \left[ \begin{array}{c} \boldsymbol{x_1}\\ \boldsymbol{x_2} \end{array}\right] = \left[ \begin{array}{c} \boldsymbol{y_1}\\ \boldsymbol{y_2} \end{array}\right], \]

where $\boldsymbol{D}$ and $(\boldsymbol{A-BD^{-1}C})$ are invertible, store and assemble a static condensation system, according to a given local to global mapping. #m_linSys is constructed by AssembleSchurComplement().
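
Eliminating $\boldsymbol{x_2}$ from the second block row yields the condensed system actually solved; this is standard block elimination, stated here for orientation rather than copied from the implementation:

\[ \left(\boldsymbol{A} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}\right)\boldsymbol{x_1} = \boldsymbol{y_1} - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{y_2}, \qquad \boldsymbol{x_2} = \boldsymbol{D}^{-1}\left(\boldsymbol{y_2} - \boldsymbol{C}\boldsymbol{x_1}\right). \]

The operators $\boldsymbol{B}\boldsymbol{D}^{-1}$, $\boldsymbol{C}$ and $\boldsymbol{D}^{-1}$ are precisely the block matrices stored in #m_BinvD, #m_C and #m_invD.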

Parameters
    mKey         Associated matrix key.
    pLocMatSys   Local matrix system.
    locToGloMap  Local to global mapping.

Definition at line 76 of file GlobalLinSysStaticCond.cpp.

GlobalLinSysStaticCond::GlobalLinSysStaticCond(
    const GlobalLinSysKey                &pKey,
    const boost::weak_ptr<ExpList>       &pExpList,
    const boost::shared_ptr<AssemblyMap> &pLocToGloMap)
    : GlobalLinSys(pKey, pExpList, pLocToGloMap),
      m_locToGloMap(pLocToGloMap)
{
}
Nektar::MultiRegions::GlobalLinSysStaticCond::~GlobalLinSysStaticCond ()   [virtual]

Definition at line 97 of file GlobalLinSysStaticCond.cpp.

GlobalLinSysStaticCond::~GlobalLinSysStaticCond()
{
}

Member Function Documentation

void Nektar::MultiRegions::GlobalLinSysStaticCond::ConstructNextLevelCondensedSystem ( const boost::shared_ptr< AssemblyMap > &  locToGloMap)   [protected]

Definition at line 346 of file GlobalLinSysStaticCond.cpp.

References Nektar::MemoryManager< DataType >::AllocateSharedPtr(), ASSERTL0, Nektar::eDIAGONAL, Nektar::eWrapper, Nektar::MultiRegions::GlobalLinSys::m_expList, Nektar::MultiRegions::GlobalLinSys::m_linSysKey, m_recursiveSchurCompl, m_schurCompl, sign, and v_Recurse().

Referenced by v_Initialise().

void GlobalLinSysStaticCond::ConstructNextLevelCondensedSystem(
    const boost::shared_ptr<AssemblyMap> &pLocToGloMap)
{
    int i,j,n,cnt;
    DNekScalBlkMatSharedPtr blkMatrices[4];

    // Create temporary matrices within an inner-local scope to ensure
    // any references to the intermediate storage are lost before
    // the recursive step, rather than at the end of the routine.
    // This allows the schur complement matrix from this level to be
    // disposed of in the next level after use without being retained
    // due to lingering shared pointers.
    {
        const Array<OneD,const unsigned int>& nBndDofsPerPatch
            = pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
        const Array<OneD,const unsigned int>& nIntDofsPerPatch
            = pLocToGloMap->GetNumLocalIntCoeffsPerPatch();

        // STEP 1:
        // Based upon the schur complement of the current level we
        // will substructure this matrix in the form
        //      --     --
        //      | A   B |
        //      | C   D |
        //      --     --
        // All matrices A,B,C and D are (diagonal) blockmatrices.
        // However, as a start we will use an array of DNekMatrices as
        // it is too hard to change the individual entries of a
        // DNekScalBlkMatSharedPtr.

        // In addition, we will also try to ensure that the memory of
        // the blockmatrices will be contiguous. This will probably
        // enhance the efficiency
        // - Calculate the total number of entries in the blockmatrices
        int nPatches  = pLocToGloMap->GetNumPatches();
        int nEntriesA = 0; int nEntriesB = 0;
        int nEntriesC = 0; int nEntriesD = 0;

        for(i = 0; i < nPatches; i++)
        {
            nEntriesA += nBndDofsPerPatch[i]*nBndDofsPerPatch[i];
            nEntriesB += nBndDofsPerPatch[i]*nIntDofsPerPatch[i];
            nEntriesC += nIntDofsPerPatch[i]*nBndDofsPerPatch[i];
            nEntriesD += nIntDofsPerPatch[i]*nIntDofsPerPatch[i];
        }

        // Now create the DNekMatrices and link them to the memory
        // allocated above
        Array<OneD, DNekMatSharedPtr> substructuredMat[4]
            = {Array<OneD, DNekMatSharedPtr>(nPatches),  // Matrix A
               Array<OneD, DNekMatSharedPtr>(nPatches),  // Matrix B
               Array<OneD, DNekMatSharedPtr>(nPatches),  // Matrix C
               Array<OneD, DNekMatSharedPtr>(nPatches)}; // Matrix D

        // Initialise storage for the matrices. We do this separately
        // for each matrix so the matrices may be independently
        // deallocated when no longer required.
        Array<OneD, NekDouble> storageA(nEntriesA,0.0);
        Array<OneD, NekDouble> storageB(nEntriesB,0.0);
        Array<OneD, NekDouble> storageC(nEntriesC,0.0);
        Array<OneD, NekDouble> storageD(nEntriesD,0.0);

        Array<OneD, NekDouble> tmparray;
        PointerWrapper wType = eWrapper;
        int cntA = 0;
        int cntB = 0;
        int cntC = 0;
        int cntD = 0;

        for(i = 0; i < nPatches; i++)
        {
            // Matrix A
            tmparray = storageA+cntA;
            substructuredMat[0][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nBndDofsPerPatch[i],
                                    nBndDofsPerPatch[i],
                                    tmparray, wType);
            // Matrix B
            tmparray = storageB+cntB;
            substructuredMat[1][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nBndDofsPerPatch[i],
                                    nIntDofsPerPatch[i],
                                    tmparray, wType);
            // Matrix C
            tmparray = storageC+cntC;
            substructuredMat[2][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nIntDofsPerPatch[i],
                                    nBndDofsPerPatch[i],
                                    tmparray, wType);
            // Matrix D
            tmparray = storageD+cntD;
            substructuredMat[3][i] = MemoryManager<DNekMat>
                ::AllocateSharedPtr(nIntDofsPerPatch[i],
                                    nIntDofsPerPatch[i],
                                    tmparray, wType);

            cntA += nBndDofsPerPatch[i] * nBndDofsPerPatch[i];
            cntB += nBndDofsPerPatch[i] * nIntDofsPerPatch[i];
            cntC += nIntDofsPerPatch[i] * nBndDofsPerPatch[i];
            cntD += nIntDofsPerPatch[i] * nIntDofsPerPatch[i];
        }

        // Then, project SchurComplement onto
        // the substructured matrices of the next level
        DNekScalBlkMatSharedPtr SchurCompl = m_schurCompl;
        DNekScalMatSharedPtr    schurComplSubMat;
        int                     schurComplSubMatnRows;
        Array<OneD, const int>          patchId, dofId;
        Array<OneD, const unsigned int> isBndDof;
        Array<OneD, const NekDouble>    sign;
        NekDouble scale;

        for(n = cnt = 0; n < SchurCompl->GetNumberOfBlockRows(); ++n)
        {
            schurComplSubMat      = SchurCompl->GetBlock(n,n);
            schurComplSubMatnRows = schurComplSubMat->GetRows();

            scale = SchurCompl->GetBlock(n,n)->Scale();
            Array<OneD, NekDouble> schurSubMat
                = SchurCompl->GetBlock(n,n)->GetOwnedMatrix()->GetPtr();

            patchId  = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->GetPatchId() + cnt;
            dofId    = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->GetDofId() + cnt;
            isBndDof = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->IsBndDof() + cnt;
            sign     = pLocToGloMap->GetPatchMapFromPrevLevel()
                                        ->GetSign() + cnt;

            // Set up Matrix;
            for(i = 0; i < schurComplSubMatnRows; ++i)
            {
                int pId = patchId[i];
                Array<OneD, NekDouble> subMat0
                    = substructuredMat[0][pId]->GetPtr();
                Array<OneD, NekDouble> subMat1
                    = substructuredMat[1][patchId[i]]->GetPtr();
                Array<OneD, NekDouble> subMat2
                    = substructuredMat[2][patchId[i]]->GetPtr();
                Array<OneD, NekDouble> subMat3
                    = substructuredMat[3][patchId[i]]->GetPtr();
                int subMat0rows = substructuredMat[0][pId]->GetRows();
                int subMat1rows = substructuredMat[1][pId]->GetRows();
                int subMat2rows = substructuredMat[2][pId]->GetRows();
                int subMat3rows = substructuredMat[3][pId]->GetRows();

                if(isBndDof[i])
                {
                    for(j = 0; j < schurComplSubMatnRows; ++j)
                    {
                        ASSERTL0(patchId[i]==patchId[j],
                                 "These values should be equal");

                        if(isBndDof[j])
                        {
                            subMat0[dofId[i]+dofId[j]*subMat0rows] +=
                                sign[i]*sign[j]*(
                                    scale*schurSubMat[
                                        i+j*schurComplSubMatnRows]);
                        }
                        else
                        {
                            subMat1[dofId[i]+dofId[j]*subMat1rows] +=
                                sign[i]*sign[j]*(
                                    scale*schurSubMat[
                                        i+j*schurComplSubMatnRows]);
                        }
                    }
                }
                else
                {
                    for(j = 0; j < schurComplSubMatnRows; ++j)
                    {
                        ASSERTL0(patchId[i]==patchId[j],
                                 "These values should be equal");

                        if(isBndDof[j])
                        {
                            subMat2[dofId[i]+dofId[j]*subMat2rows] +=
                                sign[i]*sign[j]*(
                                    scale*schurSubMat[
                                        i+j*schurComplSubMatnRows]);
                        }
                        else
                        {
                            subMat3[dofId[i]+dofId[j]*subMat3rows] +=
                                sign[i]*sign[j]*(
                                    scale*schurSubMat[
                                        i+j*schurComplSubMatnRows]);
                        }
                    }
                }
            }
            cnt += schurComplSubMatnRows;
        }

        // STEP 2: condense the system
        // This can be done elementally (i.e. patch per patch)
        for(i = 0; i < nPatches; i++)
        {
            if(nIntDofsPerPatch[i])
            {
                Array<OneD, NekDouble> subMat0
                    = substructuredMat[0][i]->GetPtr();
                Array<OneD, NekDouble> subMat1
                    = substructuredMat[1][i]->GetPtr();
                Array<OneD, NekDouble> subMat2
                    = substructuredMat[2][i]->GetPtr();
                int subMat0rows = substructuredMat[0][i]->GetRows();
                int subMat1rows = substructuredMat[1][i]->GetRows();
                int subMat2rows = substructuredMat[2][i]->GetRows();
                int subMat2cols = substructuredMat[2][i]->GetColumns();

                // 1. D -> InvD
                substructuredMat[3][i]->Invert();
                // 2. B -> BInvD
                (*substructuredMat[1][i]) = (*substructuredMat[1][i])*
                    (*substructuredMat[3][i]);
                // 3. A -> A - BInvD*C (= schurcomplement)
                // (*substructuredMat[0][i]) =(*substructuredMat[0][i])-
                // (*substructuredMat[1][i])*(*substructuredMat[2][i]);
                // Note: faster to use blas directly
                Blas::Dgemm('N','N', subMat1rows, subMat2cols,
                            subMat2rows, -1.0, &subMat1[0], subMat1rows,
                            &subMat2[0], subMat2rows, 1.0, &subMat0[0],
                            subMat0rows);
            }
        }

        // STEP 3: fill the block matrices. However, do note that we
        // first have to convert them to a DNekScalMat in order to be
        // compatible with the first level of static condensation
        const Array<OneD,const unsigned int>& nbdry_size
            = pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
        const Array<OneD,const unsigned int>& nint_size
            = pLocToGloMap->GetNumLocalIntCoeffsPerPatch();
        MatrixStorage blkmatStorage = eDIAGONAL;

        blkMatrices[0] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nbdry_size, nbdry_size, blkmatStorage);
        blkMatrices[1] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nbdry_size, nint_size , blkmatStorage);
        blkMatrices[2] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nint_size , nbdry_size, blkmatStorage);
        blkMatrices[3] = MemoryManager<DNekScalBlkMat>
            ::AllocateSharedPtr(nint_size , nint_size , blkmatStorage);

        DNekScalMatSharedPtr tmpscalmat;
        for(i = 0; i < nPatches; i++)
        {
            for(j = 0; j < 4; j++)
            {
                tmpscalmat = MemoryManager<DNekScalMat>
                    ::AllocateSharedPtr(1.0,substructuredMat[j][i]);
                blkMatrices[j]->SetBlock(i,i,tmpscalmat);
            }
        }
    }

    // We've finished with the Schur complement matrix passed to this
    // level, so return the memory to the system. The Schur complement
    // matrix need only be retained at the last level. Save the other
    // matrices at this level though.
    m_schurCompl.reset();

    m_recursiveSchurCompl = v_Recurse(
        m_linSysKey, m_expList, blkMatrices[0], blkMatrices[1],
        blkMatrices[2], blkMatrices[3], pLocToGloMap);
}
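For reference, the per-patch condensation in STEP 2 above corresponds to the in-place updates (using the A, B, C, D substructuring named in the comments; this restates the code, it is not an alternative algorithm):

\[ \boldsymbol{D}_i \leftarrow \boldsymbol{D}_i^{-1}, \qquad \boldsymbol{B}_i \leftarrow \boldsymbol{B}_i\boldsymbol{D}_i^{-1}, \qquad \boldsymbol{A}_i \leftarrow \boldsymbol{A}_i - \left(\boldsymbol{B}_i\boldsymbol{D}_i^{-1}\right)\boldsymbol{C}_i, \]

with the final update carried out by the direct Blas::Dgemm call ($\alpha = -1$, $\beta = 1$) for efficiency.
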
void Nektar::MultiRegions::GlobalLinSysStaticCond::SetupTopLevel ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)   [protected]

Set up the storage for the Schur complement or the top level of the multi-level Schur complement.

For the first level in multi-level static condensation, or the only level in the case of single-level static condensation, allocate the condensed matrices and populate them with the local matrices retrieved from the expansion list.

Parameters
    pLocToGloMap  Local to global mapping.

Definition at line 297 of file GlobalLinSysStaticCond.cpp.

References Nektar::MemoryManager< DataType >::AllocateSharedPtr(), Nektar::eDIAGONAL, Nektar::StdRegions::eHybridDGHelmBndLam, Nektar::MultiRegions::GlobalMatrixKey::GetMatrixType(), m_BinvD, m_C, Nektar::MultiRegions::GlobalLinSys::m_expList, m_invD, Nektar::MultiRegions::GlobalLinSys::m_linSysKey, m_schurCompl, Nektar::MultiRegions::GlobalLinSys::v_GetBlock(), and Nektar::MultiRegions::GlobalLinSys::v_GetStaticCondBlock().

Referenced by Nektar::MultiRegions::GlobalLinSysPETScStaticCond::v_InitObject(), v_InitObject(), and Nektar::MultiRegions::GlobalLinSysIterativeStaticCond::v_InitObject().

void GlobalLinSysStaticCond::SetupTopLevel(
    const boost::shared_ptr<AssemblyMap> &pLocToGloMap)
{
    int n;
    int n_exp = m_expList.lock()->GetNumElmts();

    const Array<OneD,const unsigned int>& nbdry_size
        = pLocToGloMap->GetNumLocalBndCoeffsPerPatch();
    const Array<OneD,const unsigned int>& nint_size
        = pLocToGloMap->GetNumLocalIntCoeffsPerPatch();

    // Setup Block Matrix systems
    MatrixStorage blkmatStorage = eDIAGONAL;
    m_schurCompl = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nbdry_size, nbdry_size, blkmatStorage);
    m_BinvD      = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nbdry_size, nint_size , blkmatStorage);
    m_C          = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nint_size , nbdry_size, blkmatStorage);
    m_invD       = MemoryManager<DNekScalBlkMat>
        ::AllocateSharedPtr(nint_size , nint_size , blkmatStorage);

    for(n = 0; n < n_exp; ++n)
    {
        if (m_linSysKey.GetMatrixType() ==
                StdRegions::eHybridDGHelmBndLam)
        {
            DNekScalMatSharedPtr loc_mat
                = GlobalLinSys::v_GetBlock(
                    m_expList.lock()->GetOffset_Elmt_Id(n));
            m_schurCompl->SetBlock(n,n,loc_mat);
        }
        else
        {
            DNekScalBlkMatSharedPtr loc_schur
                = GlobalLinSys::v_GetStaticCondBlock(
                    m_expList.lock()->GetOffset_Elmt_Id(n));
            DNekScalMatSharedPtr t;
            m_schurCompl->SetBlock(n, n, t = loc_schur->GetBlock(0,0));
            m_BinvD     ->SetBlock(n, n, t = loc_schur->GetBlock(0,1));
            m_C         ->SetBlock(n, n, t = loc_schur->GetBlock(1,0));
            m_invD      ->SetBlock(n, n, t = loc_schur->GetBlock(1,1));
        }
    }
}
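In the generic (non eHybridDGHelmBndLam) branch, the $2\times 2$ block matrix returned by v_GetStaticCondBlock() supplies the four stored operators directly. Judging from the member documentation (this correspondence is inferred, not spelled out in the source comments), the layout is:

\[ \left[ \begin{array}{cc} \boldsymbol{A}-\boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C} & \boldsymbol{B}\boldsymbol{D}^{-1}\\ \boldsymbol{C} & \boldsymbol{D}^{-1} \end{array} \right] \longrightarrow \left[ \begin{array}{cc} \text{m\_schurCompl} & \text{m\_BinvD}\\ \text{m\_C} & \text{m\_invD} \end{array} \right], \]

matching the GetBlock(0,0) through GetBlock(1,1) assignments in the loop above.
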
virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_AssembleSchurComplement ( boost::shared_ptr< AssemblyMap >  pLoctoGloMap)   [inline, protected, virtual]

virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_BasisInvTransform ( Array< OneD, NekDouble > &  pInOut)   [inline, protected, virtual]

virtual void Nektar::MultiRegions::GlobalLinSysStaticCond::v_BasisTransform ( Array< OneD, NekDouble > &  pInOut,
int  offset 
)   [inline, protected, virtual]
int Nektar::MultiRegions::GlobalLinSysStaticCond::v_GetNumBlocks ()   [protected, virtual]

Get the number of blocks in this system.

At the top level this corresponds to the number of elements in the expansion list.

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 285 of file GlobalLinSysStaticCond.cpp.

References m_schurCompl.

int GlobalLinSysStaticCond::v_GetNumBlocks()
{
    return m_schurCompl->GetNumberOfBlockRows();
}
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_Initialise ( const boost::shared_ptr< AssemblyMap > &  pLocToGloMap)   [protected, virtual]

Initialise this object.

If at the last level of recursion (or the only level in the case of single-level static condensation), assemble the Schur complement. For other levels, in the case of multi-level static condensation, the next level of the condensed system is computed.

Parameters
    pLocToGloMap  Local to global mapping.

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Definition at line 267 of file GlobalLinSysStaticCond.cpp.

References ConstructNextLevelCondensedSystem(), m_locToGloMap, m_wsp, and v_AssembleSchurComplement().

void GlobalLinSysStaticCond::v_Initialise(
    const boost::shared_ptr<AssemblyMap> &pLocToGloMap)
{
    int nLocalBnd = m_locToGloMap->GetNumLocalBndCoeffs();
    int nGlobal   = m_locToGloMap->GetNumGlobalCoeffs();
    m_wsp = Array<OneD, NekDouble>(2*nLocalBnd + nGlobal, 0.0);

    if (pLocToGloMap->AtLastLevel())
    {
        v_AssembleSchurComplement(pLocToGloMap);
    }
    else
    {
        ConstructNextLevelCondensedSystem(
            pLocToGloMap->GetNextLevelLocalToGlobalMap());
    }
}
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_InitObject ()   [protected, virtual]

Reimplemented from Nektar::MultiRegions::GlobalLinSys.

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond, and Nektar::MultiRegions::GlobalLinSysPETScStaticCond.

Definition at line 85 of file GlobalLinSysStaticCond.cpp.

References Nektar::MultiRegions::GlobalLinSys::Initialise(), m_locToGloMap, and SetupTopLevel().

void GlobalLinSysStaticCond::v_InitObject()
{
    // Allocate memory for top-level structure
    SetupTopLevel(m_locToGloMap);

    // Construct this level
    Initialise(m_locToGloMap);
}
virtual DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::v_PreSolve ( int  scLevel,
NekVector< NekDouble > &  F_GlobBnd 
)   [inline, protected, virtual]

Reimplemented in Nektar::MultiRegions::GlobalLinSysIterativeStaticCond, and Nektar::MultiRegions::GlobalLinSysPETScStaticCond.

Definition at line 67 of file GlobalLinSysStaticCond.h.

References m_schurCompl.

Referenced by v_Solve().

{
    return m_schurCompl;
}
virtual GlobalLinSysStaticCondSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::v_Recurse ( const GlobalLinSysKey &  mkey,
const boost::weak_ptr< ExpList > &  pExpList,
const DNekScalBlkMatSharedPtr  pSchurCompl,
const DNekScalBlkMatSharedPtr  pBinvD,
const DNekScalBlkMatSharedPtr  pC,
const DNekScalBlkMatSharedPtr  pInvD,
const boost::shared_ptr< AssemblyMap > &  locToGloMap 
)   [protected, pure virtual]
void Nektar::MultiRegions::GlobalLinSysStaticCond::v_Solve ( const Array< OneD, const NekDouble > &  in,
Array< OneD, NekDouble > &  out,
const AssemblyMapSharedPtr &  pLocToGloMap,
const Array< OneD, const NekDouble > &  dirForcing = NullNekDouble1DArray 
)   [protected, virtual]

Solve the linear system for given input and output vectors using a specified local to global map.

Implements Nektar::MultiRegions::GlobalLinSys.

Definition at line 106 of file GlobalLinSysStaticCond.cpp.

References Nektar::eWrapper, Vmath::Fill(), Nektar::NekVector< DataType >::GetPtr(), m_BinvD, m_C, m_invD, m_recursiveSchurCompl, m_wsp, Nektar::MultiRegions::GlobalLinSys::SolveLinearSystem(), v_BasisInvTransform(), v_BasisTransform(), v_PreSolve(), Vmath::Vadd(), Vmath::Vcopy(), and Vmath::Vsub().

void GlobalLinSysStaticCond::v_Solve(
    const Array<OneD, const NekDouble> &in,
          Array<OneD,       NekDouble> &out,
    const AssemblyMapSharedPtr         &pLocToGloMap,
    const Array<OneD, const NekDouble> &dirForcing)
{
    bool dirForcCalculated = (bool) dirForcing.num_elements();
    bool atLastLevel       = pLocToGloMap->AtLastLevel();
    int  scLevel           = pLocToGloMap->GetStaticCondLevel();

    int nGlobDofs       = pLocToGloMap->GetNumGlobalCoeffs();
    int nGlobBndDofs    = pLocToGloMap->GetNumGlobalBndCoeffs();
    int nDirBndDofs     = pLocToGloMap->GetNumGlobalDirBndCoeffs();
    int nGlobHomBndDofs = nGlobBndDofs - nDirBndDofs;
    int nLocBndDofs     = pLocToGloMap->GetNumLocalBndCoeffs();
    int nIntDofs        = pLocToGloMap->GetNumGlobalCoeffs()
                                                    - nGlobBndDofs;

    Array<OneD, NekDouble> F = m_wsp + 2*nLocBndDofs;
    Array<OneD, NekDouble> tmp;
    if(nDirBndDofs && dirForcCalculated)
    {
        Vmath::Vsub(nGlobDofs,in.get(),1,dirForcing.get(),1,F.get(),1);
    }
    else
    {
        Vmath::Vcopy(nGlobDofs,in.get(),1,F.get(),1);
    }

    NekVector<NekDouble> F_HomBnd(nGlobHomBndDofs,tmp=F+nDirBndDofs,
                                  eWrapper);
    NekVector<NekDouble> F_GlobBnd(nGlobBndDofs,F,eWrapper);
    NekVector<NekDouble> F_LocBnd(nLocBndDofs,0.0);
    NekVector<NekDouble> F_Int(nIntDofs,tmp=F+nGlobBndDofs,eWrapper);

    NekVector<NekDouble> V_GlobBnd(nGlobBndDofs,out,eWrapper);
    NekVector<NekDouble> V_GlobHomBnd(nGlobHomBndDofs,
                                      tmp=out+nDirBndDofs,
                                      eWrapper);
    NekVector<NekDouble> V_Int(nIntDofs,tmp=out+nGlobBndDofs,eWrapper);
    NekVector<NekDouble> V_LocBnd(nLocBndDofs,m_wsp,eWrapper);

    NekVector<NekDouble> V_GlobHomBndTmp(nGlobHomBndDofs,0.0);

    // set up normalisation factor for right hand side on first SC level
    DNekScalBlkMatSharedPtr sc = v_PreSolve(scLevel, F_GlobBnd);

    if(nGlobHomBndDofs)
    {
        // construct boundary forcing
        if( nIntDofs && ((!dirForcCalculated) && (atLastLevel)) )
        {
            DNekScalBlkMat &BinvD      = *m_BinvD;
            DNekScalBlkMat &SchurCompl = *sc;

            // include dirichlet boundary forcing
            pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
            V_LocBnd = BinvD*F_Int + SchurCompl*V_LocBnd;
        }
        else if((!dirForcCalculated) && (atLastLevel))
        {
            // include dirichlet boundary forcing
            DNekScalBlkMat &SchurCompl = *sc;
            pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
            V_LocBnd = SchurCompl*V_LocBnd;
        }
        else
        {
            DNekScalBlkMat &BinvD = *m_BinvD;
            V_LocBnd = BinvD*F_Int;
        }

        pLocToGloMap->AssembleBnd(V_LocBnd,V_GlobHomBndTmp,
                                  nDirBndDofs);
        F_HomBnd = F_HomBnd - V_GlobHomBndTmp;

        // Transform from original basis to low energy
        v_BasisTransform(F, nDirBndDofs);

        // For parallel multi-level static condensation some
        // processors may have different levels to others. This
        // routine receives contributions to partition vertices from
        // those lower levels, whilst not sending anything to the
        // other partitions, and includes them in the modified right
        // hand side vector.
        int lcLevel = pLocToGloMap->GetLowestStaticCondLevel();
        if(atLastLevel && scLevel < lcLevel)
        {
            // If this level is not the lowest level across all
            // processes, we must do dummy communication for the
            // remaining levels
            Array<OneD, NekDouble> tmp(nGlobBndDofs);
            for (int i = scLevel; i < lcLevel; ++i)
            {
                Vmath::Fill(nGlobBndDofs, 0.0, tmp, 1);
                pLocToGloMap->UniversalAssembleBnd(tmp);
                Vmath::Vcopy(nGlobHomBndDofs,
                             tmp.get()+nDirBndDofs,          1,
                             V_GlobHomBndTmp.GetPtr().get(), 1);
                F_HomBnd = F_HomBnd - V_GlobHomBndTmp;
            }
        }

        // solve boundary system
        if(atLastLevel)
        {
            Array<OneD, NekDouble> pert(nGlobBndDofs,0.0);
            NekVector<NekDouble>   Pert(nGlobBndDofs,pert,eWrapper);

            // Solve for difference from initial solution given inout;
            SolveLinearSystem(
                nGlobBndDofs, F, pert, pLocToGloMap, nDirBndDofs);

            // Transform back to original basis
            v_BasisInvTransform(pert);

            // Add back initial conditions onto difference
            Vmath::Vadd(nGlobHomBndDofs,&out[nDirBndDofs],1,
                        &pert[nDirBndDofs],1,&out[nDirBndDofs],1);
        }
        else
        {
            m_recursiveSchurCompl->Solve(F,
                        V_GlobBnd.GetPtr(),
                        pLocToGloMap->GetNextLevelLocalToGlobalMap());
        }
    }

    // solve interior system
    if(nIntDofs)
    {
        DNekScalBlkMat &invD = *m_invD;

        if(nGlobHomBndDofs || nDirBndDofs)
        {
            DNekScalBlkMat &C = *m_C;

            if(dirForcCalculated && nDirBndDofs)
            {
                pLocToGloMap->GlobalToLocalBnd(V_GlobHomBnd,V_LocBnd,
                                               nDirBndDofs);
            }
            else
            {
                pLocToGloMap->GlobalToLocalBnd(V_GlobBnd,V_LocBnd);
            }
            F_Int = F_Int - C*V_LocBnd;
        }

        V_Int = invD*F_Int;
    }
}
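In exact arithmetic, and setting aside the basis transformation, Dirichlet lifting and parallel communication steps, the routine above realises the two-stage condensed solve (a summary of the code path, not a separate algorithm):

\[ \left(\boldsymbol{A}-\boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}\right)\boldsymbol{x}_b = \boldsymbol{f}_b - \boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{f}_i, \qquad \boldsymbol{x}_i = \boldsymbol{D}^{-1}\left(\boldsymbol{f}_i - \boldsymbol{C}\boldsymbol{x}_b\right), \]

where $b$ and $i$ denote boundary and interior degrees of freedom. At the last level the boundary system is handed to SolveLinearSystem(); on intermediate levels it is delegated recursively to m_recursiveSchurCompl->Solve().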

Member Data Documentation

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_BinvD   [protected]

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_C   [protected]

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_invD   [protected]

boost::shared_ptr<AssemblyMap> Nektar::MultiRegions::GlobalLinSysStaticCond::m_locToGloMap   [protected]

GlobalLinSysStaticCondSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_recursiveSchurCompl   [protected]

Schur complement for Direct Static Condensation.

Definition at line 105 of file GlobalLinSysStaticCond.h.

Referenced by ConstructNextLevelCondensedSystem(), and v_Solve().

DNekScalBlkMatSharedPtr Nektar::MultiRegions::GlobalLinSysStaticCond::m_schurCompl   [protected]
Array<OneD, NekDouble> Nektar::MultiRegions::GlobalLinSysStaticCond::m_wsp   [protected]

Workspace array for matrix multiplication.

Definition at line 117 of file GlobalLinSysStaticCond.h.

Referenced by Nektar::MultiRegions::GlobalLinSysIterativeStaticCond::v_DoMatrixMultiply(), v_Initialise(), and v_Solve().