When processing large files, it is not always convenient to run in parallel; instead it can be preferable to process each parallel partition in serial, for example when interpolating a solution field from one mesh to another or when creating an output file for visualization.
Loading the full file1.xml can be expensive if the file is already large. Instead, you can pre-partition the file using the --part-only option. For example, partitioning the mesh into 10 partitions writes each partition into a directory called file1_xml. If you enter this directory you will find the partitioned XML files P0000000.xml, P0000001.xml, …, P0000009.xml, which can then be processed individually as outlined above.
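As a sketch, assuming the --part-only option takes the number of partitions as its argument and that file1.fld is the accompanying field file (both file names here are illustrative), the pre-partitioning step might look like:

```shell
# Partition file1.xml into 10 pieces; the partitions are written
# to a directory named file1_xml.
FieldConvert --part-only 10 file1.xml file1.fld
```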
There is also a --part-only-overlapping option, which can be run in the same fashion. In this mode the mesh is again partitioned into 10 partitions, but the elements at the partition edges now overlap, so that the intersection of each partition with its neighbours is non-empty. This is sometimes helpful when, for example, producing a smoothed global isocontour. Applying the smoothed isocontour extraction routine with the --part-only option will produce a series of isocontours with gaps between the partitions, as the smoother tends to shrink the isocontour within each partition. Using the --part-only-overlapping option will still yield a shrinking isocontour, but the overlapping partitions help to cover the gaps at the partition boundaries.
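Assuming the same command-line form as --part-only, the overlapping variant might be invoked as:

```shell
# As above, but neighbouring partitions now share a layer of
# elements at their boundaries (file names are illustrative).
FieldConvert --part-only-overlapping 10 file1.xml file1.fld
```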
If you have a partitioned directory, either from a parallel run or generated with the --part-only option, you can now run FieldConvert with the --nparts command-line option.
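A sketch of such an invocation, assuming the file1_xml directory and file1.fld field file from the examples above and a vtu output file:

```shell
# Process the 10 partitions in file1_xml one by one,
# producing a parallel vtu file (file names are illustrative).
FieldConvert --nparts 10 file1_xml:xml file1.fld file1.vtu
```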
Note that the file1_xml:xml argument tells the code that this is a parallel partition which should be treated as an xml-type file. The argument to --nparts should correspond to the number of partitions used in generating the file1_xml directory. This will create a parallel vtu file as it processes each partition.
Another example is to interpolate file1.fld from one mesh, file1.xml, to another, file2.xml. If the mesh files are large we can do this by partitioning file2.xml into 10 (or more) partitions to generate the file2_xml directory and interpolating each partition one by one.
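Assuming the interpolation is performed with FieldConvert's interpfield module and its fromxml/fromfld options, the per-partition interpolation might be sketched as:

```shell
# Interpolate file1.fld (defined on file1.xml) onto each of the
# 10 partitions of file2.xml in turn (file names are illustrative).
FieldConvert --nparts 10 \
    -m interpfield:fromxml=file1.xml:fromfld=file1.fld \
    file2_xml:xml file2.fld
```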
Note that internally the routine uses the range option so that it only has to load the part of file1.xml that overlaps with each partition of file2.xml. The resulting output will lie in a directory called file2.fld, with each of the parallel partitions in files named P0000000.fld, P0000001.fld, …, P0000009.fld. In previous versions of FieldConvert it was necessary to generate an updated Info.xml file, but in the current version this file should be updated automatically.
The examples above process each partition serially, which may take a while when there are many partitions. You can, however, run this option in parallel using a smaller number of cores than the number of partitions. For the example of creating a vtu file above, you can use 4 processors concurrently.
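Assuming an MPI launcher such as mpirun is available, the parallel variant of the vtu example above might look like:

```shell
# Distribute the 10 partitions over 4 MPI ranks
# (file names are illustrative).
mpirun -np 4 FieldConvert --nparts 10 file1_xml:xml file1.fld file1.vtu
```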
Obviously, the executable must have been compiled with the MPI option for this to work.