DISPATCH
Module handling convenient AMR I/O.
Data Types
  type                    :: amr_io_t

Variables
  integer                 :: verbose = 0
  type(amr_io_t), public  :: amr_io
Module handling convenient AMR I/O. We wish to have a file format that is easy to read when restarting jobs, and also easy to read with Python. We can rely on the existing mechanism that writes patch metadata in the form of a file run/snapno/rank_rankno_patches.nml with namelist data for each patch. This may be replaced later with a file written with mpi_buffer_mod.
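As a rough illustration of this metadata format, here is a minimal sketch (not the actual DISPATCH code) of writing per-patch namelist data that a restart or a Python namelist reader can parse; the variable names, the single patch, and the exact file-name pattern are assumptions made for the example.

  ! Sketch only: one namelist group per patch, one file per rank and snapshot.
  ! All names below (patch_nml, patch_size, ...) are illustrative assumptions.
  program patches_nml_sketch
    implicit none
    integer            :: id, level, unit
    real               :: position(3), patch_size(3)
    character(len=64)  :: filename
    namelist /patch_nml/ id, level, position, patch_size
    ! Example values for a single patch
    id = 1; level = 3; position = [0.5, 0.5, 0.5]; patch_size = [0.25, 0.25, 0.25]
    ! e.g. rank_00000_patches.nml, placed under run/snapno/ by the caller
    write (filename,'(a,i5.5,a)') 'rank_', 0, '_patches.nml'
    open  (newunit=unit, file=trim(filename), status='replace', action='write')
    write (unit, nml=patch_nml)
    close (unit)
  end program patches_nml_sketch

A file written this way can be read back with a matching read (unit, nml=patch_nml), which is what makes the namelist form convenient for restarts.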
We can reuse the functionality in mpi_file_mod, and build a task list with patch data, ready for output. We also reuse the mechanism that counts down the number of tasks still remaining to be added to the list. This saves debugging effort, since the mechanism is known to work and is rather delicate: new AMR patches may be added after the I/O process has started, up until it is complete, with all existing tasks having passed the current out_next value.
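The countdown idea can be illustrated with a small sketch (again, not the actual amr_io_mod code): each task that has passed out_next is appended to the output list inside a critical region and decrements a shared counter, and the thread that brings the counter to zero knows the list is complete. The names n_remaining, add_task, and countdown_cr are illustrative assumptions; the real mechanism also handles patches created while the output is in progress.

  ! Sketch of the countdown over tasks still missing from the output list
  module countdown_sketch
    implicit none
    integer :: n_remaining = 0          ! tasks not yet added to the output list
  contains
    subroutine add_task (time, out_next, ready)
      real,    intent(in)  :: time, out_next
      logical, intent(out) :: ready      ! .true. when the list is complete
      ready = .false.
      if (time >= out_next) then
        !$omp critical (countdown_cr)
        ! ... append the task's patch data to the output list here ...
        n_remaining = n_remaining - 1
        ready = (n_remaining == 0)
        !$omp end critical (countdown_cr)
      end if
    end subroutine add_task
  end module countdown_sketch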
Note that the output procedure is called from inside a critical region in data_io_mod.f90, and hence everything in output and the procedures it calls is trivially thread safe; only the last thread to reach next_out enters here, and while it is active other threads are waiting at the start of the critical region. When they enter, they find that the output has been completed.
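A minimal sketch of this "only the last thread does the output" pattern, assuming a shared counter of threads still expected at the current output time (the names maybe_output, n_left, and dt_out are not from the code):

  ! All threads funnel through the same named critical region; the one that
  ! brings the counter to zero performs the output, advances out_next, and
  ! re-arms the counter, so later threads see the work has been done.
  subroutine maybe_output (n_left, out_next, dt_out)
    use omp_lib
    implicit none
    integer, intent(inout) :: n_left    ! threads still expected at this output time
    real,    intent(inout) :: out_next  ! next output time
    real,    intent(in)    :: dt_out    ! output cadence
    !$omp critical (data_io_cr)
    n_left = n_left - 1
    if (n_left == 0) then
      ! ... the actual output is performed here, by this thread only ...
      out_next = out_next + dt_out      ! later threads see the advanced value
      n_left = omp_get_num_threads()    ! re-arm for the next output time
    end if
    !$omp end critical (data_io_cr)
  end subroutine maybe_output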
Call hierarchy:
experiment_t%output              ! request patch output
  gpatch_t%output                ! catch output request
    data_io_t%output             ! select output method
      amr_io_t%output            ! check if final thread & rank
        amr_io_t%output_list     ! run through list
          mpi_file_t%open        ! open data/run/IOUT/snapshot.dat
          amr_io_t%output_buffer ! fill buffer for variable iv
          mpi_file_t%write       ! compute offset and size
            MPI_File_write_at    ! write out
          mpi_file_t%close       ! close file
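The write step at the bottom of the hierarchy boils down to each rank computing a byte offset and calling MPI_File_write_at. The following stand-alone sketch shows only that pattern; the snapshot.dat name and the simple one-block-per-rank layout are assumptions for the example, not the DISPATCH file layout.

  ! Sketch: every rank writes one contiguous block of nbuf doubles at its own offset
  program write_at_sketch
    use mpi
    implicit none
    integer, parameter :: nbuf = 1024
    real(kind=8)       :: buffer(nbuf)
    integer            :: rank, fh, ierr
    integer(kind=MPI_OFFSET_KIND) :: offset
    call MPI_Init (ierr)
    call MPI_Comm_rank (MPI_COMM_WORLD, rank, ierr)
    buffer = real(rank, kind=8)
    call MPI_File_open (MPI_COMM_WORLD, 'snapshot.dat', &
                        MPI_MODE_CREATE + MPI_MODE_WRONLY, MPI_INFO_NULL, fh, ierr)
    ! offset and size: here simply rank * (block size in bytes)
    offset = int(rank, MPI_OFFSET_KIND) * nbuf * 8_MPI_OFFSET_KIND
    call MPI_File_write_at (fh, offset, buffer, nbuf, MPI_DOUBLE_PRECISION, &
                            MPI_STATUS_IGNORE, ierr)
    call MPI_File_close (fh, ierr)
    call MPI_Finalize (ierr)
  end program write_at_sketch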
OpenMP aspects: All threads call experiment_t%output repeatedly, but most of the time they return from data_io%output after finding that the time has not advanced enough. When they reach a time where they should add their data to the output list, they must operate one at a time, so that part should be inside a single critical region (or should use a lock).
data_io%output indeed has named critical regions around all calls that end up inside amr_io_mod, so only one thread can ever be inside the procedures below. But we must also make sure that when that thread decides to make collective MPI calls, which on some systems (e.g. laptops) may only be made by one thread, all other threads are held up at the very same critical region (data_io_cr). This is arranged by using an OMP lock in data_io_mod; cf. the comments there.
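As a sketch of how an OpenMP lock can stand in for such a critical region (this is not the data_io_mod code; io_lock and guarded_output are invented names): because the lock is an ordinary variable it can be initialized once, taken in one module and respected in another, and held for the full duration of the collective MPI calls, so all other threads block at the same point.

  ! Sketch: a module-level lock playing the role of the data_io_cr critical region
  module io_lock_sketch
    use omp_lib
    implicit none
    integer(kind=omp_lock_kind) :: io_lock
  contains
    subroutine init
      call omp_init_lock (io_lock)       ! called once, before the parallel region
    end subroutine init
    subroutine guarded_output (do_output)
      logical, intent(in) :: do_output
      call omp_set_lock (io_lock)        ! all other threads block here
      if (do_output) then
        ! ... collective MPI-IO calls (one thread per rank) go here ...
      end if
      call omp_unset_lock (io_lock)
    end subroutine guarded_output
  end module io_lock_sketch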