[section:communicators Communicators]

[section:managing Managing communicators]

Communication with Boost.MPI always occurs over a communicator. A
communicator contains a set of processes that can send messages among
themselves and perform collective operations. There can be many
communicators within a single program, each of which contains its own
isolated communication space that acts independently of the other
communicators.

When the MPI environment is initialized, only the "world" communicator
(called `MPI_COMM_WORLD` in the MPI C and Fortran bindings) is
available. The "world" communicator, accessed by default-constructing
a [classref boost::mpi::communicator mpi::communicator]
object, contains all of the MPI processes present when the program
begins execution. Other communicators can then be constructed by
duplicating or building subsets of the "world" communicator. For
instance, in the following program we split the processes into two
groups: one for processes generating data and the other for processes
that will collect the data. (`generate_collect.cpp`)

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <cstdlib>
  #include <boost/serialization/vector.hpp>

  namespace mpi = boost::mpi;

  enum message_tags {msg_data_packet, msg_broadcast_data, msg_finished};

  void generate_data(mpi::communicator local, mpi::communicator world);
  void collect_data(mpi::communicator local, mpi::communicator world);

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    // The first two thirds of the processes will generate data; the
    // remaining third will collect it.
    bool is_generator = world.rank() < 2 * world.size() / 3;

    // Processes that pass the same value ("color") to split() end up
    // together in the same new communicator.
    mpi::communicator local = world.split(is_generator ? 0 : 1);
    if (is_generator) generate_data(local, world);
    else collect_data(local, world);

    return 0;
  }

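Besides `split()`, an existing communicator can also be duplicated,
yielding a new communicator that contains the same processes but a
completely separate communication space. A minimal sketch, assuming the
`communicator` constructor that takes a `comm_create_kind` value such
as `mpi::comm_duplicate` (see the communicator reference for the exact
interface):

  #include <boost/mpi.hpp>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    // An independent copy of "world": messages sent over "private_channel"
    // are never matched by receives posted on "world", and vice versa.
    // (mpi::comm_duplicate is assumed here; "private_channel" is just an
    // illustrative name.)
    mpi::communicator private_channel(world, mpi::comm_duplicate);

    return 0;
  }
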
When communicators are split in this way, their processes retain
membership in both the original communicator (which is not altered by
the split) and the new communicator. However, the ranks of the
processes may be different from one communicator to the next, because
the rank values within a communicator are always contiguous values
starting at zero. In the example above, the first two thirds of the
processes become "generators" and the remaining processes become
"collectors". The ranks of the "collectors" in the `world`
communicator will be 2/3 `world.size()` and greater, whereas the ranks
of the same collector processes in the `local` communicator will start
at zero. The following excerpt from `collect_data()` (in
`generate_collect.cpp`) illustrates how to manage multiple
communicators:

  mpi::status msg = world.probe();
  if (msg.tag() == msg_data_packet) {
    // Receive the packet of data
    std::vector<int> data;
    world.recv(msg.source(), msg.tag(), data);

    // Tell each of the collectors that we'll be broadcasting some data
    for (int dest = 1; dest < local.size(); ++dest)
      local.send(dest, msg_broadcast_data, msg.source());

    // Broadcast the actual data.
    broadcast(local, data, 0);
  }

The code in this excerpt is executed by the "master" collector, i.e.,
the node with rank 2/3 `world.size()` in the `world` communicator and
rank 0 in the `local` (collector) communicator. It receives a message
from a generator via the `world` communicator, then broadcasts the
message to each of the collectors via the `local` communicator.

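The renumbering of ranks after a split can be seen directly by having
each process report its rank in both communicators. The following is a
small, self-contained sketch in the spirit of the program above; it is
not part of `generate_collect.cpp`:

  #include <boost/mpi.hpp>
  #include <iostream>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    // Same partition as above: the first two thirds generate, the rest collect.
    bool is_generator = world.rank() < 2 * world.size() / 3;
    mpi::communicator local = world.split(is_generator ? 0 : 1);

    // Each process keeps its rank in "world" but receives a new, contiguous
    // rank starting at zero within its "local" communicator.
    std::cout << "world rank " << world.rank()
              << " is a " << (is_generator ? "generator" : "collector")
              << " with local rank " << local.rank() << std::endl;
    return 0;
  }
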
For more control in the creation of communicators for subgroups of
processes, the Boost.MPI [classref boost::mpi::group `group`] class
provides facilities to compute the union (`|`), intersection (`&`), and
difference (`-`) of two groups, generate arbitrary subgroups, etc.

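For example, the even-ranked processes of `world` can be turned into a
communicator of their own by building a subgroup. The sketch below
assumes the `group()` accessor on `communicator`, the iterator-range
`group::include()`, and the `communicator(comm, subgroup)` constructor;
check the reference documentation for the exact interface:

  #include <boost/mpi.hpp>
  #include <iostream>
  #include <vector>
  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    // Collect the even ranks of the "world" communicator.
    std::vector<int> even_ranks;
    for (int r = 0; r < world.size(); r += 2)
      even_ranks.push_back(r);

    // Build a subgroup containing only those processes, and the group of
    // everyone else as the difference of the two groups.
    mpi::group world_group = world.group();
    mpi::group even_group  = world_group.include(even_ranks.begin(), even_ranks.end());
    mpi::group odd_group   = world_group - even_group;

    // Construct a communicator over the even subgroup. Processes outside
    // the subgroup are expected to receive a "null" communicator (the
    // analogue of MPI_COMM_NULL), so test it before use.
    mpi::communicator evens(world, even_group);
    if (evens)
      std::cout << "world rank " << world.rank() << " has rank " << evens.rank()
                << " among the evens; " << odd_group.size()
                << " processes were left out" << std::endl;
    return 0;
  }
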
[endsect:managing]

[section:cartesian_communicator Cartesian communicator]

A communicator can be organised as a Cartesian grid; here is a basic
example:

  #include <vector>
  #include <iostream>

  #include <boost/mpi/communicator.hpp>
  #include <boost/mpi/collectives.hpp>
  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/cartesian_communicator.hpp>

  #include <boost/test/minimal.hpp>

  namespace mpi = boost::mpi;
  int test_main(int argc, char* argv[])
  {
    mpi::environment env;
    mpi::communicator world;

    // The 2x3x4 periodic grid below requires exactly 24 processes.
    if (world.size() != 24) return -1;
    mpi::cartesian_dimension dims[] = {{2, true}, {3, true}, {4, true}};
    mpi::cartesian_communicator cart(world, mpi::cartesian_topology(dims));
    for (int r = 0; r < cart.size(); ++r) {
      cart.barrier();
      if (r == cart.rank()) {
        std::vector<int> c = cart.coordinates(r);
        std::cout << "rk :" << r << " coords: "
                  << c[0] << ' ' << c[1] << ' ' << c[2] << '\n';
      }
    }
    return 0;
  }

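Coordinates can also be translated back into ranks, which is the usual
way of finding a neighbour in the grid. The following sketch builds a
one-dimensional periodic ring and looks up the rank of the next process
along it; the `rank(coordinates)` lookup used here is assumed to be the
inverse of `coordinates(rank)` (wrapping `MPI_Cart_rank`), so consult
the [classref boost::mpi::cartesian_communicator `cartesian_communicator`]
reference for the exact signature:

  #include <iostream>
  #include <vector>

  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <boost/mpi/cartesian_communicator.hpp>

  namespace mpi = boost::mpi;

  int main()
  {
    mpi::environment env;
    mpi::communicator world;

    // A periodic, one-dimensional ring with one slot per process.
    mpi::cartesian_dimension dims[] = {{world.size(), true}};
    mpi::cartesian_communicator ring(world, mpi::cartesian_topology(dims));

    // Step one position along the ring; because the dimension is periodic,
    // the last process wraps around to the first.
    std::vector<int> next = ring.coordinates(ring.rank());
    next[0] = (next[0] + 1) % world.size();

    std::cout << "process " << ring.rank()
              << " would send to process " << ring.rank(next) << std::endl;
    return 0;
  }

In a real application the looked-up rank would then be used with the
usual point-to-point or collective operations on the Cartesian
communicator.
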
[endsect:cartesian_communicator]

[endsect:communicators]