[library Boost.MPI
    [authors [Gregor, Douglas], [Troyer, Matthias] ]
    [copyright 2005 2006 2007 Douglas Gregor, Matthias Troyer, Trustees of Indiana University]
    [purpose
        A generic, user-friendly interface to MPI, the Message
        Passing Interface.
    ]
    [id mpi]
    [dirname mpi]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        <ulink url="http://www.boost.org/LICENSE_1_0.txt">http://www.boost.org/LICENSE_1_0.txt</ulink>)
    ]
]

[/ Links ]
[def _MPI_ [@http://www-unix.mcs.anl.gov/mpi/ MPI]]
[def _MPI_implementations_ [@http://www-unix.mcs.anl.gov/mpi/implementations.html MPI implementations]]
[def _Serialization_ [@http://www.boost.org/libs/serialization/doc Boost.Serialization]]
[def _BoostPython_ [@http://www.boost.org/libs/python/doc Boost.Python]]
[def _Python_ [@http://www.python.org Python]]
[def _LAM_ [@http://www.lam-mpi.org/ LAM/MPI]]
[def _MPICH_ [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH]]
[def _OpenMPI_ [@http://www.open-mpi.org OpenMPI]]
[def _accumulate_ [@http://www.sgi.com/tech/stl/accumulate.html `accumulate`]]

[/ QuickBook Document version 1.0 ]

[section:intro Introduction]

Boost.MPI is a library for message passing in high-performance
parallel applications. A Boost.MPI program is one or more processes
that can communicate either via sending and receiving individual
messages (point-to-point communication) or by coordinating as a group
(collective communication). Unlike communication in threaded
environments or using a shared-memory library, Boost.MPI processes
can be spread across many different machines, possibly with different
operating systems and underlying architectures.

Boost.MPI is not a completely new parallel programming library.
Rather, it is a C++-friendly interface to the standard Message
Passing Interface (_MPI_), the most popular library interface for
high-performance, distributed computing. MPI defines a library
interface, available from C, Fortran, and C++, for which there are
many _MPI_implementations_. Although there exist C++ bindings for
MPI, they offer little functionality over the C bindings. The
Boost.MPI library provides an alternative C++ interface to MPI that
better supports modern C++ development styles, including complete
support for user-defined data types and C++ Standard Library types,
arbitrary function objects for collective algorithms, and the use of
modern C++ library techniques to maintain maximal efficiency.

At present, Boost.MPI supports the majority of functionality in MPI
1.1. The thin abstractions in Boost.MPI allow one to easily combine
it with calls to the underlying C MPI library. Boost.MPI currently
supports:

* Communicators: Boost.MPI supports the creation, destruction,
  cloning, and splitting of MPI communicators, along with manipulation
  of process groups.
* Point-to-point communication: Boost.MPI supports point-to-point
  communication of primitive and user-defined data types with send and
  receive operations, with blocking and non-blocking interfaces (see
  the sketch after this list).
* Collective communication: Boost.MPI supports collective operations
  such as [funcref boost::mpi::reduce `reduce`] and [funcref
  boost::mpi::gather `gather`] with both built-in and user-defined
  data types and function objects.
* MPI Datatypes: Boost.MPI can build MPI data types for user-defined
  types using the _Serialization_ library.
* Separating structure from content: Boost.MPI can transfer the shape
  (or "skeleton") of complex data structures (lists, maps, etc.) and
  then separately transfer their content. This facility optimizes for
  cases where the data within a large, static data structure needs to
  be transmitted many times.
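As a taste of how these pieces fit together, here is a minimal sketch
(illustrative only, not a complete treatment) combining point-to-point
messaging with a [funcref boost::mpi::reduce `reduce`] collective; it
assumes a working Boost.MPI installation and a launch with at least
two processes:

    #include <boost/mpi.hpp>
    #include <functional>
    #include <iostream>
    #include <string>
    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;

      // Point-to-point: process 0 sends a std::string to process 1.
      // (Requires at least two processes.)
      if (world.rank() == 0) {
        world.send(1, 0, std::string("Hello"));
      } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, 0, msg);
        std::cout << msg << ", world!" << std::endl;
      }

      // Collective: sum every process's rank at process 0, using the
      // standard function object std::plus as the reduction operation.
      int sum = 0;
      mpi::reduce(world, world.rank(), sum, std::plus<int>(), 0);
      if (world.rank() == 0)
        std::cout << "Sum of ranks: " << sum << std::endl;
      return 0;
    }

Such a program is compiled and launched like any other MPI program,
e.g., via `mpirun -np 2`.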
Boost.MPI can be accessed either through its native C++ bindings, or
through its alternative, [link mpi.python Python interface].

[endsect]

[section:getting_started Getting started]

Getting started with Boost.MPI requires a working MPI implementation,
a recent version of Boost, and some configuration information.

[section:mpi_impl MPI Implementation]

To get started with Boost.MPI, you will first need a working MPI
implementation. There are many conforming _MPI_implementations_
available. Boost.MPI should work with any of the implementations,
although it has only been tested extensively with:

* [@http://www.open-mpi.org Open MPI 1.0.x]
* [@http://www.lam-mpi.org LAM/MPI 7.x]
* [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH 1.2.x]

You can test your implementation using the following simple program,
which passes a message from one processor to another. Each processor
prints a message to standard output.

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
      MPI_Init(&argc, &argv);

      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0) {
        int value = 17;
        int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        if (result == MPI_SUCCESS)
          std::cout << "Rank 0 OK!" << std::endl;
      } else if (rank == 1) {
        int value;
        int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                              MPI_STATUS_IGNORE);
        if (result == MPI_SUCCESS && value == 17)
          std::cout << "Rank 1 OK!" << std::endl;
      }
      MPI_Finalize();
      return 0;
    }

You should compile and run this program on two processors. To do this,
consult the documentation for your MPI implementation. With _LAM_, for
instance, you compile with the `mpiCC` or `mpic++` compiler, boot the
LAM/MPI daemon, and run your program via `mpirun`. For instance, if
your program is called `mpi-test.cpp`, use the following commands:

[pre
mpiCC -o mpi-test mpi-test.cpp
lamboot
mpirun -np 2 ./mpi-test
lamhalt
]

When you run this program, you will see both `Rank 0 OK!` and `Rank 1
OK!` printed to the screen. However, they may be printed in any order
and may even overlap each other. The following output is perfectly
legitimate for this MPI program:

[pre
Rank Rank 1 OK!0 OK!
]

If your output looks something like the above, your MPI implementation
appears to be working with a C++ compiler and we're ready to move on.

[endsect]

[section:config Configure and Build]

Boost.MPI uses version 2 of the
[@http://www.boost.org/doc/html/bbv2.html Boost.Build] system for
configuring and building the library binary. You will need a very new
version of [@http://www.boost.org/tools/build/jam_src/index.html
Boost.Jam] (3.1.12 or later). If you already have Boost.Jam, run
`bjam -v` to determine what version you are using.

Information about building Boost.Jam is
[@http://www.boost.org/tools/build/jam_src/index.html#building_bjam
available here]. However, most users need only run `build.sh` in the
`tools/build/jam_src` subdirectory of Boost. Then, copy the resulting
`bjam` executable some place convenient.

For many users using _LAM_, _MPICH_, or _OpenMPI_, configuration is
almost automatic. If you don't already have a file `user-config.jam`
in your home directory, copy `tools/build/v2/user-config.jam` there.
For many users, MPI support can be enabled simply by adding the
following line to your `user-config.jam` file, which is used to
configure Boost.Build version 2.

    using mpi ;

This should auto-detect MPI settings based on the MPI wrapper
compiler in your path, e.g., `mpic++`.
If the wrapper compiler is not in your path, see below.

To actually build the MPI library, go into the top-level Boost
directory and execute the command:

[pre
bjam --with-mpi
]

If your MPI wrapper compiler has a different name from the default,
you can pass the name of the wrapper compiler as the first argument
to the mpi module:

    using mpi : /opt/mpich2-1.0.4/bin/mpiCC ;

If your MPI implementation does not have a wrapper compiler, or the
MPI auto-detection code does not work with your MPI's wrapper
compiler, you can pass MPI-related options explicitly via the second
parameter to the `mpi` module:

    using mpi : : <find-shared-library>lammpio <find-shared-library>lammpi++
                <find-shared-library>mpi <find-shared-library>lam
                <find-shared-library>dl ;

To see the results of MPI auto-detection, pass `--debug-configuration`
on the bjam command line.

The (optional) third argument configures Boost.MPI for running
regression tests. These parameters specify the executable used to
launch jobs (default: "mpirun"), followed by any arguments needed to
tell that launcher how many processors to expect (default: "-np").
With the default parameters, for instance, the test harness will
execute, e.g.,

[pre
mpirun -np 4 all_gather_test
]

[endsect]

[section:installation Installing and Using Boost.MPI]

Installation of Boost.MPI can be performed in the build step by
specifying `install` on the command line and (optionally) providing
an installation location, e.g.,

[pre
bjam --with-mpi install
]

This command will install libraries into a default system location.
To change the path where libraries will be installed, add the option
`--prefix=PATH`.

To build applications based on Boost.MPI, compile and link them as
you normally would for MPI programs, but remember to link against the
`boost_mpi` and `boost_serialization` libraries, e.g.,

[pre
mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi-gcc-mt-1_35 -lboost_serialization-gcc-mt-1_35
]

If you plan to use the [link mpi.python Python bindings] for
Boost.MPI in conjunction with the C++ Boost.MPI, you will also need
to link against the boost_mpi_python library, e.g., by adding
`-lboost_mpi_python-gcc-mt-1_35` to your link command. This step will
only be necessary if you intend to [link mpi.python_user_data
register C++ types] or use the [link mpi.python_skeleton_content
skeleton/content mechanism] from within Python.

[endsect]

[section:testing Testing Boost.MPI]

If you would like to verify that Boost.MPI is working properly with
your compiler, platform, and MPI implementation, a self-contained
test suite is available. To use this test suite, you will need to
first configure Boost.Build for your MPI environment and then run
`bjam` in `libs/mpi/test` (possibly with some extra options).

For _LAM_, you will need to run `lamboot` before running `bjam`. For
_MPICH_, you may need to create a machine file and pass
`-sMPIRUN_FLAGS="-machinefile <filename>"` to Boost.Jam; see the
section on [link mpi.config configuration] for more information. If
testing succeeds, `bjam` will exit without errors.
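For instance, with _LAM_ a complete test run might look like the
following sketch (the exact commands and options will vary with your
environment):

[pre
lamboot
cd libs/mpi/test
bjam
lamhalt
]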
[endsect]
[endsect]

[section:tutorial Tutorial]

A Boost.MPI program consists of many cooperating processes (possibly
running on different computers) that communicate among themselves by
passing messages. Boost.MPI is a library (as is the lower-level MPI),
not a language, so the first step in a Boost.MPI program is to create
an [classref boost::mpi::environment mpi::environment] object that
initializes the MPI environment and enables communication among the
processes. The [classref boost::mpi::environment mpi::environment]
object is initialized with the program arguments (which it may
modify) in your main program. The creation of this object initializes
MPI, and its destruction will finalize MPI. In the vast majority of
Boost.MPI programs, an instance of [classref boost::mpi::environment
mpi::environment] will be declared in `main` at the very beginning of
the program.

Communication with MPI always occurs over a *communicator*, which can
be created by simply default-constructing an object of type [classref
boost::mpi::communicator mpi::communicator]. This communicator can
then be queried to determine how many processes are running (the
"size" of the communicator) and to give a unique number to each
process, from zero to the size of the communicator (i.e., the "rank"
of the process):

    #include <boost/mpi/environment.hpp>
    #include <boost/mpi/communicator.hpp>
    #include <iostream>
    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;
      std::cout << "I am process " << world.rank() << " of "
                << world.size() << "." << std::endl;
      return 0;
    }

If you run this program with 7 processes, for instance, you will
receive output such as:

[pre
I am process 5 of 7.
I am process 0 of 7.
I am process 1 of 7.
I am process 6 of 7.
I am process 2 of 7.
I am process 4 of 7.
I am process 3 of 7.
]

Of course, the processes can execute in a different order each time,
so the ranks might not be strictly increasing. More interestingly, the