[library Boost.MPI
    [authors [Gregor, Douglas], [Troyer, Matthias] ]
    [copyright 2005 2006 2007 Douglas Gregor, Matthias Troyer, Trustees of Indiana University]
    [purpose
        A generic, user-friendly interface to MPI, the Message
        Passing Interface.
    ]
    [id mpi]
    [dirname mpi]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        <ulink url="http://www.boost.org/LICENSE_1_0.txt">
            http://www.boost.org/LICENSE_1_0.txt
        </ulink>)
    ]
]

[/ Links ]
[def _MPI_                 [@http://www-unix.mcs.anl.gov/mpi/ MPI]]
[def _MPI_implementations_ [@http://www-unix.mcs.anl.gov/mpi/implementations.html MPI implementations]]
[def _Serialization_       [@http://www.boost.org/libs/serialization/doc Boost.Serialization]]
[def _BoostPython_         [@http://www.boost.org/libs/python/doc Boost.Python]]
[def _Python_              [@http://www.python.org Python]]
[def _LAM_                 [@http://www.lam-mpi.org/ LAM/MPI]]
[def _MPICH_               [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH]]
[def _OpenMPI_             [@http://www.open-mpi.org OpenMPI]]
[def _accumulate_          [@http://www.sgi.com/tech/stl/accumulate.html `accumulate`]]

[/ QuickBook Document version 1.0 ]

[section:intro Introduction]

Boost.MPI is a library for message passing in high-performance parallel applications. A Boost.MPI program is one or more processes that can communicate either via sending and receiving individual messages (point-to-point communication) or by coordinating as a group (collective communication). Unlike communication in threaded environments or using a shared-memory library, Boost.MPI processes can be spread across many different machines, possibly with different operating systems and underlying architectures.

Boost.MPI is not a completely new parallel programming library. Rather, it is a C++-friendly interface to the standard Message Passing Interface (_MPI_), the most popular library interface for high-performance, distributed computing. MPI defines a library interface, available from C, Fortran, and C++, for which there are many _MPI_implementations_. Although there exist C++ bindings for MPI, they offer little functionality over the C bindings. The Boost.MPI library provides an alternative C++ interface to MPI that better supports modern C++ development styles, including complete support for user-defined data types and C++ Standard Library types, arbitrary function objects for collective algorithms, and the use of modern C++ library techniques to maintain maximal efficiency.

At present, Boost.MPI supports the majority of functionality in MPI 1.1. The thin abstractions in Boost.MPI allow one to easily combine it with calls to the underlying C MPI library. Boost.MPI currently supports:

* Communicators: Boost.MPI supports the creation, destruction, cloning, and splitting of MPI communicators, along with manipulation of process groups.
* Point-to-point communication: Boost.MPI supports point-to-point communication of primitive and user-defined data types with send and receive operations, with blocking and non-blocking interfaces (a minimal usage sketch follows this list).
* Collective communication: Boost.MPI supports collective operations such as [funcref boost::mpi::reduce `reduce`] and [funcref boost::mpi::gather `gather`] with both built-in and user-defined data types and function objects.
* MPI Datatypes: Boost.MPI can build MPI data types for user-defined types using the _Serialization_ library.
* Separating structure from content: Boost.MPI can transfer the shape (or "skeleton") of complex data structures (lists, maps, etc.) and then separately transfer their content. This facility optimizes for cases where the data within a large, static data structure needs to be transmitted many times.
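To give a concrete flavour of the interface these bullets describe, here is a minimal sketch (not taken from the library's own examples; the message text, tag values, and variable names are illustrative) that sends a `std::string` from process 0 to process 1 and then sums all ranks on process 0 with [funcref boost::mpi::reduce `reduce`] and a standard function object. It assumes the library has been built as described under Getting started and that the program is launched with at least two processes.

  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <boost/mpi/collectives.hpp>
  #include <boost/serialization/string.hpp> // serialization support for std::string
  #include <functional>
  #include <iostream>
  #include <string>

  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    // Point-to-point: process 0 sends a std::string to process 1.
    if (world.rank() == 0) {
      std::string msg("Hello from rank 0");
      world.send(1, 0, msg);               // (destination, tag, value)
    } else if (world.rank() == 1) {
      std::string msg;
      world.recv(0, 0, msg);               // (source, tag, value)
      std::cout << "Rank 1 received: " << msg << std::endl;
    }

    // Collective: sum every process's rank onto process 0.
    if (world.rank() == 0) {
      int sum = 0;
      mpi::reduce(world, world.rank(), sum, std::plus<int>(), 0);
      std::cout << "Sum of ranks: " << sum << std::endl;
    } else {
      mpi::reduce(world, world.rank(), std::plus<int>(), 0);
    }

    return 0;
  }

Including `boost/serialization/string.hpp` is what lets Boost.MPI transmit a `std::string` via the _Serialization_ library, as described in the MPI Datatypes bullet above.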
Boost.MPI can be accessed either through its native C++ bindings, or through its alternative, [link mpi.python Python interface].

[endsect]

[section:getting_started Getting started]

Getting started with Boost.MPI requires a working MPI implementation, a recent version of Boost, and some configuration information.

[section:mpi_impl MPI Implementation]

To get started with Boost.MPI, you will first need a working MPI implementation. There are many conforming _MPI_implementations_ available. Boost.MPI should work with any of the implementations, although it has only been tested extensively with:

* [@http://www.open-mpi.org Open MPI 1.0.x]
* [@http://www.lam-mpi.org LAM/MPI 7.x]
* [@http://www-unix.mcs.anl.gov/mpi/mpich/ MPICH 1.2.x]

You can test your implementation using the following simple program, which passes a message from one processor to another. Each processor prints a message to standard output.

  #include <mpi.h>
  #include <iostream>

  int main(int argc, char* argv[])
  {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
      int value = 17;
      int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      if (result == MPI_SUCCESS)
        std::cout << "Rank 0 OK!" << std::endl;
    } else if (rank == 1) {
      int value;
      int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                            MPI_STATUS_IGNORE);
      if (result == MPI_SUCCESS && value == 17)
        std::cout << "Rank 1 OK!" << std::endl;
    }

    MPI_Finalize();
    return 0;
  }

You should compile and run this program on two processors. To do this, consult the documentation for your MPI implementation. With _LAM_, for instance, you compile with the `mpiCC` or `mpic++` compiler, boot the LAM/MPI daemon, and run your program via `mpirun`. For instance, if your program is called `mpi-test.cpp`, use the following commands:

[pre
mpiCC -o mpi-test mpi-test.cpp
lamboot
mpirun -np 2 ./mpi-test
lamhalt
]

When you run this program, you will see both `Rank 0 OK!` and `Rank 1 OK!` printed to the screen. However, they may be printed in any order and may even overlap each other. The following output is perfectly legitimate for this MPI program:

[pre
Rank Rank 1 OK!0 OK!
]

If your output looks something like the above, your MPI implementation appears to be working with a C++ compiler and we're ready to move on.

[endsect]

[section:config Configure and Build]

Boost.MPI uses version 2 of the [@http://www.boost.org/doc/html/bbv2.html Boost.Build] system for configuring and building the library binary. You will need a very new version of [@http://www.boost.org/tools/build/jam_src/index.html Boost.Jam] (3.1.12 or later). If you already have Boost.Jam, run `bjam -v` to determine what version you are using.

Information about building Boost.Jam is [@http://www.boost.org/tools/build/jam_src/index.html#building_bjam available here]. However, most users need only run `build.sh` in the `tools/build/jam_src` subdirectory of Boost. Then, copy the resulting `bjam` executable some place convenient.

For many users using _LAM_, _MPICH_, or _OpenMPI_, configuration is almost automatic. If you don't already have a file `user-config.jam` in your home directory, copy `tools/build/v2/user-config.jam` there. For many users, MPI support can be enabled simply by adding the following line to your `user-config.jam` file, which is used to configure Boost.Build version 2.

  using mpi ;

This should auto-detect MPI settings based on the MPI wrapper compiler in your path, e.g., `mpic++`. If the wrapper compiler is not in your path, see below.
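Taken together, a minimal `user-config.jam` might then look like the following sketch; the `using gcc ;` line is only an illustrative assumption about the toolset you normally build Boost with, while `using mpi ;` is the line that enables Boost.MPI:

  # user-config.jam (sketch)
  using gcc ;   # the C++ toolset you normally build Boost with
  using mpi ;   # enable Boost.MPI; settings auto-detected from the wrapper compiler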
To actually build the MPI library, go into the top-level Boost directory and execute the command:

[pre
bjam --with-mpi
]

If your MPI wrapper compiler has a different name from the default, you can pass the name of the wrapper compiler as the first argument to the mpi module:

  using mpi : /opt/mpich2-1.0.4/bin/mpiCC ;

If your MPI implementation does not have a wrapper compiler, or the MPI auto-detection code does not work with your MPI's wrapper compiler, you can pass MPI-related options explicitly via the second parameter to the `mpi` module:

  using mpi : : <find-shared-library>lammpio <find-shared-library>lammpi++
                <find-shared-library>mpi <find-shared-library>lam
                <find-shared-library>dl ;

To see the results of MPI auto-detection, pass `--debug-configuration` on the bjam command line.

The (optional) fourth argument configures Boost.MPI for running regression tests. These parameters specify the executable used to launch jobs (default: "mpirun"), followed by any arguments needed to run tests and to tell the launcher that the number of processors follows (default: "-np"). With the default parameters, for instance, the test harness will execute, e.g.,

[pre
mpirun -np 4 all_gather_test
]

[endsect]

[section:installation Installing and Using Boost.MPI]

Installation of Boost.MPI can be performed in the build step by specifying `install` on the command line and (optionally) providing an installation location, e.g.,

[pre
bjam --with-mpi install
]

This command will install libraries into a default system location. To change the path where libraries will be installed, add the option `--prefix=PATH`.

To build applications based on Boost.MPI, compile and link them as you normally would for MPI programs, but remember to link against the `boost_mpi` and `boost_serialization` libraries, e.g.,

[pre
mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi-gcc-mt-1_35 -lboost_serialization-gcc-d-1_35.a
]

If you plan to use the [link mpi.python Python bindings] for Boost.MPI in conjunction with the C++ Boost.MPI, you will also need to link against the boost_mpi_python library, e.g., by adding `-lboost_mpi_python-gcc-mt-1_35` to your link command. This step will only be necessary if you intend to [link mpi.python_user_data register C++ types] or use the [link mpi.python_skeleton_content skeleton/content mechanism] from within Python.

[endsect]

[section:testing Testing Boost.MPI]

If you would like to verify that Boost.MPI is working properly with your compiler, platform, and MPI implementation, a self-contained test suite is available. To use this test suite, you will need to first configure Boost.Build for your MPI environment and then run `bjam` in `libs/mpi/test` (possibly with some extra options). For _LAM_, you will need to run `lamboot` before running `bjam`. For _MPICH_, you may need to create a machine file and pass `-sMPIRUN_FLAGS="-machinefile <filename>"` to Boost.Jam; see the section on [link mpi.config configuration] for more information. If testing succeeds, `bjam` will exit without errors.
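For instance, with _LAM_ the whole sequence might look like the following sketch, run from the top-level Boost directory:

[pre
lamboot
cd libs/mpi/test
bjam
lamhalt
]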
[endsect]
[endsect]

[section:tutorial Tutorial]

A Boost.MPI program consists of many cooperating processes (possibly running on different computers) that communicate among themselves by passing messages. Boost.MPI is a library (as is the lower-level MPI), not a language, so the first step in a Boost.MPI program is to create an [classref boost::mpi::environment mpi::environment] object that initializes the MPI environment and enables communication among the processes. The [classref boost::mpi::environment mpi::environment] object is initialized with the program arguments (which it may modify) in your main program. The creation of this object initializes MPI, and its destruction will finalize MPI. In the vast majority of Boost.MPI programs, an instance of [classref boost::mpi::environment mpi::environment] will be declared in `main` at the very beginning of the program.

Communication with MPI always occurs over a *communicator*, which can be created by simply default-constructing an object of type [classref boost::mpi::communicator mpi::communicator]. This communicator can then be queried to determine how many processes are running (the "size" of the communicator) and to give a unique number to each process, from zero to the size of the communicator (i.e., the "rank" of the process):

  #include <boost/mpi/environment.hpp>
  #include <boost/mpi/communicator.hpp>
  #include <iostream>
  namespace mpi = boost::mpi;

  int main(int argc, char* argv[])
  {
    mpi::environment env(argc, argv);
    mpi::communicator world;
    std::cout << "I am process " << world.rank() << " of " << world.size()
              << "." << std::endl;
    return 0;
  }

If you run this program with 7 processes, for instance, you will receive output such as:

[pre
I am process 5 of 7.
I am process 0 of 7.
I am process 1 of 7.
I am process 6 of 7.
I am process 2 of 7.
I am process 4 of 7.
I am process 3 of 7.
]

Of course, the processes can execute in a different order each time, so the ranks might not be strictly increasing. More interestingly, the
