Document retrieved from a search-engine cache. Original: http://angel.cmc.msu.ru/~ifed/teraflops/Pozdneev_HybridMPIOpenMP.pdf
Last modified: Thu Mar 4 12:27:46 2010
MPI+OpenMP


pozdneev@gmail.com


MPI+OpenMP

28 2008

1 / 16



1.

2. SMP-

3.

4. MPI-2

5. PBS, POE, LoadLeveler

2 / 16




Each MPI process is itself multithreaded, using:

OpenMP
POSIX threads (Pthreads)

3 / 16





Clusters of SMP nodes:

IBM System Cluster 1600 (IBM eServer pSeries 690 Regatta)
Blue Gene/P

OpenMP within a node, MPI between nodes; MPI processes exchange data: arrays A1 and A2, computation of F(A1, A2)

4 / 16



SMP-

5 / 16




Thread-safe ("_r") MPI compiler wrappers:

mpif90_r, mpicc_r, mpixlf90_r, mpixlc_r
Blue Gene/P: bgxlf_r, bgxlc_r
OpenMP is enabled with -qsmp=omp

Job-management ("batch") systems:

Portable Batch System (PBS)
IBM Parallel Operating Environment (POE)
IBM LoadLeveler

6 / 16


«Thread-safe» initialization of MPI

#include <mpi.h>

int main(int argc, char **argv)
{
    int required = MPI_THREAD_FUNNELED;
    int mpi_rank, mpi_size, mpi_err, provided;
    MPI_Comm comm = MPI_COMM_WORLD;

    mpi_err = MPI_Init_thread(&argc, &argv, required, &provided);
    mpi_err = MPI_Comm_rank(comm, &mpi_rank);
    mpi_err = MPI_Comm_size(comm, &mpi_size);

    if (mpi_rank == 0) {
        switch (provided) {
        case MPI_THREAD_SINGLE:     /* only one thread may execute   */ break;
        case MPI_THREAD_FUNNELED:   /* only the main thread uses MPI */ break;
        case MPI_THREAD_SERIALIZED: /* any thread, one at a time     */ break;
        case MPI_THREAD_MULTIPLE:   /* no restrictions               */ break;
        default:                    /* unexpected value              */ break;
        }
    }

    mpi_err = MPI_Finalize();
    return 0;
}
8 / 16




MPI_THREAD_SINGLE

only one thread executes in the MPI process

MPI_THREAD_FUNNELED

the process may be multithreaded, but only the thread that initialized MPI makes MPI calls

MPI_THREAD_SERIALIZED

any thread may make MPI calls, but only one at a time

MPI_THREAD_MULTIPLE

multiple threads may make MPI calls concurrently, with no restrictions

9 / 16




A call to MPI_INIT is equivalent to a call to MPI_INIT_THREAD with required = MPI_THREAD_SINGLE.
At the MPI_THREAD_SINGLE level the MPI process must not create additional threads; a program that does so is erroneous, and the behavior of the MPI implementation is undefined.

10 / 16


PBS

#!/bin/sh
#PBS -lnodes=3:ppn=1
#PBS -lwalltime=0:15:00

module load intel-compilers
module load intel-mpich-ib

export OMP_NUM_THREADS=2
cd $PBS_O_WORKDIR
mpiexec ok

12 / 16


POE

#!/bin/sh
# @ notification = never
# @ output = $(jobid).out
# @ error = $(jobid).err
# @ job_type = parallel
# @ network.mpi = csss,shared,us
# @ wall_clock_limit = 00:15:00
# @ requirements = (Pool==2)
# @ node = 2
# @ tasks_per_node = 1
# @ resources = ConsumableCpus(4)
#
# @ queue

export MP_EUILIB=us
export OMP_NUM_THREADS=4
./ok
14 / 16


LoadLeveler (Regatta)


#!/bin/bash
#@ output = $(executable).$(jobid).$(stepid).out
#@ error = $(executable).$(jobid).$(stepid).err
#@ notification = never
#@ job_type = parallel
#@ node = 1
#@ tasks_per_node = 4
#@ node_usage = not_shared
#@ resources = ConsumableCpus(3)
#@ wall_clock_limit = 00:05:00
#@ environment = COPY_ALL; \
    OMP_NUM_THREADS=3; AIXTHREAD_SCOPE=S; \
    MEMORY_AFFINITY=MCM; \
    MP_SHARED_MEMORY=yes; MP_WAIT_MODE=poll; \
    MP_SINGLE_THREAD=yes; MP_TASK_AFFINITY=MCM
#@ queue

/usr/local/bin/mpirun -np 4 ok

16 / 16