SGI Library¶
The SGI MPI (MPT) software package facilitates parallel programming on large systems and on clusters of computer systems. SGI MPI supports both the Message Passing Interface (MPI) standard and the OpenSHMEM standard, as follows:

- The MPI standard supports C and Fortran programs with a library and supporting commands. MPI also supports parallel file I/O and remote memory access (RMA).
- MPT supports the MPI 3.0 standard.

SGI MPI includes significant features that make it the preferred implementation for use on SGI hardware:
- Multirail InfiniBand support, which takes full advantage of the multiple InfiniBand fabrics available on SGI ICE systems.
- Optimized MPI remote memory access (RMA) one-sided commands.
- High-performance communication support for partitioned systems.
In order to use MPT within a PBS job with Performance Suite, you may need to add the following to your job script before you call MPI::
    [homer@thor]$ module load mpt/2.10
    [homer@thor]$ which mpirun
    /opt/sgi/mpt/mpt-2.10/bin/mpirun
    [homer@thor ~]$ which mpif90
    /opt/sgi/mpt/mpt-2.10/bin/mpif90
The compiler invoked by the wrappers depends on the compiler modules that are loaded::
    [homer@thor ~]$ module list
    Currently Loaded Modulefiles:
      1) mpt/2.10                  3) intel-fc-14/14.0.2.144
      2) intel-cc-14/14.0.2.144    4) intel-compilers-14/14.0.2.144
    [homer@thor ~]$ mpif90 --version
    ifort (IFORT) 14.0.2 20140120
    Copyright (C) 1985-2014 Intel Corporation.  All rights reserved.
    [homer@thor ~]$ module purge
    [homer@thor ~]$ module load mpt/2.10
    [homer@thor ~]$ mpif90 --version
    GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
    Copyright (C) 2010 Free Software Foundation, Inc.
    [homer@thor ~]$ module list
    Currently Loaded Modulefiles:
      1) mpt/2.10
If you do not want to use the compiler wrappers, link against the MPI library directly::
    [homer@thor ~]$ gcc -o myprog myprog.c -lmpi
    [homer@thor ~]$ icc -o myprog myprog.c -lmpi
    [homer@thor ~]$ g++ -o myprog myprog.C -lmpi++ -lmpi
    [homer@thor ~]$ ifort -o myprog myprog.f -lmpi
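If you link directly in this way, a quick check that the binary actually picked up the MPT library is to inspect it with ldd. This check is not part of the original example; myprog is simply the binary built above::

    # List the dynamic dependencies of the binary and keep the MPI entries;
    # with the mpt module loaded, the paths should point into the MPT
    # installation tree (e.g. /opt/sgi/mpt/mpt-2.10/lib)
    [homer@thor ~]$ ldd ./myprog | grep -i mpi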
The mpirun(1) command starts an MPI application. For a complete specification of the command line syntax, see the mpirun(1) man page. This section summarizes the procedures for launching an MPI application.
PBS supplies its own mpiexec to use with SGI MPT. When you use the PBS-supplied mpiexec, PBS can track resource usage, signal processes, and perform accounting for all job processes. The PBS mpiexec provides the standard mpiexec interface. In addition, SGI MPT includes the mpiexec_mpt command, which provides the same capabilities.
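As an illustrative sketch only (the exact options are documented in the mpirun(1) and mpiexec_mpt(1) man pages), launching an executable named ./a.out interactively and inside a PBS job could look like this::

    # Interactive launch with the MPT mpirun: 4 MPI processes on the
    # current host (see mpirun(1) for the complete syntax)
    mpirun -np 4 ./a.out

    # Inside a PBS job, mpiexec_mpt picks up the allocated nodes from the
    # PBS environment, so the process count can usually be omitted
    mpiexec_mpt ./a.out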
You can run MPI applications from job scripts that you submit through workload managers such as PBS Professional. The following procedures explain how to configure PBS job scripts to run MPI applications.
- Within a script, use the -l option on a #PBS directive line. These lines have the following format::
    #PBS -l select=processes:ncpus=threads[:other_options]
For processes, specify the total number of MPI processes in the job. For threads, specify the number of OpenMP threads per process; for purely MPI jobs, specify 1. Concrete examples of these directives are sketched after this list.
- Use the mpiexec_mpt command included in SGI MPT, or the PBS-supplied mpiexec::

    mpiexec_mpt [-n processes] ./a.out
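As a sketch of the select format described above (the values are illustrative and must be adapted to your application and nodes), a purely MPI job and a hybrid MPI/OpenMP job could request resources as follows::

    # Purely MPI job: 40 MPI processes, one CPU each
    #PBS -l select=40:ncpus=1

    # Hybrid job: 8 MPI processes with 5 OpenMP threads each;
    # OMP_NUM_THREADS is exported in the script body, not on a #PBS line
    #PBS -l select=8:ncpus=5
    export OMP_NUM_THREADS=5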
Here is a PBS job script that uses MPT::
    #!/bin/bash
    #PBS -N mpt_test
    #PBS -l select=3:ncpus=20:mpiprocs=20
    #PBS -l place=scatter:excl
    #PBS -l walltime=02:00:00
    #PBS -j oe

    module purge
    module load intel-tools-14/14.0.2.144
    module load mpt/2.10

    cd ${PBS_O_WORKDIR}

    # Print some job information to the output file
    NCPU=`wc -l < $PBS_NODEFILE`
    CODE=/home/homer/bin/mycode_mpt

    # Run
    echo "------------------"
    date
    echo "MPI launcher in use:"
    which mpiexec_mpt
    echo ------------------------------------------------------
    echo 'This job allocates '${NCPU}' cpu(s)'
    echo 'PBS variable NCPUS: '${NCPUS}
    echo 'This job runs on the following node(s):'
    #cat $PBS_NODEFILE
    echo ------------------------------------------------------
    echo PBS: qsub was run on $PBS_O_HOST
    echo PBS: originating queue is $PBS_O_QUEUE
    echo PBS: executing queue is $PBS_QUEUE
    echo PBS: working directory is $PBS_O_WORKDIR
    echo PBS: job identifier is $PBS_JOBID
    echo PBS: job name is $PBS_JOBNAME
    echo PBS: node file is $PBS_NODEFILE
    echo PBS: home directory is $PBS_O_HOME
    echo PBS: PATH = $PBS_O_PATH
    echo ------------------------------------------------------

    mpiexec_mpt $CODE > out.$PBS_JOBID 2>&1
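To run it, save the script under a name of your choice (job_mpt.pbs below is only an example) and submit it with qsub; the program output is written to the out.$PBS_JOBID file by the last line of the script::

    # Submit the job; qsub prints the job identifier
    [homer@thor ~]$ qsub job_mpt.pbs

    # Follow the job while it is queued or running
    [homer@thor ~]$ qstat -u $USER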
Performance considerations¶
If your MPI job uses all the processors in each node (20 MPI processes per node for Ivy Bridge), pinning MPI processes greatly helps the performance of the code. The MPT version mpt/2.10 pins processes by default by setting the environment variable MPI_DSM_DISTRIBUTE to 1 (or true) when jobs are run on any of these nodes.
If your MPI job does not use all the processors in each node, we recommend that you disable MPI_DSM_DISTRIBUTE as follows::

    export MPI_DSM_DISTRIBUTE=0
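In practice this export goes into the job script before the mpiexec_mpt line. A minimal sketch, reusing the $CODE variable from the script above and assuming the MPI_DSM_VERBOSE variable provided by MPT to report rank placement::

    # Disable the default pinning for a job that under-populates its nodes
    export MPI_DSM_DISTRIBUTE=0

    # Report the actual rank placement in the job output so the effect of
    # the setting above can be verified (assumed MPT variable)
    export MPI_DSM_VERBOSE=1

    mpiexec_mpt $CODE > out.$PBS_JOBID 2>&1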