Intel MPI Library¶
The second possibility is to load an Intel MPI module (intel-mpi-4 or intel-mpi-5) and to use the appropriate compiler wrapper script. All supported compilers have equivalent commands that use the prefix mpi in front of the standard compiler command. For example, the Intel MPI Library command for the Intel Fortran compiler is mpiifort, and for GNU Fortran it is mpif90:
- If you want to use GNU Fortran:
[homer@thor]$ module load intel-mpi-4/4.1.3.048
[homer@thor]$ mpif90 --version
GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
Copyright (C) 2010 Free Software Foundation, Inc.
[homer@thor]$ mpif90 mycode.f90 ...
- If you want to use Intel Fortran:
[homer@thor IMB]$ mpiifort --version
ifort (IFORT) 15.0.3 20150407
Copyright (C) 1985-2015 Intel Corporation. All rights reserved.
[homer@thor]$ mpiifort mycode.f90 ...
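If you are unsure which underlying compiler a given wrapper invokes, the Intel MPI wrapper scripts accept the -show option, which prints the full compile/link command line without actually compiling anything:

[homer@thor]$ mpif90 -show     # prints the gfortran command used by the wrapper
[homer@thor]$ mpiifort -show   # prints the ifort command used by the wrapper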
To launch programs linked with the Intel MPI library, use the mpirun command.
The mpirun command uses Hydra or MPD as the underlying process manager. Hydra is the default process manager. Set the I_MPI_PROCESS_MANAGER environment variable to change the default value:
[homer@thor]$ export I_MPI_PROCESS_MANAGER=hydra   # or: export I_MPI_PROCESS_MANAGER=mpd
You can run each process manager directly by invoking the `mpiexec` command for MPD and the `mpiexec.hydra` command for Hydra.
In the case of the MPD process manager, the `mpirun` command automatically starts an independent ring of mpd daemons, launches the MPI job, and shuts down the mpd ring upon job termination. The `mpiexec` command does not perform these steps, so you must start and stop the mpd daemons yourself:
[homer@thor]$ NUM_NODES=`cat $PBS_NODEFILE | sort | uniq | wc -l`
[homer@thor]$ NCPUS=`cat $PBS_NODEFILE | wc -l`
[homer@thor]$ mpdboot -n $NUM_NODES -f $PBS_NODEFILE
[homer@thor]$ mpiexec -n $NCPUS ./executable
[homer@thor]$ mpdallexit
In the case of the Hydra process manager, no daemons are necessary to launch MPI jobs. The mpirun command performs all prologue and epilogue steps inside the batch manager, as sketched below.
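A minimal sketch of a direct Hydra launch inside a PBS job, assuming Hydra is the active process manager and ./executable is a placeholder for your MPI binary:

[homer@thor]$ NCPUS=`cat $PBS_NODEFILE | wc -l`
[homer@thor]$ mpiexec.hydra -f $PBS_NODEFILE -n $NCPUS ./executable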
PBS provides an interface to Intel MPI's mpirun. If executed inside a PBS job, this allows PBS to track all Intel MPI processes, so that PBS can perform accounting and have complete job control. If executed outside of a PBS job, it behaves exactly as if the standard Intel MPI mpirun were used.
Inside the batch manager, you do not need to create the mpd.hosts file for the MPD process manager. Allocate the session using the job scheduler installed on your system, and use the mpirun command inside this session to run your MPI job.
When submitting PBS jobs that invoke the PBS-supplied interface to mpirun for Intel MPI, be sure to explicitly specify the actual number of ranks or MPI tasks in the qsub select specification. Otherwise, jobs will fail to run with "too few entries in the machinefile".
The PBS interface to Intel MPI’s mpirun always passes the arguments -totalnum=<number of mpds to start> and -file=<mpd_hosts_file> to the actual mpirun, taking its input from unique entries in $PBS_NODEFILE.
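For example, here is a minimal sketch of a matching pair of select specification and mpirun invocation (the executable name ./mycode_impi is a placeholder): requesting 2 nodes with 20 MPI processes each provides 40 entries in $PBS_NODEFILE, so mpirun must not ask for more than 40 ranks.

#PBS -l select=2:ncpus=20:mpiprocs=20
# ... rest of the job script ...
mpirun -np 40 ./mycode_impi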
Here is a sample job script that uses Intel MPI:
#!/bin/bash
#PBS -N intel_mpi_test
#PBS -l select=3:ncpus=20:mpiprocs=20
#PBS -l place=scatter:excl
#PBS -l walltime=02:00:00
#PBS -j oe

module purge
module load intel-tools-14/14.0.2.144
module load intel-mpi-4/4.1.3.048

cd ${PBS_O_WORKDIR}

# Print various job information in the output file
NCPU=`wc -l < $PBS_NODEFILE`
CODE=/home/homer/bin/mycode_impi
echo ------------------------------------------------------
echo PBS: qsub is running on $PBS_O_HOST
echo PBS: the originating queue is $PBS_O_QUEUE
echo PBS: the execution queue is $PBS_QUEUE
echo PBS: the working directory is $PBS_O_WORKDIR
echo PBS: the job identifier is $PBS_JOBID
echo PBS: the job name is $PBS_JOBNAME
echo PBS: the node file is $PBS_NODEFILE
echo PBS: the home directory is $PBS_O_HOME
echo PBS: PATH = $PBS_O_PATH
echo ------------------------------------------------------

mpirun -np ${NCPU} $CODE > out.$PBS_JOBID 2>&1
Run Time Parameters¶
There are many environment variables that control the runtime behaviour of an application compiled with Intel MPI. For example:
I_MPI_WAIT_MODE=<enable|yes|on|1;disable|no|off|0>
: Turns wait mode on or off.
Set this environment variable to control the wait mode. When this mode is enabled, processes wait to receive messages without polling the fabric(s), which can save CPU time for other tasks.
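A minimal example of enabling wait mode for a run (./executable is a placeholder for your binary):

[homer@thor]$ export I_MPI_WAIT_MODE=enable
[homer@thor]$ mpirun -np ${NCPU} ./executable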
Test Java MPI¶
Load MPI, Java 8 and the classpath:
module load intel-mpi-18/18.2.199
module load java/jdk8
source mpivars.sh
Create a test file, named for example Test.java, in the mpi/test directory so that it matches the package declaration:
/*
 * Copyright 2016 Intel Corporation. All Rights Reserved.
 *
 * The source code contained or described herein and all documents related to
 * the source code ("Material") are owned by Intel Corporation or its suppliers
 * or licensors. Title to the Material remains with Intel Corporation or its
 * suppliers and licensors. The Material is protected by worldwide copyright
 * and trade secret laws and treaty provisions. No part of the Material may be
 * used, copied, reproduced, modified, published, uploaded, posted, transmitted,
 * distributed, or disclosed in any way without Intel's prior express written
 * permission.
 *
 * No license under any patent, copyright, trade secret or other intellectual
 * property right is granted to or conferred upon you by disclosure or delivery
 * of the Materials, either expressly, by implication, inducement, estoppel or
 * otherwise. Any license under such intellectual property rights must be
 * express and approved by Intel in writing.
 */
package mpi.test;

import mpi.*;

public class Test {
    static public void main(String[] args) throws MPIException {
        MPI.Init(args);

        int[] size = new int[]{Comm.WORLD.getSize()};
        int[] rank = new int[]{Comm.WORLD.getRank()};
        String processorName = MPI.getProcessorName();
        int[] length = new int[]{processorName.length()};
        char[] nameToSend = processorName.toCharArray();

        if (rank[0] == 0) {
            // Rank 0 receives rank, size, name length and processor name from every other rank
            for (int i = 1; i < size[0]; i++) {
                PTP.recv(rank, 1, Datatype.INT, i, 1, Comm.WORLD);
                PTP.recv(size, 1, Datatype.INT, i, 1, Comm.WORLD);
                PTP.recv(length, 1, Datatype.INT, i, 1, Comm.WORLD);
                char[] name = new char[length[0]];
                PTP.recv(name, length[0] * Character.SIZE / Byte.SIZE, Datatype.CHAR, i, 1, Comm.WORLD);
                MPI.println("Hello world: rank " + rank[0] + " of " + size[0]
                        + " running on " + new String(name));
            }
        } else {
            // Every other rank sends its rank, size, name length and processor name to rank 0
            PTP.send(rank, 1, Datatype.INT, 0, 1, Comm.WORLD);
            PTP.send(size, 1, Datatype.INT, 0, 1, Comm.WORLD);
            PTP.send(length, 1, Datatype.INT, 0, 1, Comm.WORLD);
            PTP.send(nameToSend, (length[0]) * Character.SIZE / Byte.SIZE, Datatype.CHAR, 0, 1, Comm.WORLD);
        }

        MPI.Finalize();
    }
}
Compile Test.java:
javac mpi/test/Test.java
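If the compiler does not pick up the Intel MPI Java bindings automatically, the classpath can be passed explicitly. This assumes mpivars.sh, sourced above, exported a CLASSPATH containing those bindings:

javac -cp "$CLASSPATH" mpi/test/Test.java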
Create a job launch script that reserves 2 × 11 CPUs:
#!/bin/bash -l
#PBS -l select=2:ncpus=11:mpiprocs=11:mem=100mb
#PBS -l walltime=0:10:00
#PBS -j oe
#PBS -M mguichar@univ-lr.fr
#PBS -q express
#PBS -V

cd ~/testJavaMPI

#### start
mpirun -n 22 java -cp $CLASSPATH:. mpi.test.Test
Submit the job:
qsub joblauncher
Result:
[mguichar@thor testJavaMPI]$ cat joblauncher.o925041
Start Prologue v2.5.3 Fri Apr 19 16:06:44 CEST 2019
End Prologue v2.5.3 Fri Apr 19 16:06:45 CEST 2019
Hello world: rank 1 of 22 running on r2i3n11
Hello world: rank 2 of 22 running on r2i3n11
Hello world: rank 3 of 22 running on r2i3n11
Hello world: rank 4 of 22 running on r2i3n11
Hello world: rank 5 of 22 running on r2i3n11
Hello world: rank 6 of 22 running on r2i3n11
Hello world: rank 7 of 22 running on r2i3n11
Hello world: rank 8 of 22 running on r2i3n11
Hello world: rank 9 of 22 running on r2i3n11
Hello world: rank 10 of 22 running on r2i3n11
Hello world: rank 11 of 22 running on r2i3n12
Hello world: rank 12 of 22 running on r2i3n12
Hello world: rank 13 of 22 running on r2i3n12
Hello world: rank 14 of 22 running on r2i3n12
Hello world: rank 15 of 22 running on r2i3n12
Hello world: rank 16 of 22 running on r2i3n12
Hello world: rank 17 of 22 running on r2i3n12
Hello world: rank 18 of 22 running on r2i3n12
Hello world: rank 19 of 22 running on r2i3n12
Hello world: rank 20 of 22 running on r2i3n12
Hello world: rank 21 of 22 running on r2i3n12
Start Epilogue v2.5.3 Fri Apr 19 16:06:46 CEST 2019
End Epilogue v2.5.3 Fri Apr 19 16:06:47 CEST 2019