OpenMPI

To use OpenMPI, first load one of these modules:

openmpi/1.10.7-gcc-9.2.0-slurm   
openmpi/1.6.5-gcc-4.8.5          
openmpi/1.8.8-gcc-4.8.5          
openmpi/2.0.2-gcc-4.8.5          
openmpi/2.0.2-gcc-4.8.5-32bits   
openmpi/2.1.5-gcc-8.2.0          
openmpi/2.1.6-gcc-9.2.0-slurm    
openmpi/3.0.5-gcc-9.2.0-slurm    
openmpi/3.1.2-gcc-8.2.0          
openmpi/3.1.2-gcc-8.2.0-dsa      
openmpi/3.1.2-intel-17.0.8       
openmpi/3.1.3-orca-gcc-4.8.5     
openmpi/3.1.4-gcc-7.3.0          
openmpi/3.1.5-gcc-9.2.0-slurm
openmpi/4.0.3-gcc-9.2.0-dsa
openmpi/openmpi-2.0.4-gcc-4.8.5
openmpi/openmpi-3.1.2-gcc-4.8.5
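
For example, to use the 3.1.5 build with GCC 9.2.0 and Slurm support (any module from the list is loaded the same way):

module load openmpi/3.1.5-gcc-9.2.0-slurm
mpicc --version    # quick check that the MPI wrapper compilers are now in your PATH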

Use ifort

If you want to build a code with OpenMPI and ifort, make sure this variable is set:

export OMPI_FC=ifort
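
A minimal build sketch, assuming one of the OpenMPI modules above is already loaded, ifort is in your PATH, and hello_mpi.f90 is a placeholder source file:

export OMPI_FC=ifort                     # tell the mpif90 wrapper to call ifort
mpif90 -O2 -o hello_mpi hello_mpi.f90    # compile and link against the loaded OpenMPI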

Communication optimization

These variables are set when you load a module, but in case of problems with MPI communication, verify this setup:

For CLUSTER - xeonv1/v2/v3/v4/v5/v6 - xeonv3_mono - xeonv4_mono - xeonv5_mono - napab:

export OMPI_MCA_btl_tcp_if_exclude="10.1.0.0/16,10.3.0.0/16,10.4.0.0/16"
export OMPI_MCA_btl="self,sm,tcp"

For xeonv1_mono - xeonv2_mono - sv6:

export OMPI_MCA_btl_tcp_if_exclude="10.1.0.0/16,10.2.0.0/16,10.4.0.0/16"
export OMPI_MCA_btl="self,sm,tcp"

For moonshot:

export OMPI_MCA_btl_tcp_if_exclude="10.2.0.0/16,10.3.0.0/16,10.4.0.0/16"
export OMPI_MCA_btl="self,sm,tcp"

With modern OpenMPI (version >= 3.1):

export OMPI_MCA_btl="vader,self,tcp"

Slurm

Use mpirun instead of srun to launch MPI programs in your Slurm job scripts, as in the sketch below.
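
A minimal batch script sketch; the job name, resource counts and my_mpi_program binary are placeholders:

#!/bin/bash
#SBATCH --job-name=mpi_test        # placeholder job name
#SBATCH --nodes=2                  # placeholder resources
#SBATCH --ntasks-per-node=8

module load openmpi/3.1.5-gcc-9.2.0-slurm

# BTL settings for OpenMPI >= 3.1 (see above)
export OMPI_MCA_btl="vader,self,tcp"

# launch with mpirun, not srun
mpirun ./my_mpi_program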