Euler: Using Modules
Using OpenHPC Environment Modules on Euler
Overview
Euler integrates the OpenHPC 4.0 distribution as its main HPC software provider[1]. Software in OpenHPC is provided through environment modules (specifically Lmod), which differ in layout and usage from the module tree users may be familiar with.
At a high level, using the OpenHPC modules involves the following steps:
- Load a compiler toolchain module. Loading a compiler module will cause compatible libraries and MPI toolchains to appear in module avail.
- Load an MPI module. Loading the MPI module will cause additional optimized parallel libraries to appear in module avail.
- Load parallel libraries and any other desired modules.
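A typical session following these steps might look like the sketch below. The module names are taken from the examples later in this document; the exact names and versions available on Euler may differ.

```
# Step 1: load a compiler toolchain (GCC 15 here).
module load gnu15

# Step 2: load an MPI implementation built for that compiler.
module load openmpi5

# Step 3: load parallel libraries built against that compiler/MPI pair,
# e.g. parallel HDF5 and FFTW.
module load phdf5 fftw

# Verify what is loaded.
module list
```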
Each combination of compiler and MPI library typically provides the same set of libraries[2], with builds optimized for use with the selected toolchain. OpenHPC provides a full list of components on its wiki. Not all components are necessarily installed or supported on Euler, but users may contact CAE to request additional software be installed.
Selecting a compiler
OpenHPC's compilers are loaded using a module bearing the name of the compiler family, such as gnu15 or intel. Loading one of these modules makes available a set of libraries (including MPI implementations) which are compatible with that compiler.
Example: GNU Compiler Collection 15
Loading the gnu15 module will provide a version of GCC that is suitable for building applications with OpenHPC.
module load gnu15
Running module avail after loading the module will show additional libraries.
------------------------------------ /opt/ohpc/pub/moduledeps/gnu15 -------------------------------------
R/4.5.0 hdf5/1.14.6 mpich/3.4.3-ucx opari2/2.0.9 py3-numpy/1.26.4
cubelib/4.9 impi/2021.14 netcdf-cxx/4.3.1 openblas/0.3.29 scotch/7.0.7
cubew/4.9 likwid/5.4.1 netcdf-fortran/4.6.2 openmpi5/5.0.8 superlu/7.0.0
gotcha/1.0.8 metis/5.1.0 netcdf/4.9.3 pdtoolkit/3.25.1
--------------------------------------- /opt/ohpc/pub/modulefiles ---------------------------------------
autotools gnu15/15.1.0 (L) intel/2025.0.4 papi/7.2.0 prun/2.2 ucx/1.18.0
cmake/4.1.2 hwloc/2.12.1 os pmix/4.2.9 spack/1.0.1 valgrind/3.25.1
The module also sets environment variables such as $CC, $CXX, and $FC to ensure that the compiler is detected by configuration tools like Autoconf or CMake.
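As a sketch, after loading the compiler module you can confirm these variables and rely on them when configuring a build. The configure and CMake invocations below are illustrative placeholders, not specific Euler packages.

```
# Inspect the compiler variables exported by the gnu15 module.
echo "$CC $CXX $FC"

# Autoconf-style builds pick these variables up automatically:
./configure --prefix=$HOME/sw/mypkg

# CMake also honors $CC/$CXX/$FC when configuring a fresh build tree:
cmake -S . -B build
```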
The use of other GCC modules or versions with OpenHPC libraries is not supported.
Selecting an MPI implementation
Each compiler provides support for a few MPI implementations, including MPICH, OpenMPI, and Intel MPI. The corresponding modules usually bear the name of the implementation, i.e. mpich, openmpi5, and impi, respectively.
Take care when loading these modules, as a different MPI version may have performance or stability implications for your software. Be sure to take note of the specific module version (e.g. openmpi5/5.0.8) when building against one of these libraries.
Loading an MPI implementation will make available a set of optimized parallel libraries built against that MPI implementation.
Example: GNU Compiler Collection 15 with OpenMPI 5
Loading the gnu15 module causes the openmpi5 module to become available.
module load gnu15
module load openmpi5
The above loads can be combined into a single command, but be careful to ensure that the MPI implementation comes after the compiler module in your module load.
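For example, the two loads above collapse into one command, with the compiler listed first:

```
# Order matters: compiler toolchain first, then the MPI implementation.
module load gnu15 openmpi5
```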
Running module avail will now show parallel libraries using OpenMPI 5.
-------------------------------- /opt/ohpc/pub/moduledeps/gnu15-openmpi5 --------------------------------
adios2/2.10.2 netcdf-cxx/4.3.1 petsc/3.23.5 scalapack/2.2.2 superlu_dist/6.4.0
boost/1.88.0 netcdf-fortran/4.6.2 phdf5/1.14.6 scalasca/2.6.2 tau/2.34.1
extrae/3.8.3 netcdf/4.9.3 pnetcdf/1.14.0 scorep/9.3 trilinos/16.1.0
fftw/3.3.10 omb/7.5 ptscotch/7.0.7 sionlib/1.7.7
mumps/5.8.1 otf2/3.1.1 py3-mpi4py/3.1.5 slepc/3.23.0
------------------------------------ /opt/ohpc/pub/moduledeps/gnu15 -------------------------------------
R/4.5.0 hdf5/1.14.6 mpich/3.4.3-ucx opari2/2.0.9 py3-numpy/1.26.4
cubelib/4.9 impi/2021.14 netcdf-cxx/4.3.1 openblas/0.3.29 scotch/7.0.7
cubew/4.9 likwid/5.4.1 netcdf-fortran/4.6.2 openmpi5/5.0.8 (L) superlu/7.0.0
gotcha/1.0.8 metis/5.1.0 netcdf/4.9.3 pdtoolkit/3.25.1
--------------------------------------- /opt/ohpc/pub/modulefiles ---------------------------------------
autotools gnu15/15.1.0 (L) intel/2025.0.4 papi/7.2.0 prun/2.2 ucx/1.18.0 (L)
cmake/4.1.2 hwloc/2.12.1 (L) os pmix/4.2.9 spack/1.0.1 valgrind/3.25.1
Combining OpenHPC with classic Euler Modules
Some HPC software and useful libraries are not available through OpenHPC. As such, Euler continues to maintain some classic modules to provide that software. When this software requires an MPI implementation, Euler relies on the OpenHPC modules to provide one. Modules built against a particular toolchain will reflect this in their version slug. For example, the GNU Compiler Collection 15 module combined with OpenMPI might look like this: package_name/gnu15-openmpi5.0.8/package_version. By selecting packages with the same toolchain versions, users can build an environment of compatible libraries and applications.
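As a hypothetical illustration (package_name and package_version are placeholders from the slug pattern above, not real Euler modules), assembling such a mixed environment might look like:

```
# Load the OpenHPC toolchain first...
module load gnu15 openmpi5

# ...then a classic Euler module built against that same toolchain.
# "package_name" and "package_version" are placeholders.
module load package_name/gnu15-openmpi5.0.8/package_version
```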
Example: GROMACS
The molecular dynamics software GROMACS uses both MPI and GPU acceleration to achieve extremely high performance. A list of GROMACS packages available on Euler can be queried using module avail gromacs.
-------------------------------------- /opt/apps/lmod/modulefiles ---------------------------------------
gromacs/gnu15-mpich3.4.3-cu13.0/2025.4 gromacs/gnu15-mpich3.4.3-nogpu/2025.4 (D)
Note that the version slug for this package includes not only its name and version, gromacs and 2025.4, but also a reference to the compiler (gnu15), the specific MPI library version (mpich3.4.3), and, for the GPU build, the CUDA version (cu13.0) used to build and run the package.
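Putting this together, a session using the GPU build shown above might begin by loading the matching toolchain and then the package. The launch line is an illustrative sketch only; gmx_mpi arguments and scheduler integration on Euler will vary, so consult the GROMACS documentation for actual options.

```
# Load the toolchain the package was built with, then the package itself.
module load gnu15 mpich
module load gromacs/gnu15-mpich3.4.3-cu13.0/2025.4

# Illustrative MPI launch; process count and mdrun options are placeholders.
mpirun -np 4 gmx_mpi mdrun -deffnm my_simulation
```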
Footnotes
[1] As of December 2025.
[2] At the time of this writing, Euler does not include optimized libraries for Intel oneAPI-based toolchains.
