How to use software (using the module command)

Last update: 5 Dec. 2024

List of software available on Genkai

Please see here for a list of software available on Genkai.


Switch the usage environment using the module command

Genkai has a variety of software installed, but not all of it is ready to run immediately: you must use Environment Modules (the module command) to switch the usage environment.

This may seem like a bother until you get used to it, but it makes it easy to use different versions of the same software by switching environments.

This page introduces the basic usage of the module command. For a more detailed explanation, please check man module or the official website. For the module commands required by each piece of software installed on Genkai, please check that software's page.


List available module environments

By executing module avail, you can display a list of the currently available module environments. Note that a module which depends on another module will not be displayed until that other module is loaded. (Some modules appear in the module avail output only when a specific module is already loaded.)

A show_module command is also provided to help you understand module dependencies. Its output is read as follows: ApplicationName is available in NodeGroup, and it is necessary to load ModuleName to use it; the module listed under BaseCompiler/MPI must also be loaded for the target module to load and run correctly.

$ show_module
ApplicationName                     ModuleName                      NodeGroup   BaseCompiler/MPI
------------------------------------------------------------------------------------------------
Amber & AmberTools                  amber/24                        LoginNode   gcc/8 impi/2021.12
Amber & AmberTools                  amber/24                        NodeGroupA  gcc/8 impi/2021.12
Amber & AmberTools                  amber/24                        NodeGroupB  gcc/8 impi/2021.12
Amber & AmberTools                  amber/24                        NodeGroupC  gcc/8 impi/2021.12
(snip)

The -k option can be used to narrow down the output by keyword.

$ show_module -k Intel
ApplicationName                     ModuleName                      NodeGroup   BaseCompiler/MPI
------------------------------------------------------------------------------------------------
Intel MPI                           impi/2021.10.0                  LoginNode   intel/2023.2
Intel MPI                           impi/2021.10.0                  NodeGroupA  intel/2023.2
Intel MPI                           impi/2021.10.0                  NodeGroupB  intel/2023.2
Intel MPI                           impi/2021.10.0                  NodeGroupC  intel/2023.2
(snip)

Load (use) the module environment

By executing module load target_module_name, the target module environment is loaded and the software becomes available. (In most cases, the environment variable PATH is updated.)

In the list of modules output by module avail, the module name and version are separated by /. You can load a module by specifying only the module name; in that case (when no version is specified), the version marked (default) will be loaded.

Note that some compilers and libraries will not start or run correctly unless the same module is loaded both when the program is built (during login-node work) and when it is executed (in the job script). Also, some modules are mutually exclusive (cannot be used at the same time). Unload or switch modules as necessary.
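To illustrate the rule above, a job script should repeat the module load commands that were used when the program was built. The following is a minimal sketch only: the scheduler directive lines and the program name are hypothetical placeholders, not Genkai's actual settings, so replace them according to the job submission pages of this documentation.

```shell
#!/bin/bash
# (Hypothetical) scheduler directives -- replace with the resource group,
# elapsed-time limit, etc. actually required on Genkai:
# #___ -L rscgrp=<resource-group>
# #___ -L elapse=0:10:00

# Load the same modules that were loaded at build time, so that the
# runtime libraries match the ones the program was compiled against.
module purge
module load cuda/12.2.2

# Run the program built under this module environment
# (my_cuda_program is a placeholder name).
./my_cuda_program
```

Starting with module purge makes the job script independent of whatever modules happen to be loaded in the submitting shell.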


List loaded module environments

You can display a list of currently loaded module environments by running module list.


Unload (stop using) a loaded module environment

You can unload the loaded module environment by invoking module unload target_module_name.


Switching (replacing) a module environment

You can switch modules by executing module switch source_module_name destination_module_name.


Get information about the module environment

You can get hints about the target module environment by running module help target_module_name or module whatis target_module_name. You may get useful information about the target module, such as brief usage notes or related modules, if such information has been set up for that module.


Initialize the module environment

By running module purge, you can unload all currently loaded modules and return to the initial state.


Usage example 1

The following shows the module command in action, using the CUDA compiler (nvcc) as an example.
Lines beginning with $ are commands; lines beginning with # are explanatory comments.

# check the list of available modules
$ module avail
---------------------------------- /home/modules/modulefiles/LN/core ----------------------------------
cuda/11.8.0           gcc-toolset/12  intel/2023.2           nvidia/23.9(default)
cuda/12.2.2(default)  gcc/8(default)  intel/2024.1(default)

---------------------------------- /home/modules/modulefiles/LN/util ----------------------------------
avs/express85(default)     jupyter_notebook/7.2.1(default)  molpro/2024.1.0_mpipr
fieldview/2023(default)    marc/2024.1(default)             molpro/2024.1.0_sockets
gaussian/16.C.01(default)  mathematica/14.0(default)        nastran/2024.1(default)
julia/1.10.3(default)      matlab/R2024a(default)           singularity-ce/4.1.3(default)

# check for loaded modules (not initially loaded)
$ module list
No Modulefiles Currently Loaded.

# CUDA compiler nvcc is not available by default
$ nvcc --version
-bash: nvcc: command not found
# nvcc is available after loading cuda module
$ module load cuda
$ module list
Currently Loaded Modulefiles:
 1) cuda/12.2.2(default)
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0

# change the version to load
$ module switch cuda/12.2.2 cuda/11.8.0
$ module list
Currently Loaded Modulefiles:
 1) cuda/11.8.0
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

# check module information
$ module help cuda
-------------------------------------------------------------------
Module Specific Help for /home/modules/modulefiles/LN/core/cuda/12.2.2:

1. How to use NVIDIA CUDA Toolkit

(1) Set environment
    [username@genkai0001 work]$ module load cuda/12.2.2

(2) Compile/link
    [username@genkai0001 work]$ nvcc [option] file

-------------------------------------------------------------------
$ module whatis cuda
---------------------------------- /home/modules/modulefiles/LN/core ----------------------------------
         cuda/11.8.0: CUDA Toolkit installed on 2024/06/13
         cuda/12.2.2: CUDA Toolkit installed on 2023/09/20

# initialize modules (unload all modules)
$ module purge
$ module list
No Modulefiles Currently Loaded.

Usage example 2

As another example of executing the module command, here is an example of using the NVIDIA HPC SDK. It is configured so that the related MPI library modules are not displayed unless the nvidia module is loaded. Refer to this example when a module has dependencies.

Lines beginning with $ are commands; lines beginning with # are explanatory comments.

# List of modules available in the initial state
[ku40000105@genkai0001 ~]$ module avail
--------------------------------- /home/modules/modulefiles/LN/core ---------------------------------
cuda/11.8.0           gcc-toolset/12(default)  intel/2023.2(default)  nvidia/24.11
cuda/12.2.2(default)  gcc-toolset/13           intel/2024.1
cuda/12.6.1           gcc/8(default)           nvidia/23.9(default)

--------------------------------- /home/modules/modulefiles/LN/util ---------------------------------
avs/express85(default)           mathematica/14.0(default)
aws_pcluster/3.9.1(default)      matlab/R2024a(default)
awscli/2.16.8(default)           matlab_parallel_server/R2024a(default)
azure_cyclecli/8.6.2(default)    mesa/20.3.5
(snip)
# load nvidia module to increase the number of modules that can be checked in avail.
[ku40000105@genkai0001 ~]$ module load nvidia
[ku40000105@genkai0001 ~]$ module list
Currently Loaded Modulefiles:
 1) nvidia/23.9(default)
[ku40000105@genkai0001 ~]$ module avail
------------------------- /home/modules/modulefiles/LN/compiler/nvidia/23.9 -------------------------
fftw/3.3.10(default)  netcdf-cxx/4.3.1(default)      nvhpcx/23.9
hdf5/1.14.4(default)  netcdf-fortran/4.6.1(default)  nvhpcx/23.9-cuda12
hpcx/2.17.1(default)  netcdf/4.9.2(default)          nvompi/23.9
--------------------------------- /home/modules/modulefiles/LN/core ---------------------------------
cuda/11.8.0           gcc-toolset/12(default)  intel/2023.2(default)  nvidia/24.11
cuda/12.2.2(default)  gcc-toolset/13           intel/2024.1
cuda/12.6.1           gcc/8(default)           nvidia/23.9(default)

--------------------------------- /home/modules/modulefiles/LN/util ---------------------------------
avs/express85(default)           mathematica/14.0(default)
aws_pcluster/3.9.1(default)      matlab/R2024a(default)
awscli/2.16.8(default)           matlab_parallel_server/R2024a(default)
azure_cyclecli/8.6.2(default)    mesa/20.3.5
(snip)
# After loading nvhpcx (or nvompi), the MPI included in the HPC SDK is available
[ku40000105@genkai0001 ~]$ module load nvhpcx
[ku40000105@genkai0001 ~]$ module list
Currently Loaded Modulefiles:
 1) nvidia/23.9(default)   2) nvhpcx/23.9-cuda12