CP2K

Last Update: 27 Nov. 2024


CP2K is an ab initio calculation package that supports both pseudopotential and all-electron methods for solids, liquids, molecules, materials, crystals, and biological systems.


Available users

Kyushu Univ. users    Academic users    Non-academic users
OK                    OK                OK

Module

Module name    Version
cp2k-cpu       2023.1
cp2k-gpu       2023.1
Refer to the following page for the usage of modules:
Module usage

Usage

Setup Environment

CP2K CPU

$ module load intel
$ module load impi
$ module load hdf5
$ module load cp2k-cpu
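
After loading the modules, you can quickly check that the CP2K executables are visible; a minimal sketch (the --version option of the CP2K binary prints its version and build information):

$ which cp2k.popt cp2k.psmp
$ cp2k.popt --version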

CP2K GPU

$ module load intel
$ module load impi
$ module load hdf5
$ module load cuda
$ module load cp2k-gpu
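
The same kind of check works for the GPU build; a sketch that only verifies the executable and the CUDA toolkit are on the PATH (it does not require a GPU on the login node):

$ which cp2k.psmp
$ nvcc --version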

Example batch script (MPI)

#!/bin/bash

#PJM -L rscgrp=a-batch
#PJM -L node=1
#PJM --mpi proc=8
#PJM -L elapse=2:00:00
#PJM -j

module load intel
module load impi
module load hdf5
module load cp2k-cpu

mpiexec cp2k.popt -i geo.inp > output
  • 1 node and 8 processes
  • Reads the input file geo.inp (a minimal example input is sketched below).
  • The calculation results written to standard output are redirected to the file output.
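
The script assumes an input file geo.inp in the submission directory. Its contents depend entirely on your calculation, but as a reference only, here is a minimal sketch of a CP2K input (a single-point PBE calculation of an H2 molecule, using the standard BASIS_MOLOPT and GTH_POTENTIALS data files shipped with CP2K; none of these values are site recommendations):

cat > geo.inp << 'EOF'
&GLOBAL
  PROJECT h2
  RUN_TYPE ENERGY
  PRINT_LEVEL LOW
&END GLOBAL
&FORCE_EVAL
  METHOD QUICKSTEP
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME GTH_POTENTIALS
    &XC
      &XC_FUNCTIONAL PBE
      &END XC_FUNCTIONAL
    &END XC
  &END DFT
  &SUBSYS
    &CELL
      ABC 10.0 10.0 10.0
    &END CELL
    &COORD
      H 0.0 0.0 0.0
      H 0.0 0.0 0.74
    &END COORD
    &KIND H
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q1
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
EOF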

Example batch script (Hybrid parallel)

#!/bin/bash

#PJM -L rscgrp=a-batch
#PJM -L node=1
#PJM --mpi proc=2
#PJM -L elapse=2:00:00
#PJM -j

module load intel
module load impi
module load hdf5
module load cp2k-cpu
export OMP_NUM_THREADS=2

mpiexec cp2k.psmp -i geo.inp > output
  • 1 node, 2 MPI processes, 2 OpenMP threads per process
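
Submission and monitoring follow the usual flow of the batch system; a hedged sketch, assuming the pjsub/pjstat commands of the Fujitsu job scheduler (implied by the #PJM directives above) and that the script was saved as run_cp2k.sh:

$ pjsub run_cp2k.sh
$ pjstat
$ grep "GLOBAL|" output

The lines beginning with GLOBAL| in the CP2K output report, among other things, the number of MPI processes and OpenMP threads actually used, which is a convenient check of the hybrid settings.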

Example batch script (GPU)

#!/bin/bash

#PJM -L rscgrp=b-batch
#PJM -L node=1
#PJM --mpi proc=4
#PJM -L elapse=2:00:00
#PJM -j

module load intel
module load impi
module load hdf5
module load cuda
module load cp2k-gpu
export OMP_NUM_THREADS=1

mpirun -map-by ppr:2:node cp2k.psmp -i geo.inp > output
  • 1 node (exclusive use), 4 MPI processes, 1 thread per process, 4 GPUs
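
If you want to record which GPUs the job was actually assigned, one option (an assumption for illustration, not an official recommendation of this system) is to add an nvidia-smi call to the script just before the mpirun line:

nvidia-smi --query-gpu=index,name,memory.total --format=csv > gpu_info.txt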