How to use LAMMPS on Seawulf

LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator. It's a classical molecular dynamics (MD) code. As the name implies, it's designed to run well on parallel machines, but it also runs fine on single-processor desktop machines. http://lammps.sandia.gov/index.html

Audience: Faculty, Researchers, and Staff

Last Updated: December 19, 2017

Current Version

There are currently three LAMMPS builds available on Seawulf: an MPI-enabled gcc build, an MPI-enabled Intel build, and a GPU-optimized gcc build.

The MPI-enabled builds are LAMMPS version 17Nov2016 and include the following packages:

ASPHERE
BODY
CLASS2
COLLOID
COMPRESS
CORESHELL
DIPOLE
GRANULAR
KSPACE
MANYBODY
OPT
PERI
POEMS
PYTHON
REAX
REPLICA
RIGID
SHOCK
SNAP
SRD

The GPU-optimized build is LAMMPS version 11Aug17 and includes all of the previous packages with the addition of OpenMP threading (USER-OMP), GPU, and KOKKOS.

For more information on these packages, see: http://lammps.sandia.gov/doc/Section_packages.html


Using LAMMPS

The current version of LAMMPS is available using both the Intel and GNU toolchains. Both versions are accessible through modulefiles: lammps/gcc/17Nov2016 or lammps/intel/17Nov2016.
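To see every LAMMPS module currently installed on the cluster, you can query the module system:

module avail lammps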

As usual, we recommend the use of the Intel compiled versions unless your workflow demands otherwise.

To access LAMMPS load the appropriate modulefile:

module load lammps/intel/17Nov2016

or

module load lammps/gcc/17Nov2016

Loading either of these modules also loads the appropriate implementation of MPI.
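After loading a module, you can verify that the corresponding binary is on your PATH:

which lmp_intel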


Using GPU Accelerated LAMMPS

GPU-accelerated LAMMPS is only available using the GNU toolchain. It can be accessed through the lammps/gpu/11Aug17 module.

module load shared
module load lammps/gpu/11Aug17

 

OpenMP threading can additionally be requested by setting the OMP_NUM_THREADS environment variable to the desired number of threads, for example 2:

export OMP_NUM_THREADS=2
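For example, a hybrid MPI/OpenMP run pairing 14 MPI ranks with 2 threads each might look like the following sketch (the -sf omp and -pk omp switches are standard LAMMPS command-line options; adjust the rank and thread counts to match your allocation):

export OMP_NUM_THREADS=2
mpirun -np 14 lmp_gpu -sf omp -pk omp 2 < in.lj 1> out.txt 2> err.txt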

 


LAMMPS Examples

 
Parallel LAMMPS

Once you have loaded one of these modules, the LAMMPS binary will be available in your path as either lmp_mvapich2 (gcc build) or lmp_intel (Intel build).

An example job script and basic input files are available in the LAMMPS_EXAMPLES directory. To test this workflow:

module load shared
module load torque
module load lammps/intel/17Nov2016

cd $LAMMPS_EXAMPLES

mkdir -p $HOME/lammps_example

cp * $HOME/lammps_example && cd $_

qsub lammps_intel.pbs
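While the job runs, you can check its status with Torque's queue listing:

qstat -u $USER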

Once the job completes, it should produce the following files:

dump.melt.gz, log.lammps, and my_test.o* 
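For reference, a minimal sketch of a CPU-only submission script looks like the following (the actual lammps_intel.pbs shipped in $LAMMPS_EXAMPLES may differ in its queue and resource requests, and the in.melt input name here is inferred from the dump.melt.gz output):

#!/bin/bash

#PBS -j oe
#PBS -l nodes=1:ppn=28,walltime=01:00:00
#PBS -N my_test

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR

module load shared
module load lammps/intel/17Nov2016

# launch 28 MPI ranks, one per core on a 28-core node
mpirun -np 28 lmp_intel < in.melt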

 

LAMMPS GPU

The following script can be used as a template for LAMMPS GPU submission:

#!/bin/bash

#PBS -j oe
#PBS -l nodes=1:ppn=28,walltime=01:00:00
#PBS -N my_test
#PBS -q gpu

# create and enter a working directory for this run
mkdir -p $HOME/lammps/gpu_test
cd $_

module load shared
module load mvapich2/gcc/64/2.2rc1
module load lammps/gpu/11Aug17

# copy the example input file into the working directory
cp $LAMMPS_EXAMPLES/in.lj .

# 2 OpenMP threads per MPI rank; disable MVAPICH2 core affinity
# so the threads are not pinned to a single core
export OMP_NUM_THREADS=2
export MV2_ENABLE_AFFINITY=0

# 28 MPI ranks with GPU acceleration (-sf gpu) across 8 GPUs (-pk gpu 8)
mpirun -np 28 lmp_gpu -sf gpu -pk gpu 8 < in.lj 1> out.txt 2> err.txt

The OMP_NUM_THREADS value and the GPU count given to -pk gpu can be adjusted to suit your needs.
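If you are unsure how many GPUs a node provides before picking the -pk gpu count, you can list them from within a job on that node (assuming NVIDIA hardware):

nvidia-smi -L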


Visualizing the Results

Several tools exist for visualizing the results of the above calculation. For quick visualizations directly on the cluster, we recommend an application such as OVITO: https://ovito.org/

The output file from the above calculation, dump.melt.gz, can be visualized by first loading the ovito module, then calling ovito:

module load ovito/2.8.2
ovito dump.melt.gz
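Note that OVITO is a graphical application, so connect to the cluster with X11 forwarding enabled (the login hostname placeholder below is illustrative):

ssh -X <username>@<seawulf_login_host>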

To view successive iterations as an animation, check the "File contains time series" box in OVITO's import panel.

OVITO should then display the particle configuration from dump.melt.gz and let you step through the frames of the simulation.


Getting Help


The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.
