Run 1-dimensional umbrella sampling
===================================

| Erika McCarthy\ :sup:`1`, Şölen Ekesan\ :sup:`1`, and Darrin M. York\ :sup:`1`
| :sup:`1`\ Laboratory for Biomolecular Simulation Research, Institute for Quantitative Biomedicine and Department of Chemistry and Chemical Biology, Rutgers University, Piscataway, NJ 08854, USA

Learning objectives
-------------------

- Run umbrella sampling for a 1D methyl transfer reaction within MTR1

Relevant literature
-------------------

- `Catalytic mechanism and pH dependence of a methyltransferase ribozyme (MTR1) from computational enzymology `__
- `Surface-Accelerated String Method for Locating Minimum Free Energy Paths `__
- `Extension of the Variational Free Energy Profile and Multistate Bennett Acceptance Ratio Methods for High-Dimensional Potential of Mean Force Profile Analysis `__

Running the umbrella sampling
-----------------------------

After generating the windows, it is best practice to exclude the first ~2 ps of production sampling from analysis to allow for proper equilibration. The equilibration region can be checked using ndfes-CheckEquil.py. In this tutorial you will only perform enough sampling to generate a PMF, but more sampling may be performed.

Now we will perform umbrella sampling on all of the umbrella windows in parallel. Using the Expanse cluster as an example, a full compute node can run up to 128 tasks. With 32 umbrella windows, we will assign 4 tasks per window using a groupfile. You have also been provided a slurm script called run_1d.slurm.
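The 32 groupfile entries follow a single repeating pattern, so rather than typing them by hand they can be generated with a short loop. A minimal sketch, assuming the init01.rst7 ... init32.rst7 restarts and imgNN.mdin inputs from the window-generation step are in the current directory:

.. code-block:: bash

    #!/bin/bash
    # Generate one groupfile line per umbrella window (01..32).
    # seq -w zero-pads the index to match the two-digit filenames.
    PARM="../template/qmmm.parm7"
    > template.groupfile
    for i in $(seq -w 1 32); do
        echo "-O -p ${PARM} -c init${i}.rst7 -i img${i}.mdin -o img${i}.mdout -r img${i}.rst7 -x img${i}.nc -inf img${i}.mdinfo" >> template.groupfile
    done

The number of lines in the groupfile must match the ``-ng 32`` argument passed to sander.MPI in the run script below.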
Take a look at run_1d.slurm:

.. code-block:: bash

    #!/bin/bash
    #SBATCH --job-name="sim1d"
    #SBATCH --output="%sim1d.slurmout"
    #SBATCH --error="%sim1d.slurmerr"
    #SBATCH --partition=compute
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=128
    #SBATCH --mem=200GB
    #SBATCH --cpus-per-task=1
    #SBATCH --requeue
    #SBATCH --export=ALL
    #SBATCH -t 2-00:00:00
    #SBATCH --account=gue998
    #SBATCH --reservation=amber24

    module load workshop/amber24/default

    export LAUNCH="srun --mpi=pmi2 -K1 -N1 -n128 -c1 --exclusive sander.MPI"
    #export PARM="../template/qmmm.parm7"

    set -e
    set -u

    $LAUNCH -ng 32 -groupfile template.groupfile

    wait
    sleep 1

Take a look at the groupfile:

.. code-block:: bash

    -O -p ../template/qmmm.parm7 -c init01.rst7 -i img01.mdin -o img01.mdout -r img01.rst7 -x img01.nc -inf img01.mdinfo
    -O -p ../template/qmmm.parm7 -c init02.rst7 -i img02.mdin -o img02.mdout -r img02.rst7 -x img02.nc -inf img02.mdinfo
    -O -p ../template/qmmm.parm7 -c init03.rst7 -i img03.mdin -o img03.mdout -r img03.rst7 -x img03.nc -inf img03.mdinfo
    -O -p ../template/qmmm.parm7 -c init04.rst7 -i img04.mdin -o img04.mdout -r img04.rst7 -x img04.nc -inf img04.mdinfo
    -O -p ../template/qmmm.parm7 -c init05.rst7 -i img05.mdin -o img05.mdout -r img05.rst7 -x img05.nc -inf img05.mdinfo
    -O -p ../template/qmmm.parm7 -c init06.rst7 -i img06.mdin -o img06.mdout -r img06.rst7 -x img06.nc -inf img06.mdinfo
    -O -p ../template/qmmm.parm7 -c init07.rst7 -i img07.mdin -o img07.mdout -r img07.rst7 -x img07.nc -inf img07.mdinfo
    -O -p ../template/qmmm.parm7 -c init08.rst7 -i img08.mdin -o img08.mdout -r img08.rst7 -x img08.nc -inf img08.mdinfo
    -O -p ../template/qmmm.parm7 -c init09.rst7 -i img09.mdin -o img09.mdout -r img09.rst7 -x img09.nc -inf img09.mdinfo
    -O -p ../template/qmmm.parm7 -c init10.rst7 -i img10.mdin -o img10.mdout -r img10.rst7 -x img10.nc -inf img10.mdinfo
    -O -p ../template/qmmm.parm7 -c init11.rst7 -i img11.mdin -o img11.mdout -r img11.rst7 -x img11.nc -inf img11.mdinfo
    -O -p ../template/qmmm.parm7 -c init12.rst7 -i img12.mdin -o img12.mdout -r img12.rst7 -x img12.nc -inf img12.mdinfo
    -O -p ../template/qmmm.parm7 -c init13.rst7 -i img13.mdin -o img13.mdout -r img13.rst7 -x img13.nc -inf img13.mdinfo
    -O -p ../template/qmmm.parm7 -c init14.rst7 -i img14.mdin -o img14.mdout -r img14.rst7 -x img14.nc -inf img14.mdinfo
    -O -p ../template/qmmm.parm7 -c init15.rst7 -i img15.mdin -o img15.mdout -r img15.rst7 -x img15.nc -inf img15.mdinfo
    -O -p ../template/qmmm.parm7 -c init16.rst7 -i img16.mdin -o img16.mdout -r img16.rst7 -x img16.nc -inf img16.mdinfo
    -O -p ../template/qmmm.parm7 -c init17.rst7 -i img17.mdin -o img17.mdout -r img17.rst7 -x img17.nc -inf img17.mdinfo
    -O -p ../template/qmmm.parm7 -c init18.rst7 -i img18.mdin -o img18.mdout -r img18.rst7 -x img18.nc -inf img18.mdinfo
    -O -p ../template/qmmm.parm7 -c init19.rst7 -i img19.mdin -o img19.mdout -r img19.rst7 -x img19.nc -inf img19.mdinfo
    -O -p ../template/qmmm.parm7 -c init20.rst7 -i img20.mdin -o img20.mdout -r img20.rst7 -x img20.nc -inf img20.mdinfo
    -O -p ../template/qmmm.parm7 -c init21.rst7 -i img21.mdin -o img21.mdout -r img21.rst7 -x img21.nc -inf img21.mdinfo
    -O -p ../template/qmmm.parm7 -c init22.rst7 -i img22.mdin -o img22.mdout -r img22.rst7 -x img22.nc -inf img22.mdinfo
    -O -p ../template/qmmm.parm7 -c init23.rst7 -i img23.mdin -o img23.mdout -r img23.rst7 -x img23.nc -inf img23.mdinfo
    -O -p ../template/qmmm.parm7 -c init24.rst7 -i img24.mdin -o img24.mdout -r img24.rst7 -x img24.nc -inf img24.mdinfo
    -O -p ../template/qmmm.parm7 -c init25.rst7 -i img25.mdin -o img25.mdout -r img25.rst7 -x img25.nc -inf img25.mdinfo
    -O -p ../template/qmmm.parm7 -c init26.rst7 -i img26.mdin -o img26.mdout -r img26.rst7 -x img26.nc -inf img26.mdinfo
    -O -p ../template/qmmm.parm7 -c init27.rst7 -i img27.mdin -o img27.mdout -r img27.rst7 -x img27.nc -inf img27.mdinfo
    -O -p ../template/qmmm.parm7 -c init28.rst7 -i img28.mdin -o img28.mdout -r img28.rst7 -x img28.nc -inf img28.mdinfo
    -O -p ../template/qmmm.parm7 -c init29.rst7 -i img29.mdin -o img29.mdout -r img29.rst7 -x img29.nc -inf img29.mdinfo
    -O -p ../template/qmmm.parm7 -c init30.rst7 -i img30.mdin -o img30.mdout -r img30.rst7 -x img30.nc -inf img30.mdinfo
    -O -p ../template/qmmm.parm7 -c init31.rst7 -i img31.mdin -o img31.mdout -r img31.rst7 -x img31.nc -inf img31.mdinfo
    -O -p ../template/qmmm.parm7 -c init32.rst7 -i img32.mdin -o img32.mdout -r img32.rst7 -x img32.nc -inf img32.mdinfo

This is similar to executing multiple backgrounded (&) sander.MPI processes, but it also gives us the capability to run replica exchange, if desired, by adding the -rem option.

Navigate to your trial directory and submit the run script:

.. code-block:: bash

    [user@cluster] cd t1
    [user@cluster] sbatch run_1d.slurm
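To make the backgrounded-process analogy concrete, the sketch below is a dry run only: it prints the roughly equivalent set of 32 independent sander.MPI launches (writing them to a hypothetical equiv_commands.txt for inspection) rather than executing anything. In practice you should use the groupfile route above, which is also what enables replica exchange via -rem.

.. code-block:: bash

    #!/bin/bash
    # Dry run: print the backgrounded-process equivalent of the groupfile,
    # one "mpirun ... sander.MPI ... &" command per umbrella window.
    # Nothing is executed; the commands are written to equiv_commands.txt.
    for i in $(seq -w 1 32); do
        echo mpirun -np 4 sander.MPI -O -p ../template/qmmm.parm7 \
             -c "init${i}.rst7" -i "img${i}.mdin" -o "img${i}.mdout" \
             -r "img${i}.rst7" -x "img${i}.nc" -inf "img${i}.mdinfo" '&'
    done > equiv_commands.txt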