Cluster:Gaussian
Web interface: [[WebMO]]

Users for WebMO

== Older notes, c. 2005 ==
* [[Cluster:Gaussian environment|Gaussian environment]]
* [[Cluster:Running Gaussian in parallel|Running Gaussian in parallel]]
* [[Cluster:Gaussian PBS script|Sample Gaussian PBS script]]

=== Gaussian environment ===
I use this in my .bashrc:

 export GAUSS_EXEDIR=/cluster/bazaar/software/g03
 export GAUSS_LFLAGS='-nodelist "b0 b1" -opt "Tsnet.Node.lindarsharg: ssh " -mp 2'
 export GAUSS_SCRDIR=/tmp
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/cluster/bazaar/software/g03
 export g03root=/cluster/bazaar/software
 export PATH=$PATH:/cluster/bazaar/software/g03

Seems to work, although you might want to tweak the node list.
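
To confirm the settings took effect after re-sourcing your .bashrc, something like the following should do (the paths are the ones shown above; this is just a sanity check, not part of the setup itself):

 source ~/.bashrc
 echo "$GAUSS_EXEDIR"             # should print /cluster/bazaar/software/g03
 ls "$GAUSS_EXEDIR"/bsd/g03l      # the Linda-parallel executable used below
 echo "$GAUSS_LFLAGS"             # double-check the node list before submitting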

=== Running Gaussian in parallel ===
Understand the distinction between <tt>nprocs</tt> and <tt>nprocl</tt>: <tt>nprocs</tt> is for a shared-memory (single-image) machine, whereas <tt>nprocl</tt> is for a distributed-memory (cluster) system. You want to use <tt>nprocl</tt> on all our computing systems. For example, if you want to run on eight processors (four dual-processor nodes), add a line like this

 %nprocl=8

to your input file. You should then run <tt>qsub -l nodes=4:ppn=2 ''script''</tt> on the command line to submit your PBS script with a requirement of four dual-processor machines.
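
For reference, the directive goes in the Link 0 section at the very top of the input file. A minimal sketch (the route section, title, and water geometry are made up for illustration; only the <tt>%nprocl</tt> line is the point here):

 %nprocl=8
 %chk=water.chk
 # HF/6-31G(d) opt
 
 Water optimization (hypothetical example)
 
 0 1
 O   0.000000   0.000000   0.000000
 H   0.000000   0.757000   0.586000
 H   0.000000  -0.757000   0.586000
 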

Once you have the number of CPUs specified in the input file, make sure that your nodelist in GAUSS_LFLAGS is set properly. See [[Cluster:Gaussian_environment|Gaussian environment]] for details.
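
For the eight-processor example above that means listing all four machines. Assuming the job landed on nodes b0 through b3 (hypothetical host names, following the pattern in the environment section), the setting would look like:

 export GAUSS_LFLAGS='-nodelist "b0 b1 b2 b3" -opt "Tsnet.Node.lindarsharg: ssh " -mp 2'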

If you have a bunch of input files you want to modify, use the add_nproc.pl script in /cluster/generic/bin. First cd to the directory with your input files, and then do:

 perl /cluster/generic/bin/add_nproc.pl ''n''

where ''n'' is the number of processors you want to run on.
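
I have not dug into exactly what add_nproc.pl rewrites; assuming it simply adds the Link 0 directive to each input file in the directory, a rough shell equivalent would be something like:

 n=8                                     # processors, as in the example above
 for f in *.com; do
     # prepend %nprocl=n unless the file already carries an nproc directive
     grep -qi '^%nproc' "$f" || sed -i "1i %nprocl=$n" "$f"
 done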

After that, fire it up with <tt>$g03root/g03/bsd/g03l</tt>, feeding it the input file on stdin as in the PBS script below.

=== Sample Gaussian PBS script ===

This is a sample PBS script for Gaussian:

 #!/bin/sh
 #PBS -N gaussian_test061.com_nodes=2:ppn=2
 #PBS -o /cluster/home/skylar/bazaar/gaussian_nodes2/test061.com.out
 #PBS -e /cluster/home/skylar/bazaar/gaussian_nodes2/test061.com.err
 #PBS -q batch
 #PBS -m abe
 #PBS -l nodes=2:ppn=2
 
 # Run from the directory the job was submitted from
 cd $PBS_O_WORKDIR
 /cluster/bazaar/software/g03/bsd/g03l < /cluster/bazaar/software/g03/tests/com_smp4/test061.com > /cluster/home/skylar/bazaar/gaussian_nodes2/test061.com.log
 
 # Pass Gaussian's exit status back to PBS
 exit $?
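
If the script is saved as, say, <tt>test061.pbs</tt> (the name is arbitrary), submit it with:

 qsub test061.pbs

The <tt>#PBS -l nodes=2:ppn=2</tt> line inside the script already carries the resource request, so no extra options are needed on the command line.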