Getting started on clusters
Prerequisites
- Get a cluster account. You can email admin@cs.earlham.edu or a current CS faculty member to get started. Your user account will grant access to all the servers below, and you will have a home directory at ~username that you can access when you connect to any of them.
- Connect through a terminal via ssh to username@hopper.cluster.earlham.edu. If you intend to work with these machines a lot, you should also configure your ssh keys (see the sketch after this list).
Cluster systems to choose from
The cluster.earlham.edu domain consists of clusters (a collection of physical servers linked through a switch to perform high-performance computing tasks with distributed memory) and jumbo servers (previously "phat nodes"; a system comprising one physical server with a high ratio of disk+RAM to CPU, good for jobs demanding shared memory).
Our current machines are:
- whedon: newest cluster; 8 compute nodes
- layout: cluster, older than whedon; 4 compute nodes featuring NVIDIA GPGPUs and multiple CUDA options
- lovelace: newest jumbo server
- pollock: jumbo server, older than lovelace but well-tested and featuring the most available disk space
To get to, e.g., whedon from hopper, run ssh whedon.
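If you connect often, an entry in your local ssh client config can make the hop through hopper automatic. This is a sketch that assumes hopper acts as the gateway to the other machines and that your OpenSSH client supports ProxyJump; adjust the usernames to your own:

    # ~/.ssh/config on your own machine (illustrative)
    Host hopper
        HostName hopper.cluster.earlham.edu
        User username

    Host whedon
        # "whedon" is resolved from hopper, which serves as the jump host.
        HostName whedon
        User username
        ProxyJump hopper

With this in place, ssh whedon from your own machine lands you on whedon in one step.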
Cluster software notes
The cluster.earlham.edu servers all run a supported CentOS version.
All these servers (unless otherwise noted) also feature the following software:
- Torque (scheduler): submit a job with qsub jobname.qsub, delete it with qdel jobID (see the Torque docs for details). A sample job script is sketched after this list.
- Environment modules: run module avail to see available software modules and module load modulename to load one; you may also load modules in bash scripts and qsub jobs.
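As a concrete illustration, a minimal Torque job script might look like the sketch below. The job name, resource request, module name, and program are placeholders to adjust for your own work:

    #!/bin/bash
    # jobname.qsub -- minimal Torque job script (values are examples)
    #PBS -N example-job
    #PBS -l nodes=1:ppn=4
    #PBS -l walltime=01:00:00

    # Run from the directory the job was submitted from.
    cd $PBS_O_WORKDIR

    # Load whatever software module your program needs (placeholder name).
    module load modulename

    # Replace with your actual program.
    ./my_program

Submit it with qsub jobname.qsub; the job ID that qsub prints is what you pass to qdel if you need to cancel the job.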
The default shell on all these servers is bash.
The default Python version on all these servers is Python 2.x, but each server also has at least one Python 3 module with a collection of scientific computing libraries.
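To work in Python 3, you would typically list the available modules and load one; the module name below is a placeholder, so check module avail on the machine you are using:

    # See what software modules exist on this server.
    module avail

    # Load a Python 3 module (the exact name/version varies by machine).
    module load python3

    # Confirm which interpreter is now on your PATH.
    python3 --version
    which python3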