Difference between revisions of "Sysadmin"

From Earlham CS Department

Revision as of 14:07, 20 September 2021

This is the hub for the CS sysadmins on the wiki.


If you're visually inclined, we have a colorful and easy-to-edit map of our servers here!

Server room

Our servers are in Noyes, the science building that predates the CST. For general information about the server room and how to use it, check out this page.

Compute (servers and clusters)

We have CS and cluster machines.

CS machines:

  • bowie: hosts and exports user files; Jupyterhub; landing server
  • smiley: VM host, not accessible to regular users
  • web: website host
  • net: network administration host for CS
  • code: GitLab host
  • auth: host of the LDAP user database

Cluster machines:

  • hopper: landing server, NFS host for cluster
  • bronte, pollock, lovelace: large compute servers
  • layout, wachowski: clusters of multiple nodes linked through a switch and managed from a head node
  • meier, miyamoto, sakurai: backup servers
  • monitor: server monitoring

We have spare nodes in the old al-salam cluster’s rack. These should only host services that can tolerate minutes to hours of downtime, since each node has just one power supply.

Specialized resources

Specialized computing applications are supported on the following machines:


We have two network fabrics linking the machines together. There are three subdomains.

10 Gb

We have a 10Gb fabric for mounting files over NFS. Machines with 10Gb support have an IP address in the class C range, and we want to add DNS entries for these addresses.
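As a sketch of what an NFS mount over the 10Gb fabric might look like (the hostname, export path, and mount point below are hypothetical, not our actual configuration):

```shell
# Hypothetical example: mounting a home-directory export over the
# 10Gb fabric. "bowie-10g", "/exports/home", and "/home" are
# illustrative names, not our real server or paths.
sudo mount -t nfs -o vers=4,rw bowie-10g:/exports/home /home

# The equivalent /etc/fstab line, so the mount persists across reboots:
# bowie-10g:/exports/home  /home  nfs  vers=4,rw  0  0
```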

1 Gb (cluster, cs)

We have two class C subnets on the 1Gb fabric: (CS) and (cluster). This means we have twice as many IP addresses on the 1Gb fabric as on the 10Gb fabric.

Any user accessing *.cluster.earlham.edu and *.cs.earlham.edu is making calls on a 1Gb network.
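As a quick back-of-the-envelope check on the address counts (the /24 size is standard for a class C subnet; the specific subnets are not shown here):

```shell
# A class C (/24) subnet has 2^8 = 256 addresses. Subtracting the
# network and broadcast addresses leaves 254 usable hosts.
per_subnet=$(( (1 << 8) - 2 ))

# Two /24s on the 1Gb fabric vs. one on the 10Gb fabric:
echo "usable addresses per /24:     ${per_subnet}"        # 254
echo "usable addresses, 1Gb fabric: $(( 2 * per_subnet ))" # 508
```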

Intra-cluster fabrics

The layout cluster has an InfiniBand interconnect; wachowski has only a 1Gb interconnect.


We have a backup power supply, with batteries last upgraded in 2019 (?). We’ve had a few outages since then and power has held up well.


HVAC systems are static and are largely managed by Facilities.

See full topology diagrams here.

A word about what's happening between files and the drives they live on.

New sysadmins

These pages will be helpful for you if you're just starting in the group:

Note: you'll need to log in with wiki credentials to see most Sysadmin pages.

Additional information

These pages contain a lot of the most important information about our systems and how we operate.

Technical docs

Common tasks

Group and institution information