Sysadmin

This is the hub for the CS sysadmins on the wiki.

Overview

If you're visually inclined, we have a colorful and easy-to-edit map of our servers here!

Server room

Our servers are in Noyes, the science building that predates the CST. For general information about the server room and how to use it, check out this page.

Columns: machine name, IP addresses, type (metal or virtual), purpose, dies, cores, RAM

Compute (servers and clusters)

{| class="wikitable"
|+ CS machines and cluster machines
! Machine name !! IP addresses !! Metal or Virtual !! Description
|-
| Bowie || fill in || Metal || hosts and exports user files; JupyterHub; landing server
|-
| Smiley || fill in || Metal || VM host, not accessible to regular users
|-
| Web || fill in || Virtual || website host
|-
| Auth || fill in || Virtual || host of the LDAP user database
|-
| Code || fill in || Virtual || GitLab host
|-
| Net || fill in || Virtual || network administration host for CS
|-
| Lovelace || fill in || Metal || Example
|-
| Hopper || fill in || Metal || landing server, NFS host for the cluster
|-
| Sakurai || fill in || Metal || Example
|-
| HopperPrime || fill in || Metal || runs backups
|-
| Monitor || fill in || Metal || server monitoring
|-
| Bronte || fill in || Metal || Example
|-
| Layout 0 || fill in || Metal || Example
|-
| Layout 3 || fill in || Metal || Example
|-
| Layout 1 || fill in || Metal || Example
|-
| Layout 2 || fill in || Metal || Example
|-
| Whedon || fill in || Metal || Example
|-
| Pollock || fill in || Metal || Example
|}

CS machines: bowie.cs.earlham.edu, web.cs.earlham.edu, auth.cs.earlham.edu, code.cs.earlham.edu, net.cs.earlham.edu

Cluster machines: lovelace.cluster.earlham.edu, sakurai.cluster.earlham.edu, bronte.cluster.earlham.edu, whedon.cluster.earlham.edu, pollock.cluster.earlham.edu, layout.cluster.earlham.edu, monitor.cluster.earlham.edu
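
If you want to verify these names or start filling in the IP address column of the table above, a minimal Python sketch like the one below will do it from any machine that can reach the campus DNS. The hostname list simply mirrors the two lines above; nothing else in the sketch is specific to our setup.

<syntaxhighlight lang="python">
# Resolve the CS and cluster hostnames listed above and print the address
# each one currently maps to. Names that do not resolve are flagged.
import socket

HOSTS = [
    "bowie.cs.earlham.edu", "web.cs.earlham.edu", "auth.cs.earlham.edu",
    "code.cs.earlham.edu", "net.cs.earlham.edu",
    "lovelace.cluster.earlham.edu", "sakurai.cluster.earlham.edu",
    "bronte.cluster.earlham.edu", "whedon.cluster.earlham.edu",
    "pollock.cluster.earlham.edu", "layout.cluster.earlham.edu",
    "monitor.cluster.earlham.edu",
]

for host in HOSTS:
    try:
        print(f"{host:35s} {socket.gethostbyname(host)}")
    except socket.gaierror:
        print(f"{host:35s} (does not resolve)")
</syntaxhighlight>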


There are six machines currently not in use in the six spaces above Monitor on the Equatorial Guinea rack.

Specialized resources

Specialized computing applications are supported on the following machines:

Network

We have two network fabrics linking the machines together. There are three subdomains.

10 Gb

We have a 10Gb fabric used to mount files over NFS. Machines with 10Gb support have an IP address in the class C range 10.10.10.0/24; we want to add DNS entries for these addresses.
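
As a quick illustration of what the class C range means in practice, here is a short Python sketch; only the 10.10.10.0/24 subnet above comes from this page, and the sample addresses are made up. It shows how many host addresses the 10Gb fabric provides and how to test whether a given address is on it.

<syntaxhighlight lang="python">
# Sketch: the 10Gb storage fabric is the class C network 10.10.10.0/24.
import ipaddress

ten_gb = ipaddress.ip_network("10.10.10.0/24")

# 256 addresses total, 254 usable once network and broadcast are excluded.
print(ten_gb.num_addresses, ten_gb.num_addresses - 2)

# Membership test: is a given address on the 10Gb fabric?
print(ipaddress.ip_address("10.10.10.42") in ten_gb)    # True  (made-up 10Gb address)
print(ipaddress.ip_address("159.28.22.42") in ten_gb)   # False (a 1Gb-range address)
</syntaxhighlight>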

1 Gb (cluster, cs)

We have two class C subnets on the 1Gb fabric: 159.28.22.0/24 (CS) and 159.28.23.0/24 (cluster). This means we have twice as many IP addresses on the 1Gb fabric as on the 10Gb fabric.

Any user accessing *.cluster.earlham.edu or *.cs.earlham.edu is making calls over the 1Gb network.
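
To check which 1Gb subnet a given machine sits on, a sketch along these lines works; the two subnets are the ones listed above, the hostname is one from this page, and the rest is illustrative.

<syntaxhighlight lang="python">
# Sketch: resolve a hostname and report which 1Gb subnet its address falls in.
import ipaddress
import socket

SUBNETS = {
    "CS (159.28.22.0/24)": ipaddress.ip_network("159.28.22.0/24"),
    "cluster (159.28.23.0/24)": ipaddress.ip_network("159.28.23.0/24"),
}

addr = ipaddress.ip_address(socket.gethostbyname("lovelace.cluster.earlham.edu"))
matches = [label for label, net in SUBNETS.items() if addr in net]
print(addr, "->", matches[0] if matches else "not on a 1Gb subnet")
</syntaxhighlight>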

Intra-cluster fabrics

The Layout cluster has an InfiniBand fabric; Wachowski has only a 1Gb fabric.

Power

We have a backup power supply; its batteries were last upgraded in 2019 (?). We've had a few outages since then and power has held up well.

HVAC

HVAC systems rarely change and are largely managed by Facilities.

See full topology diagrams here.

A word about what's happening between files and the drives they live on.


New sysadmins

These pages will be helpful for you if you're just starting in the group:

Note: you'll need to log in with wiki credentials to see most Sysadmin pages.

Additional information

These pages contain a lot of the most important information about our systems and how we operate.

Technical docs

Common tasks

Group and institution information