== Compute Resources ==

[https://wiki.cs.earlham.edu/index.php/Sysadmin:Computer_Resources Machines and VMs related information here!]
{| class="wikitable"
|+ CS machines and VMs
|-
! Machine name !! 159 IP address !! 10Gb IP address !! Operating system !! Metal or virtual !! Description !! RAM
|-
| Bowie || 159.28.22.5 || 10.10.10.15 || Debian 9 || Metal || Hosts and exports user files; JupyterHub; landing server || 198 GB
|-
| Smiley || 159.28.22.251 || 10.10.10.252 || Ubuntu 18.04 || Metal || VM host, not accessible to regular users || 156 GB
|-
| Web || 159.28.22.2 || 10.10.10.200 || Ubuntu 18.04 || Virtual || Website host || 8 GB
|-
| Auth || 159.28.22.39 || None || CentOS 7 || Virtual || Host of the LDAP user database || 4 GB
|-
| Code || 159.28.22.42 || 10.10.10.42 || Ubuntu 18.04 || Virtual || GitLab host || 8 GB
|-
| Net || 159.28.22.1 || 10.10.10.100 || Ubuntu 18.04 || Virtual || Network administration host for CS || 4 GB
|-
| Central || 159.28.22.177 || None || Debian 9 || Virtual || ODK Central host || 4 GB
|-
| Urey || 159.28.22.139 || None || XCP-ng || Metal || Sysadmin sandbox environment || 16 GB
|}
{| class="wikitable"
|+ Cluster machines
|-
! Machine name !! 159 IP address !! 10Gb IP address !! Operating system !! Metal or virtual !! Description !! RAM
|-
| Hopper || 159.28.23.1 || 10.10.10.1 || Debian 10 || Metal || Landing server, NFS host for cluster || 64 GB
|-
| Lovelace || 159.28.23.35 || 10.10.10.35 || CentOS 7 || Metal || Large compute server || 96 GB
|-
| Pollock || 159.28.23.8 || 10.10.10.8 || CentOS 7 || Metal || Large compute server || 131 GB
|-
| Bronte || 159.28.23.140 || None || CentOS 7 || Metal || Large compute server || 115 GB
|-
| Sakurai || 159.23.23.3 || 10.10.10.3 || Debian 10 || Metal || Runs backups || 12 GB
|-
| Miyamoto || 159.28.23.45 || None currently || Debian 10 || Metal || Runs backups || 16 GB
|-
| HopperPrime || 159.28.23.142 || 10.10.10.142 || Debian 10 || Metal || Runs backups || 16 GB
|-
| Monitor || 159.28.23.250 || None || Debian 11 || Metal || Server monitoring || 8 GB
|-
| Layout 0 || 159.28.23.2 || 10.10.10.2 || CentOS 7 || Metal || Head node || 32 GB
|-
| Layout 1 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Layout 2 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Layout 3 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Layout 4 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Whedon 0 || 159.28.23.4 || None || CentOS 7 || Metal || Head node || 256 GB
|-
| Whedon 1 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 2 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 3 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 4 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 5 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 6 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 7 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Hamilton 0 || 159.28.23.5 || None || Debian 11 || Metal || Head node || 128 GB
|-
| Hamilton 1 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 2 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 3 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 4 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 5 || None || None || Debian 11 || Metal || Compute node || 256 GB
|}
{| class="wikitable"
|+ Lab machines
|-
! Machine name !! 159 IP address !! Location !! Operating system !! RAM
|-
| Borg || 159.28.22.10 || Turing (CST 222) || Ubuntu 20 || 16 GB
|-
| Gao || 159.28.22.11 || Turing (CST 222) || Ubuntu 20 || 8 GB
|-
| Snyder || 159.28.22.12 || Turing (CST 222) || Ubuntu 20 || 8 GB
|-
| Goldwasser || 159.28.22.13 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Bartik || 159.28.22.14 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Wilson || 159.28.22.15 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Bilas || 159.28.22.16 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Johnson || 159.28.22.17 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Graham || 159.28.22.14 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|}
=== CS Machine Address List ===

<pre>
bowie.cs.earlham.edu
smiley.cs.earlham.edu
web.cs.earlham.edu
auth.cs.earlham.edu
code.cs.earlham.edu
net.cs.earlham.edu
central.cs.earlham.edu
urey.cs.earlham.edu
</pre>
=== Cluster Machine Address List ===

<pre>
hopper.cluster.earlham.edu
lovelace.cluster.earlham.edu
pollock.cluster.earlham.edu
bronte.cluster.earlham.edu
sakurai.cluster.earlham.edu
miyamoto.cluster.earlham.edu
hopperprime.cluster.earlham.edu
monitor.cluster.earlham.edu
whedon.cluster.earlham.edu
layout.cluster.earlham.edu
hamilton.cluster.earlham.edu
</pre>
=== Lab Machine Address List ===

<pre>
borg.cs.earlham.edu
gao.cs.earlham.edu
snyder.cs.earlham.edu
goldwasser.cs.earlham.edu
bartik.cs.earlham.edu
wilson.cs.earlham.edu
bilas.cs.earlham.edu
johnson.cs.earlham.edu
graham.cs.earlham.edu
</pre>
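When scripting against these machines (for example, health checks over SSH), it helps to expand the short names above into fully qualified domain names. A minimal Python sketch, with the host lists copied from this page (double-check them against the tables before relying on this):

```python
# Short names copied from the address lists on this page.
CS_HOSTS = ["bowie", "smiley", "web", "auth", "code", "net", "central", "urey"]
CLUSTER_HOSTS = ["hopper", "lovelace", "pollock", "bronte", "sakurai",
                 "miyamoto", "hopperprime", "monitor", "whedon", "layout",
                 "hamilton"]
LAB_HOSTS = ["borg", "gao", "snyder", "goldwasser", "bartik", "wilson",
             "bilas", "johnson", "graham"]

def fqdns(hosts, domain):
    """Expand short host names to fully qualified domain names."""
    return [f"{h}.{domain}" for h in hosts]

# CS and lab machines live under cs.earlham.edu; cluster machines under
# cluster.earlham.edu, per the address lists above.
all_addresses = (fqdns(CS_HOSTS, "cs.earlham.edu")
                 + fqdns(CLUSTER_HOSTS, "cluster.earlham.edu")
                 + fqdns(LAB_HOSTS, "cs.earlham.edu"))
```

From here a loop over <code>all_addresses</code> can drive whatever per-host check you need.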
=== Specialized resources ===

Specialized computing applications are supported on the following machines:

* [[Sysadmin:GPGPU|GPUs for AI/ML/data science]]: layout cluster
* Virtualization: smiley
* Containers: bowie
This is the hub for the CS sysadmins on the wiki.

== Overview ==

If you're visually inclined, we have a colorful and easy-to-edit map of our servers here!

== Server room ==

Our servers are in Noyes, the science building that predates the CST. For general information about the server room and how to use it, check out this page.
== Network ==

We have two network fabrics linking the machines together, and three subdomains.

=== 10 Gb ===

We use the 10Gb fabric to mount files over NFS. Machines with 10Gb support have an IP address in the class C subnet 10.10.10.0/24; we want to add DNS entries for these addresses.

=== 1 Gb (cluster, cs) ===

We have two class C subnets on the 1Gb fabric: 159.28.22.0/24 (CS) and 159.28.23.0/24 (cluster). This means we have twice as many IP addresses on the 1Gb fabric as on the 10Gb fabric.

Any user accessing *.cluster.earlham.edu or *.cs.earlham.edu is making calls over the 1Gb network.

=== Intra-cluster fabrics ===

The layout cluster has an InfiniBand infrastructure. Wachowski has only a 1Gb infrastructure.
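The subnet layout above can be checked programmatically with Python's standard <code>ipaddress</code> module. A small sketch (the fabric labels are informal, not official names):

```python
import ipaddress

# Subnets from the Network section above; labels are informal.
FABRICS = {
    "10Gb (NFS)": ipaddress.ip_network("10.10.10.0/24"),
    "1Gb CS": ipaddress.ip_network("159.28.22.0/24"),
    "1Gb cluster": ipaddress.ip_network("159.28.23.0/24"),
}

def fabric_of(addr):
    """Return the fabric label whose subnet contains addr, or None."""
    ip = ipaddress.ip_address(addr)
    for label, net in FABRICS.items():
        if ip in net:
            return label
    return None
```

For example, <code>fabric_of("159.28.23.1")</code> (hopper's 1Gb address) returns <code>"1Gb cluster"</code>, while an address outside all three subnets returns <code>None</code>.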
== Power ==

We have a backup power supply, with batteries last upgraded in 2019 (?). We've had a few outages since then and power has held up well.

== HVAC ==

HVAC systems are static and are largely managed by Facilities.

See full topology diagrams here.
[[Sysadmin:Layers of abstraction for filesystems|A word about what's happening between files and the drives they live on.]]

= New sysadmins =

These pages will be helpful for you if you're just starting in the group:

Note: you'll need to log in with wiki credentials to see most Sysadmin pages.

= Additional information =

These pages contain a lot of the most important information about our systems and how we operate.

== Technical docs ==

== Common tasks ==

== Group and institution information ==