This is the hub for the CS sysadmins on the wiki.
= Overview =
[https://docs.google.com/drawings/d/1XaULz5IxXV_BZQjrko3QJ8wV5aXsSTYcSWxxT49OyZk/edit If you're visually inclined, we have a colorful and easy-to-edit map of our servers here!]
  
== Server room ==
  
Our servers are in Noyes, the science building that predates the CST. For general information about the server room and how to use it, check out [[Sysadmin:Server Room|this page]].
  
The tables below list each machine's name, its IP addresses (159.28.x and 10Gb), its operating system, whether it is metal or virtual, its purpose, and its RAM.
== Compute Resources ==
{| class="wikitable"
|+ CS machines and VMs
|-
! Machine name !! 159 IP address !! 10Gb IP address !! Operating System !! Metal or Virtual !! Description !! RAM
|-
| Bowie || 159.28.22.5 || 10.10.10.15 || Debian 9 || Metal || Hosts and exports user files; Jupyterhub; landing server || 198 GB
|-
| Smiley || 159.28.22.251 || 10.10.10.252 || Ubuntu 18.04 || Metal || VM host, not accessible to regular users || 156 GB
|-
| Web || 159.28.22.2 || 10.10.10.200 || Ubuntu 18.04 || Virtual || Website host || 8 GB
|-
| Auth || 159.28.22.39 || None || CentOS 7 || Virtual || Host of the LDAP user database || 4 GB
|-
| Code || 159.28.22.42 || 10.10.10.42 || Ubuntu 18.04 || Virtual || GitLab host || 8 GB
|-
| Net || 159.28.22.1 || 10.10.10.100 || Ubuntu 18.04 || Virtual || Network administration host for CS || 4 GB
|-
| Central || 159.28.22.177 || None || Debian 9 || Virtual || ODK Central host || 4 GB
|-
| Urey || 159.28.22.139 || None || XCP-ng || Metal || Sysadmin sandbox environment || 16 GB
|}
  
{| class="wikitable"
|+ Cluster machines
|-
! Machine name !! 159 IP address !! 10Gb IP address !! Operating System !! Metal or Virtual !! Description !! RAM
|-
| Hopper || 159.28.23.1 || 10.10.10.1 || Debian 10 || Metal || Landing server, NFS host for cluster || 64 GB
|-
| Lovelace || 159.28.23.35 || 10.10.10.35 || CentOS 7 || Metal || Large compute server || 96 GB
|-
| Pollock || 159.28.23.8 || 10.10.10.8 || CentOS 7 || Metal || Large compute server || 131 GB
|-
| Bronte || 159.28.23.140 || None || CentOS 7 || Metal || Large compute server || 115 GB
|-
| Sakurai || 159.28.23.3 || 10.10.10.3 || Debian 10 || Metal || Runs backup || 12 GB
|-
| Miyamoto || 159.28.23.45 || None currently || Debian 10 || Metal || Runs backup || 16 GB
|-
| HopperPrime || 159.28.23.142 || 10.10.10.142 || Debian 10 || Metal || Runs backup || 16 GB
|-
| Monitor || 159.28.23.250 || None || Debian 11 || Metal || Server monitoring || 8 GB
|-
| Layout 0 || 159.28.23.2 || 10.10.10.2 || CentOS 7 || Metal || Head node || 32 GB
|-
| Layout 1 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Layout 2 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Layout 3 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Layout 4 || None || None || CentOS 7 || Metal || Compute node || 32 GB
|-
| Whedon 0 || 159.28.23.4 || None || CentOS 7 || Metal || Head node || 256 GB
|-
| Whedon 1 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 2 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 3 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 4 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 5 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 6 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Whedon 7 || None || None || CentOS 7 || Metal || Compute node || 256 GB
|-
| Hamilton 0 || 159.28.23.5 || None || Debian 11 || Metal || Head node || 128 GB
|-
| Hamilton 1 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 2 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 3 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 4 || None || None || Debian 11 || Metal || Compute node || 256 GB
|-
| Hamilton 5 || None || None || Debian 11 || Metal || Compute node || 256 GB
|}
  
{| class="wikitable"
|+ Lab machines
|-
! Machine name !! 159 IP address !! Location !! Operating System !! RAM
|-
| Borg || 159.28.22.10 || Turing (CST 222) || Ubuntu 20 || 16 GB
|-
| Gao || 159.28.22.11 || Turing (CST 222) || Ubuntu 20 || 8 GB
|-
| Snyder || 159.28.22.12 || Turing (CST 222) || Ubuntu 20 || 8 GB
|-
| Goldwasser || 159.28.22.13 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Bartik || 159.28.22.14 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Wilson || 159.28.22.15 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Bilas || 159.28.22.16 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Johnson || 159.28.22.17 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|-
| Graham || 159.28.22.14 || Lovelace (CST 219) || Ubuntu 20 || 8 GB
|}
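
The 10Gb addresses don't have DNS names yet (see the Network section below), so they occasionally have to be spelled out by hand. Here is a minimal, illustrative Python sketch that turns the 10Gb column of the tables above into /etc/hosts-style lines; the address pairs are hand-copied from the tables, and the "-10g" name suffix is an invented convention, not something our systems actually use.

<pre>
#!/usr/bin/env python3
"""Emit /etc/hosts-style lines for the 10Gb fabric.

Illustrative sketch only: the address pairs below are hand-copied from
the tables on this page, and the "-10g" suffix is a made-up convention.
"""

# hostname -> 10Gb address (machines without a 10Gb interface omitted)
TEN_GB = {
    "bowie": "10.10.10.15",
    "smiley": "10.10.10.252",
    "web": "10.10.10.200",
    "code": "10.10.10.42",
    "net": "10.10.10.100",
    "hopper": "10.10.10.1",
    "lovelace": "10.10.10.35",
    "pollock": "10.10.10.8",
    "sakurai": "10.10.10.3",
    "hopperprime": "10.10.10.142",
    "layout0": "10.10.10.2",
}

for host, addr in sorted(TEN_GB.items()):
    # e.g. "10.10.10.15     bowie-10g"
    print(f"{addr:<15} {host}-10g")
</pre>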
  
=== CS Machine Address List ===
<pre>
bowie.cs.earlham.edu
smiley.cs.earlham.edu
web.cs.earlham.edu
auth.cs.earlham.edu
code.cs.earlham.edu
net.cs.earlham.edu
central.cs.earlham.edu
urey.cs.earlham.edu
</pre>
=== Cluster Machine Address List ===
<pre>
hopper.cluster.earlham.edu
lovelace.cluster.earlham.edu
pollock.cluster.earlham.edu
bronte.cluster.earlham.edu
sakurai.cluster.earlham.edu
miyamoto.cluster.earlham.edu
hopperprime.cluster.earlham.edu
monitor.cluster.earlham.edu
whedon.cluster.earlham.edu
layout.cluster.earlham.edu
hamilton.cluster.earlham.edu
</pre>
=== Lab Machine Address List ===
<pre>
borg.cs.earlham.edu
gao.cs.earlham.edu
snyder.cs.earlham.edu
goldwasser.cs.earlham.edu
bartik.cs.earlham.edu
wilson.cs.earlham.edu
bilas.cs.earlham.edu
johnson.cs.earlham.edu
graham.cs.earlham.edu
</pre>
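
To confirm that every name in the three address lists above still resolves, a quick sketch like this one works (standard-library Python only; the hostnames are copied verbatim from the lists, nothing else is assumed):

<pre>
#!/usr/bin/env python3
"""Check that each host in the address lists above resolves in DNS."""
import socket

HOSTS = [
    # CS machines
    "bowie.cs.earlham.edu", "smiley.cs.earlham.edu", "web.cs.earlham.edu",
    "auth.cs.earlham.edu", "code.cs.earlham.edu", "net.cs.earlham.edu",
    "central.cs.earlham.edu", "urey.cs.earlham.edu",
    # Cluster machines
    "hopper.cluster.earlham.edu", "lovelace.cluster.earlham.edu",
    "pollock.cluster.earlham.edu", "bronte.cluster.earlham.edu",
    "sakurai.cluster.earlham.edu", "miyamoto.cluster.earlham.edu",
    "hopperprime.cluster.earlham.edu", "monitor.cluster.earlham.edu",
    "whedon.cluster.earlham.edu", "layout.cluster.earlham.edu",
    "hamilton.cluster.earlham.edu",
    # Lab machines
    "borg.cs.earlham.edu", "gao.cs.earlham.edu", "snyder.cs.earlham.edu",
    "goldwasser.cs.earlham.edu", "bartik.cs.earlham.edu",
    "wilson.cs.earlham.edu", "bilas.cs.earlham.edu",
    "johnson.cs.earlham.edu", "graham.cs.earlham.edu",
]

for fqdn in HOSTS:
    try:
        print(f"{fqdn:35} -> {socket.gethostbyname(fqdn)}")
    except socket.gaierror as err:
        print(f"{fqdn:35} -> FAILED: {err}")
</pre>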
  
=== Specialized resources ===
  
 
Specialized computing applications are supported on the following machines:
 
* containers: bowie
  
== Network ==
  
 
We have two network fabrics linking the machines together, with three subnets across them.
  
=== 10 Gb ===
  
 
We have a 10Gb fabric used for mounting files over NFS. Machines with 10Gb support have an IP address in the class C range 10.10.10.0/24; we want to add DNS records for these addresses.
  
=== 1 Gb (cluster, cs) ===
  
 
We have two class C subnets on the 1Gb fabric: 159.28.22.0/24 (CS) and 159.28.23.0/24 (cluster). This gives us twice as many IP addresses on the 1Gb fabric as on the 10Gb fabric.
 
Any user accessing *.cluster.earlham.edu or *.cs.earlham.edu is making calls on the 1Gb network.
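
The subnet arithmetic above can be sanity-checked with Python's standard ipaddress module. The three /24 ranges below come straight from this page; only the labels are invented:

<pre>
#!/usr/bin/env python3
"""Map an IP address to the fabric/subnet it belongs to."""
import ipaddress

FABRICS = {
    "10Gb fabric (NFS)": ipaddress.ip_network("10.10.10.0/24"),
    "1Gb fabric, CS subnet": ipaddress.ip_network("159.28.22.0/24"),
    "1Gb fabric, cluster subnet": ipaddress.ip_network("159.28.23.0/24"),
}

def classify(addr):
    ip = ipaddress.ip_address(addr)
    for label, net in FABRICS.items():
        if ip in net:
            return label
    return "not one of our subnets"

# Two /24s on the 1Gb fabric vs. one on the 10Gb fabric is where the
# "twice as many addresses" point above comes from (254 usable each).
for example in ("159.28.22.5", "159.28.23.1", "10.10.10.15"):
    print(f"{example:15} -> {classify(example)}")
</pre>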
  
=== Intra-cluster fabrics ===
  
The Layout cluster has an InfiniBand interconnect. Wachowski has only a 1Gb interconnect.
  
== Power ==
  
 
We have a backup power supply, with batteries last upgraded in 2019 (?). We’ve had a few outages since then and power has held up well.
  
== HVAC ==
  
 
HVAC systems are static and are largely managed by Facilities.
  
 
[[Sysadmin:Layers of abstraction for filesystems|A word about what's happening between files and the drives they live on.]]
= New sysadmins =

These pages will be helpful for you if you're just starting in the group:

* [[Sysadmin:New Sysadmins | Welcoming a new sysadmin ]]
* [[Sysadmin:Troubleshooting|General troubleshooting tips for admins]]
* [[Sandbox Notes|Sandbox Notes]]
* [[Password managers]]
* [[Server safety]]
* [https://code.cs.earlham.edu/sysadmin/ticket-tracker Ticket tracking for current projects]

Note: you'll need to log in with wiki credentials to see most Sysadmin pages.

= Additional information =

These pages contain a lot of the most important information about our systems and how we operate.

=== Technical docs ===

* [https://code.cs.earlham.edu/sysadmin/ticket-tracker Ticket tracking for current projects]
* [[Server safety]]
* [[Sysadmin:Backup|Backup]]
* [[Sysadmin:Monitoring | Monitoring ]]
* [[Sysadmin:SSH|SSH info relevant to admins]]
* [[Sysadmin:User Management | User Management]] and [[Sysadmin:LDAP|LDAP]] generally
* [[Sysadmin:Jupyterhub Notebook Server|Jupyterhub]] and [[Nbgrader notes|NBGrader]]
* [[Sysadmin:MailStack|Email service]]
* [[Sysadmin:XenDocs | Xen Server]]
* [[Sysadmin:NFS|Network File System (NFS)]]
* [[Sysadmin:Web Servers|Web Servers and Websites]]
* [[Sysadmin:Services:Databases|Databases]]
* [[Sysadmin:DNS & DHCP|DNS and DHCP]]
* [[Sysadmin:AWS|AWS]]
* [[Bash_start_up_script|Bash startup scripts]]
* [[Sysadmin:VirtualBox | VirtualBox]]
* [[X Applications]]
* [[Sysadmin:Services:ClusterOverview|Cluster Overview]] and [[Sysadmin:Ccg-admin|additional details]]
* [[Sysadmin:Firewall|Firewall]] running on babbage.cs.earlham.edu
* [[Sysadmin:Setting_up_Lovelace_Lab_Machines|Setting up Lab Machines]]

=== Common tasks ===

* [[Sysadmin:Recurring Tasks | Recurring tasks - e.g. software updates, hardware replacements]]
* [[Sysadmin:Contacting all users|Contacting all users]]
* [[Reset password]]
* [[Sysadmin:Software installation | Software installation]]
* [[Modules | Installing software under modules ]]
* [[Sysadmin:AddComputer|Add a computer to CS or cluster domains]]
* [[Senior projects|Supporting senior projects]]
* [[ShutdownProcedure|How to do a planned shutdown and reboot of the system]]
** [[Sysadmin:TestingServices | Testing services]] (after a reboot, upgrade, change in the phase of the moon, etc.)
* [[Sysadmin:Upgrading SSL Certificate | Upgrading SSL Certificates ]]
* [[Sysadmin:Launch at startup|Launch a process at startup]]
* [[Sysadmin:Psql-setup | Set up psql for CS430 students]]

=== Group and institution information ===

* [[Sysadmin:CS-ITS Interoperability|Working with ITS]]
* [[Sysadmin:Recurring spending | Recurring spending ]]
* [[Sysadmin:SlackAndGitLab | Slack and GitLab integration]]
