Sysadmin:XenDocs
All about Xen on the CS network
We run Xen virtual machines on our physical hardware. In the CS network, those VMs run on smiley, an Ubuntu server (see /etc/os-release for details). The xl command-line tool is how you work with Xen config files and VMs, e.g. by attaching to a guest's console.
You may bridge over smiley's 1Gb interface (xenbr0) or over the 10Gb interface (xenbr1). "Both" may be the best answer. The documents on this wiki were written with the 1Gb as an example but s/xenbr0/xenbr1/g will get you most of the way to 10Gb networking.
VM's or "Guests"
In Xen parlance, a VM is called a "guest virtual machine" or just "guest". These will be used interchangeably in this document.
You can see a list of active guests by running xl list. This displays each guest's name, ID, the memory and virtual CPUs dedicated to it, its state, and its uptime in seconds.
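For example (a sketch; the names, IDs, and sizes below are illustrative, not the current guest list):
xl list
# Name         ID   Mem VCPUs      State   Time(s)
# Domain-0      0  1024     1     r-----   1234.5
# web          97  4096     4     -b----    678.9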
Networking
A VM will get network interfaces bridged to the underlying server's physical network interface. A VM may use the 1Gb interface, the 10Gb interface, or both.
You can see virtual interfaces of Xen guests with ip a on smiley. Each interface associated with a guest will have that guest's ID in its name - for example, if the guest ID is 97, it may have interfaces vif97.0 and vif97.1.
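To list only the guest-facing interfaces, something like this works (a sketch; the output lines are illustrative):
ip a | grep vif
# 12: vif97.0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... master xenbr0 ...
# 13: vif97.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... master xenbr1 ...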
Choosing a MAC address for a guest
MAC addresses should have the form 00:16:3e:xx:yy:zz, as this is the OUI reserved by Xen for use creating Xen guests.
A simple system for choosing xx:yy:zz:
- xx: the YY of the year (e.g. 21 for 2021)
- yy: the MM of the month (e.g. 01 for January)
- zz: an incrementing value (e.g. 00 for the first VM)
To put the whole example together, the first VM created in January 2021 would have the MAC address 00:16:3e:21:01:00.
If you're not sure, all that matters is that the MAC address is unique for our site. You can be reasonably sure these are unique by running grep 00:16:3e /etc/xen/* and making sure none of them match.
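A quick way to check a candidate address before committing to it (a sketch; the MAC below is the hypothetical example from above):
MAC=00:16:3e:21:01:00
grep -ri "$MAC" /etc/xen/ || echo "no existing config uses $MAC"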
MAC addresses for main guests
- net: 00:16:3e:00:00:01
- web: 00:16:3e:00:00:02
MAC addresses of other guests
- hamilton: 00:16:3e:00:01:00
- khwarizmi: 00:16:3e:00:01:01
- success: 00:16:3e:00:01:02
- franco: 00:16:3e:00:01:03
- crain: 00:16:3e:00:01:04
- qinzhou: 00:16:3e:00:01:05
Cloning an existing Logical Volume
1. create a snapshot as the source; the size is a buffer for holding any changes to the active volume during this process
lvcreate -s -L 1G -n eccs-foo-snapshot /dev/vmdata/eccs-foo-disk
2. create a new logical volume as destination
lvcreate -L 50G -n eccs-bar-disk vmdata
3. copy contents from snapshot to new volume; make sure to background the process and disown it with disown -h
dd if=/dev/vmdata/eccs-foo-snapshot of=/dev/vmdata/eccs-bar-disk bs=100M &
disown -h
# You can issue a signal to the `dd` process to check the status of the copy
# kill -SIGUSR1 <pid of dd>
4. remove snapshot
lvremove /dev/vmdata/eccs-foo-snapshot
5. create and boot xen guest VM
xl create -c xen-configs/eccs-foo.cfg
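If the cloned guest misbehaves on first boot, it can help to check the copied filesystem from dom0 before booting it again; a minimal sketch, assuming the eccs-bar-disk volume from the example above (make sure the guest is shut down first):
e2fsck -f /dev/vmdata/eccs-bar-disk
# resize2fs is only needed if the destination LV is a different size than the source
resize2fs /dev/vmdata/eccs-bar-disk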
Setting up a new Xen guest
1. create root partition:
smiley~# lvcreate -L 50G -n eccs-foo-disk vmdata
smiley~# mkfs.ext4 /dev/vmdata/eccs-foo-disk
2. create swap partition (skip if you're doing the new admin training section):
smiley~# lvcreate -L 128M -n eccs-foo-swap vmdata
smiley~# mkswap /dev/vmdata/eccs-foo-swap
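To confirm both volumes exist with the intended sizes, a quick check (a sketch; assumes the eccs-foo names above, and the output columns are trimmed for readability):
smiley~# lvs vmdata
#  LV              VG     Attr       LSize
#  eccs-foo-disk   vmdata -wi-a-----  50.00g
#  eccs-foo-swap   vmdata -wi-a----- 128.00m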
Making a Xen VM
This is a step-by-step process for creating Xen VMs. The tutorial assumes that you have permissions on the machine that will host the domU, that the domU will run a Debian-based OS, and that LVM is configured (see admins if not).
Start at step 3 if you are running on smiley or a physical machine that already runs other Xen VMs.
LV setup
Go here. DO NOT JUST COPY AND PASTE THESE COMMANDS.
XEN Setup
To install the Xen VM on smiley, you must have sudo access. You can either become root, or put sudo in front of each of the following commands in part 1. To become root, type sudo su in the terminal; after that you can run all of the following commands without sudo.
Become root:
- sudo su
Or run a single command with sudo without becoming root:
- sudo apt-get install xen-system
1. Become the root user and install the required software. If you prefer not to become the root user, use 'sudo' in front of every command.
>> apt-get install xen-system
>> apt-get install xen-tools
>> apt-get install bridge-utils # This will allow us to configure a network bridge for the VMs
2. Configure Network Bridge
- Figure out the name of the interface that is connected to the CS network. This might be called eth0, eno1, or something along those lines. You can check with one of the following commands:
>> ifconfig -a
Or:
>> ip a
For this tutorial, we will use eno1 as the name of the interface.
- Edit the network interfaces file to set up a bridge; let's call it xenbr0:
>> vi /etc/network/interfaces
The file should look like the following. Note that this might look a little different on your machine; e.g. you may have a different name in place of xenbr0 or eno1.
auto lo
iface lo inet loopback
auto xenbr0
allow-hotplug xenbr0
iface xenbr0 inet static
address your_ip_address
network 159.28.22.0
broadcast 159.28.22.255
gateway 159.28.22.254
netmask 255.255.255.0
dns-nameservers 159.28.22.1 8.8.4.4
dns-domain cs.earlham.edu
bridge_ports eno1 regex vif* noregex
auto eno1
allow-hotplug eno1
iface eno1 inet dhcp
Now, type the following command to finish the bridge setup. Make sure you do this part while you have physical access to the machine; do not do it remotely.
>> sudo ifdown eno1 && sudo ifup xenbr0 && sudo ifup eno1
The following two commands can now be used to make sure that the bridge is set up correctly.
>> ifconfig -a # You should see xenbr0 alongside other interfaces
Or:
>> brctl show # bridge-utils-specific command that will only show the bridge configurations
Next, reboot the system and choose Xen Hypervisor.
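After the reboot, one quick way to confirm you are actually running under the Xen hypervisor (a sketch; xl comes from the packages installed above):
>> xl info # prints host and Xen version details only when dom0 is running under Xen
>> ls /proc/xen # this directory exists only when booted under the hypervisor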
3. Retrieving the VM installer for Ubuntu. Make a directory and fetch the installer:
>> mkdir -p /var/lib/xen/images/ubuntu-netboot/xenial/
>> cd /var/lib/xen/images/ubuntu-netboot/xenial/
You can use any mirror and/or any version of Ubuntu. I'm using xenial from the main site.
>> wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/xen/vmlinuz
>> wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/xen/initrd.gz
4. Set up the config file and prepare to install. For this tutorial, we will name our VM "ubuntuVM", backed by the logical volume "ubuntuLV", with the assigned MAC address "00:16:3e:00:xx:xx". You can find the LV path to ubuntuLV using the following command:
>> lvdisplay
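If the lvdisplay output is long, filtering for the path lines may help (a sketch):
>> lvdisplay | grep "LV Path"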
Please do not just copy-paste the code. Edit as necessary.
>> cd /etc/xen/
>> cp xlexample.pvlinux ubuntuVM.cfg
>> vi ubuntuVM.cfg
Edit the config file so that it looks like:
name = "ubuntuVM" kernel = "/var/lib/xen/images/ubuntu-netboot/xenial/vmlinuz" ramdisk = "/var/lib/xen/images/ubuntu-netboot/xenial/initrd.gz" #bootloader = "/usr/lib/xen-4.8/bin/pygrub" extra = "root=/dev/xvda1" memory = 768 #maxmem = 512 vcpus = 1 vif = [ 'bridge=xenbr0,mac=00:16:3e:00:xx:xx' ] disk = [ 'path_to_ubuntuLV,raw,xvda,rw' ]
Once that is done, you are set to install the VM by running the command in the next step. Please note that your VM should have a DNS/DHCP entry on the CS side in order to have access to the internet.
5. Install OS & create VM Use the following command to start the installation procedure:
>> xl create -c /etc/xen/ubuntuVM.cfg
Follow the steps of the installation. When asked to partition disks, choose the "Use entire disk" option; the "entire disk" here refers to the logical volume you have set aside for this install. Once done, try logging in and out to test, and then continue with configuration on the host machine:
>> xl shutdown ubuntuVM
>> vi /etc/xen/ubuntuVM.cfg
Edit the file so that the machine can start/stop properly. The updated file might look like:
name = "ubuntuVM" #kernel = "/var/lib/xen/images/ubuntu-netboot/xenial/vmlinuz" #ramdisk = "/var/lib/xen/images/ubuntu-netboot/xenial/initrd.gz" bootloader = "/usr/lib/xen-4.8/bin/pygrub" extra = "root=/dev/xvda1" memory = 768 #maxmem = 512 vcpus = 1 vif = [ 'bridge=xenbr0,mac=00:16:3e:00:xx:xx' ] disk = [ 'path_to_ubuntuLV,raw,xvda,rw' ] # # Behaviour # on_poweroff = 'destroy' on_reboot = 'restart' on_crash = 'restart' on_xend_start = 'start' on_xend_stop = 'shutdown'
Once done, create the VM once again so that it boots with the bootloader.
>> xl create -c /etc/xen/ubuntuVM.cfg
6. Test and restart. You can see which virtual machines are running under Xen with the following command:
>> xl list
Reboot your system in order to make sure everything is working properly.
7. To connect to your VM's console, type the command below:
>> xl console vm_name
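To detach from the guest console and return to the dom0 shell, press Ctrl+] (see also the migration notes below).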
Abridged archival notes from Eamon on migrating a VM
Along with their normal cs connections, control and smiley are linked together with a long ethernet patch cable with an mtu of 9000 to expedite NFS transfers. control has address 192.168.0.1 and smiley has 0.2. This connection can be removed once the migration is done. [CE: I don't think we've removed this.]
LVM logical volumes for Xen are stored in the vmdata volume group on smiley. There are swap and root lvm logical volumes for each host. The process for migrating goes a bit like this:
On Smiley (using eccs-net as an example):
# Create a new lvm logical volume for the new virtual machine on smiley
# and create an ext4 partition on it. Sizes of partitions should more or less
# match the size of the partitions on control. Run lvs as root to check.
smiley~# lvcreate -L 50G -n eccs-net-disk vmdata
smiley~# mkfs.ext4 /dev/vmdata/eccs-net-disk
# Create a new mount point for the empty partition to copy to over nfs
# and mount the new, empty partition.
smiley~# mkdir -p /mnt/migrate-smiley
smiley~# mount /dev/vmdata/eccs-net-disk /mnt/migrate-smiley
# Add /mnt/migrate-smiley to /etc/exports and allow access from the 192.168.0.0/24 subnet.
# Also make sure no_root_squash is set. This nfs mount can be reused for multiple hosts.
On control:
# Shut down the virtual machine by running shutdown -h now on eccs-net.
# On control, mount the nfs share we just created on smiley. You must use this ip for it to perform
# reasonably (thanks to jumbo frames). Also mount the root partition for the VM we just shut down.
# You need to use vers=3 for NFS; otherwise it will complain about UID's and GID's not existing on the new
# host server (since it doesn't use LDAP).
control~# mkdir -p /mnt/to-smiley
control~# mount -t nfs -o vers=3 192.168.0.2:/mnt/migrate-smiley /mnt/to-smiley
control~# mkdir -p /mnt/from-control
control~# mount /dev/vmdata-shared/net.cs.earlham.edu-disk /mnt/from-control
# Use rsync to copy /mnt/from-control/* to /mnt/to-smiley.
# Make sure that rsync preserves file permissions and metadata.
control~# rsync --progress --numeric-ids -avWHAX /mnt/from-control/* /mnt/to-smiley/
# Go make some coffee. rsync is gonna be a while, especially on eccs-home (about 400GB of data to copy).
# Also take some time to copy the xen configuration for the machine from ~sysadmin/xen-configs on control
# to somewhere on smiley. Like on control, ~sysadmin/xen-configs is a good spot.
# You will need to modify the configuration to match the new path to the swap and root partitions
# (e.g. /dev/vmdata/eccs-net-disk), as well as set the path to the kernel and initrd file to use for the
# guest. You can just use smiley's kernel image in /boot.
# Once rsync finishes, unmount /mnt/migrate-smiley on smiley (make sure it contains root files for the vm
# you're copying (etc, bin, usr, etc.) first) and start the new virtual machine like so (note that xen is
# managed using the xl command now, not xm):
smiley~# xl create -c <path to copied config file>
# Once that's started up and things are working properly, you can hit ^] (Ctrl + ]) to go back to smiley's shell.
# Once all VM's are migrated, you can remove the entries added to /etc/exports and remove any mountpoints
# you've created. Control should be shut down at this point. I believe smiley is stealing one of the
# Lovelace lab IP's at the moment, so the ip address should be updated in /etc/network/interfaces as well as
# DNS to match control's old IP. You might want to do this from the console down in dennis in case the
# network goes down in the process.
The four VMs have root disks at:
/dev/vmdata/eccs-home-disk
/dev/vmdata/eccs-tools-disk
/dev/vmdata/eccs-net-disk
/dev/vmdata/eccs-web-disk
They all have ext4 filesystems. Networking is set up. There is a bridge-utils bridge at xenbr0; it is the interface that the Xen dom0 uses. The physical interface runs at the ethernet layer and doesn't have an IP address.
Hardware (archival, needs review)
New CS Hardware (11/14)
As of November 2014, we are working on installing new hardware to provide services to CS Students/Faculty/Staff. Details about what goes where and how resources are allocated can be found here.
Hardware Specs
| Model | Silicon Mechanics A346 (2U Chassis), Supermicro Motherboard |
|---|---|
| CPU | 2x 8-Core AMD Opteron 6308 @ 3.5GHz w/ 4MB L2 Cache |
| RAM | 64GB DDR3 (16 slots available total) |
| Storage | 6x 3TB SAS HDDs configured in RAID10 (/dev/sda, 3 striped mirrored pairs, 9TB effective storage, used for VM backing) + 2x 240GB SSDs in RAID0 (/dev/sdb, used for dom0 backing) |
| Network | 2x Gigabit Ethernet Onboard, 4x Gigabit Ethernet on PCIe, 2x 10G Ethernet on PCIe |
Hard Drive Partitioning
Hard Disks are split using LVM, which is nested to some degree. The 9TB of rotational media is allocated exclusively for use by Xen to back virtual machines. Xen handles much of the partition creation on that end. ext4 is used for the most part as the underlying filesystem.
The 9TB RAID array forms a volume group (/dev/vmstore) split into two parts using LVM: one 1TB Logical Volume (/dev/vmstore/local) for local VM storage (NOT in production) and one 7TB Logical Volume (/dev/vmstore/shared) for production VMs that may be mirrored to other machines in the future. The rest of the drive is left over as extra storage for later use.
There is some reasoning behind the apparent complexity of this system: we may use the Distributed Replicated Block Device (DRBD) in the future to mirror all VM data to another machine. Hypothetically, the shared LV /dev/vmstore/shared would be mirrored across both machines, taking all the subordinate volumes with it.
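For reference, a minimal sketch of how this kind of nesting is built with standard LVM commands (the names and sizes follow the layout described here and in the table below; treat exact values as illustrative):
# use the 1TB logical volume itself as a physical volume for a nested VG
pvcreate /dev/vmstore/local
vgcreate vmstore-local /dev/vmstore/local
# carve out the config and admin volumes inside the nested VG
lvcreate -L 1G -n config vmstore-local
lvcreate -L 100G -n admin vmstore-local
mkfs.ext4 /dev/vmstore-local/admin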
Volume Layout (Super-Detailed version)
| Device Location | Type | Filesystem | Size | Mounted at | Usage |
|---|---|---|---|---|---|
| /dev/sda | Raw Disk | | 9TB | | Points to main 9TB RAID Array, not mounted directly |
| /dev/sda1 | Linux Extended | LVM | 9TB | | Space allocation for use by LVM (acts as PV) |
| /dev/sdb | Raw Disk | | 239GB | | Contains files for dom0 |
| /dev/sdb1 | Linux Primary | ext2 | 255MB | /boot | Contains boot files for dom0 |
| /dev/sdb2 | Linux Extended | | 239GB | | |
| /dev/sdb5 | Linux Logical | LVM | 239GB | | |
| /dev/vmctrl | LVM VG (on /dev/sdb5) | | 239GB | | |
| /dev/vmctrl/root | LVM LV | ext4 | 213GB | / | dom0 root filesystem (debian) |
| /dev/vmctrl/swap | LVM LV | Linux Swap | 9GB | | dom0 swap partition |
| /dev/vmstore | LVM VG | | 9TB | | Main volume group for organizing vm storage |
| /dev/vmstore/local | LVM LV | | 1TB | | Logical disk for local vm storage (backs volume group) |
| /dev/vmstore/shared | LVM LV | | 7TB | | Logical disk for shared vm storage (backs volume group) |
| /dev/vmstore-local | LVM VG (on /dev/vmstore/local) | | 1TB | | Volume group for local storage volumes |
| /dev/vmstore-shared | LVM VG (on /dev/vmstore/shared) | | 7TB | | Volume group for shared storage volumes |
| /dev/vmstore/local/admin | LVM LV | ext4 | 100GB | /mnt/vmdata-local/admin | Administration storage for local virtual machines |
| /dev/vmstore/local/config | LVM LV | ext4 | 1GB | /mnt/vmdata-local/config | Xen config file storage for local virtual machines |
| /dev/vmstore/shared/admin | LVM LV | ext4 | 400GB | /mnt/vmdata-shared/admin | Administration storage for shared virtual machines |
| /dev/vmstore/shared/config | LVM LV | ext4 | 1GB | /mnt/vmdata-shared/config | Xen config file storage for shared virtual machines |
Local LV
/dev/vmstore/local also acts as a physical drive backing another volume group (/dev/vmstore-local), which actually contains the logical partitions for the virtual machines. This volume group has no direct physical counterpart; it is placed on top of the logical volume /dev/vmstore/local. /dev/vmstore-local contains two logical volumes with ext4 filesystems: config (1GB, for Xen config files) and admin (100GB, for misc. admin tools). The rest is allocated by Xen and should end up containing several LVM volumes for use by virtual machines. These can be mounted on dom0 for maintenance.
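A minimal sketch of mounting one of those guest volumes on dom0 for maintenance (the guest volume name here is hypothetical; make sure the guest is shut down first so the filesystem is not in use):
mkdir -p /mnt/guest-maint
mount /dev/vmstore-local/some-guest-disk /mnt/guest-maint
# ... inspect or repair files ...
umount /mnt/guest-maint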
Shared LV
/dev/vmstore/shared has a similar layout to its local counterpart. It contains two logical volumes: config (1GB) and admin (500GB), as well as volumes for any VMs that have been created.
VM Resource Allocation
The CS server architecture is backed by 5 Xen virtual machines running Debian Wheezy. Naming conventions are definitely going to change in the near future. All MAC addresses must be in the 00:16:3e:xx:xx:xx range; this is a hardware range that has been reserved for Xen VMs.
| Name | HDD Space | RAM | Cores | MAC Address | IP Address | Services |
|---|---|---|---|---|---|---|
| dom0 | 240GB (SSD) | 1GB | 1 | 0c:c4:7a:07:cd:ab | 159.28.230.117 | Xen VM Management |
| vm0 (home) | 3TB | 16GB | 4 | 00:16:3e:00:00:00 | 159.28.230.240 | NFS (/clients), SSH |
| vm1 (net) | 500GB | 4GB | 2 | 00:16:3e:00:00:01 | 159.28.230.241 | DNS, DHCP, LDAP, CUPS |
| vm2 (web) | 2.5TB | 24GB | 4 | 00:16:3e:00:00:02 | 159.28.230.242 | Apache, Mailman, MySQL |
| vm3 (wiki) | 500GB | 8GB | 2 | 00:16:3e:00:00:03 | 159.28.230.243 | MediaWiki, Sage |