Sysadmin:XenDocs

We use the Xen hypervisor to manage virtual machines in the CS subdomain. This is a guide to what it is and how to use it.

Note that to do most of these steps you will need to access smiley, the host for the hypervisor (10.10.10.252, sysadmin only), and become the root user or use sudo.

Common tasks

If you're a bit familiar with Xen, have completed the new admin project, etc., then you may just want some of these commands.

If you don't understand a command in this section of the wiki, you probably shouldn't be running it yet.

The base path for all these commands is /root/server-scripts/vm-scripts on smiley.

Do not just copy and paste.

Create a VM

Run bash build-vm.sh vmName os lastOctet to create a 20GB VM on smiley, supplying the hostname, the operating system of choice, and the last octet of the IP address.

root@eccs-smiley:~/server-scripts/vm-scripts# bash build-vm.sh --help
Usage: ./build-vm.sh vmName os lastOctet
    vmName: the hostname for your vm, avoid special characters
    os: vm's operating system
    lastOctet: last octet of IP addresses for the VM, must be between 100 and 200 inclusive
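
For example, a hypothetical invocation (the VM name and last octet below are placeholders; pick values appropriate for your machine) might look like:

cd /root/server-scripts/vm-scripts
bash build-vm.sh test-vm debian 150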

Copy a VM

Copy the config file and the logical volume:

cp /etc/xen/old-vm.cfg /etc/xen/new-vm.cfg
dd if=/dev/vmdata/old-vm-logical-volume of=/dev/vmdata/new-vm-logical-volume bs=4M
xl create /etc/xen/new-vm.cfg
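
Before running xl create on the copy, you will generally want to edit the new config file so the copy doesn't collide with the original. A sketch of the fields to change (all values below are placeholders):

# /etc/xen/new-vm.cfg -- adjust before booting the copy
name = "new-vm"                                             # must differ from the old VM's name
vif  = [ 'bridge=xenbr0,mac=00:16:3e:21:01:05' ]            # pick an unused MAC (see the MAC section below)
disk = [ '/dev/vmdata/new-vm-logical-volume,raw,xvda,rw' ]  # point at the new logical volume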

Delete a VM

This one is simple: bash erase-vm.sh vm-to-delete

Start a VM

This doesn't require a wrapper script, only a built-in Xen command: xl create /etc/xen/my-vm.cfg

Stop a VM

This doesn't require a wrapper script, only a built-in Xen command: xl destroy my-vm (note this is not the cfg file)
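
If you want the guest to shut down cleanly instead of being stopped abruptly, xl shutdown (used later in this guide) is the gentler option:

xl shutdown my-vm   # ask the guest to power off cleanly
xl destroy my-vm    # immediate hard stop, like pulling the plug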

Xen vocabulary

VM's or "Guests"

In Xen parlance, a VM is called a "guest virtual machine" or just "guest". These will be used interchangeably in this document.

You can see a list of active guests by running xl list. This will display the name, ID, memory it gets to itself, virtual CPU's it gets to itself, its state, and its uptime in seconds.
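
Illustrative output only; the names, IDs, and numbers below are made up:

Name            ID   Mem VCPUs      State   Time(s)
Domain-0         0  4096     4     r-----   98765.4
my-vm           97   768     1     -b----     432.1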

Xen as it runs on the CS subnet

In the CS network, our VM's run on smiley, an Ubuntu server (see /etc/os-release for details). The xl command line tool interfaces with Xen config files and VM's, e.g. through the console.

You may bridge over smiley's 1Gb interface (xenbr0) or over the 10Gb interface (xenbr1). "Both" may be the best answer. The documents on this wiki were written with the 1Gb as an example but s/xenbr0/xenbr1/g will get you most of the way to 10Gb networking.

Installing Xen

If you ever need to install Xen yourself, e.g. for the new sysadmin project, the steps are covered under "Making a Xen VM" below. It's slightly trickier than more common package-managed software.

Networking

A VM will get network interfaces bridged to the underlying server's physical network interface. A VM may use the 1Gb interface, the 10Gb interface, or both.

You can see virtual interfaces of Xen guests with ip a on smiley. Each interface associated with a guest will have that guest's ID in its name - for example, if the guest ID is 97, it may have interfaces vif97.0 and vif97.1.
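
For example, to see only the interfaces belonging to a particular guest (the ID 97 is just an example):

ip a | grep vif97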

Choosing a MAC address for a guest

MAC addresses should have the form 00:16:3e:xx:yy:zz, as 00:16:3e is the OUI reserved for Xen guests.

A simple system for choosing xx:yy:zz:

  • xx: the YY of the year (e.g. 21 for 2021)
  • yy: the MM of the month (e.g. 01 for January)
  • zz: an incrementing value (e.g. 00 for the first VM)

To put the whole example together, the first VM created in January 2021 would have the MAC address 00:16:3e:21:01:00.

If you're not sure, all that matters is that the MAC address is unique for our site. You can be reasonably sure of this by running grep 00:16:3e /etc/xen/* and checking that your candidate address doesn't already appear.
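
For example, to check a candidate address built from the date scheme above before using it (the address shown is only an example):

grep -i 00:16:3e:21:01:00 /etc/xen/*   # no output means nothing else is using it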

MAC addresses for main guests

  • net: 00:16:3e:00:00:01
  • web: 00:16:3e:00:00:02

MAC addresses of other guests

  • hamilton: 00:16:3e:00:01:00
  • khwarizmi: 00:16:3e:00:01:01
  • success: 00:16:3e:00:01:02
  • franco: 00:16:3e:00:01:03
  • crain: 00:16:3e:00:01:04
  • qinzhou: 00:16:3e:00:01:05

Cloning an existing Logical Volume

1. create a snapshot as the source; the size is a buffer for holding any changes to the active volume during this process

lvcreate -s -L 1G -n eccs-foo-snapshot /dev/vmdata/eccs-foo-disk

2. create a new logical volume as destination

lvcreate -L 50G -n eccs-bar-disk vmdata

3. copy contents from snapshot to new volume; make sure to background the process and disown it (disown -h)

dd if=/dev/vmdata/eccs-foo-snapshot of=/dev/vmdata/eccs-bar-disk bs=100M &

# You can issue a signal to the `dd` process to check the status of the copy
# kill -SIGUSR1 <pid of dd>
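
# If your coreutils is new enough, dd's status=progress flag gives a running byte
# count without the SIGUSR1 trick (a sketch, same placeholder volume names as above):
dd if=/dev/vmdata/eccs-foo-snapshot of=/dev/vmdata/eccs-bar-disk bs=100M status=progress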

4. remove snapshot

lvremove /dev/vmdata/eccs-foo-snapshot

5. create and boot xen guest VM

xl create -c xen-configs/eccs-foo.cfg

Making a Xen VM

This is a step-by-step process for installing a new Xen instance and creating a Xen VM. It assumes that you have root/sudo permissions on the host machine (which runs Debian) and have LVM configured (see the admins if not).

Start at the beginning if you are doing Project Zero, the project for new sysadmins. Start at step 3 if you are running on smiley or a physical machine that already runs other xen VM's.

Do NOT just copy-and-paste commands. In most cases you will need to substitute names, disk sizes, etc. that are appropriate for your use case. These are patterns only.

Setting up a new LV and Xen guest

1. Create a root partition:

smiley~# lvcreate -L 50G -n eccs-foo-disk vmdata
smiley~# mkfs.ext4 /dev/vmdata/eccs-foo-disk

2. Create a swap partition (skip if you're doing the new admin training section):

smiley~# lvcreate -L 128M -n eccs-foo-swap vmdata
smiley~# mkswap /dev/vmdata/eccs-foo-swap
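
If you created a swap LV, it can later be handed to the guest as a second virtual disk in its Xen config. A sketch, assuming the LV names from the commands above:

# excerpt from the guest's /etc/xen/*.cfg -- not the full file
disk = [ '/dev/vmdata/eccs-foo-disk,raw,xvda,rw',
         '/dev/vmdata/eccs-foo-swap,raw,xvdb,rw' ]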

XEN Setup

Note: This section is written with new sysadmins in mind.

To install the Xen VM on smiley, you must have sudo access: either become root, or put sudo in front of each of the following commands in part 1.

To become root, run sudo su - root in the terminal; after that you can type all of the following commands without sudo.

To run a single command with elevated privileges without becoming root, prefix it with sudo, e.g.: sudo apt-get install xen-system

1. Become the root user and install the required software. If you prefer not to become the root user, then use 'sudo' in front of every command.

apt-get install xen-system
apt-get install xen-tools
apt-get install bridge-utils   # This will allow us to configure Network bridge for the VMs

2. Configure Network Bridge

  • Figure out the name of the interface that is connected to the CS network. This might be called eth0, eno1, or something along those lines. You can check that by using the following command:
ifconfig -a
#or
ip a

For this tutorial, we will use eno1 as the name of the interface.


Edit the network interfaces file to set up a bridge; let's call it xenbr0:

 vi /etc/network/interfaces

The file should look like this. Please note that this might look a little bit different for your machine; for example, you may have a different name in place of xenbr0 or eno1.

 auto lo
 iface lo inet loopback
 
 auto xenbr0
 allow-hotplug xenbr0
 iface xenbr0 inet static
         address your_ip_address
         network 159.28.22.0
         broadcast 159.28.22.255
         gateway 159.28.22.254
         netmask 255.255.255.0
         dns-nameservers 159.28.22.1 8.8.4.4
         dns-domain cs.earlham.edu
         bridge_ports eno1 regex vif* noregex
 
 auto eno1
 allow-hotplug eno1
 iface eno1 inet dhcp

Now, type the following command to finish bridge setup. Make sure you do this part while you can physically access the machine, and not do it remotely.

 sudo ifdown eno1 && sudo ifup xenbr0 && sudo ifup eno1

The following two commands can now be used to make sure that the bridge is set up correctly.

 ifconfig -a  # You should see xenbr0 alongside other interfaces
 # or
 brctl show  # bridge-utils-specific command that only shows the bridge configuration

Reboot the system and choose Xen Hypervisor.

3. Retrieving the VM installer for Ubuntu

Make a directory and fetch the installer:

 mkdir -p /var/lib/xen/images/ubuntu-netboot/xenial/
 cd /var/lib/xen/images/ubuntu-netboot/xenial/
 # You can use any mirror and/or any version of Ubuntu. I'm using xenial from the main site
 wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/xen/vmlinuz
 wget http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/xen/initrd.gz

4. Set up config file and prepare to install

For this tutorial, we will name our VM "ubuntuVM", on the logical volume "ubuntuLV", with the assigned MAC address "00:16:3e:00:xx:xx". You can find the LV path to ubuntuLV using the following command:

 lvdisplay

Please do not just copy paste the code. Edit as necessary.

 cd /etc/xen/
 cp xlexample.pvlinux ubuntuVM.cfg
 vi ubuntuVM.cfg

Edit the config file so that it looks like:

 name = "ubuntuVM"
 kernel = "/var/lib/xen/images/ubuntu-netboot/xenial/vmlinuz"
 ramdisk = "/var/lib/xen/images/ubuntu-netboot/xenial/initrd.gz"
 #bootloader = "/usr/lib/xen-4.8/bin/pygrub"
 extra = "root=/dev/xvda1"
 memory = 768
 #maxmem = 512
 vcpus = 1
 vif = [ 'bridge=xenbr0,mac=00:16:3e:00:xx:xx' ]
 disk = [ 'path_to_ubuntuLV,raw,xvda,rw' ]

Once that is done, you are set to install the VM using the command in the next step. Please note that your VM should have a DNS/DHCP entry on the CS side in order to have access to the internet.

5. Install OS & create VM

Use the following command to start the installation procedure:

 xl create -c /etc/xen/ubuntuVM.cfg 

Follow the steps for the installation. When asked to partition disks, choose the "Use entire disk" option; here the "entire disk" is the logical volume you have set aside for this install. Once done, try logging in and out to test, and then continue with configuration on the host machine:

 xl shutdown ubuntuVM
 vi /etc/xen/ubuntuVM.cfg

Edit the file so that the machine can start/stop properly. The updated file might look like:

 name = "ubuntuVM"
 #kernel = "/var/lib/xen/images/ubuntu-netboot/xenial/vmlinuz"
 #ramdisk = "/var/lib/xen/images/ubuntu-netboot/xenial/initrd.gz"
 bootloader = "/usr/lib/xen-4.8/bin/pygrub"
 extra = "root=/dev/xvda1"
 memory = 768
 #maxmem = 512
 vcpus = 1
 vif = [ 'bridge=xenbr0,mac=00:16:3e:00:xx:xx' ]
 disk = [ 'path_to_ubuntuLV,raw,xvda,rw' ]
 #
 #  Behaviour
 #
 on_poweroff = 'destroy'
 on_reboot   = 'restart'
 on_crash    = 'restart'
 on_xend_start = 'start'
 on_xend_stop = 'shutdown'

Once done, create the VM once again so that it boots with the bootloader.

 xl create -c /etc/xen/ubuntuVM.cfg 

6. Test and Restart

You can see which virtual machines are running under Xen by running the following command:

 xl list

Reboot your system in order to make sure everything is working properly.

7. To launch your VM, type in the command below:

 xl console vm_name
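
To detach from the guest console and get back to smiley's shell, press Ctrl+] (the same escape sequence mentioned in the migration notes below):

 xl console vm_name   # attach; Ctrl+] detaches back to the host shell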

Set up a staging environment

One of the most important things to do as a sysadmin is to give yourself space to try something without causing problems on a production server.

To create a staging server, go to smiley (10.10.10.252, sysadmin only) and become root. Go to /root/server-scripts/vm-scripts and then run bash build-vm.sh $myname-staging debian $myIP, where $myname is your username and $myIP is the last octet (the build-vm.sh usage above says it must be between 100 and 200). You will get a server with a 159.28.22.X address, LDAP configuration, and some default packages.
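
A concrete (hypothetical) example for a user named lovelace:

cd /root/server-scripts/vm-scripts
bash build-vm.sh lovelace-staging debian 150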

If you are testing a web server, you will need to add your new IP address to the web servers list in the firewall.

The files that seed these servers need cleanup, but these will work.

Abridged archival notes from Eamon on migrating a VM

Along with their normal CS connections, control and smiley are linked together with a long ethernet patch cable with an MTU of 9000 to expedite NFS transfers. control has address 192.168.0.1 and smiley has 192.168.0.2. This connection can be removed once the migration is done. [CE: I don't think we've removed this.]

LVM logical volumes for Xen are stored in the vmdata volume group on smiley. There are swap and root lvm logical volumes for each host. The process for migrating goes a bit like this:

On Smiley (using eccs-net as an example):

#Create a new LVM logical volume for the new virtual machine on smiley
#and create an ext4 partition on it. Sizes of partitions should more or less
#match the size of the partitions on control. Run lvs as root to check.

smiley~# lvcreate -L 50G -n eccs-net-disk vmdata
smiley~# mkfs.ext4 /dev/vmdata/eccs-net-disk

#Create a new mount point for the empty partition to copy to over nfs
#and mount the new, empty partition.

smiley~# mkdir -p /mnt/migrate-smiley
smiley~# mount /dev/vmdata/eccs-net-disk /mnt/migrate-smiley

#Add /mnt/migrate-smiley to /etc/exports and allow access from the 192.168.0.0/24 subnet.
#Also make sure no_root_squash is set. This nfs mount can be reused for multiple hosts.
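
#For example, the exports entry might look something like this (options here are a suggestion -- adjust to local policy):

/mnt/migrate-smiley 192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)

smiley~# exportfs -ra   # re-read /etc/exports after editing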

On control:

#Shutdown the virtual machine by running shutdown -h now on eccs-net
#On control, mount the nfs share we just created on smiley. You must use this ip for it to perform
#reasonably (thanks to jumbo frames). Also mount the root partition for the VM we just shut down.
#You need to use vers=3 for NFS; otherwise it will complain about UID's and GID's not existing on the new
#host server (since it doesn't use LDAP).

control~# mkdir -p /mnt/to-smiley
control~# mount -t nfs -o vers=3 192.168.0.2:/mnt/migrate-smiley /mnt/to-smiley
control~# mkdir -p /mnt/from-control
control~# mount /dev/vmdata-shared/net.cs.earlham.edu-disk /mnt/from-control

#Use rsync to copy /mnt/from-control/* to /mnt/to-smiley
#Make sure that rsync preserves file permissions and metadata

control~# rsync --progress --numeric-ids -avWHAX /mnt/from-control/* /mnt/to-smiley/

#Go make some coffee. rsync is gonna be a while, especially on eccs-home (about 400GB of data to copy)
#Also take some time to copy the xen configuration for the machine from ~sysadmin/xen-configs on control to somewhere on smiley.
#Like on control, ~sysadmin/xen-configs is a good spot.

#You will need to modify the configuration to match the new path to the swap and root partitions (e.g. /dev/vmdata/eccs-net-disk),
#as well as set the path to the kernel and initrd file to use for the guest. You can just use smiley's kernel
#image in /boot.

#Once rsync finishes, unmount /mnt/migrate-smiley on smiley (make sure it contains root files for the vm you're copying
#(etc, bin, usr, etc.) first) and start the new virtual machine like so (note that Xen is managed using the xl
#command now, not xm):

smiley~# xl create -c <path to copied config file>

#Once that's started up and things are working properly, you can hit ^] (Ctrl + ]) to go back to smiley's shell. 
#Once all VM's are migrated, you can remove the entries added to /etc/exports and remove any mountpoints you've created.
#Control should be shut down at this point. I believe smiley is stealing one of the Lovelace lab IP's at the moment, so
#the ip address should be updated in /etc/network/interfaces as well as DNS to match control's old IP. You might want to do
#this from the console down in dennis in case the network goes down in the process.

The four vm's have root disks at

/dev/vmdata/eccs-home-disk /dev/vmdata/eccs-tools-disk /dev/vmdata/eccs-net-disk /dev/vmdata/eccs-web-disk

They all have ext4 filesystems. Networking is set up. There is a bridge-utils bridge at xenbr0. It's the interface that the Xen dom0 uses; the physical interface runs at the ethernet layer and doesn't have an IP address.

Hardware (archival, needs review)

New CS Hardware (11/14)

As of November 2014, we are working on installing new hardware to provide services to CS Students/Faculty/Staff. Details about what goes where and how resources are allocated can be found below.

Hardware Specs

Model: Silicon Mechanics A346 (2U chassis), Supermicro motherboard
CPU: 2x 8-core AMD Opteron 6308 @ 3.5GHz w/ 4MB L2 cache
RAM: 64GB DDR3 (16 slots available total)
Storage: 6x 3TB SAS HDD's configured in RAID10 (/dev/sda, 3 striped mirrored pairs, 9TB effective storage, used for VM backing) + 2x 240GB SSD's in RAID0 (/dev/sdb, used for Dom0 backing)
Network: 2x Gigabit Ethernet onboard, 4x Gigabit Ethernet on PCIe, 2x 10G Ethernet on PCIe

Hard Drive Partitioning

Hard Disks are split using LVM, which is nested to some degree. The 9TB of rotational media is allocated exclusively for use by Xen to back virtual machines. Xen handles much of the partition creation on that end. ext4 is used for the most part as the underlying filesystem.

The 9TB RAID array forms a volume group (/dev/vmstore) split into two parts using LVM: one 1TB Logical Volume (/dev/vmstore/local) for local VM storage (NOT in production) and one 7TB Logical Volume (/dev/vmstore/shared) for production VM's that may be mirrored to other machines in the future. The rest of the drive is left over as extra storage for later use.

There is some reasoning behind the apparent complexity of this system: we may use DRBD (Distributed Replicated Block Device) in the future to mirror all VM data to another machine. Hypothetically, the shared LV /dev/vmstore/shared would be mirrored across both machines, taking all the subordinate volumes with it.
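
For orientation, here is a sketch of how that nesting maps onto LVM commands (illustrative only, not a transcript of what was actually run; names and sizes follow the description above):

pvcreate /dev/sda1                           # outer PV on the 9TB RAID array
vgcreate vmstore /dev/sda1                   # outer volume group
lvcreate -L 1T -n local vmstore              # 1TB LV for local VM storage
lvcreate -L 7T -n shared vmstore             # 7TB LV for shared (production) VM storage
pvcreate /dev/vmstore/local                  # each big LV then acts as a PV...
vgcreate vmstore-local /dev/vmstore/local    # ...backing a nested volume group
pvcreate /dev/vmstore/shared
vgcreate vmstore-shared /dev/vmstore/shared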

Volume Layout (Super-Detailed version)
Device | Type (location) | Filesystem | Size | Mounted at | Usage
/dev/sda | Raw Disk | | 9TB | | Points to main 9TB RAID array, not mounted directly
/dev/sda1 | Linux Extended | LVM | 9TB | | Space allocation for use by LVM (acts as PV)
/dev/sdb | Raw Disk | | 239GB | | Contains files for dom0
/dev/sdb1 | Linux Primary | ext2 | 255MB | /boot | Contains boot files for dom0
/dev/sdb2 | Linux Extended | | 239GB | |
/dev/sdb5 | Linux Logical | LVM | 239GB | |
/dev/vmctrl | LVM VG (on /dev/sdb5) | | 239GB | |
/dev/vmctrl/root | LVM LV | ext4 | 213GB | / | dom0 root filesystem (debian)
/dev/vmctrl/swap | LVM LV | Linux swap | 9GB | | dom0 swap partition
/dev/vmstore | LVM VG | | 9TB | | Main volume group for organizing VM storage
/dev/vmstore/local | LVM LV | | 1TB | | Logical disk for local VM storage (backs volume group)
/dev/vmstore/shared | LVM LV | | 7TB | | Logical disk for shared VM storage (backs volume group)
/dev/vmstore-local | LVM VG (on /dev/vmstore/local) | | 1TB | | Volume group for local storage volumes
/dev/vmstore-shared | LVM VG (on /dev/vmstore/shared) | | 7TB | | Volume group for shared storage volumes
/dev/vmstore/local/admin | LVM LV | ext4 | 100GB | /mnt/vmdata-local/admin | Administration storage for local virtual machines
/dev/vmstore/local/config | LVM LV | ext4 | 1GB | /mnt/vmdata-local/config | Xen config file storage for local virtual machines
/dev/vmstore/shared/admin | LVM LV | ext4 | 400GB | /mnt/vmdata-shared/admin | Administration storage for shared virtual machines
/dev/vmstore/shared/config | LVM LV | ext4 | 1GB | /mnt/vmdata-shared/config | Xen config file storage for shared virtual machines
Local LV

/dev/vmstore/local also acts as a physical drive backing another volume group (/dev/vmstore-local), which actually contains the logical partitions for the virtual machines. This volume group has no direct physical counterpart; it is placed on top of the logical volume /dev/vmstore/local. /dev/vmstore-local contains two logical volumes with ext4 filesystems: config (1GB, for Xen config files) and admin (100GB, for misc. admin tools). The rest is allocated by Xen and should end up containing several LVM volumes for use by virtual machines. These can be mounted on dom0 for maintenance.

Shared LV

/dev/vmstore/shared has a similar layout to its local counterpart. It contains two logical volumes, config (1GB) and admin (500GB), as well as volumes for any VM's that have been created.

VM Resource Allocation

The CS server architecture is backed by 5 Xen Virtual Machines running Debian Wheezy. Naming conventions are definitely going to change in the near future. All MAC addresses must be in the 00:16:3e:xx:xx:xx range; this is a hardware range that has been reserved for Xen VM's.

Name | HDD Space | RAM | Cores | MAC Address | IP Address | Services
dom0 | 240GB (SSD) | 1GB | 1 | 0c:c4:7a:07:cd:ab | 159.28.230.117 | Xen VM management
vm0 (home) | 3TB | 16GB | 4 | 00:16:3e:00:00:00 | 159.28.230.240 | NFS (/clients), SSH
vm1 (net) | 500GB | 4GB | 2 | 00:16:3e:00:00:01 | 159.28.230.241 | DNS, DHCP, LDAP, CUPS
vm2 (web) | 2.5TB | 24GB | 4 | 00:16:3e:00:00:02 | 159.28.230.242 | Apache, Mailman, MySQL
vm3 (wiki) | 500GB | 8GB | 2 | 00:16:3e:00:00:03 | 159.28.230.243 | MediaWiki, Sage