Archive:LittleFe Cluster

From Earlham CS Department
Revision as of 17:52, 12 February 2025 by Pelibby16 (talk | contribs) (More condensing LittleFe pages)

N O T I C E

Original Note: Much of the content below is stale, there are a few good nuggets though. We're going to harvest those and move them to the new LittleFe website at some point RSN.
2025 Note (Porter): I've condensed many of the pages below that were originally links into this page. Most are from 2005, and are no longer actively used. While they aren't active, they are an important part of what built CS at Earlham, and will be preserved on this page.

Application for an Intel/EAPF LittleFe

Notes:

  • Flavors of application:
    • Designated: Assembled unit delivered to your institution.
    • OU Build-out: Attend the Intermediate Parallel Programming and Cluster Computing workshop at OU on XXX
    • SC11 Build-out: Attend the Build-out hosted by the SC Education Program at SC11 in Seattle
  • Items the three application types have in common:
    • Individual and institutional commitment as shown with a letter outlining their commitment to incorporating the LittleFe/BCCD into their curriculum and to the development of new parallel programming or cluster computing curriculum modules. On institutional letterhead signed by someone(s) with the authority to make those commitments.
    • Take-back clause: after one year of quarterly check-ins, if the plans outlined in the letter have not been met we can recall the unit.
      This and related policies will need to be vetted by the "granting agency", ACM/EAPF. Check with Donna Capo.
      We also need to identify who pays shipping if a take-back is necessary.
  • Items that are different for each application:
    • The build-it-yourself options require that a team of two people (faculty and student or two faculty) apply.

Designated

  • This option is only available to institutions selected by representatives of Intel and the SC Education Program. Between 5 and 10 LittleFe units will be available through this mechanism. Institutions can request to be part of the OU or SC11 build group.

Build-out at Intermediate Parallel Workshop @ OU in Norman, Oklahoma

  • Ability to arrive by Sunday morning for the build session, which will take place Sunday afternoon.
  • Built units are FOB OU. Recipients are responsible for all shipping costs from OU back to their home institution, typically an extra bag charge for recipients that arrived by airplane.
  • Between 5 and 10 LittleFe units will be available through this mechanism.

Build-out at SC11 in Seattle, Washington

  • Availability for all or part of the two build slots on each of Sunday and Monday afternoon, and for the three build slots on each of Tuesday and Wednesday afternoon of the conference. Preference may be given to institutions whose availability matches the build slots that need to be filled.
  • Built units are FOB SC11. Recipients are responsible for all shipping costs from SC11 back to their home institution, typically an extra bag charge for recipients that arrived by airplane.
  • Between 5 and 10 LittleFe units will be available through this mechanism.


How to contribute to the liberation package

Building a test liberation.tar.gz

su to root on hopper. The entire liberation build environment is checked out into /root/bccd-liberation/. To build a test version of the liberation package:

 
cvs update
cvs commit
./deploy-liberation-pkg.sh <dst path> noupload

This will do the checkout and tarring of the liberation package for you; the tarball ends up in the destination path you provide. To install the package, copy it to a LittleFe (wget, scp), untar it into /usr/local/, and follow the instructions at http://www.littlefe.net/mediawiki/index.php/Liberation_Instructions, skipping the step involving downloading the liberation package. A sketch of the install sequence appears below.
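
A minimal sketch of that install sequence, assuming the tarball was built on hopper at /tmp/liberation.tar.gz (the host path and filename below are placeholders, not real values from the build):

# on the LittleFe head node (lf0)
scp root@hopper:/tmp/liberation.tar.gz /usr/local/    # or fetch it with wget
cd /usr/local
tar xzf liberation.tar.gz
# then follow the Liberation Instructions linked above, skipping the download step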

Things worth editing

There are a few important things worth editing in the bccd-liberation checkout. First, there are three overlays: directory structures that will be copied over the standard BCCD on the server (lf0), on the clients (lfN, N>0), or on both.

  • x86-server
  • x86-client
  • x86-common

Beyond the overlays there are two scripts that are run: liberate and prepareserver. Commands in liberate should be those needed to copy the BCCD onto a single machine. Commands in prepareserver should be those needed to set up lf0 as a server for PXE booting the client nodes; anything that edits the clients' install also belongs in prepareserver. A rough illustration of the split follows.
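
The authoritative contents of both scripts live in the checkout; the lines below are only a hypothetical illustration of which kind of command belongs where (the overlay copies and daemon names are assumptions, not taken from the real scripts):

# liberate: commands that put the BCCD onto the single machine being liberated,
# e.g. laying the common and server overlays over the installed system
cp -a x86-common/. /
cp -a x86-server/. /

# prepareserver: commands that make lf0 a boot server for the client nodes,
# e.g. (re)starting the DHCP and TFTP services used for PXE (assumed daemon names)
/etc/init.d/dhcp3-server restart
/etc/init.d/tftpd-hpa restart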

Tagging & deploying a release

To tag and deploy a liberation package release, edit deploy-liberation-pkg.sh and change the $TAG variable to whatever string you want to tag the release with.

Now, run deploy-liberation-pkg.sh without any arguments. This will build liberation.tar.gz, liberation.tar.gz.sig and upload these files to bccd.cs.uni.edu.
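
For example (the tag string below is made up for illustration):

# in deploy-liberation-pkg.sh
TAG="liberation-2011-06-01"    # example tag string only

# then, from /root/bccd-liberation/ on hopper
./deploy-liberation-pkg.sh     # builds liberation.tar.gz and liberation.tar.gz.sig, uploads to bccd.cs.uni.edu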

Little-Fe PPC

This page contains information about the 4-node PPC (Pegasos) version of Little-Fe.

This version of Little-Fe PPC is based on a Debian GNU/Linux installation. It employs UnionFS to facilitate consolidation of system and cluster software on a single hard drive (attached to lf0). All other nodes netboot from the main image by masking the server-specific files with a lightweight overlay.
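
As a rough sketch of the idea only (the mount points, export path, and options below are illustrative assumptions, not the actual Little-Fe PPC configuration), each client stacks a small writable layer over the read-only netbooted root so that node-specific files mask the server's copies:

# illustrative only: writable tmpfs branch layered over the read-only NFS root
mount -t tmpfs tmpfs /overlay
mount -t unionfs -o dirs=/overlay=rw:/nfsroot=ro unionfs /union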

  • lf0 can be found at lf-ppc.cluster.earlham.edu

Documentation and bug reporting

We are documenting the production of Little-Fe in two ways: First, bugs are filed in bugzilla (and hopefully fixed). Second, we're putting flesh on a set of instructions to build a replica of the original Little-Fe PPC. The latter is probably (but not necessarily) based on the former. The distinction is mainly that Bugzilla will show how things were done wrong, while the wiki-based instructions will show how to do things right the first time.

Adding/Setting up a new node in the Debian Unified Root

These are current as of May 19, 2006.

Server Configuration

  • add MAC addresses for the 100Mbit and Gbit network interfaces to /etc/dhcp3/dhcpd.conf
  • restart dhcp with /etc/init.d/dhcp3-server restart
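
A host entry in /etc/dhcp3/dhcpd.conf might look like the following (the hostname, MAC address, and IP address are placeholders; one entry is needed per interface whose MAC should get an address):

host lf1-100mbit {
    hardware ethernet 00:11:22:33:44:55;   # placeholder MAC
    fixed-address 192.168.10.11;           # placeholder address
}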

Client Firmware

These are the current client firmware settings necessary to boot lf[1-n] via the Debian unified root setup. These must be set on every single client node in order to netboot successfully. If they are not there already, add or correct the following lines in nvedit:

setenv boot-device eth:dhcp,0.0.0.0,,0.0.0.0
setenv boot-file vmlinuz-2.6.15.6 init=/linuxrc root=/dev/nfs ip=dhcp console=ttyS1,115200n1

After this is set up, type setenv auto-boot? true at the main firmware prompt (not in nvedit). Reboot to read in the new environment variables, or set them manually and then type boot.

Creating a new Little-Fe PPC

Follow the instructions on the Diskless Cluster Setup page.


Fossilizing the BCCD

Your mileage may vary, and we are not updating this page any longer. We use it internally for reference, but we are now working on Liberating the BCCD (see the main Cluster Computing Group page).

This section outlines the steps required to disassemble a BCCD ISO, manifest it on a hard disk drive, and boot from that hard drive. Most or all of this must be done as root.

Mount the Images

These scripts, used for the lnx-bbc project, might prove to be helpful in working with the BCCD images: FossilScripts

The Basic Images

cd /mnt # or where ever
mkdir bccd
mount -t iso9660 -o loop bccd-ppc-2005-08-30T00-0500.iso bccd

# on PPC
mkdir initrd
gunzip < bccd/boot/root.bin > initrd.ext2
mount -t ext2 -o loop initrd.ext2 initrd

# on x86
mkdir lnx
mount -o loop bccd/lnx.img lnx
mkdir root
gunzip < lnx/root.bin > root.ext2
mount -o loop root.ext2 root

The singularity

First, decompress the singularity with the cloop utility extract_compressed_fs:

wget http://developer.linuxtag.net/knoppix/sources/cloop_0.66-1.tar.gz
tar xzf cloop_0.66-1.tar.gz
cd cloop-0.66
vim Makefile # add APPSONLY=1 at the top
make zcode
make extract_compressed_fs
./extract_compressed_fs ../bccd/singularity > ../singularity.romfs
cd ..

The latest currently-available version of cloop (2.01) doesn't work for this purpose; others might (I didn't experiment), but 0.66 definitely does.

Next, mount the singularity (you must have romfs support compiled into the kernel):

mkdir singularity
mount -t romfs -o loop singularity.romfs singularity

Extract the singularity

cd singularity
tar cf - . | (cd /path/to/destination/partition;tar xvf -)

Create a working initrd

Create an initrd for fossilized booting with the linuxrc at http://ppckernel.org/~tobias/bccd/linuxrc:

cd /mnt/root # or where ever you mounted root.ext2 (from root.bin)
wget http://ppckernel.org/~tobias/bccd/linuxrc # replace the existing linuxrc
chmod a+x linuxrc
cd ..
umount root
gzip < root.ext2 > /path/to/destination/partition/boot/root.bin

Edit singularity-init

Add / remount read-write hook

Edit /sbin/singularity-init to remount / read-write during init, using the following command:

debug "Remounting / read-write..."
mount -o rw,remount /dev/root /

This can be placed somewhere around the proc mount command.

Prepare for Fossilization of /mnt/rw

Comment out lines concerning /mnt/rw

# mount -n -t tmpfs none /mnt/rw

Add network setup to singularity-init

ifconfig eth0 inet 192.168.10.1 netmask 255.255.255.0 broadcast 192.168.10.255 up
route add default gw 192.168.10.1 eth0

Configure the bootloader

Configure your bootloader (e.g., yaboot, lilo, or grub) as follows:

  • boot the kernel /boot/vmlinux on PowerPC or /boot/bzImage on x86
  • use the initrd /boot/root.bin
  • execute the init script /linuxrc.

Here is a sample lilo.conf.
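
The sample below is a reconstruction consistent with the bullets above, not the original file; the boot device and timeout values in particular are assumptions:

boot=/dev/hda              # assumed boot device
prompt
timeout=50
image=/boot/bzImage        # x86 kernel, per the list above
    label=bccd
    initrd=/boot/root.bin  # the initrd built above
    append="init=/linuxrc" # run the modified linuxrc as init
    read-only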

Setup Compatibility Nodes

Add the following to /linuxrc:

  • /sbin/devfsd /dev

De-Obfuscation

Remove Unneeded Symlinks

The deal is that the BCCD is now on a different (read/writeable) medium: a hard disk. Let's un-obfuscate some of the workings. An ls -l on / will reveal a few symlinks: /etc, /home, /local, /tmp, and /var. All of these point to an appropriate directory in /mnt/rw. Since a CD is not writeable, the stock BCCD creates a ramdisk, copies files from /etc.ro/ to /mnt/rw/etc/ (and likewise for the other directories), and the /etc symlink then points at that writeable copy.

Here's the works:

rm /etc /home /local /tmp /var
mkdir /etc /home /local /tmp /var
cd /etc.ro   && tar cf - . | (cd /etc/;   tar xvf -)
cd /home.ro  && tar cf - . | (cd /home/;  tar xvf -)
cd /local.ro && tar cf - . | (cd /local/; tar xvf -)
cd /tmp.ro   && tar cf - . | (cd /tmp/;   tar xvf -)
cd /var.ro   && tar cf - . | (cd /var/;   tar xvf -)

You're almost done, except you should remove the place in the scripts where the bootup copies the files from /<dir>.ro/. Just comment out the lines in /sbin/singularity-init that do the copying (around line 105):

# cp -a /etc.ro /mnt/rw/etc
# cp -a /var.ro /mnt/rw/var

While you're editing /sbin/singularity-init, also comment out these lines:

# rsync -plarv /lib/mozilla-1.6/plugins.ro/ /mnt/rw/plugins/
# chmod 1777 /mnt/rw/tmp
# debug "Making /mnt/rw/tmp/build links"
# mkdir -p /mnt/rw/tmp/build/
# mkdir -p /mnt/rw/tmp/build/staging
# mkdir -p /mnt/rw/tmp/build/staging/singularity
# mkdir -p /mnt/rw/tmp/build/staging/singularity/image
# ln -s /lib /mnt/rw/tmp/build/staging/singularity/image/lib

Configure gcc Environment

Though the BCCD is now fossilized onto the hard drive, the gcc environment does not know this, as it was compiled for the CD. It will look for files in (effectively) /tmp/build/staging/singularity/image/lib ... the very directories and symlink whose creation we just commented out. Since /tmp is now a fossilized directory, just create the symlink inside of it:

mkdir -p /tmp/build/staging/singularity/image
cd /tmp/build/staging/singularity/image/
ln -s /lib

TODO

  • fix the mounting commands so that / is only mounted once (?)
  • decide how to handle directories like /etc that are mounted in ram at /mnt/rw/etc and populated with items from /etc.ro (leave as is, or create a script to simplify the setup for hard disk booting?)
    • Kevin's done this, we just need to document
      • DONE
  • modify init scripts to make them appropriate for hard disk booting (e.g., remove the "Enter a password for the default user" prompt)
    • This appears to be done
  • finish setting up networking
  • create a patch against the original singularity image for /sbin/singularity-init and other modified configuration files for automating the fossilize process
  • package up any binary additions with list-packages (see the package instructions in the wiki)
  • last but not least, keep track of all the changes we make!

Good luck! Direct questions and comments to tobias@cs.earlham.edu.


Intel Letter

Dr. Stephen Wheat, Director
HPC Platform Office
Intel, USA

Dr. Henry Neeman of the OU Supercomputing Center for Education & Research (OSCER) suggested that we write you about the following issue.

For the past several years, the National Computational Science Institute (www.computationalscience.org) has been teaching workshops on Computational Science & Engineering, and on Parallel & Cluster Computing, to hundreds of faculty across the United States. Our subteam has taken responsibility for teaching the Parallel & Cluster Computing workshops, including three held at the University of Oklahoma and co-sponsored by OSCER, hosted by Dr. Neeman. He believes that there may be substantial synergy between our goals and Intel's.

Recently we have been tasked by the SuperComputing conference series to design and implement the education program for the SC07-SC09 conferences. As you may be aware, the overwhelming majority of the High Performance Computing (HPC) resources deployed currently are dedicated to research rather than education -- yet the nation faces a critical shortage of HPC expertise, largely because of the lack of a broad enough base of university faculty trained in HPC pedagogy.

To address this situation, our group spends a significant portion of our time designing and implementing software and hardware solutions to support teaching parallel and cluster computing and CSE. The Bootable Cluster CD (http://bccd.cs.uni.edu) and Little-Fe (http://cluster.earlham.edu/projects.html) are two manifestations of our work. The BCCD is a live CD that transforms an x86-based lab into an ad-hoc computational cluster. Little-Fe is an inexpensive, portable, 4-8 node computational cluster. The principal cost component of the Little-Fe design is the motherboard and CPUs. Our design is based on small form-factor motherboards, such as the Intel D945GPMLKR Media Series boards.

In order to support computational science curriculum development and delivery we are gearing-up to build a number of Little-Fe units, approximately 20, for use by science faculty across the country. These faculty members, working with their undergraduate student researchers, will develop curriculum modules and deliver workshops and presentations in a variety of venues. The curriculum and workshops are preparatory activities for the education program we are implementing for SC07-SC09.

Because of financial considerations, we currently find ourselves forced to use low cost non-Intel components in our Little-Fe units. However, we are aware that Intel has been a longtime supporter of HPC research and education, and that you in particular have been an advocate for precisely the kind of work that our team has been pursuing.

In light of these points, we wonder if Intel might be interested in either donating a number of these boards and CPUs or permitting us to purchase them at a discount. In exchange we could provide Intel with appropriate credit both on the physical units and in our articles about the project.

Thank-you for your time.

Paul Gray
David Joiner
Thomas Murphy
Charles Peck


Intel design

Intel® Desktop Board D945GPMLKR Media Series
http://www.intel.com/products/motherboard/d945gpm/index.htm
microATX (9.60 inches by 9.60 inches [243.84 millimeters by 243.84 millimeters])
10/100/1000 interface and 10/100 interface

HPC Wire article (very stale now)

What is Little-Fe

One of the principal challenges to computational science and high performance computing (HPC) education is that many institutions do not have access to HPC platforms for demonstrations and laboratories. Paul Gray's Bootable Cluster CD (BCCD) project (http://bccd.cs.uni.edu) has made great strides in this area by making it possible to non-destructively, and with little effort, convert a computer lab of Windows or Macintosh computers into an ad-hoc cluster for educational use. Little-Fe takes that concept one step further by merging the BCCD with an inexpensive design for an 8 node portable computational cluster. The result is a machine that weighs less than 50 pounds, easily and safely travels via checked baggage on the airlines, and sets up in 10 minutes wherever there is a 110V outlet and a wall to project an image on. The BCCD's list-packages feature supports curriculum modules in a variety of natural science disciplines, making the combination of Little-Fe and the BCCD a ready-to-run solution for computational science and HPC education.

In addition to making a case for the value of Little-Fe-like clusters, this article describes Little-Fe's hardware and software configuration including plans for a "do-it-yourself" version.

Why Little-Fe is Useful

Besides being fundamentally cool, Little-Fe's principal edge is resource availability for computational science education. To teach a realistic curriculum in computational science, there must be guaranteed and predictable access to HPC resources. There are currently two common barriers to this access. Local policies typically allocate HPC resources under a "research first, pedagogy second" prioritization scheme, which often precludes the use of "compute it now" science applications in the classroom. The second barrier is the capital and on-going maintenance costs associated with owning an HPC resource, which affects most mid-size and smaller educational institutions.

While relatively low-cost Beowulf-style clusters have improved this situation somewhat, HPC resource ownership is still out of reach for many educational institutions. Little-Fe's total cost is less than $2,500, making it easily affordable by a wide variety of K-16 schools.

Little-Fe's second important feature is ease of use, both technically and educationally. Our adoption of the BCCD as the software distribution toolkit makes it possible to smoothly and rapidly advance from bare hardware to science. Further, we have minimized ongoing maintenance since both hardware and software are standardized. Paul Gray from the University of Northern Iowa has successfully maintained the BCCD for many years now via a highly responsive and personable web presence directly benefiting all BCCD users.

The BCCD also provides a growing repository of computational science software and curriculum modules. We are committed to expanding these modules to enhance the use of Little-Fe. More importantly, we seek to advance the amount of quality computational science woven into the classroom, into laboratory explorations, and into student projects. As others build their Little-Fes, our efforts will leverage their support through the development of additional open source curriculum modules.

Portability is useful in a variety of settings, such as workshops, conferences, demonstrations, and the like. Portability is also useful for educators, whether illustrating principles in the K-12 arena or being easily passed from college classroom to college classroom. Little-Fe is an immediate, full-fledged, available computational resource.

Little-Fe's Hardware

Little-Fe v1 consisted of eight Travla mini-ITX VIA computers placed in a nearly indestructible Pelican case. To use it you took all the nodes, networking gear, power supplies, etc. out of the case and set it up on a table. Each node was a complete computer with its own hard drive. While this design met the portability, cost, and low power design goals, it was overweight and deployment was both time-consuming and error-prone.

Successive versions of Little-Fe have moved to a physical architecture where the compute nodes are bare Mini-ITX motherboards mounted in a custom-designed cage, which in turn is housed in the Pelican case. To accomplish this we stripped the Travla nodes down to just their motherboards and replaced their relatively large power supplies with daughter-board-style units that mount directly to the motherboard's ATX power connector. These changes saved both space and weight. Little-Fe v2 and beyond use diskless compute nodes; that is, only the head node has a disk drive. The mechanics of this setup are described in the software section of this article. Removing 7 disk drives from the system reduced power consumption considerably and further reduced the weight and packaging complexity.

The current hardware manifest consists of the parts listed at http://contracosta.edu/hpc/resources/Little_Fe/. As we continue to develop Little-Fe the parts we employ will evolve.

Assembling Little-Fe consists of:

  1. Mounting the power supplies to the motherboards
  2. Installing the motherboards in the cage
  3. Mounting the system power supply to the cage
  4. Cabling the power supplies
  5. Mounting the Ethernet switch and installing the network cabling
  6. Mounting the disk drive and CD-RW/DVD drive to the cage and installing the power and data cables
  7. Installing the cooling fans in the cage
  8. Plugging in the monitor, keyboard, and mouse
  9. Performing the initial power-up tests
  10. Configuring the BIOS on each motherboard to boot via the LAN and PXE

Cooling Little-Fe has been an on-going challenge which we have just recently begun to solve. The problem hasn't been the total amount of heat generated, but rather airflow to particular locations on the motherboards during compute-intensive loads. By re-using the 25mm fans which came with the Travla cases we have been able to improve inter-board cooling within the motherboard cage. The importance of testing heat dissipation during a variety of system loads became painfully clear to us during an NCSI Parallel Computing Workshop at the University of Oklahoma in August 2005. After a presentation on Little-Fe we left it running a particularly large POV-Ray ray-tracing job. Not 10 minutes later there was a dramatic "pop" and a small puff of smoke as one of the voltage regulators on one of the motherboards went up in smoke. Fortunately Little-Fe can adapt easily to a 7-or-fewer node configuration.

For transportation the cage simply sits inside the Pelican case. The base of the cage is sized so that it fits snugly in the bottom of the case, which prevents Little-Fe from moving around inside the box. The addition of a single layer of foam padding on each of the six sides further cushions Little-Fe.

Little-Fe's Software

Early versions of Little-Fe used the Debian Linux distribution as the basis for the system software. This was augmented by a wide variety of system, communication, and computational science packages, each of which had to be installed and configured on each of the 8 nodes. Even with cluster management tools such as C3 this was still a time-consuming process. One of our primary goals has been to reduce the friction associated with using HPC resources for computational science education. This friction is made up of the time and knowledge required to configure and maintain HPC resources. To this end we re-designed Little-Fe's system software to use Paul Gray's Bootable Cluster CD (BCCD) distribution. The BCCD comes ready-to-run with all of the system and scientific software tools necessary to support a wide range of computational science education. Highlights include:

  • gcc, g77, and development tools, editors, profiling libraries and debugging utilities
  • Cluster Command and Control (C3) tools
  • MPICH, LAM-MPI and PVM in every box
  • The X Window System
  • OpenMosix with openmosixview and userland OpenMosix tools
  • Full openPBS Scheduler support
  • octave, gnuplot, Mozilla's Firefox, and about 1400 userland utilities
  • Network configuration and debugging utilities
  • Ganglia and other monitoring packages

Another important aspect of the BCCD environment is the ability to dynamically install packages to tailor the running environment. The BCCD distribution offers supplemental binary packages that are designed to be added as desired to a running BCCD image to extend curricular explorations, to promote research, to further profile or debug applications, and so on. These supplemental packages, installable using the BCCD "list-packages" tool, add

  • functionality, such as program profiling support through perfctr and the Performance API utilities
  • curricular components, such as lesson plans that leverage Gromacs
  • research tools such as planned support for mpiBLAST and CONDOR
  • more utilities, such as pyMPI support
  • and workarounds for less-than-optimal configurations

More information about the BCCD can be found at http://bccd.cs.uni.edu.

While the name would imply that it is exclusively used for running off of a CDROM image, the BCCD has evolved to support many other forms of operation including network or PXE-booting, running from a RAM disk, and even recent success running off of USB "pen" drives. The BCCD is designed to be completely non-intrusive to the local hard drive; that is, you boot from the CD. For teaching lab configurations this is very important. Little-Fe's environment permits permanent installation on the local hard drive, which both simplifies on-going use and improves performance for some types of loads. In order to accomplish this "fossilization" on the head node's hard disk the following steps are performed:

  • Download and burn the current BCCD x86 ISO image from http://bccd.cs.uni.edu.
  • Place the CD in Little-Fe's drive and boot the head node.
  • Login as root.
  • Follow the Little-Fe Bootstrap instructions at http://cluster.earlham.edu.
  • Reboot the head node.
  • Login as root.
  • Run "$ list-packages" and install the Little-Fe Configuration package.
  • The configuration package will be downloaded and run by list-packages. The script will prompt you for information about your hardware and network configuration.
  • Reboot the head node.
  • Boot each of the compute nodes.
  • Login as bccd.
  • Start teaching computational science.

This set of steps is only required when the BCCD is initially installed on the Little-Fe hardware. Successive uses only require booting the head node and then each of the compute nodes.

The BCCD image was motivated and its evolution is sustained by efforts in the teaching of high performance computing. Curricular modules developed for the BCCD are installed through the above-mentioned "list-packages" tool. Some of the curricular modules that have been developed for the BCCD image, and used in the recent week-long National Computational Science Institute workshop on parallel and cluster computing held this past summer at the OU Supercomputing Center for Education and Research, include content on molecular dynamics using Gromacs, application performance evaluation using PAPI, and linear algebra explorations that compare BLAS implementations from LINPACK, ATLAS, and Kazushige Goto. Through these and other curricular packages that are being developed, the LittleFe environment boasts eye-catching educational content which is extremely robust and requires minimal system setup.

Future Plans for Little-Fe

Little-Fe is very much a work-in-progress. Each time we use it as part of a workshop, class, etc. we learn more about ways to improve the design and extend its reach to additional educational settings. We are currently working on these items:

  • Standardization of motherboard cage design for commercial fabrication.
  • Detailed, step-by-step, plans for assembling the hardware and installing the software.
  • Gigabit network fabric.
  • Head node motherboard with a full set of peripheral connections, 7 compute nodes with just Ethernet. Compute node motherboards can then be cheaper, consume less power, and generate less heat.
  • An as yet unrealized design goal was to be able to use Little-Fe with no external power. We originally thought to do this just via a UPS. Currently we are considering solar panels to support truly standalone usage. We are also considering MPI's regenerative braking feature, which is built into some implementations of MPI_Finalize.
  • Cheaper/lighter/faster. Moore's law will affect us, as it will all other compute technology. For instance, we hear that in five years we will have a 64 core processor on our desks. Is it reasonable to expect that the five-years-in-the-future Little-Fe will have at least 256 processors and be capable of exploring SMP, ccNUMA, and cluster architectures simultaneously? We hope so.


Hardware Manifest

Last updated 2005-10-19; this is now more current than the HPC Wire article.

The current hardware manifest consists of:


FAQ

Question: What do I do when the Via EPIA-M motherboard immediately exits the PXE boot ROM before even issuing a DHCP query?

Answer: Use the red jumper located near the battery on the motherboard to drain the capacitors and thereby reset the NVRAM. To do this, move the jumper from the two pins it is stored on to the other two pins and leave it there for about 10 seconds.


Setup

Pymol

  1. As root, run startx
  2. When X is started, open a terminal.
  3. cd /home/gromacs
  4. pymol 2LZM.pdb
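
The same steps as a single shell session (run as root on the head node):

startx                 # as root; brings up X
# in a terminal inside X:
cd /home/gromacs
pymol 2LZM.pdb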



