Folding@Clusters

This page contains notes about Folding@Clusters and about Earlham's involvement in Folding@Home (fah, folding at home, fahclient, etc.) more generally.

The Earlham CS Folding@Home team page contains our most recent data.

Specific resources

Original notes from the cluster website about Folding@Clusters

Folding@Clusters

This software tool is designed to make high-performance computational resources easily available to chemists and biologists for running simulations of large biomolecules using open-source molecular dynamics packages.

Josh Hursey, Josh McCoy, Charles Peck, and John Schaefer gave a presentation on this work at SuperComputing 2004 (SC04), in the Purdue University research area, in November 2004. We also presented this work as a poster at SIAM's Computational Science and Engineering conference in February 2005.

An article based on this work appeared in the November 2005 issue of Dr. Dobb's Journal.

The abstract for our SC04 submission follows:

Instead of traditional, tightly coupled massively parallel computing, current distributed computing projects such as SETI@home or Folding@Home use a client-server model to perform embarrassingly parallel computing, allowing one to tap resources (hundreds of thousands of CPUs in PCs throughout the world) that would be impossible to obtain by other means. However, certain algorithms could greatly benefit from a hybrid approach, combining the massive resources available to distributed computing with the tight coupling traditionally found only in supercomputers.

Towards this end, Folding@Clusters is an adaptive framework for harnessing tightly coupled cluster resources for protein folding research. It combines capability discovery, load balancing, process monitoring, and checkpoint/restart services to provide a platform for molecular dynamics simulations on a range of grid-based parallel computing resources.
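
As a small illustration of the capability discovery piece, the hedged sketch below queries CPU count and physical memory with sysconf; the sysconf names used are glibc extensions commonly available on Linux, and the report format is invented here, not the framework's actual code.

  /* Hypothetical capability-discovery sketch: gather the node facts a
     framework like this would need before sizing a work unit. */
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      long ncpus      = sysconf(_SC_NPROCESSORS_ONLN);  /* online CPU count */
      long page_size  = sysconf(_SC_PAGESIZE);          /* bytes per page */
      long phys_pages = sysconf(_SC_PHYS_PAGES);        /* pages of physical RAM */
      double mem_mib  = (double)page_size * (double)phys_pages / (1024.0 * 1024.0);

      /* A node tier could report these upward so the cluster tier can decide
         how many ranks to start and how large a work unit to hand out. */
      printf("cpus=%ld mem_mib=%.0f\n", ncpus, mem_mib);
      return 0;
  }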

The raw computing power available for scientific inquiry continues to grow, while the abstraction level of the tools available to scientists does not advance in a similar manner. Folding@Clusters provides chemists investigating protein folding with a high-level interface to a variety of parallel compute architectures, e.g. lab clusters, Beowulf clusters, large SMP machines, and clusters of SMP machines.

Folding@Clusters uses open source building blocks, such as the GROMACS molecular dynamics package and the LAM-MPI communications library, to provide the lowest-level functionality. Building on this foundation we construct a three-tier architecture: cluster, node, and science core, which provides a basis on which to abstract the process of performing a molecular dynamics simulation. This includes work unit preparation, distribution, and result aggregation on a compute resource with arbitrary capabilities (CPU speed, CPU count, memory, etc.).
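
To make the distribution and aggregation step concrete, the sketch below shows the general scatter/compute/gather pattern using the standard MPI C API (which LAM-MPI implements). The work-unit layout, chunk size, and the doubling loop standing in for the science core are invented for illustration; this is not the actual Folding@Clusters code, which drives GROMACS at that point.

  /* Hypothetical node-tier sketch: rank 0 prepares a work unit, every rank
     computes on its slice, and partial results are aggregated on rank 0.
     Build with an MPI C compiler, e.g. mpicc node_tier.c -o node_tier */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define CHUNK 4   /* elements per rank; invented for illustration */

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      double *work_unit = NULL;
      double *results   = NULL;

      if (rank == 0) {
          /* Work-unit preparation: rank 0 builds the whole unit. */
          work_unit = malloc(sizeof(double) * CHUNK * size);
          results   = malloc(sizeof(double) * CHUNK * size);
          for (int i = 0; i < CHUNK * size; i++)
              work_unit[i] = (double)i;
      }

      /* Distribution: each rank receives its slice of the work unit. */
      double my_chunk[CHUNK];
      MPI_Scatter(work_unit, CHUNK, MPI_DOUBLE,
                  my_chunk,  CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

      /* Stand-in for the science core; the real framework runs GROMACS here. */
      for (int i = 0; i < CHUNK; i++)
          my_chunk[i] *= 2.0;

      /* Result aggregation: partial results are gathered back on rank 0. */
      MPI_Gather(my_chunk, CHUNK, MPI_DOUBLE,
                 results,  CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

      if (rank == 0) {
          printf("aggregated %d results\n", CHUNK * size);
          free(work_unit);
          free(results);
      }

      MPI_Finalize();
      return 0;
  }

Only the scatter/compute/gather shape carries over from this toy example; in the real framework the per-rank work is a GROMACS run and the cluster tier sizes the work unit to the capabilities discovered on the resource.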

Other notes

  • See the config file /etc/fahclient/config.xml on layout's or Whedon's compute nodes. The working directory is /var/lib/fahclient; tail the log file there to see what's going on - there's a lot of waiting for a work unit.
  • Team stats are under team number 577. Give the resource name as part of the config so that Whedon and layout are tracked separately (a hedged example config is sketched after this list).
  • Init script: /etc/init.d/fahclient --help. Mostly it sits and runs; it starts on reboot only on compute nodes.
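
For reference, here is a minimal sketch of what such a config.xml might contain, assuming the stock FAHClient v7 options (user, team, power, slot). The user value with a per-cluster suffix is only an assumption about how Whedon and layout were kept separate in the team stats; the actual settings live in /etc/fahclient/config.xml on the nodes and will differ from the values shown here.

  <config>
    <!-- Hypothetical example, not the actual Earlham configuration. -->
    <user value='EarlhamCS-whedon'/>  <!-- assumed: resource name folded into the user value -->
    <team value='577'/>               <!-- team number from the note above -->
    <power value='full'/>
    <slot id='0' type='CPU'/>         <!-- one CPU folding slot; GPU slots are declared similarly -->
  </config>

On a stock install the log referred to in the first bullet is /var/lib/fahclient/log.txt (path assumed); most of what shows up there is the client waiting for, downloading, and crunching work units.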