Folding@Clusters
This page contains notes about Folding@Clusters and about Earlham's involvement in Folding@Home (fah, folding at home, fahclient, etc.) more generally.
The Earlham CS Folding@Home team page (https://stats.foldingathome.org/team/577) contains our most recent data.
Specific resources

- Checkpoint and Restarting
- Forcing checkpoints in GROMACS
- Using the F@C PBS script
Original notes from the cluster website about Folding@Clusters
Folding@Clusters
This software tool is designed to make high-performance computational resources easily available to chemists and biologists for running simulations of large biomolecules using open-source molecular dynamics packages.

Josh Hursey, Josh McCoy, Charles Peck, and John Schaefer presented this work in the Purdue University research exhibit area at SuperComputing 2004 (SC04) in November 2004. We also presented it as a poster at SIAM's Computational Science and Engineering conference in February 2005.

An article based on this work appears in the November 2005 issue of Dr. Dobb's Journal.
The abstract for our SC04 submission follows:
Instead of traditional, tightly coupled massively parallel computing, current distributed computing projects such as SETI@home or Folding@Home use a client-server model to perform embarrassingly parallel computing, allowing one to tap resources (hundreds of thousands of CPUs in PCs throughout the world) that would be impossible to obtain by other means. However, certain algorithms could greatly benefit from a hybrid approach, combining the massive resources available to distributed computing with the tight coupling traditionally found only in supercomputers.

Toward this end, Folding@Clusters is an adaptive framework for harnessing tightly coupled cluster resources for protein folding research. It combines capability discovery, load balancing, process monitoring, and checkpoint/restart services to provide a platform for molecular dynamics simulations on a range of grid-based parallel computing resources.

The raw computing power available for scientific inquiry continues to grow, while the abstraction level of the tools available to scientists does not advance in a similar manner. Folding@Clusters provides chemists investigating protein folding with a high-level interface to a variety of parallel compute architectures, e.g., lab clusters, Beowulf clusters, large SMP machines, and clusters of SMP machines.

Folding@Clusters uses open-source building blocks, such as the GROMACS molecular dynamics package and the LAM-MPI communications library, to provide the lowest-level functionality. Building on this foundation, we construct a three-tier architecture (cluster, node, and science core) that provides a basis for abstracting the process of performing a molecular dynamics simulation. This includes work unit preparation, distribution, and result aggregation on a compute resource with arbitrary capabilities (CPU speed, CPU count, memory, etc.).
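As a concrete illustration of these building blocks, here is roughly what a single work unit looks like when run by hand with LAM-MPI and an MPI-enabled GROMACS build; Folding@Clusters automates these steps along with work unit preparation, distribution, and result aggregation. The host file, file names, and process count below are placeholders, and exact mdrun flags vary by GROMACS version, so treat this as a sketch rather than the framework's actual command sequence.

    # Boot the LAM-MPI run-time environment on the nodes listed in the boot schema file
    lamboot -v hostfile

    # Preprocess the simulation inputs into a portable run file (work unit preparation)
    grompp -f run.mdp -c conf.gro -p topol.top -o topol.tpr

    # Run the MPI-enabled GROMACS mdrun across 8 processes (the tightly coupled part)
    mpirun -np 8 mdrun_mpi -s topol.tpr

    # Restarting from a checkpoint looks like this in modern GROMACS releases;
    # the GROMACS 3.x builds of that era needed tpbconv to continue a run instead.
    # mpirun -np 8 mdrun_mpi -s topol.tpr -cpi state.cpt

    # Shut the LAM run-time down once the work unit completes
    lamhalt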
Other notes
- See the config file /etc/fahclient/config.xml on layout's or Whedon's compute nodes; /var/lib/fahclient is the working directory. Tail the log file there to see what is going on - there is a lot of waiting for a work unit. (See the sketches after this list.)
- Team stats are tracked under team 577. Give the resource name as part of the config so that Whedon and layout are tracked separately.
- Init script: /etc/init.d/fahclient --help. Mostly the client just sits and runs; it starts on reboot only on compute nodes.
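For reference, here is a minimal sketch of what /etc/fahclient/config.xml might contain on one of our machines. The donor name, passkey placeholder, and slot layout are illustrative assumptions, not copied from layout or Whedon; only the team number (577) comes from the notes above. Embedding the resource name in the donor string is one way to keep the two systems separate in the team stats.

    <config>
      <!-- Identity reported to the Folding@Home servers.  Team 577 is Earlham CS.
           The donor name below is a hypothetical example that embeds the resource
           name so that each system shows up separately in the stats. -->
      <user value="earlham-whedon"/>
      <team value="577"/>
      <passkey value="PASTE-PASSKEY-HERE"/>

      <!-- One CPU folding slot; FAHClient also supports GPU slots where hardware allows -->
      <slot id="0" type="CPU"/>
    </config>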
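And a few commands for checking on a running client. The log path assumes FAHClient's default log file name (log.txt) inside the working directory noted above, and the init script actions shown are the usual SysV ones.

    # Watch the client log; expect long stretches of waiting for a work unit
    tail -f /var/lib/fahclient/log.txt

    # List what the init script supports, then check or restart the service
    /etc/init.d/fahclient --help
    /etc/init.d/fahclient status
    /etc/init.d/fahclient restart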