Cluster Information
Revision as of 10:27, 14 May 2009
Summer of Fun (2009)
Implementations of area under the curve (see the sketch after this list)
- Serial
- OpenMP (shared)
- MPI (message passing)
- MPI (hybrid message passing and shared memory)
- OpenMP + MPI (hybrid)
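A minimal sketch of the serial and OpenMP (shared) versions, assuming a left Riemann sum; the integrand f(x) = x*x, the interval [0,1], and the rectangle count are illustrative placeholders, not the assignment's actual parameters:

  /* Area under a curve via a left Riemann sum: serial loop plus an
   * OpenMP variant.  Build with: gcc -fopenmp auc.c -o auc */
  #include <stdio.h>

  static double f(double x) { return x * x; }   /* example integrand */

  int main(void) {
      const double a = 0.0, b = 1.0;            /* integration bounds */
      const long   n = 100000000;               /* number of rectangles */
      const double h = (b - a) / n;
      double sum = 0.0;

      /* OpenMP (shared memory): threads split the rectangles and the
       * reduction combines their partial sums.  Deleting the pragma
       * leaves the plain serial version. */
      #pragma omp parallel for reduction(+:sum)
      for (long i = 0; i < n; i++)
          sum += f(a + i * h) * h;

      printf("approximate area = %.10f\n", sum);
      return 0;
  }

The same loop, split by rank instead of by thread, is the starting point for the MPI version sketched further down.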
GalaxSee Goals
- Good piece of code that serves as a teaching example for n-body problems at petascale.
- Dials, knobs, etc. in place to easily control how work is distributed when running in parallel (see the hybrid sketch after this list).
- Architecture generally supports a hybrid model running on large-scale constellations.
- Produces runtime data that enables nice comparisons across multiple resources (scaling, speedup, efficiency).
- Render in BCCD, metaverse, and /dev/null environments.
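GalaxSee itself is not reproduced here, but the following is a minimal hybrid MPI + OpenMP sketch of the kind of work distribution described above, assuming a block split of bodies across ranks, an all-pairs placeholder interaction, and a schedule(dynamic) chunk size as the tunable knob; the names and constants are illustrative:

  /* Hybrid MPI + OpenMP n-body work-distribution sketch (not GalaxSee's
   * actual code).  Build with: mpicc -fopenmp nbody_sketch.c -o nbody_sketch */
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define N_BODIES 4096            /* total bodies; assumed divisible by rank count */

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      int local_n = N_BODIES / size;                      /* block distribution across ranks */
      int first   = rank * local_n;                       /* first body owned by this rank */
      double *x   = malloc(N_BODIES * sizeof(double));    /* all positions (toy data) */
      double *acc = malloc(local_n * sizeof(double));     /* accelerations for my block */

      for (int i = 0; i < N_BODIES; i++)
          x[i] = (double)i;        /* every rank builds the same toy positions */

      /* Threads split the rank's bodies; the dynamic chunk size is the
       * "knob" that trades load balance against scheduling overhead. */
      #pragma omp parallel for schedule(dynamic, 64)
      for (int i = 0; i < local_n; i++) {
          double a = 0.0;
          for (int j = 0; j < N_BODIES; j++) {            /* all-pairs interaction */
              double dx = x[j] - x[first + i];
              if (dx != 0.0)
                  a += 1.0 / (dx * dx);                   /* placeholder force term */
          }
          acc[i] = a;
      }

      /* A real timestep would exchange updated positions (e.g. with
       * MPI_Allgather) before the next iteration. */
      if (rank == 0)
          printf("rank 0 computed %d accelerations\n", local_n);

      free(x);
      free(acc);
      MPI_Finalize();
      return 0;
  }

Timing each configuration (ranks x threads) with the same body count is what yields the scaling, speedup, and efficiency comparisons mentioned above.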
- GalaxSee - scale to petascale with MPI and OpenMP hybrid.
- GalaxSee - render in-world and steer from in-world.
- Area under a curve - serial, MPI, and OpenMP implementations (MPI sketch after this list).
- OpenMPI - testing, performance.
- Start May 11th
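A minimal MPI (message passing) counterpart to the area-under-the-curve sketch above, with MPI_Wtime timing so runs on different resources can be compared for speedup (T_1 / T_p) and efficiency (speedup / p); the integrand, bounds, and rectangle count are again placeholders:

  /* MPI area under a curve: each rank sums a cyclic share of the
   * rectangles, then MPI_Reduce combines them on rank 0.
   * Build with: mpicc auc_mpi.c -o auc_mpi */
  #include <mpi.h>
  #include <stdio.h>

  static double f(double x) { return x * x; }   /* example integrand */

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const double a = 0.0, b = 1.0;
      const long   n = 100000000;
      const double h = (b - a) / n;

      double t0 = MPI_Wtime();

      double local = 0.0;
      for (long i = rank; i < n; i += size)     /* cyclic distribution of rectangles */
          local += f(a + i * h) * h;

      double total = 0.0;
      MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      double t1 = MPI_Wtime();
      if (rank == 0)
          printf("area = %.10f  time = %.3f s on %d ranks\n", total, t1 - t0, size);

      MPI_Finalize();
      return 0;
  }

Running the same binary under OpenMPI with increasing rank counts gives a simple basis for the testing and performance comparisons listed above.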
To Do
- Subscribe to ccg@cs.earlham.edu
- Work on two poster abstracts
- Work on team essay
(Old) To Do
BCCD Liberation
- v1.1 release - upgrade procedures
Curriculum Modules
- POVRay
- GROMACS
- Energy and Weather
- Dave's math modules
- Standard format, templates, how-to for V and V
LittleFe
- Explore machines from first Intel donation (notes and pictures)
- Build 4 SCED units
Infrastructure
- Masa's GROMACS interface on Cairo
- gridgate configuration, Open Science Grid peering
- hopper
SC Education
- Scott's homework (see the message)
Current Projects
Past Projects
General Stuff
- Todo
- General
- Hopper
- Howto's
- Networking
- 2005-11-30 Meeting
- 2006-12-12 Meeting
- 2006-02-02 Meeting
- 2006-03-16 Meeting
- 2006-04-06 Meeting
- Node usage
- Numbers for Netgear switches
- LaTeX Poster Creation
- Bugzilla Etiquette
Items Particular to a Specific Cluster
Curriculum Modules
Possible Future Projects
Archive
- TeraGrid '06 (Indianapolis, June 12-15, 2006)
- SIAM Parallel Processing 2006 (San Francisco, February 22-24, 2006)
- Conference webpage
- Little-Fe abstract
- Low Latency Kernel abstract
- Folding@Clusters
- Best practices for teaching parallel programming to science faculty (Charlie only)