Cluster Information

From Earlham CS Department
Revision as of 13:10, 18 June 2009

Summer of Fun (2009)

What works so far? B = builds, R = runs, W = works

                 |        area under curve        |      GalaxSee (standalone)
B-builds, R-runs | Serial | MPI | OpenMP | Hybrid | Serial | MPI | OpenMP | Hybrid
-----------------+--------+-----+--------+--------+--------+-----+--------+-------
acls             | BRW    | BRW | BRW    | BRW    |        | BR  |        |
bobsced0         | BRW    | BRW | BRW    | BRW    |        | BR  |        |
c13              |        |     |        |        |        | BR  |        |
pople            |        |     |        |        |        |     |        |
Charlie's laptop |        |     |        |        |        | BR  |        |

Implementations of area under the curve

  • Serial
  • OpenMP (shared)
  • MPI (message passing)
  • MPI (hybrid message passing and shared memory)
  • OpenMP + MPI (hybrid)
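
A minimal sketch of what the serial and OpenMP (shared) versions might look like, illustrative only and not the course's actual code; the integrand, segment count, and midpoint rule are assumptions:

  /* Sketch: midpoint-rectangle estimate of the area under f(x) on [a,b].
     Compile with -fopenmp for the shared-memory version; without OpenMP
     the pragma is ignored and the loop runs serially. */
  #include <stdio.h>
  #include <math.h>

  static double f(double x) { return sqrt(1.0 - x * x); }  /* quarter circle */

  int main(void)
  {
      const long   n = 10000000;           /* number of segments (assumed) */
      const double a = 0.0, b = 1.0;
      const double h = (b - a) / n;
      double sum = 0.0;

  #pragma omp parallel for reduction(+:sum)
      for (long i = 0; i < n; i++) {
          double x = a + (i + 0.5) * h;    /* midpoint of segment i */
          sum += f(x);
      }

      printf("area = %.12f\n", sum * h);   /* ~pi/4 for the quarter circle */
      return 0;
  }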

GalaxSee Goals

  • Good piece of code that serves as a teaching example for n-body problems at petascale.
  • Dials, knobs, etc. in place to easily control how work is distributed when running in parallel.
  • Architecture generally supports the hybrid model running on large-scale constellations (a rough sketch of a hybrid MPI+OpenMP force loop follows this list).
  • Produces runtime data that enables nice comparisons across multiple resources (scaling, speedup, efficiency).
  • Render in BCCD, metaverse, and /dev/null environments.
  • Serial version
  • Improve performance on math?
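
The hybrid model mentioned above usually means MPI between nodes and OpenMP threads within a node. A rough, illustrative sketch of such a direct-sum gravity step is below; this is not GalaxSee's actual source, and the body count, data layout, and leapfrog-style update are assumptions:

  /* Sketch: hybrid MPI+OpenMP direct-sum gravity step.  Every rank holds the
     full particle list, computes accelerations for its own block of bodies
     (outer loop threaded with OpenMP), then shares updated positions with
     MPI_Allgather.  Names, sizes, and the update scheme are illustrative. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>
  #include <mpi.h>

  #define N    1024      /* total bodies (assumed; must divide by rank count) */
  #define G    1.0       /* gravitational constant in code units */
  #define SOFT 1.0e-3    /* softening length to avoid singularities */
  #define DT   1.0e-3    /* time step */

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      int local_n = N / size, start = rank * local_n;
      double (*pos)[3] = malloc(sizeof(double[N][3]));
      double (*vel)[3] = calloc(N, sizeof(double[3]));
      double (*acc)[3] = calloc(local_n, sizeof(double[3]));

      for (int i = 0; i < N; i++)              /* deterministic initial layout, */
          for (int d = 0; d < 3; d++)          /* identical on every rank       */
              pos[i][d] = sin(1000.0 * i + 7.0 * d);

      for (int step = 0; step < 10; step++) {
          /* O(N^2) direct sum over this rank's block, threaded with OpenMP */
  #pragma omp parallel for
          for (int i = 0; i < local_n; i++) {
              int gi = start + i;
              acc[i][0] = acc[i][1] = acc[i][2] = 0.0;
              for (int j = 0; j < N; j++) {
                  if (j == gi) continue;
                  double dx = pos[j][0] - pos[gi][0];
                  double dy = pos[j][1] - pos[gi][1];
                  double dz = pos[j][2] - pos[gi][2];
                  double r2 = dx*dx + dy*dy + dz*dz + SOFT*SOFT;
                  double inv_r3 = G / (r2 * sqrt(r2));
                  acc[i][0] += dx * inv_r3;
                  acc[i][1] += dy * inv_r3;
                  acc[i][2] += dz * inv_r3;
              }
          }
          /* Update this rank's block, then share the new positions */
          for (int i = 0; i < local_n; i++)
              for (int d = 0; d < 3; d++) {
                  vel[start + i][d] += acc[i][d] * DT;
                  pos[start + i][d] += vel[start + i][d] * DT;
              }
          MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                        pos, local_n * 3, MPI_DOUBLE, MPI_COMM_WORLD);
      }

      if (rank == 0)
          printf("done: %d bodies on %d ranks\n", N, size);
      free(pos); free(vel); free(acc);
      MPI_Finalize();
      return 0;
  }

For the runtime comparisons mentioned in the goals, the usual definitions are speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p, where T(p) is the wall time on p processes.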

GalaxSee - scale to petascale with MPI and OpenMP hybrid.

  • GalaxSee - render in-world and steer from in-world.
  • Area under a curve - serial, MPI, and OpenMP implementations (a minimal MPI sketch follows this list).
  • OpenMPI - testing, performance.
  • Start May 11th
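
For the MPI flavor of area under a curve noted above, a minimal sketch (again illustrative, not the project's source; the round-robin distribution of segments across ranks is an assumption):

  /* Sketch: MPI version of the area-under-the-curve estimate.  Each rank sums
     a strided subset of the segments and MPI_Reduce combines the partials. */
  #include <stdio.h>
  #include <math.h>
  #include <mpi.h>

  static double f(double x) { return sqrt(1.0 - x * x); }  /* quarter circle */

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const long   n = 10000000;                /* segments (assumed value) */
      const double a = 0.0, b = 1.0, h = (b - a) / n;
      double local = 0.0, total = 0.0;

      for (long i = rank; i < n; i += size) {   /* round-robin distribution */
          double x = a + (i + 0.5) * h;
          local += f(x);
      }

      MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0)
          printf("area = %.12f\n", total * h);

      MPI_Finalize();
      return 0;
  }

With Open MPI this would be launched with something like mpirun -np 4 ./area-mpi.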

LittleFe

  • Testing
  • Documentation
  • Touch screen interface

To Do

  • Subscribe to ccg@cs.earlham.edu
  • Work on two poster abstracts
  • Work on team essay

Notes from May 21, 2009 Review

  • Combined Makefiles with defines to build on a particular platform
  • Write a driver script for GalaxSee à la the area-under-the-curve script; consider combining them
  • Schema (a sample record layout is sketched after this list)
    • date, program_name, program_version, style, command line, compute_resource, NP, wall_time
  • Document the process from start to finish
  • Consider how we might iterate over e.g. number of stars, number of segments, etc.
  • Add a command-line option to stat.pl that provides a Torque wrapper for the scripts.
  • Lint all code, consistent formatting
  • Install latest and greatest Intel compiler in /cluster/bobsced
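
One possible shape for a record in the proposed schema, written as a C struct with a one-line-per-run CSV writer; the field names come from the list above, while the types and the example values are assumptions for illustration:

  /* Sketch: one run-log record for the proposed schema.  Field names follow
     the review notes; the types and example values are illustrative only. */
  #include <stdio.h>

  struct run_record {
      char   date[20];             /* e.g. "2009-06-18 13:10" */
      char   program_name[32];     /* "GalaxSee", "area", ... */
      char   program_version[16];
      char   style[16];            /* serial, mpi, openmp, hybrid */
      char   command_line[256];
      char   compute_resource[32]; /* acls, bobsced0, c13, ... */
      int    np;                   /* process/thread count */
      double wall_time;            /* seconds */
  };

  static void write_csv(FILE *out, const struct run_record *r)
  {
      fprintf(out, "%s,%s,%s,%s,\"%s\",%s,%d,%.3f\n",
              r->date, r->program_name, r->program_version, r->style,
              r->command_line, r->compute_resource, r->np, r->wall_time);
  }

  int main(void)
  {
      struct run_record r = {
          "2009-06-18 13:10", "GalaxSee", "0.9", "mpi",
          "mpirun -np 8 ./GalaxSee", "bobsced0", 8, 42.7
      };
      write_csv(stdout, &r);       /* append one CSV line per run */
      return 0;
  }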

(Old) To Do

BCCD Liberation

  • v1.1 release - upgrade procedures

Curriculum Modules

  • POVRay
  • GROMACS
  • Energy and Weather
  • Dave's math modules
  • Standard format, templates, how-to for V and V (verification and validation)

LittleFe

Infrastructure

  • Masa's GROMACS interface on Cairo
  • gridgate configuration, Open Science Grid peering
  • hopper

SC Education

Current Projects

Past Projects

General Stuff

Items Particular to a Specific Cluster

Curriculum Modules

Possible Future Projects

Archive