https://wiki.cs.earlham.edu/api.php?action=feedcontributions&user=Skylar&feedformat=atom
Earlham CS Department - User contributions [en]
2024-03-29T06:36:25Z
User contributions
MediaWiki 1.32.1
https://wiki.cs.earlham.edu/index.php?title=Al-salam&diff=10648
Al-salam
2010-01-28T01:21:55Z
<p>Skylar: /* Quick breakdown */</p>
<hr />
<div>Al-Salam is the working name for the Earlham Computer Science Department's upcoming cluster computer.<br />
<br />
At the moment Al-Salam exists only as a $40,000 grant and a growing list of tentative specifications:<br />
<br />
== Latest Overarching Questions ==<br />
*Should we build this machine ourselves?<br />
*#Are we wasting our money and a learning opportunity by letting a vendor do the building for us?<br />
*#If it is cheaper, would it be a useful experience for the students this coming semester to take a large collection of hardware and turn it into a cluster?<br />
*# Yes, this was pretty clear from the email thread in early December.<br />
*How much if any GPGPU hardware do we want? 0, 1 or 2 nodes worth?<br />
*Do we want a high bandwidth/low latency network?<br />
**We do not. More expensive than it is worth.<br />
*What software stack do we want to run? Vendor supplied or the BCCD?<br />
**Both. Vendor-supplied base with a BCCD virtual machine<br />
***Will the virtual machine support CUDA?<br />
* Do compute nodes have spinning disk? <br />
** Compute nodes have a spinning disk; solid state is still too expensive.<br />
* What's on the local persistent store? /tmp? An entire OS?<br />
* Support<br />
** Consider getting the cheapest hardware support; losing a node isn't critical as long as the vendor sends a replacement quickly.<br />
<br />
== Parts List ==<br />
# Nodes - case, motherboard(s), power supply, CPU, RAM, GPGPU cards<br />
# Switch - managed, cut-through<br />
# Power distribution - rack-mount PDUs<br />
<br />
== Tentative Specifications ==<br />
<br />
=== Budget ===<br />
* $35,000 (leaving $5,000 for discretionary spending)<br />
<br />
=== Nodes ===<br />
* Intel Nehalem processors<br />
* Quad-core processors minimum<br />
** Six-core parts are still too expensive<br />
* 1.5GB RAM per core (i.e. 12GB for a dual-socket, quad-core node)<br />
<br />
=== Specialty Nodes ===<br />
* Two nodes should support CUDA GPGPU<br />
<br />
Educationally, we could expect to get significant use out of GPGPUs, but their production use would be limited.<br />
Increasing the variety of architectures available to students would be a bonus for education.<br />
<br />
=== Network ===<br />
* Gigabit Ethernet fabric with switch<br />
<br />
=== Disk ===<br />
* Spinning Disk<br />
<br />
=== OS ===<br />
* Virtual BCCD on top of the vendor-supplied base OS.<br />
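<br />
One way the virtual BCCD layer could be realized (a minimal sketch, not the actual deployment plan; the image path, memory size, and core count below are placeholder values) is to boot a BCCD image inside a KVM/QEMU guest on each compute node:<br />
<pre><br />
# Hypothetical example -- the bccd.iso path and the -m/-smp sizes are placeholders<br />
qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \<br />
  -cdrom /cluster/images/bccd.iso \<br />
  -net nic -net user<br />
</pre><br />
Whether CUDA devices can be passed through to such a guest is exactly the open question noted above.<br />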
<br />
== Quick breakdown ==<br />
<br />
=== Nodes ===<br />
<br />
{| class="wikitable" border="1"<br />
!<br />
! [[Al-salam#ION_Computer_Systems_Quotation_.2361116|ION #61116]]<br />
! [[Al-salam#ION_Computer_Systems_Quotation_.2361164|ION #61164]]<br />
! [[Al-salam#Silicon_Mechanics_Quote_.23174536|SM #174536]]<br />
! [[Al-salam#Newegg_Quote_.231|Newegg #1]]<br />
! [[Al-salam#Newegg_Quote_.232|Newegg #2]]<br />
|-<br />
| '''CPU'''<br />
| 72 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
| 128 2.4GHz Intel E5530<br />
| 112 2.4GHz Intel E5530<br />
|-<br />
| '''RAM'''<br />
| 108GB PC3-10600<br />
| 120GB PC3-10600<br />
| 120GB DDR3-1333<br />
| 192GB DDR3-1333<br />
| 168GB DDR3-1333<br />
|-<br />
| '''GPU'''<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
| None<br />
| 4 Tesla C1060<br />
|-<br />
| '''Local disk'''<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Yes<br />
| Yes<br />
|-<br />
| '''Shared chassis'''<br />
| No<br />
| Yes<br />
| Yes<br />
| No<br />
| No<br />
|-<br />
| '''Remote mgmt'''<br />
| No<br />
| No<br />
| IPMI<br />
| No<br />
| IPMI on GPU nodes<br />
|-<br />
| '''Size (just nodes)'''<br />
| 9U<br />
| 6U<br />
| 6U<br />
| 16U<br />
| 12U<br />
|-<br />
| '''Price'''<br />
| $33,173.20<br />
| $33,054.30<br />
| $30,078.00<br />
| $32,910.56<br />
| $34,696.78<br />
|}<br />
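<br />
The CPU and RAM rows follow directly from the node counts in the quote sections below (every configuration is 2 sockets x 4 cores and 12GB per node); a quick sanity-check sketch:<br />
<pre><br />
# Sanity check: cores = nodes * 2 sockets * 4 cores, RAM = nodes * 12GB<br />
# Node counts are taken from the quote sections below<br />
for q in "ION-61116 9" "ION-61164 10" "SM-174536 10" "Newegg-1 16" "Newegg-2 14"; do<br />
  set -- $q<br />
  echo "$1: $(( $2 * 2 * 4 )) cores, $(( $2 * 12 ))GB RAM"<br />
done<br />
</pre><br />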
<br />
=== Power distribution ===<br />
<br />
{| class="wikitable" border="1"<br />
!<br />
! [http://www.tripplite.com/en/products/model.cfm?txtSeriesID=446&EID=14295&txtModelID=2005 PDU1220]<br />
! [http://www.tripplite.com/en/products/model.cfm?txtSeriesID=513&EID=77300&txtModelID=3867 PDUMH20]<br />
! [http://accessories.us.dell.com/sna/products/Server_Network/productdetail.aspx?c=us&l=en&cs=04&sku=A0151958 AP9563]<br />
! [http://accessories.us.dell.com/sna/products/Server_Network/productdetail.aspx?c=us&l=en&s=bsd&cs=04&sku=A0748689 AP7801]<br />
|-<br />
| '''Vendor'''<br />
| TrippLite<br />
| TrippLite<br />
| APC<br />
| APC<br />
|-<br />
| '''Size'''<br />
| 1U<br />
| 1U<br />
| 1U<br />
| 1U<br />
|-<br />
| '''Capabilities'''<br />
| Dumb<br />
| Metered<br />
| Dumb<br />
| Metered<br />
|-<br />
| '''Input power'''<br />
| 20A, 1x NEMA 5-20P<br />
| 20A, 1x NEMA L5-20P w/ NEMA 5-20P adapter<br />
| 20A, 1x NEMA 5-20P<br />
| 20A, 1x NEMA 5-20P<br />
|-<br />
| '''Output power'''<br />
| 13x NEMA 5-20R<br />
| 12x NEMA 5-20R<br />
| 10x NEMA 5-20R<br />
| 8x NEMA 5-20R<br />
|-<br />
| '''Price'''<br />
| $195<br />
| $230<br />
| $120<br />
| $380<br />
|}<br />
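<br />
All four PDUs are single 20A units with NEMA 5-20 style input, so the real differentiators are outlet count and metering; for completeness, a rough capacity sketch (assuming a 120V branch circuit and the usual 80% continuous-load derating; actual per-node draw still has to be measured or estimated separately):<br />
<pre><br />
# Rough capacity of one 20A/120V PDU (80% continuous derating assumed)<br />
echo "peak:       $(( 20 * 120 ))W"<br />
echo "continuous: $(( 20 * 120 * 80 / 100 ))W"<br />
</pre><br />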
<br />
== ION Computer Systems Quotation #61116 ==<br />
*2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5" Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
*7 ION G10 Server without GPU: $3,697.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5" Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* Networking Fabric<br />
**Network not included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for troubleshooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
* Price tag: $33,173.20<br />
<br />
== ION Computer Systems Quotation #61164 ==<br />
<br />
* 2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5" Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* 4 ION T11 DualNode: $6,477.00 each<br />
** (2x2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** Total memory: 12GB DDR3_1333 per node<br />
** No RAID, Separate disks (NO redundancy)<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate Constellation 160GB, 7200RPM, SATA 3Gb NCQ 2.5" Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
** These nodes are modular. One can be unplugged and worked on while the others remain running.<br />
<br />
* Network<br />
** Network Not Included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for troubleshooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
*Price Tag: $33,054.30<br />
<br />
==Silicon Mechanics Quote #174536==<br />
* 2x Rackform iServ R4410: $11,043.00 each ($10,601.00 each with education) [http://www.siliconmechanics.com/quotes/174536?confirmation=879366549 link]<br />
**Shared Chassis: The following chassis resources are shared by all 4 compute nodes<br />
**External Optical Drive: No Item Selected<br />
**Power Supply: Shared, Redundant 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: Quick-Release Rail Kit for Square Holes, 26.5 - 36.4 inches<br />
**Compute Nodes x4<br />
***CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
***RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
***NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
***Management: Integrated IPMI with KVM over LAN<br />
***Hot-Swap Drive - 1: 250GB Western Digital RE3 (3.0Gb/s, 7.2Krpm, 16MB Cache) SATA<br />
<br />
* 2x Rackform iServ R350-GPU: $5,196.00 each ($4,433.00 each with education) [http://www.siliconmechanics.com/quotes/174542?confirmation=712641984 link]<br />
**CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
**RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
**NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
**Management: Integrated IPMI 2.0 & KVM with Dedicated LAN<br />
**GPU: 1U System with 1 x Tesla C1060 GPU, Actively Cooled<br />
**LP PCIe x4 2.0 (x16 Slot): No Item Selected<br />
**Hot-Swap Drive - 1: 250GB Seagate Barracuda ES.2 (3Gb/s, 7.2Krpm, 32MB Cache, NCQ) SATA<br />
**Power Supply: 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: 1U Rail Kit<br />
<br />
*Price Tag: $32,478 ($30,078 with education)<br />
<br />
*Questions<br />
**Can we lose the hot-swappability to save money?<br />
**Do we need to get a Gig-Switch?<br />
***Would Cairo do?<br />
<br />
==Newegg Quote #1==<br />
* 16x [http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=16958187 Newegg list]<br />
** 1U [http://www.newegg.com/Product/Product.aspx?item=N82E16811116011 link]<br />
** 2x Intel Xeon (Nehalem) E5530, Quad-Core, 2.4GHz, 80 Watt [http://www.newegg.com/Product/Product.aspx?item=N82E16819117184 link]<br />
** Slim CD/DVD Drive<br />
** 4x Gigabit ethernet [http://www.newegg.com/Product/Product.aspx?item=N82E16813151195R motherboard]<br />
** 500W non-redundant power supply<br />
** 160GB 7200RPM Seagate [http://www.newegg.com/Product/Product.aspx?item=N82E16822148511 link]<br />
** 12GB RAM (240-pin DDR3 1333 ECC, unbuffered)<br />
<br />
* Price Tag: $32,910.56<br />
<br />
==Newegg Quote #2==<br />
* 2x [http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=8560589 Newegg list]<br />
** 1U<br />
** 2x Intel Xeon (Nehalem) E5530, Quad-Core, 2.4GHz, 80 Watt<br />
** 2x Gigabit ethernet<br />
** IPMI<br />
** 1400W non-redundant power supply<br />
** 2x C1060 Tesla<br />
** 160GB 7200RPM Seagate<br />
** 12GB RAM (240-pin DDR3 1333 ECC, unbuffered)<br />
<br />
* 12x [http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=16958187 Newegg list]<br />
** 1U [http://www.newegg.com/Product/Product.aspx?item=N82E16811116011 link]<br />
** 2x Intel Xeon (Nehalem) E5530, Quad-Core, 2.4GHz, 80 Watt<br />
** Slim CD/DVD Drive<br />
** 4x Gigabit ethernet<br />
** 500W non-redundant power supply<br />
** 160GB 7200RPM Seagate<br />
** 12GB RAM (240-pin DDR3 1333 ECC, unbuffered)<br />
<br />
* Price tag: $34,696.78</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Al-salam&diff=10579
Al-salam
2009-12-17T05:15:12Z
<p>Skylar: /* Quick breakdown */</p>
<hr />
<div>Al-Salam is the working name for the Earlham Computer Science Department's upcoming cluster computer.<br />
<br />
At the moment Al-Salam exists only as a $40,000 grant and a growing list of tentative specifications:<br />
<br />
== Latest Overarching Questions ==<br />
*Should we build this machine ourselves?<br />
*#Are we wasting our money and learning opportunity letting them do the building for us?<br />
*#If it is cheaper, would it be a useful experience for the students this coming semester to take a large collection of hardware and turn it into a cluster?<br />
*How much if any GPGPU hardware do we want? 0, 1 or 2 nodes worth?<br />
*Do we want a high bandwidth/low latency network?<br />
**We do not. More expensive than it is worth.<br />
*What software stack do we want to run? Vendor supplied or the BCCD?<br />
**Both. Vendor-supplied base with a BCCD virtual machine<br />
* Do compute nodes have spinning disk? <br />
** Compute nodes have a spinning disk<br />
* What's on the local persistent store? /tmp? An entire OS?<br />
<br />
== Tentative Specifications ==<br />
<br />
=== Budget ===<br />
* $35,000 (leaving $5,000 for discretionary spending)<br />
<br />
=== Nodes ===<br />
* Intel Nehalem processors<br />
* 4 core processors minimum<br />
** Six cores still expensive<br />
* 1.5GB RAM per core<br />
<br />
=== Specialty Nodes ===<br />
* Two nodes should support CUDA GPGPU<br />
<br />
Educationally, we could expect to get significant use out of GPGPUs, but the production use is limited.<br />
Increasing the variance of the architecture landscape would be a bonus to education.<br />
<br />
=== Network ===<br />
* Gigabit Ethernet fabric with switch<br />
<br />
=== Disk ===<br />
* Spinning Disk<br />
<br />
=== OS ===<br />
* Virtual BCCD on top of built-in OS.<br />
<br />
== Quick breakdown ==<br />
<br />
{| class="wikitable" border="1"<br />
!<br />
! [[Al-salam#ION_Computer_Systems_Quotation_.2361116|ION #61116]]<br />
! [[Al-salam#ION_Computer_Systems_Quotation_.2361164|ION #61164]]<br />
! [[Al-salam#Silicon_Mechanics_Quote_.23174536|SM #174536]]<br />
|-<br />
| '''CPU'''<br />
| 72 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
|-<br />
| '''RAM'''<br />
| 108GB PC3-10600<br />
| 120GB PC3-10600<br />
| 120GB DDR3-1333<br />
|-<br />
| '''GPU'''<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
|-<br />
| '''Local disk'''<br />
| Yes<br />
| Yes<br />
| Yes<br />
|-<br />
| '''Shared chassis'''<br />
| No<br />
| Yes<br />
| Yes<br />
|-<br />
| '''Remote mgmt'''<br />
| No<br />
| No<br />
| IPMI<br />
|-<br />
| '''Price'''<br />
| $33,173.20<br />
| $33,054.30<br />
| $30,078.00<br />
|}<br />
<br />
== ION Computer Systems Quotation #61116 ==<br />
*2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
*7 ION G10 Server without GPU: $3,697.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* Networking Fabric<br />
**Network not included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for trouble shooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
* Price tag: $33,173.20<br />
<br />
== ION Computer Systems Quotation #61164 ==<br />
<br />
* 2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* 4 ION T11 DualNode: $6,477.0 each<br />
** (2x2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** Total memory: 12GB DDR3_1333 per node<br />
** No RAID, Separate disks (NO redundancy)<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate Constellation 160GB, 7200RPM, SATA 3Gb NCQ 2.5“ Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
** These nodes are modular. One can be unplugged and worked on while the others remain running.<br />
<br />
* Network<br />
** Network Not Included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for trouble shooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
*Price Tag: $33,054.30<br />
<br />
==Silicon Mechanics Quote #174536==<br />
* 2x Rackform iServ R4410: $11043.00 each ($10601.00 each with education) [http://www.siliconmechanics.com/quotes/174536?confirmation=879366549 link]<br />
**Shared Chassis: The following chassis resources are shared by all 4 compute nodes<br />
**External Optical Drive: No Item Selected<br />
**Power Supply: Shared, Redundant 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: Quick-Release Rail Kit for Square Holes, 26.5 - 36.4 inches<br />
**Compute Nodes x4<br />
***CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
***RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
***NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
***Management: Integrated IPMI with KVM over LAN<br />
***Hot-Swap Drive - 1: 250GB Western Digital RE3 (3.0Gb/s, 7.2Krpm, 16MB Cache) SATA<br />
<br />
* 2x Rackform iServ R350-GPU: $5196.00 each ($4433.00 each with education) [http://www.siliconmechanics.com/quotes/174542?confirmation=712641984 link]<br />
**CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
**RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
**NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
**Management: Integrated IPMI 2.0 & KVM with Dedicated LAN<br />
**GPU: 1U System with 1 x Tesla C1060 GPU, Actively Cooled<br />
**LP PCIe x4 2.0 (x16 Slot): No Item Selected<br />
**Hot-Swap Drive - 1: 250GB Seagate Barracuda ES.2 (3Gb/s, 7.2Krpm, 32MB Cache, NCQ) SATA<br />
**Power Supply: 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: 1U Rail Kit<br />
<br />
*Price Tag: $32,478 ($30,078 with education)<br />
<br />
*Questions<br />
**Can we lose the hot-swappability to save money?<br />
**Do we need to get a Gig-Switch?<br />
***Would Cairo do?</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Al-salam&diff=10578
Al-salam
2009-12-17T05:13:58Z
<p>Skylar: /* Table */</p>
<hr />
<div>Al-Salam is the working name for the Earlham Computer Science Department's upcoming cluster computer.<br />
<br />
At the moment Al-Salam exists only as a $40,000 grant and a growing list of tentative specifications:<br />
<br />
== Latest Overarching Questions ==<br />
*Should we build this machine ourselves?<br />
*#Are we wasting our money and learning opportunity letting them do the building for us?<br />
*#If it is cheaper, would it be a useful experience for the students this coming semester to take a large collection of hardware and turn it into a cluster?<br />
*How much if any GPGPU hardware do we want? 0, 1 or 2 nodes worth?<br />
*Do we want a high bandwidth/low latency network?<br />
**We do not. More expensive than it is worth.<br />
*What software stack do we want to run? Vendor supplied or the BCCD?<br />
**Both. Vendor-supplied base with a BCCD virtual machine<br />
* Do compute nodes have spinning disk? <br />
** Compute nodes have a spinning disk<br />
* What's on the local persistent store? /tmp? An entire OS?<br />
<br />
== Tentative Specifications ==<br />
<br />
=== Budget ===<br />
* $35,000 (leaving $5,000 for discretionary spending)<br />
<br />
=== Nodes ===<br />
* Intel Nehalem processors<br />
* 4 core processors minimum<br />
** Six cores still expensive<br />
* 1.5GB RAM per core<br />
<br />
=== Specialty Nodes ===<br />
* Two nodes should support CUDA GPGPU<br />
<br />
Educationally, we could expect to get significant use out of GPGPUs, but the production use is limited.<br />
Increasing the variance of the architecture landscape would be a bonus to education.<br />
<br />
=== Network ===<br />
* Gigabit Ethernet fabric with switch<br />
<br />
=== Disk ===<br />
* Spinning Disk<br />
<br />
=== OS ===<br />
* Virtual BCCD on top of built-in OS.<br />
<br />
== Quick breakdown ==<br />
<br />
{| class="wikitable" border="1"<br />
!<br />
! ION #61116<br />
! ION #61164<br />
! SM #174536<br />
|-<br />
| '''CPU'''<br />
| 72 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
|-<br />
| '''RAM'''<br />
| 108GB PC3-10600<br />
| 120GB PC3-10600<br />
| 120GB DDR3-1333<br />
|-<br />
| '''GPU'''<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
|-<br />
| '''Local disk'''<br />
| Yes<br />
| Yes<br />
| Yes<br />
|-<br />
| '''Shared chassis'''<br />
| No<br />
| Yes<br />
| Yes<br />
|-<br />
| '''Remote mgmt'''<br />
| No<br />
| No<br />
| IPMI<br />
|-<br />
| '''Price'''<br />
| $33,173.20<br />
| $33,054.30<br />
| $30,078.00<br />
|}<br />
<br />
== ION Computer Systems Quotation #61116 ==<br />
*2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
*7 ION G10 Server without GPU: $3,697.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* Networking Fabric<br />
**Network not included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for trouble shooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
* Price tag: $33,173.20<br />
<br />
== ION Computer Systems Quotation #61164 ==<br />
<br />
* 2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* 4 ION T11 DualNode: $6,477.0 each<br />
** (2x2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** Total memory: 12GB DDR3_1333 per node<br />
** No RAID, Separate disks (NO redundancy)<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate Constellation 160GB, 7200RPM, SATA 3Gb NCQ 2.5“ Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
** These nodes are modular. One can be unplugged and worked on while the others remain running.<br />
<br />
* Network<br />
** Network Not Included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for trouble shooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
*Price Tag: $33,054.30<br />
<br />
==Silicon Mechanics Quote #174536==<br />
* 2x Rackform iServ R4410: $11043.00 each ($10601.00 each with education) [http://www.siliconmechanics.com/quotes/174536?confirmation=879366549 link]<br />
**Shared Chassis: The following chassis resources are shared by all 4 compute nodes<br />
**External Optical Drive: No Item Selected<br />
**Power Supply: Shared, Redundant 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: Quick-Release Rail Kit for Square Holes, 26.5 - 36.4 inches<br />
**Compute Nodes x4<br />
***CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
***RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
***NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
***Management: Integrated IPMI with KVM over LAN<br />
***Hot-Swap Drive - 1: 250GB Western Digital RE3 (3.0Gb/s, 7.2Krpm, 16MB Cache) SATA<br />
<br />
* 2x Rackform iServ R350-GPU: $5196.00 each ($4433.00 each with education) [http://www.siliconmechanics.com/quotes/174542?confirmation=712641984 link]<br />
**CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
**RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
**NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
**Management: Integrated IPMI 2.0 & KVM with Dedicated LAN<br />
**GPU: 1U System with 1 x Tesla C1060 GPU, Actively Cooled<br />
**LP PCIe x4 2.0 (x16 Slot): No Item Selected<br />
**Hot-Swap Drive - 1: 250GB Seagate Barracuda ES.2 (3Gb/s, 7.2Krpm, 32MB Cache, NCQ) SATA<br />
**Power Supply: 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: 1U Rail Kit<br />
<br />
*Price Tag: $32,478 ($30,078 with education)<br />
<br />
*Questions<br />
**Can we lose the hot-swappability to save money?<br />
**Do we need to get a Gig-Switch?<br />
***Would Cairo do?</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Al-salam&diff=10577
Al-salam
2009-12-17T05:11:40Z
<p>Skylar: </p>
<hr />
<div>Al-Salam is the working name for the Earlham Computer Science Department's upcoming cluster computer.<br />
<br />
At the moment Al-Salam exists only as a $40,000 grant and a growing list of tentative specifications:<br />
<br />
== Latest Overarching Questions ==<br />
*Should we build this machine ourselves?<br />
*#Are we wasting our money and learning opportunity letting them do the building for us?<br />
*#If it is cheaper, would it be a useful experience for the students this coming semester to take a large collection of hardware and turn it into a cluster?<br />
*How much if any GPGPU hardware do we want? 0, 1 or 2 nodes worth?<br />
*Do we want a high bandwidth/low latency network?<br />
**We do not. More expensive than it is worth.<br />
*What software stack do we want to run? Vendor supplied or the BCCD?<br />
**Both. Vendor-supplied base with a BCCD virtual machine<br />
* Do compute nodes have spinning disk? <br />
** Compute nodes have a spinning disk<br />
* What's on the local persistent store? /tmp? An entire OS?<br />
<br />
== Tentative Specifications ==<br />
<br />
=== Budget ===<br />
* $35,000 (leaving $5,000 for discretionary spending)<br />
<br />
=== Nodes ===<br />
* Intel Nehalem processors<br />
* 4 core processors minimum<br />
** Six cores still expensive<br />
* 1.5GB RAM per core<br />
<br />
=== Specialty Nodes ===<br />
* Two nodes should support CUDA GPGPU<br />
<br />
Educationally, we could expect to get significant use out of GPGPUs, but the production use is limited.<br />
Increasing the variance of the architecture landscape would be a bonus to education.<br />
<br />
=== Network ===<br />
* Gigabit Ethernet fabric with switch<br />
<br />
=== Disk ===<br />
* Spinning Disk<br />
<br />
=== OS ===<br />
* Virtual BCCD on top of built-in OS.<br />
<br />
== Table ==<br />
<br />
{| class="wikitable" border="1"<br />
!<br />
! ION #61116<br />
! ION #61164<br />
! SM #174536<br />
|-<br />
| '''CPU'''<br />
| 72 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
| 80 2.4GHz Intel E5530<br />
|-<br />
| '''RAM'''<br />
| 108GB PC3-10600<br />
| 120GB PC3-10600<br />
| 120GB DDR3-1333<br />
|-<br />
| '''GPU'''<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
| 2 Tesla C1060<br />
|-<br />
|}<br />
<br />
== ION Computer Systems Quotation #61116 ==<br />
*2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
*7 ION G10 Server without GPU: $3,697.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* Networking Fabric<br />
**Network not included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for trouble shooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
* Price tag: $33,173.20<br />
<br />
== ION Computer Systems Quotation #61164 ==<br />
<br />
* 2 ION G10 Server with GPU: $4,972.00 each<br />
** (2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** 12GB RAM [Bank 1 of 2: (6) 2GB ECC PC3-10600 1333MHz 2rank DDR3 RDIMM Modules][Smart]<br />
** Total memory: 12GB DDR3_1333<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate SV35.3 250GB, 7200RPM, SATA 3Gb for SDVR 3.5“ Disk<br />
** (1) NVidia Tesla C1060 w. 4GB DDR3<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
<br />
* 4 ION T11 DualNode: $6,477.0 each<br />
** (2x2) Intel® Quad-Core Xeon® processor E5530 (2.40GHz, 8MB Cache, 5.86GT/s, 80W)<br />
** Total memory: 12GB DDR3_1333 per node<br />
** No RAID, Separate disks (NO redundancy)<br />
** Configure 1 RAID sets / arrays.<br />
** Seagate Constellation 160GB, 7200RPM, SATA 3Gb NCQ 2.5“ Disk<br />
** Dual Intel Gigabit Server NICs with IOAT2 Integrated<br />
** These nodes are modular. One can be unplugged and worked on while the others remain running.<br />
<br />
* Network<br />
** Network Not Included<br />
<br />
* Other stuff<br />
** scorpion: ION bootable USB Flash device for trouble shooting.<br />
** 3 year Next Business Day response Onsite Repair Service by Source Support<br />
** Default load for testing (Service Partition + CentOS 5.3 for Intel64)<br />
<br />
*Price Tag: $33,054.30<br />
<br />
==Silicon Mechanics Quote #174536==<br />
* 2x Rackform iServ R4410: $11043.00 each ($10601.00 each with education) [http://www.siliconmechanics.com/quotes/174536?confirmation=879366549 link]<br />
**Shared Chassis: The following chassis resources are shared by all 4 compute nodes<br />
**External Optical Drive: No Item Selected<br />
**Power Supply: Shared, Redundant 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: Quick-Release Rail Kit for Square Holes, 26.5 - 36.4 inches<br />
**Compute Nodes x4<br />
***CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
***RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
***NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
***Management: Integrated IPMI with KVM over LAN<br />
***Hot-Swap Drive - 1: 250GB Western Digital RE3 (3.0Gb/s, 7.2Krpm, 16MB Cache) SATA<br />
<br />
* 2x Rackform iServ R350-GPU: $5196.00 each ($4433.00 each with education) [http://www.siliconmechanics.com/quotes/174542?confirmation=712641984 link]<br />
**CPU: 2 x Intel Xeon E5530 Quad-Core 2.40GHz, 8MB Cache, 5.86GT/s QPI<br />
**RAM: 12GB (6 x 2GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)<br />
**NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated<br />
**Management: Integrated IPMI 2.0 & KVM with Dedicated LAN<br />
**GPU: 1U System with 1 x Tesla C1060 GPU, Actively Cooled<br />
**LP PCIe x4 2.0 (x16 Slot): No Item Selected<br />
**Hot-Swap Drive - 1: 250GB Seagate Barracuda ES.2 (3Gb/s, 7.2Krpm, 32MB Cache, NCQ) SATA<br />
**Power Supply: 1400W Power Supply with PMBus - 80 PLUS Gold Certified<br />
**Rail Kit: 1U Rail Kit<br />
<br />
*Price Tag: $32,478 ($30,078 with education)<br />
<br />
*Questions<br />
**Can we lose the hot-swappability to save money?<br />
**Do we need to get a Gig-Switch?<br />
***Would Cairo do?</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Talk:Al-salam&diff=10576
Talk:Al-salam
2009-12-17T04:59:05Z
<p>Skylar: New page: * Re BCCD in a VM: Can CUDA be exported to a VM? If not, is that a disadvantage?</p>
<hr />
<div>* Re BCCD in a VM: Can CUDA be exported to a VM? If not, is that a disadvantage?</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10201
Cluster:Modules
2009-09-04T03:40:55Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCHPATH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCHPATH --enable-shared && make</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/modules/3.2.7/$ARCHPATH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCHPATH/lib --with-static=yes && make</code><br />
** Had to remove bash_completion from init/Makefile.<br />
** After installation, changed the version from 3.2.6->3.2.7 in the init directory, and /usr/share/Modules to /cluster/software/modules-sw/modules/3.2.7/$ARCHPATH/Modules<br />
* OpenMPI: <code>./configure --prefix=/cluster/software/modules-sw/openmpi/1.3.1/$ARCHPATH && make</code></div>
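A minimal end-to-end sketch of the pattern above (the example ARCHPATH value is hypothetical, since it depends on what /cluster/software/os_release prints; the make install step and the Tcl unix/ source directory are assumed, as the notes above only record configure && make):

<pre>
ARCHPATH=`uname -s`/`/cluster/software/os_release`/`uname -p`
echo "$ARCHPATH"    # e.g. Linux/centos-5/x86_64 (hypothetical example value)

# Build Tcl first, since Modules is configured against its lib directory
cd tcl8.5.7/unix    # assumes the stock Tcl 8.5.7 source layout
./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCHPATH --enable-shared \
  && make && make install
</pre>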
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10200
Cluster:Modules
2009-09-04T03:30:51Z
<p>Skylar: </p>
<hr />
<div>= Software build options =<br />
<br />
ARCHPATH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCHPATH --enable-shared && make</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/modules/3.2.7/$ARCHPATH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCHPATH/lib --with-static=yes && make</code><br />
** Had to remove bash_completion from init/Makefile.<br />
** After installation, changed the version from 3.2.6->3.2.7 in the init directory, and /usr/share/Modules to /cluster/software/modules-sw/modules/3.2.7/$ARCHPATH/Modules</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10167
Cluster:Modules
2009-08-30T01:58:03Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCH --enable-shared && make</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/modules/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib --with-static=yes && make</code><br />
** Had to remove bash_completion from init/Makefile.<br />
** After installation, changed the version from 3.2.6->3.2.7 in the init directory, and /usr/share/Modules to /cluster/software/modules-sw/modules/3.2.7/$ARCH/Modules</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10166
Cluster:Modules
2009-08-30T01:21:47Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCH && make</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/modules/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib --with-static=yes && make</code><br />
** Had to remove bash_completion from init/Makefile.<br />
** After installation, changed the version from 3.2.6->3.2.7 in the init directory, and /usr/share/Modules to /cluster/software/modules-sw/modules/3.2.7/$ARCH/Modules</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10165
Cluster:Modules
2009-08-30T01:12:26Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCH && make</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/modules/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make</code><br />
** Had to remove bash_completion from init/Makefile.<br />
** After installation, changed the version from 3.2.6->3.2.7 in the init directory, and /usr/share/Modules to /cluster/software/modules-sw/modules/3.2.7/$ARCH/Modules</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10164
Cluster:Modules
2009-08-30T01:02:02Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/tcl/8.5.7/$ARCH && make</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/modules/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make</code><br />
** Had to remove bash_completion from init/Makefile.</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10163
Cluster:Modules
2009-08-30T00:59:22Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules-sw/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make -j3</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules-sw/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make -j3</code><br />
** Had to remove bash_completion from init/Makefile.</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10162
Cluster:Modules
2009-08-30T00:57:28Z
<p>Skylar: /* Software build options */</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules/modules-sw/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make -j3</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules/modules-sw/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make -j3</code><br />
** Had to remove bash_completion from init/Makefile.</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:Modules&diff=10161
Cluster:Modules
2009-08-30T00:54:06Z
<p>Skylar: New page: = Software build options = ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p` * Tcl: <code>./configure --prefix=/cluster/software/modules/modules-sw/3.2.7/$ARCH --with-tcl=/clu...</p>
<hr />
<div>= Software build options =<br />
<br />
ARCH=`uname -s`/`/cluster/software/os_release`/`uname -p`<br />
<br />
* Tcl: <code>./configure --prefix=/cluster/software/modules/modules-sw/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make -j3</code><br />
* Modules: <code>./configure --prefix=/cluster/software/modules/modules-sw/3.2.7/$ARCH --with-tcl=/cluster/software/modules-sw/tcl/8.5.7/$ARCH/lib && make -j3</code></div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster_Information&diff=10160
Cluster Information
2009-08-30T00:51:43Z
<p>Skylar: /* General Stuff */</p>
<hr />
<div>== Summer of Fun (2009) ==<br />
[[GalaxSee|An external doc for GalaxSee]]<br /><br />
[[Cluster:OSGal|Documentation for OpenSim GalaxSee]]<br />
<br />
What's in the database?<br />
{| class="wikitable" border="1"<br />
! rowspan ="2" |<br />
! colspan ="3" | GalaxSee (MPI) <br />
! colspan ="3" | area-under-curve (MPI, openmpi) <br />
! colspan ="3" | area-under-curve (Hybrid, openmpi)<br />
|- <br />
! acl0-5<br />
! bs0-5 GigE<br />
! bs0-5 IB<br />
! acl0-5<br />
! bs0-5 GigE<br />
! bs0-5 IB<br />
! acl0-5<br />
! bs0-5 GigE<br />
! bs0-5 IB<br />
|-<br />
| np X-XX<br />
| 2-20<br />
| 2-48<br />
| 2-48<br />
| 2-12<br />
| 2-48<br />
| 2-48<br />
| 2-20<br />
| 2-48<br />
| 2-48<br />
|}<br />
<br />
What works so far? B = builds, R = runs, W = works<br />
{| class="wikitable" border="1"<br />
! rowspan="2" | B-builds, R-runs<br />
! colspan="4" | area under curve<br />
! colspan="4" | GalaxSee (standalone)<br />
|-<br />
! Serial<br />
! MPI<br />
! OpenMP<br />
! Hybrid<br />
! Serial<br />
! MPI<br />
! OpenMP<br />
! Hybrid<br />
|-<br />
! acls<br />
| BRW<br />
| BRW<br />
| BRW<br />
| BRW<br />
| <br />
| BRW<br />
| <br />
|<br />
|-<br />
! bobsced0<br />
| BRW<br />
| BRW<br />
| BRW<br />
| BRW<br />
|<br />
| BRW<br />
| <br />
|<br />
|-<br />
! c13<br />
|<br />
|<br />
|<br />
|<br />
|<br />
| BRW<br />
|<br />
|<br />
|-<br />
! pople<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
! Charlie's laptop<br />
|<br />
| <br />
|<br />
|<br />
|<br />
| BRW<br />
|<br />
|<br />
|}<br />
<br />
To Do<br />
* Fitz/Charlie's message<br />
* Petascale review<br />
* BobSCEd stress test<br />
<br />
Implementations of area under the curve<br />
* Serial<br />
* OpenMP (shared)<br />
* MPI (message passing)<br />
* MPI (hybrid mp and shared)<br />
* OpenMP + MPI (hybrid)<br />
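<br />
For reference, a toy serial version of the computation (an awk sketch, not the course's C/MPI/OpenMP code; f(x) = x^2 on [0,1] is an arbitrary example integrand):<br />
<pre><br />
# Toy serial area-under-the-curve: midpoint rule for f(x) = x^2 on [0,1]; exact answer is 1/3<br />
awk 'BEGIN { n = 1000000; h = 1/n; sum = 0;<br />
  for (i = 0; i < n; i++) { x = (i + 0.5) * h; sum += x*x*h }<br />
  print sum }'<br />
</pre><br />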
<br />
GalaxSee Goals<br />
* Good piece of code that serves as a teaching example for n-body problems at petascale.<br />
* Dials, knobs, etc. in place to easily control how work is distributed when running in parallel.<br />
* Architecture generally supports hybrid model running on large-scale constellations.<br />
* Produces runtime data that enables nice comparisons across multiple resources (scaling, speedup, efficiency).<br />
* Render in BCCD, metaverse, and /dev/null environments.<br />
* Serial version<br />
* Improve performance on math?<br />
<br />
GalaxSee - scale to petascale with MPI and OpenMP hybrid.<br />
* GalaxSee - render in-world and steer from in-world.<br />
* Area under a curve - serial, MPI, and OpenMP implementations.<br />
* OpenMPI - testing, performance.<br />
* Start May 11th<br />
<br />
LittleFe<br />
* Testing<br />
* Documentation <br />
* Touch screen interface<br />
<br />
Notes from May 21, 2009 Review<br />
* Combined Makefiles with defines to build on a particular platform<br />
* Write a driver script for GalaxSee à la the area-under-the-curve script; consider combining them<br />
* Schema <br />
** date, program_name, program_version, style, command line, compute_resource, NP, wall_time <br />
* Document the process from start to finish<br />
* Consider how we might iterate over e.g. number of stars, number of segments, etc.<br />
* Command line option to stat.pl that provides a Torque wrapper for the scripts.<br />
* Lint all code, consistent formatting<br />
* Install latest and greatest Intel compiler in /cluster/bobsced<br />
<br />
== BobSCEd Upgrade ==<br />
Build a new image for BobSCEd:<br />
# One of the Suse versions supported for Gaussian09 on EM64T [v11.1] - Red Hat Enterprise Linux 5.3; SuSE Linux 9.3, 10.3, 11.1; or SuSE Linux Enterprise 10 (see [http://www.gaussian.com/g09_plat.htm G09 platform list]) <-- CentOS 5.3 runs Gaussian binaries for RHEL ok<br />
# Firmware update?<br />
# C3 tools and configuration [v4.0.1]<br />
# Ganglia and configuration [v3.1.2]<br />
# PBS and configuration [v2.3.16]<br />
# /cluster/bobsced local to bs0 <br />
# /cluster/... passed-through to compute nodes<br />
# Large local scratch space on each node<br />
# Gaussian09<br />
# WebMO and configuration [v9.1]<br />
# [[Bobsced Infiniband | Infiniband and configuration]]<br />
# GNU toolchain with OpenMPI and MPICH [GCC v4.4.0], [OpenMPI v1.3.2] [MPICH v1.2.7p1]<br />
# Intel toolchain with OpenMPI and native libraries<br />
# Sage with do-dads (see Charlie)<br />
# Systemimager for the client nodes?<br />
<br />
Installed:<br />
* [[Cluster: New BobSCEd Install Log | New BobSCEd Install Log]]<br />
<br />
Fix the broken nodes.<br />
<br />
== (Old) To Do ==<br />
BCCD Liberation<br />
* v1.1 release - upgrade procedures<br />
<br />
Curriculum Modules <br />
* POVRay<br />
* GROMACS<br />
* Energy and Weather <br />
* Dave's math modules<br />
* Standard format, templates, how-to for V and V<br />
<br />
LittleFe<br />
* Explore machines from first Intel donation ([[intel-lf-server|notes and pictures]])<br />
* Build 4 SCED units<br />
<br />
Infrastructure<br />
* Masa's GROMACS interface on Cairo<br />
* gridgate configuration, Open Science Grid peering<br />
* [[hopperprime|hopper']]<br />
<br />
SC Education <br />
* Scott's homework (see [[sc-education-homework-1|the message]])<br />
<br />
== Current Projects ==<br />
* [[BCCD]]<br />
* [[LittleFe Cluster|LittleFe]]<br />
* [[Folding@Clusters|Folding@Clusters]]<br />
* [[OpenMPI|Benchmarking OpenMPI]]<br />
<br />
== Past Projects ==<br />
* [[Cluster:Big-Fe|Big-FE]]<br />
* [[Cluster:LowLatency|Low Latency Linux Kernel]]<br />
<br />
== General Stuff == <br />
* [[Cluster:Todo|Todo]]<br />
* [[General Cluster Information|General]]<br />
* [[Cluster:Hopper|Hopper]]<br />
* [[Cluster Howto's|Howto's]]<br />
* [[Cluster:Networking|Networking]]<br />
* [[Cluster:2005-11-30 Meeting|2005-11-30 Meeting]]<br />
* [[Cluster:2006-12-12 Meeting|2006-12-12 Meeting]]<br />
* [[Cluster:2006-02-02 Meeting|2006-02-02 Meeting]]<br />
* [[Cluster:2006-03-16 Meeting|2006-03-16 Meeting]]<br />
* [[Cluster:2006-04-06 Meeting|2006-04-06 Meeting]]<br />
* [[Cluster:Node usage|Node usage]]<br />
* [[Cluster:Netgear numbers|Numbers for Netgear switches]]<br />
* [[Cluster:Latex poster creation|Latex Poster Creation]]<br />
* [[Cluster:Bugzilla|Bugzilla Etiquette]]<br />
* [[Cluster:Modules|Modules]]<br />
<br />
== Items Particular to a Specific Cluster ==<br />
* [[ACL Cluster|ACL]]<br />
* [[Athena Cluster|Athena]]<br />
* [[Bazaar Cluster|Bazaar]]<br />
* [[Cairo Cluster|Cairo]]<br />
* [[Bobsced Cluster|Bobsced]]<br />
<br />
== Curriculum Modules ==<br />
* [[Cluster:Curriculum|Curriculum]]<br />
* [[Cluster:Fluid Dynamics|Fluid Dynamics]]<br />
* [[Cluster:Population Ecology|Population Ecology]]<br />
* [[Cluster:GROMACS Web Interface|GROMACS Web Interface]]<br />
* [[Cluster:Wiki|Wiki Life for Academics]]<br />
<br />
== Possible Future Projects ==<br />
* [[Cluster:Realtime Parallel Visualization|Realtime Parallel Visualization]]<br />
<br />
== Archive ==<br />
* TeraGrid '06 (Indianapolis, June 12-15, 2006)<br />
** [http://www.teragrid.org Conference webpage]<br />
** [http://www.teragrid.org/events/2006conference/contest_poster.html Student poster guidelines]<br />
** [[Big-FE-teragrid-abstract|Big-FE abstract]]<br />
<br />
* SIAM Parallel Processing 2006 (San Francisco, February 22-24, 2006)<br />
** [http://www.siam.org/meetings/pp06 Conference webpage] <br />
** [[Cluster:little-fe-siam-pp06-abstract|Little-Fe abstract]]<br />
** [[Cluster:llk-siam-pp06-abstract|Low Latency Kernel abstract]]<br />
** Folding@Clusters<br />
** Best practices for teaching parallel programming to science faculty (Charlie only)<br />
<br />
* [[College Avenue]]</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hopperprime&diff=3867
Hopperprime
2008-01-15T03:36:30Z
<p>Skylar: </p>
<hr />
<div>= hp TODO =<br />
<br />
Located at <tt>hp.cluster.earlham.edu, 159.28.234.108</tt><br />
<br />
Description:<br />
<p>The hardware is an S5000 motherboard with 2 3.2GHz dual-core Xeons<br />
(with hyper-threading enabled, i.e. 8 logical cores total), 2GB RAM, and two<br />
250GB SATA drives configured as RAID1. <br />
*Intel's description of the board is here: http://www.intel.com/support/motherboards/server/s5000pal/index.htm<br />
*The manuals and memory specs are in the sna CVS module in /cluster/cvsroot<br />
<br />
Tasks:<br />
* (Complete) System Install [CP] 20070613<br />
* (Complete) sudo [HH] 20070613<br />
* (Complete) Need /usr/src [HH] 20070613<br />
* (Complete) Security Patches (Keep downloaded patches in /root/patches) [HH] 20070615<br />
** (Complete) Applied FreeBSD-SA-07:02.bin [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:03.ipv6 [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:04.file [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:08.openssl [HH] 20071006<br />
** (Complete) Applied FreeBSD-SA-07:09.random [HH] 20071130<br />
** (Complete) Applied FreeBSD-SA-07:10.gtar [HH] 20071130<br />
* (Complete) cvsup [HH] 20070615<br />
** (Complete) Setup cron job [HH] 20070613<br />
* (Complete) SSH config and banner [HH] 20070613<br />
* (Complete) portupgrade [HH] 20070615<br />
* (Complete) Upgrade X to X 7 (all x-{related_packages}) [HH] 20070615<br />
* (Complete) Screen [NM]<br />
* (Complete) Pine + Pico [NM]<br />
* (Complete) Bash [HH] 20070615<br />
* (Complete) Apache2 /PHP,MySQL,Session [HH] 20070623<br />
* (Complete) [[ntpd]] [NM]<br />
* (Complete) NFS (/Cluster over TCP) [HH] 20070623<br />
* (Complete) Password migration [HH] 20070624<br />
* (Complete) [[MYSQL]] [NM]<br />
* (Complete) [[PortAudit]] [NM]<br />
* (Complete) [[PROFTPD]] [NM]<br />
* (Complete) chkrootkit [NM]<br />
* (Complete) [[Bugzilla]] (over MySQL).[NM]<br />
* (Complete) [[PostgreSQL]]. [NM]<br />
* (Complete) Ganglia [HH] 20070819<br />
* (Complete) SSH Keys from Hopper [HH] 20070819<br />
* (Complete) DNS [HH] 20070819<br />
* (Complete) Cacti [HH]<br />
* (Complete) New NIC Card [multiple ports] [CP]<br />
* (Complete) DHCPD [HH]<br />
* (Complete) Talk [HH]<br />
* (Complete) Kernel tweaks [HH] 20071202<br />
* (Not Installed) NIS [HH]<br />
* (Not Installed) CVS [NM]<br />
* (Complete) [[Hoperrprime:Trac]] [ST]<br />
* (Complete) [[Hopperprime:WebDAV]] [ST]<br />
* (Not Installed) Remote Access Controller card [CP]<br />
* (Not Complete) Final Cut-over (mysql/postgreSQL/Bugzilla) [NM]<br />
* Figure out what's on admin that should move to hp (so we can abandon admin) [CP]<br />
* Setup NFS over UDP and TCP sharing /cluster (after copying the current /cluster) [CP]</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hoperrprime:WebDAV&diff=5878
Hoperrprime:WebDAV
2008-01-15T03:35:05Z
<p>Skylar: </p>
<hr />
<div># Build Subversion with Apache support by adding the <tt>WITH_MOD_DAV_SVN=yes WITH_APACHE2_APR=yes</tt> options to the build.<br />
# Make sure the Apache modules are loaded in <tt>httpd.conf</tt>:<br />
<pre><br />
LoadModule dav_svn_module libexec/apache22/mod_dav_svn.so<br />
LoadModule authz_svn_module libexec/apache22/mod_authz_svn.so<br />
</pre><br />
# Create a <tt>Includes/svn.conf</tt> with<br />
<br />
<pre><br />
<IfModule dav_svn_module><br />
<Location /svn><br />
DAV svn<br />
SVNPath /cluster/svnroot<br />
</Location><br />
</IfModule><br />
<br />
<Location /detail/svnroot><br />
Order allow,deny<br />
Allow from none<br />
Deny from all<br />
</Location><br />
</pre></div>
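With that in place, clients should be able to reach the repository over HTTP. A quick smoke test (a sketch; the hostname comes from the Hopperprime notes and the path under /svn is a placeholder):

<pre>
# Hypothetical check of the /svn Location configured above
svn info http://hp.cluster.earlham.edu/svn
svn checkout http://hp.cluster.earlham.edu/svn/some-project   # "some-project" is a placeholder path
</pre>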
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hoperrprime:WebDAV&diff=3774
Hoperrprime:WebDAV
2008-01-15T03:29:22Z
<p>Skylar: </p>
<hr />
<div># Build Subversion with Apache support by adding the <tt>WITH_MOD_DAV_SVN=yes WITH_APACHE2_APR=yes</tt> options to the build.<br />
# Make sure the Apache modules are loaded in <tt>httpd.conf</tt>:<br />
<pre><br />
LoadModule dav_svn_module libexec/apache22/mod_dav_svn.so<br />
LoadModule authz_svn_module libexec/apache22/mod_authz_svn.so<br />
</pre><br />
# Create a <tt>Includes/svn.conf</tt> with<br />
<br />
<pre><br />
<IfModule dav_svn_module><br />
<Location /svn><br />
DAV svn<br />
SVNPath /cluster/svnroot<br />
</Location><br />
</IfModule><br />
<br />
<Location /detail/svnroot><br />
Order allow,deny<br />
Allow from none<br />
Deny from all<br />
</Location><br />
</pre><br />
# Install WebSVN<br />
# Add an alias for it: <tt>Alias /WebSVN "/usr/local/www/data/WebSVN"</tt></div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hoperrprime:WebDAV&diff=3773
Hoperrprime:WebDAV
2008-01-15T03:28:45Z
<p>Skylar: </p>
<hr />
<div># Build Subversion with Apache support by adding the <tt>WITH_MOD_DAV_SVN=yes WITH_APACHE2_APR=yes</tt> options to the build.<br />
# Make sure the Apache modules are loaded in <tt>httpd.conf</tt>:<br />
<pre><br />
LoadModule dav_svn_module libexec/apache22/mod_dav_svn.so<br />
LoadModule authz_svn_module libexec/apache22/mod_authz_svn.so<br />
</pre><br />
# Create a <tt>Includes/svn.conf</tt> with<br />
<br />
<pre><br />
<IfModule dav_svn_module><br />
<Location /svn><br />
DAV svn<br />
SVNPath /cluster/svnroot<br />
</Location><br />
</IfModule><br />
<br />
<Location /detail/svnroot><br />
Order allow,deny<br />
Allow from none<br />
Deny from all<br />
</Location><br />
</pre></div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hoperrprime:Trac&diff=5877
Hoperrprime:Trac
2008-01-15T03:18:55Z
<p>Skylar: </p>
<hr />
<div># Subversion<br />
## Special compile for apr-svn w/ <tt>APR_UTIL_WITH_BERKELEY_DB=yes</tt><br />
## portinstall subversion<br />
## svnadmin hotcopy to move Subversion over NFS<br />
# Install mod_python-3.3.1 from ports<br />
# Install trac-0.10.4 from ports (made sure Postgres support was enabled)<br />
# Loaded trac_debian-cluster Postgres database from hopper with pg_dump and psql<br />
# Added <tt>Includes/trac.conf</tt> to apache configuration, based on TracModPython documentation.<br />
<br />
<pre><br />
Alias "/trac" "/cluster/trac"<br />
<br />
<Location /trac><br />
SetHandler mod_python<br />
PythonHandler trac.web.modpython_frontend<br />
PythonOption TracEnvParentDir /cluster/trac<br />
PythonOption TracUriRoot /trac<br />
</Location><br />
<br />
<Location "/trac/debian-cluster/login"><br />
AuthType Basic<br />
AuthName "Trac"<br />
AuthUserFile /cluster/trac/htpasswd<br />
require valid-user<br />
</Location><br />
</pre></div>
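The /trac/debian-cluster/login block above points at an htpasswd file; a short sketch of creating it and adding users (usernames are placeholders):

<pre>
# Create the file referenced by AuthUserFile (-c creates it; only use -c for the first user)
htpasswd -c /cluster/trac/htpasswd someuser
# Add further users without -c so the existing file is not overwritten
htpasswd /cluster/trac/htpasswd anotheruser
</pre>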
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hoperrprime:Trac&diff=3772
Hoperrprime:Trac
2008-01-15T03:18:01Z
<p>Skylar: </p>
<hr />
<div># Subversion<br />
## Special compile for apr-svn w/ <tt>APR_UTIL_WITH_BERKELEY_DB=yes</tt><br />
## portinstall subversion<br />
## svnadmin hotcopy to move Subversion over NFS<br />
# Install mod_python-3.3.1 from ports<br />
# Install trac-0.10.4 from ports (made sure Postgres support was enabled)<br />
# Loaded trac_debian-cluster Postgres database from hopper with pgdump and psql<br />
# Added <tt>Includes/trac.conf</tt> to apache configuration, based on TracModPython documentation.</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hopperprime&diff=3775
Hopperprime
2008-01-15T02:51:36Z
<p>Skylar: </p>
<hr />
<div>= hp TODO =<br />
<br />
Located at <tt>hp.cluster.earlham.edu, 159.28.234.108</tt><br />
<br />
Description:<br />
<p>The hardware is an S5000 motherboard with 2 3.2GHz dual-core Xeons<br />
(with hyper-threading enabled, i.e. 8 cores total), 2GB RAM, and two<br />
250GB SATA drives configured as RAID1. <br />
*Intel's description of the board is here: http://www.intel.com/support/motherboards/server/s5000pal/index.htm<br />
*The manuals and memory specs are in the sna CVS module in /cluster/cvsroot<br />
<br />
Tasks:<br />
* (Complete) System Install [CP] 20070613<br />
* (Complete) sudo [HH] 20070613<br />
* (Complete) Need /usr/src [HH] 20070613<br />
* (Complete) Security Patches (Keep downloaded patches in /root/patches) [HH] 20070615<br />
** (Complete) Applied FreeBSD-SA-07:02.bin [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:03.ipv6 [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:04.file [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:08.openssl [HH] 20071006<br />
** (Complete) Applied FreeBSD-SA-07:09.random [HH] 20071130<br />
** (Complete) Applied FreeBSD-SA-07:10.gtar [HH] 20071130<br />
* (Complete) cvsup [HH] 20070615<br />
** (Complete) Setup cron job [HH] 20070613<br />
* (Complete) SSH config and banner [HH] 20070613<br />
* (Complete) portupgrade [HH] 20070615<br />
* (Complete) Upgrade X to X 7 (all x-{related_packages}) [HH] 20070615<br />
* (Complete) Screen [NM]<br />
* (Complete) Pine + Pico [NM]<br />
* (Complete) Bash [HH] 20070615<br />
* (Complete) Apache2 /PHP,MySQL,Session [HH] 20070623<br />
* (Complete) [[ntpd]] [NM]<br />
* (Complete) NFS (/Cluster over TCP) [HH] 20070623<br />
* (Complete) Password migration [HH] 20070624<br />
* (Complete) [[MYSQL]] [NM]<br />
* (Complete) [[PortAudit]] [NM]<br />
* (Complete) [[PROFTPD]] [NM]<br />
* (Complete) chkrootkit [NM]<br />
* (Complete) [[Bugzilla]] (over MySQL) [NM]<br />
* (Complete) [[PostgreSQL]] [NM]<br />
* (Complete) Ganglia [HH] 20070819<br />
* (Complete) SSH Keys from Hopper [HH] 20070819<br />
* (Complete) DNS [HH] 20070819<br />
* (Complete) Cacti [HH]<br />
* (Complete) New NIC Card [multiple ports] [CP]<br />
* (Complete) DHCPD [HH]<br />
* (Complete) Talk [HH]<br />
* (Complete) Kernel tweaks [HH] 20071202<br />
* (Not Installed) NIS [HH]<br />
* (Not Installed) CVS [NM]<br />
* (Not Complete) [[Hoperrprime:Trac]] [ST]<br />
* (Not Complete) [[Hopperprime:WebDAV]] [ST]<br />
* (Not Installed) Remote Access Controller card [CP]<br />
* (Not Complete) Final Cut-over (mysql/postgreSQL/Bugzilla) [NM]<br />
* Figure out what's on admin that should move to hp (so we can abandon admin) [CP]<br />
* Set up NFS over UDP and TCP sharing /cluster (after copying the current /cluster; sketched below) [CP]</div>
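The NFS task above maps onto two files on a FreeBSD host. This is a minimal sketch assuming the clients sit on hp's /24 (the netmask is a guess and the nfsd thread count is arbitrary):

<pre>
# /etc/exports -- share /cluster with the cluster subnet
/cluster -network 159.28.234.0 -mask 255.255.255.0

# /etc/rc.conf -- enable the NFS server; -t/-u serve TCP and UDP
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
nfs_server_flags="-t -u -n 4"
</pre>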
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hoperrprime:Trac&diff=3771
Hoperrprime:Trac
2008-01-15T02:50:49Z
<p>Skylar: </p>
<hr />
<div># Subversion<br />
## Special compile of apr-svn with <tt>APR_UTIL_WITH_BERKELEY_DB=yes</tt> (see the portinstall sketch below)<br />
## portinstall subversion<br />
# Install mod_python-3.3.1 from ports<br />
# Install trac-0.10.4 from ports</div>
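A sketch of passing those build knobs through the ports framework without editing any Makefiles; the port origins are from memory and may differ slightly on this tree:

<pre>
# -m forwards make(1) arguments, so the Berkeley DB knob reaches the build
sudo portinstall -m "APR_UTIL_WITH_BERKELEY_DB=yes" devel/subversion

# mod_python and Trac straight from ports
sudo portinstall www/mod_python3 www/trac
</pre>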
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hopperprime&diff=3770
Hopperprime
2008-01-15T02:46:10Z
<p>Skylar: </p>
<hr />
<div>= hp TODO =<br />
<br />
Located at <tt>hp.cluster.earlham.edu, 159.28.234.108</tt><br />
<br />
Description:<br />
<p>The hardware is an S5000 motherboard with two 3.2GHz dual-core Xeons<br />
(hyper-threading enabled, i.e. 8 logical cores total), 2GB RAM, and two<br />
250GB SATA drives configured as RAID1.</p><br />
*Intel's description of the board is here: http://www.intel.com/support/motherboards/server/s5000pal/index.htm<br />
*The manuals and memory specs are in the sna CVS module in /cluster/cvsroot<br />
<br />
Tasks:<br />
* (Complete) System Install [CP] 20070613<br />
* (Complete) sudo [HH] 20070613<br />
* (Complete) Need /usr/src [HH] 20070613<br />
* (Complete) Security Patches (keep downloaded patches in /root/patches; see the advisory sketch after this list) [HH] 20070615<br />
** (Complete) Applied FreeBSD-SA-07:02.bin [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:03.ipv6 [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:04.file [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:08.openssl [HH] 20071006<br />
** (Complete) Applied FreeBSD-SA-07:09.random [HH] 20071130<br />
** (Complete) Applied FreeBSD-SA-07:10.gtar [HH] 20071130<br />
* (Complete) cvsup [HH] 20070615<br />
** (Complete) Setup cron job [HH] 20070613<br />
* (Complete) SSH config and banner [HH] 20070613<br />
* (Complete) portupgrade [HH] 20070615<br />
* (Complete) Upgrade X to X 7 (all x-{related_packages}) [HH] 20070615<br />
* (Complete) Screen [NM]<br />
* (Complete) Pine + Pico [NM]<br />
* (Complete) Bash [HH] 20070615<br />
* (Complete) Apache2 /PHP,MySQL,Session [HH] 20070623<br />
* (Complete) [[ntpd]] [NM]<br />
* (Complete) NFS (/Cluster over TCP) [HH] 20070623<br />
* (Complete) Password migration [HH] 20070624<br />
* (Complete) [[MYSQL]] [NM]<br />
* (Complete) [[PortAudit]] [NM]<br />
* (Complete) [[PROFTPD]] [NM]<br />
* (Complete) chkrootkit [NM]<br />
* (Complete) [[Bugzilla]] (over MySQL) [NM]<br />
* (Complete) [[PostgreSQL]] [NM]<br />
* (Complete) Ganglia [HH] 20070819<br />
* (Complete) SSH Keys from Hopper [HH] 20070819<br />
* (Complete) DNS [HH] 20070819<br />
* (Complete) Cacti [HH]<br />
* (Complete) New NIC Card [multiple ports] [CP]<br />
* (Complete) DHCPD [HH]<br />
* (Complete) Talk [HH]<br />
* (Complete) Kernel tweaks [HH] 20071202<br />
* (Not Installed) NIS [HH]<br />
* (Not Installed) CVS [NM]<br />
* (Not Complete) [[Hoperrprime:Trac]] [ST]<br />
* (Not Installed) Remote Access Controller card [CP]<br />
* (Not Complete) Final Cut-over (mysql/postgreSQL/Bugzilla) [NM]<br />
* Figure out what's on admin that should move to hp (so we can abandon admin) [CP]<br />
* Set up NFS over UDP and TCP sharing /cluster (after copying the current /cluster) [CP]</div>
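The standard workflow for one of the advisories listed above, sketched for SA-07:08.openssl; the fetch URLs follow the advisory's own instructions, and the rebuild target varies per advisory:

<pre>
# Keep the downloaded patch in /root/patches, per the task above
cd /root/patches
fetch http://security.FreeBSD.org/patches/SA-07:08/openssl.patch
fetch http://security.FreeBSD.org/patches/SA-07:08/openssl.patch.asc

# Apply against the system sources, then rebuild/install whatever
# component the advisory names and restart the services that use it
cd /usr/src
patch < /root/patches/openssl.patch
</pre>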
Skylar
https://wiki.cs.earlham.edu/index.php?title=Hopperprime&diff=3769
Hopperprime
2008-01-15T02:45:51Z
<p>Skylar: </p>
<hr />
<div>= hp TODO =<br />
<br />
Located at <tt>hp.cluster.earlham.edu, 159.28.234.108</tt><br />
<br />
Description:<br />
<p>The hardware is an S5000 motherboard with two 3.2GHz dual-core Xeons<br />
(hyper-threading enabled, i.e. 8 logical cores total), 2GB RAM, and two<br />
250GB SATA drives configured as RAID1.</p><br />
*Intel's description of the board is here: http://www.intel.com/support/motherboards/server/s5000pal/index.htm<br />
*The manuals and memory specs are in the sna CVS module in /cluster/cvsroot<br />
<br />
Tasks:<br />
* (Complete) System Install [CP] 20070613<br />
* (Complete) sudo [HH] 20070613<br />
* (Complete) Need /usr/src [HH] 20070613<br />
* (Complete) Security Patches (Keep downloaded patches in /root/patches) [HH] 20070615<br />
** (Complete) Applied FreeBSD-SA-07:02.bin [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:03.ipv6 [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:04.file [HH] 20070613<br />
** (Complete) Applied FreeBSD-SA-07:08.openssl [HH] 20071006<br />
** (Complete) Applied FreeBSD-SA-07:09.random [HH] 20071130<br />
** (Complete) Applied FreeBSD-SA-07:10.gtar [HH] 20071130<br />
* (Complete) cvsup (see the supfile/cron sketch after this list) [HH] 20070615<br />
** (Complete) Setup cron job [HH] 20070613<br />
* (Complete) SSH config and banner [HH] 20070613<br />
* (Complete) portupgrade [HH] 20070615<br />
* (Complete) Upgrade X to X 7 (all x-{related_packages}) [HH] 20070615<br />
* (Complete) Screen [NM]<br />
* (Complete) Pine + Pico [NM]<br />
* (Complete) Bash [HH] 20070615<br />
* (Complete) Apache2 /PHP,MySQL,Session [HH] 20070623<br />
* (Complete) [[ntpd]] [NM]<br />
* (Complete) NFS (/Cluster over TCP) [HH] 20070623<br />
* (Complete) Password migration [HH] 20070624<br />
* (Complete) [[MYSQL]] [NM]<br />
* (Complete) [[PortAudit]] [NM]<br />
* (Complete) [[PROFTPD]] [NM]<br />
* (Complete) chkrootkit [NM]<br />
* (Complete) [[Bugzilla]] (over MySQL) [NM]<br />
* (Complete) [[PostgreSQL]] [NM]<br />
* (Complete) Ganglia [HH] 20070819<br />
* (Complete) SSH Keys from Hopper [HH] 20070819<br />
* (Complete) DNS [HH] 20070819<br />
* (Complete) Cacti [HH]<br />
* (Complete) New NIC Card [multiple ports] [CP]<br />
* (Complete) DHCPD [HH]<br />
* (Complete) Talk [HH]<br />
* (Complete) Kernel tweaks [HH] 20071202<br />
* (Not Installed) NIS [HH]<br />
* (Not Installed) CVS [NM]<br />
* (Not Complete) Trac [ST]<br />
* (Not Installed) Remote Access Controller card [CP]<br />
* (Not Complete) Final Cut-over (mysql/postgreSQL/Bugzilla) [NM]<br />
* Figure out what's on admin that should move to hp (so we can abandon admin) [CP]<br />
* Set up NFS over UDP and TCP sharing /cluster (after copying the current /cluster) [CP]</div>
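The cvsup item and its cron job above amount to a supfile plus a crontab entry; a minimal sketch, with the mirror host and supfile path illustrative:

<pre>
# /root/ports-supfile -- track the ports tree
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=.
*default delete use-rel-suffix compress
ports-all

# root's crontab -- refresh the ports tree nightly at 03:00
0 3 * * * /usr/local/bin/cvsup -g -L 2 /root/ports-supfile
</pre>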
Skylar
https://wiki.cs.earlham.edu/index.php?title=File:New_fiber_plan.jpg&diff=5538
File:New fiber plan.jpg
2006-04-24T05:30:42Z
<p>Skylar: Post April '06 fiber plan</p>
<hr />
<div>Post April '06 fiber plan</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=File:Old_fiber_plan.jpg&diff=5439
File:Old fiber plan.jpg
2006-04-24T05:27:12Z
<p>Skylar: Pre-April '06 Fiber Plan</p>
<hr />
<div>Pre-April '06 Fiber Plan</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:2006-04-06_Meeting&diff=5532
Cluster:2006-04-06 Meeting
2006-04-06T16:50:02Z
<p>Skylar: </p>
<hr />
<div>* Look at hopeless (Toby, Skylar)<br />
* Finish up bigfe (JoshM)<br />
* Send pointer to TomM's folks wrt BCCD download (Skylar)<br />
* Reconfigure weatherduck db (Skylar)<br />
* Look at apt for BCCD (Toby)<br />
* TeraGrid<br />
* Backup (Skylar)<br />
** Get magazine<br />
** Fix admin AMANDA backups<br />
* Backup c10/c11 (Toby)<br />
* Start using Bugzilla for BCCD liberation (Skylar, Kevin)</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:2006-04-06_Meeting&diff=1538
Cluster:2006-04-06 Meeting
2006-04-06T16:41:19Z
<p>Skylar: </p>
<hr />
<div>* Look at hopeless (Toby, Skylar)<br />
* Finish up bigfe (JoshM)<br />
* Send pointer to TomM's folks wrt BCCD download (Skylar)<br />
* Reconfigure weatherduck db (Skylar)<br />
* Look at apt for BCCD (Toby)<br />
* TeraGrid<br />
* Backup (Skylar)<br />
** Get magazine<br />
** Fix admin AMANDA backups<br />
* Backup c10/c11 (Toby)</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:2006-04-06_Meeting&diff=1537
Cluster:2006-04-06 Meeting
2006-04-06T16:36:16Z
<p>Skylar: </p>
<hr />
<div>* Look at hopeless (Toby, Skylar)<br />
* Finish up bigfe (JoshM)<br />
* Send pointer to TomM's folks wrt BCCD download (Skylar)<br />
* Reconfigure weatherduck db (Skylar)<br />
* Look at apt for BCCD (Toby)<br />
* TeraGrid<br />
* Backup (Skylar)<br />
** Get magazine<br />
** Fix admin AMANDA backups</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:2006-04-06_Meeting&diff=1536
Cluster:2006-04-06 Meeting
2006-04-06T16:29:03Z
<p>Skylar: </p>
<hr />
<div>* Look at hopeless (Toby, Skylar)<br />
* Finish up bigfe (JoshM)<br />
* Send pointer to TomM's folks wrt BCCD download (Skylar)<br />
* Reconfigure weatherduck db (Skylar)<br />
* Look at apt for BCCD (Toby)</div>
Skylar
https://wiki.cs.earlham.edu/index.php?title=Cluster:2006-04-06_Meeting&diff=1535
Cluster:2006-04-06 Meeting
2006-04-06T16:24:21Z
<p>Skylar: </p>
<hr />
<div>* Look at hopeless (Toby, Skylar)<br />
* Finish up bigfe (JoshM)<br />
* Send pointer to TomM's folks wrt BCCD download (Skylar)<br />
* Reconfigure weatherduck db (Skylar)</div>
Skylar