Welcome to the Nemo cluster at UWM!


Nemo is a large Beowulf-class parallel computer, built and operated by the LSC group at the University of Wisconsin - Milwaukee (UWM). It became operational in the first quarter of 2006 and is used for development work and production data analysis for the Laser Interferometer Gravitational-wave Observatory (LIGO). Nemo is used by members of the LIGO Scientific Collaboration (LSC) and also serves as a resource for the GriPhyN collaboration.

This web site contains documentation for LSC members about how to use Nemo and how it works. On the left-hand side of this page is a set of links which you can follow to get additional information about Nemo. There will also be a search tool which you can use to find relevant web pages.

Nemo Slave Nodes

A few facts about Nemo

  • Nemo is a 780-node (1560-core) Linux Beowulf cluster with a Gb/s Ethernet interconnect.
  • Nemo was funded on July 20, 2004 by a Major Research Instrumentation grant from the National Science Foundation (NSF) and by matching funds from UWM. Its anticipated lifetime is three years or more.
  • The total cost is $1,891,065.  This is funded as follows: $1,444,972 (NSF) + $446,093 (UWM). UWM is also providing new cluster room space.
  • The construction schedule was:
    • June-November 2005: Benchmarking and testing.
    • December 2005: Purchase order issued.
    • January 2006: First rack delivered (32 nodes).
    • February 2006: Remaining 19 racks delivered (748 nodes).
    • March 2006: Storage servers delivered (300 TB).
    • Q1 2006: Commissioning.
  • Nemo baseline design highlights:
    • 780 nodes, each with a peak performance of 8.8 Gflops. Each node has one dual-core AMD Opteron 175 CPU and an 80 GB SATA disk.
    • 2 GB of RAM per node, for a total of about 1.6 TB of RAM (see the arithmetic sketch after this list).
    • 300 usable terabytes of inexpensive (SATA-II) distributed disk attached to hardware RAID controllers, housed in separate storage servers attached to the network.
    • The system is networked with a Force10 E1200 Ethernet core switch; edge switches feed the nodes, so the network is oversubscribed. The switching design is linked from the left-hand column of this page.
    • All equipment is connected to a 500 kVA/400 kW UPS system. This is a Powerware 9315 with three battery cabinets (6 minutes of runtime at full load).
    • The dedicated cluster room has approximately 1400 square feet of usable floor space, an 18-inch raised floor, and four Data-Aire 26-ton air conditioning units. Heat is transferred to a water/glycol mixture, which is circulated to dry coolers on the roof.
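
The aggregate numbers above follow directly from the per-node figures. Below is a minimal back-of-the-envelope sketch (in Python) of that arithmetic; the 2.2 GHz clock and 2 double-precision FLOPs per core per cycle assumed for the Opteron 175 are assumptions for illustration, not figures quoted on this page.

    # Back-of-the-envelope check of the per-node and aggregate figures quoted above.
    # Assumed (not quoted on this page): Opteron 175 clock of 2.2 GHz and
    # 2 double-precision FLOPs per core per cycle.

    NODES = 780
    CORES_PER_NODE = 2
    CLOCK_GHZ = 2.2        # assumed clock speed
    FLOPS_PER_CYCLE = 2    # assumed, per core

    peak_per_node_gflops = CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE  # 8.8 Gflops
    aggregate_peak_tflops = NODES * peak_per_node_gflops / 1000.0        # about 6.9 Tflops

    ram_per_node_gb = 2
    total_ram_tb = NODES * ram_per_node_gb / 1000.0                      # about 1.6 TB

    print("Peak per node:  %.1f Gflops" % peak_per_node_gflops)
    print("Aggregate peak: %.2f Tflops" % aggregate_peak_tflops)
    print("Total RAM:      %.2f TB" % total_ram_tb)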
Additional pictures and information can be found in the Photo Gallery (to the left).



Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.