Medusa is a large Beowulf-class
parallel computer, built and operated by the LSC
group at the University of Wisconsin-Milwaukee (UWM). It became
operational in August 2001 and is
being used to develop and prototype data analysis for the Laser
Interferometer Gravitational-wave Observatory (LIGO). Medusa is
used by members of the LIGO
Scientific Collaboration (LSC) and also serves as a resource for
the GriPhyN collaboration.
This web site contains documentation for LSC members describing how to use
Medusa and how Medusa works. The links on the left-hand side of this page lead to additional information about Medusa. There is also a search tool which you can use to find relevant web pages.
A few facts about Medusa
Medusa is a 300-node Linux Beowulf cluster with a mixture of 100 Mb/s
and 1 Gb/s Ethernet.
Medusa was funded on September 1, 2000 by a Major
Research Infrastructure grant from the National Science Foundation (NSF)
and by matching funds from UWM. Its anticipated lifetime is three
years or more.
The total cost is $593,323, funded as follows: $415,326 from the NSF
and $177,997 from UWM.
The construction schedule was:
September 2000-January 2001: Benchmarking & Testing.
February 2001: Final design and design review.
Spring 2001: Purchasing and construction.
July/August 2001: Commissioning.
300 nodes, each having 1 Gflop peak performance. Each node has a 1 GHz Intel
Pentium III "Coppermine" processor.
24 Terabytes of inexpensive (ATA-100) distributed disk. Each node has an
80 Gbyte disk drive.
512 Mbytes of PC-133 RAM per node, or a total of 150 Gbytes of RAM.
The system is networked with a fully-meshed Foundry Networks FastIron III
backplane switch, with a combination of 100 Mb/s and 1 Gb/s links.
All nodes & networking are connected to uninterruptible power
supplies that cleanly shut down the system if power is absent for more
than about five minutes.
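The aggregate figures above follow directly from the per-node specifications; a minimal sanity-check sketch (all numbers taken from the list above):

```python
# Back-of-the-envelope check of Medusa's aggregate capacity,
# using the per-node figures from the specifications above.
nodes = 300
disk_per_node_gb = 80    # one 80 Gbyte ATA-100 drive per node
ram_per_node_mb = 512    # PC-133 RAM per node
gflops_per_node = 1      # peak performance per node

total_disk_tb = nodes * disk_per_node_gb / 1000   # decimal TB, as quoted
total_ram_gb = nodes * ram_per_node_mb / 1024     # binary GB
peak_gflops = nodes * gflops_per_node

print(f"{total_disk_tb:.0f} TB disk, {total_ram_gb:.0f} GB RAM, "
      f"{peak_gflops} Gflops peak")
# → 24 TB disk, 150 GB RAM, 300 Gflops peak
```

This reproduces the quoted totals of 24 Terabytes of distributed disk and 150 Gbytes of RAM across the 300 nodes.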
These pictures show two views of Medusa. In the first picture, the
back shelf contains 120 nodes, the middle shelf 80 nodes, and the closest
shelf 96 nodes. The second picture shows the shelf with 120 nodes.
The networking switch can be seen at the back of the middle shelf, with
cable trays leading to it. The large white boxes on top of the shelves
are 2.2 kVA Uninterruptible Power Supplies (UPSes). Additional pictures can be found in the Photo Gallery (to the left).