| The point of this system is to make
a Beowulf cluster out of existing hardware. (So please don't tell me that
faster machines with 100BaseT NICs and 100 Mbps switches are readily available.)
Login or Worldly Node Beowoof
Power Macintosh 6100/66
Asante NuBus Ethernet card
Quantum 1GB hard drive
Apple CD-ROM drive
Radius Pivot 824x768 monitor
Macintosh keyboard & mouse
The additional Ethernet card lets this machine act as a firewall and
router for the remaining nodes. The existing Linux kernel did not support
the Asante card. Thanks to Takashi Oe, it now does.
Work Nodes b2, b3, b4, ... b9
Power Macintosh 6100/66
stock 500MB hard drive
Eight of these will be configured with a 25MB partition for Mac OS
7.1, a 120MB Swap partition, and a single big root/usr partition.
They will share a single Macintosh Monochrome monitor, keyboard, and
mouse during setup, and run headless thereafter.
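A quick sanity check of the partition budget above, as a minimal sketch. The 500 MB figure is the stock drive's nominal size; the actual usable capacity after the Apple partition map and driver partitions will be somewhat less.

```python
# Work-node disk budget from the plan above (all sizes in MB).
DISK_MB = 500    # stock drive, nominal
MACOS_MB = 25    # Mac OS 7.1 partition
SWAP_MB = 120    # Linux swap partition

# Whatever remains goes to the single big root/usr partition.
root_usr_mb = DISK_MB - MACOS_MB - SWAP_MB
print(f"root/usr partition: ~{root_usr_mb} MB")  # ~355 MB
```

About 355 MB for root/usr is tight but workable for a headless work node of this era.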
A single external CD-ROM drive is used during system software installation.
Once that's done, it can stay with Beowoof.
Linksys 10-port 10BaseT Ethernet hub.
The hub will work but is slower than a switch. I won't use a switch
right away... When I've profiled a specific application and found that
the system needs a switch, I'll say, "Hmm, this application needs
more network throughput." Beowulf clusters are inherently not optimal
for parallel tasks that need a lot of intercommunication. 2D cellular
automata ("Life") or weather simulations are excellent examples
of problems inefficiently solved on a Beowulf. Ray-tracing image rendering
or parallel compilation, however, are examples of things that work very
well on Beowulfs: each individual task is fairly independent of all the others.
Each 6100/66 is capable of 132 bogomips. A 10 Mbps Ethernet connection
generally delivers about 3 Mbps of usable throughput. Since the nodes share
a hub, that 3 Mbps is also the aggregate communication rate for the whole
system. The nine computers have between them ~1200 bogomips (millions of
instructions per second); divided by 3 Mbps (millions of bits per second),
that's a load of ~400 instructions per transmitted bit.
A 10BaseT switch would allow 3 Mbps on each of 4 simultaneously connected
pairs of machines, or 12 Mbps aggregate. That would reduce the load to ~100
instructions per transmitted bit. A 4x improvement in network throughput
is not worth it for this particular Beowulf cluster.
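The arithmetic above can be checked in a few lines. The key unit observation is that bogomips are millions of instructions per second and Mbps are millions of bits per second, so the millions cancel and the ratio comes out directly in instructions per bit:

```python
# Instructions-per-transmitted-bit estimate for hub vs. switch.
NODES = 9
BOGOMIPS_PER_NODE = 132   # one 6100/66
USABLE_MBPS = 3           # realistic throughput of a 10 Mbps link

total_mips = NODES * BOGOMIPS_PER_NODE        # 1188, "~1200" above

# Hub: all nodes share one 3 Mbps collision domain.
hub_load = total_mips / USABLE_MBPS
# Switch: 4 pairs of machines can talk at once, 12 Mbps aggregate.
switch_load = total_mips / (USABLE_MBPS * 4)

print(f"hub:    ~{hub_load:.0f} instructions per bit")     # ~396
print(f"switch: ~{switch_load:.0f} instructions per bit")  # ~99
```

The exact figures are ~396 and ~99, which round to the ~400 and ~100 quoted above.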
Now if the project were to start with 2 GHz Pentiums, anything less
than 1000BaseT (gigabit) NICs and a 1 Gbps switch would be foolish.
X-10 Power Controllers
I am considering controlling power to the work nodes through
two or more X-10 modules. Along with some scripts to manage and monitor
the presence of the work nodes, this would save power when the Beowulf
cluster is not being used. As a first guess, nodes 2, 3, and 4 would be
on one X-10 module while the remaining nodes 5, 6, 7, 8, and 9 would be
on the other. The idea is to develop and debug a program on the main
node, then run it on four or nine nodes.
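One way the management scripts might decide which modules to switch on, as a minimal sketch. The node-to-module grouping follows the first guess above; the `heyu` command name and the module addresses (a1, a2) are assumptions for illustration, not part of the original plan.

```python
# Map X-10 module addresses to the work nodes they power
# (addresses a1/a2 are hypothetical).
MODULES = {
    "a1": ["b2", "b3", "b4"],              # first module
    "a2": ["b5", "b6", "b7", "b8", "b9"],  # second module
}

def power_cmds(nodes_wanted, state="on"):
    """Return X-10 CLI commands for every module feeding a wanted node."""
    return [f"heyu {state} {addr}"
            for addr, nodes in MODULES.items()
            if any(n in nodes_wanted for n in nodes)]

# Four-node run (main node plus b2-b4): only the first module powers up.
print(power_cmds(["b2", "b3", "b4"]))          # ['heyu on a1']
# Nine-node run: both modules power up.
print(power_cmds(["b2", "b5"]))                # ['heyu on a1', 'heyu on a2']
# Idle cluster: switch everything off again.
print(power_cmds(["b2", "b5"], state="off"))   # ['heyu off a1', 'heyu off a2']
```

A monitoring script could pair this with pings of each work node, powering a module off only after all of its nodes have been idle for a while.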