The SuSE 7.2 Linux distribution was installed on an IBM IntelliStation M Pro workstation. Since there are not many tools for automated
cluster creation on the SuSE distribution,
all files and services needed for cluster operation were created manually.
The Etherboot program was used to create a boot image supporting the 3Com 905 network
card, so that a diskless node can be booted from a floppy disk.
The IP address scheme was planned using one of the reserved Class C networks,
making it possible to associate 244 workstations with a single
server. The master was assigned IP address 192.168.1.2 (IP address 192.168.1.1
was already used by the Red Hat cluster), and the nodes were assigned addresses from
192.168.1.10 to 192.168.1.254.
The kernel image for the nodes was compiled with support for a root file system on NFS
and automatic kernel-level IP configuration.
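For reference, the kernel options behind these two features look roughly like the excerpt below. This is an illustrative sketch of a 2.4-series node kernel .config, not the exact configuration used on this cluster:

```shell
# Illustrative excerpt of the node kernel .config (2.4 series).
CONFIG_IP_PNP=y          # automatic kernel-level IP configuration
CONFIG_IP_PNP_DHCP=y     # obtain IP settings via DHCP during boot
CONFIG_NFS_FS=y          # NFS client compiled in (not a module)
CONFIG_ROOT_NFS=y        # allow the root file system to live on NFS
```

Note that the NFS client and root-on-NFS support must be compiled into the kernel rather than built as modules, since modules cannot be loaded before the root file system is mounted.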
The DHCP server on the master was configured to allow dynamic registration of
any node in the chosen range (the MAC addresses of the node network cards were not
used) and to point each node to the location of the kernel image.
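A configuration like the one described above might look as follows in /etc/dhcpd.conf. The file name and subnet options here are illustrative; the report does not record the exact entries used:

```shell
# Sketch of the relevant /etc/dhcpd.conf entries (illustrative values).
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.10 192.168.1.254;    # dynamic registration, no MAC list
    option routers 192.168.1.2;          # the master
    next-server 192.168.1.2;             # TFTP server = master
    filename "/tftpboot/vmlinuz.nodes";  # kernel image fetched via TFTP
}
```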
The inetd service was configured to provide the TFTP service for remote
installation of the Linux kernel image.
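Enabling TFTP under inetd amounts to uncommenting or adding a line of this shape in /etc/inetd.conf (the daemon path is illustrative and distribution-dependent):

```shell
# Typical /etc/inetd.conf entry enabling TFTP, restricted to /tftpboot:
tftp  dgram  udp  wait  root  /usr/sbin/in.tftpd  in.tftpd -s /tftpboot
```

The `-s /tftpboot` argument makes the daemon chroot into the boot directory, so clients can only request files below it.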
The tftpboot directory structure was created, along with a custom-made file system for
the nodes. The master distribution file system occupies 5.9 GB; each node file
system takes 101 MB, plus 1.1 GB of common files. Due to hard
drive space restrictions, the file system was populated for four nodes only.
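A common layout for such a setup keeps one root tree per node plus a shared tree, along the lines of the sketch below. The directory names are hypothetical, since the report does not record the actual paths:

```shell
# Hypothetical sketch of the layout described above (names illustrative).
/tftpboot/vmlinuz.nodes        # node kernel image served via TFTP
/tftpboot/192.168.1.10/        # per-node root file system (~101 MB each)
/tftpboot/192.168.1.11/
/tftpboot/192.168.1.12/
/tftpboot/192.168.1.13/
/tftpboot/common/              # ~1.1 GB of files shared by all nodes
```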
The Network File System service on the master was configured in the /etc/exports file, giving
each node access to the appropriate directories. The files /etc/hosts and
/etc/hosts.equiv were edited to allow network communication between the nodes.
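The per-node exports might look like the sketch below, each node mounting its own root read-write and the shared tree read-only. Paths are illustrative, not the actual values used:

```shell
# Sketch of /etc/exports on the master (illustrative paths).
/tftpboot/192.168.1.10  192.168.1.10(rw,no_root_squash)
/tftpboot/192.168.1.11  192.168.1.11(rw,no_root_squash)
/tftpboot/common        192.168.1.0/255.255.255.0(ro)
```

The `no_root_squash` option is needed because the node's kernel mounts its root file system as root.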
The files /etc/pam.d/rlogin and /etc/pam.d/rsh on the master and on all nodes were
modified to relax security for the purpose of running parallel programs.
For cluster administrators, the secure shell key generator was used to provide secure, encrypted, passwordless communication between the master and the hosts.
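The key-based setup mentioned above follows the usual pattern sketched below. The node name `node10` and key type are illustrative; the commands are shown as a sketch rather than the exact procedure used:

```shell
# Generate a key pair with an empty passphrase on the master (as the admin user):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Append the public key to a node's authorized_keys (node name illustrative):
cat ~/.ssh/id_rsa.pub | ssh node10 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# Subsequent logins proceed without a password prompt:
ssh node10 hostname
```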
For a detailed description of the
hardware setup, please check here.
The server is booted from the /boot sub-directory
using kernel 2.4.4 (Master kernel configuration).
The clients are booted from a floppy disk using
kernel 2.4.4 (Floppy kernel configuration).
The following clustering tools and parallel libraries were installed and
tested or evaluated:
Oracle 9i was installed; standard and limited clustered(*1) versions of the
database were created and initially tested for performance on ext2 and reiserfs.
- bWatch - a cluster performance monitor.
- clusterit - set of parallel commands
- ganglia - cluster reporting and monitoring toolkit
- heartbeat - heartbeat subsystem for High-Availability Linux
- lam - Local Area Multicomputer
- procstatd - proc monitoring daemon for Beowulf Clusters
- pvm - Parallel Virtual Machine
- pvmpov - POVRay with PVM support
- vacm - VA-Cluster Manager
- xmpi - a graphical user interface for MPI program development
- xmtv - a graphic server for LAM/MPI
- xpvm - a graphical console and monitor for PVM
Results of tests performed in Lab S142
(December 12, 2001):
Basic tasks were performed to measure the performance of the operating system and the Oracle database on two different partitions. Four tasks were accomplished on both the ext2 and reiserfs file systems:
- Copying a large file (325 MB)
- Creating a database
- Select statement (on all tables of user SYSTEM)
- Importing user data (dump file, 10.4 MB)
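The first of these tasks, timing a large-file copy on a given file system, can be sketched as below. The file name and size are illustrative, not the 325 MB file actually used in the tests:

```shell
# Minimal sketch of timing a large-file copy (illustrative 32 MB file).
dd if=/dev/zero of=/tmp/bigfile bs=1M count=32 2>/dev/null  # create test file
time cp /tmp/bigfile /tmp/bigfile.copy                      # measure the copy
```

Running the copy on partitions formatted with ext2 and reiserfs in turn gives the per-file-system comparison described above.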
- Kernel related problems
- SuSE distribution related problems
- Miscellaneous problems
- System initialization scripts were identified and then modified to allow remote node rebooting