LCDP at Sheridan
   over a high-speed network

Linux Cluster on SuSE 7.2

A High Performance Diskless Linux Cluster

General Info

The SuSE 7.2 Linux distribution was installed on an IBM IntelliStation M Pro workstation. There are not many tools for automated creation of a cluster on the SuSE distribution.
All files and services needed for cluster operation were created manually, as described below:
The Etherboot program was used to create a boot image supporting the 3Com 905 network card, for booting a diskless node from a floppy disk.
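With Etherboot of that era, the floppy image is built from the Etherboot source tree; a sketch of the typical steps (the driver name, source version, and floppy device below are assumptions, not details recorded in the original setup):

```shell
# Build the Etherboot ROM image for the 3Com 905 family (driver name assumed).
cd etherboot-5.0/src
make bin32/3c905c-tpo.lzrom
# Prepend the Etherboot floppy loader and write the result to the boot floppy.
cat bin/boot1a.bin bin32/3c905c-tpo.lzrom > /dev/fd0
```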
The IP address scheme was planned using one of the reserved class C networks, giving the possibility of 244 workstations associated with a single server. One address was assigned to the master (another address was already in use by the Red Hat cluster), and a contiguous range of addresses was assigned to the nodes.
The kernel image for the nodes was compiled with support for a root file system on NFS and automatic kernel-level IP configuration.
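In a 2.4.x kernel configuration, those two requirements correspond to the following options (a .config fragment; the full option set actually used is not recorded here):

```
# .config fragment: kernel-level IP autoconfiguration and NFS root (2.4.x).
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
```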
The DHCP server on the master was configured to allow dynamic registration of any node in the chosen range (the nodes' network-card MAC addresses were not used) and to point each node to the location of the kernel image.
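An ISC dhcpd configuration of roughly this shape implements that policy; all addresses, the subnet, and the image path below are illustrative, not the values actually used:

```
# /etc/dhcpd.conf fragment (illustrative addresses and paths).
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.10 192.168.1.250;     # any node in this range may register
    option routers 192.168.1.1;           # the master (example address)
    next-server 192.168.1.1;              # TFTP server = the master
    filename "/tftpboot/vmlinuz.node";    # kernel image fetched via TFTP
}
```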
The inetd service was configured to provide the tftp service for remote installation of the Linux kernel image.
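The corresponding /etc/inetd.conf entry would look like this; the chroot directory passed with -s is an assumption based on the /tftpboot layout described below:

```
# /etc/inetd.conf: enable TFTP, serving boot images from /tftpboot.
tftp  dgram  udp  wait  root  /usr/sbin/in.tftpd  in.tftpd -s /tftpboot
```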
The /tftpboot directory structure was created, and a custom file system for the nodes was built. The master distribution file system occupies 5.9 GB; each node file system occupies 101 MB, with a further 1.1 GB of common files. Due to hard-drive space restrictions, the file system was populated for four nodes only. The Network File System on the master was configured in the /etc/exports file, giving each node access to the appropriate directories. The files /etc/hosts and /etc/hosts.equiv were edited to allow network communication between the nodes and the master.
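An /etc/exports fragment of this shape gives each node read-write access to its private root and read-only access to the shared files; the node names and paths are examples, not the actual layout:

```
# /etc/exports fragment (hypothetical node names and paths).
/tftpboot/node1   node1(rw,no_root_squash)
/tftpboot/node2   node2(rw,no_root_squash)
/tftpboot/common  node1(ro) node2(ro)
```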
The files /etc/pam.d/rlogin and /etc/pam.d/rsh on the master and on all nodes were modified to relax security for the purpose of running parallel programs and commands.
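In PAM of that era this typically means letting rhosts-based authentication succeed before any password check; a sketch of an /etc/pam.d/rlogin fragment (the module path varies by distribution, and the exact modification made here is not recorded):

```
# /etc/pam.d/rlogin fragment: trust ~/.rhosts and hosts.equiv before
# falling back to the normal password check.
auth  sufficient  /lib/security/pam_rhosts_auth.so
auth  required    /lib/security/pam_unix.so
```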
For the cluster administrators, the secure shell key generator was used to provide secure, encrypted, passwordless communication between the master and the nodes.
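A minimal sketch of the key setup, assuming OpenSSH; the key path here is illustrative (normally ~/.ssh/id_rsa), and on the real cluster the public key is appended to ~/.ssh/authorized_keys on the master and on every node:

```shell
# Generate a passwordless RSA key pair for the cluster administrator
# (path is illustrative; normally this lives in ~/.ssh).
mkdir -p /tmp/cluster-keys
rm -f /tmp/cluster-keys/id_rsa /tmp/cluster-keys/id_rsa.pub
ssh-keygen -t rsa -N "" -f /tmp/cluster-keys/id_rsa -q
# Authorize the key; copy id_rsa.pub to each node's authorized_keys.
cat /tmp/cluster-keys/id_rsa.pub >> /tmp/cluster-keys/authorized_keys
chmod 600 /tmp/cluster-keys/authorized_keys
```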

For a detailed description of the hardware setup, please check here.

Server Setup

The server is booted from the /boot sub-directory using kernel 2.4.4 (Master kernel configuration).

Client Setup

Clients are booted from the floppy disk using kernel 2.4.4 (Floppy kernel configuration).
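With an NFS-root kernel, the boot command line passed to the client takes roughly this form; the server address and export path are placeholders:

```
# Kernel command line for a diskless client (illustrative values).
root=/dev/nfs nfsroot=192.168.1.1:/tftpboot/node1 ip=dhcp
```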

Clustering Tools

The following clustering tools and parallel libraries were installed and tested or evaluated:

  • bWatch - a cluster performance monitor
  • clusterit - a set of parallel commands
  • ganglia - a cluster reporting and monitoring toolkit
  • heartbeat - the heartbeat subsystem for High-Availability Linux
  • lam - Local Area Multicomputer
  • procstatd - a proc monitoring daemon for Beowulf clusters
  • pvm - Parallel Virtual Machine
  • pvmpov - POV-Ray with PVM support
  • vacm - VA Cluster Manager
  • xmpi - a graphical user interface for MPI program development
  • xmtv - a graphics server for LAM/MPI
  • xpvm - a graphical console and monitor for PVM

Oracle 9i was installed; standard and limited clustered(*1) versions of the database were created and initially tested for performance on ext2 and ReiserFS.


Results of tests performed in Lab S142
(December 12, 2001):

Basic tasks were performed to measure the performance of the operating system and the Oracle database on two different partitions. Four tasks were carried out on both the ext2 and ReiserFS file systems:
  1. Copying a large file (325 MB)
  2. Creating a database
  3. Running a SELECT statement (all tables of user SYSTEM)
  4. Importing user data (10.4 MB dump file)
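The file-copy test can be reproduced with ordinary shell timing; a sketch, where the test file is far smaller than the 325 MB file actually used and the paths stand in for the ext2 and ReiserFS mount points:

```shell
# Create a test file and time its copy. In the real test the file was
# 325 MB and the destination was a partition formatted ext2 or reiserfs.
dd if=/dev/zero of=/tmp/bigfile bs=1M count=8 2>/dev/null
time cp /tmp/bigfile /tmp/bigfile.copy
sync   # flush the page cache so the copy actually reaches disk
```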

Known Bugs/Problems

  • Kernel related problems
    • The latest kernel (2.4.4) was compiled with a custom configuration

  • SuSE distribution related problems

  • Miscellaneous problems
    • System initialization scripts were identified and then modified to allow remote node rebooting
