Beowulf Setup

This web page describes a simple way of setting up a cluster. Using rsh without passwords is necessary for running parallel programs with MPI on your cluster, although you can substitute ssh for rsh. Only the setup for rsh is described here (you can set up ssh within your cluster quite similarly, or quite differently using host keys).
Running NIS on a cluster can be regarded as a security risk. Furthermore, it complicates matters unnecessarily.

In order to allow rsh without passwords in a secure fashion, all internal nodes must be on a private network. For the purpose of this description I choose the 192.168.1 net. The master node has (at least) two network cards: one for the outside world, one for the 192.168.1 net. IP forwarding is switched off on all interfaces, i.e., users can log on to the cluster only by connecting to the master node. Once they have been authenticated on the master node they can connect to any other node without providing a password again.
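On most Linux systems IP forwarding can be switched off via a sysctl setting; a minimal sketch (the exact config file location may differ between distributions):

```shell
# Disable IP forwarding right away (takes effect immediately):
echo 0 > /proc/sys/net/ipv4/ip_forward

# To make the setting survive reboots, put the following line in
# /etc/sysctl.conf and run "sysctl -p" to apply it:
#
#   net.ipv4.ip_forward = 0
```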

Most Linux distributions nowadays are configured with support for tcp-wrappers, which allow you to specify access restrictions in the /etc/hosts.deny and /etc/hosts.allow files (for a more extensive description of tcp-wrappers and of how to make a Linux box more secure, see Linux Security 101). In order to secure the master node against unwanted intruders, the first and most important step is to put the line

ALL : ALL
into the /etc/hosts.deny file on the master node. Apart from comments, this should be the only line in that file. Without any entries in the /etc/hosts.allow file this would disallow any connection to your master node - probably not what you want.
For security reasons, I allow connections to the master node from the internet only via ssh. Furthermore, I want to allow rsh and rlogin connections within the cluster, i.e., within the 192.168.1 net. For NFS purposes, connections to the portmapper must also be allowed within the cluster. Hence, my /etc/hosts.allow file looks like this:
#
# hosts.allow	This file describes the names of the hosts which are
#		allowed to use the local INET services, as decided
#		by the '/usr/sbin/tcpd' server.
#
in.rshd : 192.168.1.
in.rlogind : 192.168.1.
portmap : 192.168.1.
sshd : ALL
[Note that I'm using OpenSSH, which is tcp-wrapped and therefore requires an entry in the /etc/hosts.allow file.]

In order to allow rsh without passwords within the cluster, the hostnames of all nodes must be listed in the /etc/hosts.equiv file on all nodes (including the master node).
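For a small three-node cluster the file might look like this (the node names are hypothetical; use the hostnames from your own /etc/hosts). The same file goes on every node, master included:

```
master
node01
node02
```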

Now run "chmod 500 /usr/bin/passwd" on all nodes but the master node and tell your users that they can change their password only on the master node. Every time you create a new account on the master node, you must use rdist (or an equivalent tool) to copy /etc/passwd, /etc/shadow, and /etc/group to all internal nodes.
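With rdist this can be automated with a small Distfile; a minimal sketch, assuming hypothetical node names node01 and node02 (run "rdist -f Distfile" as root on the master node):

```
HOSTS = ( node01 node02 )

FILES = ( /etc/passwd /etc/shadow /etc/group )

${FILES} -> ${HOSTS}
	install ;
```

Classic rdist uses rsh as its transport by default, so it fits naturally into the passwordless-rsh setup described here.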

Now you are set: there is no need to update /etc/shadow on the internal nodes every time somebody changes a user password, since no program is ever going to look at /etc/shadow on the internal nodes!

This setup requires that a user who wants to log in to an internal node must log in to the master node first, but that isn't really a disadvantage because passwords don't have to be typed again. Furthermore, from a sysadmin's point of view, this has the huge advantage that you only have to secure the master node, which makes your life quite a bit easier.