Running Programs That Aren't Parallelized

Starting and Migrating Programs to Compute Nodes (bpsh)

There are no executable programs (binaries) on the file system of the compute nodes. This means there is no getty, no login, and no shell on the compute nodes.

Instead of the remote shell (rsh) and secure shell (ssh) commands that are available on networked stand-alone computers (each of which has its own collection of binaries), Scyld ClusterWare provides the bpsh command. The following example shows the standard ls command running on node 2 using bpsh:

[user@cluster username]$ bpsh 2 ls -FC /
dev/ etc/ home/ lib/ lost+found/ proc/ sbin/ scratch/ tmp/ usr/

Although the output shows that there is no /bin directory, ls is nonetheless able to execute. bpsh starts a process running ls on the master node and creates a process memory image that includes the binary and references to all of its dynamically linked libraries. The process is then copied (migrated) to the compute node, where the dynamic libraries are remapped into the process address space. The ls command does not begin executing until after it has been migrated to the compute node.

bpsh isn't a special version of ls, but a special way of handling execution; the same mechanism works with any program, as the following example shows.
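For instance, any standard utility can be run the same way. A minimal sketch, again assuming node 2 is up:

[user@cluster username]$ bpsh 2 uname -r

As with ls, no uname binary exists on the compute node; the command is migrated from the master node and prints the kernel release of node 2.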

For additional information on the BProc Distributed Process Space and how processes are migrated to compute nodes, see the Administrator's Guide.
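One visible consequence of that unified process space is that a program started with bpsh appears in the master node's process table while it runs on the compute node. A minimal sketch, assuming node 2 is up:

[user@cluster username]$ bpsh 2 sleep 60 &
[user@cluster username]$ ps -ef | grep sleep

The sleep process executes on node 2, yet it can be listed, waited on, and signaled from the master node like any local process.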

Copying Information to Compute Nodes (bpcp)

Just as traditional Unix has copy (cp), remote copy (rcp), and secure copy (scp) to move files to and from networked machines, Scyld ClusterWare has the bpcp command.

Although the default sharing of the master node's home directories via NFS is useful for sharing small files, it is not a good solution for large data files. Having the compute nodes read large data files served via NFS from the master node will result in major network congestion, or even an overload and shutdown of the NFS server. In these cases, staging data files on compute nodes using the bpcp command is an alternative solution. Other solutions include using dedicated NFS servers or NAS appliances, and using cluster file systems.

Following are some examples of using bpcp.

This example shows the use of bpcp to copy a data file named foo2.dat from the current directory to the /tmp directory on node 6:

[user@cluster username]$ bpcp foo2.dat 6:/tmp

The default directory on the compute node is the current directory on the master node. That directory may already be NFS-mounted from the master node, but it may not exist on the compute node. The example above works because /tmp exists on the compute node; the copy would fail if the destination directory did not exist. To avoid this problem, create the necessary destination directory on the compute node before copying the file, as shown in the next example.

In this example, we change to the /tmp/foo directory on the master node, use bpsh to create the same directory on node 6, and then copy foo2.dat to the node:

[user@cluster username]$ cd /tmp/foo
[user@cluster username]$ bpsh 6 mkdir /tmp/foo
[user@cluster username]$ bpcp foo2.dat 6:
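To confirm that the copy succeeded, you can list the new directory on node 6 with bpsh; given the commands above, the output should be:

[user@cluster username]$ bpsh 6 ls /tmp/foo
foo2.dat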

This example copies foo2.dat from node 2 to node 3 directly, without the data being stored on the master node. As in the first example, this works because /tmp exists:

[user@cluster username]$ bpcp 2:/tmp/foo2.dat 3:/tmp
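When a staged data file is no longer needed, the same mechanism can remove it from each node. A minimal sketch, assuming foo2.dat was staged on nodes 2 and 3 as above:

[user@cluster username]$ bpsh 2 rm /tmp/foo2.dat
[user@cluster username]$ bpsh 3 rm /tmp/foo2.dat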