IRMACS Computational Cluster Support

The IRMACS cluster nodes exist as nodes within the larger Colony Cluster managed by the Research Computing Group at SFU. The cluster runs 64-bit Linux. The IRMACS hardware that has been migrated to the Colony Cluster consists of ten blades, each with two quad-core Intel Xeon CPUs and 16 GB of memory (a total of 80 cores and 160 GB of memory).

For those users who used the "old" IRMACS cluster and want to know about its migration to the Colony Cluster, please see the IRMACS Cluster to Colony Cluster migration documentation.

For questions on using computational resources in IRMACS, including the Colony Cluster and the IRMACS Compute Canada resource allocation, please email IRMACS support.

For urgent technical issues on the Colony Cluster (storage problems, queueing system problems, etc.), please email support and CC research-support@sfu. CC'ing research-support@sfu ensures that the entire technical team that runs the Colony Cluster sees your report and that your problem is answered as quickly as possible.

Details on how to use the "new" configuration of the IRMACS cluster nodes are described below.

Login Node

To log in to the Colony Cluster, simply ssh to the new head node using your SFU username.
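As a sketch, a login looks like the following, where both the hostname and the username are placeholders (substitute the Colony Cluster login-node hostname and your own SFU username):

```shell
# <login-node> and jdoe are placeholders, not real values.
ssh jdoe@<login-node>
```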

Username and Password

Your username and password on the Colony Cluster are your SFU username and password. These are the same credentials you use to log in to the IRMACS lab machines and to your SFU email account. For most of you this will be the same as what you used on the IRMACS cluster, but for some of you it will be different. Please ensure you use your SFU credentials on the Colony Cluster.

Your files and home directory

Your IRMACS files from ~USERNAME on the IRMACS cluster can be found in /rcg/irmacs/cluster-personal/USERNAME, where USERNAME is your SFU username. Please do not confuse this space with your personal network folder (i.e., username-irmacs). If you had any project files, they are stored in /rcg/irmacs/cluster-projects/PROJECTNAME. The /rcg/irmacs file system is backed up.
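As a concrete sketch of the layout described above (jdoe is a hypothetical username, PROJECTNAME a placeholder project name):

```shell
# Hypothetical SFU username; substitute your own.
USERNAME=jdoe

# Personal research data migrated from the old cluster:
echo "/rcg/irmacs/cluster-personal/${USERNAME}"

# Project files, if you had any:
echo "/rcg/irmacs/cluster-projects/PROJECTNAME"
```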

This file system is a TB in size and can be made larger as required. Please let us know if you think you will need more disk space.

Note: Your home directory on the Colony Cluster (~USERNAME) has a 3 GB quota and should only be used to store small amounts of data. Please use /rcg/irmacs/cluster-personal/USERNAME for your research data.

Modules
The Colony Cluster uses modules to manage the software that you are using, much like the IRMACS cluster did. General information on using modules on the Colony Cluster is available here.

Although module functionality is almost identical between the IRMACS and Colony clusters, the module naming convention and/or versions may be slightly different. For example, to use Python on the IRMACS cluster you used:

module load LANG/PYTHON/2.7.2

On the Colony Cluster you would use:

module load LANG/PYTHON/2.7.6-SYSTEM

Note that the version and the naming convention are slightly different. To see the modules available on the Colony Cluster, type "module avail". If there is software that you would like installed on the cluster, please let us know.
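The basic module workflow looks like this (the Python module name is the example from this page; these commands only work on the cluster itself):

```shell
# List all modules available on the Colony Cluster:
module avail

# Load a module into your environment:
module load LANG/PYTHON/2.7.6-SYSTEM

# Show the modules you currently have loaded:
module list
```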

Submitting Jobs

Submitting jobs is done almost exactly the same way as on the IRMACS cluster, with the exception of a couple of details as described below.

You should always specify three parameters in your job submission script: the amount of memory the job will use, the number of processors you will use, and the expected wall time of the job. This can be accomplished using the PBS directives

  • -l mem=1gb
  • -l nodes=1:ppn=1
  • -l walltime=1:00:00

to request 1 GB of memory, one processor on one node, and 1 hour of wall time for your job to complete.

To take advantage of the IRMACS allocation on the colony cluster, you should specify the accounting group that you are submitting to. The accounting group in this case is rcg-irmacs. Thus you should use the following PBS directive:

  • -W group_list=rcg-irmacs

A typical batch submission script on the Colony Cluster would therefore look like this:

#PBS -W group_list=rcg-irmacs
#PBS -l nodes=1:ppn=1,pmem=1g,walltime=1:00:00
#PBS -m abe
matlab -nosplash -nodisplay < test.m
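Assuming the script above is saved as matlab-job.pbs (a placeholder filename), it is submitted with qsub, which prints the ID of the new job:

```shell
qsub matlab-job.pbs
```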

More information on job submission on the Colony Cluster is available here.

Monitoring Jobs

Monitoring jobs is done using the qstat and showq commands, as on the IRMACS cluster. The main difference is that qstat reports jobs as having a "C" (complete) status when they are done, whereas on the IRMACS cluster completed jobs were simply removed from the queue.
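Some common monitoring invocations are sketched below (jdoe and 12345 are a placeholder username and job ID; these commands only work on the cluster itself):

```shell
# Show only your own jobs:
qstat -u jdoe

# Show the queue as seen by the scheduler:
showq

# Show detailed information about a single job:
qstat -f 12345
```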

More information on job monitoring on the Colony Cluster is available here.