Raven user guide - quick start

Raven is the new University Supercomputer service, replacing the original Merlin system (NOTE: all user accounts should have been migrated onto the new service - if you have trouble accessing Raven please contact arcca: arcca<at>cardiff.ac.uk). 

The Linux cluster consists of 2,048 Intel Sandy Bridge cores (2.6GHz / 4GB per core / 8 cores per processor) forming the main parallel MPI partition (including an SMP section), with an additional 864 Intel Westmere cores (2.8GHz / 3GB per core / 6 cores per processor) as a serial/high-throughput subsystem. Raven is configured with over 8TB of total memory across the entire cluster, with 50TB of global parallel file storage managed by the Lustre file system and a 100TB NFS /home partition for longer-term data storage. Nodes are connected with InfiniBand QDR technology (40Gbps / 1.2μs latency).

Quick start guide to using Raven

Below is a very quick guide for those experienced users who just want to understand the basics to get started on the cluster. If you do experience any problems using the cluster, please refer to the more detailed User Guide. If this does not resolve the issue, then please inform the team (arcca<at>cardiff.ac.uk) who will respond to your query.

A PowerPoint presentation, including worked examples, is also available.

 

This guide covers the following topics:

  1. Logging on to Raven
  2. Setting environment variables
  3. Compilers & mpi wrappers
  4. Job Scheduler: PBS Pro (simple script & job submission)

1. Logging on to Raven

From a Linux environment:

ssh -X <username>@ravenlogin.arcca.cf.ac.uk

Enter your password when prompted.

The system is configured to use the University's LDAP authentication method - i.e. your standard university login credentials will form your username/password combination. 

From a Windows environment:

You will need to install Xming and PuTTY on your PC in order to access Raven (both are available on Networked Applications under the Departmental Software → ARCCA directory). Double-click on the Xming icon prior to launching PuTTY (Xming only needs to be started once during any session). In the PuTTY window complete the following:

In the Host Name field, type ravenlogin.arcca.cf.ac.uk

In the Port field, enter 22 (this should be the default setting).

Optional settings:

Under the Window category, select the Colours folder and tick the "Use system colours" box (normally unchecked) - this will give a window with black text on a white background (the default is white text on a black background).

Under the SSH category, X11 section, select "Enable X11 forwarding" (this allows graphical windows opened on Raven to be displayed back on your desktop).
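
Once connected, a quick way to confirm that X11 forwarding is working is to start a simple graphical application from the Raven command line (assuming the standard xclock utility is installed on the login node):

xclock &

If a clock window appears on your desktop, forwarding is configured correctly.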

2. Setting environment variables

The default login shell is bash. Your .bashrc file contains the default settings - please do not modify this file. For user-defined variables/commands, please use the .myenv file, which is also present in your home directory.

A modular environment is now available on the cluster - by default this uses the Intel MPI / Intel compiler combination. If you wish to change this to the Bullx MPI environment, you will have to unload the Intel MPI module before loading the bullxmpi module.

module list lists the modules currently loaded in your session.

module avail lists the modules available on Raven.

module unload <module-name> removes a specific module from your current session.

module load <module-name> loads a new module into your session.

Unless these module commands are saved in your .myenv file, any changes will only be "live" during your current login session and will revert to the default settings the next time you log on to the cluster.
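
To make a module selection persistent, the relevant commands can simply be appended to .myenv (a sketch only - substitute the module names you actually need, such as those used in the example below):

module unload <module-name>

module load <module-name>

These commands will then run automatically each time you log in.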

Example: to use the Intel MPI / Intel compiler environment:

1. module load intel/compiler

2. module unload bullxmpi

3. module load intel/mpi
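
Conversely, to switch to the Bullx MPI environment mentioned above (a sketch based on the same module names - confirm the exact names with module avail):

1. module unload intel/mpi

2. module load bullxmpi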

Once additional software (e.g. Portland Group compilers) is available on the cluster, users will be able to use modules to make this software available in their environment.

NOTE: The Intel and GNU compilers are available on the cluster. The Portland Group compilers will be available on the cluster soon.

3. Compilers and MPI wrappers

Raven uses the Intel® C++ and Fortran compilers by default.

Compiler   Program Type   Suffix             Example
icc        C              .c                 icc [compiler_options] prog.c
icc        C++            .C .cc .cpp .cxx   icc [compiler_options] prog.cpp
ifort      F77            .f .for .ftn       ifort [compiler_options] prog.f
ifort      F90            .f90 .fpp          ifort [compiler_options] prog.f90

Example C Program.

Create a simple program, for example:

#include <stdio.h>

int main(void)
{
        printf("Hello, world\n");
        return 0;
}

Save as hello.c and compile:

icc hello.c

Then run: ./a.out

More compiler options and information can be found using either icc -help or man icc.

Example Fortran program (to follow).
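
Until that example is added, a Fortran source file can be compiled in the same way as the C example above (hello.f90 is just an illustrative file name):

ifort hello.f90

Then run: ./a.out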

Compiling with MPI wrappers

The mpiicc, mpiiCC and mpiifort compiler scripts (wrappers) compile MPI code and automatically link the start-up and message-passing libraries to the executable. To determine which libraries are automatically included by the MPI wrapper scripts, use the -show commands given below each table.

Bullx MPI:

Compiler   Program Type   Suffix             Example
mpicc      C              .c                 mpicc [compiler_options] prog.c
mpiCC      C++            .C .cc .cpp .cxx   mpiCC [compiler_options] prog.cpp
mpif77     F77            .f .for .ftn       mpif77 [compiler_options] prog.f
mpif90     F90            .f90 .fpp          mpif90 [compiler_options] prog.f90

mpif90 -show or mpicc -show

Intel MPI:

Compiler   Program Type   Suffix             Example
mpiicc     C              .c                 mpiicc [compiler_options] prog.c
mpiiCC     C++            .C .cc .cpp .cxx   mpiiCC [compiler_options] prog.cpp
mpiifort   F77            .f .for .ftn       mpiifort [compiler_options] prog.f
mpiifort   F90            .f90 .fpp          mpiifort [compiler_options] prog.f90

mpiifort -show or mpiicc -show

Once the Portland Group compilers are available on the system, use -f90=pgf90 or -cc=pgcc with the MPI wrappers to select the Portland compilers rather than the default Intel compilers.

Additional options for these wrappers can be found using the man pages.
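
As a simple illustration, an MPI source file could be compiled with one of the Intel MPI wrappers as follows (hello_mpi.c and the output name are placeholders):

mpiicc -o hello_mpi hello_mpi.c

The Bullx MPI wrappers (mpicc, mpif90, etc.) are used in the same way.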

The MPI compiler wrappers use the same compilers as serial compilation (icc, icpc, ifort), so any of the serial compiler flags can also be used with the MPI wrappers. Common options include:

Compiler Option   Description
-O3               Performs some compilation-time and memory-intensive optimizations in addition to those executed with -O2. NOTE: it may not improve the performance of all codes.
-xT               Flag for the Xeon chipset - includes specialized code for the SSE4 instruction set.
-ipo              Interprocedural optimization.
-g                Debugging information produced during compilation.
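
As an illustration, several of these options might be combined on one compile line (prog.c is a placeholder; always check whether -O3, -xT and -ipo actually improve the performance of your code):

icc -O3 -xT -ipo -o prog prog.c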

 

4. Job Scheduler: PBS Pro

All jobs run on Raven must be submitted via the batch scheduler, PBS Pro.

A working job submission script takes the following form:

#!/bin/bash

#PBS -l select=2:ncpus=16:mpiprocs=16

#PBS -l place=scatter:excl

#PBS -o <output-file.txt>

#PBS -e <error-file.txt>

 mpirun -np 32 <code> <inputfiles>

The line "#PBS -l select=2:ncpus=16:mpiprocs=16" specifies the resources required for the MPI job: "select" specifies the number of nodes required; "ncpus" indicates the number of CPUs per node required; and "mpiprocs" gives the number of MPI processes to run per node (normally mpiprocs=ncpus).

As this is not the most intuitive syntax, the following table gives some guidance on how it works:

select   ncpus   mpiprocs   Description
2        16      16         32-processor job, using 2 nodes and 16 processors per node
4        8       8          32-processor job, using 4 nodes and 8 processors per node
16       1       1          16-processor job, using 16 nodes, running 1 MPI process per processor and utilising 1 processor per node
8        16      16         128-processor job, using 8 nodes and 16 processors per node (each running an MPI process)
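
For example, the last row of the table corresponds to the following resource request and launch line (the executable and input files are placeholders, as in the script above):

#PBS -l select=8:ncpus=16:mpiprocs=16

mpirun -np 128 <code> <inputfiles>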

 

Job Submission

"qstat" command displays the status of the PBS scheduler and queues. Using the flags "-Qa" shows the queue partitions available. By default, if no queue is defined, it will use the workq.

qstat -Qa

Queue              Max   Tot Ena Str   Que   Run   Hld   Wat   Trn   Ext Type
---------------- ----- ----- --- --- ----- ----- ----- ----- ----- ----- ----
workq                0     3 yes yes     0     3     0     0     0     0 Exec
R480                 0     0 yes yes     0     0     0     0     0     0 Exec
queue_512            0     0 yes yes     0     0     0     0     0     0 Exec
queue_1024           0     0 yes yes     0     0     0     0     0     0 Exec

 

To select a different queue use:

#PBS -q queue_512

Once your job script is prepared, it can be submitted to PBS using the command "qsub":

qsub <job-script-name.sh>

To check on the progress of your job, use the "qstat" command.

If for some reason the job is not running as expected and needs to be cancelled, use the qdel command together with the job number.
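
For example, where <job-id> is the identifier reported by qsub when the job was submitted:

qstat <job-id>

qdel <job-id>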

 

For further information on the system, please refer to the User Guide.