
Frequently Asked Questions

Our Frequently Asked Questions (FAQs) are guided by the questions asked of us, so this page is under continuous development.

Accessing ARCCA

Who can use the ARCCA systems?

Any Cardiff University student or member of staff with a valid research-related computational task and a University ID may use the systems. Access for external collaborators can be arranged via sponsorship by a member of Cardiff University staff.

How do I apply for access to use the ARCCA systems?

In order to obtain an account on Raven (the ARCCA HPC cluster), users must complete an account request application form (Word or PDF versions are available to download).

Please include a brief summary of the research being undertaken and the applications being used. For researchers who require a large amount of CPU or storage resources (in excess of 50,000 CPU hours or 500GB), we would appreciate this being indicated on the application form. If you are joining an existing project, authorisation from the Principal Investigator (PI) will be required to add your account to the project membership. Alternatively, if this is a new project, a project request form will also need to be completed, providing a brief summary of the type of research to be undertaken on the supercomputer. This form is also available to download.

Occasionally we may request additional information on the project to ensure we can support the computational requirements effectively on the Raven service. These forms can be returned electronically or via the internal mail system.

We welcome requests from PGRs and undergraduates, although these are subject to accompanying authorisation from your project supervisor / principal investigator.

Who provides support for ARCCA Systems?

Support for the ARCCA facilities is provided by a team located in the Redwood building on the Cathays campus. Contact details for individual ARCCA team members are given on the 'Who's Who' web page.

Please contact the ARCCA support team at ARCCA-help@cardiff.ac.uk if you have any problems, such as needing a package or application installed, or any other technical issues.

How do I log in to Raven (the supercomputer)?

Access to Raven is via secure shell (SSH, a remote login program), through a login node which is available from the Internet. From a Unix system, use:

ssh <username>@ravenlogin.arcca.cf.ac.uk

where <username> should be replaced by your Cardiff University ID. You will be required to set a password on your first login.

Note that the login node is the part of the cluster reserved for interactive use such as compilation, job submission and control. If the login nodes are busy, please be courteous to other users and try to undertake extensive compilation of code outside of standard hours.

If you are using a Windows computer, you will need an SSH client such as PuTTY to access Raven. Installation and use of PuTTY is described in the online quick start user guide.

Can I access the Raven Supercomputer from my personal device?

Raven can be accessed from anywhere with network connectivity.

Various SSH clients are freely available and can be installed on tablets and smartphones. However, these are installed at the user's own discretion and ARCCA have no liability for any software installed on personal devices. A range of clients is available for Android (e.g. JuiceSSH, ConnectBot) and iPhones/iPads (e.g. iTerminal, Serverauditor), but it is recommended that users review the installation requirements prior to downloading. Further advice on personal device connectivity is provided by Portfolio Management and IT Services.

I can no longer log in to Raven

There are a number of possible reasons why you might be unable to log in to Raven:

  • Raven could be offline for a maintenance session – you will normally receive a message saying the service is offline and giving an estimated restoration time. The outage will also have been communicated via the ARCCA mailing list and the message of the day (MOTD). ARCCA is in the process of being integrated into the service status website, so please check https://status.cardiff.ac.uk.
  • Incorrect password – to avoid malicious dictionary-style automated attacks, ARCCA have implemented a 'five attempt' limit on passwords. If you have typed the password incorrectly more than five times (noting it is CASE SENSITIVE), your account will be temporarily suspended for 24 hours. If you need to clear this suspension urgently, please contact ARCCA-help@cardiff.ac.uk requesting that the failed password attempts be cleared. Please note: ARCCA staff cannot reset your password (see the FAQs below regarding password resets).
  • Account expired – if your account has expired then you will require your University sponsor to request reactivation of your UID to University IT. Once this is active, you should be able to log on to Raven with your existing user name and password combination.
  • No Account / Project – prior to being able to access the Raven supercomputer, you need to apply for a user account and project (either added to the membership of an existing project, or the creation of a new project on the system). Please see FAQs for advice on how to apply for these.
  • Problem on the service – occasionally a component might fail, causing a temporary service outage. In these instances, if the outage is longer than 30 minutes we try to inform the user community and provide an estimate of the resolution time (this will also appear on the service status page). It can be useful to contact ARCCA in these instances if you have not seen a community announcement, as whilst we proactively monitor for alerts, this is on a best endeavours basis outside of standard working hours.

How do I configure my PuTTY client to not time out due to inactivity?

The SSH client PuTTY can be configured to maintain a connection and not time out due to inactivity. To set up a new connection with keepalives enabled, follow the steps below:

  1. Open the PuTTY application to display the configuration window.
  2. Select Connection from the left-hand Category menu.
  3. Under 'Sending of null packets to keep session active', change the default value from 0 to 1800 (30 minutes).
  4. Check the Enable TCP keepalives (SO_KEEPALIVE option) check box. Note: this option may not be available in older versions of the PuTTY client.
  5. Select Session from the left-hand menu.
  6. In the Host Name (or IP Address) field enter the destination hostname, for example ravenlogin.arcca.cf.ac.uk.
  7. In the Saved Sessions box enter a name for the session, for example keepalive.
  8. Select Save.

Any new sessions will now use these modified connection options. Note that to avoid excess logins, Raven will automatically disconnect sessions that have been inactive for more than two hours.
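If you connect from a Linux or Mac machine using the standard OpenSSH client, a similar keepalive can be requested on the command line; a minimal sketch (the 300-second interval is just an illustrative value):

$ ssh -o ServerAliveInterval=300 <username>@ravenlogin.arcca.cf.ac.uk

The same option can also be placed in your ~/.ssh/config file so that it applies to every connection.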

How do I change my password?

It is not possible to change your password on Raven as we use the University's central authentication process (LDAP). If you do wish to change your password then you will need to follow the University's guidance. Please note this change will affect all your University authenticated systems, not just Raven.

I've forgotten my password. What should I do?

It is not possible for ARCCA to reset passwords as Raven uses the University's central authentication process (LDAP). To reset your password, please follow the University's password guidance or contact the IT Service Desk.

How do I add new users to my project?

For users with existing accounts on Raven, it is simply a matter of emailing the helpdesk (ARCCA-help@cardiff.ac.uk) requesting that specific users (including user name and Project ID) be added as members of the project. There are currently no limits on the number of projects a user can be associated with, or on how many members a specific project may have.

Users not currently registered on Raven will first need to submit an account request form to ARCCA (see the earlier FAQ) and identify which projects they need access to (this will need to be authorised by the project PI).

How do I transfer files to and from Raven?

From a Unix system, the 'scp' command can be used to transfer files to and from a Raven login node. For example:

scp <file> <username>@arccalogin.cf.ac.uk: (Note the colon terminating the remote system name.)
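To copy a file back from Raven to the current directory on your local machine, the arguments are simply reversed; a sketch (the remote file path is illustrative):

scp <username>@arccalogin.cf.ac.uk:<path-to-file> .

Note the full stop at the end, which means 'the current directory'.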

For Windows users, a file transfer utility such as FileZilla or WinSCP will need to be installed.

Are there quotas on CPU and disk usage?

Strict quotas are not applied to CPU usage, although users are expected to comply with the fair usage policy, which requires that no user has more than 20 jobs in the scheduler at any one time. There is a 20 GB quota on /home storage; users requiring an increased allocation should contact ARCCA-help@cardiff.ac.uk. Regular clean-up of /scratch storage will be introduced shortly, following an email communication to the user community.

Please see the best practice policy for full conditions of use.

I need to finish my project urgently. Can I get priority access to the computers?

We are always sympathetic to the requirement for priority access, but need to balance such requests against the requirements of the user community as a whole. Please contact ARCCA-help@cardiff.ac.uk specifying your requirement and we'll respond without delay.



Information sources

I'm new to HPC. How do I get started?

ARCCA can provide advice and guidance on how your work may benefit from our services, along with examples of how other researchers across the University and external collaborators are making use of Raven, our HPC system. Please contact ARCCA-help@cardiff.ac.uk to discuss your requirements. ARCCA also runs a range of training sessions, covering both 'what is supercomputing?' and introductions to using the Raven supercomputer. Please see our training page for more details.

How do I use Linux?

Once you have logged on to Raven you will be at what is known as the Linux Shell, or command line interface. ARCCA run a series of introductory training courses, including a tailored introduction to Linux – please see our training page for more details.

There are a number of tutorials available depending on your Linux distribution; Raven uses Red Hat Enterprise Linux, so guides for that or for CentOS Linux (which is based upon Red Hat) will be most relevant. For beginners, the Linux Foundation has an introduction to the operating system along with links to related tutorials.
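As a flavour of what working at the shell looks like, a few everyday commands are shown below (the directory and file names are purely illustrative):

$ pwd                 # print the current working directory
$ ls -l               # list the files in it, with details
$ mkdir results       # create a new directory called 'results'
$ cd results          # move into that directory
$ cp ~/input.dat .    # copy a file from your home directory to here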

Contact the ARCCA team for more information and guidance on using Linux.

Where can I find information on using the ARCCA systems?

The main source of information on using the ARCCA systems is the online quick start user guide. Note also the 'arcca-help' facility on the Raven login nodes.

How can I report a problem or raise a query about using the system?

ARCCA offers a comprehensive and highly available support network. You can obtain help by phoning the IT Service Desk or by emailing ARCCA-help@cardiff.ac.uk. Emails are automatically redirected to the IT Service Desk and a call is logged on the VSM system. You will then receive an email with a call number, and the call is forwarded to the ARCCA team for resolution.

Does ARCCA run courses on using HPC?

We offer a wide variety of training courses targeted at multiple levels. Please see our training page for details.

What is the difference between OpenMP and MPI? How can I find out more?

OpenMP (Open Multi-Processing) is an API that allows software to take advantage of multiple processing units in shared memory systems, such as a desktop computer with a multi-core CPU or a single node on Raven.

MPI (Message Passing Interface) is a communications standard that allows data to be passed between processes, including processes running on different nodes connected by a network. This allows a model to be split into discrete components, each communicating with the others periodically to share data.

As far as Raven is concerned, OpenMP-enabled software can only use a single node (albeit multiple cores), whereas MPI allows you to use many nodes at the cost of added complexity and inter-process communication overheads. There are many useful guides available to learn more.
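In practice the difference shows up in the resources requested from the scheduler. The sketch below uses the same directive form as elsewhere in this guide to request a 16-core OpenMP job confined to one node and a 32-core MPI job spread over two nodes (the core counts are illustrative):

# OpenMP (shared memory): one chunk of 16 cores on a single node
#PBS -l select=1:ncpus=16:ompthreads=16

# MPI (message passing): two chunks of 16 cores, one MPI process per core
#PBS -l select=2:ncpus=16:mpiprocs=16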

Are there application guides available for commonly used applications?

Application guides are in the process of being produced for a number of the most commonly used applications on Raven. These are being uploaded onto the Staff Intranet – URL to follow. In the meantime, please contact ARCCA for more information on the guides.


Hardware and software resources

What computer systems are available?

ARCCA has a variety of available research computing resources, including a large computing cluster – 'Raven', virtual machine hosting capabilities and a flexible data storage facility. Read an online overview of the computer cluster, Raven.

Raven is a multi-processor system with a range of different processors and partitions. More details are provided below:

  • A dedicated MPI Compute Node Partition with 112 dual-socket nodes containing 2048 Intel Xeon (Sandy Bridge / E5-2670) 2.60GHz cores and 4GB memory per core.
  • 16 x 128 GB SMP nodes, each with 2 x Intel Sandy Bridge E5-2670 (2.60GHz) processors and 8GB memory per core.
  • A dedicated HTC Compute Node Partition with 72 dual-socket nodes, containing 864 Intel Xeon (Westmere / X5660) 2.80GHz cores and 4GB memory per core.
  • A second HTC Compute Node Partition with 60 dual-socket nodes, containing 1440 Intel Xeon (Haswell / E5-2680) 2.50GHz cores and 128GB memory per node.

What is the difference between Westmere and Sandy Bridge nodes?

The Raven supercomputer comprises two distinct partitions: the 'HTC' partition, designed for serial, single-processor jobs, and the 'MPI' (or HPC) partition, designed for closely-coupled parallel jobs. The HTC partition uses Intel Xeon processors with the architecture code-named 'Westmere', whereas the MPI partition is built from more recent 'Sandy Bridge' processors. For the user, the main difference is the number of cores per dual-socket node: the Westmere nodes have 12 and the Sandy Bridge nodes have 16. This means, for example, that job scripts for parallel (MPI) jobs run on the Westmere (HTC) nodes should specify:

         #PBS -l select=2:ncpus=12:mpiprocs=12  

and in job scripts run on Sandy Bridge:

         #PBS -l select=2:ncpus=16:mpiprocs=16

You may also find that some applications are only available on the Westmere systems, and others are only available on Sandy Bridge.

Sandy Bridge offers a number of performance benefits over Westmere, including availability of AVX instructions, larger cache and improved memory bandwidth, which translates into improved application performance. For a comparison of Westmere and Sandy Bridge for a number of open-source applications, please see the User Guide for more details.

Information on the new Intel Haswell Partition to follow.

How do I build a program using MKL?

Intel's Math Kernel Library (MKL) is a library of highly optimized, extensively threaded mathematical routines for applications that require high performance. If you are using the Intel compilers to build your application, then in most cases it will be sufficient to use:

module load compiler/intel

and include the option -mkl in your compilation and link commands. (Other forms which may be useful are -mkl=sequential to use the non-threaded MKL and -mkl=cluster to use libraries such as ScaLAPACK built for Intel MPI.)

If your requirements do not fit with the above (e.g. you are using 64-bit integers or you are using GNU compilers or Open MPI), we recommend that you visit https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/ which provides a tool to generate the required linker input.
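As a concrete illustration, a serial Fortran program that calls MKL routines could be built as follows (the source and program names are illustrative):

$ module load compiler/intel
$ ifort -mkl myprog.f90 -o myprog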

Can I import my own binaries?

We strongly discourage users from importing their own binaries as these will not be optimised for our systems. If packages are not available on the HPC facilities, users should request, where possible, that ARCCA install the packages from source on our systems (under the modules environment). If there is no option but to import a binary, it must be compiled for the 64-bit x86-64 architecture.

What file systems are available?

There are two main file systems accessible to users on Raven:

  • /home contains your home directory. At login, the system automatically sets the current working directory to your home directory. Store your source code and build your executables here. This file system is backed up, and the front-end nodes and any compute node can access this directory. Use $HOME to reference your home directory in scripts.
  • /scratch is the directory in which to store large, temporary files. /scratch is a Lustre file system designed for parallel and high performance data access from within applications. It has been configured to work well with MPI-IO, accessing data from many compute nodes. If your jobs have significant input/output requirements, change to this directory in your batch scripts and run jobs in this file system. Please note this file system is not backed up.

Users must not use /tmp on each compute node for even the temporary storage of large data files, as the filling up of this filesystem may cause severe problems on the service.
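A common pattern is to create a run directory under /scratch and work there from within the job script; a minimal sketch (the /scratch path, executable name and mpirun launcher are illustrative, so please check the quick start user guide for the exact layout on Raven):

#PBS -q workq
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -l walltime=01:00:00

# run from a directory on the parallel /scratch file system (path is illustrative)
cd /scratch/$USER/my_run
mpirun ./my_program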

Where can I store my research data when it is not being used on the ARCCA systems and is it secure?

At the moment ARCCA do not have any remote storage capability.

There is a pilot programme, 'Pilotomics', in which a number of research groups are investigating the provision of additional storage capabilities for Raven – please contact us for more details regarding this solution. In parallel, the University is introducing the Research Data Information Management (RDIM) service, which should be piloted in Summer 2015 – please contact Portfolio Management and IT Services for more details regarding this and the options for becoming an early adopter of the service.

How much does it cost to store my research data?

The RDIM project is currently reviewing the costs associated with longer term data storage. At the moment, it is envisaged there will be a default allocation and additional requirements will be chargeable, but the funding models are still being finalised. Please speak to the Portfolio Management and IT Services RDIM Programme Manager for more details.

How can I check my disk usage?

From your home directory, type "quota -s <username>" to determine your usage of /home.
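For the /scratch file system, which is Lustre, usage can typically be checked with the Lustre quota tool; a sketch, assuming the standard Lustre client commands are available on Raven:

$ lfs quota -u <username> /scratch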

Alternatively you can see your usage of both file systems online. Please note this URL is only available on campus unless you are connected via VPN.

Will my files be backed up?

Data in user home directories on the ARCCA systems is backed up to tape every night. Data on scratch is for transient working data only and is not backed up (please note: scratch is subject to a 60 day purge policy).

Is there local storage available on each node?

Users must not use /tmp on each compute node for even the temporary storage of large data files, as the filling up of this file system may cause severe problems on the service.

Use should be made of /scratch which is designed for parallel and high performance data access from within applications. /scratch has been configured to work well with MPI-IO, accessing data from many compute nodes.

My application requires a large amount of memory. Can it be run on ARCCA?

Yes, to an extent. For large-memory jobs, there are a number of options. On the Raven system, submitting to the SMP_queue will allow use of up to 16 hosts each with 16 cores and 128 GB of memory i.e. 8GByte/core.

As an alternative, on hosts with 64 GB of memory one could restrict the number of running processes so that there is more memory available per process. (This is sometimes called ‘under-committing’ or ‘under-populating’ nodes.) For example, to run 4 processes on a 64 GB node, so that each has up to 16 GB of physical memory available to it, use:

#PBS -l select=1:ncpus=4:mpiprocs=4

#PBS -l place=scatter:excl

#PBS -q workq

Note that this should only be done if there is a demonstrable need for more than 4 GB per process and the alternative SMP_queue is not suitable, as under-committing nodes could be viewed as inefficient use of the system.

Is there a list of available applications software?

The Raven system has a large number of software packages installed and supported by the ARCCA technical staff. A list of the installed modules can be obtained with the 'module avail' command (see the following FAQ).

Why can't I access some programs?

Running the command:
module avail

will return a list of all the modules available to you on Raven. Please note that certain programs are licensed or available on a restricted basis. If you would like to use a program listed there but cannot gain access, please contact ARCCA-help@cardiff.ac.uk.

If the program you wish to use is not listed as a module or on the applications page then again, please contact ARCCA-help@cardiff.ac.uk.

We are always open to software requests – please see the FAQ regarding software requests for more information.

Is MATLAB available?

MATLAB is being procured via a University site-wide licence. Discussions regarding the licensing are being finalised with Portfolio Management and IT Services, and MATLAB will be available on the cluster imminently.

Can I request that a new application be installed?

We are always open to software requests, although it may not always be possible to install your requested software: it may not be suitable for HPC use, or it may be cost-prohibitive for commercially licensed applications. Please contact ARCCA-help@cardiff.ac.uk. Wherever possible we will endeavour to work with customers to suggest suitable applications or alternatives.



Interactive use

What are the login nodes used for?

The login nodes are intended for interactive use: SSH sessions, file editing, compiling, debugging and managing jobs in the job scheduler. They are reserved for activities which do not consume large amounts of processing or memory resource. Any computationally intensive work must be run on a compute node.

What are Environment Modules?

ARCCA continually updates application packages, compilers, communications libraries, tools, and math libraries. To facilitate this task and to provide a uniform mechanism for accessing different revisions of software, ARCCA uses the Environment Modules utility.

Environment Modules are predefined environmental settings which can be applied and removed dynamically. They are usually used to manage different versions of applications, by modifying shell variables such as PATH and MANPATH.

Modules are controlled through the 'module' command. Common tasks are shown here:

  • module avail - show available modules
  • module load <modulename> - load a module
  • module unload <modulename> - unload a module
  • module swap <modulename> - swap a loaded module to a different version
  • module purge - unload all modules
  • module show <modulename> - show the settings that a module will implement
  • module help - display information about the module command
  • module help <modulename> - display information about module <modulename>
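For example, a typical session might look like the following ('module list', which shows the currently loaded modules, is a standard part of the Environment Modules utility):

$ module avail                 # list everything that is installed
$ module load compiler/intel   # load the Intel compiler environment
$ module list                  # show which modules are currently loaded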

Please see the online quick start user guide for more detailed information.

How do I compile and link my code?

Compilation and linking are very application-dependent. However, the common first step is to load the desired compiler and MPI environment. Please see the FAQ describing which compilers are available on Raven.

For example, for Intel compilers and Intel MPI, use:

module load compiler/intel

module load mpi/intel

This will make the compilers available in your path and will set environment variables so that compiler-dependent and MPI libraries are linked correctly into your application. (Please remember to load the same modules when it comes time to run the application via a batch job script.) Please note that the Intel compilers generally produce more efficient code, but some open-source software is not compatible with the Intel compilers, in which case the GNU open-source compilers should be used (module load compiler/gnu).

Other libraries that your code uses may also be available via modules. Use 'module avail' to check, and 'module load <library>' to load the library before compiling and linking. A detailed guide to compiling and linking codes is available in the online quick start user guide.
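Putting this together, an MPI Fortran code could be compiled as follows (the source file name is illustrative; mpiifort is the Intel MPI wrapper for the Intel Fortran compiler, as used later in this guide):

$ module load compiler/intel
$ module load mpi/intel
$ mpiifort hello.f90 -o hello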

Can I use X-Windows applications on ARCCA?

X-Windows applications can be run on the Raven login nodes. To enable your local machine to act as the X display, log in to Raven by using:

ssh -Y <username>@ravenlogin.arcca.cf.ac.uk

You will of course need to have an X-Windows environment running on your local machine in order to display the output from the remote application.


Running jobs

What is the Job Scheduler on Raven?

PBS Pro, developed by Altair, is the job scheduler used on Raven. Advice on using the scheduler and setting up job scripts is provided in the online quick start user guide.

How do I run a job on Raven?

A job is submitted with the qsub command. The job can be submitted either directly from the command line or using a job script that contains directives which qsub can understand. For example, a job script (myjob.pbs) might contain:

#PBS -q serial

#PBS -P PR39

#PBS -l select=1:ncpus=1

#PBS -l place=free:shared

The job (or batch job) would then be submitted with:

$ qsub myjob.pbs

Depending on the available resources on Raven, the job will either run immediately or be held in a queue until the required resources (such as a number of CPUs) become available.
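A complete job script combines directives like those above with the commands to run; a minimal sketch (the project code, walltime and program name are illustrative and should be replaced with your own):

#PBS -q serial
#PBS -P <project_code>
#PBS -l select=1:ncpus=1
#PBS -l walltime=01:00:00

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR
./my_program > my_program.log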

Where do I find the output from my job?

When a job runs, it is important to know where on the system it will run; we recommend jobs are run within the /scratch filesystem. PBS Pro also writes the output which would normally go to the screen (the standard output and error) to files. This is defined within the job directives; for example, to save the standard output and error into files output.txt and error.txt in the directory from which you submitted the job, the directives would be:

#PBS -o output.txt

#PBS -e error.txt

Unless explicitly directed elsewhere, these output files will be written to the directory from which the job was submitted (i.e. the directory where the qsub command was issued).

Is there a way to get status information on the nodes in the cluster?

The command 'qstate' can be used to find the status of the nodes within the queues on Raven.

My job terminates without any message or warnings.

If you know the PBS job id number you can find out information using:

$ qstat -x -f <jobid>

The -x option tells qstat to include jobs which have completed, and the -f option provides full information on the job. The comment section can sometimes provide useful information. If it is not clear why the job failed, please contact ARCCA.

My program, which used to work, stopped working.

ARCCA tries to reduce the impact of maintenance work on Raven, but occasionally the operating system or software modules can change, which may affect some software in unexpected ways. If you compiled the code yourself, please try recompiling the program so that it links against the new locations of libraries. If the software is provided by a module, please report the error to ARCCA.

Can I get email notification when a job finishes?

A PBS Pro directive provides an option to email the user when the job reaches a certain status such as completing. The PBS Pro directives are:

#PBS -m abe -M <user@cardiff.ac.uk>

The email address is set with the -M flag, and the argument 'abe' is a series of letters signifying which events trigger an email: 'a' when the job is aborted, 'b' when the job starts, and 'e' when the job finishes. Note: replace <user@cardiff.ac.uk> with your email address!

My code uses both message passing and thread parallelism. How do I run it?

Hybrid programming is possible by specifying the total number of CPUs in the PBS Pro directives (MPI tasks multiplied by OpenMP threads). For example:

#PBS -l select=1:ncpus=16:mpiprocs=8:ompthreads=2

This will reserve one chunk of resource (select=1) containing 16 CPUs, running 8 MPI tasks with 2 OpenMP threads per task. Another example:

#PBS -l select=2:ncpus=16:mpiprocs=4:ompthreads=4

This will reserve 2 chunks of resource, where each chunk has 16 CPUs split across 4 MPI tasks, each with 4 OpenMP threads.

It can also be important to compile with the thread-safe MPI compiler.  To do this automatically with the Intel MPI libraries use:

$ mpiifort -openmp hello.f90 -o hello

where mpiifort will automatically link the thread-safe library due to the -openmp option.

What limits are imposed on jobs?

By default there are a number of limits in place to reduce the impact that one user can have on the system. These are:

  1. There is a maximum number of 20 jobs which can be queued (per queue).
  2. The maximum number of CPUs that can be used across all jobs in the workq is 256.
  3. The workq has a maximum time limit of 72 hours and the maximum time limit in the serial queue is 120 hours.

These limits can be modified if your job requirements exceed these parameters; please contact ARCCA for more detail.

My job takes longer than the maximum walltime. How can I run it?

Please contact ARCCA; we can provide access to special queues where your job can run (if the requirement is not too high). ARCCA recommends keeping requested walltime limits as short as possible, since smaller jobs can be scheduled more efficiently by the job scheduler.

Why isn't my job just running?

PBS Pro attempts to use Raven as efficiently as possible given the attributes of jobs and the resources available. If the job is queueing, it may simply be waiting for resources to become free (if possible, reduce the requested walltime limit to allow the job scheduler to fit your job into small gaps in the queueing system). Please see the FAQ regarding finding out more details about your job.
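To see where your job sits in the queue, and any comment the scheduler has attached to it, the qstat command used elsewhere in this guide can help; for example (the job id is illustrative):

$ qstat -u <username>     # list your queued and running jobs
$ qstat -f <jobid>        # full details of one job, including any scheduler comment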

My job seems to be running slowly. What should I do?

First make sure your job is running on the high performance /scratch filesystem. The /home filesystem is provided via NFS, which is not suitable for job input/output. Occasionally, poor performance can be caused by issues on the compute nodes. Please contact ARCCA if you suspect there may be a problem on the cluster.

If you have compiled the code yourself, profile it to see where the bottlenecks are. This can be done with the -pg option to the compiler. When the software is run, a gmon.out file is created which can be viewed using gprof. If you require assistance with profiling your code, please contact ARCCA for advice.
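A minimal profiling run might look like the following (the compiler, source file and program names are illustrative):

$ gcc -pg myprog.c -o myprog   # compile and link with profiling instrumentation
$ ./myprog                     # running the program produces gmon.out
$ gprof ./myprog gmon.out      # display the profile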

How do I kill a running job?

The PBS Pro batch manager used on Raven has the command

qdel <job_identifier>

that allows users to delete their current jobs, where job_identifier is the sequence number assigned to the job when it was submitted with the qsub command.

I can't kill a job. What should I do?

It is very unusual for a job not to be killable. First, make sure that the job you are trying to kill is your own and that you are the owner of the job. If there is no obvious reason why the job cannot be killed, please contact ARCCA.

I have a technical problem when running jobs. Who should I contact?

ARCCA can provide advice to help solve technical problems. We may also raise the issue with Raven's supplier to support the resolution of incidents. Please do not hesitate to contact ARCCA should you require any assistance with any aspect of the service.


General

I've accidentally deleted a file. Can it be restored?

Any file that was stored on the /home file system within the last three months can be restored by our admin team. If you require recovery of a file or directory, please email ARCCA-help@cardiff.ac.uk including the name and path of the files / directories you would like recovered.

Please note that files stored on /scratch are not backed up and cannot be recovered.

Is there a parallel debugger available on ARCCA?

There are a number of parallel debuggers available on Raven, including the Intel® Debugger (IDBC/IDB) and Allinea’s DDT (Distributed Debugging Tool).

The Intel® Debugger for Linux is available as part of the Intel compiler module, which can be accessed as follows:

module load compiler/intel

Both command line (idbc) and graphical (idb) versions are available.

Further information and documentation can be obtained via the Intel web site.

Additionally, it is possible to attach the GNU serial debugger (gdb) to running MPI processes.
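Attaching gdb is done by process id and must be carried out on the node where the process is running; a sketch (the process id is illustrative):

$ ps -u <username>   # find the process id (PID) of the running process
$ gdb -p <PID>       # attach gdb to that process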

What software is available for profiling applications?

Two standard Linux profilers are available: gprof and valgrind.

A range of profiler tools are available on Raven, please run 'module avail' on the cluster for more information.

How do I acknowledge ARCCA in journal publications and conference presentations?

The standard acknowledgement is:

This work was performed using the computational facilities of the Advanced Research Computing@Cardiff (ARCCA) Division, Cardiff University.

This acknowledgement can be adapted, particularly if a specific member of ARCCA staff has supported you in enabling this work, to include them by name. Please acknowledge ARCCA in all published articles that used ARCCA resources - this includes publications in journals, conference proceedings, presentations and posters.
