

Introduction to Raven

PR Photograph of the ARCCA High Performance Computer supplied by Bull UK.

The Cardiff University Supercomputer, Raven 

See also:

Raven user guide (Quick Start)

Account Request



Overview of Raven

High Level schematic diagram of the Raven InfiniBand network


The schematic above summarises the Raven system, which comprises:

  • Dedicated MPI Compute Node Partition with 128 dual-socket nodes (64 Bullx B510 twin blades), containing 2048 Intel Xeon E5-2670 (Sandy Bridge) 2.60GHz cores with 4GB of memory per core
  • 8 SMP Bullx B510 twin-blade nodes, each with 2 Intel Xeon E5-2670 (Sandy Bridge, 2.60GHz) processors, 8GB per core (128GB DDR3-1600MHz as 8 x 16GB DIMMs) and one 128GB SATA SSD
  • Dedicated HTC Compute Node Partition with 72 dual-socket nodes, containing 864 Intel Xeon (Westmere / X5660) 2.80GHz cores and 4GB memory per core.
  • Redundant Administration nodes
  • 2 Front-end login nodes for users to access the cluster
  • InfiniBand (ConnectX-2) 4x QDR PCIe Gen2 network infrastructure across the entire system (40Gbps HS/LL QDR, 1.2μs latency)
  • "Fabric A" is a QDR Infiniband for job traffic
  • "Fabric B" is a 1Gbps network dedicated to cluster management traffic
  • 50TB usable Lustre Cluster File System with:
    • 2 x Lustre MetaData Servers (MDS)
    • 2 x Lustre Object Storage Servers (OSS) nodes
  • 100TB usable resilient disk storage (SATA)
  • 320TB Tape Storage
  • Redundant NFS Storage Access Nodes
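
The headline figures in the list above are internally consistent; a quick sketch (plain shell arithmetic, no cluster access required) confirms them:

```shell
# Cross-check the core and memory totals quoted above.
mpi_nodes=128; mpi_cores_per_node=16   # 2 sockets x 8-core E5-2670
htc_nodes=72;  htc_cores_per_node=12   # 2 sockets x 6-core X5660
mpi_cores=$(( mpi_nodes * mpi_cores_per_node ))
htc_cores=$(( htc_nodes * htc_cores_per_node ))
mpi_node_ram_gb=$(( mpi_cores_per_node * 4 ))   # 4GB per core
echo "MPI cores: $mpi_cores"                    # 2048
echo "HTC cores: $htc_cores"                    # 864
echo "RAM per MPI node: ${mpi_node_ram_gb}GB"   # 64GB
```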



Compute nodes

Core MPI Partition

The 128 MPI compute nodes are housed in Bullx B510 twin blades, each blade accommodating two servers. Each node has two sockets, each containing an Intel Xeon E5-2670 (Sandy Bridge) processor (8 cores/socket, 2.60GHz, 8.00GT/s, Turbo+, 115W), giving 16 cores per node, with 4GB (DDR3-1600MHz ECC) RAM per core, a 128GB SATA2 flash SSD and a single-port ConnectX-2 4x QDR PCIe Gen2-x8 InfiniBand interface.

Serial / High Throughput Partition

72 dual-socket compute blades (Bullx B500), containing 864 Intel Xeon X5660 (Westmere) 2.80GHz cores (12MB cache, 6.4GT/s) with 4GB per core (48GB of 1333MHz memory per node), a 128GB SATA SSD and a single InfiniBand 4x QDR PCIe Gen2-x8 interface embedded on the motherboard.

8 SMP compute nodes

The 8 SMP compute nodes (a 64 core subset of the MPI nodes) are Bullx B510 blade servers. These have the same specification as the MPI nodes but with 128GB of memory per server (8GB per core).
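
The 128GB figure follows directly from both the per-core allocation and the DIMM population described earlier; a one-line check:

```shell
# SMP node memory: 16 cores at 8GB/core, populated as 8 x 16GB DIMMs.
cores=16; gb_per_core=8
dimms=8;  gb_per_dimm=16
echo $(( cores * gb_per_core )) $(( dimms * gb_per_dimm ))   # 128 128
```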


Node configurations:

  • 128 x Standard MPI Compute Node (Bullx B510): 2 x Xeon E5-2670 2.6GHz, 64GB DDR3-1600 RAM, 128GB SATA2 flash SSD; adapter: ConnectX-2 4x QDR PCIe Gen2-x8
  • 72 x Standard HTC Compute Node (Bullx B500): 2 x Xeon X5660 2.80GHz, 48GB DDR3 RAM, 128GB SATA SSD; adapter: ConnectX-2 4x QDR PCIe Gen2-x8
  • 8 x SMP Compute Node (Bullx B510, a subset of the MPI compute nodes): 2 x Xeon E5-2670 2.6GHz, 128GB DDR3-1600 RAM, 128GB SATA2 flash SSD; adapter: ConnectX-2 4x QDR PCIe Gen2-x8
  • 2 x Login Node (Bullx R423-E3): 2 x Xeon E5-2650 2.0GHz (8.00GT/s, 20MB cache, HT, Turbo+, 95W), 32GB DDR3-1600 ECC RAM, 2 x 1000GB 7.2k RPM SATA2 HDD; adapter: ConnectX-2 4x QDR PCIe Gen2-x8 HCA





Interconnect

The high-speed, low-latency (HS/LL) high-performance interconnect is provided by a 4x QDR InfiniBand network using Mellanox MIS5030Q InfiniScale IV 36-port QSFP 40Gb/s non-blocking switches. In addition, the management network is provided by a Gigabit fabric using Cisco Catalyst 48-port switches with 4 SFP ports.

Schematic diagram of the Raven Supercomputer system


The schematic above describes the full non-blocking topology, which enables collision-free switching of both MPI and I/O traffic. A ConnectX-2 single-port 4x QDR PCIe Gen2-x8 HCA (host channel adapter) is provided in each compute node.

Key features of the network:


  • Links: 4x QDR InfiniBand (40Gbps)
  • Number of ports: 36 per switch
  • Switching performance: 3.2GB/s per port
  • Switching capacity: 2.88TB/s
  • Latency: 100ns port-to-port
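
The 40Gbps figure is the QDR signalling rate; with 8b/10b encoding the usable data rate is lower. A small sketch of the conversion (the 3.2GB/s per-port figure above is the vendor's sustained number, somewhat below this theoretical ceiling):

```shell
# QDR InfiniBand link arithmetic.
lanes=4; gbps_per_lane=10              # 4x QDR = 4 lanes at 10Gb/s
raw_gbps=$(( lanes * gbps_per_lane ))  # 40Gb/s signalling rate
data_gbps=$(( raw_gbps * 8 / 10 ))     # 32Gb/s after 8b/10b encoding
data_gB=$(( data_gbps / 8 ))           # 4GB/s theoretical payload ceiling
echo "$raw_gbps $data_gbps $data_gB"
```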



Storage and Cluster File System

There are two main storage sub-systems:

Fast 50TB cluster file system running Lustre software from Cluster File Systems (CFS), configured as RAID-6.

Redundant NFS system with 100TB of usable RAID-6 disk.

The cluster file system is Lustre, the scalable parallel file system from CFS, and utilises two cross-connected OSS arrays, each containing 60 x 1TB 3.5" 6Gb/s SAS PI FDE disks. This provides high-performance, fault-tolerant I/O servers for data storage and metadata storage. Bull's PFS (Parallel File System) solution is designed and balanced so that the sustained bandwidth is very close to the theoretical peak: sustained write performance of 6.2GB/s and sustained read performance of 7.8GB/s.

The Lustre file system scales linearly as more OSSs (Object Storage Servers) and more storage are added to the cluster. Each I/O cell (1 x NetApp array and 2 x OSSs) provides 3.2GB/s write and 4GB/s read performance.
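
The per-cell and aggregate figures line up: with two I/O cells, the peaks come to 6.4GB/s write and 8GB/s read, consistent with the quoted sustained 6.2GB/s and 7.8GB/s:

```shell
# Aggregate Lustre bandwidth from two I/O cells (awk for the decimals).
awk 'BEGIN {
  cells = 2
  printf "write %.1f GB/s, read %.1f GB/s\n", cells * 3.2, cells * 4.0
}'
# prints: write 6.4 GB/s, read 8.0 GB/s
```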

High Level Raven Storage Schematic Diagram


The NFS storage is a no-single-point-of-failure system running NFS over InfiniBand. The array has dual controllers which are cross-connected to two servers. The disks are configured in RAID-6 for protection against up to two disks failing concurrently. The solution commits to 450MB/s sustained write and 600MB/s sustained read performance.




Software

The main software components on the system are:

Operating System: Bullx Linux 6.0 (based on RHEL 6)

Job Scheduler: PBS Pro & PBS Analytics

Cluster Management Tools: Bull MCM (Monitoring and Control Management tools)

Cluster File System: Lustre
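
As an illustration of how work is submitted through PBS Pro, the sketch below shows a minimal MPI job script. The queue name, walltime, and module names here are illustrative assumptions only, not the documented Raven configuration; consult the Raven user guide (Quick Start) for the actual values.

```shell
#!/bin/bash
# Minimal PBS Pro job script sketch for the MPI partition.
# NOTE: the queue, walltime and module names below are assumptions
# for illustration, not the documented Raven configuration.
#PBS -N example_job
#PBS -q workq                            # assumed queue name
#PBS -l select=2:ncpus=16:mpiprocs=16    # two 16-core MPI nodes
#PBS -l walltime=01:00:00
cd "$PBS_O_WORKDIR"
module load intel                        # assumed module name
mpirun -np 32 ./my_mpi_program
```

The `select=2:ncpus=16:mpiprocs=16` line requests two whole 16-core B510 nodes, matching the node geometry described above.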

Software tools include a range of libraries, compilers and applications (list still under revision):


  • Intel® Cluster Studio (Floating Academic 5 Seat Pack (ESD)) including Intel C / C++ / Fortran


  • Intel Math Kernel Library - Cluster Edition Medium Cluster License for Linux
  • FFTW
  • HDF5
  • netCDF
  • gsl

Analysers, Profilers and Debuggers

  • Intel® VTune™ Performance Analyzer for Linux - Floating Academic 1 Seat Pack (ESD)
  • Intel® Trace Analyzer & Collector (ITA / ITC), Large Cluster System License, Single Cluster, unlimited Developers, Academic
  • Bullx Supercomputer MCM (Monitoring and Control Management Tools)
  • Allinea: DDT (Distributed Debugging Toolkit) - 64 processor license
  • Allinea OPT (Optimization toolkit) - 64 processor license
