Dr Daniel Gallichan

Lecturer in Medical Imaging

School of Engineering

Email: gallichand@cardiff.ac.uk
Telephone: +44 (0)29 2087 0045
Location: Cardiff University Brain Research Imaging Centre, Maindy Road, Cardiff, CF24 4HQ

I joined the School of Engineering as a lecturer in November 2016. My research covers various aspects of the physics of Magnetic Resonance Imaging (MRI), most recently focusing on the development of motion-correction methods for ultra-high resolution imaging. My research is based at CUBRIC.

3D brain render

I created the image above using the open-source software Blender, placing my brain into the 'Class room' demo scene created by Christophe Seux. You can read about how to get your own brain into Blender by following the instructions on my blog.

You can also take a closer look at my brain in 3D if you like, with this browser-based viewer:

3D brain viewer

Education and qualifications

  • 2007: DPhil in Medical Physics, University of Oxford. Thesis: Measuring Cerebral Blood Flow using ASL in MRI.
  • 2003: Life Sciences Interface Doctoral Training Year, University of Oxford.
  • 2002: MSci Physics with a European Language (German), University of Nottingham (exchange year at LMU Munich).

Career overview

  • 2016 - present: Lecturer in Engineering with research at CUBRIC, Cardiff University
  • 2011 - 2016: Senior Research Scientist, EPFL Lausanne, Switzerland
  • 2009 - 2011: Post-doctoral Research Scientist, University Medical Center Freiburg, Germany
  • 2007 - 2008: Post-doctoral Research Scientist, FMRIB Centre, University of Oxford

I currently teach on the following modules: Computing 1 - MATLAB (EN2106), Medical Image Processing (EN4505) and Clinical Engineering 2 (EN4506).

Example Movie of FatNavs in action

Example of real 3D FatNavs during a scan where the subject made small deliberate movements

Motion-correction with 3D FatNavs

There is continual interest in pushing the boundaries of what can be achieved with MRI, especially regarding the spatial resolution of the images. At CUBRIC we are fortunate to have 4 state-of-the-art MR systems, including a very powerful 7T magnet (scanners in hospitals typically operate at 1.5T or 3T). This power enables us to acquire full 3D images of the brain at exceptionally high resolution (voxel sizes < 500 microns) - yet these high resolutions still require long scan times. During a long scan (up to 30 minutes or more) you will probably move your head by a millimetre or two, even if you try to remain as still as possible - and at these very high resolutions even such small movements degrade the achievable image quality.
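To see why high resolution implies long scans, it helps to remember that in a conventional fully sampled 3D acquisition one readout line is collected per repetition time (TR), so the scan time is the product of the two phase-encoding matrix sizes and the TR. The sketch below uses illustrative numbers (the matrix size and TR are assumptions for the example, not parameters from any specific protocol mentioned here):

```python
# Illustrative scan-time arithmetic for a fully sampled 3D acquisition:
# one readout line per TR, so total time = N_PE1 * N_PE2 * TR.
def scan_time_seconds(n_pe1, n_pe2, tr_seconds):
    return n_pe1 * n_pe2 * tr_seconds

# Hypothetical high-resolution protocol: 384 x 256 phase-encoding
# steps with a 10 ms repetition time.
t = scan_time_seconds(384, 256, 0.010)
print(f"{t / 60:.1f} minutes")  # about 16 minutes for a single volume
```

Doubling the resolution in each phase-encoding direction quadruples this figure, which is why sub-500-micron whole-brain scans quickly reach the half-hour regime where subject motion becomes unavoidable.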

Over the past few years - mainly while working at the EPFL in Switzerland - I have been developing a method to use rapid acquisitions of just the fat within the head (3D FatNavs) to allow tracking of very tiny head movements - which can then be corrected by post-processing of the raw data.

We are keen for other sites to start trying out 3D FatNavs - and we already have collaborators testing the sequences at different sites, on both Siemens and Philips platforms. If you are interested in collaborating, please contact me by email.

The set of MATLAB tools developed to perform the whole retrospective correction pipeline can be freely downloaded from the RetroMoCoBox GitHub page.

Imaging the brain at ultra-high resolution

In May 2016, we also demonstrated using 3D FatNavs to allow ultra-high resolution imaging at 7T, down to around 350 micron isotropic resolution of the whole brain. The full paper is Open Access and available from PLOS ONE - and you can also download the full datasets in NIFTI format from the Open Science Framework.

Manually segmented hippocampus

3D software rendering of a manually segmented hippocampus from an ultra-high resolution MRI scan. Read more in Federau and Gallichan, PLOS ONE 2016.

The principle behind 3D FatNavs

A typical MRI image of the head at low resolution (2mm) might take around 30s to acquire:

Water image

But if we perform the same scan but at the frequency specific to fat rather than water, we obtain this image:

Fat image

The fat within the head is primarily localized to the scalp, which results in an image which is sparse (i.e. most of the image is zero or close to zero). Sparsity is an important concept in signal processing, as sparse signals can be compressed much more readily without losing information. The corresponding concept in MRI is that if we can represent our image in a sparse way, then we should also be able to acquire the data for our image much faster (via parallel imaging) while still maintaining reasonable image quality.
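The contrast between the two images above can be illustrated with a toy example. The snippet below builds a 'scalp-like' image (a thin bright ring, standing in for the fat image) and a 'head-like' image (a filled disc, standing in for the water image) and compares their non-zero fractions; the geometry is entirely made up for illustration:

```python
import numpy as np

# Toy illustration of sparsity: a thin ring of pixels (like the fat in
# the scalp) has far fewer non-zero entries than a filled disc (like a
# conventional water image of the whole head).
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.sqrt(x**2 + y**2)
fat_like = ((r > 0.85) & (r < 0.95)).astype(float)   # thin bright ring
water_like = (r < 0.95).astype(float)                # filled disc

sparsity_fat = np.count_nonzero(fat_like) / fat_like.size
sparsity_water = np.count_nonzero(water_like) / water_like.size
print(f"non-zero fraction, fat-like image:   {sparsity_fat:.3f}")
print(f"non-zero fraction, water-like image: {sparsity_water:.3f}")
```

The far smaller non-zero fraction of the ring image is what makes aggressive undersampling of the fat signal tolerable: most of what a reconstruction must get right is empty space.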

Here is the same image, but acquired in just over 1 second (an effective acceleration factor of 28 is achieved by combining 4x4 GRAPPA acceleration with 6/8 partial Fourier in both phase-encoding directions):
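The quoted factor of 28 follows directly from multiplying the individual accelerations - 4x4 GRAPPA undersampling combined with covering only 6/8 of k-space along each phase-encoding axis:

```python
# Effective acceleration factor from the text: 4x4 GRAPPA combined with
# 6/8 partial Fourier in both phase-encoding directions.
grappa = 4 * 4                       # undersampling in the two PE directions
partial_fourier = (8 / 6) * (8 / 6)  # only 6/8 of k-space covered per PE axis
acceleration = grappa * partial_fourier
print(f"effective acceleration: {acceleration:.1f}x")  # ~28.4x
```

This is why a scan that would nominally take around 30 s can be compressed to just over 1 second while the sparse fat image remains usable for motion tracking.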

Fat image accelerated

The concept of the 3D FatNav is therefore to regularly acquire volumes such as that shown above (for example, a 'fat navigator' image could be acquired every 6 seconds during an MP2RAGE structural scan) and to track the small movements of the head which occurred during the whole scan by aligning these fat images. The motion information can then be used to retrospectively correct the raw data from the primary structural scan via post-processing.
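The correction step in that last sentence can be sketched for the simplest case. What follows is a minimal illustration of the underlying principle, not the actual RetroMoCoBox pipeline: it assumes a pure in-plane translation in 2D, whereas the real method estimates full rigid-body motion (rotations included) in 3D. By the Fourier shift theorem, a translation in image space corresponds to a linear phase ramp in k-space, so a shift measured from the navigators can be undone by applying the opposite ramp to the raw data:

```python
import numpy as np

# Minimal sketch of retrospective translation correction: translating an
# object in image space multiplies its k-space data by a linear phase
# ramp, so a known shift can be removed by applying the inverse ramp.
def correct_translation(kspace, shift_px):
    """Remove a known (dy, dx) translation, in pixels, from 2D k-space."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]   # cycles per pixel along y
    kx = np.fft.fftfreq(nx)[None, :]   # cycles per pixel along x
    ramp = np.exp(2j * np.pi * (ky * shift_px[0] + kx * shift_px[1]))
    return kspace * ramp

# Usage: simulate a subject shift, then correct the raw data for it.
img = np.zeros((64, 64))
img[20:40, 25:45] = 1.0
shifted = np.roll(img, (3, -5), axis=(0, 1))   # 'subject moved' mid-scan
k = np.fft.fft2(shifted)                       # the measured raw data
recovered = np.fft.ifft2(correct_translation(k, (3, -5))).real
print(np.allclose(recovered, img))  # True: the shift is removed exactly
```

In practice each shot of the host sequence gets its own motion estimate from the nearest navigator, and rotations additionally require regridding of the k-space trajectory, which is where most of the complexity of the full pipeline lies.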

You can read in more detail about 3D FatNavs in our paper, which is featured in the March 2016 edition of Magnetic Resonance in Medicine - click here to download a preprint.

Animated versions of figures from 3D FatNavs paper

Figure 4 from Gallichan et al
A zoomed section of the full MP2RAGE volume acquired at 0.33x0.33x1.00 mm, showing the improvements following retrospective motion-correction using the 3D FatNavs: Figure 4

Figure 5 from Gallichan et al
A zoomed section of the GRETI2 volume from the same 0.33x0.33x1.00mm MP2RAGE dataset, with a minimum intensity projection taken over a 10 mm slab in the z-direction, showing the improvements following retrospective motion-correction using the 3D FatNavs: Figure 5

Figure 6 from Gallichan et al
A zoomed section of the GRETI2 volume from the same 0.33x0.33x1.00mm MP2RAGE dataset, with a maximum intensity projection taken over the full 80 mm slab in the z-direction, showing the improvements following retrospective motion-correction using the 3D FatNavs: Figure 6

Figure 8 from Gallichan et al
A zoomed section of the 0.6x0.6x0.6mm TSE volume showing the improvement following retrospective motion-correction using the 3D FatNavs: Figure 8