Robotics and Autonomous Intelligent Machines

Creating autonomous robots and intelligent systems that can tackle challenging problems in unstructured environments.

Recent years have seen significant progress in robotics and artificial intelligence (AI), enabling robots and machines to accomplish challenging missions autonomously. Research in robotics and AI can have an enormous impact on industry and society through the collaborative development of solutions to real-world challenges.

The Robotics and Autonomous Intelligent Machines (RAIM) group undertakes fundamental research in autonomous navigation, manipulation, machine vision/smart sensing, and more, to create autonomous robots and intelligent systems that can tackle challenging problems in unstructured environments across sectors including healthcare, energy, agriculture, manufacturing, and environmental science.

We have four main members and more than 13 associate members from across all disciplines within the School of Engineering. Our broad spectrum of expertise enables us to apply our work in several high-impact research areas, including:

  • robot perception and robot learning
  • autonomous navigation and manipulation
  • human-robot collaboration
  • robot and machine vision
  • smart sensing and signal processing

We aim to give unmanned systems a higher level of autonomy, providing capabilities in advanced situational awareness, localisation and mapping, multi-modal sensing, robot learning, autonomous manipulation, path planning, and human-robot collaboration. This includes robot perception, where we aim to enhance robot cognitive capability by deploying advanced perceptual capabilities, such as vision, to understand environments and interpret human gestures.

Our current research areas include:

Robot perception and learning

We envisage enhanced robot cognitive capability through advanced multi-modal perception and continuous self-learning, enabling robots to understand the environment in 3D, predict situations via machine learning, and support humans in real-world problems. This includes:

  • intelligent sensor processing and sensor fusion for multi-modal sensors, such as cameras and lidar, for environmental mapping and robot localisation, including in GPS-denied environments (a toy fusion example follows this list)
  • robot learning and planning for safe and reliable navigation solutions for autonomous systems and mobile robots to operate in real-world environments that are safety-critical, dynamic, and unstructured
  • robot learning and planning for autonomous manipulation tasks, such as robotic assembly and motion planning.
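The group's page does not prescribe a particular fusion algorithm, so the following is only a toy illustration of the idea behind fusing odometry with absolute position fixes: a one-dimensional Kalman filter in Python. The function name, noise parameters, and simulated data are all invented for this sketch; a real system would estimate full pose with, for example, an extended Kalman filter.

```python
import numpy as np

def kalman_fuse(z_gps, u_odom, x0=0.0, p0=1.0, q=0.01, r_gps=4.0):
    """Toy 1-D Kalman filter: fuse odometry increments (motion model)
    with noisy GPS-like position fixes (measurement model)."""
    x, p = x0, p0          # position estimate and its variance
    estimates = []
    for z, u in zip(z_gps, u_odom):
        # Predict: propagate the state using the odometry increment.
        x, p = x + u, p + q
        # Update: correct with the position fix, weighted by the Kalman gain.
        k = p / (p + r_gps)
        x += k * (z - x)
        p *= 1.0 - k
        estimates.append(x)
    return np.array(estimates)

# Simulated straight-line motion at 1 m per step, with noisy GPS fixes.
rng = np.random.default_rng(0)
truth = np.arange(20.0)
gps = truth + rng.normal(0.0, 2.0, truth.size)
odom = np.ones_like(truth)
print(kalman_fuse(gps, odom).round(2))
```

In a GPS-denied environment the update step is simply skipped and the filter coasts on odometry alone, which is why drift accumulates without absolute fixes or map-based corrections.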

Human-robot collaboration

Our work in this area introduces humans into the loop of robotic control, presenting additional technical challenges such as:

  • environmental understanding through visual object recognition and pose identification for grasping and manipulation in unstructured workspaces
  • human behaviour monitoring and understanding
  • shared task planning for robots and humans or multiple robots
  • robot programming by demonstration, where robots learn new skills by observing demonstrations by human operators (a minimal sketch follows this list).
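As a minimal, hypothetical sketch of programming by demonstration, the Python snippet below resamples several recorded demonstrations of the same motion to a common length and averages them into one reference trajectory. Real systems typically use richer representations, such as dynamic movement primitives or learned policies; the function name and data here are invented for illustration.

```python
import numpy as np

def learn_from_demos(demos, n_points=50):
    """Average several demonstrated trajectories into one reference motion.
    Each demo is an (N_i, D) array of joint or Cartesian waypoints."""
    resampled = []
    for demo in demos:
        demo = np.asarray(demo, dtype=float)
        t_old = np.linspace(0.0, 1.0, len(demo))
        t_new = np.linspace(0.0, 1.0, n_points)
        # Interpolate each dimension onto a common time base.
        resampled.append(np.column_stack(
            [np.interp(t_new, t_old, demo[:, d]) for d in range(demo.shape[1])]))
    return np.mean(resampled, axis=0)

# Three noisy demonstrations of the same 2-D reaching motion.
rng = np.random.default_rng(1)
base = np.column_stack([np.linspace(0, 1, 40), np.linspace(0, 0.5, 40)])
demos = [base + rng.normal(0, 0.01, base.shape) for _ in range(3)]
print(learn_from_demos(demos).shape)  # (50, 2)
```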

Autonomous structural health monitoring

We have extensive experience in structural health monitoring (SHM), specifically acoustic emission and acousto-ultrasonics. We aim to develop solutions for autonomous SHM by integrating our low-power, wireless acoustic emission system into autonomous systems. This would reduce the number of sensors required to monitor large structures such as bridges or turbines, and would integrate artificial intelligence to optimise data collection, management, and interrogation.
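As a toy illustration of one building block of acoustic emission (AE) monitoring, the Python sketch below implements simple threshold-based hit detection; the parameter values and synthetic data are invented and do not describe the group's actual wireless system. It records the times at which the rectified signal first crosses a fixed threshold, with a dead time to avoid retriggering on the same burst.

```python
import numpy as np

def detect_ae_hits(signal, fs, threshold, dead_time=1e-3):
    """Threshold-based AE hit detector: return the times at which |signal|
    first crosses the threshold, at least dead_time seconds apart."""
    hits, last_hit = [], -np.inf
    for i, v in enumerate(np.abs(signal)):
        t = i / fs
        if v >= threshold and (t - last_hit) >= dead_time:
            hits.append(t)
            last_hit = t
    return hits

# Synthetic 1 MHz record: background noise plus two decaying AE bursts.
fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)
sig = 0.01 * np.random.default_rng(2).normal(size=t.size)
for t0 in (0.002, 0.007):
    sig += 0.5 * (t >= t0) * np.exp(-(t - t0) / 2e-4) \
               * np.sin(2 * np.pi * 150e3 * (t - t0))
print(detect_ae_hits(sig, fs, threshold=0.1))  # ~[0.002, 0.007]
```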

Deep learning models

The popularisation of video surveillance and the vast increase in video content on the web have rendered video one of the fastest-growing sources of data. Deep learning methods have demonstrated success in many areas of computer vision, including human action and activity recognition. However, for users to be confident in their predictions, their decisions need to be transparent and explainable. The aim of our research in this area is to develop algorithms capable of explaining the decisions made by deep learning methods, specifically when applied to human activity recognition. This research is the result of a collaboration with the School of Computer Science and Informatics.
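The page does not name a specific explanation technique; occlusion sensitivity is one simple, model-agnostic method consistent with this aim, and the Python sketch below illustrates it. Both occlusion_map and toy_model are hypothetical stand-ins (a real study would probe a trained video activity-recognition network): a patch is slid over the input frame and the drop in the target-class score is recorded, highlighting the regions the model relied on.

```python
import numpy as np

def occlusion_map(model, frame, target_class, patch=8, stride=8, fill=0.0):
    """Occlusion-sensitivity map: mask patches of the input and record how
    much the target-class score drops. Large drops mark influential regions."""
    h, w = frame.shape[:2]
    base = model(frame)[target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = frame.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - model(occluded)[target_class]
    return heat

# Stand-in "classifier": its class-1 score rises with the brightness of the
# image centre, mimicking a model that attends to a person mid-frame.
def toy_model(frame):
    centre = frame[24:40, 24:40].mean()
    return np.array([1.0 - centre, centre])

frame = np.zeros((64, 64))
frame[24:40, 24:40] = 1.0
print(occlusion_map(toy_model, frame, target_class=1).round(2))
```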

Project: PHYDL: Physics-informed Differentiable Learning for Robotic Manipulation of Viscous and Granular Media

  • Ze Ji, Yukun Lai
  • Funder: EPSRC New Horizons
  • October 2022 – October 2024
  • Value: £250,000

Project: Reinforcement Learning for autonomous navigation with GNSS-based localisation

  • Ze Ji
  • Funder: Spirent Communications
  • 2021 – 2024
  • Value: £32,000

Project: BIM and Digital Twins in support of Smart Bridge Structural Surveying

  • Haijiang Li, Ze Ji, Abhishek Kundu
  • Funder: Innovate UK/KTP (with industry partner Centregreat Rail)
  • September 2021 – September 2024
  • Value: £280,000

Project: DIGIBRIDGE: Physics-informed digital twin supported smart bridge maintenance

  • Haijiang Li, Abhishek Kundu, Ze Ji
  • Industry partners: Centregreat Rail, Crouch Waterfall, Centregreat Engineering
  • Funder: SmartExpertise/WEFO
  • November 2021 – December 2022
  • Value: £111,884

Project: Active Robot Learning for Subtractive, Formative, and Additive Manipulations of Granular and Viscous Materials

  • Ze Ji
  • Funder: Royal Society
  • March 2020 – March 2022
  • Value: £17,812

Project: Additive Manufacturing and Robotics to Support Improved Response to Increased Flexibility

  • Rossi Setchi, Ze Ji
  • Funder: WEFO (through ASTUTE), collaborating with Continental Teves
  • February 2018 – February 2019
  • Value: £232,012

Project: 3D reconstruction and characterisation of spattering behaviours in SLM processing by fusing images from multiple cameras

  • Ze Ji, Samuel Bigot, Rossi Setchi
  • April 2019 – April 2020
  • Value: £40,000

Project: Pushing the boundary of vision-based 3D surface imaging

  • Ze Ji, Jing Wu, Rossi Setchi
  • Funder: Renishaw and Cardiff Strategic Partner Fund
  • April 2018 – April 2019
  • Value: £40,000

Project: SRS – Multi-role shadow robotics system for independent living

  • Funder: Commission of the European Communities (EU FP7)
  • February 2010 – February 2013
  • Value: €5,136,039

Project: IWARD - Intelligent robot swarm for attendance, recognition, cleaning and delivery

  • Funder: Commission of the European Communities
  • January 2007 – January 2010
  • Total cost: €3,880,067

Academic group leader

Dr Ze Ji

Senior Lecturer - Teaching and Research

Email
jiz1@cardiff.ac.uk
Telephone
+44 (0)29 2087 0017

Academic staff

Professor Rhys Pullin

Professor

Email
pullinr@cardiff.ac.uk
Telephone
+44 (0)29 2087 9374
Dr Yulia Hicks

Senior Lecturer - Teaching and Research

Email
hicksya@cardiff.ac.uk
Telephone
+44 (0)29 2087 5945
Dr Seyed Amir Tafrishi

Lecturer in Robotics and Autonomous Systems

Email
tafrishisa@cardiff.ac.uk
Telephone
+44 (0)29 2087 6176

Associated staff

Dr Matthew Pearson

Lecturer

Email
pearsonmr@cardiff.ac.uk
Telephone
+44 (0)29 2087 6120
Professor Jianzhong Wu

Head of School of Engineering

Email
wuj5@cardiff.ac.uk
Telephone
+44 (0)29 2087 0668
Dr Daniel Gallichan

Lecturer in Medical Imaging

Email
gallichand@cardiff.ac.uk
Telephone
+44 (0)29 2087 0045
Professor Agustin Valera-Medina

Co-Director of the Net Zero Innovation Institute
Professor - Teaching and Research

Email
valeramedinaa1@cardiff.ac.uk
Telephone
+44 (0)29 2087 5948
Dr Nicholas Bill

Senior Lecturer

Email
billn@cardiff.ac.uk
Dr Samuel Bigot

Senior Lecturer - Teaching and Research

Email
bigots@cardiff.ac.uk
Telephone
+44 (0)29 2087 5946
Dr Maurizio Albano

Lecturer - Teaching and Research

Email
albanom@cardiff.ac.uk
Telephone
+44 (0)29 2087 0672
Dr Zhangming Wu

Senior Lecturer

Email
wuz12@cardiff.ac.uk
Telephone
+44 (0)29 2087 4542
Dr Yue Zhou

Lecturer in Cyber Physical Systems

Email
zhouy68@cardiff.ac.uk
Telephone
+44 (0)7851974512
Dr Jonathan Lees

Head of Department, Electrical & Electronic Engineering
Reader

Email
leesj2@cardiff.ac.uk
Telephone
+44 (0)29 2087 4318
Jin Li

Lecturer

Email
lij40@cardiff.ac.uk
Dr Abhishek Kundu

Senior Lecturer - Teaching and Research

Email
kundua2@cardiff.ac.uk
Telephone
+44 (0)29 2087 5953

Robotics and Autonomous Systems Laboratory

The Robotics and Autonomous Systems Laboratory was established in 2016 and is managed by Dr Ze Ji. It provides cutting-edge robotic facilities, including two KUKA LBR iiwa collaborative robots, one Robotnik VOGUE+ mobile manipulator, three KUKA youBot mobile robots, a number of TurtleBots, quadcopters, and many advanced sensors, such as high-definition 3D cameras (e.g., structured light and stereo vision), lidar, RTK GPS, and UWB, to support a broad range of research activities.

Equipment in the lab

  • Robotnik VOGUE+ mobile manipulator
  • KUKA youBot (x3)
  • KUKA LBR iiwa collaborative robots (x2)
  • High-definition cameras:
    • Zivid 3D Camera
    • Roboception Stereo 3D camera
    • RealSense cameras
    • Industrial cameras (GigE)
  • Robot-based large-scale high-definition 3D surface imaging (Multi-view Photometric Stereo)
  • Autonomous Collaborative Drones and USVs (Unmanned Surface Vehicles)

Human Factors Technology Laboratory

This is an interdisciplinary lab established between the Schools of Engineering, Computer Science and Informatics, and Psychology, under the direction of Dr Yulia Hicks, Professor David Marshall, and Professor Simon Rushton respectively.

Key equipment includes:

  • Motion capture systems, including a PhaseSpace 16-camera, 480 Hz system with 80 infrared markers (three-person tracking) and several electromagnetic trackers.
  • 3dMD 4D colour video camera with a 100 Hz frame rate, outputting colour and 3D point data.
  • Powerful PCs with multiple GPUs.

Find out more about the Human Factors Technology Laboratory.