Sparsity and structures in large-scale machine learning problems

This research project is in competition for funding with one or more projects available across the EPSRC Doctoral Training Partnership (DTP). Funding is usually awarded to the projects that attract the strongest applicants. Find out more about the DTP and how to apply.

Application deadline: 15 March 2019

Start date: 1 October 2019

The main objective of this project is to investigate the design of efficient approaches to scale up and improve state-of-the-art machine learning techniques, while providing theoretical guarantees on their behaviour.

Background

The term 'big data' generally refers to datasets that are too large or complex for traditional data-processing techniques to handle adequately. In parallel with the use of high-performance computing solutions (eg parallelisation, computation on graphics processing units), many alternatives exist for overcoming the difficulties inherent to the learning-with-big-data framework.

For instance, problems related to the size of the datasets might be addressed through sample-size and dimension reduction techniques, while feature extraction, low-dimensional approximation or sparsity-inducing penalisation techniques might be used to prevent the model complexity from exploding; a sparsity-inducing penalty is sketched below.
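As a minimal sketch of one such sparsity-inducing technique, the example below fits an l1-penalised (lasso) linear regression with scikit-learn; the synthetic dataset and the penalty weight alpha are illustrative assumptions, not details taken from the project description.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))

# Hypothetical ground truth: only 5 of the 50 features are informative.
true_coef = np.zeros(d)
true_coef[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ true_coef + 0.1 * rng.normal(size=n)

# The l1 penalty drives most fitted coefficients exactly to zero.
model = Lasso(alpha=0.1).fit(X, y)
print(np.count_nonzero(model.coef_), "non-zero coefficients out of", d)
```

Because the l1 penalty sets most coefficients exactly to zero, the effective model complexity stays under control even when the nominal feature dimension is large.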

Such operations must, however, be applied with great care, since they can have a significant impact on the quality of the final model, and their effects are often intrinsically connected. To make matters worse, the existing theory surrounding such approximation techniques is generally quite modest.

Project aims and methods

Within this overall objective, special emphasis will be placed on sample-size reduction and feature-extraction procedures based on the notion of kernel discrepancy (also referred to as maximum mean discrepancy, or kernel mean embedding).

Thanks to its ability to characterise representative samples, this notion has recently emerged as a powerful concept in machine learning, statistics and approximation theory: combined with auto-encoder techniques, it is for instance at the core of recent developments in Generative Adversarial Networks (the MMD-GAN method); it has also led to new strategies in Markov chain Monte Carlo methods (kernel adaptive Metropolis-Hastings), and has proved to be a valuable tool for kernel low-rank approximation (conic squared-kernel discrepancy). Investigating to what extent this type of approach can be generalised is one of the main motivations behind this project.
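As a concrete illustration of the kernel discrepancy, here is a minimal sketch of the standard (biased) estimator of the squared maximum mean discrepancy between two samples; the Gaussian kernel and the bandwidth sigma are illustrative assumptions rather than choices prescribed by the project.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = (
        np.sum(A**2, axis=1)[:, None]
        + np.sum(B**2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimator of the squared maximum mean discrepancy."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

# A subsample whose MMD to the full dataset is small is, in the
# RKHS sense, a representative sample of that dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                     # full dataset
subsample = X[rng.choice(1000, 50, replace=False)]  # candidate summary
print(mmd_squared(X, subsample, sigma=1.0))
```

In this setting, a small subsample with low MMD to the full dataset can be regarded as representative of it, which is exactly the property exploited by the sample-size reduction procedures mentioned above.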

You will acquire strong theoretical and practical skills in machine learning (kernel-based methods, artificial neural networks, random-field models, etc). Since machine learning is inherently an interdisciplinary field, you will also strengthen your expertise in mathematics through the study of the relevant underlying concepts (eg reproducing kernel Hilbert spaces, operator theory, optimisation, multiresolution analysis, statistics) and will in parallel develop strong computational and numerical abilities through the implementation and benchmarking of the proposed methodologies. You will have the opportunity to attend summer and winter schools relevant to your project, and will follow a selection of advanced courses proposed by the Broadening Study in Mathematics programme for postgraduate students.

Supervisor

Dr Bertrand Gauthier

Lecturer

Email: gauthierb@caerdydd.ac.uk
Telephone: +44 (0)29 2087 5544

Programme information

For information on the programme structure, entry requirements and how to apply, visit the Mathematics programme.

View the Programme
