
Quadratic and Convex Optimisation

We study optimisation algorithms for quadratic and convex programming. The central idea is a renormalisation of the search space: even in simple cases, convergent algorithms can be viewed as dynamical systems once a suitable transformation is applied at each iteration. The region containing the search object x, for instance the point where a given function f(.) attains its optimum, is renormalised at each iteration so that the target remains in a standard region X. The point x is therefore no longer fixed but moves within X, following the evolution of a dynamical system, and methods of analysis from ergodic theory and chaos apply.

Various algorithms are considered, e.g. line search, ellipsoid and steepest descent. The associated dynamical systems exhibit different features: in some cases the asymptotic behaviour depends on x and/or on the function f(.), and periodic and strange attractors are typical. A main purpose is to use the theory of dynamical systems to analyse the asymptotic behaviour of the algorithms and, where possible, to obtain optimal rates of convergence.
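As a toy illustration of the renormalisation idea (a standard textbook example, not an algorithm from the project itself): if a target point in [0, 1] is located by bisection and the bracketing half-interval is rescaled back to [0, 1] after every step, the renormalised position of the target evolves under the doubling map x -> 2x mod 1, a classically chaotic and ergodic dynamical system. A minimal sketch:

```python
# Toy illustration only: renormalised bisection, not the project's algorithms.
# We locate a target t in [0, 1] by bisection; after each step the bracketing
# half-interval is rescaled back to [0, 1].  The renormalised position of the
# target then follows the doubling map x -> 2x mod 1.

def renormalised_bisection(target, steps):
    """Track the renormalised position of `target` over bisection steps."""
    lo, hi = 0.0, 1.0
    positions = []
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if target < mid:
            hi = mid
        else:
            lo = mid
        # coordinate of the target after rescaling the bracket back to [0, 1]
        positions.append((target - lo) / (hi - lo))
    return positions

def doubling_map_orbit(x, steps):
    """Orbit of x under the doubling map x -> 2x mod 1."""
    orbit = []
    for _ in range(steps):
        x = (2.0 * x) % 1.0
        orbit.append(x)
    return orbit

if __name__ == "__main__":
    t = 0.3141592653589793
    a = renormalised_bisection(t, 20)
    b = doubling_map_orbit(t, 20)
    # the renormalised bisection coordinates coincide with the doubling-map orbit
    print(all(abs(p - q) < 1e-9 for p, q in zip(a, b)))
```

Because the doubling map is chaotic, the renormalised position of the optimum never settles down; this is the sense in which even a simply convergent algorithm becomes a dynamical system amenable to ergodic analysis.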

For quadratic programming in high dimensions, simple and robust gradient algorithms have been developed that are as efficient as the celebrated conjugate gradient algorithm, which requires computations with infinite precision.
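For context, the baseline that such gradient algorithms improve upon can be sketched as plain steepest descent with exact line search on the quadratic f(x) = ½xᵀAx − bᵀx. This is a generic illustration of the problem class, not the project's robust algorithms:

```python
import numpy as np

def steepest_descent(A, b, x0, iters=100):
    """Steepest descent with exact line search for f(x) = 0.5 x'Ax - b'x,
    with A symmetric positive definite.  Illustration only: a standard
    method, not the robust gradient algorithms developed in the project.
    """
    x = x0.astype(float)
    for _ in range(iters):
        g = A @ x - b              # gradient of f at x
        denom = g @ (A @ g)
        if denom == 0.0:           # zero gradient: already at the optimum
            break
        gamma = (g @ g) / denom    # exact minimiser of f along -g
        x = x - gamma * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)    # symmetric positive definite
    b = rng.standard_normal(n)
    x = steepest_descent(A, b, np.zeros(n), iters=500)
    print(np.linalg.norm(A @ x - b))  # residual of the linear system Ax = b
```

The convergence rate of this iteration is governed by the condition number of A, and the sequence of normalised gradients is exactly the kind of object whose asymptotic behaviour the dynamical-systems analysis above describes.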

Applications to diverse problems of applied mathematics are under consideration.
