Control and Signal Processing Lab

Past Seminars 2014

The scenario approach for design in the presence of uncertainty: some recent results

Speaker

Algo Carè, Research Fellow, Electrical and Electronic Engineering Department, The University of Melbourne.

Algo Carè received the M.Sc.Eng. degree in Computer Science in 2009 and the Ph.D. degree in Informatics and Automation Engineering in 2013 both from the University of Brescia, Italy. He spent one year at the Department of Information Engineering at the same university before joining the University of Melbourne in November 2013, where he is a Research Fellow. His current interests include finite-sample identification, randomised methods for convex optimization and learning theory.

Abstract

In this seminar we will focus on design problems where a convex cost function affected by uncertainty has to be minimised. In this context, which is common in control systems engineering, operations research, finance, etc., a commonly used heuristic is to make a decision based on a set of collected data called "scenarios". We will focus on two important approaches to data-based decision-making: the worst-case approach and the least-squares approach. Once the design decision has been made according to one of these approaches, we are interested in the probability that a new situation, i.e. a new uncertainty instance, carries a cost higher than some empirically meaningful cost threshold. The probability that a cost threshold is exceeded is called the "risk". By studying the risks of meaningful cost thresholds, we gain quantitative information about the reliability of our design. Recent theoretical developments have shown that, under the assumption that the scenarios are drawn independently according to the same probability distribution, the risks can, in many situations of interest, be effectively studied in a distribution-free context, that is, without any knowledge of the probability distribution according to which the data are generated.
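
As a toy illustration of the worst-case approach and of the notion of risk (not taken from the talk; the quadratic cost, the Gaussian scenario distribution and the sample sizes are illustrative assumptions), the sketch below makes a min-max decision from sampled scenarios and then estimates, by fresh sampling, the probability that a new uncertainty instance exceeds the empirical worst-case cost.

```python
# Minimal sketch of the worst-case ("scenario") approach on a toy problem.
# All modelling choices here are illustrative assumptions, not the talk's.
import numpy as np

rng = np.random.default_rng(0)

def cost(x, delta):
    # Convex-in-x cost affected by the uncertain parameter delta.
    return (x - delta) ** 2

# 1. Draw N i.i.d. scenarios and make the worst-case (min-max) decision.
N = 100
scenarios = rng.normal(loc=0.0, scale=1.0, size=N)
# For this particular cost, the min-max solution is the midpoint of the observed range.
x_star = 0.5 * (scenarios.min() + scenarios.max())
worst_case_cost = cost(x_star, scenarios).max()   # empirical cost threshold

# 2. Estimate the "risk": the probability that a new uncertainty instance
#    carries a cost exceeding the empirical worst-case threshold.
new_scenarios = rng.normal(loc=0.0, scale=1.0, size=100_000)
risk_estimate = np.mean(cost(x_star, new_scenarios) > worst_case_cost)

print(f"decision x* = {x_star:.3f}, threshold = {worst_case_cost:.3f}, "
      f"estimated risk = {risk_estimate:.4f}")
```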

Slides: The scenario approach for design in the presence of uncertainty: some recent results (PDF, 7.14 MB)

Sparse and low-rank representation methods in system identification

Speaker

Professor Bo Wahlberg, KTH Royal Institute of Technology, Stockholm Sweden

Bo Wahlberg received the M.Sc. degree in Electrical Engineering in 1983 and the Ph.D. degree in 1987, both from Linköping University, Sweden. In December 1991, he became Professor of the Chair of Automatic Control at KTH Royal Institute of Technology, Stockholm, Sweden. He was a visiting professor at the Department of Electrical Engineering, Stanford University, USA, from August 1997 to July 1998 and from August 2009 to June 2010, and vice president of KTH from 1999 to 2001. He is a Fellow of the IEEE for his contributions to system identification using orthonormal basis functions. His research interests include modeling, system identification, estimation and control of industrial processes. Bo Wahlberg is currently a visiting professor at the University of Newcastle, NSW, Australia, where he was also a postdoctoral researcher in the late eighties.

Abstract

Since the 1990s, sparsity has played an important role in several aspects of statistics, machine learning and signal processing, among other fields. In this tutorial talk, we will focus on some specific connections between sparsity and important problems in system identification. In particular, the problem of sparse estimation will be seen in the context of model structure selection, where $l_1$ regularization becomes a handy tool for overcoming the curse of dimensionality in model selection (when the number of candidate model structures is large) and for imposing prior knowledge, thus delivering parsimonious models. Then, the so-called nuclear norm relaxation, a convex surrogate of the rank function corresponding to the $l_1$ norm of the singular values of a matrix, will be used for subspace-like identification and for the estimation of structured systems. Finally, we will discuss the use of $l_1$ regularization for change-point detection. Unfortunately, these techniques are not fool-proof, so we will also discuss the conditions under which these sparsity-inducing methods might fail and when they can be used more safely.
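
As a small illustration of $l_1$-regularised model-structure selection (an illustrative sketch, not material from the talk; the data-generating system and regularisation weight are assumptions), the code below fits an over-parameterised FIR model with scikit-learn's Lasso and keeps only the lags whose coefficients survive the penalty.

```python
# Hedged sketch: l1 regularisation for selecting the active terms of a sparse FIR model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Assumed true system: sparse FIR filter of length 20 with only 3 nonzero taps.
true_taps = np.zeros(20)
true_taps[[0, 4, 11]] = [1.0, -0.5, 0.25]

u = rng.standard_normal(500)                                # input signal
Phi = np.column_stack([np.roll(u, k) for k in range(20)])   # lagged-input regressor matrix
Phi[:20] = 0.0                                              # crude handling of initial conditions
y = Phi @ true_taps + 0.05 * rng.standard_normal(500)

model = Lasso(alpha=0.05, fit_intercept=False).fit(Phi, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("selected lags:", selected)                           # ideally recovers {0, 4, 11}
```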

Slides: Sparse and low-rank representation methods in system identification (PDF, 3.73 MB)

Control Profiles of Complex Networks

Speaker

Justin Ruths, Assistant Professor, Singapore University of Technology and Design

Justin Ruths is an assistant professor in Engineering Systems and Design at the Singapore University of Technology and Design. Justin holds degrees in Physics (BS, Rice University), Mechanical Engineering (MS, Columbia University), Electrical Engineering (MS, Washington University in Saint Louis), and Systems Science and Applied Math (PhD, Washington University in Saint Louis). His research themes include casting problems in the natural sciences and medicine as optimal control problems and investigating the control of large-scale complex systems. Towards this latter goal, some of his recent work is at the interface of control and network science.

Abstract

Recent work at the border of network science and control theory has begun to investigate the control of complex systems by studying their underlying network representations. A majority of the work in this nascent field has looked at the number of controls required to fully control a network. In this talk I will present research that provides a ready breakdown of this number into categories that are both easy to observe in real-world networks and instructive in understanding the underlying functional reasons why the controls must exist. This breakdown sheds light on several observations made in the previous literature regarding the controllability of networks, and the resulting decomposition provides a mechanism for clustering networks into classes that are consistent with their large-scale architecture and purpose. Finally, we observe that synthetic network-formation models generate networks whose control breakdowns differ substantially from those observed in real-world networks.
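
As background for how the number of required controls is typically counted, the following sketch uses the standard maximum-matching argument from the structural-controllability literature; this is not the control-profile decomposition presented in the talk, and the helper function and example graph are illustrative assumptions.

```python
# Hedged sketch: count the minimum number of driver nodes of a directed network
# via a maximum bipartite matching (standard structural-controllability recipe).
import networkx as nx
from networkx.algorithms import bipartite

def num_driver_nodes(digraph: nx.DiGraph) -> int:
    # Bipartite representation: each node u gets an "out" copy and an "in" copy;
    # every directed edge u -> v becomes the undirected edge (u_out, v_in).
    B = nx.Graph()
    out_nodes = [("out", u) for u in digraph.nodes]
    in_nodes = [("in", v) for v in digraph.nodes]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from(in_nodes, bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in digraph.edges)
    matching = bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched_edges = len(matching) // 2          # the dict stores each matched pair twice
    # Driver nodes = unmatched nodes = N - size of a maximum matching (at least 1).
    return max(digraph.number_of_nodes() - matched_edges, 1)

# Example: a directed chain 1 -> 2 -> 3 needs a single driver node.
chain = nx.DiGraph([(1, 2), (2, 3)])
print(num_driver_nodes(chain))                  # expected: 1
```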

Video: Control Profiles of Complex Networks

Computer architectures and algorithms for real-time linear algebra, optimization and control

Speaker

Dr Eric Kerrigan, Senior Lecturer and Reader in Control Engineering and Optimization, Faculty of Engineering, Imperial College London

Dr Eric Kerrigan is a Reader in Control Engineering and Optimization at Imperial College London and is currently on sabbatical at the University of Melbourne. His research is on efficient numerical methods and computer architectures for solving optimization, control and estimation problems in real-time, with applications in aerospace and renewable energy. He has active collaborations with Siemens, EADS, National Instruments and LMS International. He is chair-elect of the United Kingdom Automatic Control Council and on the editorial boards of the IEEE Transactions on Control Systems Technology, Control Engineering Practice and IEEE Control Systems Society Conferences.

Abstract

The performance of an algorithm is a function of the architecture of the computing system on which it is implemented. This is particularly true if measurements from a physical system are used to update and solve a sequence of linear algebra or mathematical optimization problems in real-time, such as in control or signal processing. In these applications the designer has to trade off computing time, space and energy against each other, while satisfying constraints on the performance and robustness of the resulting cyber-physical system.

We will give an overview of research undertaken at Imperial College London aimed at designing the computing hardware and real-time algorithms at the same time. This co-design process can result in systems with efficiencies and performances that are not possible when decoupling hardware and algorithm design. We will concentrate on three different problems:

  1. The Lanczos algorithm is the building block of the most widely used iterative linear solvers, such as the conjugate gradient and minimal residual methods (a sketch of the basic recurrence follows this list). We will present a new algorithm to avoid overflow errors. Our fixed-point implementation of the Lanczos algorithm can sustain more than 40 billion operations per second per watt on current embedded processors, while still achieving the same accuracy as a double-precision floating-point implementation. This compares favorably to the Nvidia GeForce GTX 690, which has a specification of 18.74 billion floating-point operations per second (gigaflops) per watt.
  2. The alternating direction method of multipliers (ADMM) and fast gradient method of Nesterov have attracted considerable attention over the last few years, due to their ease of implementation and good performance on a large class of optimization problems. These methods are particularly amenable to analysis under the assumption of fixed-point arithmetic. We will present new theoretical results that can be used to determine a priori the number of bits required to achieve a given accuracy in the solution. We have successfully used these results to implement a predictive controller for an atomic force microscope on a low-end processor, while achieving control update rates in excess of 1 MHz.
  3. Constrained LQR problems have to be solved in predictive control applications. Our novel idea is to include suitably-defined decision variables in the quadratic program to allow for smaller roundoff errors in the solver. This enables one to trade off the number of bits used against speed and/or hardware resources, so that smaller numerical errors can be achieved for the same number of bits (same silicon area). Because of data dependencies, the algorithm complexity does not necessarily increase despite the larger number of decision variables. Examples show that a 10-fold reduction in hardware resources is possible compared to using double precision floating point, without loss of closed-loop performance.
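
Referring to the first item above, here is a minimal floating-point sketch of the plain Lanczos three-term recurrence; the fixed-point, overflow-avoiding variant discussed in the talk is not reproduced, and the example matrix and iteration count are illustrative assumptions.

```python
# Hedged sketch of the basic Lanczos iteration for a symmetric matrix A.
import numpy as np

def lanczos(A, v0, m):
    """Run m steps of the Lanczos recurrence for symmetric A.

    Returns the orthonormal basis V (n x m) and the tridiagonal coefficients
    (alphas, betas) such that A V is approximately V T with T tridiagonal.
    """
    n = A.shape[0]
    V = np.zeros((n, m))
    alphas, betas = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    beta = 0.0
    for j in range(m):
        V[:, j] = v
        w = A @ v - beta * v_prev          # three-term recurrence
        alphas[j] = v @ w
        w -= alphas[j] * v
        if j < m - 1:
            beta = np.linalg.norm(w)
            betas[j] = beta
            v_prev, v = v, w / beta
    return V, alphas, betas

# Example: approximate the extreme eigenvalues of a random symmetric matrix.
rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2
V, alphas, betas = lanczos(A, rng.standard_normal(200), 30)
T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
print(np.linalg.eigvalsh(T)[[0, -1]], np.linalg.eigvalsh(A)[[0, -1]])
```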

Slides: Computer architectures and algorithms for real-time linear algebra, optimization and control (PDF, 6.0 MB)

Growth rates for persistently excited linear systems

Speaker

Professor Fritz Colonius

Fritz Colonius was born in Göttingen, Germany and received a diploma degree in Mathematics from the University of Bielefeld (1975) and a doctoral degree from the University of Bremen (1979). After several postdoctoral positions, among others at the University of Graz and Brown University, he won a Heisenberg grant from the Deutsche Forschungsgemeinschaft (DFG). Since 1988 he has been Professor of Mathematics at the University of Augsburg. Fritz Colonius is, jointly with Wolfgang Kliemann, the author of the monograph The Dynamics of Control (Birkhäuser, 2000). His research interests include nonlinear control and deterministic and stochastic dynamical systems. He is currently an associate editor for the Journal of Dynamical and Control Systems, and has previously been an associate editor for ESAIM: Control, Optimisation and Calculus of Variations, the SIAM Journal on Control and Optimization, and Systems and Control Letters.

Abstract

We consider a family of linear control systems in which the transmission of data from the controller to the system is restricted. This restriction is modelled by persistently exciting signals that take values in the unit interval and whose average values are bounded away from zero. Uniform stabilization and destabilization by means of linear feedbacks will be discussed. This presentation is based on joint work with Yacine Chitour (Université Paris-Sud) and Mario Sigalotti (Ecole Polytechnique, Palaiseau).
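
As a rough numerical illustration (not taken from the talk), the sketch below simulates a persistently excited linear system in which a stabilising linear feedback acts only through an excitation signal taking values in [0, 1] whose average over each window is bounded away from zero. All matrices, the excitation pattern and the parameters are illustrative assumptions.

```python
# Hedged sketch: x' = A x + a(t) B K x, with a(t) in [0, 1] and average at least mu
# over every window of length T (a persistently exciting signal).
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])       # open-loop unstable (eigenvalues +/- 1)
B = np.array([[0.0], [1.0]])
K = np.array([[-30.0, -11.0]])               # stabilising feedback when a(t) = 1

dt, steps = 1e-3, 20_000
T, mu = 1.0, 0.3                              # excitation window and average level

def excitation(t):
    # Periodic on/off signal: "on" for a fraction mu of each window of length T,
    # so its average over every window equals mu (bounded away from zero).
    return 1.0 if (t % T) < mu * T else 0.0

x = np.array([1.0, 0.0])
for k in range(steps):
    a = excitation(k * dt)
    x = x + dt * (A @ x + a * (B @ (K @ x)).ravel())   # forward Euler step

# For this particular gain and excitation level the state decays despite the
# open-loop instability and the intermittent feedback.
print("final state norm:", np.linalg.norm(x))
```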


Contact Us

Prof Jonathan Manton

Director, Control and Signal Processing Laboratory
