Date: Tuesday 23 February 2021 – 13:00 (Europe/London)
Speaker: Professor Simon Maskell, Dept. of Electrical Engineering and Electronics, University of Liverpool
Abstract
We are often interested in a setting where we can define a model for some data (i.e. a likelihood) and explicitly describe our (potentially uninformative) prior knowledge, such that we can pose the generic statistical inference task of inferring some parameters as an equivalent Bayesian inference problem (irrespective of whether we subscribe to the Bayesian paradigm). High-performance algorithms (e.g. the No-U-Turn Sampler) exist to make inferences from data using such models. These algorithms are complicated. It is therefore common to use probabilistic programming languages (PPLs) as a flexible mechanism for describing the models, and then to use pre-existing implementations of the algorithms associated with those PPLs to make inferences given the definition of the models (and some data). Examples of PPLs include well-established (i.e. old) software tools like BUGS, but also more modern variants, e.g. Stan and PyMC3. The key advantage of using a PPL is that you just need to define the model; everything else is done for you. This, and PPLs' use of state-of-the-art algorithms, makes it possible for users of PPLs to be far more ambitious in terms of the scale and complexity of the models considered. This is why PPLs are used extensively by organisations like Google, Facebook, etc.
Thanks to approximately £10M of investment from EPSRC, we have been working to generate next-generation algorithms and to integrate those algorithms into one PPL, Stan. These algorithmic enhancements focus on the integration of Sequential Monte Carlo (SMC) samplers, a new numerical Bayesian approach that offers substantial (i.e. >>100x) improvements in run-time and therefore substantial potential to be ambitious with respect to the models considered.
In this talk, we will motivate the use of PPLs in general, explain a little about our focus on Stan specifically, and describe why we think the advantages we can now offer to users of SMC-Stan have motivated significant interest and uptake from IBM, Google, GCHQ, epidemiologists working on COVID-19, and the UK government. Our aim is to ensure that those who attend are amongst the early adopters of what we believe will become a world-leading tool for Bayesian inference.
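As a minimal sketch of the workflow the abstract describes (illustrative only, not taken from the talk), the example below uses PyMC3, one of the PPLs mentioned above: the user defines the prior and the likelihood, and the library's default NUTS implementation performs the inference. The data, parameter names and prior choices here are assumptions made for illustration.

import numpy as np
import pymc3 as pm

# Illustrative data: 100 noisy observations of an unknown mean (assumed, not from the talk)
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=100)

with pm.Model() as model:
    # Potentially uninformative priors on the parameters of interest
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=10.0)

    # Likelihood: the model for the data
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)

    # Everything else is done for you: pm.sample() uses NUTS by default
    trace = pm.sample(1000, tune=1000)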
Biography
Simon Maskell is a Professor of Autonomous Systems at the University of Liverpool, within the School of Electrical Engineering, Electronics and Computer Science, where he is affiliated with both the Centre for Autonomous Systems and the Institute for Risk and Uncertainty. He is also Director of the EPSRC Centre for Doctoral Training in Distributed Algorithms. Professor Maskell completed his PhD, 'Sequentially Structured Bayesian Solutions', in the Signal Processing Group of the Cambridge University Engineering Department, where he researched how Bayesian tracking algorithms exploit the structure of the problems that they tackle.
He currently focuses on exploiting the structure of problems in general in the design of algorithms for their solution. As such, Prof Maskell is working on difficult problems being tackled by the Artificial Intelligence community, for which he hopes to develop particularly efficient and robust solutions. These include, for example, learning strategies in partially observed games (i.e. getting a computer to learn from experience how to fool a human) and tracking articulated objects (e.g. tracking people in crowds using a network of webcams). The solutions that he has developed have, for example, benefited organisations operating within defence, security, insurance, petrochemicals, pharma and transport.