
CERAMATHS - DMATHS seminar: talk by Mihaly Petreczky

The seminar of the mathematics department of CERAMATHS will welcome Mihaly Petreczky (CRIStAL, Université de Lille) on Thursday, 12 October 2023.

  • 12/10/2023

  • 14:00 - 15:00
  • Campus Mont Houy - Bâtiment Abel de Pujol 2 - lecture hall (amphi) 70E

The seminar will take place at 2pm on Thursday, 12 October 2023, with the following talk:

PAC(-Bayesian) guarantees for learning dynamical systems

In this talk I will present non-asymptotic, PAC-like theoretical guarantees for learning dynamical systems. We will mainly consider discrete-time linear dynamical systems with stochastic noise, and then discuss some extensions of these results to continuous-time systems.

Learning linear systems is an established topic in control theory, more precisely in the subfield known as system identification. However, most of the established results deal with asymptotic guarantees for learning, i.e., they show statistical consistency of the learning algorithms. In contrast, there are relatively few results providing finite-sample bounds on the estimation error and generalisation error of the learned models. In particular, there are almost no results on probably approximately correct (PAC) and PAC-Bayesian bounds on the generalisation gap for dynamical systems. This is especially the case for stochastic systems learned from a single time series. This problem is challenging for several reasons: the data is not i.i.d., the models use an increasing number of data points to generate predictions, and the signals involved need not be bounded.

The motivation for studying PAC(-Bayesian) bounds is as follows. First, such bounds could be useful for LQG reinforcement learning. Second, as recurrent neural networks (RNNs) contain linear systems as a special case, PAC-Bayesian bounds for linear systems could be useful as a first step towards deriving similar bounds for RNNs. Indeed, PAC-Bayesian bounds have turned out to be promising for deriving non-trivial generalisation bounds for neural networks.
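To make the setting concrete, here is a minimal sketch (not the speaker's method, just standard least-squares system identification) of learning a discrete-time linear stochastic system from a single trajectory; the system matrix, dimensions, noise level, and trajectory length are all hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stable discrete-time linear system x_{t+1} = A x_t + w_t,
# observed along a single time series (hypothetical 2-state example).
A_true = np.array([[0.8, 0.1],
                   [0.0, 0.7]])
T = 5000  # length of the single trajectory

x = np.zeros((T + 1, 2))
for t in range(T):
    w = 0.1 * rng.standard_normal(2)  # stochastic process noise
    x[t + 1] = A_true @ x[t] + w

# Least-squares estimate of A from the single trajectory:
# solve min_A sum_t ||x_{t+1} - A x_t||^2.
X_past, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_past, X_next, rcond=None)[0].T

print(np.max(np.abs(A_hat - A_true)))  # estimation error
```

Classical results show consistency of this estimator as T grows; the finite-sample question addressed in the talk is how large the error can be, with high probability, for a fixed T, given that the rows of `X_past` are dependent.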

In this talk I will present recent results on PAC-Bayesian bounds for discrete-time linear stochastic systems learned from a single time series. I will then mention recent extensions to non-linear systems. I will also discuss extensions to continuous-time systems learned from i.i.d. data.
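As background, the classical PAC-Bayesian bound (in McAllester's form) for a bounded loss and an i.i.d. sample of size $n$ states that, with probability at least $1-\delta$, for every posterior $\rho$ over hypotheses,

```latex
\mathbb{E}_{h \sim \rho}\big[L(h)\big]
  \;\le\;
  \mathbb{E}_{h \sim \rho}\big[\widehat{L}_n(h)\big]
  \;+\;
  \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta}}{2n}},
```

where $\pi$ is a prior fixed before seeing the data, $L$ is the true risk and $\widehat{L}_n$ the empirical risk. Bounds of this shape do not apply directly in the setting of the talk, since data from a single trajectory of a stochastic system is not i.i.d. and the loss need not be bounded.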


Seminar organisers:

Serge Nicaise

Bouchaïb Sodaïgui