Seminar 1: BrIAS Fellow Prof. Ilya Kolmanovsky
Exploiting Supervisory Schemes and the Interplay Between Computations and Closed-Loop Properties in Model Predictive Control
Abstract: Model Predictive Control (MPC) leads to algorithmically defined nonlinear feedback laws for systems with pointwise-in-time state and control constraints. These feedback laws are defined by solutions to appropriately posed optimal control/trajectory optimization problems that are typically solved online. There is growing interest in the use of MPC for practical applications, including as an enabling technology for control and trajectory generation of autonomous vehicles in the aerospace, automotive, and robotics domains. To enable MPC implementation, the solutions to MPC optimization problems must be computed reliably and within the available time. After describing several motivating applications in the aerospace and automotive domains, the talk will reflect on recent research by the presenter and his students/collaborators into strategies for computing solutions to optimization problems arising in receding-horizon and shrinking-horizon MPC formulations. These strategies include methods for solving MPC problems inexactly and the use of add-on supervisory schemes for MPC that reduce computation time and enlarge the constrained closed-loop region of attraction. In particular, a Computational Governor (CG) will be described which maintains feasibility and bounds the suboptimality of the MPC warm start by altering the reference command provided to the inexactly solved MPC problem. As it turns out, the analysis of time-distributed implementations of MPC, based on a fixed number of optimization algorithm iterations per time step and warm-starting, benefits from the application of control-theoretic tools such as the small-gain theorem; intriguingly, similar tools can be exploited in "control-aware" multidisciplinary design optimization.
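The following is a minimal sketch of the kind of time-distributed MPC scheme the abstract refers to, not the presenter's implementation: at each time step a fixed number of projected-gradient iterations is applied to an input-constrained linear-quadratic MPC problem, warm-started from the shifted previous solution. The plant, horizon, step size, and solver choice are illustrative assumptions.

```python
# Sketch of time-distributed MPC with warm-starting (illustrative, not the
# presenter's method): a fixed number of projected-gradient iterations per step.
import numpy as np

# Double-integrator plant (illustrative assumption)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                      # prediction horizon
Q, R = np.eye(2), 0.1 * np.eye(1)
u_max = 1.0                 # input constraint |u| <= u_max

# Condensed prediction: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B

Qbar = np.kron(np.eye(N), Q)
H = G.T @ Qbar @ G + np.kron(np.eye(N), R)   # Hessian of the cost in u
alpha = 1.0 / np.linalg.eigvalsh(H).max()    # gradient step size

def td_mpc_step(x0, u_warm, iters=5):
    """Fixed number of projected-gradient iterations from a warm start."""
    f = G.T @ Qbar @ (Phi @ x0)              # linear term of the cost
    u = u_warm.copy()
    for _ in range(iters):
        u = np.clip(u - alpha * (H @ u + f), -u_max, u_max)
    return u

# Closed loop: apply the first input, shift the suboptimal solution forward
# to serve as the next warm start.
x = np.array([1.0, 0.0])
u_warm = np.zeros(N)
for t in range(50):
    u_seq = td_mpc_step(x, u_warm, iters=5)
    x = A @ x + (B * u_seq[0]).ravel()
    u_warm = np.concatenate([u_seq[1:], [u_seq[-1]]])
print("final state:", x)
```

In this sketch the coupling between the inexact solver and the closed loop is exactly what makes small-gain-style analysis natural: the solver error at each step acts as a disturbance on the plant, and the warm start feeds the plant state back into the solver.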
Seminar 2: BrIAS Fellow Prof. Bruno Sinopoli
Linear Methods for Dimensionality Reduction: is there life beyond PCA?
Abstract: Feature extraction and selection in the presence of nonlinear dependencies among the data is a fundamental challenge in unsupervised learning. We propose using a Gram-Schmidt (GS) type orthogonalization process over function spaces to detect and map out such dependencies. Specifically, by applying the GS process over some family of functions, we construct a series of covariance matrices that can be used either to identify new large-variance directions or to remove those dependencies from known directions. In the former case, we provide information-theoretic guarantees in terms of entropy reduction. In the latter, we provide precise conditions under which the chosen function family eliminates existing redundancy in the data. Each approach yields both a feature extraction and a feature selection algorithm. Our feature extraction methods are linear and can be seen as natural generalizations of principal component analysis (PCA). We provide experimental results for synthetic and real-world benchmark datasets which show superior performance over state-of-the-art (linear) feature extraction and selection algorithms. Surprisingly, our linear feature extraction algorithms are comparable to, and often outperform, several important nonlinear feature extraction methods such as autoencoders, kernel PCA, and UMAP. Furthermore, one of our feature selection algorithms strictly generalizes a recent Fourier-based feature selection mechanism (Heidari et al., IEEE Transactions on Information Theory, 2022), yet at significantly reduced complexity.
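Below is a hedged toy illustration of the general idea described in the abstract, not the authors' algorithm: nonlinear functions of an already-extracted direction are orthogonalized out of the data (a GS-type step in the empirical inner product), and the residual covariance is then inspected for new large-variance directions. The polynomial function family and the synthetic data are assumptions made for the example.

```python
# Toy illustration (not the authors' method): remove polynomial dependence on
# the first PCA direction, then look for large-variance directions in the
# residual covariance.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = rng.standard_normal(n)
# Synthetic data with a nonlinear dependence: second coordinate ~ t**2
X = np.column_stack([t,
                     t**2 + 0.1 * rng.standard_normal(n),
                     rng.standard_normal(n)])
X = X - X.mean(axis=0)

# Step 1: ordinary PCA picks the first large-variance direction
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v1 = Vt[0]
s1 = X @ v1                                   # scores along the first direction

# Step 2: GS-type step over a function family of s1 (here: 1, s1, s1**2):
# project each data coordinate onto span of the family and keep the residual
F = np.column_stack([np.ones(n), s1, s1**2])
coeffs, *_ = np.linalg.lstsq(F, X, rcond=None)
R = X - F @ coeffs

# Step 3: the residual covariance reveals directions whose variance is not
# explained, even nonlinearly, by the first direction
C = R.T @ R / n
eigvals, eigvecs = np.linalg.eigh(C)
v2 = eigvecs[:, -1]                           # new large-variance direction

print("variance along v2 in the raw data:", np.var(X @ v2))
print("variance along v2 after removing the nonlinear dependence:", eigvals[-1])
```

The resulting extraction step remains linear (data are only projected onto directions), which is the sense in which the method generalizes PCA while still accounting for nonlinear redundancy.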
BrIAS Sustainable Robotics, AI and Automation Closing Ceremony