Overview

Supported by the increasing availability of massive computing power and extensive data, machine learning (ML) and artificial intelligence (AI) are now omnipresent and have found their way into a wide variety of application areas. Tasks that were previously carried out by humans, as well as tasks that were previously unmanageable due to their complexity, are increasingly being performed by ML and AI methods. This is particularly true in machine perception and image recognition, speech recognition and communication, and time series analysis, where AI methods have become standard. In addition, traditional methods are being augmented or even replaced by such methods, with the expectation that they generate results efficiently, safely, and reliably.

Problem-informed Machine Learning for optimization-based Control

Our focus is on problem-informed machine learning for automatic control and path planning tasks. Problem-informed machine learning has conceptual advantages in terms of real-time capability and the handling of complex situations. However, the demand for (formal or practical) certification, robustness, and resilience remains a largely unresolved challenge.

Our aim is to develop a widely applicable framework for optimization-based controller design using machine learning. To this end, we do not rely on a purely data-driven supervised learning approach, but rather aim to learn underlying principles such as optimality conditions or Bellman equations. These principles open the door to a quantitative assessment of the learned output and may allow failures to be detected.
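As a minimal illustration of this idea (a hypothetical sketch, not our actual implementation): for a learned value function, the Bellman residual is a computable quantity that vanishes at the exact optimal value function, so large residuals can flag faulty outputs. The toy deterministic chain MDP and all function names below are assumptions for the sake of the example.

```python
# Hypothetical sketch: the Bellman residual of a (learned) value function
# as a failure detector. Toy deterministic chain MDP: states 0..N-1,
# actions move left/right, reward 1 on entering the goal state N-1.

GAMMA = 0.9
N = 10

def step(s, a):
    """Deterministic transition for action a in {-1, +1}; reward 1 on reaching the goal."""
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if (s2 == N - 1 and s != N - 1) else 0.0
    return s2, r

def bellman_residual(V, s):
    """|V(s) - max_a [r + gamma * V(s')]| -- zero at the exact optimal value function."""
    if s == N - 1:                       # absorbing goal state, value 0
        return abs(V[s])
    backup = max(r + GAMMA * V[s2] for s2, r in (step(s, a) for a in (-1, 1)))
    return abs(V[s] - backup)

# Exact optimal values by backward recursion: V*(goal) = 0, V*(N-2) = 1,
# and V*(s) = gamma * V*(s+1) one step further from the goal.
V_exact = [0.0] * N
V_exact[N - 2] = 1.0
for s in range(N - 3, -1, -1):
    V_exact[s] = GAMMA * V_exact[s + 1]

# Corrupt one entry to mimic a faulty learned value function; the residual
# check flags the corrupted state (and its neighbors, whose backups use it).
V_bad = list(V_exact)
V_bad[3] += 0.2
flagged = [s for s in range(N) if bellman_residual(V_bad, s) > 1e-6]
```

The same residual can also serve as a training loss, which is one way the underlying principle, rather than labeled data alone, enters the learning problem.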

[Figure: Reinforcement learning scheme (RLscheme.png)]

We investigate the following research directions:

  • Reinforcement learning for path planning tasks in automated driving and space robotics
  • Imitation learning for model-predictive control in time-critical processes
  • Problem-informed machine learning for parametric optimization problems
  • Problem-informed machine learning for value functions
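For the directions on parametric optimization, a key ingredient is that optimality conditions give a computable certificate for a surrogate's output. The sketch below is purely illustrative (an assumed unconstrained parametric quadratic program with hypothetical function names, not our benchmark problems): the gradient of the objective vanishes at the true minimizer, so its norm can be used to accept or reject a predicted solution.

```python
import numpy as np

# Illustrative parametric QP (an assumption for this sketch):
#   min_x  0.5 * x' Q x - p' x,  with Q positive definite and parameter p.
# The exact minimizer is x*(p) = Q^{-1} p, where the gradient Q x - p vanishes.
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def stationarity_residual(x, p):
    """Gradient norm of the objective at x -- a computable optimality certificate."""
    return float(np.linalg.norm(Q @ x - p))

def certify(x, p, tol=1e-6):
    """Accept a surrogate's predicted solution only if stationarity nearly holds."""
    return stationarity_residual(x, p) <= tol

p = np.array([1.0, -1.0])
x_star = np.linalg.solve(Q, p)    # ground truth that a learned map p -> x approximates
x_bad = x_star + 0.1              # a perturbed prediction the check should reject
```

The same residual can double as a problem-informed training loss for the map from parameters to solutions, so that the optimality conditions, not just sampled solutions, shape the learned model.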


Equipment

Computing power, large memory, and GPU architectures are essential for training machine learning models. We use the following equipment:

  • Cluster of four Nvidia DGX A100 systems, each with 8 Nvidia A100 80 GB Tensor Core GPUs (640 GB GPU memory) and dual AMD Rome 7742 CPUs with 128 cores in total (2.25 GHz base, 3.4 GHz max boost)
  • PCs with GPUs
  • Nvidia Jetson Orin for on-board computations on mobile platforms