AI for Inverse Sensing Problems

Abstract

Inverse problems appear in computer science, engineering, and physics, among other fields, and are therefore a core topic in data science. In general, the problem is to infer some unknown quantity of interest from a set of observations. The so-called forward model is often known, or at least partially known, but the estimation problem itself is ill-posed, i.e., it cannot be easily and uniquely inverted. Instead, at least prior structural or distributional assumptions on the unknown quantity are required.
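
A minimal Python sketch (a toy setup, not taken from the publications below) makes this concrete: with far fewer observations than unknowns, a minimum-norm least-squares estimate explains the data perfectly yet misses the true signal, so a structural prior such as sparsity is needed to single out the correct solution.

    import numpy as np

    # Toy forward model: m = 20 noiseless observations of an n = 100
    # dimensional signal, i.e. y = A x is heavily underdetermined.
    rng = np.random.default_rng(0)
    m, n = 20, 100
    A = rng.standard_normal((m, n)) / np.sqrt(m)

    # The ground truth obeys a structural prior: only three nonzero entries.
    x_true = np.zeros(n)
    x_true[[5, 37, 81]] = [1.0, -2.0, 0.5]
    y = A @ x_true

    # Naive inversion (minimum-norm least squares) fits the data exactly ...
    x_ls = np.linalg.pinv(A) @ y
    print("data residual:", np.linalg.norm(A @ x_ls - y))        # ~ 0
    # ... but is far from the true signal: without the prior the problem
    # cannot be uniquely inverted.
    print("estimation error:", np.linalg.norm(x_ls - x_true))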

For a well-defined analytical formulation, algorithms can be derived directly or from a Bayesian perspective, and in many cases it is even possible to obtain performance bounds. Tuning parameters and acceleration techniques can often be computed explicitly as well. Compressed sensing and its successors are good examples here. However, for many real-world problems it is quite difficult to obtain such an analytical formulation. Often the forward model is incomplete, explicit prior models do not fit well, and algorithms perform poorly or need hand-crafted tuning. It is therefore of fundamental importance to understand how data-driven methods can be incorporated.
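
Compressed sensing illustrates the analytically tractable case: for the LASSO formulation, the classical ISTA iteration fits in a few lines, and its step size follows explicitly from the spectral norm of the forward operator. The sketch below is illustrative only (function and parameter names are not taken from the cited works) and could be applied, for instance, to the toy A and y above.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, y, lam=0.05, n_iter=500):
        # Plain ISTA for the LASSO  min_x 0.5*||A x - y||^2 + lam*||x||_1.
        # The step size 1/L is one of the tuning parameters that can be
        # computed explicitly: L is the Lipschitz constant of the gradient.
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x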

An important direction is to extend the idea of regularization towards data-driven approaches. Several ideas already exist in the literature, such as "network Tikhonov regularization" or "regularization by denoising". For example, deep autoencoders can be trained to represent a certain low-dimensional structure in the data, and the corresponding decoder can be used to set up an operational regularization penalty for classical inverse-problem algorithms. Another approach is to train a deep generative model on training data to describe the unknown quantity and then optimize over latent-space variables to solve the inverse problem. In both approaches it is desirable to follow a variational approach, since smoothness and regularity in the latent space are crucial for the success of many iterative descent algorithms. A further promising direction is the idea of unfolding or unrolling classical signal processing algorithms into deep networks and training the resulting computational graph.
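
The generative-model route can be sketched as follows, assuming a pretrained decoder/generator G that maps a latent code z to the signal domain, a known linear forward operator A, and an observation y (all names and hyperparameters here are hypothetical). The inverse problem is attacked by gradient descent over the latent variables; the smoothness of G in z mentioned above is what keeps this descent well behaved.

    import torch

    def recover_with_generator(G, A, y, latent_dim, n_steps=2000, lr=1e-2):
        # Search the latent space for a code whose decoded signal explains
        # the data:  min_z || A G(z) - y ||^2 .
        z = torch.zeros(latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = torch.sum((A @ G(z) - y) ** 2)   # data fidelity in measurement space
            loss.backward()
            opt.step()
        return G(z).detach(), z.detach()

A regularized variant would simply add a penalty on z (e.g., its squared norm) to the loss.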

In all these approaches, important questions arise about robustness and uncertainty with respect to changes in the distribution or the domain when going from training to testing. If the new distribution can be probed by sampling, adapting to it may require retraining, and the question is whether this can be done efficiently using ideas from transfer learning. The training strategy is also important to study: the models, the regularizer, or the unfolded algorithm itself can be trained layer-wise, hierarchically, or end-to-end with different optimizers. Depending on the loss and its smoothness, several difficulties may arise, and optimizers may perform differently. It has also been intensively discussed recently how important training is compared with the structure and connectivity of the deep networks. Approaches have been proposed that compute the weights analytically or directly and learn only the parameters of the activation functions, which makes it possible to derive rigorous recovery bounds.
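
This last idea can be sketched as an unrolled ISTA-style network in which the matrices are fixed in closed form and only per-layer step sizes and thresholds (the parameters of the soft-thresholding activation) are trainable, in the spirit of ALISTA. The class below is a hypothetical minimal version: the analytic matrix W is a simple placeholder normalization, whereas ALISTA derives it by minimizing coherence with the dictionary.

    import torch
    import torch.nn as nn

    class TinyUnrolledISTA(nn.Module):
        def __init__(self, A, n_layers=8):
            super().__init__()
            A = torch.as_tensor(A, dtype=torch.float32)
            self.register_buffer("A", A)
            # Placeholder analytic matrix (no trained weights): a scaled copy
            # of A; ALISTA instead solves a coherence-minimization problem.
            self.register_buffer("W", A / torch.linalg.norm(A, 2) ** 2)
            # Only the per-layer step sizes and thresholds are learned.
            self.gamma = nn.Parameter(torch.ones(n_layers))
            self.theta = nn.Parameter(0.1 * torch.ones(n_layers))

        def forward(self, y):
            x = torch.zeros(self.A.shape[1], device=y.device)
            for g, t in zip(self.gamma, self.theta):
                r = y - self.A @ x                  # residual in measurement space
                x = x + g * (self.W.t() @ r)        # fixed, analytically chosen direction
                x = torch.sign(x) * torch.relu(torch.abs(x) - t)   # learned soft threshold
            return x

Such a network would then be trained end-to-end, e.g. on pairs of simulated measurements and ground-truth signals, which is exactly where the questions about loss smoothness and the choice of optimizer raised above come into play.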

Related Publications

[1] Osman Musa, Peter Jung, and Giuseppe Caire. Plug-And-Play Learned Gaussian-mixture Approximate Message Passing. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, nov 2021. [ bib | arXiv | http ]
[2] Linh Kästner, Samim Ahmadi, Florian Jonietz, Peter Jung, Giuseppe Caire, Mathias Ziegler, and Jens Lambrecht. Classification of Spot-welded Joints in Laser Thermography Data using Convolutional Neural Networks. IEEE Access, pages 1--1, oct 2021. [ bib | DOI | arXiv | http ]
[3] Samim Ahmadi, Linh Kästner, Jan Christian Hauffen, Peter Jung, and Mathias Ziegler. Photothermal-SR-Net: A Customized Deep Unfolding Neural Network for Photothermal Super Resolution Imaging. apr 2021. [ bib | arXiv | http ]
[4] Martin Reiche and Peter Jung. DeepInit Phase Retrieval. jul 2020. [ bib | arXiv | http ]
[5] Freya Behrens, Jonathan Sauder, and Peter Jung. Towards Neurally Augmented ALISTA. In NeurIPS 2020 Workshop Deep Inverse, page 4, 2020. [ bib | .pdf ]
[6] Peter Jung. AI-Aided Signal Reconstruction for Inverse Problems, 2020. [ bib | .pdf ]
[7] Osman Musa, Peter Jung, and Giuseppe Caire. Plug-And-Play Learned Gaussian-mixture Approximate Message Passing, 2020. [ bib | .pdf ]
[8] Anko Börner, Heinz-Wilhelm Hübers, Odej Kao, Florian Schmidt, Sören Becker, Joachim Denzler, Daniel Matolin, David Haber, Sergio Lucia, Wojciech Samek, Rudolph Triebel, Sascha Eichstädt, Felix Biessmann, Anna Kruspe, Peter Jung, Manon Kok, Guillermo Gallego, and Ralf Berger. Sensor Artificial Intelligence and its Application to Space Systems – A White Paper. 2020. [ bib | .pdf ]
