Data sets with a very large number of explanatory variables are becoming more and more common, both in applications and in theoretical investigations. In economic applications, for instance, the revealed preferences of market players are observed, and the analyst tries to explain them through a complex model in which the players' behavior enters only as an indirect observation. State-of-the-art statistical approaches often formulate such models as inverse problems, but the corresponding methods can suffer from the curse of dimensionality: when there are "too many" possible explanatory variables, additional regularization is needed. Inverse problem theory already offers sophisticated regularization methods for smooth models, but it is only beginning to integrate sparsity concepts. For high-dimensional linear models, sparsity regularization has proved to be a convincing way to tackle the issue, both in theory and in practice, but much ground remains to be explored. Paralleling these developments in the statistics community are recent advances in machine learning methodology and statistical learning theory, where the themes of sparsity and inverse problems have become intertwined.
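As a concrete illustration (a standard textbook example, not drawn from the workshop programme itself), sparsity regularization for a high-dimensional linear model is typified by the Lasso estimator
\[
\hat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n}\, \| y - X\beta \|_2^2 \; + \; \lambda \, \|\beta\|_1 ,
\]
where $y \in \mathbb{R}^n$ is the observed response, $X \in \mathbb{R}^{n \times p}$ the design matrix with possibly $p \gg n$, and the $\ell_1$ penalty drives most coordinates of the estimate to exactly zero; the parameter $\lambda > 0$ governs the trade-off between data fit and sparsity.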
The workshop will focus on the different ways of attacking the same question: there are many potential models to choose from, but each of them is relatively simple; equivalently, each model is parameterized by many variables, most of which are zero. Yet the choice of the right model or regularization parameter is crucial for obtaining stable and reliable results.
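To make the role of this tuning concrete, for the Lasso sketched above standard theory (assuming, for instance, sub-Gaussian noise with standard deviation $\sigma$) suggests a regularization parameter of order $\lambda \asymp \sigma \sqrt{\log p / n}$, while in practice $\lambda$ is often selected by cross-validation; these recipes are not part of the workshop description itself, but they indicate the kind of model-selection question at stake.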