• Relations between the luckiness framework, compatibility functions and empirically defined regularization strategies in general.
  • Luckiness and compatibility can be seen as defining a prior in terms of the (unknown but fixed) distribution generating the data. To what extent can this approach be generalized while still ensuring effective learning?
  • Models of prior knowledge that capture both complexity and distribution dependence for powerful learning.
  • Theoretical analysis of the use of additional (empirical) side information in the form of unlabeled data or data labeled by related problems.
  • Examples of proper or natural luckiness or compatibility functions in practical learning tasks. How could, for example, luckiness be defined in the context of collaborative filtering?
  • The effect of (empirical) preprocessing of the data, whether label-independent (as in PCA, other data-dependent transformations, or cleaning) or label-dependent (as in PLS, or in feature selection and construction based on the training sample).
  • Empirically defined theoretical measures such as Rademacher complexity or sparsity coefficients and their relevance for analyzing empirical hypothesis spaces.
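As an illustration of the last point, empirical Rademacher complexity is defined directly on the observed sample, so it can be estimated by Monte Carlo once each hypothesis has been evaluated on the data. The sketch below (the function name and setup are illustrative, not taken from the workshop text) assumes a finite hypothesis class given as a matrix of hypothesis outputs:

```python
import numpy as np

def empirical_rademacher(hypothesis_outputs, n_trials=1000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity
    of a finite hypothesis class on a fixed sample of size m:
        E_sigma [ sup_h (1/m) * sum_i sigma_i * h(x_i) ],
    where sigma_i are i.i.d. uniform {-1, +1} signs.

    hypothesis_outputs: array of shape (n_hypotheses, m); row j holds
    the values h_j(x_1), ..., h_j(x_m) of one hypothesis on the sample.
    """
    rng = np.random.default_rng(rng)
    H = np.asarray(hypothesis_outputs, dtype=float)
    m = H.shape[1]
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)  # Rademacher signs
        total += np.max(H @ sigma) / m           # sup over the class
    return total / n_trials
```

For example, a class containing only the zero function has complexity exactly 0, while the two constant classifiers {+1, -1} on m points recover E|mean(sigma)|, which shrinks at rate O(1/sqrt(m)), matching the intuition that richer classes and smaller samples yield larger complexity.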

This workshop is intended for researchers interested in the theoretical underpinnings of learning algorithms that do not comply with the standard learning-theoretic assumptions.

Workshop Chairs

  • Maria-Florina Balcan
  • Shai Ben-David
  • Avrim Blum
  • Kristiaan Pelckmans
  • John Shawe-Taylor