Motivation

Intelligent beings commonly transfer previously learned "knowledge" to new domains, which makes them capable of learning new tasks from very few examples. In contrast, many recent approaches to machine learning have focused on "brute force" supervised learning from massive amounts of labeled data. While this approach makes practical sense when such data are available, it does not apply when the available training data are largely unlabeled. Further, even when large amounts of labeled data are available, some categories may have far fewer examples than others; for Internet documents and images, for instance, the number of examples per category typically follows a power law. The question is whether we can exploit similar data (labeled with different types of labels, or completely unlabeled) to improve the performance of a learning machine. This workshop will address a question of fundamental and practical interest in machine learning: the assessment of methods capable of generating data representations that can be reused from task to task. To pave the way for the workshop, we organized a challenge on unsupervised and transfer learning.

Competition

The unsupervised and transfer learning challenge has just started and will end on April 15, 2011. The results of the challenge will be discussed at the workshop, and we will invite the best entrants to present their work. Further, we intend to launch a second challenge on supervised transfer learning, whose results will be discussed at NIPS 2011. This workshop is not limited to the competition program we are organizing; we encourage researchers to submit papers on any of the workshop topics.

Participation

We invite contributions relevant to unsupervised and transfer learning (UTL), including:

Algorithms for UTL, in particular addressing:

  • Learning from unlabeled or partially labeled data
  • Learning from few examples per class, and transfer learning
  • Semi-supervised learning
  • Multi-task learning
  • Covariate shift
  • Deep learning architectures, including convolutional neural networks
  • Integrating information from multiple sources
  • Learning data representations
  • Kernel or similarity measure learning

Applications pertinent to the workshop topic, including:

  • Text processing (in particular across multiple languages)
  • Image or video indexing and retrieval
  • Bioinformatics
  • Robotics
  • Datasets and benchmarks

Program committee

  • David Aha
  • Yoshua Bengio
  • Joachim Buhmann
  • Gideon Dror
  • Isabelle Guyon
  • Quoc Le
  • Vincent Lemaire
  • Alexandru Niculescu-Mizil
  • Gregoire Montavon
  • Atiqur Rahman Ahad
  • Gerard Rinkus
  • Gunnar Raetsch
  • Graham Taylor
  • Prasad Tadepalli
  • Dale Schuurmans
  • Danny Silver