NIPS Workshop – Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View

Call for contributions

Workshop at the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009)
Whistler, BC, Canada, December 12, 2009.



Accounting for dependencies between outputs has important applications in several areas. In sensor networks, for example, missing signals from temporarily failing sensors may be predicted thanks to their correlations with signals acquired from other sensors. In geo-statistics, the concentration of heavy pollutant metals (for example, copper), which is expensive to measure, can be predicted from inexpensive and oversampled correlated variables (for example, pH data).

Within machine learning, this framework is known as multi-task learning. Multi-task learning is a general learning framework in which it is assumed that learning multiple tasks simultaneously leads to better models and performance than learning the same tasks individually. By exploiting correlations and dependencies among tasks, it becomes possible to handle common practical situations such as missing data, or to increase the effective amount of data when only a small amount is available per task.

In the last few years there has been a growing amount of work on multi-task learning. From the Bayesian perspective, the problem has been tackled using hierarchical Bayesian models together with neural networks. More recently, the Gaussian process framework has been considered, in which correlations among tasks can be captured by appropriate choices of covariance functions. Many of these choices have been inspired by the geo-statistics literature, where a similar problem is known as cokriging. From the frequentist perspective, regularization theory has provided a natural framework for multi-task problems: assumptions on the relations between the different tasks translate into the design of suitable regularizers. Despite the common traits of the proposed approaches, so far the different communities have worked largely independently. For example, it is natural to ask whether the proposed choices of covariance function can be interpreted from a regularization perspective, or, in turn, whether each regularizer induces a specific form of the covariance/kernel function. By bringing together the latest advances from both communities, we aim to establish the state of the art and the possible future challenges in multiple-task learning.

Although there are several approaches to multi-task learning, in this workshop we focus our attention on methods based on constructing covariance functions (kernels) for multiple outputs, to be employed, for example, together with Gaussian processes or regularization networks.
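As an illustration of the kind of construction in scope (not part of the call itself), the intrinsic coregionalization model from the geo-statistics literature builds a multi-output covariance as the Kronecker product of a task-similarity matrix B and a base kernel over the inputs. A minimal NumPy sketch, with all names and parameter values chosen purely for illustration:

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential base kernel on one-dimensional inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def icm_kernel(X, B, lengthscale=1.0):
    """Intrinsic coregionalization model: K((x,i),(x',j)) = B[i,j] * k(x,x').

    B encodes inter-task covariances; the Kronecker product stacks the
    per-task blocks into one joint covariance over all outputs.
    """
    return np.kron(B, rbf(X, X, lengthscale))

# Two tasks whose correlation (0.9 here, an arbitrary choice) is encoded in B.
B = np.array([[1.0, 0.9],
              [0.9, 1.0]])
X = np.linspace(0.0, 1.0, 50)
K = icm_kernel(X, B)  # joint 100 x 100 covariance over both outputs

# Jointly sample both outputs; the draws share structure across tasks.
rng = np.random.default_rng(0)
f = rng.multivariate_normal(np.zeros(2 * len(X)), K + 1e-8 * np.eye(2 * len(X)))
f1, f2 = f[:50], f[50:]  # samples for task 1 and task 2
```

Richer variants (linear models of coregionalization, convolution processes) replace the single rank-one structure with sums of such terms.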


Invited speakers:

David Higdon, Los Alamos National Laboratory, USA
Sayan Mukherjee, Duke University, USA
Andreas Argyriou, Toyota Technological Institute, USA
Hans Wackernagel, Ecole des Mines Paris, France


Important dates:

Deadline for abstract submission: October 23, 2009
Notification of acceptance: November 6, 2009
Workshop: December 12, 2009


Submissions will be accepted as 20-minute talks or spotlights.
Extended abstracts should use the NIPS style and be at most 4 pages long.
Abstracts should be sent to: nips.mock09 (at)


Organizers:

Mauricio A. Álvarez, University of Manchester
Lorenzo Rosasco, MIT
Neil D. Lawrence, University of Manchester

The workshop is sponsored by the EU FP7 PASCAL2 Network of Excellence.