PASCAL2 Posts

Special Issue on Statistical and Relational Learning and Mining in Bioinformatics

Special issue of Fundamenta Informaticae on Statistical
and Relational Learning and Mining in Bioinformatics
——————————————————-

Call for papers

There is increasing interest in structured data in the machine
learning community, as shown by the growing number of dedicated
conferences and workshops (MLG, SRL, ILP, MRDM). Bioinformatics is an
application domain of increasing popularity where information is
naturally represented in terms of relations between (possibly
heterogeneous) objects.

The special issue of Fundamenta Informaticae on Statistical and
Relational Learning in Bioinformatics will focus on learning methods
for structured biological data (relational data, graphs, logic-based
descriptions, etc.) in the presence of uncertainty (probabilistic logic
models, Bayesian methods, etc.). These methods are well suited to this
application area, since the available data is highly complex and tends
to have a significant amount of missing information.

We therefore invite submissions that describe new theoretical insights,
methods, problem settings, applications, and models exploiting
structured data in the field of biology. Survey papers discussing the
relationships among the various mathematical frameworks are especially
encouraged. Both new submissions and extended versions of contributions
to the StReBio workshops at ECML-08 and KDD-09 are welcome.

Methods include, but are not restricted to:

* Statistical Relational Learning
* Relational Probabilistic Models
* Inductive Logic Programming
* Multi-relational Data Mining
* Graph Methods

The data, structures, or models considered can include, but are not limited to:

* Sequences (DNA, RNA, protein)
* Pathways (chemical, metabolic, mutation, interaction pathways)
* 2D, 3D structures of proteins, RNA
* Chemical structures (e.g. QSAR, especially regarding interaction
of compounds with proteins)
* Evolutionary relations (phylogeny, homology relations)
* Ontology integration (gene, enzyme, protein function ontologies)
* Large networks (regulatory, co-expression, interaction, and
metabolic networks)
* Concept graphs (including compounds, articles, authors, references)

Given the nature of the Fundamenta Informaticae journal, papers should
contain a more theoretical part presenting background, methods and
fundamental properties.

Important dates:
* Abstract submission: November 15th
* Paper submission: November 25th
* Notification: February 15th
* Revised versions: April 15th

Submission information:
Information on formatting (including LaTeX templates) can be found at
http://fi.mimuw.edu.pl/ (‘submissions’ tab). Please send submissions
to StReBioFI@cs.kuleuven.be

Guest editors:
Jan Ramon (K.U.Leuven, Belgium)
Fabrizio Costa (K.U.Leuven, Belgium)
Christophe Costa Florêncio (K.U.Leuven, Belgium)
Joost Kok (Leiden University, Netherlands)

ICMI-MLMI 2009

ICMI-MLMI 2009
MIT Media Lab, Cambridge MA, USA
November 2-6, 2009

http://icmi2009.acm.org

Call for participation

The Eleventh International Conference on Multimodal Interfaces
and Workshop on Machine Learning for Multimodal Interaction
(ICMI-MLMI) will jointly take place at the MIT Media Lab in
Cambridge, MA, USA on November 2-6, 2009. The main aim of
ICMI-MLMI 2009 is to further scientific research on multimodal
interaction methods and systems. The joint
conference will focus on major trends and challenges in this
area, and work to identify a roadmap for future research and
commercial success. ICMI-MLMI 2009 will feature a single-track
main conference with keynote speakers, technical paper
presentations, poster sessions, demonstrations, and workshops.

General Co-chairs

James Crowley, INRIA
Yuri Ivanov, MERL
Christopher Wren, Google

Program Co-Chairs

Daniel Gatica-Perez, Idiap Research Institute
Michael Johnston, AT&T Research
Rainer Stiefelhagen, University of Karlsruhe

Supported by PASCAL 2
———————————————————-

Call for Papers – IJCV Special Issue – Structured Prediction and Inference – 26FEB2010

Dear colleagues,

We would like to announce an upcoming IJCV special issue
on STRUCTURED PREDICTION AND INFERENCE. The call for
papers follows below. A PDF version is available from
the IJCV website: http://www.springer.com/11263

Best regards,

Matthew Blaschko and Christoph Lampert

**************************************************************************

——————————————————-
International Journal of Computer Vision
Special Issue on
Structured Prediction and Inference
——————————————————-

Guest Editors
Matthew B. Blaschko, University of Oxford (blaschko@robots.ox.ac.uk)
Christoph H. Lampert, Max Planck Institute for Biological Cybernetics,
Tuebingen, Germany (chl@tuebingen.mpg.de)
——————————————————-

Background

Many computer vision problems can be naturally formulated as prediction
tasks over structured objects. Image segmentation, stereo reconstruction,
human pose estimation and natural scene analysis are all examples of such
problems, in which the quantity one tries to predict consists of multiple
interdependent parts. The structured output learning paradigm offers a
natural framework for such tasks, and recently introduced methods for
end-to-end discriminative training of conditional random fields (CRFs)
and structured support vector machines (S-SVMs) for image classification
and interpretation show that computer vision is not just a consumer of
existing machine learning developments in this area, but one of the driving
forces behind their development. The complexity of structured prediction
models makes the problem of inference in these models an integral part of
their analysis. While the machine learning literature has largely focused
on message passing, computer vision research has introduced novel
applications of branch-and-bound and graph cuts as inference algorithms.
Articles addressing these issues are particularly encouraged for submission
to the special issue.
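
To make the inference step above concrete, here is a minimal sketch (in
Python/numpy, with hypothetical variable names) of margin-rescaled
loss-augmented decoding for a chain-structured model with Hamming loss,
the core subroutine when training an S-SVM; it is an illustration under
these assumptions, not code from the special issue.

import numpy as np

def loss_augmented_viterbi(unary, pairwise, y_true):
    """argmax_y score(y) + Hamming(y, y_true) for a chain model.
    unary: (T, K) per-position label scores; pairwise: (K, K)
    transition scores; y_true: (T,) ground-truth labels."""
    T, K = unary.shape
    # Margin rescaling: add a unit loss wherever a label differs
    # from the ground truth.
    aug = unary + 1.0
    aug[np.arange(T), y_true] -= 1.0
    score, back = aug[0].copy(), np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise   # (K_prev, K_curr)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + aug[t]
    y = np.zeros(T, dtype=int)
    y[-1] = score.argmax()
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y

During S-SVM training, the returned labeling is the most violated
constraint; dropping the Hamming term recovers plain Viterbi prediction.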

Topics

Original papers are solicited on one or more aspects of the structured
prediction framework in a computer vision setting, that is, papers that
address the problem of prediction from an input space, such as images or
video, to a structured and interdependent output space. Submissions can
be theoretical or applied contributions as well as position papers.
Topics of interest include, but are not limited to:

* Training for structured output learning
– Probabilistic vs. max-margin training
– Generative vs. discriminative training
– Semi-supervised or unsupervised learning
– Dealing with label noise

* Inference methods for structured output learning
– Exact vs. approximate inference techniques
– Pixel, voxel, and superpixel random field optimization
– Priors and higher order clique optimization
– Approaches that scale to large amounts of training and test data

* Computer vision applications of structured output learning
– Segmentation
– Stereo reconstruction
– Relationship between scene components
– Hierarchical models

Authors are encouraged to submit high-quality, original work that has
neither appeared in, nor is under consideration by, other journals. All open
submissions will be peer reviewed subject to the standards of the journal.
Manuscripts based on previously published conference papers must be extended
substantially. Springer offers authors, editors and reviewers of the
International Journal of Computer Vision a web-enabled online manuscript
submission and review system. Our online system offers authors the ability
to track the review process of their manuscript. Manuscripts should be
submitted to: http://VISI.edmgr.com. This online system offers easy and
straightforward log-in and submission procedures, and supports a wide range
of submission file formats.

* Paper submission deadline:
– February 26, 2010
* Estimated Online Publication:
– Fall, 2010

=================================================

NIPS 2009 Workshop on Approximate Learning of Large Scale Graphical Models: Theory and Applications

—————————————————————————————————————–
Call for Participation

NIPS 2009 Workshop on Approximate Learning of Large Scale Graphical Models: Theory and Applications

http://www.cs.toronto.edu/~rsalakhu/workshop_nips2009/

December 12, 2009
Whistler, Canada
—————————————————————————————————————–

DESCRIPTION:

Undirected graphical models provide a powerful framework for representing dependency structure between random variables. Learning the parameters of undirected models plays a crucial role in solving key problems in many machine learning applications, including natural language processing, visual object recognition, speech perception, information retrieval, computational biology, and many others.

Learning in undirected graphical models of large treewidth is difficult because of the hard inference problem induced by the partition function for maximum likelihood learning, or by finding the MAP assignment for margin-based loss functions. Over the last decade, there has been considerable progress in developing algorithms for approximating the partition function and MAP assignment, both via variational approaches (e.g., belief propagation) and sampling algorithms (e.g., MCMC). More recently, researchers have begun to apply these methods to learning large, densely-connected undirected graphical models that may contain millions of parameters. A notable example is the learning of Deep Belief Networks and Deep Boltzmann Machines, which employ MCMC strategies to greedily learn deep hierarchical models.
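
As a concrete illustration of the last point, the following sketch (in
Python/numpy, binary units, illustrative learning rate) shows the
contrastive divergence (CD-1) update commonly used to train a Restricted
Boltzmann Machine, the building block of the greedy layer-wise schemes
mentioned above.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.05):
    """One CD-1 step for a binary RBM (visible bias b, hidden bias c)."""
    # Positive phase: hidden activations given the data batch v0.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step yields a reconstruction.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Approximate maximum-likelihood gradient (batch average),
    # sidestepping the intractable partition function.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c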

The goal of this workshop is to assess the current state of the field and explore new directions in both theoretical foundations and empirical applications. In particular, we shall be interested in discussing the following topics:

State of the field: What are the existing methods and what is the relationship between them? Which problems can be solved using existing algorithms and which cannot?

The use of approximate inference in learning: There are many algorithms for approximate inference. In principle, all of these can be “plugged into” learning algorithms. What are the relative merits of using one approximation vs. another (e.g., an MCMC approximation vs. a variational one)? Are there effective combined strategies?

Learning with latent variables: Graphical models with latent (or hidden) variables often possess more expressive power than models with only observed variables. However, introducing hidden variables makes learning far more difficult. Can we develop better optimization and approximation techniques that would allow us to learn parameters in such models more efficiently?

Learning in models with deep architectures: Recently, there has been notable progress in learning deep probabilistic models, including Deep Belief Networks and Deep Boltzmann Machines, that contain many layers of hidden variables and millions of parameters. The success of these models relies heavily on the greedy layer-by-layer unsupervised learning of a densely-connected undirected model called a Restricted Boltzmann Machine (RBM). Can we develop efficient and more accurate learning algorithms for RBMs and deep multilayer generative models? How can learning be extended to the semi-supervised setting and made more robust to highly ambiguous or missing inputs? What sort of theoretical guarantees can be obtained for such greedy learning schemes?

Scalability and success in real-world applications: How well do existing approximate learning algorithms scale to large-scale problems, including problems in computer vision, bioinformatics, natural language processing, and information retrieval? How well do these algorithms perform when applied to modeling high-dimensional real-world distributions (e.g., the distribution of natural images)?

Theoretical Foundations: What are the theoretical guarantees of the learning algorithms (e.g., accuracy of the learned parameters relative to the best possible, or asymptotic convergence guarantees such as almost sure convergence to the maximum likelihood estimator)? What are the tradeoffs between running time and accuracy?

Loss functions: In the supervised learning setting, two popular loss functions are the log-loss (e.g., in conditional random fields) and the margin-based loss (e.g., in maximum margin Markov networks). In intractable models these approaches result in rather different approximation schemes (since the former requires partition function estimation, whereas the latter only requires MAP estimates). What can be said about the differences between these schemes? When is one model more appropriate than the other? Can margin-based models be applied in the unsupervised case?

Structure vs. accuracy: Which graph structures are more amenable to approximations and why? How can structure learning be combined with approximate learning to yield models that are both descriptive and learnable with good accuracy?

PROGRAM:

The program will consist entirely of invited talks. The invited speakers are:
Pedro Domingos, University of Washington
Bill Freeman, MIT
Geoffrey Hinton, University of Toronto
Daphne Koller, Stanford University
David McAllester, Toyota Technological Institute at Chicago
Ben Taskar, University of Pennsylvania
Noah Smith, Carnegie Mellon University
Eric Xing, Carnegie Mellon University

ORGANIZERS:

Ruslan Salakhutdinov, MIT
Amir Globerson, Hebrew University
David Sontag, MIT

The workshop is supported by PASCAL (a non-core workshop).

CFC: NIPS workshop “Clustering: Science or Art?”

December 2009, Whistler, Canada
http://clusteringtheory.org/
Submission deadline: Friday October 30th, 2009

Organizers:
————

Shai Ben-David, Ulrike von Luxburg, Avrim Blum, Isabelle Guyon, Robert C. Williamson, Reza Bosagh Zadeh, Margareta Ackerman

Topic of the workshop:
———————

Clustering is one of the most widely used techniques for exploratory data analysis. In the past five decades, many clustering algorithms have been developed and applied to a wide range of practical problems. However, in spite of the abundance of clustering research published every year, we are only beginning to understand some of the most basic issues in clustering. Even though there exist many claims to success, there seems to be a lack of well-established methodological procedures. In particular, the study of issues that are independent of any specific clustering algorithm, objective function, or data-generative model is only in its infancy. The state of affairs is perhaps not dissimilar to that of computer programming at the time of Donald Knuth’s famous Turing Award lecture: “It is clearly an art, but many feel that a science is possible and desirable”.

This workshop aims to initiate a dialog between theoreticians and practitioners, with the goal of bridging the theory-practice gap in this area. We structure the workshop along three main questions:

1. FROM THEORY TO PRACTICE: Which abstract theoretical characterizations / properties / statements about clustering algorithms exist that can be helpful for practitioners and should be
adopted in practice?

2. FROM PRACTICE TO THEORY: What concrete questions would practitioners like to see addressed by theoreticians? Can we identify de-facto practices in clustering in need of theoretical grounding?
Which obscure (but seemingly needed or useful) practices are in need of rationalization?

3. FROM ART TO SCIENCE: In contrast to supervised learning, where there exist rigorous methods to assess the quality of an algorithm, such standards do not exist for clustering – clustering is still
largely an art. How can we progress towards more principled approaches, including the introduction of falsifiable hypotheses and properly designed experimentation? How could one set up a clustering challenge to compare different clustering algorithms? What could be scientific standards to evaluate a clustering algorithm in a paper?

Call for Contributions:
————–

The workshop will consist of a mix of presentations and discussions. Researchers who want to contribute should submit an extended abstract of their work (at most 4 pages, PDF format, following the NIPS style guide) by email to
nips09 at clusteringtheory.org.

*** The deadline is Friday, October 30th ***

The organizers will review all submissions. Notification of acceptance will be sent out by Friday November 6th.

OPT 2009: Optimization for Machine Learning, First Call for Participation

OPT 2009
2nd NIPS Workshop on Optimization for Machine Learning
NIPS*2009 Workshop
December 11th or 12th, 2009, Whistler, Canada

URL: http://opt.kyb.tuebingen.mpg.de

Deadline for submission: 16th October 2009

Abstract
——–

It is fair to say that at the heart of every machine learning algorithm is an optimization problem. It is only recently that this viewpoint has gained a significant following. Classical optimization techniques based on convex optimization have occupied center stage due to their attractive theoretical properties. However, new non-smooth and non-convex problems are being posed by machine learning paradigms such as structured learning and semi-supervised learning. Moreover, machine learning is now very important for real-world problems which often have massive datasets, streaming inputs, and complex models that also pose significant algorithmic and engineering challenges. In summary, machine learning not only provides interesting applications but also challenges the underlying assumptions of most existing optimization algorithms.

Therefore, there is a pressing need for optimization “tuned” to the machine learning context. For example, techniques such as non-convex optimization (for semi-supervised learning), combinatorial optimization and relaxations (structured learning), non-smooth optimization (sparsity constraints, L1, Lasso, structure learning), stochastic optimization (massive datasets, noisy data), decomposition techniques (parallel and distributed computation), and online learning (streaming inputs) are relevant in this setting. These techniques naturally draw inspiration from other fields, such as operations research, theoretical computer science, and mathematical optimization.
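
As one small example of the non-smooth setting above, the
proximal-gradient (ISTA) iteration for the Lasso alternates a gradient
step on the smooth least-squares term with soft thresholding; the sketch
below (Python/numpy, illustrative names and step size) is meant only to
ground the discussion.

import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||A w - y||^2 + lam * ||w||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)    # gradient of the smooth term
        w = soft_threshold(w - grad / L, lam / L)
    return w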

Motivated by these concerns, we would like to address these issues in the framework of this workshop.

Background and Objectives
————————-
This workshop continues the successful PASCAL2 Workshop on Optimization for Machine Learning held at NIPS*2008 in Whistler, Canada, which was very well received, with packed attendance almost throughout the day.

Other workshops, such as ‘Mathematical Programming in Machine Learning / Data
Mining’, held from 2005–2007, also share the spirit of our workshop. These workshops were quite extensive and provided a solid platform for encouraging exchange between machine learners and optimization researchers. Another relevant workshop was the BigML NIPS*2007 workshop, which focused on algorithmic challenges faced in large-scale machine learning tasks, with a focus on
parallelization or online learning.

Our workshop addresses the following major issues, some of which have not been previously tackled as a combined optimization and machine learning effort. In particular, the main aims of our workshop are:

+ Bring together experts from machine learning, optimization, operations research, and statistics to further an exchange of ideas and techniques

+ Focus on problems of interest to the NIPS audience (some basic examples are given below)

+ Identify a set of important open problems and issues that lie at the intersection of both machine learning and optimization

Call for Participation
———————-

We invite high quality submissions for presentation as talks or poster presentations during the workshop. We are especially interested in participants who can contribute theory / algorithms, applications, or implementations with a machine learning focus in the following areas:

* Non-Convex Optimization
– Non-negative matrix and tensor approximation
– Non-convex quadratic programming, including binary QPs
– Convex Concave Decompositions, D.C. Programming
– Training of deep architectures and large hidden variable models

* Optimization with Sparsity Constraints
– Combinatorial methods for L0 norm minimization
– L1 and group L1 penalized methods
– Sparse PCA
– Rank minimization methods

* Optimization in Graphical Models
– Structure learning
– MAP estimation in continuous and discrete random fields

* Combinatorial Optimization
– Clustering and graph-partitioning
– Semi-supervised and multiple-instance learning
– Feature and subspace selection

* Stochastic, Parallel and Online Optimization
– Large-scale learning, massive data sets
– Distributed learning algorithms

* Algorithms and Techniques, especially with a focus on an underlying application
– Polyhedral combinatorics, polytopes and strong valid inequalities
– Linear and higher-order relaxations
– Decomposition for large-scale, message-passing and online learning
– Global and Lipschitz optimization
– Algorithms for non-smooth optimization
– Approximation Algorithms

Important Dates
—————

* Deadline for submission of papers: 16th October 2009
* Notification of acceptance: 7th November 2009
* Final version of submission: 20th November 2009
* Workshop date: 12th December 2009

Please note that at least one author of each accepted paper must be available to present the paper at the workshop. Further details regarding the submission process are available at the workshop homepage.

Submission
———-

Submissions should ideally be 3-4 pages long (with a hard limit of 6 pages). They should be double-blind, use the NIPS format, and be submitted via CMT at
https://cmt.research.microsoft.com/OPT2009/Default.aspx

For more details, please see the workshop webpage.

Workshop
——–

The workshop will be a one-day event with a morning and an afternoon session. In addition to a lunch break, long coffee breaks will be offered in both the morning and the afternoon. Posters and demonstrations can be presented during these breaks.

The workshop will conclude with a panel discussing future directions and potential workshops that expand upon the topics of this workshop. Special focus will be placed on establishing areas, methods, and problems of interest.

Invited Speakers
—————-

* Arkadi Nemirovski — Georgia Institute of Technology
* Nathan Srebro — Toyota Technological Institute at Chicago
* TBD

Program Committee
—————–

* Andreas Argyriou, University College London
* Alexandre d’Aspremont, Princeton University
* Léon Bottou, NEC Laboratories America
* Tijl De Bie, University of Bristol
* Chuong Do, Stanford University
* John Duchi, University of California, Berkeley
* Vojtech Franc, Czech Technical University
* Dongmin Kim, University of Texas at Austin
* Sathiya Keerthi, Yahoo! Research
* Gert Lanckriet, University of California, San Diego
* Chih-Jen Lin, National Taiwan University
* Cheng Soon Ong, ETH Zurich
* Pradeep Ravikumar, University of Texas at Austin
* Onur Şeref, University of Florida
* Mark Schmidt, University of British Columbia
* Nathan Srebro, Toyota Technological Institute at Chicago and
University of Chicago
* Sandor Szedmák, University of Southampton

Workshop Organizers
——————-

* Sebastian Nowozin, Max Planck Institute for Biological Cybernetics
* Suvrit Sra, Max Planck Institute for Biological Cybernetics
* SVN Vishwanathan, Purdue University, West Lafayette
* Stephen Wright, University of Wisconsin, Madison

The organizers can be contacted through opt@tuebingen.mpg.de.

Acknowledgements
—————-
We gratefully acknowledge MOSEK (http://www.mosek.com) and the EU PASCAL2 network for helping to fund this workshop.

Call for Contributions: NIPS 2009 Workshop on Connectivity Inference in Neuroimaging

Webpage
http://cini2009.kyb.tuebingen.mpg.de

Workshop description

Over the past decade, brain connectivity has become a central theme in the neuroimaging community. At the same time, causal inference has recently emerged as a major research topic in machine learning. Even though the two research questions are closely related, interactions
between the neuroimaging and machine-learning communities have been limited.

The aim of this workshop is to initiate productive interactions between neuroimaging and machine learning by introducing the workshop audience to the different concepts of connectivity/causal inference employed in each of the communities. Special emphasis is placed on discussing commonalities as well as distinctions between various approaches in the context of neuroimaging. Due to the increasing relevance of brain connectivity for analyzing mental states, we also highly welcome contributions discussing applications of brain connectivity measures to real-world problems such as brain-computer interfacing or mental state monitoring.

Topics

We solicit contributions on new approaches to connectivity and/or causal inference for neuroimaging data as well as on applications of connectivity inference to real-world problems. Contributions might address, but are not limited to, the following topics:

* Effective connectivity & causal inference
o Dynamic causal modelling
o Granger causality
o Structural equation models
o Causal Bayesian networks
o Non-Gaussian linear causal models
o Causal additive noise models
* Functional connectivity
o Canonical correlation analysis
o Phase-locking
o Imaginary coherence
o Independent component analysis
* Applications of brain connectivity to real-world problems
o Brain-computer interfaces
o Mental state monitoring

Invited speakers

* Jean Daunizeau, University of Zurich & University College London
* Rainer Goebel, Maastricht University
* Scott Makeig, University of California San Diego

Workshop format

CINI 2009 is a one-day workshop at the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009). Besides three invited talks, in which the audience will be introduced to current approaches for inferring connectivity in neuroimaging data, there will be several contributed talks and an evening poster session. Special emphasis will be placed on a balanced contribution of talks from the neuroimaging and machine learning communities. To foster interaction between communities, approximately 50% of workshop time is reserved for discussions.

Key dates

* Extended abstract submission deadline: October 9th, 2009, 5 pm (PT)
* Notification of acceptance: October 23rd, 2009
* Workshop: December 11th or 12th, 2009

Submission instructions

Please submit extended abstracts (maximum two pages) in either pdf or doc format through the CINI 2009 submission site at https://cmt.research.microsoft.com/CINI2009/. Upon notification of
acceptance, authors will also be notified whether their contribution has been accepted as a contributed talk or poster.

Workshop location

Westin Resort and Spa / Hilton Whistler Resort and Spa
Whistler, B.C., Canada

Organization committee

* Moritz Grosse-Wentrup (primary contact), MPI for Biological Cybernetics, Tuebingen
* Uta Noppeney, MPI for Biological Cybernetics, Tuebingen
* Karl Friston, University College London
* Bernhard Schoelkopf, MPI for Biological Cybernetics, Tuebingen

Program committee

* Olivier David, Institut National de la Sante et de la Recherche
Medicale, Grenoble
* Justin Dauwels, Massachusetts Institute of Technology, Cambridge
* Michael Eichler, Maastricht University
* Jeremy Hill, Max Planck Institute for Biological Cybernetics, Tuebingen
* Guido Nolte, Fraunhofer FIRST, Berlin
* Will Penny, University College London
* Alard Roebroeck, Maastricht University
* Klaas Enno Stephan, University of Zurich
* Ryota Tomioka, University of Tokyo
* Pedro Valdes-Sosa, Cuban Neuroscience Center, Havana

NIPS Workshop – Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View

Call for contributions – Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View

http://intranet.cs.man.ac.uk/mlo/mock09/

Workshop at the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009)
Whistler, BC, Canada, December 12, 2009.

—————————————————————————————————————————————

WORKSHOP DESCRIPTION

Accounting for dependencies between outputs has important applications in several areas. In sensor networks, for example, missing signals from temporarily failing sensors may be predicted thanks to correlations with signals acquired from other sensors. In geo-statistics, prediction of the concentration of heavy-metal pollutants (for example, copper), which are expensive to measure, can be carried out using inexpensive and oversampled variables
(for example, pH data).

Within machine learning, this framework is known as multi-task learning. Multi-task learning is a general learning framework in which it is assumed that learning multiple tasks simultaneously leads to better models and performance than learning the same tasks individually. By exploiting correlations and dependencies among tasks, it becomes possible to handle common practical situations such as missing data, or to increase the effective amount of data when only
little data is available per task.

In the last few years there has been an increasing amount of work on multi-task learning. From the Bayesian perspective, this problem has been tackled using hierarchical Bayesian models together with neural networks. More recently, the Gaussian process framework has been considered, where the correlations among tasks can be captured by appropriate choices of covariance functions. Many of these choices have been inspired by the geo-statistics literature, in which a similar area is known as cokriging. From the frequentist perspective, regularization theory has provided a natural framework to deal with multi-task problems: assumptions on the relations between the different
tasks translate into the design of suitable regularizers. Despite the common traits of the proposed approaches, so far the different communities have worked independently. For example, it is natural to
ask whether the proposed choices of covariance function can be interpreted from a regularization perspective, or, in turn, whether each regularizer induces a specific form of covariance/kernel function. By bringing together the latest advances from both communities, we aim to establish the state of the art and the possible future challenges in the context of multiple-task learning.

Although there are several approaches to multi-task learning, in this workshop we focus on methods based on constructing covariance functions (kernels) for multiple outputs, to be employed, for example, together with Gaussian processes or
regularization networks.
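
As a concrete instance of such a construction, the intrinsic
coregionalization model (one of the simplest cokriging-style choices)
multiplies a base kernel over inputs by a positive semi-definite
task-similarity matrix; the sketch below (Python/numpy, hypothetical
names, RBF base kernel) is illustrative only.

import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Base covariance over inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def icm_kernel(X1, tasks1, X2, tasks2, B, lengthscale=1.0):
    """K((x, i), (x', j)) = B[i, j] * k(x, x'), where B encodes
    inter-task correlations."""
    return B[np.ix_(tasks1, tasks2)] * rbf(X1, X2, lengthscale)

# Example: two correlated tasks observed at shared inputs.
X = np.linspace(0, 1, 5)[:, None]
tasks = np.array([0, 0, 0, 1, 1])        # task index per data point
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])               # task correlation matrix
K = icm_kernel(X, tasks, X, tasks, B)    # joint GP covariance matrix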

INVITED SPEAKERS

David Higdon, Los Alamos National Laboratory, USA
Sayan Mukherjee, Duke University, USA
Andreas Argyriou, Toyota Technological Institute, USA
Hans Wackernagel, Ecole des Mines Paris, France

IMPORTANT DATES

Deadline for abstract submission: October 23, 2009
Notification of acceptance: November 6, 2009
Workshop: December 12, 2009

SUBMISSION FORMAT

Submissions will be accepted as 20-minute talks or spotlights.
Extended abstracts should use the NIPS style, with a maximum of 4 pages.
Abstracts should be sent to: nips.mock09 (at) gmail.com

ORGANISERS

Mauricio A. Álvarez, University of Manchester
Lorenzo Rosasco, MIT
Neil D. Lawrence, University of Manchester

The workshop is sponsored by EU FP7 PASCAL2 Network of Excellence

NIPS workshop on Probabilistic Approaches for Robotics and Control – call for contributions

-Workshop dates
Friday, December 11 or Saturday, December 12, 2009

-Workshop location
Whistler, B.C., Canada, at the Westin Resort and Spa and the
Hilton Whistler Resort and Spa

-Poster submission
Please send an extended abstract of max. 1 page describing the poster you intend to present to
mpd37 (at) cam.ac.uk
Choose a format of your liking, e.g., the standard NIPS template.

The deadline for abstract submissions is October 17, 2009.

The notification will be October 26, 2009.

-Workshop homepage
http://mlg.eng.cam.ac.uk/marc/nipsWS09

-Conference homepage
http://nips.cc

-Workshop Abstract

During the last decade, many areas of Bayesian machine learning have reached a high level of maturity. This has resulted in a variety of theoretically sound and efficient algorithms for learning and inference in the presence of uncertainty. However, in the context of control, robotics, and reinforcement learning, uncertainty has not yet been treated with comparable rigor despite its central role in risk-sensitive control, sensorimotor control, robust control, and cautious control. A consistent treatment of uncertainty is also essential when dealing with stochastic policies, incomplete state information, and exploration strategies.

A typical situation where uncertainty comes into play is when the exact state transition dynamics are unknown and only limited or no expert knowledge is available and/or affordable. One option is to learn a model from data. However, if the model is too far off, this approach can result in arbitrarily bad solutions. This model bias can be sidestepped by the use of flexible model-free methods. The disadvantage of model-free methods is that they do not generalize and
often make less efficient use of data. Therefore, they often need more trials than is feasible for solving a problem on a real-world system. A probabilistic model could be used to make efficient use of data while alleviating model bias by explicitly representing and incorporating uncertainty.

The use of probabilistic approaches requires (approximate) inference algorithms, where Bayesian machine learning can come into play. Although probabilistic modeling and inference conceptually fit
into this context, they are not widespread in robotics, control, and reinforcement learning. Hence, this workshop aims to bring researchers together to discuss the need, the theoretical properties, and the practical implications of probabilistic methods in control, robotics, and reinforcement learning.

One particular focus will be on probabilistic reinforcement learning approaches that profit from recent developments in optimal control, which show that the problem can be substantially simplified if certain structure is imposed. The simplifications include linearity of the (Hamilton-Jacobi) Bellman equation. The duality with Bayesian estimation allows for analytical computation of the optimal control laws and closed-form expressions for the optimal value functions.
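
For readers unfamiliar with this line of work, a minimal sketch of the
discrete, average-cost, linearly solvable case (Python/numpy, toy
dynamics assumed): writing the desirability z(s) = exp(-V(s)) turns the
Bellman equation into the linear eigenproblem lam * z = diag(exp(-q)) P z,
which power iteration solves.

import numpy as np

def solve_lmdp(P, q, n_iter=1000):
    """Linearly solvable MDP: P is the (S, S) passive transition
    matrix, q the (S,) state-cost vector. Returns the desirability
    z (with V = -log z up to a constant) and the average cost."""
    G = np.exp(-q)[:, None] * P
    z = np.ones(P.shape[0])
    for _ in range(n_iter):        # power iteration for the
        z = G @ z                  # principal eigenvector
        z /= np.linalg.norm(z)
    lam = z @ (G @ z) / (z @ z)    # principal eigenvalue
    return z, -np.log(lam)

# The optimally controlled dynamics then follow in closed form:
# p*(s'|s) is proportional to P[s, s'] * z[s'].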

Format

The workshop will consist of short invited presentations and a session with contributed posters (plus poster spotlight). Topics (from a theoretical and practical perspective) to be addressed include, but are not limited to:

– How can we efficiently plan and act in the presence of uncertainty in states/rewards/observations/environment?

– Shall we model the lack of knowledge or can we simply ignore it?

– How can prior knowledge (e.g., expert knowledge and domain knowledge) be incorporated?

– How much manual tuning and human insight (e.g., domain knowledge) is a) required and b) available to achieve good performance?

– Is there a principled way to account for imprecise models and model bias?

– What roles should probabilistic models play in control? Are they needed at all?

– What kinds of probabilistic models are useful?

– In traditional control, hand-crafted control laws often prevail since optimal control laws are mostly too aggressive due to model errors while robust control laws can be too conservative since they always assume the worst case. Can “probabilistic control” bridge the gap between robust and optimal control laws?

– How can we exploit the linearity of the (Hamilton-Jacobi) Bellman equation and the duality with Bayesian estimation?

– Can we compute the optimal control law analytically and is there a closed-form expression of the value function?

– How can existing machine learning methods be applied to efficiently solve stochastic control problems?

*Invited speakers*

Dieter Fox (University of Washington), confirmed
Drew Bagnell (CMU), pending
Evangelos Theodorou (USC), confirmed
Jovan Popovic (MIT), confirmed
Konrad Koerding (Northwestern University), confirmed
Marc Toussaint (TU Berlin), confirmed
Miroslav Karny (Academy of Sciences of the Czech Republic), confirmed
Mohammad Ghavamzadeh (INRIA), pending
Roderick Murray-Smith (University of Glasgow), pending
Bert Kappen (University of Nijmegen), confirmed
Emanuel Todorov (University of Washington), confirmed

*Organizers*

Marc Peter Deisenroth
Bert Kappen
Emanuel Todorov
Duy Nguyen-Tuong
Carl Edward Rasmussen
Jan Peters

NIPS 2009 Workshop on Applications of Topic Models: Text and Beyond CFP

————————————————————–
Call for Contributions and Participation

NIPS 2009 Workshop on Applications of Topic Models: Text and Beyond
December 11 or 12, 2009
http://nips2009.topicmodels.net

Submission Deadline: Friday October 23, 2009
—————————————————————–

Description:
The primary goal of this workshop is to bring together a diverse set of researchers from multiple research areas, all of whom work on topic modeling. Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora. With the increasing availability of other large, heterogeneous data collections, topic models have been adapted to model data from fields as diverse as computer vision,
finance, bioinformatics, cognitive science, music, and the social sciences. While the underlying models are often extremely similar, these communities use topic models in different ways in order to achieve different goals. This one-day workshop will bring together topic modeling researchers from multiple disciplines, providing an opportunity for attendees to meet, present their work and share ideas, as well as inform the wider NIPS community about current research in topic modeling. This workshop will address the following specific goals:

* Identify and formalize open research areas – e.g., how best to evaluate topic modeling performance both within and across different application domains
* Propose, explore, and discuss new application areas
* Discuss how best to facilitate transfer of research ideas between application domains
* Direct future work and generate new application areas, novel modeling approaches, and unexplored collaborative research directions
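
To ground the discussion, the sketch below (Python/numpy, illustrative
hyperparameters) implements collapsed Gibbs sampling for latent Dirichlet
allocation, the canonical topic model described above; it is a toy
reference point, not a submission requirement.

import numpy as np

rng = np.random.default_rng(0)

def lda_gibbs(docs, K, V, alpha=0.1, beta=0.01, n_iter=200):
    """Collapsed Gibbs sampling for LDA.
    docs: list of lists of word ids; K topics; vocabulary size V."""
    ndk = np.zeros((len(docs), K))   # document-topic counts
    nkw = np.zeros((K, V))           # topic-word counts
    nk = np.zeros(K)                 # topic totals
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]          # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Collapsed conditional p(z = k | everything else).
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw   # normalize (+ priors) for topic/word distributions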

We encourage researchers to emphasize real-world applications – ranging from specific applications to entire application domains – in their submissions, and welcome the following types of papers:

* Research papers that propose new topic models for specific applications
* Research papers that apply existing topic models to novel application domains
* Position papers and speculative papers that discuss desiderata of existing application domains or propose new domains and approaches for future topic modeling research
* Papers that discuss practical issues relating to topic models, such as parallel computation environments and scalability for massive data collections
* Papers that investigate evaluation methodologies for topic models

The workshop will consist of invited talks (5 or 6) by established researchers from multiple research communities, contributed talks (4 or 5), a poster session, and a panel session.

Invited Speakers (confirmed):

David Blei (Princeton University)
Mark Johnson (Brown University)
Eric Xing (Carnegie Mellon University)
Mark Steyvers (U.C. Irvine)
Li Fei-Fei (Stanford University)

Submission Instructions:

Submissions should be sent to: nips2009-submit@topicmodels.net

They should include a title, authors, and abstract in plain text, and a 2-4 page extended abstract in NIPS pdf format. Final versions of extended abstracts will be posted on the workshop website.

Dates:
Submission Deadline: Friday October 23, 2009
Notifications: Monday November 9, 2009
Final Versions: Friday November 20, 2009
Workshop: December 11 or 12, 2009

Location:
Westin Resort and Spa / Hilton Whistler Resort and Spa
Whistler, B.C., Canada

http://nips.cc/Conferences/2009/

Organizers:
David Blei (Princeton University)
Jordan Boyd-Graber (Princeton University)
Jonathan Chang (Princeton University)
Katherine Heller (University of Cambridge)
Hanna Wallach (University of Massachusetts, Amherst)

Program Committee:
Edo Airoldi (Harvard University)
Hal Daumé (University of Utah)
Tom Dietterich (Oregon State University)
Laura Dietz (Max-Planck-Institut für Informatik)
Jacob Eisenstein (Carnegie Mellon University)
Tom Griffiths (University of California, Berkeley)
John Lafferty (Carnegie Mellon University)
Jia Li (Stanford University)
Andrew McCallum (University of Massachusetts, Amherst)
David Mimno (University of Massachusetts, Amherst)
Dave Newman (University of California, Irvine)
Padhraic Smyth (University of California, Irvine)
Erik Sudderth (Brown University)
Yee Whye Teh (Gatsby Unit, UCL)
Chong Wang (Princeton University)
Max Welling (University of California, Irvine)
Sinead Williamson (University of Cambridge)
Jerry Zhu (University of Wisconsin, Madison)

Sponsor:
PASCAL 2 (non-core workshop)

Contact:
nips2009@topicmodels.net