Lectureships in Statistical Inference & Multi-Modal Interaction

The Department of Computing Science of the University of Glasgow invites applications for three Lectureships in Statistical Inference, Multi-Modal Interaction, and Formal Methods. The post in Statistical Inference will be of particular interest to candidates with strong track records and interests in Bayesian methodology and Statistical Machine Learning.

[A Lectureship is roughly equivalent to a US Assistant Professorship]

University of Glasgow
Faculty of Information and Mathematical Sciences
Department of Computing Science
3 Lecturer Posts
Salary: £31,513–£35,469 (grade 7) / £38,757–£44,930 (grade 8)

Computing Science at Glasgow was rated as one of the top 10 departments for research in the UK in the 2008 Research Assessment Exercise; for further information about the Department’s research strengths, see http://www.dcs.gla.ac.uk/research/. The Scottish Informatics and Computer Science Alliance (SICSA, http://www.sicsa.ac.uk/) aims to create a world-leading Computer Science research community across the universities in Scotland. As part of this initiative, the Department of Computing Science is seeking to appoint three lecturers to undertake research and to develop and teach undergraduate or postgraduate courses within the following areas:

1. Inference, covering statistical inferential methodology, computational statistics and statistical machine learning. We would welcome applicants working in the area of Bayesian Inference, especially those whose research focus is on probabilistic modelling and inferential methodology applied to themes in Multimodal Interaction and Modelling & Abstraction.

Ref: 00023-1 Informal enquiries can be directed to Professor Mark Girolami,
email: girolami (at) dcs.gla.ac.uk; tel: 0141 330 1623; www.dcs.gla.ac.uk/inference

2. Multimodal Interaction, in particular applicants whose research focus is on the application of machine learning or inference in context evaluation, location-aware interaction, gestural interaction, or information retrieval settings of interaction design.

Ref: 00022-2 Informal enquiries can be directed to Professor Stephen Brewster
email: stephen (at) dcs.gla.ac.uk; tel: 0141 330 4966

3. Formal modelling, theory and analysis, in particular applicants working in the areas of theory and practice of formal modelling, automated analysis and reasoning, complex and concurrent systems, model checking or type theory.

Ref: 00021-1 Informal enquiries can be directed to Professor Muffy Calder,
email: muffy (at) dcs.gla.ac.uk; tel: 0141 330 4969

The posts are available at either grade 7 or grade 8 depending upon knowledge/qualifications and experience. For an application pack, please see our website at www.gla.ac.uk/jobs/vacancies.

Applications should be submitted to the Human Resources Department (Recruitment Section), University of Glasgow, G12 8QQ, not later than 17 April 2009.

Internet Mathematics 2009: call for participation

Internet Mathematics 2009
04.03.2009 – 15.05.2009
http://company.yandex.ru/grant/2009/en/

CALL FOR PARTICIPATION

‘Internet Mathematics’ is a series of contests started by Yandex. This year’s contest is the third in the series, which was launched in 2004/05 and was also held in 2006/07. This year, the competition is targeted mainly at students, postgraduates, programmers, and young researchers. The purpose of the contest is to raise the profile of current challenges in information retrieval and to stimulate research in the field of Web data analysis.

The problem to be solved within ‘Internet Mathematics 2009’ is the same for all participants: to obtain a document ranking function based on a learning set. Within Internet Mathematics 2009 we distribute real relevance tables that are used for learning the ranking formula at Yandex. The tables contain computed and normalized features of query-document pairs as well as relevance judgments made by Yandex assessors. The tables contain neither the original queries nor the URLs of the original documents, and the semantics of the features are not revealed. The data set corresponds to approximately 20,000 queries and 200,000 documents and is divided into a learning set and a test set.
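
To make the task concrete, here is a minimal pointwise baseline of the kind a participant might start from: fit a linear least-squares model of relevance on the per-pair features, then rank each query's documents by predicted score. The file layout assumed below (query id, relevance label, then feature columns) and the function names are illustrative assumptions only, not the official contest format.

import numpy as np
from collections import defaultdict

def load_table(path):
    """Parse lines of the assumed form: <query_id> <relevance> <f1> ... <fk>."""
    qids, labels, feats = [], [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            qids.append(parts[0])
            labels.append(float(parts[1]))
            feats.append([float(x) for x in parts[2:]])
    return np.array(qids), np.array(labels), np.array(feats)

def fit_linear(features, labels):
    """Least-squares fit of relevance on the features (plus a bias column)."""
    X = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return w

def rank_per_query(qids, features, w):
    """For each query, return document indices sorted by predicted relevance."""
    X = np.hstack([features, np.ones((len(features), 1))])
    scores = X @ w
    per_query = defaultdict(list)
    for i, q in enumerate(qids):
        per_query[q].append(i)
    return {q: sorted(idx, key=lambda i: -scores[i]) for q, idx in per_query.items()}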

Participants can submit their solutions at any time during the competition period. A portion of participants’ submissions will be considered for preliminary public evaluation. After the deadline, evaluations will be finalized and the best results will be announced. Winners will be awarded money prizes (eligibility restrictions apply).

Questions or suggestions about the contest are welcome at grant@yandex-team.ru.

ICML 2009 Workshop on Numerical Mathematics in Machine Learning

NUMML 2009 Numerical Mathematics in Machine Learning
http://numml.kyb.tuebingen.mpg.de/

Abstract

Most machine learning (ML) algorithms rely fundamentally on concepts of numerical mathematics. Standard reductions to black-box computational primitives do not usually meet real-world demands and have to be modified at all levels. The increasing complexity of ML problems requires layered approaches, where algorithms are components rather than stand-alone tools fitted individually with much human effort. In this modern context, predictable run-time and numerical stability behavior of algorithms become fundamental. Unfortunately, these aspects are widely ignored today by ML researchers, which limits the applicability of ML algorithms to complex problems, and therefore the practical scope of ML as a whole.

Background and Objectives

Our workshop aims to address these shortcomings. Ideally, a code of conduct can be established for MLers combining and modifying numerical primitives, a set of essential rules as a compromise between inadequate black-box reductions and highly involved complete numerical analyses. We will invite speakers with interest in *both* numerical methodology *and* real problems in applications close to machine learning. While numerical software packages of ML interest will be pointed out, our focus will rather be on how to best bridge the gaps between ML requirements and these computational libraries. A subordinate goal will be to address the role of parallel numerical computation in ML. One running example will be the linear model, or Gaussian Markov random field, a building block behind sparse estimation, Kalman smoothing and filtering, Gaussian process models, state space models, or (multi-layer) perceptrons. Basic tasks in this model require the solution of large linear systems, eigenvector approximations, matrix factorizations and their low-rank updates. In turn, model structure can often be used to drastically speed up, or even precondition, these low-level numerical computations.
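
As a small illustration of the "use of model structure" point, the sketch below solves the ridge-regularized normal equations of such a linear model, (X^T X + lambda*I) w = X^T y, by conjugate gradients, touching the matrix only through matrix-vector products and a cheap diagonal (Jacobi) preconditioner instead of forming and factorizing it. Data sizes and names are illustrative assumptions, not part of the workshop material.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, d, lam = 5000, 200, 1e-2
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Matrix-vector product with A = X^T X + lam I, never forming A explicitly.
def matvec(v):
    return X.T @ (X @ v) + lam * v

A = LinearOperator((d, d), matvec=matvec)

# Jacobi preconditioner built from the diagonal of A, which is cheap to compute.
diag = (X ** 2).sum(axis=0) + lam
M = LinearOperator((d, d), matvec=lambda v: v / diag)

w, info = cg(A, X.T @ y, M=M)
print("CG converged" if info == 0 else f"CG returned info={info}")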

Impact and Expected Outcome

We will call the community’s attention to the increasingly critical issue of numerical considerations in algorithm design and implementation. A set of essential rules for how to use and modify numerical software in ML is required, and we aim to lay its groundwork in this workshop. These efforts should lead to an awareness of the problems, as well as an increased focus on efficient and stable ML implementations. We will encourage speakers to point out useful software packages, together with their caveats, asking them to focus on examples of ML interest. The expected outcomes are:

- raising awareness about the increasing importance of stability and predictable run-time behaviour of numerical machine learning algorithms and primitives;
- establishing a code of conduct for how to best select and modify existing numerical mathematics code for machine learning problems;
- learning about developments in current numerical mathematics, a major backbone of most machine learning methods.

Potential Subtopics

* Solving large linear systems
  - Arise in the linear model/Gaussian MRF (mean computations), nonlinear optimization methods (Newton-Raphson, IRLS, …)
  - Linear conjugate gradients
  - Preconditioning, use of model structure
* Numerical linear algebra packages relevant to ML
  - LAPACK, BLAS, GotoBLAS, MKL, UMFPACK, …
* Eigenvector approximation
  - Arises in the linear model (covariance estimation), spectral clustering and graph Laplacian methods, PCA
  - Lanczos algorithm and specialized variants
* Exploiting matrix/model structure, fast matrix-vector multiplication
  - Matrix decompositions/approximations
  - Multi-pole methods
  - FFT-based multiplication
* Matrix factorizations, low-rank updates (a sketch follows this list)
  - Arise in the linear model, Gaussian process/kernel methods
  - Cholesky updates/downdates
* Parallel numerical computation for ML
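
As a small illustration of the "Matrix factorizations, low-rank updates" item, the following sketch implements the textbook (LINPACK-style) rank-one Cholesky update: given L with L L^T = A, it returns the factor of A + x x^T in O(n^2) rather than refactorizing in O(n^3), the kind of incremental computation that arises in Gaussian process and kernel methods. This is a plain reference implementation for illustration; production code would normally call a library routine.

import numpy as np

def chol_update(L, x):
    """Return L' with L' L'^T = L L^T + x x^T (L lower triangular)."""
    L = L.copy()
    x = x.astype(float).copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L

# Quick check against a full refactorization on a small random SPD matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); A = A @ A.T + 5 * np.eye(5)
x = rng.standard_normal(5)
L = np.linalg.cholesky(A)
assert np.allclose(chol_update(L, x), np.linalg.cholesky(A + np.outer(x, x)))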

Single-Neuron Modeling Competition

The Quantitative Single-Neuron Modeling Competition offers a coherent framework to compare neuronal models and fitting methods.

http://www.incf.org/community/competitions/spike-time-prediction/2009

– The INCF Prize (10 000 CHF)
– The FACETS Award (500 CHF)

Important Dates

– Submissions via the website will open June 25, 2009.
– The submission deadline is August 25, 2009.
– The results will be presented at the INCF Congress of Neuroinformatics in Pilsen, September 6-8, 2009.

For details see:
http://www.incf.org/community/competitions/spike-time-prediction/2009

The organizers of the competition
Richard Naud (EPFL)
Thomas Berger (EPFL)
Brice Bathellier (UBern)
Wulfram Gerstner (EPFL)

Call for Papers: Pattern Recognition in Bioinformatics *EXTENDED DEADLINE*

*Please note that the paper submission deadline has been extended to April 14th.*

CALL FOR PAPERS

4th IAPR International Conference on

Pattern Recognition in Bioinformatics (PRIB 2009)

City Hall, Sheffield, United Kingdom

7 – 9 September 2009

http://www.dcs.shef.ac.uk/ml/prib2009/prib2009.html

SCOPE
—–
The conference, sponsored by the International Association for Pattern Recognition (IAPR), aims to bring together top researchers, practitioners and students from around the world to discuss the application of pattern recognition methods in the field of bioinformatics to solve problems in the life sciences. Prospective authors are invited to submit papers in the research areas of interest to the conference. These include:

Bio-sequence analysis
Gene and protein expression analysis
Protein structure and interaction prediction
Motifs and signal detection
Metabolic modelling and analysis
Systems and synthetic biology
Pathway and network analysis
Immuno- and chemo-informatics
Evolution and phylogeny
Biological databases, integration and visualisation
Bio-imaging

Pattern recognition techniques of interest include, but are not limited to:
Static, syntactic and structural pattern recognition
Data mining, data-based modelling
Neural networks, fuzzy systems
Evolutionary computation and swarm intelligence
Hidden Markov models, graphical models

KEYNOTE SPEAKERS
—————-
Pierre Baldi (University of California, Irvine)
Alvis Brazma (European Bioinformatics Institute, Cambridge)
Gunnar Raetsch (Max Planck Institute, Germany)

DEADLINES
———
Paper Submission 14 April 2009
Special Sessions Proposal 1 April 2009
Author Notification 15 May 2009
Camera Ready Papers 1 July 2009
Early Bird Registration 15 July 2009

PUBLICATION
———–
Accepted papers will be published in the Springer series Lecture Notes in Bioinformatics (LNBI). Enhanced versions of selected papers will be included in a special issue of the International Journal of Systems Science.

PRIZES
——
Awards will be presented for the Best Student Paper and the Best Conference Paper. A limited number of travel awards are available to non-EU students.

ORGANISING COMMITTEE
——————–
General Chairs: Visakan Kadirkamanathan (UK), Guido Sanguinetti (UK)
General Co-Chairs: Raj Acharya (USA), Madhu Chetty (Australia)
Programme Chairs: Mahesan Niranjan (UK), Mark Girolami (UK), Jagath Rajapakse (Singapore)
Special Sessions Chair: Cesare Furlanello (Italy)
Tutorials Chair: Florence d’Alche-Buc (France)
Publicity Chair: Elena Marchiori (Netherlands)
Publications Chair: Josselin Noirel (UK)
Local Organisation Chair: Daniel Coca (UK)
Webmaster: Maurizio Filippone (UK)

CONTACT
——-
For additional information, contact:
PRIB 2009 Secretariat,
Department of Automatic Control & Systems Engineering, The University of Sheffield, Mappin Street, Sheffield S1 3JD, United Kingdom
Tel: +44 114 2225618
Fax: +44 114 2225661
prib2009 (at) sheffield.ac.uk

Call for Abstracts and Participation: Multidisciplinary Symposium on Reinforcement Learning

Call for Abstracts and Participation
Multidisciplinary Symposium on Reinforcement Learning
Dates: June 18-19, 2009
Abstract Deadline: April 10, 2009
Location: Montreal, Canada

In the last 25 years, reinforcement learning research has made great strides and had a significant impact within several fields, including artificial intelligence, optimal control, neuroscience, psychology, economics and operations research. These are diverse areas, with different goals and different evaluation criteria. It is striking that reinforcement learning ideas are playing new roles in all of them. The Multidisciplinary Symposium on Reinforcement Learning (MSRL) is meant to recognize this confluence of fields, to celebrate the diversity of reinforcement learning research, and to facilitate the exchange of information among reinforcement learning researchers in all these fields.

The symposium will consist of invited plenary lectures, spanning the breadth of reinforcement learning, and an evening poster session. The confirmed invited speakers are:

-Andrew G. Barto, University of Massachusetts, Amherst, USA
-Dimitri Bertsekas, Massachusetts Institute of Technology, USA
-Peter Dayan, University College London, U.K.
-Read Montague, Baylor College of Medicine, USA
-Andrew Ng, Stanford University, USA
-Warren Powell, Princeton University, USA
-Wolfram Schultz, University of Cambridge, U.K.
-Terrence Sejnowski, Salk Institute, USA
-Richard Sutton, University of Alberta, Canada
-Gerald Tesauro, IBM Research, USA
-Benjamin Van Roy, Stanford University, USA

A few other invitations are pending.

The evening poster session is a highlight of the symposium and is a unique opportunity to share your work and network with researchers from the many disciplines that contribute to modern reinforcement learning. For this poster session, MSRL invites abstracts from all areas of reinforcement learning. We are especially interested in work that:

– Highlights the best of RL-related research in any discipline
– Provides general overviews of a research program with an RL component
– Extends RL to new areas or applications
– Tests RL ideas in new ways
– Illustrates the impact of RL in a field

Submissions should consist of an extended abstract of up to 2 pages. Student submissions are particularly encouraged. Please send all submissions to msrl09@rl-community.org by April 10, 2009.

Notifications of acceptance to the symposium will be sent by May 1, 2009.

MSRL will be co-located in Montreal, Canada, with the International Conference on Machine Learning (ICML), the Conference on Uncertainty in Artificial Intelligence (UAI), and the Conference on Learning Theory (COLT). For more information, please see the MSRL web site at http://msrl09.rl-community.org.

-MSRL Organizing Committee

Doina Precup, McGill University
Elliot Ludvig, University of Alberta
Richard Sutton, University of Alberta
Shie Mannor, McGill University
Satinder Singh Baveja, University of Michigan

RuSSIR 2009: call for participation

3rd Russian Summer School in Information Retrieval (RuSSIR 2009)
Friday September 11 – Wednesday September 16, 2009
Petrozavodsk, Russia
http://romip.ru/russir2009/eng/

FIRST CALL FOR PARTICIPATION

The 3rd Russian Summer School in Information Retrieval will be held September 11-16, 2009 in Petrozavodsk, Russia. The school is co-organized by the Russian Information Retrieval Evaluation Seminar (ROMIP, http://romip.ru/), Petrozavodsk State University (http://petrsu.ru/), and the Institute of Applied Mathematical Research (http://mathem.krc.karelia.ru/). Yandex (http://yandex.ru/) is confirmed as the golden sponsor of the event. The first and second RuSSIRs took place in Ekaterinburg in 2007 and Taganrog in 2008, respectively (see http://romip.ru/russir2007/ and http://romip.ru/russir2008/). Both schools were very successful.

Petrozavodsk, the capital of the Republic of Karelia, was founded in 1703. It is a large industrial and cultural center of the Russian North-West. Petrozavodsk is 400 km from Saint Petersburg; an overnight train journey takes about eight hours.

The target audience of the Summer School is advanced graduate and PhD students, post-doctoral researchers, academic and industrial researchers, and developers. The mission of the school is to teach students about a wide range of modern problems and methods in Information Retrieval; to stimulate scientific research in the field of Information Retrieval; and to create an opportunity for informal contacts among scientists, students and industry professionals. RuSSIR 2009 will host approximately 100 participants. The working languages of the school are English and Russian.

The main RuSSIR 2009 program includes four courses of five lectures each:

Information Retrieval Modeling
Djoerd Hiemstra, University of Twente

Modeling Web Searcher Behavior and Interactions
Eugene Agichtein, Emory University

Enterprise and Desktop search
Pavel Dmitriev, Yahoo! Labs
Pavel Serdyukov, University of Twente
Sergey Chernov, L3S Research Center

Computational advertising: business models, technologies and issues
James G. Shanahan, Independent Consultant

The Russian Conference for Young Scientists in Information Retrieval will be co-organized with the school. The school is expected to have a versatile social program.

Participation in the school is free of charge. The Program Committee will form the body of participants based on received applications.

RuSSIR 2009 is co-located with the yearly ROMIP meeting (http://romip.ru/) and Russian Conference on Digital Libraries 2009 (http://rcdl2009.krc.karelia.ru/).

All inquiries can be sent to school[at]romip[dot]ru.

PhD position in Machine Learning & Robotics, TU Berlin

The Machine Learning & Robotics group at TU Berlin is inviting applications for a PhD studentship. The position is financed from a cooperative project with the CoR-Lab Bielefeld and the Honda Research Institute (Offenbach), in particular with their ASIMO robotics lab. The project is about grasping using tactile and visual feedback and involves a realization on a robotics platform with a 7-DoF Schunk arm and a dexterous 3-finger Schunk hand. We aim to apply Machine Learning methods in this context, in particular probabilistic inference methods for the integration of goals, constraints and uncertain information on multiple sensor and motor representations, and learning of prototypes and representations. The position is based in Berlin but includes the unique chance to visit the Honda and Bielefeld labs for one or two months per year and actively transfer the developed methods. Applicants should have experience in one of the fields of robotics, control, or Machine Learning, and great interest in the combination of theoretical methods and robotic applications.

Applications and informal enquiries, e.g., concerning more details on the project, can be addressed to

Marc Toussaint, Ph.D., TU Berlin
http://ml.cs.tu-berlin.de/~mtoussai/
mtoussai (at) cs.tu-berlin.de, cc: nilsp (at) cs.tu-berlin.de

We would appreciate receiving applications by April 15th.

ICML Workshop on Numerical Methods in Machine Learning: Call for Contributions

CALL FOR CONTRIBUTIONS
International Conference on Machine Learning (ICML)
Workshop on Numerical Mathematics in Machine Learning
June 18, 2009. Montreal, Canada
http://numml.kyb.tuebingen.mpg.de
Deadline for abstract submissions: April 27, 2009

Motivation:

Most machine learning (ML) algorithms rely fundamentally on concepts of numerical mathematics. Standard reductions to black-box computational primitives do not usually meet real-world demands and have to be modified at all levels. The increasing complexity of ML problems requires layered approaches, where algorithms are components rather than stand-alone tools fitted individually with much human effort. In this modern context, predictable run-time and numerical stability behavior of algorithms become fundamental. Unfortunately, these aspects are widely ignored today by ML researchers, which limits the applicability of ML algorithms to complex problems.

Our workshop aims to address these shortcomings, by trying to distill a compromise between inadequate black-box reductions and highly involved complete numerical analyses. We will invite speakers with interest in *both* numerical methodology *and* real problems in applications close to machine learning.
While numerical software packages of ML interest will be pointed out, our focus will rather be on how to best bridge the gaps between ML requirements and these computational libraries. A subordinate goal will be to address the role of parallel numerical computation in ML.

Examples of machine learning founded on numerical methods include low level computer vision and image processing, non-Gaussian approximate inference, Gaussian filtering / smoothing, state space models, approximations to kernel methods, and many more.

The workshop will comprise a panel discussion, in which the invited speakers are urged to address the problems stated above, and offer individual views and suggestions for improvement. We highly recommend active or passive attendance at this event. Potential participants are encouraged to contact the organizers beforehand, concerning points they feel should be addressed in this event.

Invited Speakers:

Inderjit Dhillon University of Texas, Austin
Michael Mahoney Stanford University
Jacek Gondzio Edinburgh University, UK

[Further speaker to be confirmed]

Topics:

Potential short talks / posters should aim to address:

– Raising awareness about the increasing importance of stability and predictable run-time behaviour of numerical machine learning algorithms and primitives
– Stability and predictable behaviour as a criterion for making algorithm choices in machine learning
– Lessons learned (and not learned) in machine learning about numerical mathematics. Ideas for improvement
– Novel developments in numerical mathematics, with potential impact on machine learning problems

Contributions will be considered only if a clear effort is made to analyze problems that arise, and if choices of algorithms, preconditioning, etc. are clearly motivated. For reasons stated in “Motivation”, submissions that apply numerical methods in a black box fashion, or that treat numerical techniques without motivating the use for machine learning, cannot be considered. The usual “smoothing over problems” conference paper style is discouraged, and naming and analyzing problems is strongly encouraged.

Potential Subtopics (submissions are not limited to these):

A- Solving large linear systems
Arise in the linear model/Gaussian MRF (mean computations), nonlinear optimization methods (Newton-Raphson, IRLS, …)
- Preconditioning, use of model structure
Our main interest is in semi-generic ideas that can be applied to a range of real-world machine learning situations.

B- Novel numerical software developments relevant to ML
– Parallel implementations of LAPACK, BLAS
– Sparse matrix packages

C- Approximate eigensolvers (a Lanczos sketch follows this list)
Arise in the linear model (covariance estimation), spectral clustering and graph Laplacian methods, PCA
– Lanczos algorithm and specialized variants
– Preconditioning

D- Exploiting matrix/model structure, fast matrix-vector multiplication
– Matrix decompositions/approximations
– Multi-pole methods
– Signal-processing primitives (e.g., variants of FFT)

F- Parallel numerical computation for ML

G- Other numerical mathematics (ODEs, PDEs, Quadrature, etc.) focusing on machine learning
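
As a small illustration of subtopic C, the sketch below approximates a few of the smallest eigenpairs of a sparse graph Laplacian with SciPy's ARPACK-based (Lanczos) eigsh routine, the kind of computation that arises in spectral clustering and graph Laplacian methods. The random graph is only a stand-in for a real affinity matrix, and the shift-invert setting is an illustrative choice.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse symmetric adjacency matrix of a random graph (stand-in for an affinity matrix).
n = 1000
A = sp.random(n, n, density=0.005, format='csr', random_state=0)
A = A + A.T
A.data[:] = 1.0

# Unnormalized graph Laplacian L = D - A.
degrees = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(degrees) - A

# A handful of the smallest eigenpairs via Lanczos; shift-invert around a small
# negative sigma targets the low end of the spectrum without many iterations.
vals, vecs = eigsh(L.tocsc(), k=4, sigma=-1e-3, which='LM')
print(vals)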

Submission Instructions:

We invite submissions of extended abstracts, from 2 to 4 pages in length (using the ICML 2009 style file). Criteria for content are given in “Topics”. Submissions should be sent to suvadmin@googlemail.com

Accepted contributions will be allocated short talks or posters. There will be a poster session with ample time for discussion. Short talk contributions are encouraged to put up posters as well, to better address specific questions.

Important Dates:

Submissions due: April 27, 2009
Author notification: May 11, 2009
Workshop date: June 18, 2009

Matthias W. Seeger MPI Informatics / Saarland University, Saarbruecken
Suvrit Sra MPI Biological Cybernetics, Tuebingen
John P. Cunningham Stanford University (EE), Palo Alto

We acknowledge financial support through the PASCAL 2 Initiative of the European Union.

Call for Papers: Workshop on “On-line Learning with Limited Feedback” at ICML/UAI/COLT 2009

——————————————————————————–
CALL FOR PAPERS

On-line Learning with Limited Feedback
Workshop at ICML/UAI/COLT 2009
June 18, 2009, Montreal, Canada
Submission deadline: April 26, 2009
http://sequel.futurs.inria.fr/online-learning
——————————————————————————–

OVERVIEW

The main focus of the workshop is the problem of on-line learning when only limited feedback is available to the learner. In on-line learning, at each time step the learner has to predict the outcome corresponding to the next input based on the feedback obtained so far. Unlike the usual supervised problem, in which after each prediction the learner receives enough information to evaluate the goodness of all the predictions it could have made, in many cases only limited feedback may be available to the learner. Depending on the nature of the limitation on the feedback, different classes of problems can be identified:

1. Delayed feedback. The utility of an action (i.e., the prediction) is returned only after a certain amount of time. This is the case in reinforcement learning and on-line control problems, where the final outcome of an action may be available only when a goal is finally achieved.

2. Partial feedback. The feedback is limited to the learner’s own prediction, so no information is available about what other possible predictions would have brought. Multi-armed bandits, where only the utility of the pulled arm is returned to the learner, are the classic example (a minimal sketch follows this list).

3. Indirect feedback. Neither the true outcome nor the utility of the prediction is observed; only indirect feedback loosely related to the prediction is returned.
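
To make the partial-feedback setting of item 2 concrete, the following minimal sketch runs an epsilon-greedy learner on a Bernoulli multi-armed bandit: at every step only the reward of the pulled arm is observed, never the rewards the other arms would have produced. The arm probabilities, horizon and exploration rate are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])   # unknown to the learner
n_arms, horizon, eps = len(true_means), 5000, 0.1

counts = np.zeros(n_arms)
estimates = np.zeros(n_arms)

for t in range(horizon):
    # Explore with probability eps, otherwise exploit the current estimates.
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))
    else:
        arm = int(np.argmax(estimates))
    # Partial feedback: a single Bernoulli reward for the chosen arm only.
    reward = float(rng.random() < true_means[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("estimated means:", np.round(estimates, 3))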

The increasing interest in on-line learning with limited feedback is also motivated by a number of applications, such as recommender and web advertisement systems, in which the user’s feedback is limited to accepting or ignoring the proposed item, and the true label (i.e., the item the user would prefer the most) is never revealed to the learner.

GOALS

Although some aspects of on-line learning with limited feedback have been already thoroughly analyzed (e.g., multi-armed bandit problems), many problems are still open. For instance, bandits with large action spaces and side information, learning with delayed reward, on-line optimization, etc., are of primary concern in many recent works on on-line learning. Furthermore, on-line learning with limited feedback has strong connections with a number of other fields of Machine Learning such as active learning, semi-supervised learning, and multi-class classification.
The goal of the workshop is to give researchers the opportunity to present their current research on these topics and to encourage discussion about the main open issues and the possible connections between the different sub-fields. In particular, we expect the workshop to shed light on a number of theoretical issues, such as:

* how does the performance of learning algorithms scale in either large (e.g., an infinite number of arms, whether countable or a continuum, possibly in metric or measurable spaces) or changing action spaces?
* how does the performance of learning algorithms scale with the smoothness of the function to be optimized (Lipschitz, linear, convex, non-convex)?
* what are the connections between the MDP reinforcement learning paradigm and the on-line learning problem with delayed feedback?
* how to define complexity measures for on-line learning with limited feedback?
* is it possible to define a unified view on the problem of learning with delayed, partial, and indirect feedback?

CALL FOR PARTICIPATION

The organizing committee would like to invite the submission of extended abstracts (three to four pages in the conference format plus appendix if needed) describing research on (but not restricted to) the following topics:

* adversarial/stochastic bandits
* bandits with side information (contextual bandits, associative RL)
* bandits with large and/or changing action spaces
* on-line learning with delayed feedback
* on-line learning in MDPs and beyond
* partial monitoring prediction
* on-line optimization (Lipschitz, linear, convex, non-convex)
* on-line learning in games
* applications

We also welcome work-in-progress contributions, as well as papers discussing potential research directions.

Submissions should be sent via email to Alessandro Lazaric at alessandro.lazaric@inria.fr and should be in PostScript or PDF format.

IMPORTANT DATES

Deadline for submission: 26th April
Notification of acceptance: 15th May
Workshop: 18th June

INVITED SPEAKERS

Nicolò Cesa-Bianchi (Università degli Studi di Milano)
Sham Kakade (Toyota Technological Institute)
Gabor Lugosi (Pompeu Fabra University)
Shai Shalev-Shwartz (Toyota Technological Institute)

ORGANIZATION COMMITTEE

Jean-Yves Audibert (Certis-Université Paris Est-Ecole des Ponts ParisTech)
Peter Auer (University of Leoben)
Sebastien Bubeck (INRIA – Team SequeL)
Alessandro Lazaric (INRIA – Team SequeL) – (primary contact)
Odalric Maillard (INRIA – Team SequeL)
Remi Munos (INRIA – Team SequeL)
Daniil Ryabko (INRIA – Team SequeL)
Csaba Szepesvari (University of Alberta)

SPONSORS

The workshop is sponsored by the PASCAL2 Network and the Alberta Ingenuity Centre for Machine Learning.