PhD Studentships at Gatsby Computational Neuroscience Unit

Gatsby Computational Neuroscience Unit, UCL
4 year PhD Programme

The Gatsby Unit is a centre for theoretical neuroscience and machine
learning, focusing on unsupervised, semi-supervised and reinforcement
learning, neural dynamics, population coding, Bayesian and
nonparametric statistics, kernel methods and applications of these to
the analysis of perceptual processing, neural data, natural language
processing, machine vision and bioinformatics. It provides a unique
opportunity for a critical mass of theoreticians to interact closely
with each other, and with other world-class research groups in related
departments at UCL (University College London), including Anatomy,
Computer Science, Functional Imaging, Physics, Physiology, Psychology,
Neurology, Ophthalmology and Statistics, as well as the cross-faculty
Centre for Computational Statistics and Machine Learning. We also have
links with other UK and overseas institutions, including Cambridge in
the UK, Columbia and New York in the US, and the Max Planck Institute
in Germany.

The Unit always has openings for exceptional PhD candidates.
Applicants should have a strong analytical background, a keen interest
in machine learning and/or neuroscience and a relevant first degree,
for example in Computer Science, Engineering, Mathematics,
Neuroscience, Physics, Psychology or Statistics.

The PhD programme lasts four years, including a first year of
intensive instruction in techniques and research in machine learning
and theoretical neuroscience.

Competitive fully-funded studentships are available each year (to
students of any nationality) and the Unit also welcomes students with
pre-secured funding or with other scholarship/studentship applications
in progress.

Full details of our programme, and how to apply, are available at:
http://www.gatsby.ucl.ac.uk/teaching/phd/

For further details of research interests please see:
http://www.gatsby.ucl.ac.uk/research.html

Applications for 2010 entry (commencing late September 2010) should be
received no later than 6th January 2010. Shortlisted applicants will
be invited to attend interview in the week commencing 8th March 2010.

Call for papers: NIPS 2009 workshop on The Generative and Discriminative Learning Interface

CALL FOR PAPERS – The Generative and Discriminative Learning Interface
(supported by PASCAL 2)

Workshop at the 23rd Annual Conference on Neural Information
Processing Systems (NIPS 2009)
December 12, 2009, Whistler, Canada
http://gen-disc2009.wikidot.com/call

Submission Deadline: October 25, 2009

OVERVIEW

Generative and discriminative learning are two of the major paradigms
for solving prediction problems in machine learning, each offering
important distinct advantages. They have often been studied in
different sub-communities, but over the past decade, there has been
increasing interest in trying to understand and leverage the
advantages of both approaches. The goal of this workshop is to map out
our current understanding of the empirical and theoretical advantages
of each approach as well as their combination, and to identify open
research directions.

BACKGROUND AND OBJECTIVES

In generative approaches for prediction tasks, one models a joint
distribution on inputs and outputs and parameters are typically
estimated using a likelihood-based criterion. In discriminative
approaches, one directly models the mapping from inputs to outputs
(either as a conditional distribution or simply as a prediction
function); parameters are estimated by optimizing various objectives
related to a loss function. Discriminative approaches have typically
shown better performance when enough data are available, as they are
better tuned to the prediction task and are more robust to model
misspecification. Despite
the strong empirical success of discriminative methods in a wide range
of applications, when the structures to be learned become complex
(e.g. in machine translation, scene understanding, biological process
discovery), even large training sets become sparse relative to the
task, and this sparsity can only be mitigated if some other source of
information comes into play to constrain the space of fitted models,
such as unlabeled examples, related data sources or human prior
knowledge about the problem. Generative modeling is a principled way
of encoding this additional information, e.g. through probabilistic
graphical models or stochastic grammar rules. Moreover, generative
models provide a natural way to make use of unlabeled data and can be
more computationally efficient for some models.
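
As a minimal illustration of the two paradigms (the data, models and
scikit-learn usage below are illustrative choices only, not methods
advocated by the workshop), the following Python sketch fits a
generative classifier, Gaussian naive Bayes, which models the joint
distribution p(x, y), and a discriminative one, logistic regression,
which models p(y | x) directly, to the same synthetic data:

import numpy as np
from sklearn.naive_bayes import GaussianNB            # generative: fits p(x | y) and p(y)
from sklearn.linear_model import LogisticRegression   # discriminative: fits p(y | x) directly
from sklearn.model_selection import train_test_split

# Illustrative synthetic data: two Gaussian classes in 2-D.
rng = np.random.RandomState(0)
n = 500
X = np.vstack([rng.randn(n, 2) + [2, 2], rng.randn(n, 2) - [2, 2]])
y = np.r_[np.ones(n), np.zeros(n)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("generative (GaussianNB)", GaussianNB()),
                    ("discriminative (LogisticRegression)", LogisticRegression())]:
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))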

See http://gen-disc2009.wikidot.com/call for a more detailed
background with references.

The aim of this workshop is to provide a platform for both theoretical
and applied researchers from different communities to discuss the
status of our understanding on the interplay between generative and
discriminative learning, as well as to identify forward-looking open
problems of interest to the NIPS community. Examples of topics of
interest to the workshop are as follows:
* Theoretical analysis of generative vs. discriminative learning
* Techniques for combining generative / discriminative approaches
* Successful applications of hybrids
* Empirical comparison of generative vs. discriminative learning
* Inclusion of prior knowledge in discriminative methods
(semi-supervised approaches, generalized expectation criteria,
posterior regularization, etc.)
* Insights into the role of the generative / discriminative interface
in deep learning
* Computational issues in discriminatively trained generative
models/hybrid models
* Map of possible generative / discriminative approaches and combinations
* Bayesian approaches optimized for predictive performance
* Comparison of model-free and model-based approaches in statistics or
reinforcement learning

INVITED SPEAKERS / PANELISTS

Dan Klein, UC Berkeley
http://www.cs.berkeley.edu/~klein/

Tony Jebara, Columbia University
http://www1.cs.columbia.edu/~jebara/

Phil Long, Google
http://www.phillong.info/

Ben Taskar, University of Pennsylvania
http://www.seas.upenn.edu/~taskar/

John Winn, Microsoft Research Cambridge
http://johnwinn.org/

IMPORTANT DATES

Deadline for abstract submission: October 25, 2009
Notification of acceptance: November 5, 2009
(NIPS early registration deadline is November 6)
Final version: November 20, 2009
Workshop: December 12, 2009

LOCATION

Westin Resort and Spa / Hilton Whistler Resort and Spa
Whistler, B.C., Canada
http://nips.cc/Conferences/2009/

CALL FOR PARTICIPATION

Researchers interested in presenting their work and ideas on the above
themes are invited to submit an extended abstract of 2-4 pages in pdf
format using the NIPS style available at
http://nips.cc/PaperInformation/StyleFiles (author names don’t need to
be anonymized). Submissions will be accepted either as contributed
talks or poster presentations, and we expect the speakers to provide a
final version of their paper by November 20 to be posted on the
workshop website.

Sign on at:
https://cmt.research.microsoft.com/GDLI2009
to submit your paper (you’ll need to create a login first).

WORKSHOP FORMAT

This one-day workshop will feature a mix of invited talks (3),
contributed talks (4-8), a poster session and a panel discussion. We
will leave plenty of time for discussion and encourage it throughout
the day.

We also encourage the participants to visit the online forum in
December to discuss the submitted papers and the themes of the
workshop.

SPONSOR: PASCAL 2 (non-core workshop).

ORGANIZERS

Simon Lacoste-Julien (University of Cambridge)
Percy Liang (UC Berkeley)
Guillaume Bouchard (Xerox Research Centre Europe)

CONTACT

gen.disc.nips09 at gmail.com

AERFAI Winter School on Eye-Tracking Methodology

AERFAI Winter School on Eye-Tracking Methodology
(WSETM 2009)

Computer Vision Centre
Universitat Autònoma de Barcelona
Barcelona, Spain. November 24-27, 2009

HOME PAGE: http://www.cvc.uab.es/wsetm2009
————————————————–

We are pleased to announce that the Winter School on Eye-Tracking
Methodology (WSETM’2009) will be held at the Computer Vision Centre (CVC),
Barcelona, Spain during November 24-27, 2009.

WSETM’2009 is organized by the Computer Vision Center (CVC), in the
Universitat Autònoma de Barcelona (UAB). AERFAI members will be able to
apply for financial support for attending the School. In addition,
SensoMotoric Instruments (SMI) will also offer a limited number of
stipends to help participants cover their travel expenses.

This educational activity will be the first major course on
eye-tracking in Spain. The course is devoted to providing students
with both a theoretical background and strong hands-on experience:
students will have direct access to a range of eye-tracker hardware
and software systems.

By the end of the school the participants will have a thorough
background on eye-tracking methodology and will be able to use
eye-tracking in their own research. The programme is structured to
appeal to researchers and industry participants alike regardless of
their specific topic of research or application.

The preliminary programme of the school is available online at:
http://www.cvc.uab.es/wsetm2009/WSETM2009_program.pdf

Highly esteemed academics in the field are invited to deliver the
lectures on the state-of-the-art of eye-tracking methodology. The
curriculum is planned to correspond to the real-life workflow of
eye-tracking based research and will cover the following topics:

1. Eye movements and visual attention.
2. Setting up and testing a hypothesis.
3. Experimental design (stimuli preparation).
4. Recording data for different applications.
5. Data analysis.
6. Applications for computer vision.

ENROLMENT AND FEES:
====================
Enrolment deadline: November 1st, 2009
Registration fee: EUR 400 per student (optional banquet dinner
ticket: EUR 35). Discounts are available for AERFAI members.

For further information please visit the WSETM2009 Home Page:
www.cvc.uab.es/wsetm2009

Call for Papers: NIPS 2009 Workshop on Transfer Learning for Structured Data (TLSD-09)

Call for Papers: NIPS 2009 Workshop on Transfer Learning for Structured
Data (TLSD-09)
in conjunction with NIPS 2009, Dec 7-12, 2009, Vancouver, B.C., Canada

http://www.cse.ust.hk/~sinnopan/nips09tlsd/

Description and background
————————
Recently, transfer learning (TL) has gained much popularity as an approach
to reduce the training-data calibration effort as well as to improve
generalization performance of learning tasks. Unlike traditional learning,
transfer learning methods make the best use of data from one or more
source tasks in order to learn a target task. Many previous works on
transfer learning have focused on transferring knowledge across
domains where the data are assumed to be i.i.d. In many real-world
applications, such as identifying entities in social networks or
classifying web pages, data are often intrinsically non i.i.d., which
poses a major challenge to transfer learning. In this workshop, we call
for papers on the topic of transfer learning for structured data.
Structured data are those that have certain intrinsic structures such as
network topology, and present several challenges to knowledge transfer. A
first challenge is how to judge the relatedness between tasks and avoid
negative transfer. Since data are non i.i.d., standard methods for
measuring the distance between data distributions, such as KL divergence,
Maximum Mean Discrepancy (MMD) and A-distance, may not be applicable. A
second challenge is that the target and source data may be heterogeneous.
For example, a source domain may be a bioinformatics network, while a
target domain may be a network of web pages. In this case, deep transfer or
heterogeneous transfer approaches are required.
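
The point about distribution-distance measures can be made concrete
for the i.i.d. case: quantities such as MMD have simple empirical
estimators when samples are drawn independently, and the open question
raised above is how to extend or replace them when data points are
linked, for example by a network. The following sketch (an
illustration only; the RBF kernel and the bandwidth sigma are
arbitrary choices, not part of this call) computes a biased empirical
MMD^2 between two i.i.d. samples with NumPy:

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise RBF kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd_biased(X, Y, sigma=1.0):
    # Biased estimate: MMD^2 = mean k(X, X) - 2 mean k(X, Y) + mean k(Y, Y).
    return (rbf_kernel(X, X, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean())

rng = np.random.RandomState(0)
X = rng.randn(200, 3)          # "source-domain" sample
Y = rng.randn(200, 3) + 0.5    # "target-domain" sample with shifted mean
print("biased MMD^2 estimate:", mmd_biased(X, Y))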

Heterogeneous transfer learning for structured data is a new area of
research, which concerns transferring knowledge between different
tasks where the data are non-i.i.d. and may even be heterogeneous.
This area has
emerged as one of the most promising areas in machine learning. In this
workshop, we wish to boost the research activities of knowledge transfer
across structured data in the machine learning community. We welcome
theoretical and applied contributions that aim (1) to present novel
knowledge transfer methodologies and frameworks for transfer mining
across structured data, and (2) to investigate effective (automated or
human-machine cooperative) principles and techniques for acquiring,
representing, modeling and applying transfer learning on structured
data in real-world applications.

Goals
—————
This workshop on “Transfer Learning for Structured Data” will bring
together active researchers in artificial intelligence, machine
learning and data mining to develop methods and systems, and to
explore approaches for solving real-world problems associated with
learning on structured data. The workshop invites researchers
interested in transfer learning, statistical relational learning and
structured data mining to contribute their recent work on these
topics.

Topics of Interest
————————
(The topics of interest include but are not limited to the following)

Transfer learning for networked data.
Transfer learning for social networks.
Transfer learning for relational domains.
Transfer learning for non-i.i.d. and/or heterogeneous data.
Transfer learning from multiple structured data sources.
Transfer learning for bioinformatics networks.
Transfer learning for sensor networks.
Theoretical analysis on transfer learning algorithms for structured data.

Paper submission
———————–
We encourage authors to submit extended abstracts of up to 4 pages.
To ensure that the best work in this field can be presented at TLSD,
we also allow authors to submit their published or submitted work of
up to 9 pages. Submissions should use the NIPS style files (available at
http://nips.cc/PaperInformation/StyleFiles), and should include the title,
authors’ names, institutions and email addresses, and a brief abstract.
Accepted papers will be either presented as a talk or poster (with poster
spotlight). Details of submission instructions are available at
http://www.cse.ust.hk/~sinnopan/nips09tlsd/ .

Important Dates
————————
Deadline for submissions: October 26, 2009
Notification of acceptance: November 9, 2009
Deadline for camera-ready version: November 26, 2009
Workshop date: December 12, 2009 (Saturday)

Invited Speakers (Confirmed)
————————
Arthur Gretton, Carnegie Mellon University, USA
Shai Ben-David, University of Waterloo, Canada

Workshop Co-chairs
————————
Sinno Jialin Pan, Hong Kong University of Science and Technology, Hong Kong
Ivor W. Tsang, Nanyang Technological University, Singapore
Le Song, Carnegie Mellon University, USA
Karsten Borgwardt, MPI for Biological Cybernetics, Germany
Qiang Yang, Hong Kong University of Science and Technology, Hong Kong

Program Committee
————————
Andreas Argyriou, Toyota Technological Institute at Chicago, USA
Shai Ben-David, University of Waterloo, Canada
John Blitzer, University of California, USA
Hal Daume III, University of Utah, USA
Jesse Davis, University of Washington, USA
Jing Gao, University of Illinois, Urbana-Champaign, USA
Steven Hoi, Nanyang Technological University, Singapore
Jing Jiang, Singapore Management University, Singapore
Honglak Lee, Stanford University, USA
Lily Mihalkova, University of Maryland, USA
Raymond Mooney, University of Texas at Austin, USA
Massimiliano Pontil, University College London, UK
Masashi Sugiyama, Tokyo Institute of Technology, Japan
Koji Tsuda, AIST Computational Biology Research Center, Japan
Jingdong Wang, Microsoft Research Asia, China
Dong Xu, Nanyang Technological University, Singapore

If you have any questions, please contact us via tlsd09nips@gmail.com.

NIPS 2009 Causality and Time Series Analysis Mini Symposium

NIPS 2009
Causality and Time Series Analysis Mini Symposium
Thursday, December 10, 2009
Hyatt Regency, Vancouver, Canada

Sponsored by Pascal 2

The “NIPS 2009 Causality and Time Series Analysis Mini Symposium” will take place Thursday, December 10, 2009 at the Hyatt Regency, Vancouver, Canada, right after the conference. It is part of the program of the conference, but is an autonomous event.
For this mini symposium we have invited leading experts in the field to give presentations that have both a tutorial component and present leading-edge methods: http://clopinet.com/isabelle/Projects/NIPS2009/

Open PhD positions in Tuebingen, Germany

The

Max Planck research group “Machine Learning in Biology”
led by Gunnar Raetsch (http://www.fml.mpg.de/raetsch) and the

Interdepartmental Bioinformatics Max Planck research group
led by Karsten Borgwardt (http://www.kyb.mpg.de/kb)

have openings for several PhD positions in the field of Machine
Learning & Computational Biology.

Interested applicants should apply through the official PhD programme of
the Max Planck Institute for Developmental Biology and the Friedrich
Miescher Laboratory available at:

http://phd.eb.tuebingen.mpg.de

Deadline: November 25, 2009.

With two Max Planck institutes and the Friedrich Miescher Laboratory,
the Max Planck campus in Tübingen offers an ideal environment for
interdisciplinary research at the interface of Machine Learning and
Biology. The Max Planck Institute for Biological Cybernetics hosts an
excellent department for Machine Learning, led by Bernhard Schoelkopf,
and the Max Planck Institute for Developmental Biology comprises six
departments led by world leaders in their field, including Nobel Prize
winner Christiane Nuesslein-Volhard and Leibniz Prize winner Detlef
Weigel.

COLT 2010 – Preliminary Call for Papers

The 23rd Annual Conference on Learning Theory (COLT 2010) will take place in Haifa, Israel, on June 27-29, 2010 and will be co-located with ICML 2010. We invite submissions of papers addressing theoretical aspects of machine learning and empirical inference. We strongly support a broad definition of learning theory, including:

* Analysis of learning algorithms and their generalization ability
* Computational complexity of learning
* Bayesian analysis
* Statistical mechanics of learning systems
* Optimization procedures for learning
* Kernel methods
* Inductive inference
* Boolean function learning
* Unsupervised and semi-supervised learning and clustering
* On-line learning and relative loss bounds
* Learning in planning and control, including reinforcement learning
* Learning in games, multi-agent learning
* Mathematical analysis of learning in related fields, e.g., game theory, natural language processing, neuroscience, bioinformatics, privacy and security, machine vision, data mining, information retrieval

We are also interested in papers that include viewpoints that are new to the COLT community. We welcome experimental and algorithmic papers provided they are relevant to the focus of the conference by elucidating theoretical results in learning. Also, while the primary focus of the conference is theoretical, papers can be strengthened by the inclusion of relevant experimental results.

Papers that have previously appeared in journals or at other conferences, or that are being submitted to other conferences, are not appropriate for COLT. Papers that include work that has already been submitted for journal publication may be submitted to COLT, as long as the paper has not been accepted for publication (conditionally or otherwise) by the COLT submission deadline and is not expected to be published before the COLT conference (June 2010).

Feedback on Review Quality

There will be no rebuttal phase this year. However, authors will be given the opportunity to assess the quality of reviews and provide feedback to the reviewers, after the decisions have been made. These assessments will be used in particular to determine the Best Reviewer award (see below).

Paper and Reviewer Awards

This year, COLT will award both best paper and best student paper awards. Best student papers must be authored or coauthored by a student. Authors must indicate at submission time if they wish their paper to be eligible for a student award. This does not preclude the paper from also being eligible for the best paper award.

To further emphasize the importance of the reviewing quality, this year, COLT will also award a best reviewer award to the reviewer who has provided the most insightful and useful comments.

Open Problems Session

We also invite submission of open problems (see separate call). These should be constrained to two pages. There is a shorter reviewing period for the open problems. Accepted contributions will be allocated short presentation slots in a special open problems session and will be allowed two pages each in the proceedings.

Paper Format and Electronic Submission Instructions

Formatting and submission instructions will be available in early December at the conference website.

Important Dates

Preliminary call for papers issued
October 15, 2009

Electronic submission of papers (due by 5:59pm PST)
February 19, 2010

Electronic submission of open problems
March 13, 2010

Notice of acceptance or rejection
May 07, 2010

Submission of final version
May 21, 2010

Feedback on reviews due
May 28, 2010

Joint ICML/COLT workshop day
June 25, 2010

2010 COLT conference
June 27-29, 2010

Organization

Program Co-chairs:

* Adam Tauman Kalai (Microsoft Research)
* Mehryar Mohri (Courant Institute of Mathematical Sciences and Google Research)

Program Committee:

Shivani Agarwal
Mikhail Belkin
Shai Ben-David
Nicolò Cesa-Bianchi
Ofer Dekel
Steve Hanneke
Jeff Jackson
Sham Kakade
Vladimir Koltchinskii
Katrina Ligett
Phil Long
Gabor Lugosi
Ulrike von Luxburg
Yishay Mansour
Ryan O’Donnell
Massimiliano Pontil
Robert Schapire
Rocco Servedio
John Shawe-Taylor
Shai Shalev-Shwartz
Gilles Stoltz
Ambuj Tewari
Jenn Wortman Vaughan
Santosh Vempala
Manfred Warmuth
Robert Williamson
Thomas Zeugmann
Tong Zhang

Local Arrangements Chair:

* Shai Fine (IBM Research Haifa)

Invited speakers

* Prof. Noga Alon – School of Mathematical Sciences, Tel Aviv University
* Prof. Noam Nisan – School of Computer Science and Engineering, The Hebrew University of Jerusalem

Special Issue on Statistical and Relational Learning and Mining in Bioinformatics

Special issue of Fundamenta Informaticae on Statistical
and Relational Learning and Mining in Bioinformatics
——————————————————-

Call for papers

There is increasing interest in structured data in the machine
learning community, as shown by the growing number of dedicated
conferences and workshops (MLG, SRL, ILP, MRDM). Bioinformatics is an
application domain of increasing popularity where information is
naturally represented in terms of relations between (possibly
heterogeneous) objects.

The special issue of Fundamenta Informaticae on Statistical and
Relational Learning in Bioinformatics will focus on learning methods
for structured biological data (relational data, graphs, logic-based
descriptions, etc) in the presence of uncertainty (probabilistic logic
models, Bayesian methods, etc). These methods are well-suited for this
application area, since the available data is highly complex and tends
to have a significant amount of missing information.

We therefore invite submissions that describe new theoretical
insights, new methods, problem settings, applications and models
exploiting structured data in the field of biology. Survey papers
discussing the
relationships among the various mathematical frameworks are especially
encouraged. Both new submissions and extended versions of contributions to the
StReBio workshops at ECML-08 and KDD-09 are welcome.

Methods include, but are not restricted to

* Statistical Relational Learning
* Relational Probabilistic Models
* Inductive Logic Programming
* Multi-relational Data Mining
* Graph Methods

The data, structures or models considered can include but are not limited to

* Sequences (DNA, RNA, protein)
* Pathways (chemical, metabolic, mutation, interaction pathways)
* 2D, 3D structures of proteins, RNA
* Chemical structures (e.g. QSAR, especially regarding interaction
of compounds with proteins)
* Evolutionary relations (phylogeny, homology relations)
* Ontology integration (gene, enzyme, protein function ontologies)
* Large networks (regulatory, co-expression, interaction, and
metabolic networks)
* Concept graphs (including compounds, articles, authors, references)

Given the nature of the Fundamenta Informaticae journal, papers should
contain a more theoretical part presenting background, methods and
fundamental properties.

Important dates:
* Abstract submission: November 15th
* Paper submission: November 25th
* Notification: February 15th
* Revised versions: April 15th

Submission information:
Information on formatting (including Latex templates) can be found on
http://fi.mimuw.edu.pl/ (‘submissions’ tab). Please send submissions
to StReBioFI@cs.kuleuven.be

Guest editors:
Jan Ramon (K.U.Leuven, Belgium)
Fabrizio Costa (K.U.Leuven, Belgium)
Christophe Florencio Costa (K.U.Leuven, Belgium)
Joost Kok (Leiden University, Netherlands)

ICMI-MLMI 2009

ICMI-MLMI 2009
MIT Media Lab, Cambridge MA, USA
November 2-6, 2009

http://icmi2009.acm.org

Call for participation

The Eleventh International Conference on Multimodal Interfaces
and Workshop on Machine Learning for Multimodal Interaction
(ICMI-MLMI) will jointly take place at the MIT Media Lab in
Cambridge, MA, USA on November 2-6, 2009. The main aim of
ICMI-MLMI 2009 is to further scientific research within the
field of multimodal interaction, methods and systems. The joint
conference will focus on major trends and challenges in this
area, and work to identify a roadmap for future research and
commercial success. ICMI-MLMI 2009 will feature a single-track
main conference with keynote speakers, technical paper
presentations, poster sessions, demonstrations, and workshops.

General Co-chairs

James Crowley, INRIA
Yuri Ivanov, MERL
Christopher Wren, Google

Program Co-Chairs

Daniel Gatica-Perez, Idiap Research Institute
Michael Johnston, AT&T Research
Rainer Stiefelhagen, University of Karlsruhe

Supported by PASCAL 2
———————————————————-

Call for Papers – IJCV Special Issue – Structured Prediction and Inference – 26FEB2010

Dear colleagues,

We would like to announce an upcoming IJCV special issue
on STRUCTURED PREDICTION AND INFERENCE. The call for
papers follows below. A PDF version is available from
the IJCV website: http://www.springer.com/11263

Best regards,

Matthew Blaschko and Christoph Lampert

**************************************************************************

——————————————————-
International Journal of Computer Vision
Special Issue on
Structured Prediction and Inference
——————————————————-

Guest Editors
Matthew B. Blaschko, University of Oxford (blaschko@robots.ox.ac.uk)
Christoph H. Lampert, Max Planck Institute for Biological Cybernetics,
Tuebingen, Germany (chl@tuebingen.mpg.de)
——————————————————-

Background

Many computer vision problems can be formulated naturally as prediction
tasks of structured objects. Image segmentation, stereo reconstruction,
human pose estimation and natural scene analysis are all examples of such
problems, in which the quantity one tries to predict consists of multiple
interdependent parts. The structured output learning paradigm offers a
natural framework for such tasks, and recently introduced methods for
end-to-end discriminative training of conditional random fields (CRFs)
and structured support vector machines (S-SVMs) for image classification
and interpretation show that computer vision is not just a consumer of
existing machine learning developments in this area, but one of the driving
forces behind their development. The complexity of structured prediction
models makes the problem of inference in these models an integral part of
their analysis. While the machine learning literature has largely focused
on message passing, computer vision research has introduced novel
applications of branch-and-bound and graph cuts as inference algorithms.
Articles addressing these issues are particularly encouraged for submission
to the special issue.
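
As a small, self-contained illustration of the inference problem (a
hedged sketch with hypothetical scores, not a method proposed in this
call), the Python snippet below performs exact MAP inference by
max-product dynamic programming in a chain-structured model. It is
precisely this kind of exact computation that becomes intractable for
the loopy, grid-structured or higher-order models common in vision,
motivating the message-passing, graph-cut and branch-and-bound
techniques mentioned above.

import numpy as np

def viterbi(unary, pairwise):
    # Exact MAP labelling of a chain: unary[t, s] scores state s at position t,
    # pairwise[s, s2] scores the transition s -> s2 between adjacent positions.
    T, S = unary.shape
    score = unary[0].copy()               # best score of any path ending in each state
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise  # cand[i, j]: best path ending in i, then moving to j
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + unary[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1], float(score.max())

rng = np.random.RandomState(0)
unary = rng.randn(6, 3)                   # 6 positions, 3 labels (hypothetical scores)
pairwise = np.array([[1.0, -1.0, -1.0],   # favour keeping the same label
                     [-1.0, 1.0, -1.0],
                     [-1.0, -1.0, 1.0]])
labels, best = viterbi(unary, pairwise)
print("MAP labelling:", labels, "score:", best)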

Topics

Original papers are solicited that address one or more aspects of the
structured prediction framework in a computer vision setting, that is,
they address the problem of prediction from an input space, such as
images or video, to a structured and interdependent output space.
Submissions can be theoretical or applied contributions as well as
position papers. Topics of
interest include, but are not limited to:

* Training for structured output learning
– Probabilistic vs. max-margin training
– Generative vs. discriminative training
– Semi-supervised or unsupervised learning
– Dealing with label noise

* Inference methods for structured output learning
– Exact vs. approximate inference techniques
– Pixel, voxel, and superpixel random field optimization
– Priors and higher order clique optimization
– Approaches that scale to large amounts of training and test data

* Computer vision applications of structured output learning
– Segmentation
– Stereo reconstruction
– Relationship between scene components
– Hierarchical models

Authors are encouraged to submit high quality, original work that has
neither appeared in, nor is under consideration by, other journals. All open
submissions will be peer reviewed subject to the standards of the journal.
Manuscripts based on previously published conference papers must be extended
substantially. Springer offers authors, editors and reviewers of the
International Journal of Computer Vision a web-enabled online manuscript
submission and review system. Our online system offers authors the ability
to track the review process of their manuscript. Manuscripts should be
submitted to: http://VISI.edmgr.com. This online system offers easy and
straightforward log-in and submission procedures, and supports a wide range
of submission file formats.

* Paper submission deadline:
– February 26, 2010
* Estimated Online Publication:
– Fall, 2010

=================================================