In 2015, ChaLearn is organizing parallel challenge tracks on RGB data for human pose recovery, action/interaction spotting, and cultural event classification. For each track, the first-, second- and third-place winners will receive awards of 500, 300 and 200 dollars, respectively.

 The challenge features three quantitative tracks:

Track 1: Human Pose Recovery: More than 8,000 frames of continuous RGB sequences have been recorded and labeled for human pose recovery, with the objective of recognizing more than 120,000 human limbs belonging to different people. Examples of labeled frames are shown in Fig. 1.

Track 2: Action/Interaction Recognition: 235 performances of 11 action/interaction categories have been recorded and manually labeled in continuous RGB sequences of different people performing natural isolated and collaborative actions at random points in the sequences. Examples of labeled actions are shown in Fig. 1.

Track 3: Cultural Event Classification: More than 10,000 images corresponding to 50 different cultural event categories will be considered. In all the categories, garments, human poses, objects and context can serve as cues for recognizing the events, while preserving the inherent inter- and intra-class variability of this type of image. Example categories include Carnival, Oktoberfest, San Fermin, Maha-Kumbh-Mela and Aoi-Matsuri (see Fig. 2). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for creating the baseline of this track.


The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided. The twenty object classes that have been selected are:

  • Person: person
  • Animal: bird, cat, cow, dog, horse, sheep
  • Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
  • Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor

There will be three main competitions: classification, detection, and segmentation; and three “taster” competitions: person layout, action classification, and ImageNet large scale recognition.

Classification/Detection Competitions

  1. Classification: For each of the twenty classes, predicting presence/absence of an example of that class in the test image.
  2. Detection: Predicting the bounding box and label of each object from the twenty target classes in the test image.

Segmentation Competition

  • Segmentation: Generating pixel-wise segmentations giving the class of the object visible at each pixel, or “background” otherwise.

Person Layout Taster Competition

  • Person Layout: Predicting the bounding box and label of each part of a person (head, hands, feet).

Action Classification Taster Competition

  • Action Classification: Predicting the action(s) being performed by a person in a still image, over 10 action classes plus “other”.

Participants may enter either (or both) of the classification and detection competitions, and can choose to tackle any (or all) of the twenty object classes. The challenge allows for two approaches to each of the competitions:

  1. Participants may use systems built or trained using any methods or data excluding the provided test sets.
  2. Systems are to be built or trained using only the provided training/validation data.

Data

To download the training/validation data, see the development kit.

The training data provided consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object in one of the twenty classes present in the image. Note that multiple objects from multiple classes may be present in the same image. Some example images can be viewed online. A subset of images are also annotated with pixel-wise segmentation of each object present, to support the segmentation competition. Some segmentation examples can be viewed online.
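For orientation (a minimal Python sketch, assuming the standard PASCAL VOC XML annotation layout; the development kit's MATLAB code is the authoritative reader), a per-image annotation file can be parsed as follows:

import xml.etree.ElementTree as ET

def read_annotation(xml_path):
    """Parse a PASCAL VOC annotation file into a list of labelled boxes."""
    root = ET.parse(xml_path).getroot()
    objects = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        objects.append({
            "class": obj.find("name").text,
            "bbox": tuple(int(float(box.find(k).text))
                          for k in ("xmin", "ymin", "xmax", "ymax")),
        })
    return objects

# Hypothetical usage: one annotation file per image.
# print(read_annotation("VOC2011/Annotations/2011_000001.xml"))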

Annotation was performed according to a set of guidelines distributed to all annotators.

The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission.

In the second stage, the test set will be made available for the actual competition. As in the VOC2008-2010 challenges, no ground truth for the test data will be released.

The data has been split into 50% for training/validation and 50% for testing. The distributions of images and objects by class are approximately equal across the training/validation and test sets. In total there are 28,952 images. Further statistics are online.

Example images

Example images and the corresponding annotation for the classification/detection/segmentation tasks and the person layout taster can be viewed online.

Development Kit

The development kit consists of the training/validation data, MATLAB code for reading the annotation data, support files, and example implementations for each competition.

The development kit will be available according to the timetable.

Test Data

The test data is now available. Note that the only annotation in the data is for the layout/action taster competitions. As in 2008-2010, there are no current plans to release full annotation – evaluation of results will be provided by the organizers.

The test data can now be downloaded from the evaluation server. You can also use the evaluation server to evaluate your method on the test data.

Useful Software

Below is a list of software you may find useful, contributed by participants to previous challenges.

Timetable

  • May 2011: Development kit (training and validation data plus evaluation software) made available.
  • June 2011: Test set made available.
  • 13 October 2011 (Thursday, 2300 hours GMT): Extended deadline for submission of results. There will be no further extensions.
  • 07 November 2011: Challenge Workshop in association with ICCV 2011, Barcelona.

Submission of Results

Participants are expected to submit a single set of results per method employed. Participants who have investigated several algorithms may submit one result per method. Changes in algorithm parameters do not constitute a different method – all parameter tuning must be conducted using the training and validation data alone.

Results must be submitted using the automated evaluation server.

It is essential that your results files are in the correct format. Details of the required file formats for submitted results can be found in the development kit documentation. The results files should be collected in a single archive file (tar/tgz/tar.gz).
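As an illustration only (the authoritative file names and formats are those defined in the development kit documentation; the paths below are hypothetical), a results archive might be assembled like this:

import tarfile

# Hypothetical result files; the required names and formats are
# specified in the development kit documentation.
result_files = [
    "results/VOC2011/Main/comp1_cls_test_aeroplane.txt",
    "results/VOC2011/Main/comp1_cls_test_bicycle.txt",
]

with tarfile.open("results.tar.gz", "w:gz") as tar:
    for path in result_files:
        tar.add(path)                 # one archive per method, as required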

Participants submitting results for several different methods (noting the definition of different methods above) should produce a separate archive for each method.

In addition to the results files, participants will need to specify:

  • contact details and affiliation
  • list of contributors
  • description of the method (minimum 500 characters) – see below

New in 2011, we require all submissions to be accompanied by an abstract describing the method, of minimum length 500 characters. The abstract will be used in part to select invited speakers at the challenge workshop. If you are unable to submit a description, e.g. due to commercial interests or other issues of confidentiality, you must contact the organisers to discuss this. Below are two example descriptions, for classification and detection methods previously presented at the challenge workshop. Note these are our own summaries, not provided by the original authors.

  • Example Abstract: Object classification
    Based on the VOC2006 QMUL description of LSPCH by Jianguo Zhang, Cordelia Schmid, Svetlana Lazebnik, Jean Ponce in sec 2.16 of The PASCAL Visual Object Classes Challenge 2006 (VOC2006) Results. We make use of a bag-of-visual-words method (cf Csurka et al 2004). Regions of interest are detected with a Laplacian detector (Lindeberg, 1998), and normalized for scale. A SIFT descriptor (Lowe 2004) is then computed for each detection. 50,000 randomly selected descriptors from the training set are then vector quantized (using k-means) into k=3000 “visual words” (300 for each of the 10 classes). Each image is then represented by the histogram of how often each visual word is used. We also make use of a spatial pyramid scheme (Lazebnik et al, CVPR 2006). We first train SVM classifiers using the chi^2 kernel based on the histograms of each level in the pyramid. The outputs of these SVM classifiers are then concatenated into a feature vector for each image and used to learn another SVM classifier based on a Gaussian RBF kernel. (A schematic code sketch of this pipeline follows this list.)
  • Example Abstract: Object detection
    Based on “Object Detection with Discriminatively Trained Part Based Models”; Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester and Deva Ramanan; IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 9, September 2010. We introduce a discriminatively-trained parts-based model for object detection. The model consists of a coarse “root” template of HOG features (Dalal and Triggs, 2006), plus a number of higher-resolution part-based HOG templates which can translate in a neighborhood relative to their default position. The responses of the root and part templates are combined by a latent-SVM model, where the latent variables are the offsets of the parts. We introduce a novel training algorithm for the latent SVM. We also make use of an iterative training procedure exploiting “hard negative” examples, which are negative examples incorrectly classified in an earlier iteration. Finally the model is scanned across the test image in a “sliding-window” fashion at a variety of scales to produce candidate detections, followed by greedy non-maximum suppression. The model is applied to all 20 PASCAL VOC object detection challenges.
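To make the first example abstract more concrete, the following is a schematic sketch of the core bag-of-visual-words pipeline: vector quantization of local descriptors, per-image histograms, and a chi-squared-kernel SVM. It uses random toy data in place of real SIFT descriptors and omits the detector and spatial pyramid stages; it is our illustration, not the original authors' implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics.pairwise import chi2_kernel

# Toy stand-ins for SIFT descriptors: one (n_detections, 128) array per image.
rng = np.random.default_rng(0)
train_desc = [np.abs(rng.normal(size=(60, 128))) for _ in range(40)]
labels = rng.integers(0, 2, size=40)            # toy presence/absence labels

# 1. Vector-quantize training descriptors into "visual words" with k-means.
codebook = KMeans(n_clusters=50, n_init=3, random_state=0)
codebook.fit(np.vstack(train_desc))

# 2. Represent each image by its normalised visual-word histogram.
def bow_histogram(desc):
    hist = np.bincount(codebook.predict(desc), minlength=50).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(d) for d in train_desc])

# 3. Train an SVM with a precomputed chi-squared kernel on the histograms.
clf = SVC(kernel="precomputed").fit(chi2_kernel(X, X), labels)
print(clf.predict(chi2_kernel(X[:5], X)))       # predicted labels, 5 images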

If you would like to submit a more detailed description of your method, for example a relevant publication, this can be included in the results archive.

Best Practice

The VOC challenge encourages two types of participation: (i) methods which are trained using only the provided “trainval” (training + validation) data; (ii) methods built or trained using any data except the provided test data, for example commercial systems. In both cases the test data must be used strictly for reporting of results alone – it must not be used in any way to train or tune systems, for example by running multiple parameter choices and reporting the best results obtained.

If using the training data we provide as part of the challenge development kit, all development, e.g. feature selection and parameter tuning, must use the “trainval” (training + validation) set alone. One way is to divide the set into training and validation sets (as suggested in the development kit). Other schemes e.g. n-fold cross-validation are equally valid. The tuned algorithms should then be run only once on the test data.
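For instance (a minimal sketch; the identifiers and split sizes below are hypothetical), n-fold cross-validation over the trainval image list could look like this:

from sklearn.model_selection import KFold

# Hypothetical trainval image identifiers from the development kit.
trainval = [f"2011_{i:06d}" for i in range(100)]

for fold, (tr, va) in enumerate(KFold(n_splits=5).split(trainval)):
    train_ids = [trainval[i] for i in tr]
    val_ids = [trainval[i] for i in va]
    # ... train on train_ids, tune on val_ids; the test set is never touched
    print(fold, len(train_ids), len(val_ids))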

In VOC2007 we made all annotations available (i.e. for training, validation and test data) but since then we have not made the test annotations available. Instead, results on the test data are submitted to an evaluation server.

Since algorithms should only be run once on the test data we strongly discourage multiple submissions to the server (and indeed the number of submissions for the same algorithm is strictly controlled), as the evaluation server should not be used for parameter tuning.

We encourage you always to publish test results on the latest release of the challenge, using the output of the evaluation server. If you wish to compare methods or design choices e.g. subsets of features, then there are two options: (i) use the entire VOC2007 data, where all annotations are available; (ii) report cross-validation results using the latest “trainval” set alone.

Policy on email address requirements when registering for the evaluation server

In line with the Best Practice procedures (above) we restrict the number of times that the test data can be processed by the evaluation server. To prevent any abuses of this restriction an institutional email address is required when registering for the evaluation server. This aims to prevent one user registering multiple times under different emails. Institutional emails include academic ones, such as name@university.ac.uk, and corporate ones, but not personal ones, such as name@gmail.com or name@123.com.

Publication Policy

The main mechanism for dissemination of the results will be the challenge webpage.

The detailed output of each submitted method will be published online e.g. per-image confidence for the classification task, and bounding boxes for the detection task. The intention is to assist others in the community in carrying out detailed analysis and comparison with their own methods. The published results will not be anonymous – by submitting results, participants are agreeing to have their results shared online.

Citation

If you make use of the VOC2011 data, please cite the following reference (to be prepared after the challenge workshop) in any publications:

@misc{pascal-voc-2011,
	author = "Everingham, M. and Van~Gool, L. and Williams, C. K. I. and Winn, J. and Zisserman, A.",
	title = "The {PASCAL} {V}isual {O}bject {C}lasses {C}hallenge 2011 {(VOC2011)} {R}esults",
	howpublished = "http://www.pascal-network.org/challenges/VOC/voc2011/workshop/index.html"}	

Database Rights

The VOC2011 data includes images obtained from the “flickr” website. Use of these images must respect the corresponding terms of use.

For the purposes of the challenge, the identity of the images in the database, e.g. source and name of owner, has been obscured. Details of the contributor of each image can be found in the annotation to be included in the final release of the data, after completion of the challenge. Any queries about the use or ownership of the data should be addressed to the organizers.

Organizers

  • Mark Everingham (University of Leeds), m.everingham@leeds.ac.uk
  • Luc van Gool (ETHZ, Zurich)
  • Chris Williams (University of Edinburgh)
  • John Winn (Microsoft Research Cambridge)
  • Andrew Zisserman (University of Oxford)

Acknowledgements

We gratefully acknowledge the following, who spent many long hours providing annotation for the VOC2011 database:

Yusuf Aytar, Jan Hendrik Becker, Ken Chatfield, Miha Drenik, Chris Engels, Ali Eslami, Adrien Gaidon, Jyri Kivinen, Markus Mathias, Paul Sturgess, David Tingdahl, Diana Turcsany, Vibhav Vineet, Ziming Zhang.

We also thank Sam Johnson for development of the annotation system for Mechanical Turk, and Yusuf Aytar for further development and administration of the evaluation server.

Support

The preparation and running of this challenge is supported by the EU-funded PASCAL2 Network of Excellence on Pattern Analysis, Statistical Modelling and Computational Learning.

We are grateful to Alyosha Efros for providing additional funding for annotation on Mechanical Turk.

Introduction

Given two text fragments called ‘Text’ and ‘Hypothesis’, Textual Entailment Recognition is the task of determining whether the meaning of the Hypothesis is entailed (can be inferred) from the Text. The goal of the first RTE Challenge was to provide the NLP community with a benchmark to test progress in recognizing textual entailment, and to compare the achievements of different groups. Since its inception in 2004, the RTE Challenges have promoted research in textual entailment recognition as a generic task that captures major semantic inference needs across many natural language processing applications, such as Question Answering (QA), Information Retrieval (IR), Information Extraction (IE), and multi-document Summarization.

After the first three highly successful PASCAL RTE Challenges, RTE became a track at the 2008 Text Analysis Conference, which brought it together with communities working on NLP applications. The interaction has provided the opportunity to apply RTE systems to specific applications and to move the RTE task towards more realistic application settings.

RTE-7 pursues the direction taken in RTE-6, focusing on textual entailment in context, where the entailment decision draws on the larger context available in the targeted application settings.

RTE-7 Tasks

The RTE-7 tasks focus on recognizing textual entailment in two application settings: Summarization and Knowledge Base Population.

  1. Main Task (Summarization setting): Given a corpus and a set of “candidate” sentences retrieved by Lucene from that corpus, RTE systems are required to identify all the sentences from among the candidate sentences that entail a given Hypothesis. The RTE-7 Main Task is based on the TAC Update Summarization Task. In the Update Summarization Task, each topic contains two sets of documents (“A” and “B”), where all the “A” documents chronologically precede all the “B” documents. An RTE-7 Main Task “corpus” consists of 10 “A” documents, while Hypotheses are taken from sentences in the “B” documents. (A toy sketch of such an entailment search appears after this list.)
  2. Novelty Detection Subtask (Summarization setting): In the Novelty Detection variant of the Main Task, systems are required to judge if the information contained in each H (based on text snippets from B summaries) is novel with respect to the information contained in the A documents related to the same topic. If entailing sentences are found for a given H, it means that the content of H is not new; if no entailing sentences are detected, it means that information contained in the H is novel.
  3. KBP Validation Task (Knowledge Base Population setting): Based on the TAC Knowledge Base Population (KBP) Slot-Filling task, the KBP validation task is to determine whether a given relation (Hypothesis) is supported in an associated document (Text). Each slot fill that is proposed by a system for the KBP Slot-Filling task would create one evaluation item for the RTE-KBP Validation Task: The Hypothesis would be a simple sentence created from the slot fill, while the Text would be the source document that was cited as supporting the slot fill.
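As a toy illustration of the Main Task setting (our sketch, not an official baseline), a simple word-overlap heuristic can be used to decide which candidate sentences entail a Hypothesis:

def token_overlap(text, hypothesis):
    """Fraction of hypothesis tokens that also occur in the text."""
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(h & t) / len(h) if h else 0.0

def entailing_sentences(candidates, hypothesis, threshold=0.8):
    """Return the candidate sentences judged to entail the hypothesis."""
    return [s for s in candidates if token_overlap(s, hypothesis) >= threshold]

# Hypothetical example:
h = "the company acquired the startup"
cands = ["The company acquired the startup in 2010.",
         "The startup rejected the offer."]
print(entailing_sentences(cands, h))   # only the first sentence is returned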

Schedule

Proposed RTE-7 Schedule

  • April 29: Main Task – Release of Development Set
  • April 29: KBP Validation Task – Release of Development Set
  • June 10: Deadline for TAC 2011 track registration
  • August 17: KBP Validation Task – Release of Test Set
  • August 29: Main Task – Release of Test Set
  • September 8: Main Task – Deadline for task submissions
  • September 15: Main Task – Release of individual evaluated results
  • September 16: KBP Validation Task – Deadline for task submissions
  • September 23: KBP Validation Task – Release of individual evaluated results
  • September 25: Deadline for TAC 2011 workshop presentation proposals
  • September 29: Main Task – Deadline for ablation tests submissions
  • October 6: Main Task – Release of individual ablation test results
  • October 25: Deadline for system reports (workshop notebook version)
  • November 14-15: TAC 2011 Workshop

Mailing List

The mailing list for the RTE Track is rte@nist.gov. The list is used to discuss and define the task guidelines for the track, as well as for general discussion related to textual entailment and its evaluation. To subscribe, send a message to listproc@email.nist.gov such that the body consists of the line:
subscribe rte <FirstName> <LastName>
In order for your messages to get posted to the list, you must send them from the email address used when you subscribed to the list. To unsubscribe, send a message from the subscribed email address to listproc@email.nist.gov such that the body consists of the line:
unsubscribe rte
For additional information on how to use mailing lists hosted at NIST, send a message to listproc@email.nist.gov such that the body consists of the line:
HELP

Organizing Committee

Luisa Bentivogli, CELCT and FBK, Italy
Peter Clark, Vulcan Inc., USA
Ido Dagan, Bar Ilan University, Israel
Hoa Trang Dang, NIST, USA
Danilo Giampiccolo, CELCT, Italy


The Pascal Exploration & Exploitation Challenge seeks to improve the relevance of content presented to visitors of a website, based on their individual interests.

Task

In this challenge, the submitted algorithms have to predict which visitors of a website are likely to click on which piece of content. Visitors are characterised by a set of 120 features. Predicting clicks accurately, based on these features, is essential for presenting content that is relevant to visitors’ interests. It requires continuously learning what might be of interest (exploration), while using this learning to serve relevant content often enough (exploitation).

Evaluation

Algorithms are run online (i.e. they receive their input sequentially) on data provided by Adobe Omniture, which closely simulates an actual web campaign. Each visitor click gives a reward of 1, and the best algorithm is the one with the highest cumulative reward at the end. The challenge will be run in phases, in between which the participants will have the opportunity to update their algorithms based on previous observations.


Dataset

From its experience in web analytics, Adobe Omniture has created a dataset that simulates the responses to a web campaign, with changes over time. The dataset comprises about 20 million {visitor feature vector, option id, binary clickthrough indicator} records, each representing a single visit to the website. For each visitor feature vector v of the dataset, and for each option i, the binary clickthrough indicator indicates whether v would click on i.
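As a rough illustration of the explore/exploit trade-off in this setting (a minimal epsilon-greedy sketch on synthetic data; the real challenge interface, option count and click model are different), one can maintain an online linear model per option:

import numpy as np

rng = np.random.default_rng(0)
n_options, n_features = 5, 120

# Toy simulator standing in for the real data: a 120-feature visitor
# vector per visit, plus a click indicator for every option.
true_w = 0.1 * rng.normal(size=(n_options, n_features))

def simulate_visit():
    v = rng.normal(size=n_features)
    clicks = (rng.random(n_options) < 1 / (1 + np.exp(-true_w @ v))).astype(int)
    return v, clicks

# Epsilon-greedy with one online ridge-regression model per option.
A = [np.eye(n_features) for _ in range(n_options)]
b = [np.zeros(n_features) for _ in range(n_options)]
eps, total_reward = 0.1, 0

for t in range(2000):
    v, clicks = simulate_visit()
    if rng.random() < eps:                       # explore a random option
        i = int(rng.integers(n_options))
    else:                                        # exploit current estimates
        i = int(np.argmax([np.linalg.solve(A[k], b[k]) @ v
                           for k in range(n_options)]))
    r = clicks[i]                                # reward 1 on click
    total_reward += r
    A[i] += np.outer(v, v)                       # online least-squares update
    b[i] += r * v

print("cumulative reward:", total_reward)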

We are pleased to announce the 4th edition of the Large Scale Hierarchical Text Classification (LSHTC) Challenge. The LSHTC Challenge is a hierarchical text classification competition using very large datasets. This year’s challenge focuses on interesting learning problems such as multi-task and refinement learning.

Hierarchies are becoming ever more popular for the organization of text documents, particularly on the Web. Web directories and Wikipedia are two examples of such hierarchies. Along with their widespread use comes the need for automated classification of new documents to the categories in the hierarchy. As the size of the hierarchy grows and the number of documents to be classified increases, a number of interesting machine learning problems arise. In particular, it is one of the rare situations where data sparsity remains an issue despite the vastness of available data: as more documents become available, more classes are also added to the hierarchy, and there is a very high imbalance between the classes at different levels of the hierarchy. Additionally, the statistical dependence of the classes poses challenges and opportunities for new learning methods.

The challenge consists of 3 tracks, involving different category systems with different data properties and focusing on different learning and mining problems. The challenge is based on two large datasets: one created from the ODP web directory (DMOZ) and one from Wikipedia. The datasets are multi-class, multi-label and hierarchical. The number of categories ranges roughly between 13,000 and 325,000, and the number of documents between 380,000 and 2,400,000. More information regarding the tracks and challenge rules can be found on the “Datasets, Tracks, Rules and Guidelines” page.
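To give a flavour of one standard approach to such hierarchies (a top-down scheme; this is our sketch on toy single-label data, not a challenge baseline), one local classifier can be trained per internal node and applied greedily from the root:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a category hierarchy: parent -> children (leaves are labels).
tree = {"root": ["science", "arts"],
        "science": ["physics", "biology"],
        "arts": ["music", "painting"]}

def descendants(node):
    out = set()
    for child in tree.get(node, []):
        out |= {child} | descendants(child)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))                 # toy document vectors
y = rng.choice(["physics", "biology", "music", "painting"], size=400)

# Train one local classifier per internal node on the documents falling
# under that node, predicting which child subtree each belongs to.
local = {}
for node, children in tree.items():
    mask = np.array([any(l == c or l in descendants(c) for c in children)
                     for l in y])
    targets = [next(c for c in children if l == c or l in descendants(c))
               for l in y[mask]]
    local[node] = LogisticRegression(max_iter=200).fit(X[mask], targets)

def classify(x):
    node = "root"
    while node in tree:                        # walk down until a leaf
        node = local[node].predict(x.reshape(1, -1))[0]
    return node

print(classify(X[0]))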

Participants will be able to smoothly and continuously submit runs, in order to improve their systems.

In order to register for the challenge and gain access to the datasets you must have an account at the challenge Web site.

Organisers:

Massih-Reza Amini, LIG, Grenoble, France
Ion Androutsopoulos, AUEB, Athens, Greece
Thierry Artières, LIP6, Paris, France
Nicolas Baskiotis, LIP6, Paris, France
Patrick Gallinari, LIP6, Paris, France
Eric Gaussier, LIG, Grenoble, France
Aris Kosmopoulos, NCSR “Demokritos” & AUEB, Athens, Greece
George Paliouras, NCSR “Demokritos”, Athens, Greece
Ioannis Partalas, LIG, Grenoble, France

This challenge addresses a question of fundamental and practical interest in machine learning: the assessment of data representations produced by unsupervised learning procedures, for use in supervised learning tasks. It also addresses the evaluation of transfer learning methods capable of producing data representations useful across many similar supervised learning tasks, after training on supervised data from only one of them.

Classification problems are found in many application domains, including pattern recognition (classification of images or videos, speech recognition), medical diagnosis, marketing (customer categorization), and text categorization (filtering of spam). The category identifiers are referred to as “labels”. Predictive models capable of classifying new instances (correctly predicting the labels) usually require “training” (parameter adjustment) using large amounts of labeled training data (pairs of examples of instances and associated labels). Unfortunately, little labeled training data may be available due to the cost or burden of manually annotating data. Recent research has focused on making use of the vast amounts of unlabeled data available at low cost, including: space transformations, dimensionality reduction, hierarchical feature representations (“deep learning”), and kernel learning. However, these advances tend to be ignored by practitioners, who continue using a handful of popular algorithms like PCA, ICA, k-means, and hierarchical clustering. The goal of this challenge is to perform an evaluation of unsupervised and transfer learning algorithms free of inventor bias, to help identify and popularize algorithms that have advanced the state of the art.

Five datasets from various domains are made available. The participants should submit on-line transformed data representations (or similarity/kernel matrices) on a validation set and a final evaluation set in a prescribed format. The data representations (or similarity/kernel matrices) are evaluated by the organizers on supervised learning tasks unknown to the participants. The results on the validation set are displayed on the leaderboard to provide immediate feedback. The results on the final evaluation set will be revealed only at the end of the challenge. To emphasize the capability of the learning systems to develop useful abstractions, the supervised learning tasks used to evaluate them make use of very few labeled training examples, and the classifier used is a simple linear discriminant classifier. The challenge will proceed in 2 phases:

  • Phase 1 — Unsupervised learning: There exist a number of methods that produce new data representations (or kernels) from purely unlabeled data. Such unsupervised methods are sometimes used as preprocessing to supervised learning procedures. In the first phase of the challenge, no labels will be provided to the participants. The participants are requested to produce data representations (or similarity/kernel matrices) that will be evaluated by the organizers on supervised learning tasks (i.e. using labeled data not available to the participants).
  • Phase 2 — Transfer learning: In other practical settings, it is desirable to produce data representations that are re-usable from domain to domain. We want to examine the possibility that a representation developed with one set of labels can be used to learn a new, similar task more easily. For example, in the handwriting recognition domain, labeled handwritten digits would be available for training. The evaluation task would then be the recognition of handwritten alphabetical letters. We call this setting “transfer learning”. In the second phase of the challenge, some labels will be provided to the participants for the same datasets used in the first phase. This will allow the participants to improve their data representation (or similarity/kernel matrices) using supervised tasks similar to (but different from) the task on which they will be tested.
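As a concrete (and deliberately simplistic) illustration of a phase 1 entry, a participant could submit a PCA representation computed from unlabeled data alone; the few-labels linear evaluation below mimics, under our own assumptions, the kind of assessment described above:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                 # toy stand-in dataset

# Phase 1: build a representation from unlabeled data alone.
X_rep = PCA(n_components=10).fit_transform(X)   # the (p, n') matrix to submit

# Mimic the organizers' evaluation: a simple linear classifier trained on
# very few labeled examples in the submitted representation.
y = rng.integers(0, 2, size=1000)               # labels unknown to entrants
few = rng.choice(1000, size=20, replace=False)
clf = LinearDiscriminantAnalysis().fit(X_rep[few], y[few])
print("toy evaluation accuracy:", clf.score(X_rep, y))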

Competition Rules

  • Goal of the challenge: Given a data matrix of samples represented as feature vectors (p samples in rows and n features in columns), produce another data matrix of dimension (p, n’) (the transformed representation of n’ new features) or a similarity/kernel matrix between samples of dimension (p, p). The transformed representations (or similarity/kernel matrices) should provide good results on the supervised learning tasks used by the organizers to evaluate them. The labels of the supervised learning tasks used for evaluation purposes will remain unknown to the participants in phases 1 and 2, but other labels will be made available for transfer learning in phase 2.
  • Prizes: The winners of each phase will be awarded prizes; see the Prizes page for details.
  • Dissemination: The challenge is part of the competition program of the IJCNN 2011 conference, San Jose, California, July 31 – August 5, 2011. We are organizing a special session and a competition workshop at IJCNN 2011 to discuss the results of the challenge. We are also organizing a workshop at ICML 2011, Bellevue, Washington, July 2, 2011. There are three publication opportunities: in JMLR W&CP, in the IEEE proceedings of IJCNN 2011, and in the ICML proceedings.
  • Schedule:
    Dec. 25, 2010 Start of the development period. Phase 0: Registration and submissions open. Rules, toy data, and sample code made available.
    Jan. 3, 2011 Start of phase 1: UNSUPERVISED LEARNING. Datasets made available. No labels available.
    Feb. 1, 2011 IJCNN 2011 papers due (optional).
    March 3, 2011 End of phase 1, at midnight (0 h Mar. 4, server time — time indicated on the Submit page).
    March 4, 2011 Start of phase 2: TRANSFER LEARNING. Training labels made available for transfer learning.
    April 1, 2011 IJCNN paper decision notification.
    April 15, 2011 End of the challenge at midnight (0 h April 16, server time — time indicated on the Submit page). Submissions closed. [Note: the grace period until April 20 has been canceled]
    April 22, 2011 All teams must turn in fact sheets (compulsory). The fact sheets will be used as abstracts for the workshops. Reviewers and participants are given access to provisional rankings and fact sheets.
    April 29, 2011 ICML 2011 papers due, to be published in JMLR W&CP (optional).
    May 1, 2011 Camera ready copies of IJCNN papers due.
    May 20, 2011 Release of the official ranking. Notification of abstract and paper acceptance.
    July 2, 2011 Workshop at ICML 2011, Bellevue, Washington state, USA. Confirmed.
    July 31 – Aug. 5, 2011 Special session and workshop at IJCNN 2011, San Jose, California, USA. Confirmed.
    Aug. 7, 2011 Reviews of JMLR W&CP papers sent back to authors.
    Sep. 30, 2011 Revised JMLR W&CP papers due.
  • Challenge protocol: (1) Development: From the outset of the challenge, all unlabeled development and evaluation data will be provided to the participants. All data will be preprocessed into a feature representation, such that the patterns are not easily recognizable by humans, making it difficult to label data using human experts. During development the participants may make submissions of a feature-based representation (or a similarity/kernel matrix) for a subset of the evaluation data (called the validation set). They will receive on-line feedback on the quality of their representation (or similarity measure) via a number of scoring metrics. (2) Final evaluation: To participate in the final evaluation the participants will have to (i) register as mutually exclusive teams; (ii) make one “final” correct submission of a feature-based representation (or similarity/kernel matrix) for the final evaluation data for all 5 datasets of the challenge; (iii) submit the answers to a questionnaire on their method (method fact-sheet); and (iv) compete either in one of the two phases only or in both phases (it is not necessary to compete in both phases to earn prizes).
  • Baseline results: Results using baseline methods will be provided on the website of the challenge by the organizing team. Those results will be clearly marked as “ULref”. The most basic baseline result is obtained using the raw data. To qualify for prizes, the participants should exceed the performances on raw data for all the datasets of the challenge.
  • Eligibility of participation: Anybody complying with the rules of the challenge, with the exception of the organizers, is eligible to enter the challenge. To enter results and get on-line feedback, the participants must make themselves known to the organizers by registering and providing a valid email so the organizers can communicate with them. However, the participants may remain anonymous to the outside world. To participate in the final test rounds, the participants will have to register as teams. No participant will be allowed to enter as part of several teams. The team leaders will be responsible for ensuring that the team respects the rules of the challenge. There is no commitment to deliver code or data, or to publish methods, in order to participate in the development phase, but the team leaders will be requested to fill out fact sheets with basic information on their methods to be able to claim prizes. When the challenge is over and the results are known, the teams who want to claim a prize will have to reveal their true identity to the outside world.
  • Anonymity: All entrants must identify themselves to the organizers. However, only your “Workbench id” will be displayed in result tables and you may choose a pseudonym to hide your identity to the rest of the world. Your emails will remain confidential.
  • Data: Datasets from various domains and of varying difficulty are available to download from the Data page. No labels are made available during phase 1. Some labels will be made available for transfer learning at the beginning of phase 2. Reverse-engineering the datasets to gain information on the identity of the patterns in the original data is forbidden. If it is suspected that this rule was violated, the organizers reserve the right to organize post-challenge verifications with which the top-ranking participants will have to comply to earn prizes.
  • Submission method: The method of submission is via the form on the Submit page. To be ranked, submissions must comply with the Instructions. Robot submissions are permitted. If the system gets overloaded, the organizers reserve the right to limit the number of submissions per day per participant. We recommend not exceeding 5 submissions per day per participant. If you encounter problems with the submission process, please contact the Challenge Webmaster (see bottom of page).
  • Ranking: The method of scoring is posted on the Evaluation page. If the scoring method changes, the participants will be notified by email by the organizers.
    – During the development period (phase 1 and 2), the scores on the validation sets will be posted in the Leaderboard table. The participants are allowed to make multiple submissions on the validation sets.
    – The results on the final evaluation set will only be released after the challenge is over. The participants may make multiple submissions on the final evaluation sets, to avoid the last-minute rush. However, for each registered team, only ONE final evaluation set submission for each dataset of the challenge will be taken into account. These submissions will have to be grouped under the same “experiment” name. The team leader will designate which experiment should be taken into account for the final ranking. For each phase, the teams will be ranked on each individual dataset, and the winner will be determined by the best average rank over all datasets (see the sketch below).
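To make the final ranking rule concrete, here is a small sketch of the average-rank aggregation (the scores are hypothetical; higher is taken to be better):

import numpy as np
from scipy.stats import rankdata

# Hypothetical scores of 3 teams on 5 datasets (higher is better).
scores = np.array([[0.9, 0.7, 0.8, 0.6, 0.9],
                   [0.8, 0.9, 0.7, 0.7, 0.8],
                   [0.7, 0.6, 0.9, 0.5, 0.7]])

ranks = rankdata(-scores, axis=0)            # rank 1 = best on each dataset
print("average ranks:", ranks.mean(axis=1))  # lowest average rank wins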

This challenge addresses machine learning problems in which labeling data is expensive, but large amounts of unlabeled data are available at low cost. Such problems might be tackled from different angles: learning from unlabeled data or active learning. In the former case, the algorithms must make do with the limited amount of labeled data and capitalize on the unlabeled data with semi-supervised learning methods. In the latter case, the algorithms may place a limited number of queries to get labels. The goal in that case is to optimize the queries to label data, and the problem is referred to as active learning.

Much of machine learning and data mining has so far concentrated on analyzing data already collected, rather than on collecting data. While experimental design is a well-developed discipline of statistics, data collection practitioners often neglect to apply its principled methods. As a result, the data collected and made available to data analysts, who are in charge of explaining them and building predictive models, are not always of good quality and are plagued by experimental artifacts. In reaction to this situation, some researchers in machine learning and data mining have become interested in experimental design, to close the gap between data acquisition or experimentation and model building. This has given rise to the discipline of active learning. In parallel, researchers in causal studies have started raising awareness of the differences between passive observations, active sampling, and interventions. In this domain, only interventions qualify as true experiments capable of unraveling cause-effect relationships. However, most practical experimental designs start by sampling data in a way that minimizes the number of necessary interventions.
The Causality Workbench will propose in the next few months several challenges to evaluate methods of active learning and experimental design, which involve the data analyst in the process of data collection. From our perspective, to build good models, we need good data. However, collecting good data comes at a price. Interventions are usually expensive to perform and sometimes unethical or impossible, while observational data are available in abundance at a low cost. Practitioners must identify strategies for collecting data that are cost-effective and feasible, resulting in the best possible models at the lowest possible price. Hence, both efficiency and efficacy are factors in the evaluation of these challenges.
The setup of this first challenge considers only sampling as an intervention of the data analyst or the learning machine, who may only place queries on the target values (labels) y. So-called “de novo queries”, in which new patterns x can be created, are not considered. We will consider them in an upcoming challenge on experimental design in causal discovery.
In this challenge, we propose several tasks of pool-based active learning in which a large unlabeled dataset is available from the onset of the challenge and the participants can place queries to acquire data for some amount of virtual cash. The participants will need to return prediction values for all the labels every time they want to purchase new labels. This will allow us to draw learning curves of prediction performance versus amount of virtual cash spent. The participants will be judged according to the area under the learning curves, forcing them to optimize both efficacy (obtaining good prediction performance) and efficiency (spending little virtual cash).
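The sketch below illustrates pool-based active learning with uncertainty sampling on synthetic data (our illustration under simplified assumptions: one label per query, accuracy in place of the official metric):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                # the unlabeled pool
w = rng.normal(size=10)
y = (X @ w + 0.5 * rng.normal(size=2000) > 0).astype(int)  # hidden labels

# Seed set: a few purchased labels from each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
curve = []

for step in range(20):                         # each query costs virtual cash
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    curve.append(clf.score(X, y))              # predictions on ALL labels
    p = clf.predict_proba(X)[:, 1]
    uncertainty = -np.abs(p - 0.5)             # closest to the boundary
    uncertainty[labeled] = -np.inf             # never re-buy a known label
    labeled.append(int(np.argmax(uncertainty)))  # purchase one new label

# The area under the learning curve rewards both efficacy and efficiency.
print("area under learning curve (toy):", float(np.mean(curve)))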

GRavitational lEnsing Accuracy Testing 2010 (GREAT10) is a public image analysis challenge aimed at the development of algorithms to analyze astronomical images. Specifically, the challenge is to measure varying image distortions in the presence of a variable convolution kernel, pixelization and noise. This is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys. GREAT10 extends upon previous work by introducing variable fields into the challenge. The “Galaxy Challenge” involves the precise measurement of galaxy shape distortions, quantified locally by two parameters called shear, in the presence of a known convolution kernel. Crucially, the convolution kernel and the simulated gravitational lensing shape distortion both now vary as a function of position within the images, as is the case for real data. In addition, we introduce the “Star Challenge”, which concerns the reconstruction of a variable convolution kernel, similar to that in a typical astronomical observation. This document details the GREAT10 Challenge for potential participants. Continually updated information is also available from the challenge website.
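To give a flavour of the measurement problem (a toy sketch, not a competitive shear estimator), galaxy ellipticity can be estimated from the second-order brightness moments of a postage-stamp image:

import numpy as np

def ellipticity(img):
    """Estimate (e1, e2) from unweighted quadrupole moments of an image."""
    ys, xs = np.indices(img.shape)
    flux = img.sum()
    xbar, ybar = (img * xs).sum() / flux, (img * ys).sum() / flux
    qxx = (img * (xs - xbar) ** 2).sum() / flux
    qyy = (img * (ys - ybar) ** 2).sum() / flux
    qxy = (img * (xs - xbar) * (ys - ybar)).sum() / flux
    return (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)

# Toy elliptical Gaussian "galaxy" (no convolution kernel, pixelization
# effects or noise, unlike the real challenge images).
ys, xs = np.indices((64, 64))
galaxy = np.exp(-(((xs - 32) / 6.0) ** 2 + ((ys - 32) / 4.0) ** 2))
print(ellipticity(galaxy))        # e1 > 0: elongated along the x-axis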

In 2006 the PASCAL network funded the 1st Speech Separation Challenge, which addressed the problem of separating and recognising speech mixed with speech. We are now launching a successor to this challenge that aims to tackle speech separation and recognition in more typical everyday listening conditions.

The challenge employs background noise collected from a real family living room using binaural microphones. Target speech commands have been mixed into the environment at a fixed position using genuine room impulse responses. The task is to separate the speech and recognise the commands being spoken, using systems that have been trained on noise-free commands and room noise recordings.
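This mixing process can be mimicked, very roughly, as follows (the impulse responses and noise in the real data are measured binaural recordings, whereas synthetic stand-ins are used here):

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
fs = 16000                                  # sample rate in Hz

# Synthetic stand-ins: a clean spoken command, a room impulse response
# for the fixed target position, and living-room background noise.
clean = rng.normal(size=fs)                 # 1 s "command"
n_rir = int(0.3 * fs)                       # 300 ms exponentially decaying RIR
rir = np.exp(-np.arange(n_rir) / (0.05 * fs)) * rng.normal(size=n_rir)
noise = rng.normal(size=2 * fs)             # 2 s of background

reverberant = fftconvolve(clean, rir)       # command heard at the target position
mix = noise.copy()
start = fs // 2                             # embed the command in the background
mix[start:start + len(reverberant)] += 0.5 * reverberant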

On this web site you will find everything you need to get started. The background section explains the general motivation for the challenge. The instructions section describes the separation/recognition task in more detail and what you need to do in order to take part. The datasets are available for download already, and the evaluation tools will become available by the end of October. Further important dates are listed in the schedule.

This is a multidisciplinary challenge. We hope to encourage participants from the machine learning, source separation and speech recognition communities. Although the ultimate evaluation will be through speech recognition scores, participants may submit either separated signals, robust feature extractors or complete recognition systems. All entrants will be invited to submit papers describing their work to a dedicated satellite workshop hosted at Interspeech 2011. Participants will also be invited to submit longer versions of their work to a Special Issue of Computer Speech and Language that has been organised on the theme of Machine Listening in Multisource Environments.