Development Kit
The VOC challenge 2005 has now ended. The development kit which was provided to participants is available for download. This includes code for:
- Loading PASCAL image sets and annotation
- Computing Receiver Operating Characteristic (ROC) curves for classification (see the sketch after this list)
- Computing Precision/Recall curves for detection
- Computing Detection Error Tradeoff (DET) curves for detection
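As a taste of what the evaluation code computes, here is a minimal MATLAB sketch of a ROC curve built from raw classifier scores. It illustrates the general technique only; it is not the devkit's VOCroc implementation, and the scores and labels are made-up example data.

```matlab
% Minimal ROC sketch (illustrative only, not the devkit's VOCroc).
% scores: real-valued classifier outputs; labels: +1 present, -1 absent.
scores = [0.9 0.8 0.6 0.55 0.4 0.3];   % made-up example data
labels = [ 1   1  -1   1   -1  -1];

[~, order] = sort(scores, 'descend');  % sweep the threshold from high to low
sorted = labels(order);
tpr = cumsum(sorted ==  1) / sum(labels ==  1);  % true positive rate
fpr = cumsum(sorted == -1) / sum(labels == -1);  % false positive rate

plot([0 fpr], [0 tpr]);
xlabel('false positive rate'); ylabel('true positive rate');
```

The precision/recall and DET curves are derived from the same ranked list: precision is plotted against recall, and miss rate against false alarms, respectively.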
You can:
- Download the development kit code and documentation (92KB gzipped tar file)
- Download the PDF documentation (74KB PDF)
- Browse the HTML documentation
Datasets
The two datasets provided for the challenge have been added to the main PASCAL image databases page. To run the challenge code you will need to download these two databases:
- VOC 2005 Dataset 1: Training, validation, and test set 1
- VOC 2005 Dataset 2: Test set 2
Results
Results of the challenge were presented at the PASCAL Challenges Workshop in Southampton, UK, in April 2005. A chapter reporting the results of the challenge will appear in Lecture Notes in Artificial Intelligence:
- “The 2005 PASCAL Visual Object Classes Challenge” (3.7MB PDF)
M. Everingham, A. Zisserman, C. Williams, L. Van Gool, M. Allan, C. Bishop, O. Chapelle, N. Dalal, T. Deselaers, G. Dorko, S. Duffner, J. Eichhorn, J. Farquhar, M. Fritz, C. Garcia, T. Griffiths, F. Jurie, D. Keysers, M. Koskela, J. Laaksonen, D. Larlus, B. Leibe, H. Meng, H. Ney, B. Schiele, C. Schmid, E. Seemann, J. Shawe-Taylor, A. Storkey, S. Szedmak, B. Triggs, I. Ulusoy, V. Viitaniemi, and J. Zhang.
In Selected Proceedings of the First PASCAL Challenges Workshop, LNAI, Springer-Verlag, 2006 (in press).
An earlier report, which includes some more detailed results, is also available for download, along with the PowerPoint slides of the challenge workshop presentation:
- Summary of challenge and results (377KB PDF)
- PowerPoint slides from the workshop (12MB PowerPoint)
Introduction
The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images will be provided. The four object classes that have been selected are:
- motorbikes
- bicycles
- people
- cars
There will be two main competitions:
- For each of the 4 classes, predicting the presence/absence of an example of that class in the test image.
- Predicting the bounding box and label of each object from the 4 target classes in the test image (a sketch of a typical box-overlap scoring criterion follows this list).
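Scoring the detection competition requires deciding when a predicted bounding box matches a ground-truth object. A standard criterion, on which the VOC detection evaluation is based, accepts a detection when the area of intersection of the two boxes divided by the area of their union exceeds a threshold (commonly 0.5); the exact rule used by the challenge is defined in the development kit documentation. The sketch below is an illustrative MATLAB implementation of that overlap measure, not the devkit's own code.

```matlab
% Illustrative intersection-over-union overlap between a detected box
% and a ground-truth box, each given as [xmin ymin xmax ymax].
% Not the devkit's own code; save as overlap.m to use it.
function ov = overlap(det, gt)
    iw = min(det(3), gt(3)) - max(det(1), gt(1)) + 1;   % intersection width
    ih = min(det(4), gt(4)) - max(det(2), gt(2)) + 1;   % intersection height
    if iw <= 0 || ih <= 0
        ov = 0;                                         % boxes are disjoint
        return
    end
    inter = iw * ih;
    areaA = (det(3) - det(1) + 1) * (det(4) - det(2) + 1);
    areaB = (gt(3)  - gt(1)  + 1) * (gt(4)  - gt(2)  + 1);
    ov = inter / (areaA + areaB - inter);               % intersection / union
end
```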
Contestants may enter either (or both) of these competitions, and can choose to tackle any (or all) of the four object classes. The challenge allows for two approaches to each of the competitions:
- Contestants may use systems built or trained using any methods or data excluding the provided test sets.
- Systems are to be built or trained using only the provided training data.
The intention in the first case is to establish what level of success can currently be achieved on these problems, and by what method; in the second case, it is to establish which method is most successful given a specified training set.
Data
The training data provided will consist of a set of images; each image has an annotation file giving a bounding box and object class label for each object from one of the four target classes present in the image. Note that multiple objects from multiple classes may be present in the same image.
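To make this concrete, the sketch below shows one plausible in-memory representation of an image's annotation in MATLAB. The field names, file name, and coordinates are hypothetical illustrations; the actual annotation format is specified in the development kit documentation.

```matlab
% Hypothetical annotation record for one image (illustrative only;
% the real format and field names are defined by the devkit).
rec.imgname = 'example_0001.png';
rec.objects(1).class = 'car';
rec.objects(1).bbox  = [12 34 220 180];    % [xmin ymin xmax ymax]
rec.objects(2).class = 'person';
rec.objects(2).bbox  = [150 40 200 170];   % a second object, different class

% Multiple objects from multiple classes in the same image:
for i = 1:numel(rec.objects)
    fprintf('%s at [%d %d %d %d]\n', rec.objects(i).class, rec.objects(i).bbox);
end
```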
The data will be made available in two stages; in the first stage, a development kit will be released consisting of training and validation data, plus evaluation software (written in MATLAB). One purpose of the validation set is to demonstrate how the evaluation software works ahead of the competition submission.
In the second stage, two test sets will be made available for the actual competition:
- The first test set will contain images taken from the same distribution as the training data, and is expected to make an ‘easier’ challenge.
- The second test set will be freshly collected, so its images are not expected to come from the same distribution as the training data; it should make a ‘harder’ challenge.
Contestants are free to submit results for any (or all) of the test sets provided.
Submission of results
Contestants may run several experiments for each competition of the challenge, for example using alternative methods or different training data. Contestants must assess their results using the software provided. This software writes standardized output files recording the classifier output and the ROC and precision/recall curves. For submission, contestants must prepare:
- A set of output files from the provided software for each experiment
- A specification of the experiments reported, i.e. the meaning of each output file
- A brief description of the method used in each experiment. Entrants may withhold details of their method, but such entries will be judged in a separate category of the competition.
To submit your results, please prepare a single archive file (gzipped tar/zip) and place it on a publicly accessible web/ftp server. The contents should be as listed above. See the development kit documentation for information on the VOCroc and VOCpr functions needed to generate output files, and for the location of these files. When you have prepared your archive and checked that it is accessible, send an email with the URL and any necessary explanatory notes to Mark Everingham.
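If it is convenient, the archive can be created from within MATLAB itself; a minimal sketch, with hypothetical file names standing in for your actual evaluation outputs and descriptions:

```matlab
% Bundle the output files, experiment specification and method
% description into one gzipped tar for submission.
% All file names here are hypothetical examples.
files = {'exp1_roc.txt', 'exp1_pr.txt', 'experiments.txt', 'methods.txt'};
tar('voc2005_results.tar', files);    % create the tar archive
gzip('voc2005_results.tar');          % writes voc2005_results.tar.gz
```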
Participants who cannot place their results on a web/ftp server may instead send them by email as an attachment. Please include details of the attachment in the email body. Please do not send large files (>200KB) in this way.
Timetable
- 21 Feb 2005: Development kit (training and validation data plus evaluation software) made available
- 14 March 2005: Test sets made available
- 31 March 2005: DEADLINE for submission of results
- 4 April 2005: Results announced
- 11 April 2005: Half-day (afternoon) challenge workshop held in Southampton, UK
To aid administration of the challenge, entrants will be required to register when downloading the test sets.
Publication policy
The main mechanism for dissemination of the results will be the challenge webpage. It is also possible that an overview paper of the results will be produced.
Organizers
- Luc van Gool (Zurich)
- Chris Williams (Edinburgh)
- Andrew Zisserman (Oxford)
Technical contributors
- Mark Everingham (Oxford): VOC challenge [email]
- Manik Varma (Oxford): PASCAL image databases