NIPS workshop: BigVision 2012
Big Data Meets Computer Vision: First International Workshop on Large Scale Visual Recognition and Retrieval (BigVision 2012)
Held in conjunction with NIPS 2012. December 7 or December 8 (TBD), 2012. Lake Tahoe, Nevada, USA.
The emergence of “big data” has brought about a paradigm shift throughout computer science. Computer vision is no exception. The explosion of images and videos on the Internet and the availability of large amounts of annotated data have created unprecedented opportunities and fundamental challenges on scaling up computer vision.
Over the past few years, machine learning on big data has become a thriving field with a plethora of theories and tools developed.
Meanwhile, large scale vision has also attracted increasing attention in the computer vision community. This workshop aims to bring together researchers in large scale machine learning and large scale vision to foster cross-talk between the two fields. The goal is to encourage machine learning researchers to work on large scale vision problems, to inform computer vision researchers about new developments in large scale learning, and to identify unique challenges and opportunities.
This workshop will focus on two distinct yet closely related vision
problems: recognition and retrieval. Both are inherently large scale.
In particular, both must handle high dimensional features (hundreds of thousands to millions), a large variety of visual classes (tens of thousands to millions), and a large number of examples (millions to billions).
This workshop will consist of invited talks, panels, discussions, and paper submissions. The target audience of this workshop includes industry and academic researchers interested in machine learning, computer vision, multimedia, and related fields.
Call for Papers
We invite high quality submissions of extended abstracts on topics including, but not limited to:
–State of the field: What really defines large scale vision? How does it differ from traditional vision research? What are its unique challenges for large scale learning?
–Indexing algorithms and data structures: How do we efficiently find similar features/images/classes from a large collection, a key operation in both recognition and retrieval?
–Semi-supervised/unsupervised learning: Large scale data comes with different levels of supervision, ranging from fully labeled and quality controlled to completely unlabeled. How do we make use of such data?
–Metric learning: Retrieving visually similar images/objects requires learning a similarity metric. How do we learn a good metric from a large amount of data?
–Visual models and feature representations: What is a good feature representation? How do we model and represent images/videos to handle tens of thousands of fine-grained visual classes?
–Exploiting semantic structures: How do we exploit the rich semantic relations between visual categories to handle a large number of classes?
–Transfer learning: How do we handle new visual classes (objects/scenes/activities) after having learned a large number of them? How do we transfer knowledge using the semantic relations between classes?
–Optimization techniques: How do we perform learning with training data that do not fit into memory? How do we parallelize learning?
–Dataset issues: What is a good large scale dataset? How should we construct datasets? How do we avoid dataset bias?
–Systems and infrastructure: How do we design and develop libraries and tools to facilitate large scale vision research? What infrastructure do we need?
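To make the indexing topic above concrete, here is a minimal sketch of one standard approach, random-projection locality-sensitive hashing (LSH), for finding similar feature vectors without scanning an entire collection. All names, sizes, and parameters below are illustrative, not prescribed by the workshop:

```python
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(0)

# Toy database: 10,000 feature vectors of dimension 128 (illustrative sizes;
# real large scale settings reach millions of vectors and dimensions).
dim, n = 128, 10_000
db = rng.standard_normal((n, dim)).astype(np.float32)

# Random-projection LSH: each vector is hashed to the sign pattern of
# k random hyperplane projections, so nearby vectors tend to collide.
k = 16
planes = rng.standard_normal((dim, k)).astype(np.float32)

def lsh_key(x):
    """Hash a vector to a k-bit bucket key."""
    bits = (x @ planes) > 0
    return bits.tobytes()

# Build the hash table: bucket key -> list of database indices.
table = defaultdict(list)
for i, v in enumerate(db):
    table[lsh_key(v)].append(i)

# Query: score only the candidates in the query's bucket, not all n vectors.
# Here the query is an exact database vector, so its bucket must contain it.
q = db[42]
candidates = table[lsh_key(q)]
best = max(
    candidates,
    key=lambda i: db[i] @ q / (np.linalg.norm(db[i]) * np.linalg.norm(q)),
)
```

The design trade-off is typical of indexing structures: more hash bits (larger k) shrink buckets and speed up queries but increase the chance that a true neighbor lands in a different bucket; practical systems use multiple tables to recover recall.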
Submissions must be in NIPS 2012 format, with a maximum of 4 pages (excluding references).
The submission deadline is 11:59pm PDT, September 16th, 2012.
Submissions do not have to be anonymous. Accepted papers will be presented as oral talks or posters during the workshop. For detailed submission instructions please visit https://sites.google.com/site/bigvision2012/
Important Dates
Submission deadline: September 16th, 2012.
Decision notification: October 7th, 2012.
Workshop date: December 7th or December 8th (TBD), 2012.
Alex Berg, Stony Brook University
Shih-Fu Chang, Columbia University
Andrew Ng, Stanford University
Florent Perronnin, Xerox Research Centre Europe
Lorenzo Torresani, Dartmouth College
Samy Bengio, Google
Jia Deng, Stanford University
Fei-Fei Li, Stanford University
Yuanqing Lin, NEC Labs