Motivation
We are organizing a workshop on gesture and sign language recognition from video data and still images. Gestures can originate from any body motion or state, but commonly originate from the face or hands. Much recent research has focused on emotion recognition from facial and hand gestures, with applications in gaming, marketing, and computer interfaces. Many approaches use cameras and computer vision algorithms to interpret sign language for the deaf. Gesture recognition techniques also address the identification and recognition of postures and human behaviors, which have gained importance in applications such as video surveillance. This workshop aims to gather researchers from different application domains working on gesture recognition to share algorithms and techniques.
Benchmark
One of the goals of the workshop is to launch a benchmark program on gesture recognition. Part of the workshop will be devoted to discussing the modalities of the benchmark. See our website (under construction). Download sample data [or get your sample data DVD at the workshop]. Participate in a data exchange [you will first need to register with our Google group gesturechallenge to get access to the data exchange website].
Participation
We invite contributions relevant to gesture recognition, including:
Algorithms for gesture and activity recognition, in particular addressing:
- Learning from unlabeled or partially labeled data
- Learning from few examples per class, and transfer learning
- Continuous gesture recognition and segmentation
- Deep learning architectures, including convolutional neural networks
- Gesture recognition in challenging scenes, including cluttered/moving backgrounds, moving cameras, or scenes where multiple persons are present
- Integrating information from multiple channels (e.g., position/motion of multiple body parts, hand shape, facial expressions)
Data representations
Applications pertinent to the workshop topic, including:
- Video surveillance
- Image or video indexing and retrieval
- Recognition of sign languages for the deaf
- Emotion recognition and affective computing
- Computer interfaces
- Virtual reality
- Robotics
- Ambient intelligence
- Games
- Datasets and benchmarks
Program committee
- Aleix Martinez, Ohio State University, USA
- David W. Aha, Naval Research Laboratory, USA
- Abe Schneider, Knexus Research, USA
- Jeffrey Cohn, Carnegie Mellon University, USA
- Martial Hebert, Carnegie Mellon University, USA
- Dimitris Metaxas, Rutgers, New Jersey, USA
- Christian Vogler, ILSP Athens, Greece
- Sudeep Sarkar, University of South Florida, USA
- Graham Taylor, NYU, New York, USA
- Andrew Ng, Stanford Univ., Palo Alto, CA, USA
- Andrew Saxe, Stanford Univ., Palo Alto, CA, USA
- Quoc Le, Stanford Univ., Palo Alto, CA, USA
- David Forsyth, University of Illinois at Urbana-Champaign, USA
- Maja Pantic, Imperial College London, UK
- Philippe Dreuw, RWTH Aachen University, Germany
- Richard Bowden, Univ. Surrey, UK
- Fernando de la Torre, Carnegie Mellon University, USA
- Paulo Gotardo, Ohio State University, USA
- Carol Neidle, Boston University, MA, USA
- Trevor Darrell, UC Berkeley/ICSI, Berkeley, California, USA
- Greg Mori, Simon Fraser University, Canada
- Matthew Turk, UC Santa Barbara, USA
- Atiqur Rahman Ahad, Faculty of Engineering, Kyushu Institute of Technology, Japan
- Mingyu Chen, Georgia Institute of Technology, USA
- Wenhui Xu, Georgia Institute of Technology, USA
- Jesus-Omar Ocegueda-Gonzalez, University of Houston, USA
- Thomas Kinsman, Rochester Institute of Technology, USA
- András Lőrincz, Eötvös Loránd University, Budapest, Hungary
- Upal Mahbub, Bangladesh University of Engineering and Technology, Bangladesh
- Subhransu Maji, UC Berkeley, CA, USA
- Lubomir Bourdev, UC Berkeley, CA, USA
- Vassilis Pitsikalis, NTUA, Greece
Organizing committee
- Isabelle Guyon, Clopinet, Berkeley, California
- Vassilis Athitsos, University of Texas at Arlington
- Jitendra Malik, UC Berkeley, California
- Ivan Laptev, INRIA, France