Physical and economic limitations have forced computer architecture towards parallelism and away from exponential frequency scaling. Meanwhile, increased access to ubiquitous sensing and the web has resulted in an explosion in the size of machine learning tasks. In order to benefit from current and future trends in processor technology we must discover, understand, and exploit the available parallelism in machine learning. This workshop will achieve four key goals:
- Bring together people with varying approaches to parallelism in machine learning to identify the tools, techniques, and algorithmic ideas that have led to successful parallel learning.
- Invite researchers from related fields, including parallel algorithms, computer architecture, scientific computing, and distributed systems, who will provide new perspectives to the NIPS community on these problems, and may also benefit from future collaborations with the NIPS audience.
- Identify the next key challenges and opportunities for parallel learning.
- Discuss large-scale applications, e.g., those with real-time demands, that might benefit from parallel learning.
Prior NIPS workshops have focused on the topic of scaling machine learning, which remains an important and developing area. We introduce a new perspective by focusing on how large-scale machine learning algorithms should be informed by future parallel architectures.
Topics of Interest
While we are interested in a wide range of topics associated with large-scale, parallel learning, the following list provides a flavor of some of the key topics:
- Multicore / Cluster-based Learning Techniques
- Machine Learning on Alternative Hardware (GPUs, Cell Processors, FPGAs, iPhone, ...)
- Distributed Learning
- Learning Results and Techniques on Massive Datasets
- Large-Scale Kernel Methods
- Fast Online Algorithms for Large Datasets
- Parallel Computing Tools and Libraries
Organizers
- Carlos Guestrin, Carnegie Mellon
- Alex Gray, Georgia Tech
- Alex Smola, Yahoo
- Arthur Gretton, Carnegie Mellon
- Joseph Gonzalez, Carnegie Mellon