Learning on Cores, Clusters, and Clouds
NIPS 2010 Workshop, Whistler, British Columbia, Canada

http://lccc.eecs.berkeley.edu/

— Submission Deadline: October 17, 2010 —

In the current era of web-scale datasets, high-throughput biology, and
multilingual machine translation, datasets no longer fit on a single
computer and traditional machine learning algorithms often have
prohibitively long running times. Parallel and distributed machine
learning is no longer a luxury; it has become a necessity. Moreover,
industry leaders have already declared that clouds are the future of
computing, and new computing platforms such as Microsoft’s Azure and
Amazon’s EC2 are bringing distributed computing to the masses.

The machine learning community is responding to this trend by developing new parallel and distributed learning techniques.
However, many important challenges remain unaddressed. Practical
distributed learning algorithms must cope with limited network
resources, node failures, and nonuniform network latencies. In cloud
environments, where network latencies are especially large, distributed
learning algorithms should take advantage of asynchronous updates.

Many similar issues have already been addressed in fields where
distributed computation is more mature, such as convex optimization and
numerical computation. We can learn from their successes and their
failures.

The one-day workshop on “Learning on Cores, Clusters, and Clouds” aims
to bring together experts in the field and curious newcomers, to present
the state of the art in applied and theoretical distributed learning,
and to map out the challenges ahead. The workshop will include invited
and contributed presentations from leaders in distributed learning and
adjacent fields.

We invite short, high-quality submissions on the following topics:

* Distributed algorithms for online and batch learning
* Parallel (multicore) algorithms for online and batch learning
* Computational models and theoretical analysis of distributed and
parallel learning
* Communication-avoiding algorithms
* Learning algorithms that are robust to hardware failures
* Experimental results and interesting applications

Interesting submissions on other relevant topics not listed above
are also welcome. Due to time constraints, most accepted
submissions will be presented as poster spotlights.

_Submission guidelines:_

Submissions should be written as extended abstracts, no longer
than 4 pages in the NIPS LaTeX style. NIPS style files and
formatting instructions can be found at
http://nips.cc/PaperInformation/StyleFiles. Submissions should
include the authors’ names and affiliations, since the review
process will not be double blind. The extended abstract may be
accompanied by an unlimited appendix and other supplementary
material, with the understanding that anything beyond 4 pages may
be ignored by the program committee. Please send your submission
by email to submit.lccc(at)gmail.com before October 17 at midnight
PST. Notifications will be given on or before November 7. Work
that was recently published or presented elsewhere may be
submitted, provided that the extended abstract mentions this
explicitly; work that was presented at non-machine-learning
conferences is especially encouraged.

_Organizers:_

Alekh Agarwal (UC Berkeley), Lawrence Cayton (MPI Tuebingen),
Ofer Dekel (Microsoft), John Duchi (UC Berkeley), John Langford
(Yahoo!)

_Program Committee:_

Ron Bekkerman (LinkedIn), Misha Bilenko (Microsoft), Ran
Gilad-Bachrach (Microsoft), Guy Lebanon (Georgia Tech), Ilan Lobel
(NYU), Gideon Mann (Google), Ryan McDonald (Google), Ohad Shamir
(Microsoft), Alex Smola (Yahoo!), S V N Vishwanathan (Purdue),
Martin Wainwright (UC Berkeley), Lin Xiao (Microsoft)