Optimization is indispensable to many machine learning algorithms. What can we say beyond this obvious realization?
Previous talks at the OPT workshops have covered frameworks for convex programs (D. Bertsekas), the intersection of ML and optimization, especially in the area of SVM training (S. Wright), large-scale learning via stochastic gradient methods and the associated tradeoffs (L. Bottou, N. Srebro), exploitation of structured sparsity in optimization (L. Vandenberghe), and randomized methods for extremely large-scale convex optimization (A. Nemirovski), among others.
The ML community's interest in optimization continues to grow. Invited tutorials on optimization will be presented this year at ICML (N. Srebro) and NIPS (S. Wright). The traditional point of contact between ML and optimization, the SVM, continues to drive research on a number of fronts. Much recent interest has focused on stochastic gradient methods, which can be used in an online setting and in settings where data sets are extremely large and high accuracy is not required. Regularized logistic regression is another area that has produced a recent flurry of activity at the intersection of the two communities. Many aspects of stochastic gradient methods remain to be explored: algorithmic variants, customization to the structure of the data set, convergence analysis, sampling techniques, software, the choice of regularization and tradeoff parameters, and parallelism. We also need a better understanding of the limitations of these methods, of how to accelerate them, and of how to detect when to switch to alternative strategies. In the logistic regression setting, the use of approximate second-order information has been shown to improve convergence, but many algorithmic issues remain. Detecting combined-effect predictors (which lead to a huge increase in the number of variables), using group regularizers, and handling very large data sets in real time all present challenges. A minimal sketch of the basic method at this intersection appears below.
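To make the stochastic gradient discussion concrete, the sketch below (our illustration, not drawn from any of the talks cited above) applies SGD to L2-regularized logistic regression. The step-size schedule, the regularization weight lam, and the synthetic data are all assumptions chosen for the example, not prescriptions.

```python
# A minimal SGD sketch for L2-regularized logistic regression.
# All parameter choices here (lam, eta0, n_epochs) are illustrative.
import numpy as np

def sgd_logistic(X, y, lam=0.01, n_epochs=5, eta0=1.0):
    """X: (n, d) feature matrix; y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for epoch in range(n_epochs):
        for i in np.random.permutation(n):
            t += 1
            # One common decaying step-size schedule (an assumption, not
            # the only choice; tuning it is one of the open issues above).
            eta = eta0 / (1.0 + eta0 * lam * t)
            margin = y[i] * X[i].dot(w)
            # Gradient of log(1 + exp(-margin)) plus the L2 term lam * w.
            grad = -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w
            w -= eta * grad
    return w

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(1000))
w_hat = sgd_logistic(X, y)
```

Each update touches a single example, which is what makes the method attractive in the online and extremely-large-data settings; the cost is the sensitivity to step-size and regularization choices noted in the paragraph above.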
We also do not ignore the moderate-scale setting, where one has time to wield substantial computational resources. In this setting, high-accuracy solutions and a deep understanding of the lessons contained in the data are needed. Examples of value to ML researchers include the exploration of genetic and environmental data to identify risk factors for disease, and problems in which the amount of observed data is not huge but the mathematical model is complex.
Operational Details
- one day long, with morning and afternoon sessions;
- three invited talks by optimization and ML experts;
- discussion: this year we plan to bolster discussion by holding an open problems session;
- contributed talks;
- an interactive poster session.
Program Committee
- Andreas Argyriou
- Alexandre d'Aspremont
- Dhruv Batra
- Kristin Bennett
- Tijl De Bie
- Jieqiu Chen
- John Duchi
- Vojtech Franc
- Sathiya Keerthi
- Dongmin Kim
- Chih-Jen Lin
- Cheng Soon Ong
- Pradeep Ravikumar
- Mark Schmidt
- Onur Seref
- Nathan Srebro
- Sandor Szedmak