This workshop is about reinforcement learning in large state/action spaces, learning to optimize search, and the relation between the two.

Content-based information retrieval with relevance feedback is a multi-stage process: at each stage the user selects, from a set of presented items, the item closest to the requested information, until that information is found. The task of the search engine is to present items such that the search terminates in few iterations. More generally, interactive search concerns multi-stage processes in which a search engine presents some information and receives feedback in response, which may be partial and noisy. Since the reward for finding the requested information is delayed, learning a good search engine from data can be modeled as a reinforcement learning problem, but the special structure of the problem needs to be exploited.
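
To make this setting concrete, below is a minimal sketch of one such interactive search episode, cast as an episodic reinforcement learning environment in Python. All names (SearchSession, the similarity function, the random baseline policy) and the toy item space are illustrative assumptions, not part of any existing system.

    import random

    def similarity(a, b):
        # Toy similarity: negative squared Euclidean distance, so every
        # item is most similar to itself.
        return -sum((x - y) ** 2 for x, y in zip(a, b))

    class SearchSession:
        """One episode: the engine presents a page of items, a simulated
        user picks the presented item closest to a hidden target, and
        reward arrives only when the target itself is found."""

        def __init__(self, items, target, page_size=3):
            self.items = items        # item space (enormous in practice, tiny here)
            self.target = target      # the hidden requested information
            self.page_size = page_size

        def step(self, presented):
            # User feedback: the presented item most similar to the target.
            choice = max(presented, key=lambda item: similarity(item, self.target))
            done = (choice == self.target)
            return choice, (1.0 if done else 0.0), done  # delayed, sparse reward

    def random_policy(session, feedback):
        # Baseline engine: present a random page. A learned engine would
        # instead condition on the feedback history (the RL state).
        return random.sample(session.items, session.page_size)

    if __name__ == "__main__":
        items = [(i, j) for i in range(4) for j in range(4)]
        session = SearchSession(items, target=(3, 2))
        feedback, done, steps = None, False, 0
        while not done and steps < 50:
            page = random_policy(session, feedback)
            feedback, reward, done = session.step(page)
            steps += 1
        print(f"found={done} after {steps} presentations")

In this formulation, the sparse, delayed reward and the combinatorial action space (which page of items to present) are exactly what makes off-the-shelf reinforcement learning difficult here.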

Since the state space of any realistic search application is enormous, this learning problem is a difficult one. Although the reinforcement learning literature offers many powerful algorithms that have succeeded in various difficult applications, there is still relatively little understanding of when reinforcement learning can succeed in a realistic application, or of what makes it succeed. Furthermore, little work has been done on applying reinforcement learning to optimize interactive search.

The workshop therefore addresses, in particular but not exclusively, the following two themes:

  • Identifying cases in which realistically large problems with delayed feedback can be solved successfully, possibly but not necessarily by reinforcement learning algorithms. Such algorithms may need to exploit the special structure of the learning problem; content-based information retrieval is one such case.
  • Applying learning techniques to develop powerful interactive search algorithms, whether optimizing a single search or learning across searches, with or without probabilistic assumptions.

A partial list of topics relevant to the workshop includes:

  • reinforcement learning in large state/action spaces,
  • automatic state/action aggregation and hierarchical reinforcement learning,
  • special cases or assumptions which facilitate fast reinforcement learning,
  • reinforcement learning, relevance feedback, and information retrieval,
  • search strategies based on relevance feedback,
  • learning efficient search strategies from multiple search sessions,
  • applications.

The workshop aims to provide an overview of the major achievements and the main open problems in these areas.

Organizers

  • Peter Auer, University of Leoben
  • Samuel Kaski, Aalto University, Helsinki
  • Csaba Szepesvari, University of Alberta