This project develops new kinds of information retrieval systems by fusing multimodal implicit relevance feedback data with text content, using Bayesian and kernel-based machine learning methods. A long-term goal of information retrieval is to understand the “user’s intent”. We will study the feasibility of directly measuring the user’s interest at the sentence level, and of coupling the measurements with other relevant data sources to estimate user preferences. The concrete task is to predict the relevance of new documents given judgments on old ones. Such predictions can be used in information retrieval, and the most relevant documents can even be proactively offered to the user.

The motivation is that eye movements could remove part of the tedious explicit ranking of retrieved documents, called relevance feedback in standard information retrieval. Moreover, this potentially richer feedback signal could give access to more subtle cues of relevance beyond the usual binary relevance judgments. The major task in this research is to improve the predictions by combining eye movements with the text content: we aim to combine the relevance feedback with the textual content to infer relevant words, concepts, and sentences. We thus combine two data sources for predicting relevance, eye movements measured during reading and the text content itself. This is challenging: time-series models of very noisy data must be combined with text models in a task where typically only very little relevance data is available.

This novel research task involves dynamic modeling of noisy signals, modeling of large document collections and users’ interests, and information retrieval. Multimodal integration and natural language processing are needed to some extent as well. The project also poses a number of interesting challenges in applying both kernel and Bayesian methods.
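As a concrete illustration of the kernel side of this combination, the sketch below fuses a text kernel and an eye-movement kernel as a fixed convex sum and trains a relevance classifier on the combined Gram matrix. It is a minimal sketch assuming NumPy and scikit-learn; the synthetic data, the gaze features, and the 50/50 kernel weighting are illustrative assumptions, not the project’s actual models.

```python
# Minimal sketch: multimodal relevance prediction by combining a text kernel
# with an eye-movement kernel (fixed-weight sum). All data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)

# Toy stand-ins: term-frequency vectors per sentence, and summary gaze
# features per sentence (e.g., fixation counts and durations).
X_text = rng.random((40, 100))   # 40 sentences x 100 term features
X_gaze = rng.random((40, 8))     # 40 sentences x 8 gaze features
y = np.tile([0, 1], 20)          # binary relevance judgments

# Fuse the modalities as a convex combination of per-modality kernels.
w = 0.5                          # illustrative fixed weight, not learned here
K_train = w * linear_kernel(X_text) + (1 - w) * rbf_kernel(X_gaze)

clf = SVC(kernel="precomputed").fit(K_train, y)

# Predicting relevance for new sentences needs the cross-kernel between
# new items (rows) and training items (columns), built the same way.
X_text_new, X_gaze_new = rng.random((5, 100)), rng.random((5, 8))
K_new = (w * linear_kernel(X_text_new, X_text)
         + (1 - w) * rbf_kernel(X_gaze_new, X_gaze))
print(clf.predict(K_new))        # predicted relevance labels for new items
```

A natural refinement, in the spirit of the kernel methods named above, would be to learn the kernel weights from data (multiple kernel learning) rather than fixing them, and to derive the gaze features from time-series models of the noisy eye-movement signal.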