Probabilistic graphical models are a powerful tool for representing complex multivariate distributions. They have been used with considerable success in many fields, from machine vision and natural language processing to computational biology and channel coding. One of the key challenges in using such models in practice is that the inference problem is computationally intractable in many cases of interest. This has prompted much research on algorithms that approximate the inference task. Examples of such algorithms include loopy belief propagation, the mean field method, and Gibbs sampling. Due to the wide use of graphical models, it is of key importance to design algorithms that work well in practice. Empirical evaluation is key here, since one does not expect approximation algorithms to work well on every problem (given the theoretical intractability of inference).
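To make the contrast between exact and approximate inference concrete, here is a minimal sketch of Gibbs sampling, one of the approximation algorithms mentioned above, on a hypothetical three-variable binary chain model (the potentials and constants are illustrative, not taken from the challenge). Each variable is repeatedly resampled from its conditional distribution given the current values of the others, and the resulting samples estimate a marginal that, for a tiny model like this one, can also be checked by brute-force enumeration:

```python
import itertools
import math
import random

# Hypothetical pairwise binary model: x0, x1, x2 in {0, 1} on a chain,
# pairwise potentials favoring agreement, plus a unary field on x0.
# Both constants are chosen arbitrarily for illustration.
COUPLING = 0.8
FIELD = 0.5

def log_score(x):
    """Unnormalized log-probability of a full assignment."""
    return COUPLING * ((x[0] == x[1]) + (x[1] == x[2])) + FIELD * x[0]

def exact_marginal(var):
    """P(x_var = 1) by brute force -- feasible only for tiny models."""
    weights = {x: math.exp(log_score(x))
               for x in itertools.product([0, 1], repeat=3)}
    z = sum(weights.values())
    return sum(w for x, w in weights.items() if x[var] == 1) / z

def gibbs_marginal(var, n_sweeps=20000, seed=0):
    """Estimate P(x_var = 1) with Gibbs sampling."""
    rng = random.Random(seed)
    x = [0, 0, 0]
    burn_in = n_sweeps // 10
    hits = 0
    for sweep in range(n_sweeps):
        for i in range(3):
            # Resample x_i from its conditional given the other variables.
            p1 = math.exp(log_score(x[:i] + [1] + x[i + 1:]))
            p0 = math.exp(log_score(x[:i] + [0] + x[i + 1:]))
            x[i] = 1 if rng.random() < p1 / (p0 + p1) else 0
        if sweep >= burn_in:  # discard early, unconverged sweeps
            hits += x[var]
    return hits / (n_sweeps - burn_in)
```

On a model this small the exact answer is available for comparison; on the challenge-scale models, only the sampler (or another approximation) remains feasible, which is exactly why empirical evaluation matters.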

The challenge was held as part of the Uncertainty in Artificial Intelligence (UAI) conference. It involved several inference tasks: finding the MAP assignment, computing the probability of evidence, calculating marginals, and learning models using approximate inference. Participants provided inference algorithms, and these were applied to models from the following domains: machine vision (e.g., segmentation and object detection), computational biology (e.g., protein design and pedigree analysis), constraint satisfaction, medical diagnosis, and collaborative filtering, as well as some synthetic problems whose graph structure appears in real-world problems (e.g., 2D and 3D grids). Evaluating the state of the art in approximate inference helps guide research in the field, by highlighting which methods are particularly promising in which domains. Additionally, since running time was carefully evaluated, the results indicate which methods can scale to very large problems.