News Archives

The PASCAL Probabilistic Inference Challenge (PIC11) – Call for Submissions

Probabilistic inference in graphical models is a key task in many applications, from machine vision to computational biology. Since the
problem is generally computationally intractable, many approximate methods have been suggested over the years.

The goal of this challenge is to evaluate inference algorithms on difficult large-scale problems.
Some of the challenge highlights are:
– Algorithms for MAP, marginals, and partition function approximation will be evaluated (a brute-force illustration of these three tasks follows this list).
– Solvers will be evaluated on different time scales (20 seconds, 20 minutes, 1 hour).
– An online leaderboard will show the relative rankings of the algorithms.
– Modest cash prizes will be available in all categories.
– After the challenge, the evaluation site will remain active as a service to the community.
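
For readers less familiar with these tasks, the following is a minimal, purely illustrative sketch (not part of the challenge infrastructure) that computes all three quantities by brute force on a tiny hypothetical pairwise model; the potentials are made-up numbers.

```python
# Illustrative only: brute-force computation of the three inference tasks
# (MAP, marginals, partition function) on a tiny hypothetical pairwise model.
import itertools
import numpy as np

# Two binary variables x0, x1 with unary potentials phi_i and a pairwise potential psi.
phi = [np.array([1.0, 2.0]), np.array([3.0, 1.0])]   # phi[i][x_i]
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])                          # psi[x0, x1]

def score(x):
    """Unnormalized probability of a joint assignment x = (x0, x1)."""
    return phi[0][x[0]] * phi[1][x[1]] * psi[x[0], x[1]]

assignments = list(itertools.product([0, 1], repeat=2))

Z = sum(score(x) for x in assignments)                # partition function
map_x = max(assignments, key=score)                   # MAP assignment
marg_x0 = [sum(score(x) for x in assignments if x[0] == v) / Z for v in (0, 1)]

print("Z =", Z)
print("MAP assignment:", map_x)
print("P(x0):", marg_x0)
```

The challenge is, of course, about approximating these quantities on models far too large for such enumeration.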

Timeline:
July 15th, 2011: Site open to registration and submission of solvers.
October 1st, 2011: Challenge Begins. No new models added after this date.
December 31st, 2011: Site closed to submissions.
January 15th, 2012: Winners announced.

Organizers: Gal Elidan and Amir Globerson (Hebrew University).
Web master: Uri Heinemann (Hebrew University)

Web Page: http://www.cs.huji.ac.il/project/PASCAL/

PhD Studentship – Learning to Recognise Dynamic Visual Content from Broadcast Footage

Supervision: Dr Mark Everingham, University of Leeds

Deadline: Open until filled

Project Description:

A PhD studentship is available as part of an EPSRC funded project on “Learning to Recognise Dynamic Visual Content from Broadcast Footage” being jointly undertaken by the University of Leeds (Mark Everingham), the University of Oxford (Andrew Zisserman), and the University of Surrey (Richard Bowden).

The objective of the project is to develop automated tools that allow temporal visual content, such as a person gesturing, using sign language, or interacting with objects or other humans, to be learnt from standard TV broadcast signals, using the transmitted annotation (subtitles and annotation for the visually impaired) as supervision. This requires the development of models of the visual appearance and dynamics of actions, and learning methods that can train such models from the weak supervision provided by the subtitles. Once the models have been learnt, they can be used without supervision, e.g. for sign language interpretation or automatic description of video content; demonstrators for both of these applications will be engineered during the project.

The student will focus on the development of visual descriptors and learning algorithms for sign language and action recognition in broadcast video. S/he will be based in the School of Computing at the University of Leeds, and will be supervised by Dr Mark Everingham (http://www.comp.leeds.ac.uk/me/).

Funding Notes: The studentship is funded through an EPSRC project and will start from 1st October 2011 or as soon as possible thereafter. It is funded for 3 years and covers Home/EU fees and maintenance at the standard EPSRC rate (currently £13,590 per annum). Applications are welcome from overseas students, but such students would have to cover the difference between the UK/EU and overseas student fee rates from another source, such as a scholarship or personal funds.

Academic Staff Contact Details: Dr Mark Everingham. M.Everingham(at)leeds.ac.uk (for informal enquiries about the project only – do not send applications to this address).

Entry Requirements: The PhD candidate should have or expect to obtain a first class or strong 2.1 honours degree in computer science, mathematics, or related discipline. The following qualities are desirable: demonstrable experience in computer vision or machine learning; excellent record of academic and/or professional achievement; strong mathematical skills; strong programming skills, especially C/C++ and Matlab; good written and spoken communication skills in English.

Details on how to apply can be found at:

http://www.leeds.ac.uk/rds/prospective_students/apply/I_want_to_apply.html

Application Procedure: Formal applications for research degree study must be made either online through the University website or on the University’s application form. Detailed information on how to apply online can be found at: http://www.leeds.ac.uk/students/apply_research.htm

The paper application form is available at: http://www.leeds.ac.uk/rsa/prospective_students/apply/I_want_to_apply.html

Please return the completed application form to the Research Degrees & Scholarships Office, University of Leeds, LS2 9JT.

Please note that if you intend to send academic references, we can only accept them if they are on official letterheaded paper and bear an original signature and stamp; they must arrive in sealed envelopes. Alternatively, the School will contact your named academic referees directly.

Open PhD Position @ ETH Zurich “Wearable Assistant for Patients with Parkinson’s Disease”

=== Background ===
The Wearable Computing Laboratory at ETH Zurich (http://www.wearable.ethz.ch/) develops methods to recognize human activities and context from data captured from sensors placed on-body, such as those found in mobile phones.

One use of context and activity recognition methods is to support independent living for people in need, such as the elderly or people with disabilities.

=== CuPiD: “Closed-loop system for personalized and at-home rehabilitation of people with Parkinson’s Disease” ===

CuPiD is a European-funded project with 8 partners distributed across 6 EU countries, running over a 3-year period (2011-2014). Partners include 2 universities, 2 medical centers, 3 enterprises, and one research foundation.

The objective of CuPiD is to develop a system to support the rehabilitation of persons with Parkinson’s disease. The project considers three issues commonly faced by Parkinson’s disease patients:
* Freezing of gait: while walking, the person suddenly “freezes” and is unable to continue walking. This may last from a few seconds up to a minute. The person is conscious of this, but unable to resume walking. This happens more often when performing sharp turns, entering narrow areas, or when faced with cognitive load or social pressure.
* Motor control: fine motor control used for manipulative gestures is often negatively affected as Parkinson’s disease progresses.
* Balance: loss of balance is common and may lead to falls, or difficulties in appropriately shifting the body weight in sit-to-stand transitions.

The aim of the CuPiD project is to devise wearable and ambient devices to perform at-home rehabilitation for these issues. The principle is to provide pre-emptive feedback to the user shortly before the onset of an issue, and to rely on neural plasticity so that the user’s behavior unconsciously adapts to avoid the situations leading to these issues altogether.

CuPiD makes use of sensors such as body-worn movement sensors, physiological sensors, and location awareness in order to recognize the onset of an issue. Feedback includes audio and vibrotactile feedback, and virtual reality. The CuPiD system also logs data so that care personnel can monitor the user’s rehabilitation progress.

Within this project, ETH Zürich focuses on freezing of gait, but close collaboration between all project partners is foreseen, as an integrated rehabilitation system is the ultimate goal of the project.

=== Job description ===

We offer a PhD position within the framework of the 3-year (2011-2014) European-funded CuPiD project.
In this position you will be responsible for one of the project’s work packages. This work package consists of:
* devising methods and sensors to predict the upcoming onset of gait freeze from multimodal body-worn sensor data using data mining and pattern recognition techniques (a minimal illustrative sketch follows this list);
* devising appropriate audio feedback for the patients;
* combining sensing and feedback in dedicated embedded systems;
* conducting evaluations of the system with patients, in close collaboration with the project partners.
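
To make the first item concrete, below is a minimal, hedged sketch of one frequency-domain heuristic from the freezing-of-gait literature (a “freeze index”: the ratio of spectral power in the 3-8 Hz band to the 0.5-3 Hz locomotor band of a leg-worn accelerometer). The sampling rate, band limits, and threshold are illustrative assumptions, not the method the project will develop.

```python
# Illustrative sketch only (not the CuPiD method): a sliding-window "freeze index"
# computed from the vertical acceleration of a leg-worn sensor (1-D numpy array).
# All constants are assumptions chosen to make the example concrete.
import numpy as np

FS = 64                  # assumed sampling rate (Hz)
WINDOW = 4 * FS          # 4-second analysis window

def freeze_index(acc_window, fs=FS):
    """Ratio of power in the 3-8 Hz (freeze) band to the 0.5-3 Hz (locomotor) band."""
    spectrum = np.abs(np.fft.rfft(acc_window - acc_window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(acc_window), d=1.0 / fs)
    freeze_power = spectrum[(freqs >= 3.0) & (freqs < 8.0)].sum()
    loco_power = spectrum[(freqs >= 0.5) & (freqs < 3.0)].sum()
    return freeze_power / (loco_power + 1e-9)

def detect_freeze(acc_signal, threshold=2.0):
    """Flag half-overlapping windows whose freeze index exceeds the (assumed) threshold."""
    flags = []
    for start in range(0, len(acc_signal) - WINDOW + 1, WINDOW // 2):
        fi = freeze_index(acc_signal[start:start + WINDOW])
        flags.append((start / FS, fi > threshold))
    return flags
```

The project aims at predicting the upcoming onset rather than merely detecting it, which calls for richer multimodal features and learned models rather than a single hand-set threshold.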

Your work environment will be multinational, both in Zürich and with project partners across Europe, with frequent travel to the partners’ locations.

Within this project, your research topics will include (but are not limited to):
* Multimodal data-mining: identification of relevant sensor signal patterns in a large dataset to predict onset of freeze
* Activity and context recognition with body-worn sensors
* User calibration techniques to adapt context recognition parameters to specific users
* Multimodal data fusion to combine location, motion, and physiological signals to predict the onset of freeze
* Wireless sensor networks
* Laboratory and clinical evaluation: evaluations with patients at the partners’ clinical locations.

Starting date: ASAP

=== Requirements ===

The candidate has a diploma, MSc, or equivalent in electrical engineering, micro-engineering, computer science, or mathematics,
and has strong interests in mobile computing, machine learning/pattern recognition, signal processing, adaptive and learning systems, and in the combination of theoretical and experimental research.
Fluent spoken and written English is mandatory.

=== Contact and application ===

For further information about the CuPiD project and your contribution within it, please contact Dr. Daniel Roggen (droggen(at)ife.ee.ethz.ch).

If you are interested and believe that you qualify, please send your application to Prof. Gerhard Tröster (troester(at)ife.ee.ethz.ch). Include:
* Curriculum Vitae with the names and contact details of at least 2 references
* a list of exams and grades obtained
* a cover letter explaining how your skills and research interests fit the project

For more information:
– http://www.wearable.ethz.ch/
– http://www.ife.ee.ethz.ch/openpositions/job_cupid1

Open PhD Position @ ETH Zurich “Crowd-Sourcing of Human Activity Recognition on Mobile Phones”

=== Background ===

The Wearable Computing Laboratory at ETH Zurich (http://www.wearable.ethz.ch/) develops methods to recognize complex and hierarchical human activities from data captured from sensors placed on-body, such as those found in mobile phones.

One goal of our group is to achieve an automatic “life log” of the user’s activities using mobile phone sensors. An example of a life log could be: “yesterday, you had a coffee with your friend, then went shopping, baked a cake, and watched TV before going to sleep”. This has promising uses in supporting people with memory loss and dementia. It also enables many exciting applications when combined with social networks, such as automatic updates of the user’s Facebook profile, and pervasive gaming.

In order to realize such a life log, the system must be able to recognize location, modes of locomotion, postures, and gestures, and to infer higher-level activities from these primitives. This is done with streaming signal processing and machine learning techniques. Current approaches follow a “learning by demonstration” principle, where the user is requested to provide training data to the system. This is currently a major limitation, as the training datasets remain small and do not capture the rich variability of human activities.
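
As a purely illustrative aside, a “learning by demonstration” pipeline of the kind mentioned above typically turns labelled sensor windows into simple features and feeds them to an off-the-shelf classifier, as in the sketch below. The window length, features, classifier, and the `user_demonstrations` variable are assumptions for illustration, not the group’s actual system.

```python
# Minimal illustration of a "learning by demonstration" activity-recognition pipeline:
# labelled accelerometer windows -> simple statistical features -> off-the-shelf classifier.
# Window length, features, and classifier are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128  # samples per window (e.g. ~2.5 s at 50 Hz)

def window_features(acc_xyz):
    """Mean, standard deviation, and signal magnitude area of a (WINDOW, 3) accelerometer window."""
    return np.concatenate([
        acc_xyz.mean(axis=0),
        acc_xyz.std(axis=0),
        [np.abs(acc_xyz).sum() / len(acc_xyz)],
    ])

def make_dataset(recordings):
    """recordings: list of (acc_xyz array, label) pairs, i.e. the user's training demonstrations."""
    X, y = [], []
    for acc, label in recordings:
        for start in range(0, len(acc) - WINDOW + 1, WINDOW // 2):
            X.append(window_features(acc[start:start + WINDOW]))
            y.append(label)
    return np.array(X), np.array(y)

# X_train, y_train = make_dataset(user_demonstrations)   # hypothetical labelled recordings
# clf = RandomForestClassifier().fit(X_train, y_train)
# predicted_activity = clf.predict([window_features(live_window)])
```

The crowd-sourcing approach developed in Smart-Days aims precisely at replacing the small, per-user set of demonstrations with sporadic annotations contributed by many users.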

=== Smart-Days: “Smart Distributed daily living ActivitY-recognition Systems” ===

In a Swiss-funded project involving ETH Zürich and the University of Applied Sciences at Yverdon, we are developing a novel crowd-sourcing-based approach to recognizing complex human activities. The project time span is 2011-2014.

The Smart-DAYS system is composed of: multi-modal sensor nodes providing data relevant to the user’s activities and capable of local data interpretation (e.g. motion sensor nodes); an on-body mobile device (e.g. a phone) that fuses the sensor node data to infer the user’s activities; and a cloud server backend storing collective activity models.

Smart-Days offers these advances over the state of the art:
* The activity models are obtained from a multitude of users providing sporadic annotations about their current or past activities. Thus a large and rich set of human activities can be captured in a bottom-up process;
* The set of activities to recognize is not statically defined at design-time; it can grow to encompass new activities as they are discovered at run-time;
* The system is capable of adapting activity models at run-time to cope with changing user behavior patterns;
* The system can share and reuse the acquired knowledge between the users of the system;
* The system can exploit online data sources to bootstrap the recognition capabilities;
* Essentially, the system will be able to recognize an unbounded, growable, and adaptable set of human activities in open-ended environments.

To realize this, the Smart-Days approach combines:
* High-level activity recognition based on unsupervised hierarchical clustering and semi-supervised techniques;
* Crowdsourcing of activity model acquisition, exploiting the knowledge acquired by other users’ devices and web databases to bootstrap activity recognition.

At ETH Zürich we focus on the development of the crowd-sourcing approach to activity recognition. The University of Applied Sciences at Yverdon focuses on unsupervised hierarchical data clustering techniques. These two approaches are combined in a series of joint evaluations in a large-scale deployment. Thus, a tight collaboration between the two institutes is foreseen.

=== Job description ===

We offer a PhD position within the framework of the 3-year (2011-2014) Smart-Days project. In this position you will be responsible for one of the project’s work packages, which comprises all elements required to achieve a robust crowd-sourced acquisition of human activity models. You will collaborate closely with the project’s partners throughout the duration of the project.

Your work environment will be multinational, with frequent travel to the partners’ locations.

Within this project, your research topics will include (but are not limited to):
* Activity and context recognition on mobile phones: Real-time streaming signal processing and machine learning on mobile phones or embedded systems
* Online adaptive machine learning: including subsets of supervised, semi-supervised, unsupervised techniques, as well as transfer learning and multitask learning
* Incentive design and capture of sporadic user feedback: devising methods to capture user feedback in a way that maximizes information gain and minimizes user disturbance.
* Real-world deployment and evaluation: experiments over several weeks or months with a mobile platform deployed via an app store.
* Multimodal data fusion

Starting date: ASAP

=== Requirements ===

The candidate has a diploma, MSc, or equivalent in electrical engineering, micro-engineering, computer science, or mathematics,
and has strong interests in mobile computing, machine learning/pattern recognition, signal processing, adaptive and learning systems, and in the combination of theoretical and experimental research.
Fluent spoken and written English is mandatory.

=== Contact and application ===

For further information about the Smart-Days project and your contribution within it, please contact Dr. Daniel Roggen (droggen(at)ife.ee.ethz.ch), or Prof. Andres Perez-Uribe (andres.perez-uribe(at)heig-vd.ch).

If you are interested and believe that you qualify, please send your application to Prof. Gerhard Tröster (troester(at)ife.ee.ethz.ch). Include:
* Curriculum Vitae with the names and contact details of at least 2 references
* a list of exams and grades obtained
* a cover letter explaining how your skills and research interests fit the project

For more information:
– http://www.wearable.ethz.ch/
– http://www.ife.ee.ethz.ch/openpositions/job_smartdays

Open PhD Position @ ETH Zurich “Indoor Localization and Context Recognition for Patients with Parkinson’s Disease”

=== Background ===
The Wearable Computing Laboratory at ETH Zurich (http://www.wearable.ethz.ch/) develops methods to recognize human activities and context from data captured from sensors placed on-body, such as those found in mobile phones.

One use of context and activity recognition methods is to support independent living for people in need, such as the elderly or people with disabilities.

=== CuPiD: “Closed-loop system for personalized and at-home rehabilitation of people with Parkinson’s Disease” ===

CuPiD is a European-funded project with 8 partners distributed across 6 EU countries, running over a 3-year period (2011-2014). Partners include 2 universities, 2 medical centers, 3 enterprises, and one research foundation.

The objective of CuPiD is to develop a system to support the rehabilitation of persons with Parkinson’s disease. The project considers three issues commonly faced by Parkinson’s disease patients:
* Freezing of gait: while walking, the person suddenly “freezes” and is unable to continue walking. This may last from a few seconds up to a minute. The person is conscious of this, but unable to resume walking. This happens more often when performing sharp turns, entering narrow areas, or when faced with cognitive load or social pressure.
* Motor control: fine motor control used for manipulative gestures is often negatively affected as Parkinson’s disease progresses.
* Balance: loss of balance is common and may lead to falls, or difficulties in appropriately shifting the body weight in sit-to-stand transitions.

The aim of the CuPiD project is to devise wearable and ambient devices to perform at-home rehabilitation for these issues. The principle is to provide pre-emptive feedback to the user shortly before the onset of an issue, and to rely on neural plasticity so that the user’s behavior unconsciously adapts to avoid the situations leading to these issues altogether.

CuPiD makes use of sensors such as body-worn movement sensors, physiological sensors, and location awareness in order to recognize the onset of an issue. Feedback includes audio and vibrotactile feedback, and virtual reality. The CuPiD system also logs data so that care personnel can monitor the user’s rehabilitation progress.

Within this project, ETH Zürich focuses on freezing of gait, but close collaboration between all project partners is foreseen, as an integrated rehabilitation system is the ultimate goal of the project.

=== Job description ===

We offer a PhD position within the framework of the 3-year (2011-2014) European-funded CuPiD project.
In this position you will be responsible for one of the project’s work packages, which consists of devising context recognition and indoor localization methods to deliver specific feedback to the user as part of the rehabilitation programme. Specifically, the work package comprises:
* Devising a localization method for use in unknown indoor environments, using sensors such as those available on mobile phones or custom wearable sensor nodes and techniques inspired by simultaneous localization and mapping (SLAM).
* Recognition of pre-defined activities and gestures in the home environment from wearable sensors, to deliver feedback to the user in specific situations.
* Implementation of the localization and context recognition methods in embedded platforms or mobile phones.

Your work environment will be multinational, both in Zürich and with project partners across Europe, with frequent travel to the partners’ locations.

Within this project, your research topics will include (but are not limited to):
* Simultaneous Localization and Mapping (SLAM) with body-worn sensors: identification of the user’s position in unknown indoor environments on mobile phones or custom sensors (e.g. WiFi fingerprints to identify re-occurring locations, dead-reckoning from acceleration and compass sensors to estimate the path; a minimal dead-reckoning sketch follows this list).
* Context recognition with body-worn sensors with streaming signal processing and machine learning techniques in embedded platforms
* Wireless sensor networks
* Multimodal data fusion
* Mobile phone and embedded platform implementations
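
For illustration only, the dead-reckoning idea mentioned in the first item above could be sketched as follows: detect steps from peaks in the acceleration magnitude and advance the position estimate by a fixed step length along the compass heading. The step length, threshold, and sampling assumptions are hypothetical, and a real system would fuse such estimates with, e.g., WiFi fingerprints.

```python
# Illustrative pedestrian dead-reckoning sketch (not the CuPiD implementation):
# detect steps from acceleration-magnitude peaks, then advance the position estimate
# by a fixed step length along the compass heading. All constants are assumptions.
import numpy as np

STEP_LENGTH = 0.7       # assumed step length in metres
PEAK_THRESHOLD = 11.0   # m/s^2, simple threshold above gravity for step detection

def dead_reckon(acc_xyz, headings_rad):
    """acc_xyz: (N, 3) accelerometer samples; headings_rad: (N,) compass headings.
    Returns the estimated (x, y) path, one point per detected step."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    x, y = 0.0, 0.0
    path = [(x, y)]
    above = False
    for i in range(len(magnitude)):
        if magnitude[i] > PEAK_THRESHOLD and not above:      # rising edge = one step
            above = True
            x += STEP_LENGTH * np.cos(headings_rad[i])
            y += STEP_LENGTH * np.sin(headings_rad[i])
            path.append((x, y))
        elif magnitude[i] < PEAK_THRESHOLD:
            above = False
    return path
```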

Starting date: ASAP

=== Requirements ===

The candidate has a diploma, MSc, or equivalent in electrical engineering, micro-engineering, computer science, or mathematics,
and has strong interests in mobile computing, machine learning/pattern recognition, signal processing, adaptive and learning systems, and in the combination of theoretical and experimental research.

Fluent spoken and written English is mandatory.

=== Contact and application ===
For further information about the CuPiD project and your contribution within it, please contact Dr. Daniel Roggen.

If you are interested and believe that you qualify, please send your application to Prof. Gerhard Tröster. Include:
* Curriculum Vitae with the names and contact details of at least 2 references
* a list of exams and grades obtained
* a cover letter explaining how your skills and research interests fit the project

For more information:
– http://www.wearable.ethz.ch/
– http://www.ife.ee.ethz.ch/openpositions/job_cupid2

Postdoc/PhD Positions in Robot Learning and Reinforcement Learning

Postdoc/PhD positions available at the Intelligent Autonomous Systems Group at Darmstadt University of Technology / Technische Universitaet Darmstadt, Germany

The Intelligent Autonomous Systems Group invites applications for several research positions (postdoc/PhD) in the domain of robot learning and reinforcement learning. The core focus of the group revolves around the problem of how we can endow robots with new motor skills and allow them to self-improve their abilities. To approach this topic, we develop novel machine learning methods, evaluate possibilities from the perspective of classical robotics, and follow biological inspiration from human motor control.

The researcher will be embedded in this highly international research group led by Jan Peters, which is centered in Darmstadt, Germany, but is tightly intertwined with the Department of Empirical Inference & Machine Learning at the Max Planck Institute for Intelligent Systems as well as the Computational Learning and Motor Control Lab at the University of Southern California. The researcher will be encouraged to take part in our collaborations with many important machine learning and robotics groups in North America, Europe, and Japan.
More background can be found at the preliminary homepage http://www.intelligent-autonomous-systems.de/.

The successful candidate should fit in with these interests and should have a strong background in robotics or machine learning, ideally in both.
Applications from scientists with a background in reinforcement learning, imitation learning, robot learning, robot grasping and manipulation, motor skill learning, and biomimetic robotics are particularly encouraged. The positions are available immediately and are to be filled as soon as possible. Applications will be considered until all positions are filled. The application should include a cover letter stating the candidate’s research interests, a curriculum vitae, a list of publications, and the names of three referees. Electronic submission is requested.

IF YOU ARE CONSIDERING APPLYING, PLEASE PING ME (=Jan Peters) BY EMAIL ASAP!

Contact Information:
Prof. Jan Peters, Ph.D.
Technische Universitaet Darmstadt, FB Informatik
Hochschulstr. 10, 64289 Darmstadt, Germany
phone: +49-6151-16 7351
fax: +49-6151-16 7374
email: peters(at)informatik.tu-darmstadt.de
mail(at)jan-peters.net
www: http://www.intelligent-autonomous-systems.de/

CFP: 1st IEEE Workshop on Kernels and Distances for Computer Vision

Call for Participation:
1st IEEE Workshop on Kernels and Distances for Computer Vision
ICCV 2011 Workshop, Barcelona, Spain
http://kdcv2011.ist.ac.at/
workshop date: November 13, 2011
submission deadline: August 28th, 2011

A basic building block in many high-level Computer Vision tasks such as image classification, object detection, image retrieval, and image segmentation is the notion of a distance or similarity between images and/or parts thereof. This is conveniently formalized by the concepts of distance functions and kernels, which can be used with many existing algorithms such as large margin classifiers or nearest neighbour algorithms. The importance of suitable distances and kernels is reflected in the large number of publications at major computer vision conferences. Researchers have mainly pursued two different routes: (a) encoding human prior knowledge manually, or (b) learning-based approaches that try to infer these functions automatically from training data. In particular, approaches of the latter kind have been applied quite successfully to the aforementioned tasks, as evident from ever-improving performance results on standard benchmark data sets. However, several key issues remain, and many existing approaches employ only simple distance and kernel functions that are not tailored to the specifics of the Computer Vision problem at hand.
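
As a small illustration of route (a), a hand-designed kernel such as the exponentiated chi-squared kernel between bag-of-visual-words histograms can be plugged into a large margin classifier via a precomputed Gram matrix. The histograms, labels, and gamma value below are made-up assumptions; the sketch only shows the mechanics.

```python
# Illustrative example of route (a): a hand-designed chi-squared kernel between
# bag-of-visual-words histograms, used with a large-margin classifier through a
# precomputed Gram matrix. The histograms and gamma value are assumptions.
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

# Hypothetical 4-bin visual-word histograms for 4 training images and their labels.
X_train = np.array([[0.6, 0.2, 0.1, 0.1],
                    [0.5, 0.3, 0.1, 0.1],
                    [0.1, 0.1, 0.4, 0.4],
                    [0.1, 0.2, 0.3, 0.4]])
y_train = np.array([0, 0, 1, 1])

K_train = chi2_kernel(X_train, gamma=0.5)          # Gram matrix between training histograms
clf = SVC(kernel="precomputed").fit(K_train, y_train)

X_test = np.array([[0.55, 0.25, 0.1, 0.1]])
K_test = chi2_kernel(X_test, X_train, gamma=0.5)   # kernel values between test and training images
print(clf.predict(K_test))                          # expected: class 0, matching the similar histograms
```

Route (b) would instead treat quantities such as gamma, or the kernel combination itself, as parameters to be learned from training data.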

We invite authors to submit abstracts of relevant research for presentation at the workshop. Topics relevant to the workshop include (but are not limited to) ongoing research efforts related to distances and kernels in computer vision, such as novel algorithmic formulations, applications of distance/kernel learning in vision, or empirical evaluations of existing techniques.

Invited Speakers:
Liefeng Bo (University of Washington), Deva Ramanan (UC Irvine), Fernando de la Torre (CMU), Andrea Vedaldi (University of Oxford)

Submissions:
Abstracts should be submitted as .pdf, and should not exceed two pages. Each submission will be reviewed by members of the workshop committee, and successful abstracts will be selected for presentation at the workshop as either oral or poster presentations. The deadline for submission is August 28, 2011. We expect decisions to be announced on or around September 4, 2011. Link to the submission site:

https://cmt.research.microsoft.com/KDCV2011/

Please note that we do not plan workshop proceedings. The intention of the workshop is to provide an overview of recent advances, so we invite presentation of previously-published work. We also invite submissions of work in progress that has not been published; since there are no proceedings, such work can be published in another peer-reviewed venue.

Organization
The workshop is co-organized by Peter Gehler (Max Planck Institute for Informatics), Christoph Lampert (IST Austria), and Brian Kulis (UC Berkeley).

PhD scholarships, Heidelberg University

5 PhD scholarships in Computational Linguistics: “Semantics beyond the
sentence” (Date: July 5th 2011, Deadline: July 31st)

We are soliciting applications for 5 PhD scholarships (EUR 1310 per
month, tax-free) within the research initiative “Coherence in Language
Processing: Semantics Beyond the Sentence”. They are associated with
the Graduate Program “Semantic Processing”, jointly organized by the
Computational Linguistics Department (ICL) at Heidelberg University
[1] and the NLP Group at HITS gGmbH [2] in Heidelberg, Germany. The
scholarships are available from October 1st, 2011, and will run three
years (dependent on positive evaluations after the first and second
years).

The goal of the research initiative “Coherence in Language Processing:
Semantics Beyond the Sentence” is to investigate computational models
of semantic phenomena at the discourse level by (a) analysing semantic
phenomena at the discourse level and representing them; (b) using these
representations to improve semantic analysis; (c) evaluating (a) and
(b) in NLP applications, including SMT. See [3] for details.

Candidates should have a strong background in computational
linguistics and possess a Masters or a Diploma degree in either
Computational Linguistics or Computer Science/Linguistics with a
specialization in Natural Language Processing. Experience with machine
learning, corpus-based methods and statistics is a plus. Strong
programming skills (Java, C++, or Python) are required.

Scholarship holders will become members of the graduate program
“Semantic Processing” [4]. They will participate in a PhD colloquium
and may attend lectures at the university and the Heidelberg Graduate
School “MathComp” [5]. The graduate program provides a lively research
environment with about ten PhD students already
enrolled. Interdisciplinary research links exist to computer science,
linguistics, and other disciplines.

Applications should include a research statement indicating the
candidate’s interests and plans (which should fall into the area
outlined above), university transcripts and a CV. They should also
indicate a preference for a primary supervisor matching their plans
(Anette Frank, Sebastian Pado, Stefan Riezler, Michael Strube). For
more details, see the links below.

Please send your application, including all documents, as a single PDF
file before **July 31st 2011** to:
Anke Sopka & Corinna Schwarz
Sekretariat Computerlinguistik
sekretariat(at)cl.uni-heidelberg.de

Questions should be sent to Sebastian Pado, pado(at)cl.uni-heidelberg.de

Links:
[1] ICL (http://www.cl.uni-heidelberg.de)
[2] HITS (http://www.h-its.org/nlp)
[3] Research Initiative Discourse Semantics
http://semproc.cl.uni-heidelberg.de/discourse-semantics/
[4] Graduate Program Semantic Processing
http://semproc.cl.uni-heidelberg.de
[5] Heidelberg Graduate School in Mathematical
and Computational Methods for the Sciences
http://www.mathcomp.uni-heidelberg.de/

Researcher position, Statistical Machine Translation

The Machine Learning for Document Access and Translation group of the
Xerox Research Centre Europe conducts research in Statistical Machine
Translation and Information Retrieval, Categorization and Clustering
using advanced machine learning methods. The team is part of the
PASCAL 2 European Network of Excellence, ensuring a strong network of
academic collaboration.

We are opening a position for a researcher with a background in
Statistical Machine Translation to support our participation in the
EU-funded project TransLectures.

Required experience and qualifications:

– PhD in computer science or computational linguistics with focus on
SMT or statistical NLP.
– Good publication record and evidence of implementing systems.
– A good command of English, as well as open-mindedness and the will
to collaborate within a team.
– Acquaintance with Spoken Language Translation is a plus.

Preferred starting date: November 2011

Contract duration: 18 months

Application instructions

Please email your CV and covering letter, with message subject
“Statistical Machine Translation Researcher” to xrce-candidates and to
Nicola.Cancedda at xrce.xerox.com. Inquiries can be sent to
Nicola.Cancedda at xrce.xerox.com.

XRCE is a highly innovative place and we strongly encourage
publication and interaction with the scientific community.

Job announcement URL:

http://www.xrce.xerox.com/About-XRCE/Career-opportunities/Researcher-Statistical-Machine-Translation

Research position in machine learning / constrained optimisation – University of York

Postdoctoral Research Fellow – Application deadline: 29 July 2011

I’m looking for an energetic and creative Postdoctoral Research Fellow
to work on a Medical Research Council funded project: “A graphical
model approach to pedigree reconstruction using constrained
optimisation”. This research is at the intersection of constrained
optimisation techniques (particularly integer linear programming) and
machine learning of Bayesian networks (BNs) within a Bayesian
framework. The goal of the project is to learn BN representations of
pedigrees (‘family trees’) from genetic data and prior information.
Pedigrees are useful for uncovering genetic factors in disease and
more generally for investigating gene-gene and gene-environment
interactions.

The project is in collaboration with the Dept of Genetics at the
University of Leicester and the Dept of Social Medicine at the
University of Bristol. The project will run from 3 October 2011 for 3
years. Salary is in the range GBP 35,788 – 44,016.

For further details and details of how to apply please go to:
https://www22.i-grasp.com/fe/tpl_YorkUni01.asp?newms=jj&id=45117

For informal enquiries, please email James Cussens: jc(at)cs.york.ac.uk

http://www.cs.york.ac.uk/~jc