COMPUTER SPEECH AND LANGUAGE
Special issue on
SPEECH SEPARATION AND RECOGNITION IN MULTISOURCE ENVIRONMENTS
Submission Deadline: DECEMBER 31, 2011
One of the chief difficulties in building distant-microphone speech recognition systems for everyday applications is that the noise background is typically 'multisource'. A speech recognition system designed to operate in a family home, for example, must contend with competing noise from televisions and radios, children playing, vacuum cleaners, and outdoor noise from open windows. Despite their complexity, such environments contain structure that can be learnt and exploited using advanced source separation, machine learning and speech recognition techniques, such as those presented at the 1st International Workshop on Machine Listening in Multisource Environments (CHiME 2011): http://spandh.dcs.shef.ac.uk/projects/chime/workshop/
This special issue solicits papers describing advances in speech separation and recognition in multisource noise environments, including theoretical developments, algorithms, and systems.
Examples of topics relevant to the special issue include:
• multiple speaker localization, beamforming and source separation,
• hearing inspired approaches to multisource processing,
• background noise tracking and modelling,
• noise-robust speech decoding,
• model combination approaches to robust speech recognition,
• datasets, toolboxes and other resources for multisource speech separation and recognition.
Manuscripts must be submitted through the Elsevier Editorial System (EES) at
Once logged in, click on "Submit New Manuscript", then select "Special Issue: Multisource Environments" in the "Choose Article Type" drop-down menu.
December 31, 2011: Paper submission
March 30, 2012: First review
May 30, 2012: Revised submission
July 30, 2012: Second review
August 30, 2012: Camera-ready submission
We look forward to your submissions!
Jon Barker, University of Sheffield, UK
Emmanuel Vincent, INRIA, France