Knowledge 4 All Foundation Concludes Successful Collaboration with European AI Excellence Network ELISE

Knowledge 4 All Foundation is pleased to announce the successful completion of its participation in the European Learning and Intelligent Systems Excellence (ELISE) project, a prominent European Network of Artificial Intelligence Excellence Centres. ELISE, part of the EU Horizon 2020 ICT-48 portfolio, originated from the European Laboratory for Learning and Intelligent Systems (ELLIS) and concluded in August 2024.

The European Learning and Intelligent Systems Excellence (ELISE) project, funded under the EU's Horizon 2020 programme, aimed to position Europe at the forefront of artificial intelligence (AI) and machine learning research.

Throughout the project, Knowledge 4 All Foundation collaborated with leading AI research hubs and associated fellows to advance high-level research and disseminate knowledge across academia, industry, and society. The Foundation contributed to various initiatives, including mobility programs, research workshops, and policy development, aligning with ELISE’s mission to promote explainable and trustworthy AI outcomes.

The Foundation’s involvement in ELISE has reinforced its commitment to fostering innovation and excellence in artificial intelligence research. By engaging in this collaborative network, Knowledge 4 All Foundation has played a role in positioning Europe at the forefront of AI advancements, ensuring that AI research continues to thrive within open societies.

Knowledge 4 All Foundation Completes Successful Engagement in European AI Excellence Network HumanE-AI-Net

Knowledge 4 All Foundation (K4A) is pleased to announce the successful completion of its engagement in the HumanE AI Network, one of the prominent European Networks of Artificial Intelligence (AI) Excellence Centres. This initiative has been instrumental in advancing human-centric AI research and fostering collaboration across Europe.

Both HumanE-AI-Net and ELISE were part of the H2020 ICT-48-2020 call, fostering AI research excellence in Europe.

The HumanE AI Network, comprising leading European research centres, universities, and industrial enterprises, has focused on developing AI technologies that align with European ethical values and societal norms. K4A’s participation in this network has contributed to shaping AI research directions, methods, and results, ensuring that AI advancements are beneficial to individuals and society as a whole.

K4A remains committed to advancing AI research and development, building upon the foundations established through these collaborations. The foundation looks forward to future opportunities to contribute to the global AI community and to promote the responsible and ethical development of AI technologies.

AI4D blog series: Text to speech WOLOF dataset

In this work, we propose to create a Wolof text-to-speech dataset. A Text To Speech (TTS) dataset is composed of pairs of text and audio, where the text is the transcription of the associated audio. But before we dive into the process of collecting the dataset, let’s take a look at some interesting facts about the Wolof language and why it is important to build such a dataset.
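To make that pair structure concrete, here is a minimal Python sketch of how such text/audio pairs might be stored and loaded. The pipe-separated metadata.csv layout and file names are illustrative assumptions (borrowed from the common LJSpeech convention), not necessarily the final format of our dataset.

# Minimal sketch: load (audio, transcription) pairs from a metadata file.
# Layout assumed: one "audio_id|transcription" entry per line (LJSpeech-style).
import csv
from pathlib import Path

def load_tts_pairs(metadata_path: str, wav_dir: str):
    """Yield (wav_path, transcription) pairs from a pipe-separated metadata file."""
    with open(metadata_path, encoding="utf-8", newline="") as f:
        for audio_id, text in csv.reader(f, delimiter="|"):
            yield Path(wav_dir) / f"{audio_id}.wav", text.strip()

# Example usage (paths are placeholders):
# for wav_path, text in load_tts_pairs("metadata.csv", "wavs"):
#     print(wav_path, "->", text)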

Wolof /ˈwoʊlɒf/ is a language of Senegal, the Gambia, and Mauritania, and the native language of the Wolof people. Like the neighbouring languages Serer and Fula, it belongs to the Senegambian branch of the Niger–Congo language family. Unlike most other languages of the Niger–Congo family, Wolof is not a tonal language.[1]

Wolof is spoken by more than 10 million people and about 40 percent (approximately 5 million people) of Senegal’s population speak Wolof as their native language. Increased mobility, and especially the growth of the capital Dakar, created the need for a common language.

Today, an additional 40 percent of the population speak Wolof as a second or acquired language. In the whole region from Dakar to Saint-Louis, and also west and southwest of Kaolack, Wolof is spoken by the vast majority of the people.

Typically when various ethnic groups in Senegal come together in cities and towns, they speak Wolof. It is therefore spoken in almost every regional and departmental capital in Senegal.[1]

Goal and benefits

Our goal here is to provide researchers and companies with a dataset they can use to experiment with and build automatic systems that convert text to audio. This type of system can help people with reading difficulties (e.g. blind or illiterate people) access information and interact with other people or with new technologies (e.g. web and mobile applications). There is also the fact that Wolof is rarely written correctly, by native and non-native speakers alike, which makes Wolof text difficult for them to read; with a TTS system, they could easily understand Wolof text and, in the process, learn how Wolof should be written.

Text data collection and preparation

The text collection is the phase of creating clean and representative text that can be used to do the recordings.

Sources: Unlike widely used languages such as English or French, Wolof texts are very scarce on the Internet and in digital form in general, so we had to put in extra effort just to gather the raw text data.

The text used to build this dataset was collected from different sources, such as Wolof news websites (sabaal and dekufi), Wikipedia, and many texts provided by the Wolof expert on our team.
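As a rough illustration of the web part of the collection, the sketch below pulls visible paragraph text from a page with requests and BeautifulSoup. The URL is a placeholder, not the actual address of any of our sources, and each source needed its own adjustments in practice.

# Hedged sketch: fetch raw paragraph text from a news page.
import requests
from bs4 import BeautifulSoup

def fetch_paragraphs(url: str) -> list[str]:
    """Download a page and return the visible paragraph texts."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [p.get_text(" ", strip=True) for p in soup.find_all("p")]

# paragraphs = fetch_paragraphs("https://example.com/wolof-article")  # placeholder URL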

Cleaning: Cleaning the text was the most challenging and time-consuming task of this work. We had to remove non-Wolof sentences, unused symbols or words, and overly long sentences or paragraphs. Part of the data was also cleaned manually.
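The automatic part of that filtering could look like the sketch below; the allowed character set and the length threshold are illustrative values, not the exact rules we applied.

# Hedged sketch of the automatic filtering step before the manual pass.
import re

# Rough "acceptable characters" pattern (assumption, not our exact rule set).
ALLOWED = re.compile(r"^[A-Za-zÀ-ÿŋŊ'’\s.,;:!?\-]+$")
MAX_WORDS = 20  # illustrative cap on sentence length

def keep_sentence(sentence: str) -> bool:
    """Keep sentences that are non-empty, not too long, and use only allowed characters."""
    words = sentence.split()
    return 0 < len(words) <= MAX_WORDS and bool(ALLOWED.match(sentence))

def clean_corpus(sentences):
    """Return the stripped sentences that pass the automatic filter."""
    return [s.strip() for s in sentences if keep_sentence(s.strip())]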

With the help of our Wolof expert, we also developed an algorithm to convert Wolof text to Wolof phonemes. This part is crucial, because we needed to verify that the text corpus covers the Wolof phonemes correctly; if some phonemes are not covered, the resulting TTS system will have difficulty converting them to their corresponding sounds. After some iterations, we were able to choose sentences that cover all Wolof phonemes with a good distribution with respect to phoneme frequencies, as sketched below.
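Here is a hedged sketch of that coverage check. The to_phonemes function is only a stand-in (it simply splits into characters); our real conversion uses hand-written rules designed with the Wolof expert, which are not reproduced here.

# Hedged sketch: phoneme frequency check and greedy sentence selection for coverage.
from collections import Counter

def to_phonemes(sentence: str) -> list[str]:
    """Placeholder grapheme-to-phoneme step: split into letters.
    The real conversion uses expert-written rules (digraphs, geminates, ...)."""
    return [c for c in sentence.lower() if c.isalpha()]

def phoneme_distribution(sentences) -> Counter:
    """Count how often each (placeholder) phoneme appears in the corpus."""
    counts = Counter()
    for s in sentences:
        counts.update(to_phonemes(s))
    return counts

def select_covering_sentences(sentences, target_phonemes):
    """Greedily keep sentences until every target phoneme is covered at least once."""
    uncovered, chosen = set(target_phonemes), []
    for s in sentences:
        new = uncovered & set(to_phonemes(s))
        if new:
            chosen.append(s)
            uncovered -= new
        if not uncovered:
            break
    return chosen, uncovered  # uncovered should be empty if coverage is complete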

Where are we: The text collection is complete, with more than 30 000 sentences cleaned and ready for recording.

Audio data collection

The audio collection is the phase of creating recordings that correspond to the already cleaned text.

The human part: The audio collection is done by two actors, a male and a female voice. Each one needs to record at least 20 000 of the 30 000 cleaned sentences. We had some issues starting the recording because of the time spent on text cleaning, and we also had delays in obtaining the microphone and other material resources needed for the recording. The other challenge was building the web platform that the actors use to do the recording, which we discuss in the next section.

The platform for recording: We forked and modified Common Voice[2], an open source project from Mozilla, so that our actors can easily do the recordings using just their web browser. Data is collected and automatically sent to an S3 bucket after every 5 recordings.
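The platform itself is our fork of Common Voice, which is a web application, so the Python sketch below only illustrates the batching behaviour (buffer finished clips and push them to S3 every 5 recordings); the bucket and key names are placeholders, not our actual configuration.

# Hedged sketch of the "upload every 5 recordings" behaviour using boto3.
import boto3

s3 = boto3.client("s3")
BATCH_SIZE = 5
_pending: list[str] = []  # local paths of clips waiting to be uploaded

def on_recording_saved(local_path: str, bucket: str = "wolof-tts-recordings"):
    """Queue a finished clip and flush the queue to S3 once it reaches 5 items."""
    _pending.append(local_path)
    if len(_pending) >= BATCH_SIZE:
        for path in _pending:
            s3.upload_file(path, bucket, f"clips/{path.rsplit('/', 1)[-1]}")
        _pending.clear()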

Where are we: We had a big delay in collecting the recordings (the recording started just three weeks ago); as of writing this article (2020/10/20), we have collected 4 000 out of the 40 000 recordings. We hope that the recording rate will increase once the actors are more used to the process, and we expect to collect at least 1 400 recordings per week from the two actors.

Once the recording is done, we will also verify and clean the audio dataset, for example by trimming silences, checking durations, and so on. We will then build a baseline model with this dataset and make it available to the community.
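As an illustration of those planned checks, the sketch below trims leading and trailing silence and flags clips whose duration falls outside a plausible range; the thresholds and the top_db value are illustrative, not the final values we will use.

# Hedged sketch: trim silence and apply a duration check to each clip.
import librosa
import soundfile as sf

MIN_SEC, MAX_SEC = 1.0, 15.0  # illustrative duration bounds

def verify_and_trim(in_path: str, out_path: str) -> bool:
    """Return True if the clip passes the duration check; write the trimmed audio."""
    y, sr = librosa.load(in_path, sr=None)
    y_trimmed, _ = librosa.effects.trim(y, top_db=30)  # drop leading/trailing silence
    duration = len(y_trimmed) / sr
    if not (MIN_SEC <= duration <= MAX_SEC):
        return False
    sf.write(out_path, y_trimmed, sr)
    return True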

Conclusion

We are really grateful to AI4D for giving us the opportunity and the means to build and collect a Wolof TTS dataset. We hope that this kind of initiative will become more frequent, helping to create more and more datasets for our local languages, so that new systems and models can be built with them, increasing access to new technologies and helping more people get information in their own language.

References

[1] https://en.wikipedia.org/wiki/Wolof_language

[2] https://github.com/mozilla/common-voice