Abstract

To initiate the collection and construction of a medicinal plant database, on top of which a search engine and AI-based image recognition for plants will enable scalable search of preserved knowledge.

Objectives

Medicinal plant database construction: collect an image dataset and enumerate, curate and associate labelling metadata about pharmaceutical virtues. Building a comprehensive database requires teams to spread out and investigate various sources of information (e.g., existing literature) as well as well-known traditional healers, in order to collect information and label it precisely. The collected data is then merged and curated.

Develop and operationalize a detection and search engine for medicinal plants: leverage the built database to implement Artificial Intelligence technology for recognizing plants from leaf photographs, and build an “Information Retrieval”-based search engine on top of the natural language descriptions in the database to enable scalable search of preserved knowledge.
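
As one concrete illustration of the “Information Retrieval” component, the sketch below indexes free-text plant descriptions with TF-IDF weights and ranks them by cosine similarity against a query. The example records, field names and the use of scikit-learn are assumptions made for illustration only; the actual database schema and retrieval stack remain to be defined.

```python
# Minimal IR sketch over hypothetical medicinal plant descriptions
# (records and fields are illustrative, not the project's real schema).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

plants = [
    {"name": "Plant A", "description": "bark decoction used against fever and malaria"},
    {"name": "Plant B", "description": "leaves applied as a poultice for wounds and skin infections"},
    {"name": "Plant C", "description": "root infusion taken for stomach pain and poor digestion"},
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform([p["description"] for p in plants])

def search(query, top_k=2):
    """Rank plant records by cosine similarity between query and description."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    best = scores.argsort()[::-1][:top_k]
    return [(plants[i]["name"], round(float(scores[i]), 3)) for i in best]

print(search("remedy for fever"))  # highest-scoring records first
```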

Long-term vision

As Amadou Hampâté Bâ said while addressing UNESCO members in 1960, “in Africa, when an old man dies, it’s a library burning”. This is particularly true today, when we debate the virtues of plants for disease therapy. A substantial amount of knowledge is being lost due to a lack of proper preservation in digital, searchable and reusable databases. With this project, we aim to make the preservation of ethnopharmacological knowledge in the Sahel an ultimate target. To that end, we propose to initiate the construction of a medicinal plant database, on top of which a search engine and AI-based image recognition for plants could help serve a large panel of users.

Abstract

To develop an Arabic text to Moroccan Sign Language (MSL) translation product by building two data corpora: Arabic texts and their MSL translations. The collected corpora will be used to train deep learning models that analyze Arabic words and sentences and map them to MSL encodings.

Introduction

Over 5% of the world’s population (466 million people) has disabling hearing loss. 4 million are children [1]. They can be hard of hearing or deaf. Hard of hearing people usually communicate through spoken language and can benefit from assistive devices like cochlear implants. Deaf people mostly have profound hearing loss, which implies very little or no hearing.

The main impact of deafness is on the individual’s ability to communicate with others, in addition to the emotional feelings of loneliness and isolation in society. Consequently, deaf people cannot equally access public services, especially education and health, and do not have equal rights to participate in active and democratic life. This has a negative impact on their lives and on the lives of the people around them. Around the world, deaf people often communicate using a sign language, with gestures of both hands and facial expressions.

Sign languages are full-fledged natural languages with their own grammar and lexicon. However, they are not universal, although they have striking similarities. In Morocco, deaf children receive very little educational assistance. For many years, they were learning the local variety of sign language derived from Arabic, French, and American Sign Languages [2]. In April 2019, the government standardized the Moroccan Sign Language (MSL) and initiated programs to support the education of deaf children [3]. However, the teachers involved are mostly hearing, have limited command of MSL, and lack resources and tools to help deaf learners learn from written or spoken text. Schools recruit interpreters to help students understand what is being taught and said in class. Otherwise, teachers use graphics and captioned videos to teach the mappings to signs, but lack tools that translate written or spoken words and concepts into signs. This project aims to solve this particular issue.

Objectives

We propose an Arabic Speech-to-MSL translator. The translation can be divided into two parts: the speech-to-text part and the text-to-MSL part. In a previous work [4], we were interested in Arabic Speech-to-Text translation. We conducted research comparing the existing Speech-to-Text APIs, and a web application was built for this purpose [check web app here]. Our main focus in the current work is to take the results from the Speech-to-Text module and perform Text-to-MSL translation.

Up to now, there is not enough data pairing Arabic words with their translations into MSL. This is challenging because we have to bring interpreters and linguists together in order to create this initial corpus. Each word or concept in the Arabic corpus could be mapped to a time series of hand gestures and facial expressions in the MSL corpus. Our main objective is to find the best possible mapping between these two corpora. The collected data will allow us to train large deep learning models. In fact, we aim to explore existing pretrained deep learning architectures suitable for analyzing Arabic words and sentences and finding their mappings to the MSL encodings. Recurrent Neural Networks using GRU and LSTM units and their variants have proved to be among the most suitable and powerful models for dealing with sequential data [5][6]. We believe that tuning these state-of-the-art models will allow us to achieve good generalization performance.
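
Purely to illustrate the kind of sequence model referred to above, the sketch below defines a minimal GRU encoder-decoder in PyTorch that maps a sequence of Arabic token IDs to a sequence of MSL gesture IDs. The vocabulary sizes, dimensions and the framing of MSL as discrete gesture codes are assumptions for illustration; the real encodings and architecture would be chosen once the two corpora exist.

```python
# Minimal GRU encoder-decoder sketch: Arabic token IDs -> MSL gesture IDs
# (all sizes and the discrete-gesture framing are illustrative assumptions).
import torch
import torch.nn as nn

class Seq2SeqGRU(nn.Module):
    def __init__(self, src_vocab=5000, tgt_vocab=500, emb=128, hidden=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the Arabic sentence into a final hidden state.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode the MSL gesture sequence conditioned on that state
        # (teacher forcing: the gold target sequence is the decoder input).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2SeqGRU()
src = torch.randint(0, 5000, (2, 12))  # two Arabic sentences, 12 tokens each
tgt = torch.randint(0, 500, (2, 20))   # two MSL sequences, 20 gestures each
print(model(src, tgt).shape)           # torch.Size([2, 20, 500])
```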

Expected outcomes

With this work, we expect to build the MSL and Arabic text corpora, which we aim to keep free and open for public use. To this end, we will develop an interactive web application for the creation of the two corpora. The Text-to-MSL translation product will be hosted as a web and mobile application.

 

Abstract

To test the feasibility of deploying Unmanned Ground Vehicles (UGVs) for automated intelligent patrol, detection, wildlife monitoring and identification across the national parks and reserves in Kenya.

Introduction

Wildlife tourism is a significant and growing contributor to the economic and social development in the African region through revenue generation, infrastructure development and job creation. According to a recent press release by the World Travel and Tourism Council [1], travel and tourism contributed $194.2
billion (8.5% of GDP) to the African region in 2018 and supported 24.3 million jobs (6.7% of total employment). Globally, travel and tourism is a $7.6 trillion industry, and is responsible for an estimated 292 million jobs [2]. Tourism is also one of the few sectors in which female labor participation is already above parity, with women accounting for up to 70% of the workforce [2].

However, the wildlife tourism industry in Africa is being increasingly threatened by rising human population and wildlife crime. As poaching becomes more organised and livestock incursions become frequent occurrences, shortages in ranger workforce and shortcomings in technological developments in this space put thousands of species at risk of endangerment, and threaten to collapse the wildlife tourism industry and ecosystem. According to The National Wildlife Conservation Status Report, 2015 – 2017, presented by the Ministry of Tourism and Wildlife of Kenya [3], there is currently a shortage of 1038 rangers, from the required 2484 rangers in Kenyan national parks and reserves, a deficit of over 40%. With tourism in Kenya contributing a revenue of $1.5 billion in 2018 [4], and with the wildlife conservancies in Kenya supporting over 700,000 community livelihoods [3], the recession of the wildlife tourism industry could have major adverse economic and social impacts on the country. It is thus critical that sustainable solutions are reached to save the wildlife tourism industry, and further research is fuelled in this area.

The national parks, reserves and conservancies in Kenya span thousands of square kilometers, making it difficult for rangers to track down all possible poaching activities. Poachers normally use guns, snares, and poison to capture wild animals. By collecting real-world data on poaching activities, better learning of adversarial behavior is achieved and optimal strategies for anti-poaching patrols can be employed [5]. According to [5], much of the security games research lacks actual adversary data and does not consider heterogeneity among large populations of adversaries, which makes it difficult to build accurate models of adversary behavior. Other problems inherent in past predictive models are that they neglect the uncertainty in crime data, use coarse-grained crime analysis, and propose time-consuming techniques that cannot be directly integrated into low-resource outposts [6].

To address shortages in ranger workforce, carry out monitoring activities more effectively, and detect criminal or endangering activities with greater precision, we propose the development of an open dataset containing georeferenced data on poaching incidents from the past 10 years as well as historical data on tagged elephant and rhino movements. We aim to observe correlations between the data using machine learning models and effectively model poaching trends and behavioural patterns to predict the location of the next poaching attack and suggest better patrol routes. The study will be carried out over a period of 4 months at Nairobi National Park in Kenya which covers a total area of 117 square kilometers and hosts many of the endangered wildlife species listed in the IUCN Red List of Threatened Species, such as the African Elephant and Black Rhinoceros.
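
As a rough illustration of how such georeferenced data might feed a predictive model, the sketch below discretises a park into grid cells and trains a classifier to flag high-risk cells from simple per-cell features. The features, the synthetic labels and the choice of a random forest are assumptions for illustration only; they are not the hybrid behavioural model proposed in this study.

```python
# Illustrative sketch: flag high-risk grid cells from per-cell features.
# All data here is synthetic; real features would come from the proposed
# georeferenced poaching-incident and animal-movement datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 500
X = np.column_stack([
    rng.uniform(0, 20, n_cells),   # distance to nearest road (km)
    rng.uniform(0, 30, n_cells),   # distance to nearest ranger post (km)
    rng.uniform(0, 1, n_cells),    # average elephant/rhino presence index
    rng.poisson(1.0, n_cells),     # past incident count in the cell
])
# Synthetic label: remote cells with high animal presence and incident
# history are marked high-risk, purely to make the example runnable.
risk = 0.05 * X[:, 1] + 2.0 * X[:, 2] + 0.5 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n_cells) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```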

Objectives

  1. To generate a real world dataset that maps poaching activities within the park.
  2. To develop a hybrid model that predicts the behavior of poachers by capturing their heterogeneity.
  3. To improve the accuracy of the hybrid model by creating novel algorithms for determining poaching activities and hotspots.
  4. To investigate the computation challenges faced when learning the behavioral model of poachers.

Vision

Our future vision is to test the feasibility of deploying Unmanned Ground Vehicles (UGVs) for automated intelligent patrol and wildlife monitoring across the national parks and reserves in Kenya. In addition to carrying out automated patrols using the models learned in this study, the UGVs would be fitted with an array of cameras and sensors enabling them to navigate autonomously within the parks and to run multiple deep learning and computer vision algorithms that carry out numerous monitoring activities, such as detection of poaching, livestock incursions, human-wildlife conflict and distressed wildlife, as well as species identification.

References

[1] “African tourism sector booming – second-fastest growth rate in the world”, WTTC press release, Mar. 13, 2019. Accessed on Jul. 11, 2019. [Online]. Available: https://www.wttc.org/about/media-centre/press-releases/press-releases/2019/african-tourism-sector-booming-second-fastest-growth-rate-in-the-world/

[2] “Supporting Sustainable Livelihoods through Wildlife Tourism”, World Bank Group, 2018.

[3] “The National Wildlife Conservation Status Report, 2015 – 2017”, pp. 75, 131, Ministry of Tourism and Wildlife, Kenya, 2017.

[4] “Tourism Sector Performance Report – 2018”, Hon. Najib Balala, 2018.

[5] R. Yang, B. Ford, M. Tambe, and A. Lemieux, “Adaptive resource allocation for wildlife protection against illegal poachers,” in Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, May 2014, pp. 453-460.

[6] S. Gholami, S. McCarthy, B. Dilkina, A. Plumptre, M. Tambe, et al., “Adversary models account for imperfect crime data: Forecasting and planning against real-world poachers,” in Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, July 2018, pp. 823-831.

Abstract

To determine the effectiveness of Long Short-Term Memory networks in predicting pregnant mothers at high risk of developing preeclampsia, and the effectiveness of preeclampsia prophylaxis.

Background

The Sustainable Development Goal (SDG) 3 aims to reduce the global maternal mortality ratio to less than 70 per 100,000 live births. These deaths are caused by a number of conditions experienced during pregnancy and childbirth. Preeclampsia has adverse effects on maternal health, especially in low- and middle-income countries. Many challenges persist in the prediction, prevention and management of preeclampsia. Prophylactic measures such as low-dose aspirin and calcium supplementation have been used in Western countries; however, more evidence is needed before they can be used in developing countries. Current management consists of timely diagnosis, proper management, timely delivery and good follow-up after birth. This study therefore seeks to explore the use of wearable devices to continuously measure blood pressure in pregnant mothers. The obtained blood pressure data will be used to predict future maternal blood pressures using Long Short-Term Memory (LSTM) recurrent neural networks on mobile devices. Those whose blood pressures are predicted to be high, and who therefore risk developing preeclampsia, will be put into two groups: one will receive the usual care in the high-risk clinic, while the second group will be supplemented with low-dose aspirin and calcium from the second trimester. It is expected that the prediction will serve to identify those at risk early so that management can be instituted immediately, and that those supplemented with low-dose aspirin and calcium will not develop preeclampsia. Additionally, the data collected will be valuable for future studies in the area of preeclampsia prediction using machine learning.

Introduction

Preeclampsia is a pregnancy complication characterized by persistent high blood pressure. It usually begins after 20 weeks of pregnancy in women whose blood pressure (BP) has been normal. If left untreated, it can progress to eclampsia, which is often fatal to both mother and baby (Macdonald-Wallis et al., 2015). Preeclampsia is often diagnosed when a mother goes to the health care facility for a routine check where a BP measurement is taken. The first sign of preeclampsia is a BP reading exceeding 140/90 on two or more occasions, at least four hours apart, at 20 or more weeks’ gestation. Most pregnant mothers in Low- and Middle-Income Countries do not have personal BP machines to take regular BP readings, so they depend on BP readings during antenatal clinic visits, of which there are 4-5 for the entire pregnancy. Early detection of preeclampsia is often missed during these visits because the BP measurement is typically taken only once unless otherwise indicated during the visit.

The detection and management of preeclampsia in out-of-clinic settings has, however, become much easier in the recent past through the development of smart blood pressure monitors. These devices, now readily available on the market, use a variety of non-intrusive methods: a cuff that inflates slightly to measure systolic and diastolic pressure via the oscillometric method, as in the Omron smart watch (Omron 2019), or a combination of optical sensors and clinically validated software algorithms, as in a number of smart watches such as the one developed by Aktiia (2018) and Bpro by MedTach Inc (2018). These devices are not only able to take readings and generate alarms but are also capable of transmitting the data to other devices such as mobile phones for further analysis using techniques such as machine learning.

The use of machine learning techniques for blood pressure prediction is steadily growing, using techniques such as Artificial Neural Networks (Hao et al, 2015) as well as classification and regression trees (Zhang et al, 2018). Additionally, Long Short-Term Memory (LSTM) networks are increasingly being considered in studies such as the ones by Su et al (2017), Zhao et al (2019), Lo et al (2017) and Radha et al (2019). A majority of these techniques, current studies and solutions are developed and deployed on devices with significant computing and storage power, such as servers and supercomputers, which presents a major challenge to the potential utility of machine learning for individuals who increasingly prefer to access services, content and solutions on their mobile devices.
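
To make the modelling idea concrete, the sketch below shows a minimal Keras LSTM that takes a sliding window of past systolic/diastolic readings and predicts the next reading. The window length, network size and synthetic data are assumptions for illustration; the actual model would be trained on wearable-device readings and compressed for on-device inference.

```python
# Minimal LSTM sketch: predict the next blood-pressure reading from a
# window of past readings (synthetic data; all sizes are illustrative).
import numpy as np
from tensorflow import keras

window, n_features = 12, 2  # 12 past (systolic, diastolic) readings
X = np.random.normal([120, 80], [10, 8], size=(1000, window, n_features))
y = X[:, -1, :] + np.random.normal(0, 2, size=(1000, n_features))  # "next" reading

model = keras.Sequential([
    keras.layers.Input(shape=(window, n_features)),
    keras.layers.LSTM(32),
    keras.layers.Dense(n_features),  # predicted (systolic, diastolic)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print("predicted next reading:", model.predict(X[:1], verbose=0)[0])
```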

Preeclampsia remains a significant public health problem for both developed and developing countries, contributing to maternal morbidity and mortality globally (McClure, Saleem, Pasha, & Goldenberg, 2009; Shah et al., 2009); however, the impact of the disease is felt more severely in developing countries (Prakash et al., 2010), where, unlike other causes of mortality, medical intervention may be ineffective due to late presentation (Jido & Yakasai, 2013). The problem is compounded by the continuing mystery of the aetiology and the unpredictable nature of the disease (Jido & Yakasai, 2013). In developed countries, supplementation with low-dose aspirin and calcium is used as prophylaxis for preeclampsia (Anderson & Schmella, 2017); however, further evidence is needed before it can be adopted in developing countries such as Kenya. The aim of this study is twofold: to determine the effectiveness of Long Short-Term Memory networks in predicting those at high risk of developing preeclampsia, and the effectiveness of low-dose aspirin and calcium in the prophylaxis of preeclampsia in those at risk.

Objectives

  1. To determine the effectiveness of Long Short-Term Memory networks in the prediction of those at high risk of developing preeclampsia.
  2. To determine the effectiveness of low-dose aspirin and calcium supplementation in the prophylaxis of preeclampsia in those at risk.

Abstract

To develop a methodology for a semi-automatic classification of judgments disseminated by the High Court Library of the Malawi Judiciary with the purpose of enabling ‘intelligent searching’ within this body of knowledge.

Introduction

Challenges of Legal Research in Malawi

Malawi faces a serious problem when it comes to law reporting [5]. The Official Law Reports have been discontinued; the African Law Reports Malawi Series and the Malawi Law Reports cover only the period 1923 – 1993. The MalawiLII website [9], the Malawi section of the Southern Africa Legal Information Institute (SAFLII), is an online resource that contains court judgments issued since 1993 and some statutory laws. However, it is not complete and not easily searchable. Paid services such as Blackhall’s Laws of Malawi contain all the statutory laws (Principal and Subsidiary Legislation) of Malawi in force, available from one source on the Internet in an updated and consolidated form. However, this is only accessible to paying members and comes at a substantial cost. The High Court of Malawi maintains a section with printed judgments organised in folders by year and court, but the indexing used is very coarse. The High Court Library also has a paid email subscription service, through which members receive scanned images of judgments; however, these are not in a searchable form. Commentaries and digests are very rare, and most areas of the law, e.g., criminal law, do not have any such publications. There are also private libraries that may be maintained by various law firms.

Problem Statement

In Malawi, legal research faces significant challenges in accessing and searching for relevant information. On one hand are the issues of accessibility and availability, and the scattered nature of the official reports. On the other hand are the challenges arising from the fact that the current document structure of Malawi legal texts, e.g., court judgments, does not support a system of citation that makes it possible to link statutory law, case law and secondary law, or to search by “legal terms” and their specific interpretations. This research tackles the specific problem of classifying court judgments disseminated by the High Court Library. The court judgments disseminated via the Malawi High Court Library are not classified according to useful categories, such as courts, topics of the law, or the statutes they refer to. They do not have an index, and the structure of the documents is not uniform. The internal structure of judgments impacts the efficiency of a search [2,4,6].

Objectives

The aim of this research is to develop a methodology for a semi-automatic
classification of judgments disseminated by the High Court Library of the Malawi Judiciary with
the purpose of enabling ‘intelligent searching’ within this body of knowledge. Specifically, we have
the following sub-objectives.

  1. To test the efficiency of the search tool currently available on the MalawiLII website.
  2. To build an automatic tool for identifying and extracting the general structure of court judgments in Malawi.
  3. To build a semi-automatic tool for extracting key meta-data from court judgments: type of case, involved parties, key legal terms, and laws and statutes referred to in the judgment (a minimal sketch of this extraction is given below).
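
A minimal sketch of what the meta-data extraction in objective 3 could look like is given below: simple regular expressions over a hypothetical judgment header pull out the case number, the parties and the statutes cited. The header layout and patterns are assumptions for illustration; a real tool would be driven by the document structure identified in objective 2.

```python
# Illustrative regular-expression extraction of meta-data from a
# hypothetical judgment header (the layout shown here is assumed).
import re

judgment = """IN THE HIGH COURT OF MALAWI
CIVIL CAUSE NO. 123 OF 2018
BETWEEN
JOHN BANDA ............................ PLAINTIFF
AND
CITY ASSEMBLY ......................... DEFENDANT
This matter is brought under section 46 of the Employment Act and
Order 10 of the Courts Act.
"""

case_number = re.search(r"(CIVIL|CRIMINAL)\s+CAUSE\s+NO\.\s*\d+\s+OF\s+\d{4}", judgment)
parties = re.findall(
    r"^([A-Z][A-Z ]+?)\s*\.{3,}\s*(PLAINTIFF|DEFENDANT|APPELLANT|RESPONDENT)",
    judgment, flags=re.MULTILINE)
statutes = re.findall(r"\b(?:the\s+)?([A-Z][A-Za-z ]+ Act)\b", judgment)

print("case number:", case_number.group(0) if case_number else None)
print("parties:", parties)
print("statutes cited:", sorted(set(statutes)))
```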

References

[1] V. R. Benjamins, P. Casanovas, J. Breuker, and A. Gangemi. Law and the semantic web, an
introduction. In Law and the Semantic Web, pages 1–17. Springer, 2005.

[2] Atefeh Farzindar and Guy Lapalme. ‘LetSum, an automatic Legal Text Summarizing system’ in T. Gordon (ed.), Legal Knowledge and Information Systems. Jurix 2004: The Seventeenth Annual Conference. Amsterdam: IOS Press, 2004, pp. 11-18.

[3] Heinrich H. Dzinyemba, ‘Subject Index of Cases Unreported: Civil and Criminal Cases 1997 – 2003’, Malawi High Court Manuscript.

[4] H. Igari, A. Shimazu, and K. Ochimizu. Document structure analysis with syntactic model and parsers: Application to legal judgments. In JSAI International Symposium on A.I., pages 126–140, 2011.

[5] Judge Kapindu’s description of the Malawi Legal System notes in 2014 http://www.nyulawglobal.org/globalex/Malawi1.html#_edn70

[6] Marios Koniaris George Papastefanatos Yannis Vassiliou, Towards Automatic Structuring and Semantic Indexing of Legal Documents, PCI ’16, November 10 – 12, 2016, Patras, Greece.

[7] Q. Lu, J. G. Conrad, K. Al-Kofahi, and W. Keenan. Legal document clustering with built-in topic segmentation. In Proceedings of CIKM ’11, pages 383–392, 2011.

[8] Daniel Locke, G. Zuccon, & H. Scells. Automatic query generation from legal texts for case law retrieval. In 13th Asia Information Retrieval Societies Conference (AIRS 2017), 2017, Jeju, Korea.

[9] MalawiLII Website

[10] Xiaojun Wan and Jianguo Xiao. Single Document Keyphrase Extraction Using Neighborhood Knowledge, Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, 2008.

[11] Adam Wyner, Raquel Mochales-Palau, Marie-Francine Moens, and David Milward, Approaches to Text Mining Arguments from Legal Cases, JURIX 2008.

Abstract

Pest monitoring using a data-driven computer vision technique to direct extension officer support services across sub-Saharan Africa, within a real-time pest damage assessment and recommendation support system for small-scale tomato farmers.

Problem situation

Agriculture is a vital tool for sustainable development in Africa. A high-yielding crop such as tomato, with high economic returns, can greatly increase smallholder farmers’ income when well managed. Despite the socio-economic importance of tomato, which provides market opportunities and food and nutritional security for smallholder growers, production is constrained by the recent invasion of the tomato pest Tuta absoluta, which devastates tomato yields, causing losses of up to 100% and jeopardizing the livelihoods of millions of growers in sub-Saharan Africa [1]. This puts small-scale farmers at risk of losing income. Tuta absoluta has swept across Africa, leading to the declaration of a state of emergency [2][3] in some of the continent’s main tomato-producing areas. Furthermore, there is a lack of adequate capacity to detect the pest and implement management measures. A shift from a reactive to a more proactive intervention, based on the internationally recognized three-stage approach of prevention, early detection and control, needs to be adopted. This work focuses on early detection, a novel approach in initiatives to strengthen phytosanitary capacity and systems to help address the Tuta absoluta devastation.

Objective

This work will radically transform Tuta absoluta pest monitoring by using a data-driven computer vision technique to direct extension officer support services [4] across sub-Saharan Africa, within a real-time pest damage assessment and recommendation support system for small-scale tomato farmers. To the best of our knowledge, it will be the first alternative approach using computer vision to help alleviate the current alarming situation caused by the invasive tomato pest Tuta absoluta, by providing solutions that could help in early management and control. We aim to increase the effectiveness of limited farm-level extension support by leveraging emerging technologies [5] and targeting extension support to affected areas (based on damage status maps) using our developed models based on quantified images of pest damage.

Justification

Pests and diseases are a major threat to smallholder farmers [6]; however, Tuta absoluta control still relies on slow, inefficient manual identification and, in a few cases, on the support of a limited number of agricultural extension officers [7]. The application of computer-vision-based image recognition, together with recent improvements in telecommunication infrastructure and information technology, will provide new tools for the early identification and quantification of Tuta absoluta damage using state-of-the-art computer vision [8][9][10], enabling more targeted phytosanitary control measures against Tuta absoluta.

Preliminary works and Expected outcomes

Our hypothesis is that current emerging technology can be integrated into a decision platform for tomato pest management and can provide diagnostics in real time with minimal human capacity training. We also plan to extend and integrate alternative support, such as the recent discovery of a promising pesticide by our fellow team member, Ms. Never. We also anticipate that advice from a limited extension service can be delivered to large numbers of smallholder farmers. We fully expect the proposed work to succeed. To this end, the first steps of this work have already been completed over the last 12 months through field work and in-house experiments to collect data using cameras and drones in affected areas of Arusha and Morogoro, Tanzania. We have taken and labeled over 4,000 images of tomatoes as well as multispectral images (RGB, infra-red and red edge, allowing for the collection of vegetation indices such as NDVI) and have trained a convolutional neural network model. The model can classify Tuta absoluta damage cases. This work was also a winner at the Computer Vision for Global Challenges (CV4GC) workshop, presented at the Conference on Computer Vision and Pattern Recognition (CVPR). The multidisciplinary research team and links to major key players such as Sokoine University of Agriculture, NM-AIST and agricultural extension officers have supported the initial work. A combination of different technical skills and backgrounds could be the best approach to tackling the apparent state of emergency of the Tuta absoluta invasion. Since we expect our work to have major impact, we will test how Tuta absoluta pest damage map assessment could increase yield in tomato value chains in Tanzania and sub-Saharan Africa.
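
As a hedged illustration of the kind of convolutional model mentioned above, the sketch below fine-tunes a pretrained MobileNetV2 backbone to classify leaf images into damage categories. The class names, image size and directory layout are assumptions for illustration; they do not describe the actual model trained on the 4,000-image dataset.

```python
# Illustrative transfer-learning sketch for classifying Tuta absoluta
# damage from leaf images (class count and data layout are assumptions).
from tensorflow import keras

num_classes = 3  # e.g. healthy / low damage / high damage (hypothetical)

base = keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                      include_top=False, weights="imagenet")
base.trainable = False  # train only the new classification head at first

model = keras.Sequential([
    keras.layers.Input(shape=(224, 224, 3)),
    keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of labelled images, one sub-folder per class.
train_ds = keras.utils.image_dataset_from_directory(
    "tomato_damage/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```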

References

[1] Z. Never, A. N. Patrick, C. Musa, and M. Ernest, “Tomato Leafminer, Tuta absoluta (Meyrick 1917), an emerging agricultural pest in Sub-Saharan Africa: Current and prospective management strategies,”African J. Agric. Res., vol. 12, no. 6, pp. 389–396, 2017.

[2] Nigeria’s Kaduna state declares ‘tomato emergency’ [Online] Available https://www.bbc.com/news/world-africa-36369015 Accessed: 30th July, 2018.

[3] Invasive Africa: Tuta absoluta. [Online] Available https://www.youtube.com/watch?v=_dubR2qoW8k Accessed: 24th September, 2018.

[4] T. J. Maginga, T. Nordey, and M. Ally, “Extension System for Improving the Management of Vegetable Cropping Systems,” vol. 3, no. 4, 2018.

[5] Zahedi, Seyed Reza, and Seyed Morteza Zahedi. “Role of information and communication technologies in modern agriculture.” International Journal of Agriculture and Crop Sciences 4, no. 23 (2012): 1725- 1728.

[6] V. Mutayoba, T. Mwalimu, and N. Memorial, “Assessing tomato farming and marketing among smallholders in high potential agricultural areas of Tanzania,” no. July, pp. 0–17, 2017.

[7] R. Y. A. Guimapi, S. A. Mohamed, G. O. Okeyo, F. T. Ndjomatchoua, S. Ekesi, and H. E. Z. Tonnang, “Modeling the risk of invasion and spread of Tuta absoluta in Africa,” Ecol. Complex., vol. 28, pp. 77–93, 2016.

[8] Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” 2015.
[9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,”
pp. 1–14, 2015.
[10] H. Kaiming, Z. Xiangyu, R. Shaoqing, and S. Jian, “Deep Residual Learning for Image Recognition.”
[11] H. Peng, Y. Bayram, L. Shaltiel-Harpaz, F. Sohrabi, A. Saji, U.T. Esenali, A. Jalilov . “Tuta absoluta
continues to disperse in Asia: damage, ongoing management and future challenges.” Journal of Pest
Science(2018): 1-11.

The network will be coordinated by the Knowledge 4 All Foundation and IDRC in close collaboration with the UNESCO Chair in AI.

Background

Artificial Intelligence (AI) has the potential to alter our world and to advance human development, with dramatic implications across every sector of society. According to PwC, AI could provide a $15.7 trillion boost to the world’s GDP by 2030, and it is already bringing major advancements to the way we learn, conduct business and monitor our health.

There is a consensus that AI is changing our world, that it is here to stay and that it offers a vital commercial opportunity in every sector. However, the future of AI is uncertain, especially in Africa. While these technologies have accomplished impressive feats, including diagnosing disease and making self-driving cars a reality, other profound, and perhaps darker, implications have emerged.

Broad AI for Development initiative

At the same time, the indiscriminate dissemination of AI applications could also exacerbate inequalities. As AI applications spread rapidly across sectors and around the globe, more research is required to better understand how AI applications impact human development.

To enhance the economic and social prospects of people in the Global South, it is critical to support knowledge, skills development, and the institutions to responsibly implement and govern these technologies.

To this end, we will invest and design a range of AI for development (AI4D) initiatives, focusing on innovations, foundations and governance. These initiatives will support relevant community building, research, develop AI applications that are inclusive, ethical, and rights-based, and strengthen and create appropriate capacity building programs.

Activities of the African AI network

The main purpose of the AI for Development (AI4D) Africa project is to support a network of excellence in AI in sub-Saharan Africa to strengthen and develop community scientific and technological excellence in a range of AI-related issue areas. Specifically, the project will run 4 activities:

  1. Develop a network of institutions and individuals working on and researching AI from across sub-Saharan Africa, via workshops and consultations
  2. Deliver an AI research agenda with a focus on ethical, legal and social aspects of AI research
  3. Generate an AI capacity building agenda via a survey of universities
  4. Issue a call for at least ten multidisciplinary innovation projects within and outside the network, exploring local frontiers of research in AI

Additionally, the project will consider effective capacity building approaches based on identified policy and educational frameworks within the target countries.

Building on existing work

The project will draw from the recent K4A, IDRC and UNESCO supported mapping and PASCAL2 Network to facilitate a bottom-up network/community of researchers who will investigate and recommend how the network/community should shape its research agenda and actions.

Connection to other networks

The project will seek to align with Humane AI, a European FET Flagship Project Proposal for new ethical, trustworthy AI technologies to enhance human capabilities and empower citizens and society. The project includes three major European AI communities: ELLIS, CLAIRE and PASCAL.

Timeline

AI4D Africa will start in December 2018, run for 18 months, and result in the establishment of the network, a research “roadmap”, a portfolio of innovation projects, and recommendations for capacity building for ethical and locally relevant AI research across the African continent.

 

The project will contribute to Open Educational Resources in a twofold manner: firstly, by creating innovative practices that drive forward the use of ICTs for OER-supported teaching, learning and online education using VideoLectures.Net, K4All, Opencast and OCWC; and secondly, by applying methods from the realm of “big data” and AI techniques to analyse emerging trends in learning outcomes and in the creation and dissemination of OERs.

The project focuses on OERs at national/regional and global level, in line with the set objectives. The project will offer AI technologies, evidence and guidance for OER research methods ranging from research, use cases and deployment to exploration, exploitation and operability, as these may apply to all educational sectors.

This will be done based on implementation, impact and creation analysis of OER initiatives pursued by other UNESCO OER Chairs. This project will create an OER research agenda and technology roadmap.

The Chair is Mitja Jermol, who is also a trustee of the Knowledge for All Foundation and head of the Centre for Knowledge Transfer at JSI, working in the areas of e-learning and the dissemination and promotion of research results.

The highlights, activities and outputs of the UNESCO Chair on Open Technologies for Open Educational Resources and Open Learning at the Jožef Stefan Institute (Slovenia) during its first 4 years of formal activities (November 2014 – November 2018) are listed below as major accomplishments.

Relevant major research results:

  • Strategic projects 2014-2018: transLectures[1], MediaMixer[2], Xlike[3], Xlime[4], TraMOOC[5], MOVING[6]
  • Strategic projects 2018-2022 funded by the Government of Slovenia and European Commission: X5GON[7], CLEOPATRA[8], CogLo[9], DataBench[10], TheyBuyForYou[11], MicroHE[12]
  • Addition of 5000 new OER-based academic videos on VideoLectures.Net, a WSA 2009 and 2013 award-winning video library currently including content from 1105 events, 15617 authors and 21269 lectures (some 24658 videos in total).

Relevant major capacity building results:

  • 2nd World Open Educational Resources (OER) Congress in Ljubljana, Slovenia, on 18–20 September 2017, co-organized by UNESCO and the Government of Slovenia. This event marked five years since the World OER Congress was held in Paris in June 2012. It was organized by UNESCO and the Slovenian Ministry of Education, Science and Sport, in close collaboration with the Commonwealth of Learning, Creative Commons, the Slovenian National Commission for UNESCO and the Chair, with the generous support of The William and Flora Hewlett Foundation.
  • 21 satellite events at the Congress facilitated and co-funded by the Chair
  • The Chair organized in synergy with numerous partners a series of 20+ events in relevance to UNESCO’s strategic objectives, covering core UNESCO actions.

Relevant major educational result:

  • Open Education for a Better World is a half-year on-line mentoring program in which students from very different backgrounds and different parts of the world developed 14 OER projects aligned with the UN SDG agenda.

Relevant major policy results:

  • The Ljubljana OER Action Plan: The plan presents 41 recommended actions to mainstream open-licensed resources to help all Member States to build Knowledge Societies and achieve the 2030 Sustainable Development Goal 4 on “quality and lifelong education.”
  • The Ministerial Statement: The statement is endorsed by 20 Ministers and their designated representatives of Bangladesh, Barbados, Bulgaria, Czech Republic, Costa Rica, Croatia, Kiribati, Lao People’s Democratic Republic, Lithuania, Malta, Mauritius, Mauritania, Mozambique, Palestine, Romania, Serbia, Slovakia, Slovenia, South Africa and the United Arab Emirates.
  • The Dynamic Coalition on OER: Formation of a Dynamic Coalition of National Governments in OER and Open Education, to propose, construct and operate a dynamic coalition of countries devoted to researching, developing, deploying and exchanging OER and Open Education solutions, practices and policies.
  • The Slovenian Case in OER – From Commitment to Action: The national policy document is now in a website format intended to show other governments at the Congress how Slovenia is approaching the idea of opening up education. We have identified 5 major areas across all fields of education.

Relevant major technological result:

  • Global Infrastructure for OER promises to deliver the first building blocks for an open and artificial-intelligence-powered infrastructure to easily connect all global OER sites/silos and provide a digestion pipeline for understanding content, including through the use of machine translation, reasoning, recommendation, automatic curation, personalisation and aggregation of OER. It will result in technology services benefiting teachers, learners, researchers, policy makers and technologists.

Upcoming relevant results:

  • UNESCO Recommendation on Open Educational Resources (OER), leading the draft text formulation further to the adoption of Resolution 44 ‘Desirability of a standard-setting instrument on international collaboration in the field of Open Educational Resources (OER)’ at the 39th Session of the UNESCO General Conference
  • Setting up the Category 2 Centre on Artificial Intelligence Under the Auspices of UNESCO

[1] transLectures FP7 ICT Project – FP7-ICT-287755-STREP – Language technologies – website : http://translectures.eu/

[2] MediaMixer FP7 ICT Project – FP7-ICT-318101-CSA- Intelligent Information Management website : http://mediamixer.eu/

[3] Xlike FP7 ICT Project – FP7 -ICT-288342- STREP – Cross Lingual Knowledge Extraction (2012-2014) http://www.xlike.org/

[4] Xlime FP7 ICT Project – STREP, FP7-ICT-2013-10 – crossLingual crossMedia knowledge extraction (2014-2017)

[5] TraMOOC H2020 Project – Innovation Action, Translation for Massive Open Online Courses (2014-2018) http://tramooc.eu/

[6] MOVING H2020 Project – INSO-4-2015 – Training towards a society of data-savvy information professionals to enable open leadership innovation, http://moving-project.eu

[7] X5gon H2020 Project – ICT-19-2017 – X5GON: Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site Global OER Network, http://x5gon.org/

[8] CLEOPATRA – H2020-MSCA-ITN-2018 – Cross-lingual Event-centric Open Analytics Research Academy

[9] CogLo – Future COGnitive Logistics Operations through Social Internet of Things

[10] DataBench – Evidence Based Big Data Benchmarking to Improve Business Performance

[11] TheyBuyForYou – Enabling procurement data value chains for economic development, demand management, competitive markets and vendor intelligence

[12] MicroHE – Support Future Learning Excellence through Micro-Credentialing in Higher Education, Erasmus+, EACEA/41/2016 – Forward-Looking Cooperation Projects

The Chair is based at the Jožef Stefan Institute, which is a research rather than an educational institution, so in terms of education no traditional certificates are provided. However, it does deliver training programmes on three levels:

  • Course in Open Education Design for practitioners, in partnership with the University of Nova Gorica. Designed as a 5-day course in which participants become familiar with open education design processes, methods and tools in OER, based on the Open Education for a Better World mentoring programme, which delivers OER projects across the world.
  • Via the MicroHE – Support Future Learning Excellence through Micro-Credentialing in Higher Education project, whose specific objective is to examine the scope for and impact of micro-credentials – a form of short-cycle tertiary qualification – in Higher Education, and to deliver these certificates over a blockchain solution for open and online learning via the educational repository VideoLectures.Net.
  • Open Education for a Better World is a tuition-free international online mentoring programme launched by the Chair and the University of Nova Gorica to unlock the potential of open education in achieving the UN’s Sustainable Development Goals (SDGs). It is a half-year-long mentoring programme for students from all backgrounds, regions and continents with the potential and desire to employ Open Educational Resources to solve large-scale, relevant problems in today’s global landscape. The programme, with over 40 mentors, takes place over half a calendar year, starting in January and running through to July 2018. All mentoring sessions and events take place online and comprise one final end-of-year event that is designed to help students finalise their work. OE4BW mentees are expected to attend the final event.

There are currently 14 projects in the first batch of the Programme, which students and mentors are actively finalising.

The UNESCO Chair in OER at Université de Nantes aims at developing technologies and research towards these technologies for open education resources in the training of teachers.

Activities related to the Chair consist of contributing to (i) the success of Class’Code, a French national project in which blended MOOCs are provided with the goal of contributing to the training of teachers and educators in coding and computational thinking, (ii) analyzing Class’Code, documenting it and helping its openness, and (iii) disseminating the model.

The Chair is Colin de la Higuera, professor in Computer Science at University of Nantes and in the Laboratoire des Sciences du Numérique de Nantes. His research field is Machine Learning, a field in which he has collaborated with researchers from more than 20 countries over the years. He is the author of two books and many research papers.

He was the founding President of the learned society Société informatique de France, and contributed to the design and launch of the Class’Code project, whose goal is to train teachers and educators in coding and computational thinking.

Colin de la Higuera is also trustee of the Knowledge for All Foundation and currently works with European actors on indexing and accessing OER in a more helpful way.

Some years ago the teacher would prepare her new class by using a textbook, searching through her library, or the library close to her home, perhaps discussing with her close colleagues and taking advantage of her own personal experience.

The advent of the Internet has changed that, like many other things. In 2017, the teacher uses the Internet as a set of textbooks and a huge library, and her colleagues now live at the other end of the planet.

But this apparent richness is of little use if the right resources are too hard to find. How does one know that a video is useful without watching it? How can one believe that a lecture given by an unknown author is correct? How do we discover the resource we need among thousands of others? And how do we find the resource that is open and can therefore be freely redistributed to our students?

To these difficulties we can add another one: how can we build a new resource in such a way that we are allowed to share it with others? With an answer to this question the teacher ceases to be a mere consumer and becomes a creator.

These questions are economic, political, pedagogical. But also technological: where are the tools enabling the teacher to make full use of free knowledge?

Furthermore, we would like these tools and the process itself to take place with no added cost: the challenge is that building open educational resources and using them should be as simple and as cost-free as possible.

The issues raised here are backed by many institutions: Unesco, the OECD,…, and many countries who signed the Paris 2012 declaration.

The Unesco Chair on Technologies for the Training of Teachers by Open Educational Resources, supported by the Nantes University Foundation, aims at contributing to this challenge.

The Chair will build its activities upon Unesco’s international dimension and the research cooperation maintained over the years with many actors. The Chair will benefit from the favourable ecosystem one can find in Nantes on these questions and more particularly of its research teams (LS2N, CREN,…) as well as the projects these are involved in (Class’Code, Labex CominLabs, …).

The Chair’s founding ideas – Class’Code

In October 2016, University Presidents Frédérique Vidal and Olivier Laboux, Chair-holder Colin de la Higuera, and many other presidents and directors of Universities, research institutes, learnt societies, and associations representing formal and informal education in France wrote in the French newspaper Le Monde:

The task ahead for education is immense, because this new subject must be taught without there really being enough educators and teachers trained for it. Indeed, the speed at which digital technologies have changed our daily lives has far exceeded the pace of change in teacher training.

Yet it is now essential both to begin educating the children and to train the educators and teachers who, in primary and secondary schools, but also in the context of extracurricular activities, will find themselves facing these children.

Class’Code is a free, innovative blended learning program that places computer science at the heart of our educational system; the goal is to train members of the educational & informatics communities to teach young people from 8 to 14 basic programming and computer science. This includes creative programming, information coding, familiarization with networks, fun robotics, and the related impacts of technology on our society. It will help familiarise our children with the concepts of algorithms and computational thinking, and thus give them control over the digital world.

The Class’Code project is supported by both academic and industrial federations in computer science, led by the SIF (Société informatique de France) and managed by Inria (the French Research Institute in computer science and applied mathematics). Magic Makers is in charge of the pedagogy, Open Classrooms drives the production while the deployment on the territories is under the leadership of Les Petits Débrouillards.

Computational Thinking

Whereas coding, programming and computing are the popular words used to express the competences to be acquired at an early age in order not to be dependent in the information society, it is now becoming understood that the knowledge is less computer-oriented and closer to problem-solving activities. Computational thinking (in French, Pensée Informatique) is the set of processes one uses to solve a problem by representing the data as information, algorithmically solving the associated problem and returning the result through some device. It is now strongly argued that it is a paradigm to be acquired at school.

Artificial Intelligence (AI) is a key technology for the further development of the Internet and all future digital devices and applications. At this point the rapid growth of talent, projects, companies, research outputs, etc. has fuelled sectors ranging from data analytics and Web platforms to driverless vehicles and new generation of robots for our homes, hospitals, farms or factories.

The total potential is not defined as there is no comprehensive study on “all things AI”. We therefore prepared a global map of talents, players, knowledge and co-creation hot spots in AI in emerging economies.

Emerging economies Artificial Intelligence ecosystem

Multiple helix approach needed

Research and innovation, investments and business dynamics in AI are increasingly being influenced by the development of interactions among all stakeholders (“multiple helix” approach). More actors are involved in AI knowledge creation and the innovation process. Universities and research institutions collaborate with business enterprises, hospitals, local municipalities, public services providers and citizen organisations.

At the same time, research, innovation and business processes are changing with the transition towards open science, open innovation and open education, with rapid increase of funding in AI, as well as the computational capacity. The focus is increasingly on developing, testing and rolling out a large number of solutions for the benefit of citizens and local jobs.

This shows the way for new hot-spots of AI knowledge and co-creation globally. The challenge is to map the landscape across countries and understand what is out there in-the-wild.

Work done

The Canadian company ElementAI has created a Global AI Talent Report 2018 summarizing their research into the scope and breadth of the worldwide AI talent pool. Although their data visualizations map the distribution of worldwide talent at the start of 2018, they present a predominantly Western-centric model of AI expertise.

The second half of the report focuses on a qualitative assessment of talent and funding in Asia and Africa, but the reliability of the numbers drops off significantly and does not match the industry or academic output of these hotspots.

We identified this approach as decentralised and lacking in objectivity, and therefore aimed this project at filling the gaps that the ElementAI report has not been able to map.

Instead we focused on presenting a complete bottom-up “Emerging economies Artificial Intelligence ecosystem”, which proves to be a highly effective approach in Global South countries, due to the lack of structured data.

Need for large-scale objectives

The aim of the “Emerging economies Artificial Intelligence ecosystem” is to:

  • Create a bottom-up mapping via a community of AI ambassadors
  • Focus on the AI distribution in developing countries, specifically in low-middle income countries in 4 regions (Latin America/Caribbean, SSA, MENA, Asia)
  • Deliver an extensive list of AI players in developing countries and infographics
  • Create the first global directory of AI hot spots, matched to SDGs.

The “Emerging economies Artificial Intelligence ecosystem” identified players in three clusters:

  • Private sector with start-ups and accelerators
  • University labs and public sector
  • NGOs, CSOs, think tanks, development projects.

Results are encouraging

The project produced two tangible results:

The Web directory of institutions in AI emerging markets covers 4 regions: ASIA, LAC, MENA, and SSA. The mapping has produced a total of 617 entities, with the following breakdown per region.

Bottom-up mapping of players

The methodology includes manual (bottom-up) mapping using our researcher network, partner sites and the City.AI community in each country/region.

ElementAI used (i) results from several LinkedIn searches, which show the total number of profiles according to their own specialized parameters, (ii) captured the names of leading AI conference presenters who were considered to be influential experts in their respective fields of AI, (iii) relied on other reports and anecdotes from the global community.

We built on these results and made use of City.AI ambassadors, who host quarterly community gatherings worldwide. They foster their local ecosystems by curating high-quality talks from AI experts who focus on the lessons learned and challenges they face when putting AI into production.

Breakdown of AI ecosystems in regions

The selection comprises bottom-up mapped entities from 33 countries, with researchers still submitting weekly updates with information from the field.

  • ASIA has a total of 399 players in AI: Academia (92), Accelerators and Investors (9), Corporates (6), Social sector (2) and Start-ups (286), countries include Georgia, India, Pakistan, Sri Lanka, Indonesia, Lao, Nepal, Philippines, Vietnam, Bangladesh, Cambodia, Mongolia and Myanmar.
  • SSA has a total of 149 players in AI: Academia (111), Accelerators and Investors (1), Corporates (2), and Start-ups (29), countries included are Kenya, Nigeria, Zimbabwe, Mozambique, Senegal, Congo, Ivory Coast, Cameroon and Uganda.
  • MENA has a total of 57 players in AI: Academia (21), Accelerators and Investors (1), Corporates (2), Social sector (9) and Start-ups (22), countries included are Morocco, Egypt, Jordan, Tunisia and Syrian Arab Republic.
  • LAC has a total of 36 players in AI: Academia (21), Accelerators and Investors (1), Corporates (1), Social sector (1), countries included are Haiti, El Salvador, Bolivia, Guatemala and Honduras.

Regions are emerging strong

The directory has quantitative value, as it presents for the first time a bottom-up mapping of AI entities in the 4 regions. It also has qualitative value by unearthing anecdotal data; an immediate example is a university in Madagascar which has no Web presence but is running an AI teaching programme. No other methodology could unearth such data.

We have piloted this action with the European project in AI in OER titled X5GON which aims at creating a platform to deliver globally accessible Open Educational Resources in Machine Learning. Three X5GON partners currently have positions as UNESCO Chairs in OER and the newly established UNESCO Chair in Artificial Intelligence with chair holder John Shawe-Taylor.

Starting a new global AI research network

The results will be used to bootstrap a series of AI research networks, starting from the SSA region. This initial list of 600+ players will serve as a basis for capacity building, to connect the landscape and offer evidence and guidance ranging from AI-based policies to capacity building, research methods, use cases, deployment, exploration, exploitation and operability, applied to all SDG sectors.

Extending the directory

Caveats include the following:

  • The mapping of the country entities is subjective: although entries are normally based directly on input from country experts, only a few experts per country and per region could be consulted in the time available
  • On the whole, judgements of what fits into the category of AI are taken at face value, though the researchers looked for URLs to confirm the expertise and credibility claims made about specific institutions
  • The research outcomes presented in the study are not intended to be exhaustive about the state-of-the-art of AI across UNESCO Member States – in particular the research team did not run an extensive general analysis into matching AI companies with SDGs
  • A considerable number of countries and UNESCO Member States have some kind of initiative with regard to AI, but there is still a long way to go, both in mapping these Member States more deeply and in drilling into their research and industrial directions and expertise. In most Member States the vision of AI is rather broad. We are not sure how this vision is applied to actual policy and commerce, as our approach is still limited.
Partners

  • International Development Research Centre is a Canadian federal Crown corporation that invests in knowledge, innovation, and solutions to improve lives and livelihoods in the developing world
  • UNESCO’s Communication and Information Sector (CI), Section for ICT in Education, Science and Culture (CI/KSD/ICT)
  • Knowledge 4 All Foundation is an NGO based in the UK and has a community of 1000 machine learning researchers and makes use of the largest collection of AI video lectures
  • City.AI is a global non-profit network headquartered in the Netherlands. AI practitioners across 55+ cities on 6 continents are connecting, learning and sharing with the ultimate goal to enable everyone to apply AI. 170+ local ambassadors are the local community leaders and the backbone of City.AI
  • UNESCO Chair in Artificial Intelligence based at UCL in London
  • UNESCO Chair on Open Technologies for OER and Open Learning  setting up the Category 2 Centre on Artificial Intelligence under the auspices of UNESCO.