Abstract

To develop an Arabic-text-to-Moroccan Sign Language (MSL) translation product by building two parallel corpora: one of Arabic text and one of MSL encodings. The collected corpora will be used to train deep learning models that analyze Arabic words and sentences and map them to MSL encodings.

Introduction

Over 5% of the world’s population (466 million people) has disabling hearing loss; 34 million of them are children [1]. They can be hard of hearing or deaf. Hard of hearing people usually communicate through spoken language and can benefit from assistive devices like cochlear implants. Deaf people mostly have profound hearing loss, which implies very little or no hearing.

The main impact of deafness is on the individual’s ability to communicate with others, in addition to the emotional burden of loneliness and isolation in society. Consequently, deaf people cannot equally access public services, particularly education and health, and do not enjoy equal rights to participate in active and democratic life. This has a negative impact on their lives and the lives of the people surrounding them. Around the world, deaf people often communicate using a sign language, with gestures of both hands and facial expressions.

Sign languages are full-fledged natural languages with their own grammar and lexicon. However, they are not universal, although they share striking similarities. In Morocco, deaf children receive very little educational assistance. For many years, they learned a local variety of sign language drawing on Arabic, French, and American Sign Languages [2]. In April 2019, the government standardized the Moroccan Sign Language (MSL) and initiated programs to support the education of deaf children [3]. However, the teachers involved are mostly hearing, have limited command of MSL, and lack resources and tools to help deaf learners work with written or spoken text. Schools recruit interpreters to help students understand what is being taught and said in class. Otherwise, teachers use graphics and captioned videos to convey the mappings to signs, but they lack tools that translate written or spoken words and concepts into signs. This project aims to solve this particular issue.

Objectives

We propose an Arabic Speech-to-MSL translator. The translation can be divided into two parts: speech-to-text and text-to-MSL. In previous work [4], we focused on Arabic Speech-to-Text translation and conducted a survey and comparison of existing Speech-to-Text APIs. A web application was built for this purpose [check web app here]. Our main focus in the current work is to take the results from the Speech-to-Text module and perform Text-to-MSL translation.

Up to now, there is not enough data pairing Arabic words with their translations into MSL. This is challenging because we have to bring interpreters and linguists together in order to create this initial corpus. Each word or concept in the Arabic corpus can be mapped to a time series of hand gestures and facial expressions in the MSL corpus. Our main objective is to find the best possible mapping between these two corpora. The collected data will allow us to train large deep learning models. In fact, we aim to explore existing pretrained deep learning architectures suitable for analyzing Arabic words and sentences and finding their mappings to the MSL encodings. Recurrent Neural Networks using GRU and LSTM units, and their variants, have proved to be among the most suitable and powerful models for sequential data [5][6]. We believe that tuning these state-of-the-art models will allow us to achieve good generalization performance.
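To make the intended mapping concrete, here is a minimal sequence-to-sequence sketch: a GRU encoder reads Arabic token IDs and a GRU decoder emits MSL gesture codes. The vocabulary sizes, dimensions and the gesture-code representation are illustrative assumptions on our part, not design decisions of the project.

```python
# Minimal sketch: GRU encoder-decoder mapping Arabic token IDs to MSL
# gesture codes. All sizes are illustrative assumptions.
import tensorflow as tf

AR_VOCAB, MSL_VOCAB, DIM = 5000, 800, 128  # assumed vocabulary/embedding sizes

# Encoder: embed Arabic tokens and summarize the sentence with a GRU.
enc_in = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(AR_VOCAB, DIM)(enc_in)
_, enc_state = tf.keras.layers.GRU(DIM, return_state=True)(enc_emb)

# Decoder: generate the gesture-code sequence conditioned on that summary.
dec_in = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(MSL_VOCAB, DIM)(dec_in)
dec_out = tf.keras.layers.GRU(DIM, return_sequences=True)(
    dec_emb, initial_state=enc_state)
logits = tf.keras.layers.Dense(MSL_VOCAB)(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```

Once the parallel corpora exist, such a model would be trained with teacher forcing on (Arabic sentence, MSL code sequence) pairs; attention or pretrained encoders could then replace the plain GRU summary.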

Expected outcomes

With this work we expect to build the MSL and Arabic text corpora, which we aim to keep free and open for public use. To this end, we will develop an interactive web application for the creation of the two corpora. The Text-to-MSL translation product will be delivered as web and mobile applications.

 

Abstract

To test the feasibility of deploying Unmanned Ground Vehicles (UGVs) for automated intelligent patrol, wildlife monitoring, and the detection and identification of threats across the national parks and reserves in Kenya.

Introduction

Wildlife tourism is a significant and growing contributor to the economic and social development in the African region through revenue generation, infrastructure development and job creation. According to a recent press release by the World Travel and Tourism Council [1], travel and tourism contributed $194.2 billion (8.5% of GDP) to the African region in 2018 and supported 24.3 million jobs (6.7% of total employment). Globally, travel and tourism is a $7.6 trillion industry, and is responsible for an estimated 292 million jobs [2]. Tourism is also one of the few sectors in which female labor participation is already above parity, with women accounting for up to 70% of the workforce [2].

However, the wildlife tourism industry in Africa is being increasingly threatened by rising human population and wildlife crime. As poaching becomes more organised and livestock incursions become frequent occurrences, shortages in ranger workforce and shortcomings in technological developments in this space put thousands of species at risk of endangerment, and threaten to collapse the wildlife tourism industry and ecosystem. According to The National Wildlife Conservation Status Report, 2015 – 2017, presented by the Ministry of Tourism and Wildlife of Kenya [3], there is currently a shortage of 1038 rangers, from the required 2484 rangers in Kenyan national parks and reserves, a deficit of over 40%. With tourism in Kenya contributing a revenue of $1.5 billion in 2018 [4], and with the wildlife conservancies in Kenya supporting over 700,000 community livelihoods [3], the recession of the wildlife tourism industry could have major adverse economic and social impacts on the country. It is thus critical that sustainable solutions are reached to save the wildlife tourism industry, and further research is fuelled in this area.

The national parks, reserves and conservancies in Kenya span thousands of square kilometers, making it difficult for rangers to track down all possible poaching activities. Poachers normally use guns, snares, and poison to capture wild animals. By collecting real-world data on poaching activities, better learning of adversarial behavior is achieved and optimal strategies for anti-poaching patrols can be employed [5]. According to [5], much security-games research lacks real adversary data and does not consider heterogeneity among large adversary populations, which makes it difficult to build accurate models of adversary behavior. Past predictive models also neglect the uncertainty in crime data, use coarse-grained crime analysis, and propose time-consuming techniques that cannot be directly integrated into low-resource outposts [6].

To address shortages in ranger workforce, carry out monitoring activities more effectively, and detect criminal or endangering activities with greater precision, we propose the development of an open dataset containing georeferenced data on poaching incidents from the past 10 years as well as historical data on tagged elephant and rhino movements. We aim to observe correlations between the data using machine learning models and effectively model poaching trends and behavioural patterns to predict the location of the next poaching attack and suggest better patrol routes. The study will be carried out over a period of 4 months at Nairobi National Park in Kenya which covers a total area of 117 square kilometers and hosts many of the endangered wildlife species listed in the IUCN Red List of Threatened Species, such as the African Elephant and Black Rhinoceros.
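To make the intended modelling concrete, the sketch below ranks hypothetical park grid cells by predicted attack risk with a gradient-boosted classifier. The cell features (distances to roads and water, past incidents, tagged-animal presence) and the synthetic data are stand-ins for the proposed dataset, not the final model.

```python
# Illustrative sketch of poaching-hotspot prediction over a park grid.
# Features and labels are synthetic stand-ins for the proposed dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 1000  # park divided into grid cells (assumed granularity)

# Hypothetical per-cell features: distance to road, distance to water,
# past incident count, recent elephant/rhino presence from tag data.
X = rng.random((n_cells, 4))
y = (X[:, 2] + 0.5 * X[:, 3] + 0.2 * rng.random(n_cells) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Rank cells by predicted attack probability to suggest patrol priorities.
risk = model.predict_proba(X_te)[:, 1]
print("top-5 highest-risk cells:", np.argsort(risk)[::-1][:5])
print("held-out accuracy:", model.score(X_te, y_te))
```

With real georeferenced incidents, the same ranking step would feed directly into patrol-route suggestions per grid cell.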

Objectives

  1. To generate a real world dataset that maps poaching activities within the park.
  2. To develop a hybrid model that predicts the behavior of poachers by capturing their heterogeneity.
  3. To improve the accuracy of the hybrid model by creating novel algorithms for determining poaching activities and hotspots.
  4. To investigate the computation challenges faced when learning the behavioral model of poachers.

Vision

Our future vision is to test the feasibility of deploying Unmanned Ground Vehicles (UGVs) for automated intelligent patrol and wildlife monitoring across the national parks and reserves in Kenya. In addition to carrying out automated patrols using the models learned in this study, the UGVs would be fitted with an array of cameras and sensors enabling them to navigate autonomously within the parks, and would run multiple deep learning and computer vision algorithms carrying out numerous monitoring activities, such as detection of poaching, livestock incursions, human-wildlife conflict and distressed wildlife, as well as species identification.

References

[1] “African tourism sector booming – second-fastest growth rate in the world”, WTTC press release, Mar. 13, 2019. Accessed on Jul. 11, 2019. [Online]. Available: https://www.wttc.org/about/media-centre/press-releases/press-releases/2019/african-tourism-sector-booming-second-fastest-growth-rate-in-the-world/

[2] “Supporting Sustainable Livelihoods through Wildlife Tourism”, World Bank Group, 2018.

[3] “The National Wildlife Conservation Status Report, 2015 – 2017”, pp. 75, 131, Ministry of Tourism and Wildlife, Kenya, 2017.

[4] “Tourism Sector Performance Report – 2018”, Hon. Najib Balala, 2018.

[5] R. Yang, B. Ford, M. Tambe, and A. Lemieux, “Adaptive resource allocation for wildlife protection against illegal poachers,” in Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, May 2014, pp. 453-460.

[6] S. Gholami, S. McCarthy, B. Dilkina, A. Plumptre, M. Tambe, et al., “Adversary models account for imperfect crime data: Forecasting and planning against real-world poachers,” in Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, July 2018, pp. 823-831.

Abstract

To determine the effectiveness of Long Short-Term Memory networks in identifying pregnant mothers at high risk of developing preeclampsia, and the effectiveness of prophylaxis against preeclampsia.

Background

The Sustainable Development Goal (SDG) 3 aims to reduce the global maternal mortality ratio to less than 70 per 100,000 live births. These deaths are caused by a number of conditions experienced during pregnancy and childbirth. Preeclampsia has adverse effects on maternal health, especially in low- and middle-income countries. Many challenges persist in the prediction, prevention and management of preeclampsia. Prophylactic measures such as low-dose aspirin and calcium supplementation have been used in Western countries; however, more evidence is needed before they can be used in developing countries. The current management consists of timely diagnosis, proper management, timely delivery and good follow-up after birth. This study therefore seeks to explore the use of wearable devices to continuously measure blood pressure in pregnant mothers. The obtained blood pressure data will be used to predict future maternal blood pressures using Long Short-Term Memory (LSTM) recurrent neural networks on mobile devices. Mothers whose blood pressures are predicted to be high, and who therefore risk developing preeclampsia, will be put into two groups: one will receive the usual care in the high-risk clinic, while the second group will be supplemented with low-dose aspirin and calcium from the second trimester. It is expected that the prediction will serve to identify those at risk early so that management can be instituted immediately, and that those supplemented with low-dose aspirin and calcium will not develop preeclampsia. Additionally, the data collected will be valuable for future studies in the area of preeclampsia prediction using machine learning.

Introduction

Preeclampsia is a pregnancy complication characterized by persistent high blood pressure. It usually begins after 20 weeks of pregnancy in women whose blood pressure (BP) has been normal. If left untreated, it can progress to eclampsia, which is often fatal to both mother and baby (Macdonald-Wallis et al., 2015). Preeclampsia is often diagnosed when a mother goes to a health care facility for a routine check where a BP measurement is taken. The first sign of preeclampsia is a BP reading exceeding 140/90 on two or more occasions, at least four hours apart, at 20 or more weeks’ gestation. Most pregnant mothers in low- and middle-income countries do not have personal BP machines to take regular readings, so they depend on BP readings during antenatal clinic visits, of which there are typically 4-5 for the entire pregnancy. Early detection of preeclampsia is often missed during these visits because the BP measurement is usually taken only once unless otherwise indicated during the visit.

The detection and management of preeclampsia in out-of-clinic settings has, however, become much easier in the recent past through the development of smart blood pressure monitors. These devices, now readily available on the market, use a variety of non-intrusive methods: a cuff that inflates slightly to measure systolic and diastolic pressure via the oscillometric method, as in the Omron smart watch (Omron 2019), or a combination of optical sensors and clinically validated software algorithms, as in a number of smart watches such as those developed by Aktiia (2018) and Bpro by MedTach Inc (2018). These devices are not only able to take readings and generate alarms but are also capable of transmitting this data to other devices, such as mobile phones, for further analysis using techniques such as machine learning.

The use of machine learning for blood pressure prediction is steadily growing, using techniques such as Artificial Neural Networks (Hao et al., 2015) as well as classification and regression trees (Zhang et al., 2018). Additionally, Long Short-Term Memory (LSTM) networks are increasingly being considered, in studies such as those by Su et al. (2017), Zhao et al. (2019), Lo et al. (2017) and Radha et al. (2019). Most of these techniques, studies and solutions are developed and deployed on devices with significant computing and storage power, such as servers and supercomputers. This presents a major challenge to the utility of machine learning for individuals, who increasingly prefer to access services, content and solutions on their mobile devices.
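A minimal sketch of the kind of model we have in mind follows, assuming a window of past wearable readings and a small Keras LSTM; the window length, network size and synthetic data are illustrative, and the final step shows one plausible route (TensorFlow Lite conversion) toward the on-device inference discussed above.

```python
# Minimal sketch: LSTM predicting the next blood-pressure reading from a
# window of past readings. Shapes and hyperparameters are illustrative.
import numpy as np
import tensorflow as tf

WINDOW = 12   # past readings per input sequence (assumed)
FEATURES = 2  # systolic and diastolic pressure

def make_windows(series, window=WINDOW):
    """Slice a (timesteps, 2) series into (input window, next reading) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(FEATURES),  # next (systolic, diastolic) reading
])
model.compile(optimizer="adam", loss="mse")

# Synthetic data standing in for wearable readings, for illustration only.
series = np.random.normal([120.0, 80.0], [10.0, 5.0], size=(500, FEATURES))
X, y = make_windows(series.astype("float32"))
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Convert to TensorFlow Lite so inference can run on a phone.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("bp_lstm.tflite", "wb").write(tflite_model)
```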

Preeclampsia remains a significant public health problem in both developed and developing countries, contributing to maternal morbidity and mortality globally (McClure, Saleem, Pasha, & Goldenberg, 2009; Shah et al., 2009). However, the impact of the disease is felt more severely in developing countries (Prakash et al., 2010), where, unlike for other causes of mortality, medical intervention may be ineffective due to late presentation (Jido & Yakasai, 2013). The problem is confounded by the continuing mystery of the aetiology and the unpredictable nature of the disease (Jido & Yakasai, 2013). In developed countries, supplementation with low-dose aspirin and calcium is used as prophylaxis for preeclampsia (Anderson & Schmella, 2017); however, further evidence is needed before it can be adopted in developing countries such as Kenya. The aim of this study is twofold: to determine the effectiveness of Long Short-Term Memory networks in predicting those at high risk of developing preeclampsia, and the effectiveness of low-dose aspirin and calcium in the prophylaxis of preeclampsia in those at risk.

Objectives

  1. To determine the effectiveness of Long Short-Term Memory networks in the prediction of those at high risk of developing preeclampsia.
  2. To determine the effectiveness of low-dose aspirin and calcium supplementation in the prophylaxis of preeclampsia in those at risk.

Abstract

To develop a methodology for a semi-automatic classification of judgments disseminated by the High Court Library of the Malawi Judiciary with the purpose of enabling ‘intelligent searching’ within this body of knowledge.

Introduction

Challenges of Legal Research in Malawi

Malawi faces a serious problem when it comes to law reporting [5]. The Official Law Reports have been discontinued; the African Law Reports Malawi Series and the Malawi Law Reports cover only the period 1923 – 1993. The MalawiLII website [9], the Malawi section of the Southern Africa Legal Information Institute (SAFLII), is an online resource containing court judgments issued since 1993 and some statutory laws. However, it is neither complete nor easily searchable. Paid services such as Blackhall’s Laws of Malawi contain all the statutory laws (Principal and Subsidiary Legislation) of Malawi in force, available at one source on the Internet in an updated and consolidated form. However, this is only accessible to paying members and comes at a substantial cost. The High Court of Malawi maintains a section with printed judgments organised in folders by year and court, but the indexing used is too coarse. The High Court Library also has a paid email subscription service, through which members receive scanned images of judgments; however, these are not in a searchable form. Commentaries and digests are very rare, and most areas of the law, e.g., criminal law, do not have any such publications. There are also private libraries that may be maintained by various law firms.

Problem Statement

In Malawi, legal research faces significant challenges in accessing and searching for relevant information. On one hand are the issues of accessibility and availability, and the scattered nature of the official reports. On the other hand are the challenges arising from the fact that the current document structure of Malawi legal texts, e.g., court judgments, does not support a system of citation that makes it possible to link statutory law, case law and secondary law, or to search by “legal terms” and their specific interpretations. This research tackles the specific problem of classifying court judgments disseminated by the High Court Library. These judgments are not classified according to useful categories, such as court, topic of law, or the statutes they refer to. They do not have an index, and the structure of the documents is not uniform. The internal structure of judgments impacts the efficiency of a search [2,4,6].

Objectives

The aim of this research is to develop a methodology for a semi-automatic classification of judgments disseminated by the High Court Library of the Malawi Judiciary, with the purpose of enabling ‘intelligent searching’ within this body of knowledge. Specifically, we have the following sub-objectives.

  1. To test the efficiency of the search tool available at the moment in the MalawiLII website.
  2. To build an automatic tool for identifying and extracting the general structure of court judgments in Malawi.
  3. To build a semi-automatic tool for extracting key metadata from court judgments: type of case, involved parties, key legal terms, and the laws and statutes referred to in the judgment (a minimal extraction sketch follows this list).
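As a first approximation of sub-objective 3, the sketch below pulls a few metadata fields out of a judgment header with hand-written rules. The sample text, regular expressions and field names are illustrative assumptions about the documents, not a validated schema.

```python
# Minimal sketch of rule-based metadata extraction from a Malawi court
# judgment. Patterns and field names are illustrative assumptions.
import re

SAMPLE = """IN THE HIGH COURT OF MALAWI
CIVIL CAUSE NO. 123 OF 2016
BETWEEN: JOHN BANDA ... AND ... CITY COUNCIL
... pursuant to section 15 of the Employment Act ..."""

def extract_metadata(text):
    meta = {}
    # Case type and number, e.g. "CIVIL CAUSE NO. 123 OF 2016"
    m = re.search(r"(CIVIL|CRIMINAL)\s+(CAUSE|CASE)\s+NO\.?\s*(\d+)\s+OF\s+(\d{4})",
                  text, re.I)
    if m:
        meta["case_type"] = m.group(1).title()
        meta["case_number"] = f"{m.group(3)}/{m.group(4)}"
    # Involved parties from the "BETWEEN: ... AND ..." header line.
    parties = re.search(r"BETWEEN:\s*(.+?)\s+AND\s+(.+)", text)
    if parties:
        meta["parties"] = [parties.group(1).strip(" ."), parties.group(2).strip(" .")]
    # Statute references such as "section 15 of the Employment Act".
    meta["statutes"] = re.findall(r"section\s+\d+\s+of\s+the\s+[A-Z][\w' ]*Act",
                                  text, re.I)
    return meta

print(extract_metadata(SAMPLE))
```

In the envisaged tool, such rules would bootstrap training labels for a statistical classifier once the general document structure (sub-objective 2) has been extracted.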

References

[1] V. R. Benjamins, P. Casanovas, J. Breuker, and A. Gangemi. Law and the semantic web, an
introduction. In Law and the Semantic Web, pages 1–17. Springer, 2005.

[2] Atefeh Farzindar and Guy Lapalme. ‘LetSum, an automatic Legal Text Summarizing system’ in T. Gordon (ed.), Legal Knowledge and Information Systems. Jurix 2004: The Seventeenth Annual Conference. Amsterdam: IOS Press, 2004, pp. 11-18.

[3] Heinrich H. Dzinyemba, ‘Subject Index of Cases Unreported: Civil and Criminal Cases 1997 – 2003’, Malawi High Court Manuscript.

[4] H. Igari, A. Shimazu, and K. Ochimizu. Document structure analysis with syntactic model and parsers: Application to legal judgments. In JSAI International Symposium on A.I., pages 126–140, 2011.

[5] Judge Kapindu’s description of the Malawi Legal System notes in 2014 http://www.nyulawglobal.org/globalex/Malawi1.html#_edn70

[6] Marios Koniaris, George Papastefanatos, and Yannis Vassiliou, ‘Towards Automatic Structuring and Semantic Indexing of Legal Documents’, PCI ’16, November 10 – 12, 2016, Patras, Greece.

[7] Q. Lu, J. G. Conrad, K. Al-Kofahi, and W. Keenan. Legal document clustering with built-in topic segmentation. In Proceedings of CIKM ’11, pages 383–392, 2011.

[8] Daniel Locke, G. Zuccon, & H. Scells. Automatic query generation from legal texts for case law retrieval. In 13th Asia Information Retrieval Societies Conference (AIRS 2017), 2017, Jeju, Korea.

[9] MalawiLII Website

[10] Xiaojun Wan and Jianguo Xiao. Single Document Keyphrase Extraction Using Neighborhood Knowledge, Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, 2008.

[11] Adam Wyner, Raquel Mochales-Palau, Marie-Francine Moens, and David Milward, Approaches to Text Mining Arguments from Legal Cases, JURIX 2008.

Abstract

To monitor pests using a data-driven computer vision technique that directs extension officers’ support services across sub-Saharan Africa, through a real-time pest damage assessment and recommendation support system for small-scale tomato farmers.

Problem situation

Agriculture is a vital tool for sustainable development in Africa. A high-yielding crop such as tomato, with high economic returns, can greatly increase smallholder farmers’ income when well managed. Despite the socio-economic importance of tomato, which provides market opportunities and food and nutritional security for smallholder growers, production is severely constrained by the recent invasion of the tomato pest Tuta absoluta, which devastates yields with losses of up to 100%, jeopardizing the livelihoods of millions of growers in sub-Saharan Africa [1]. This puts small-scale farmers at risk of losing income. Tuta absoluta has swept across Africa, leading to the declaration of a state of emergency [2][3] in some of the continent’s main tomato-producing areas. The problem is compounded by the lack of adequate capacity to detect the pest and implement management measures. A shift from a reactive to a more proactive intervention, based on the internationally recognized three-stage approach of prevention, early detection and control, needs to be adopted. This work focuses on early detection: a novel approach in initiatives to strengthen phytosanitary capacity and systems to help address the devastation caused by Tuta absoluta.

Objective

This work will radically transform Tuta absoluta pest monitoring by using a data-driven computer vision technique to direct extension officers’ support services [4] across sub-Saharan Africa, through a real-time pest damage assessment and recommendation support system for small-scale tomato farmers. To the best of our knowledge, it will be the first alternative approach using computer vision to help alleviate the current alarming situation of the invasive tomato pest Tuta absoluta, by providing solutions that could help in early management and control. We aim to increase the effectiveness of limited farm-level extension support by leveraging emerging technologies [5] and directing extension support to targeted affected areas (based on damage-status maps) using our models trained on quantified images of pest damage.

Justification

Pests and diseases are a major threat to smallholder farmers [6]; however, Tuta absoluta control still relies on slow, inefficient manual identification and, in a few cases, on the support of a limited number of agricultural extension officers [7]. Applying computer-vision-based image recognition to the early identification and quantification of Tuta absoluta damage status, together with recent improvements in telecommunications infrastructure and information technology, will provide new tools deploying the state of the art in computer vision [8][9][10], enabling more targeted phytosanitary control measures against Tuta absoluta.

Preliminary works and Expected outcomes

Our hypothesis is that current emerging technology can be integrated into a decision platform for tomato pest management and can provide diagnostics in real time with minimal human capacity training. We also plan to extend and integrate alternative support, such as the recent discovery of a promising pesticide by our fellow team member, Ms. Never. We further anticipate that advice from limited extension services can be delivered to large numbers of smallholder farmers. We fully expect the proposed work to succeed. To this end, the first steps of this work have already been completed over the last 12 months through field work and in-house experiments to collect data using cameras and drones in affected areas of Arusha and Morogoro, Tanzania. We have taken and labeled over 4,000 images of tomatoes, along with multispectral images (RGB, infra-red and red edge, allowing for the collection of vegetation indices such as NDVI), and trained convolutional neural network models that can classify Tuta absoluta damage cases. This work also won the Computer Vision for Global Challenges (CV4GC) workshop award, presented at the Conference on Computer Vision and Pattern Recognition (CVPR). The multidisciplinary research team and links to major key players such as Sokoine University of Agriculture, NM-AIST and agricultural extension officers have supported the initial work. A combination of different technical skills and backgrounds may be the best approach to tackling the apparent state of emergency of the Tuta absoluta invasion. Since we expect our work to have major impact, we will test how Tuta absoluta pest damage map assessment could increase yield in tomato value chains in Tanzania and sub-Saharan Africa.
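For illustration, a minimal sketch of the kind of classifier trained on such labeled images follows. It fine-tunes an ImageNet-pretrained ResNet50; the directory layout and the three damage classes are our assumptions, not the project’s actual training setup.

```python
# Minimal sketch: fine-tuning a pretrained CNN on labeled tomato images.
# Directory layout and class names are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 3  # e.g. healthy / low damage / high damage (assumed labels)

# Expects tomato_images/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "tomato_images/train", image_size=IMG_SIZE, batch_size=32)
preprocess = tf.keras.applications.resnet50.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze pretrained features for the first stage

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SIZE + (3,)),
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Per-image predictions from such a model, aggregated over georeferenced field images, are what would feed the damage-status maps mentioned above.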

Reference

[1] Z. Never, A. N. Patrick, C. Musa, and M. Ernest, “Tomato Leafminer, Tuta absoluta (Meyrick 1917), an emerging agricultural pest in Sub-Saharan Africa: Current and prospective management strategies,”African J. Agric. Res., vol. 12, no. 6, pp. 389–396, 2017.

[2] Nigeria’s Kaduma state declares ‘tomato emergency’ [Online] Available https://www.bbc.com/news/world-africa-36369015 Accessed: 30th July, 2018.

[3] Invasive Africa: Tuta absoluta. [Online] Available https://www.youtube.com/watch?v=_dubR2qoW8k Accessed: 24th September, 2018.

[4] T. J. Maginga, T. Nordey, and M. Ally, “Extension System for Improving the Management of Vegetable Cropping Systems,” vol. 3, no. 4, 2018.

[5] Zahedi, Seyed Reza, and Seyed Morteza Zahedi. “Role of information and communication technologies in modern agriculture.” International Journal of Agriculture and Crop Sciences 4, no. 23 (2012): 1725- 1728.

[6] V. Mutayoba, T. Mwalimu, and N. Memorial, “Assessing tomato farming and marketing among smallholders in high potential agricultural areas of Tanzania,” pp. 0–17, July 2017.

[7] R. Y. A. Guimapi, S. A. Mohamed, G. O. Okeyo, F. T. Ndjomatchoua, S. Ekesi, and H. E. Z. Tonnang, “Modeling the risk of invasion and spread of Tuta absoluta in Africa,” Ecol. Complex., vol. 28, pp. 77–93, 2016.

[8] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” 2015.

[9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” pp. 1–14, 2015.

[10] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition.”

[11] H. Peng, Y. Bayram, L. Shaltiel-Harpaz, F. Sohrabi, A. Saji, U.T. Esenali, and A. Jalilov, “Tuta absoluta continues to disperse in Asia: damage, ongoing management and future challenges,” Journal of Pest Science (2018): 1-11.

The network will be coordinated by the Knowledge 4 All Foundation and IDRC in close collaboration with the UNESCO Chair in AI.

Background

Artificial Intelligence (AI) has the potential to alter our world and to advance human development, with dramatic implications across every sector of society. According to PwC, AI could provide a $15.7 trillion boost to the world’s GDP by 2030, and it is already bringing major advancements to the way we learn, conduct business and monitor our health.

There is a consensus that AI is changing our world, that it is here to stay and that it offers a vital commercial opportunity in every sector. However, the future of AI is uncertain, especially in Africa. While these technologies have accomplished impressive feats, including diagnosing disease and making self-driving cars a reality, other profound, and perhaps darker, implications have emerged.

Broad AI for Development initiative

At the same time, the indiscriminate dissemination of AI applications could also exacerbate inequalities. As AI applications spread rapidly across sectors and around the globe, more research is required to better understand how AI applications impact human development.

To enhance the economic and social prospects of people in the Global South, it is critical to support knowledge, skills development, and the institutions to responsibly implement and govern these technologies.

To this end, we will invest and design a range of AI for development (AI4D) initiatives, focusing on innovations, foundations and governance. These initiatives will support relevant community building, research, develop AI applications that are inclusive, ethical, and rights-based, and strengthen and create appropriate capacity building programs.

Activities of the African AI network

The main purpose of the AI for Development (AI4D) Africa project is to support a network of excellence in AI in sub-Saharan Africa to strengthen and develop community scientific and technological excellence in a range of AI-related issue areas. Specifically, the project will run 4 activities:

  1. Develop a network of institutions and individuals working on and researching AI from across sub-Saharan Africa, via workshops and consultations
  2. Deliver an AI research agenda with a focus on ethical, legal and social aspects of AI research
  3. Generate an AI capacity building agenda via a survey of universities
  4. Issue a call for at least ten multidisciplinary innovation projects within and outside the network, exploring local frontiers of research in AI

Additionally, the project will consider effective capacity building approaches based on identified policy and educational frameworks within the target countries.

Building on existing work

The project will draw on the recent mapping supported by K4A, IDRC and UNESCO, and on the PASCAL2 Network, to facilitate a bottom-up network/community of researchers who will investigate and recommend how the network/community should shape its research agenda and actions.

Connection to other networks

The project will seek to align with Humane AI, a European FET Flagship project proposal for new ethical, trustworthy AI technologies to enhance human capabilities and empower citizens and society. The project includes three major European AI communities: ELLIS, CLAIRE and PASCAL.

Timeline

AI4D Africa will start in December 2018, run for 18 months, and result in the establishment of the network, a research “roadmap”, a portfolio of innovation projects, and recommendations for capacity building for ethical and locally relevant AI research across the African continent.

 

The project will contribute to Open Educational Resources (OER) in a twofold manner: firstly, by creating innovative practices that drive forward the use of ICTs for OER-supported teaching, learning and online education, using VideoLectures.Net, K4All, Opencast and OCWC; and secondly, by applying “big data” and AI techniques to analyse emerging trends in learning outcomes and in the creation and dissemination of OER.

The project focuses on OER at the national/regional and global levels, in line with the set objectives. It will offer AI technologies, evidence and guidance for OER research methods covering research, use cases, deployment, exploration, exploitation and operability, as applicable to all educational sectors.

This will be done based on analysis of the implementation, impact and creation of OER initiatives pursued by other UNESCO OER Chairs. This project will create an OER research agenda and technology roadmap.

The Chair is Mitja Jermol, who is also a trustee of the Knowledge for All Foundation and head of the Centre for Knowledge Transfer at JSI, working in the areas of e-learning and the dissemination and promotion of research results.

The highlights, activities and outputs of the UNESCO Chair on Open Technologies for Open Educational Resources and Open Learning at the Jožef Stefan Institute (Slovenia) during its first 4 years of formal activities (November 2014 – November 2018) are listed below as major accomplishments.

Relevant major research results:

  • Strategic projects 2014-2018: transLectures[1], MediaMixer[2], Xlike[3], Xlime[4], TraMOOC[5], MOVING[6]
  • Strategic projects 2018-2022 funded by the Government of Slovenia and European Commission: X5GON[7], CLEOPATRA[8], CogLo[9], DataBench[10], TheyBuyForYou[11], MicroHE[12]
  • Addition of 5000 new OER-based academic videos on VideoLectures.Net, a WSA 2009 and 2013 award-winning video library currently including content from 1105 events, 15617 authors and 21269 lectures (some 24658 videos in total).

Relevant major capacity building results:

  • The 2nd World Open Educational Resources (OER) Congress in Ljubljana, Slovenia, on 18–20 September 2017, organized by UNESCO and the Slovenian Ministry of Education, Science and Sport in close collaboration with the Commonwealth of Learning, Creative Commons, the Slovenian National Commission for UNESCO and the Chair, with the generous support of The William and Flora Hewlett Foundation. This event marked five years since the World OER Congress was held in Paris in June 2012.
  • 21 satellite events at the Congress facilitated and co-funded by the Chair
  • The Chair organized, in synergy with numerous partners, a series of 20+ events relevant to UNESCO’s strategic objectives, covering core UNESCO actions.

Relevant major educational result:

  • Open Education for a Better World is a half-year online mentoring program in which students from very different backgrounds and different parts of the world developed 14 OER projects aligned with the UN SDG agenda.

Relevant major policy results:

  • The Ljubljana OER Action Plan: The plan presents 41 recommended actions to mainstream open-licensed resources to help all Member States to build Knowledge Societies and achieve the 2030 Sustainable Development Goal 4 on “quality and lifelong education.”
  • The Ministerial Statement: The statement is endorsed by 20 Ministers and their designated representatives of Bangladesh, Barbados, Bulgaria, Czech Republic, Costa Rica, Croatia, Kiribati, Lao People’s Democratic Republic, Lithuania, Malta, Mauritius, Mauritania, Mozambique, Palestine, Romania, Serbia, Slovakia, Slovenia, South Africa and the United Arab Emirates.
  • The Dynamic Coalition on OER: Formation of a Dynamic Coalition of national governments in OER and Open Education, to propose, construct and operate a dynamic coalition of countries devoted to researching, developing, deploying and exchanging OER and Open Education solutions, practices and policies.
  • The Slovenian Case in OER – From Commitment to Action: the national policy document is now in a website format intended to show other governments at the Congress how Slovenia is pursuing the idea of opening up education. We have identified 5 major areas across all fields of education.

Relevant major technological result:

  • Global Infrastructure for OER promises to deliver the first building blocks of an open, artificial-intelligence-powered infrastructure to easily connect all global OER sites/silos and provide a digestion pipeline for understanding content, including through the use of machine translation, reasoning, recommendation, automatic curation, personalisation and aggregation of OER. It will provide technology services benefiting teachers, learners, researchers, policy makers and technologists.

Upcoming relevant results:

  • UNESCO Recommendation on Open Educational Resources (OER), leading the draft text formulation further to the adoption of Resolution 44 ‘Desirability of a standard-setting instrument on international collaboration in the field of Open Educational Resources (OER)’ at the 39th Session of the UNESCO General Conference
  • Setting up the Category 2 Centre on Artificial Intelligence Under the Auspices of UNESCO

[1] transLectures FP7 ICT Project – FP7-ICT-287755-STREP – Language technologies – website : http://translectures.eu/

[2] MediaMixer FP7 ICT Project – FP7-ICT-318101-CSA- Intelligent Information Management website : http://mediamixer.eu/

[3] Xlike FP7 ICT Project – FP7-ICT-288342-STREP – Cross Lingual Knowledge Extraction (2012-2014) http://www.xlike.org/

[4] Xlime FP7 ICT Project – STREP, FP7-ICT-2013-10 – crossLingual crossMedia knowledge extraction (2014-2017)

[5] TraMOOC H2020 Project – Innovation Action, Translation for Massive Open Online Courses (2014-2018) http://tramooc.eu/

[6] MOVING H2020 Project – INSO-4-2015 – Training towards a society of data-savvy information professionals to enable open leadership innovation, http://moving-project.eu

[7] X5gon H2020 Project – ICT-19-2017 – X5GON: Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site Global OER Network, http://x5gon.org/

[8] CLEOPATRA – H2020-MSCA-ITN-2018 – Cross-lingual Event-centric Open Analytics Research Academy

[9] CogLo – Future COGnitive Logistics Operations through Social Internet of Things

[10] DataBench – Evidence Based Big Data Benchmarking to Improve Business Performance

[11] TheyBuyForYou – Enabling procurement data value chains for economic development, demand management, competitive markets and vendor intelligence

[12] MicroHE – Support Future Learning Excellence through Micro-Credentialing in Higher Education, Erasmus+, EACEA/41/2016 – Forward-Looking Cooperation Projects

The Chair is based at the Jožef Stefan Institute, which is a research rather than an educational institution, so in terms of education no traditional certificates were provided. However, it does deliver training programmes on three levels:

  • A Course in Open Education Design for practitioners, in partnership with the University of Nova Gorica. Designed as a 5-day course, it familiarises participants with open education design processes, methods and tools in OER, building on the Open Education for a Better World mentoring programme delivering OER projects across the world.
  • Via MicroHE – Support Future Learning Excellence through Micro-Credentialing in Higher Education, whose specific objective is to examine the scope for and impact of micro-credentials – a form of short-cycle tertiary qualification – in Higher Education, and to deliver these certificates over a blockchain solution for open and online learning via the educational repository VideoLectures.Net.
  • Open Education for a Better World is a tuition-free international online mentoring programme launched by the Chair and the University of Nova Gorica to unlock the potential of open education in achieving the UN’s Sustainable Development Goals (SDGs). It is a half-year-long mentoring programme for students from all backgrounds, regions and continents with the potential and desire to employ Open Educational Resources to solve large-scale, relevant problems in today’s global landscape. The programme, with over 40 mentors, takes place over half a calendar year, from January through July 2018. All mentoring sessions and events take place online and comprise one final end-of-year event designed to help students finalise their work. OE4BW mentees are expected to attend the final event.

There are currently 14 projects in the first batch of the Programme, which students and mentors are actively finalising.

The UNESCO Chair in OER at Université de Nantes aims at developing technologies, and research into these technologies, for open educational resources in the training of teachers.

Activities related to the Chair consist of contributing to (i) the success of Class’Code, a French national project in which blended MOOCs are provided with the goal of training teachers and educators in coding and computational thinking; (ii) analyzing Class’Code, documenting it and helping its openness; and (iii) disseminating the model.

The Chair is Colin de la Higuera, professor in Computer Science at University of Nantes and in the Laboratoire des Sciences du Numérique de Nantes. His research field is Machine Learning, a field in which he has collaborated with researchers from more than 20 countries over the years. He is the author of two books and many research papers.

He was the founding President of the learned society Société informatique de France, and contributed to the design and launch of the Class’Code project, whose goal is to train teachers and educators in coding and computational thinking.

Colin de la Higuera is also trustee of the Knowledge for All Foundation and currently works with European actors on indexing and accessing OER in a more helpful way.

Some years ago the teacher would prepare her new class by using a textbook, searching through her library, or the library close to her home, perhaps discussing with her close colleagues and taking advantage of her own personal experience.

The advent of the Internet has changed that, like many other things. In 2017 the teacher uses the Internet as a set of textbooks and a huge library, and her colleagues may now live at the other end of the planet.

But this apparent richness is of little use if the right resources are too hard to find. How does one know that a video is useful without watching it? How can one trust that a lecture given by an unknown author is correct? How do we discover the resource we need among thousands of others? And how do we find the resource which is open, and which we can therefore freely redistribute to our students?

To these difficulties we can add another one: how can we build a new resource in such a way that we are allowed to share it with others? With an answer to this question, the teacher ceases to be a mere consumer and becomes a creator.

These questions are economic, political, pedagogical. But also technological: where are the tools enabling the teacher to make full use of free knowledge?

Furthermore, we would like these tools and the process itself to take place with no added cost: the challenge is that building open educational resources and using them should be as simple and as cost-free as possible.

The issues raised here are backed by many institutions, including UNESCO and the OECD, and by the many countries that signed the Paris 2012 declaration.

The Unesco Chair on Technologies for the Training of Teachers by Open Educational Resources, supported by the Nantes University Foundation, aims at contributing to this challenge.

The Chair will build its activities upon Unesco’s international dimension and the research cooperation maintained over the years with many actors. The Chair will benefit from the favourable ecosystem one can find in Nantes on these questions and more particularly of its research teams (LS2N, CREN,…) as well as the projects these are involved in (Class’Code, Labex CominLabs, …).

The Chair’s founding ideas – Class’Code

In October 2016, University Presidents Frédérique Vidal and Olivier Laboux, Chair-holder Colin de la Higuera, and many other presidents and directors of universities, research institutes, learned societies, and associations representing formal and informal education in France wrote in the French newspaper Le Monde:

The task facing education is immense, because this new subject must be taught without there really being enough educators and teachers trained for it. Indeed, the speed at which digital technologies have changed our daily lives has far outpaced the pace of change in teacher training.

Yet today it is essential both to begin educating children and to train the educators and teachers who, in primary and middle schools but also in the context of extracurricular activities, will find themselves facing these children.

Class’Code is a free, innovative blended learning program that places computer science at the heart of our educational system; the goal is to train members of the educational and informatics communities to teach young people aged 8 to 14 basic programming and computer science. This includes creative programming, information coding, familiarization with networks, fun robotics, and the related impacts of technology on our society. It will help familiarise our children with the concepts of algorithms and computational thinking, and thus give them control over the digital world.

The Class’Code project is supported by both academic and industrial federations in computer science, led by the SIF (Société informatique de France) and managed by Inria (the French research institute in computer science and applied mathematics). Magic Makers is in charge of the pedagogy, Open Classrooms drives the production, and deployment across the territories is led by Les Petits Débrouillards.

Computational Thinking

Whereas coding, programming and computing are the popular words used to express the competences to be acquired at an early age in order not to be dependent in the information society, it is now becoming understood that the knowledge involved is less computer-oriented and closer to problem-solving activities. Computational thinking (in French, pensée informatique) is the set of processes one uses to solve a problem by representing the data as information, algorithmically solving the associated problem, and returning the result through some device. It is now strongly argued that it is a paradigm to be acquired at school.
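As a toy illustration of that represent-solve-return pipeline, the short sketch below (our example, not Class’Code material) counts the words in a sentence and reports the most frequent one.

```python
# Toy illustration of the computational-thinking pipeline described above:
# represent data as information, solve algorithmically, return the result.
text = "the cat sat on the mat the cat"

# 1. Represent the data: turn raw text into word counts.
counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1

# 2. Solve algorithmically: find the most frequent word.
most_frequent = max(counts, key=counts.get)

# 3. Return the result through the device.
print(most_frequent, counts[most_frequent])  # -> "the" 3
```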

Artificial Intelligence (AI) is a key technology for the further development of the Internet and of all future digital devices and applications. The rapid growth of talent, projects, companies, research outputs and more has fuelled sectors ranging from data analytics and Web platforms to driverless vehicles and a new generation of robots for our homes, hospitals, farms and factories.

The total potential is undefined, as there is no comprehensive study of “all things AI”. We therefore prepared a global map of talents, players, knowledge and co-creation hot spots in AI in emerging economies.

Emerging economies Artificial Intelligence ecosystem

Multiple helix approach needed

Research and innovation, investments and business dynamics in AI are increasingly being influenced by the development of interactions among all stakeholders (“multiple helix” approach). More actors are involved in AI knowledge creation and the innovation process. Universities and research institutions collaborate with business enterprises, hospitals, local municipalities, public services providers and citizen organisations.

At the same time, research, innovation and business processes are changing with the transition towards open science, open innovation and open education, with rapid increase of funding in AI, as well as the computational capacity. The focus is increasingly on developing, testing and rolling out a large number of solutions for the benefit of citizens and local jobs.

This shows the way for new hot-spots of AI knowledge and co-creation globally. The challenge is to map the landscape across countries and understand what is out there in-the-wild.

Work done

The Canadian company ElementAI has created a Global AI Talent Report 2018 summarizing their research into the scope and breadth of the worldwide AI talent pool. Although their data visualizations map the distribution of worldwide talent at the start of 2018, they present a predominantly Western-centric model of AI expertise.

The second half of the report focuses on a qualitative assessment of talent and funding in Asia and Africa, but the reliability of the numbers drops off significantly and does not match the industry or academic output of these hotspots.

We identified this approach as decentralised and lacking in objectivity, and therefore aimed the project at filling the gaps that the ElementAI report was not able to map.

Instead we focused on presenting a complete bottom-up “Emerging economies Artificial Intelligence ecosystem”, which proves to be a highly effective approach in Global South countries, due to the lack of structured data.

Need for large-scale objectives

The aim of the “Emerging economies Artificial Intelligence ecosystem” is to:

  • Create a bottom-up mapping via a community of AI ambassadors
  • Focus on the AI distribution in developing countries, specifically in low-middle income countries in 4 regions (Latin America/Caribbean, SSA, MENA, Asia)
  • Deliver an extensive list of AI players in developing countries and infographics
  • Create first global directory for AI hot spots, and matching SDGs.

The “Emerging economies Artificial Intelligence ecosystem” identified players in three clusters:

  • Private sector with start-ups and accelerators
  • University labs and public sector
  • NGOs, CSOs, think tanks, development projects.

Results are encouraging

The project produced two tangible results:

The Web directory of AI institutions in emerging markets covers 4 regions: Asia, LAC, MENA and SSA. The mapping produced a total of 617 entities, with the following breakdown per region.

Bottom-up mapping of players

The methodology includes manual mapping (bottom-up) using our researcher network, partner sites, and the City.AI community in each country/region.

ElementAI used (i) results from several LinkedIn searches, which show the total number of profiles according to their own specialized parameters, (ii) captured the names of leading AI conference presenters who were considered to be influential experts in their respective fields of AI, (iii) relied on other reports and anecdotes from the global community.

We built on these results and made use of City.AI ambassadors, who host quarterly community gatherings worldwide. They foster their local ecosystems by curating high-quality talks from AI experts who focus on the lessons learned and the challenges they face when putting AI into production.

Breakdown of AI ecosystems in regions

The selection comprises bottom-up mapped entities from 33 countries, with researchers still submitting weekly updates with information from the field.

  • ASIA has a total of 399 players in AI: Academia (92), Accelerators and Investors (9), Corporates (6), Social sector (2) and Start-ups (286), countries include Georgia, India, Pakistan, Sri Lanka, Indonesia, Lao, Nepal, Philippines, Vietnam, Bangladesh, Cambodia, Mongolia and Myanmar.
  • SSA has a total of 149 players in AI: Academia (111), Accelerators and Investors (1), Corporates (2), and Start-ups (29), countries included are Kenya, Nigeria, Zimbabwe, Mozambique, Senegal, Congo, Ivory Coast, Cameroon and Uganda.
  • MENA has a total of 57 players in AI: Academia (21), Accelerators and Investors (1), Corporates (2), Social sector (9) and Start-ups (22), countries included are Morocco, Egypt, Jordan, Tunisia and Syrian Arab Republic.
  • LAC has a total of 36 players in AI: Academia (21), Accelerators and Investors (1), Corporates (1), Social sector (1), countries included are Haiti, El Salvador, Bolivia, Guatemala and Honduras.

Regions are emerging strong

The directory has quantitative value, as it presents for the first time a bottom-up mapping of AI entities in the 4 regions. It also has qualitative value in unearthing anecdotal data: an immediate example is a university in Madagascar which has no Web presence but is running an AI teaching programme. No other methodology could unearth such data.

We have piloted this action with the European AI-in-OER project X5GON, which aims at creating a platform that uses machine learning to deliver globally accessible Open Educational Resources. Three X5GON partners currently hold positions as UNESCO Chairs in OER, alongside the newly established UNESCO Chair in Artificial Intelligence with chair holder John Shawe-Taylor.

Starting a new global AI research network

The results will be used to bootstrap a series of AI research networks, starting from the SSA region. This initial list of 600+ players will serve as a basis for capacity building to connect the landscape and offer evidence and guidance ranging from AI-based policies to capacity building, research methods, use cases, deployment, exploration, exploitation and operability, applied to all SDG sectors.

Extending the directory

Caveats include the following:

  • The mapping of the country entities is subjective: although they are normally based directly on input from country experts, only a few experts per country and per region could be consulted in the time available
  • On the whole, judgements of what fits into the category of AI are taken at face value, though the researchers looked for URLs to confirm each entity’s expertise and the credibility of claims made about specific institutions
  • The research outcomes presented in the study are not intended to be exhaustive about the state-of-the-art of AI across UNESCO Member States – in particular the research team did not run an extensive general analysis into matching AI companies with SDGs
  • A considerable number of countries and UNESCO Member States (MS) have some kind of initiative with regard to AI, but there is still a long way to go, both in mapping these MS more deeply and in drilling into research and industrial directions and expertise. In most MS the vision of AI is rather broad, and we are not sure how this vision is applied to actual policy and commerce, as our approach is still limited.
Partners

  • International Development Research Centre is a Canadian federal Crown corporation that invests in knowledge, innovation, and solutions to improve lives and livelihoods in the developing world
  • UNESCO’s Communication and Information Sector (CI), Section for ICT in Education, Science and Culture (CI/KSD/ICT)
  • Knowledge 4 All Foundation is an NGO based in the UK, with a community of 1,000 machine learning researchers, and makes use of the largest collection of AI video lectures
  • City.AI is a global non-profit network headquartered in the Netherlands. AI practitioners across 55+ cities on 6 continents are connecting, learning and sharing with the ultimate goal to enable everyone to apply AI. 170+ local ambassadors are the local community leaders and the backbone of City.AI
  • UNESCO Chair in Artificial Intelligence based at UCL in London
  • UNESCO Chair on Open Technologies for OER and Open Learning, setting up the Category 2 Centre on Artificial Intelligence under the auspices of UNESCO.

The Chair will study Artificial Intelligence as a driver of, and component in, solutions and strategies that assist the achievement of the SDGs. It will contribute to the way information can be intelligently assimilated and utilized across a range of sectors and services, and steer AI to follow a development-oriented course.

The project focuses on AI at the national (or regional) level and will offer evidence and guidance ranging from AI research methods to use cases, deployment, exploration, exploitation and operability, applied to all SDG sectors. The analysis of the implementation, impact and creation of AI initiatives is in line with agendas pursued by other UNESCO Chairs.

It will create an AI research agenda and technology roadmap, and a series of AI deployment scenarios in specific contexts. The Chair will embed the values of ethics, well-being, peace and human rights within AI, and study how stakeholders and policymakers can best utilize AI, taking advantage of its benefits while minimizing its risks.

John Shawe-Taylor has the role of Chairholder and is already working with the Knowledge Societies Division of UNESCO’s CI sector to prepare a mapping of the AI ecosystem in emerging economies. He was also instrumental in designing, with the K4A team, a challenge for the 2nd World Open Educational Resources (OER) Congress named “Artificial Intelligence for solving SDG 4”, and organized an accompanying event at the Congress on the same theme with several other Chairs.

 

We are at the threshold of an era when much of our productivity and prosperity will be derived from the systems and machines we create. We are accustomed now to technology developing fast, but that pace will increase and AI will drive much of that acceleration. The impacts on society and the economy will be profound, although the exact nature of those impacts is uncertain. We are convinced that because of the UK’s current and historical strengths in this area UCL is in a strong position to lead rather than follow in both the development of the technology and its deployment in all sectors of industry, education and government.

Artificial Intelligence describes a set of advanced general-purpose digital technologies that enable machines to perform highly complex tasks effectively. AI is not really any single thing: it is a set of rich sub-disciplines and methods – vision, perception, speech and dialogue, decisions and planning, robotics and so on. We have to consider all of these disciplines and methods when seeking real solutions that deliver value to human beings and organizations. Here, AI is viewed as a set of associated technologies and techniques that can complement traditional approaches, human intelligence, analytics and/or other techniques.

Key factors have combined to increase the importance of the AI Chair in recent years:

  • New and larger volumes of data
  • A supply of experts with the specific high-level skills
  • The availability of increasingly powerful computing capacity.

This is UNESCO’s only Chair with this remit. It does not copy other Chairs, but has a unique remit and identity, complementing other UNESCO Chairs in the areas of Data Science and Analytics, and of data and open technologies for Open Educational Resources. It has a clear interdisciplinary and intersectoral dimension: it transcends several of UNESCO’s programmes – education, communication and information, freedom of expression (fake news), media development and universal access to information and knowledge, and especially science for peace and sustainable development.

The Chair is established at University College London (UCL), London’s leading multidisciplinary university, with 11,000 staff, 35,000 students and an annual income of over £1bn. UCL in general, and John Shawe-Taylor in particular, are already leading parties in a European and global network of partners collaborating in Artificial Intelligence (AI). This network includes Knowledge 4 All Foundation Ltd. (K4A), together with a number of top-ranked research institutions involved in European FP7 and H2020 research projects and consortiums, such as NeuroCOLT, PASCAL, PASCAL 2, KerMIT, CompLACS and, most recently, X5GON.

Present the potential of AI for SDGs

Starting with two case studies in education and health: defining sustainable processes and structures (governance, access, business models, licensing, etc.) as well as developing a suitable software infrastructure (APIs and tools to aggregate existing tools and algorithms and to make them easily deployable in applications, as well as to access data and computing resources). Collect data from available sources to create an infrastructure to ingest, process, analyze, aggregate and enrich domain-specific data for specific SDG challenges. This infrastructure will scale to large amounts of data, starting from the education-based project (millions of OER audio, video and animation files, curricula, syllabi, lecture notes, assignments, tests, projects, courses, course materials, modules, textbooks and datasets) that will form the basis for the algorithms to mine, represent, reason upon and use this diversity of information. This work would be society-agnostic, but would include the countries that are already OER adopters. As argued before, the core partnership lies with research institutes and, partially, universities in the countries listed above. Not all countries are represented in our network, although they can still be included in individual projects or as objects of study, and certainly in the dissemination activities. In addition, the infrastructure will provide a test bed for researchers outside the AI domain who might be interested in accessing the data processed and produced in the project. Access to the data sets and their metadata will be provided via a Web-based API, which will furthermore allow publishing new data sources (see the sketch below).
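
As a purely illustrative sketch, the following Python snippet shows how a client might query such a Web-based API for resource metadata and register a new data source. The endpoint, routes and JSON fields here are hypothetical assumptions for illustration, not the project’s actual API.

    import requests

    BASE_URL = "https://api.example.org/v1"  # hypothetical endpoint, not the project's real API

    def search_resources(query, media_type="video", limit=10):
        """Query the (hypothetical) metadata API for matching OER resources."""
        resp = requests.get(
            f"{BASE_URL}/resources",
            params={"q": query, "type": media_type, "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["items"]  # assumed JSON envelope

    def publish_source(manifest):
        """Register a new data source; the API is assumed to return its identifier."""
        resp = requests.post(f"{BASE_URL}/sources", json=manifest, timeout=10)
        resp.raise_for_status()
        return resp.json()["id"]

    for item in search_resources("linear algebra"):
        print(item.get("title"), item.get("license"))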

Place the issue of AI for SDGs on the national, regional and international agenda

To identify options to harness the potential of rapid technological change and innovation towards achieving the Sustainable Development Goals through the use of AI. Collect information on, gain insight into, and identify the major characteristics (both similarities and distinctions) of AI developments at the national (and regional) level in a series of contexts that may be considered representative of the full spectrum of data science as well as of the variety in societies (e.g., the Netherlands, UK, Spain, Turkey, India, China, South Africa, Nigeria, Brazil, USA, Canada, Australia, New Zealand, and Commonwealth states). The core partnership for this objective lies with already established connections with Computer Science departments and data partners.

Mobilizing an AI community

Including researchers, businesses and start-ups, to provide access to knowledge, algorithms and tools for achieving the SDGs. The increasing capability and use of machine learning, the rising creation of augmented reality content, and the changing capabilities and uses of smartphones have broad potential to contribute towards the SDGs.

Evaluate the critical success and failure factors

Among the national/regional AI case studies, in relation to the variety of contexts: convert these into a context-dependent, multi-faceted framework of best conditions and guidelines for implementing an AI strategy at the national (or regional) level, and derive a set of AI scenarios fit for specific contexts. Analyze the educational, economic and societal impact of the national/regional AI case studies, advise on new requirements for machine learning and educational environments, and develop context-dependent business models, explicitly taking into account societal benefits.

Disseminate and share the broad AI knowledge

That has been created and derived in the project; more specifically, provide good and bad practices, underline the contextual dependence, and give guidance and basic support to new national (or regional) AI initiatives. With instruments such as visiting professorships, joint research projects and scholarships for PhD students, the project will provide opportunities and support for the capacity building of partners in different regions.

AI Research – research activities on AI-enabled applications for the SDGs, with definition of case studies:

The two initial case studies address SDG 3 and SDG 4, where two projects are running: Malaria Diagnosis and X5GON. Artificial Intelligence technologies have been developed and “marketed” for educational and healthcare use for decades, but a large implementation gap exists. The adoption rate of technologies in education is slow because of a mismatch between real needs and supply, and the lack of use of technologies particularly affects primary and secondary education. There is a need to build the evidence base for more effective learning with technology. This will go hand in hand with tools and processes for collecting, storing, exploring and reasoning on large-scale educational data. We will collect “big data” from students’ technology-supported learning activities, transforming the data into information and producing and recommending actions aimed at improving learning outcomes, as sketched below.
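
A minimal sketch of this data-to-information step, assuming a hypothetical event schema (the field names below are illustrative, not the project’s actual data model):

    import pandas as pd

    # Hypothetical log of technology-supported learning activities.
    events = pd.DataFrame([
        {"student": "s1", "resource": "oer-1", "seconds": 320, "completed": True},
        {"student": "s1", "resource": "oer-2", "seconds": 45,  "completed": False},
        {"student": "s2", "resource": "oer-1", "seconds": 600, "completed": True},
    ])

    # Aggregate raw events into per-student signals (time on task,
    # completion rate) that a teacher or a recommender could act on.
    signals = events.groupby("student").agg(
        total_seconds=("seconds", "sum"),
        completion_rate=("completed", "mean"),
    )
    print(signals)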

Prototype development, installation and deployment of state-of-the-art AI tools and technologies:

Prototypes to measure feedback and to analyze contexts, processes and environments. In order to obtain feasible research data, a real-life, functional ICT-based network is required. AI is expected to contribute advances in the data collected and to deliver the smart tools and analytical techniques required to generate actionable information from large and diverse datasets.

Dissemination and demonstration

Development of an AI platform, joint or single business plans, exploitation of results, investigation of service models, clustering and liaison, active community building and standard means of dissemination (presentations, publications, events, meetings, data exchange). The main visibility objectives of this project are to make scientific, industrial and educational communities aware of the project and its results; to disseminate project activities and results in related fields or application sectors; and to build a community around the AI project results while actively maintaining a communication channel to its members. The project will follow a dissemination and community-building strategy and will ensure that the technology deployed and the data collected and created within the project remain available beyond the end of the project. It will actively communicate with communities outside the project, collect their feedback and involve them in the software development and research activities.

Exchange of research staff

Supervision and training of PhD students and other research staff interested in computer science and the technical aspects of AI. We recognize the need for researchers to work with large-scale data, and we encourage them to develop collaborations with users to facilitate this. We also encourage them to explore alternative routes to accessing sufficient computational resources (e.g., use of commercial clouds). However, the Chair will not try to imitate industry; it will focus on AI opportunities not yet identified by industry or not yet commercially viable.

Networking

Sharing and promoting best practices, case studies, prototypes and research results. Here we define two types of research projects. The first type (Type A) reflects the research program of the AI Chair and therefore focuses on AI for SDG 4 (online learning with OER) and SDG 3 (healthcare), underpinned by advanced knowledge and context technologies. These research projects will be supervised under the AI Chair. The second category (Type B) is a set of ongoing H2020 and Erasmus+ research projects involving University College London in a variety of AI-related subjects; these include: learning and adaptation; sensory understanding and interaction; reasoning and planning; optimisation of procedures and parameters; autonomy; creativity; and extracting knowledge and predictions from large, diverse digital data. The applications of these AI systems are very diverse, ranging from understanding healthcare data to autonomous and adaptive robotic systems, smart supply chains, and video game design and content creation, and they will be connected to the AI Chair in order to enhance synergy with the UNESCO Chair in Analytics and Data Science within the overall AI research agenda at University College London. The AI Chair is fully involved in these (Type B) research projects. In the first year and part of the second year, the research activities (Type A) will build on ongoing H2020 research projects (Type B). At the same time, University College London will bid for and develop new research projects under the Horizon 2020 research program and the research program of the AI Chair, in collaboration with the other partners listed in c. partnerships/networking (Type A).

The Chair will address the topic of AI in two specific scenarios, contributing real-life case studies that showcase AI as a feasible approach to meeting initially two, and subsequently other, SDGs.

Sustainable Development Goal 3: Good Health and Well-being

Machine learning is already contributing to improved diagnosis and treatment of diseases. Quicker, accurate malaria diagnosis will enable faster delivery of clinical services, supporting international development goals for sub-Saharan Africa and other regions of the world affected by malaria. The funding will be used to carry out engineering (robotics), computational research (computer vision and machine learning) and digital health clinical research (pediatric infectious diseases) to design, implement, deploy and test a fully automated system capable of tackling the challenges posed by the human-operated light microscopy currently used in the diagnosis of malaria. The funded research aims to overcome these diagnostic challenges by replacing human-expert optical microscopy with a robotic, automated computer-expert system that assesses equivalent digital optical-microscopy representations of the disease. The Fast, Accurate and Scalable Malaria (FASt-Mal) diagnosis system harnesses the power of state-of-the-art machine learning approaches to support clinical decision making; driven by AI in each step, it allows for constant improvement and scalability. Improved smartphone processing also has the potential to enable diagnostics “on the go” or in remote areas. As smartphone penetration increases, access to mobile diagnostics will expand, magnifying the effect of the improvements in smartphone processing that enable these innovations.
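
To make the idea of a computer-expert classifier concrete, here is a deliberately minimal PyTorch sketch that scores microscopy patches as parasitized or uninfected. The architecture, layer sizes and input shape are assumptions for illustration only; the document does not specify FASt-Mal’s actual models.

    import torch
    import torch.nn as nn

    class SmearClassifier(nn.Module):
        """Toy CNN for parasitized vs. uninfected blood-smear patches (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)  # two classes: parasitized / uninfected

        def forward(self, x):
            x = self.features(x).flatten(1)
            return self.head(x)

    model = SmearClassifier().eval()
    patch = torch.rand(1, 3, 64, 64)  # stand-in for a digitized microscopy patch
    with torch.no_grad():
        probs = torch.softmax(model(patch), dim=1)
    print(probs)  # untrained, so roughly uniform scores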

Sustainable Development Goal 4: Quality Education

Developments in machine learning can raise educational standards through improved educational apps, digital engagement and personalised learning. The X5GON (Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site Global OER Network) project leverages AI to deliver personalized learning. The solution adapts to the user’s needs and learns how to make ongoing customized recommendations and suggestions (a minimal illustration follows below), providing a truly interactive and impactful learning experience. This AI-driven platform will deliver OER content from everywhere, matched to students’ and teachers’ needs at the right time and place. X5GON will develop innovative services for large-scale learning content understanding, large-scale user modelling and real-time learning personalization, with a main processing pipeline dealing with big data analytics in a near-real-time setting; the analytics pipeline is not only relevant for OER but can easily be applied in other domains as well. The term Open Educational Resources was introduced by UNESCO in 2002, and UNESCO adopted OER as a strategy to meet its objectives in education.
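
As a hedged illustration of content-based recommendation (the document does not describe X5GON’s actual algorithms), the following sketch ranks a toy OER catalogue against a user’s viewing history using TF-IDF vectors and cosine similarity:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Tiny illustrative OER catalogue; the real services operate over
    # millions of multilingual, multimodal resources.
    catalogue = {
        "oer-1": "introduction to linear algebra vectors matrices",
        "oer-2": "basics of probability random variables distributions",
        "oer-3": "matrix decompositions eigenvalues applications",
    }

    user_history = "lecture notes on matrices and eigenvalues"

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(catalogue.values())
    user_vec = vectorizer.transform([user_history])

    # Rank resources by similarity to the user's history.
    scores = cosine_similarity(user_vec, doc_matrix).ravel()
    for oer_id, score in sorted(zip(catalogue.keys(), scores), key=lambda p: -p[1]):
        print(f"{oer_id}: {score:.2f}")

The same ranking shape carries over when these bag-of-words features are replaced by richer cross-lingual representations.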

Artificial Intelligence and Computer Science, based on research results and applications from the following fields:

  • Machine Learning
  • Deep Learning
  • Big-Data
  • Data-Mining
  • Text-Mining
  • Web-Mining
  • Multimedia Mining
  • Semantic Technologies
  • Social Network Analysis
  • Language Technologies
  • Natural Language Processing
  • Multi-lingual and Cross-lingual Technologies
  • Scalable, Real-time Data Analysis
  • Data Visualization
  • Knowledge Management
  • Knowledge Reasoning
  • Cognitive Systems.

Project Leader:
John Shawe-Taylor
Professor of Computational Statistics and Machine Learning
Director, Centre for Computational Statistics and Machine Learning (CSML)
Head of Department of Computer Science

University College London
Gower Street
London WC1E 6BT
United Kingdom
Office: 5.13
Tel: +44 (0)20 7679 7680 (Direct Dial)
Internal: 37680
Fax: +44 (0)20 7387 1397
Email: J.Shawe-Taylor@cs.ucl.ac.uk
Website: http://www0.cs.ucl.ac.uk/staff/J.Shawe-Taylor/

 

Contact Person:
Charlotte Penny
Role: Manager, Centre for Virtual Environments, Interaction and Visualisation
University College London
Dept. of Computer Science
66-72 Gower Street
London WC1E 6EA
United Kingdom
Tel: +44 (0)20 3108 7150 (Direct Dial)
Internal: 57150
Email: C.Penny@cs.ucl.ac.uk
Website: http://www.cs.ucl.ac.uk/people/C.Penny.html/