The report Artificial Intelligence Capacity in Africa, commissioned by the Knowledge 4 All Foundation as part of the AI4D grant initiative, provides a comprehensive analysis of the AI landscape in Sub-Saharan Africa (SSA). It highlights significant gaps and opportunities in AI education, research, and policy across the region. The study identifies key stakeholders, including higher education institutions, governments, and the broader AI community, emphasizing their roles in fostering a robust and responsible AI ecosystem. It underscores the importance of capacity building, from enhancing formal education in AI to supporting short-term training programs, while addressing gender and diversity challenges that hinder inclusive AI development. The findings reveal that despite growing interest, many institutions face constraints such as limited funding, inadequate infrastructure, and a lack of AI-focused policies.
Artificial Intelligence Capacity in Sub-Saharan Africa
A major finding of the report is the lack of systematic integration of AI into higher education curricula and research across SSA. While several universities offer AI-related modules within broader science or engineering programs, dedicated AI degrees remain rare. The report points out the critical need for both foundational skills in STEM and the inclusion of humanities and social sciences to ensure ethical and socially relevant AI solutions. It also reveals significant disparities in gender representation, with males dominating AI-related education and professional spaces. This calls for targeted initiatives to promote diversity, such as scholarships and mentorship programs for women and underrepresented groups in AI.
The report also addresses the challenges of establishing a supportive ecosystem for AI development. Limited government engagement in AI policy and strategy formation, coupled with a lack of alignment between academic institutions and industry needs, stifles innovation. Moreover, issues such as unreliable internet connectivity, inadequate access to data, and limited funding for AI startups further hinder growth. The study highlights the need for public-private partnerships to fund research and infrastructure and suggests creating national AI strategies that align with global ethical standards and regional development priorities.
In conclusion, the report offers actionable recommendations to enhance AI capacity in SSA. It calls for governments to develop regulatory frameworks and invest in AI research, while academic institutions are encouraged to integrate AI into their curricula and foster interdisciplinary research. The AI community is urged to champion diversity and inclusion, provide technical expertise, and collaborate with policymakers. Through the coordinated efforts of all stakeholders, SSA has the potential to harness AI as a transformative force for socio-economic development while ensuring equitable and ethical applications.
Report on Responsible Artificial Intelligence in Sub-Saharan Africa
The report Responsible Artificial Intelligence in Sub-Saharan Africa: A General State of Play and Landscape examines the status, challenges, and opportunities for adopting responsible AI in the region. Commissioned by the Knowledge 4 All Foundation as part of the AI4D grant initiative, the report identifies significant gaps in AI readiness, infrastructure, and policy across Sub-Saharan Africa. It underscores the potential of AI to drive progress in achieving sustainable development goals (SDGs), such as poverty reduction, improved healthcare, and better education. However, the report warns that without targeted investments and ethical frameworks, AI may exacerbate existing inequalities. The study highlights the uneven distribution of AI advancements, with certain countries like South Africa, Kenya, and Ghana leading the way due to relatively stronger technological infrastructure and policy initiatives.
Responsible Artificial Intelligence in Sub-Saharan Africa: Landscape and General State of Play
A key finding of the report is the critical role of innovation ecosystems, capacity building, and policy frameworks in fostering responsible AI. The report identifies a growing number of grassroots machine-learning communities, academic partnerships, and emerging start-ups as the foundation for AI development in the region. However, it stresses that many of these initiatives are underfunded and lack robust local leadership. Furthermore, the reliance on imported technologies and frameworks often overlooks the unique socio-economic and cultural contexts of the region, limiting their effectiveness and sustainability. This points to the need for AI solutions tailored to African realities, particularly in sectors like agriculture and public health.
The report also examines the ethical implications of AI deployment in Sub-Saharan Africa, particularly concerning data privacy and algorithmic bias. It highlights how a lack of inclusive data and contextual algorithms can reinforce existing societal inequalities, particularly those affecting marginalized groups and women. Furthermore, the report warns against the unchecked adoption of AI technologies developed in regions with different socio-economic contexts, cautioning that such practices could lead to digital colonialism. It recommends proactive engagement with local stakeholders to ensure AI technologies are culturally sensitive and aligned with the values of the communities they aim to serve.
In conclusion, the report emphasizes the importance of collaborative efforts between governments, academic institutions, and private entities to build a robust and inclusive AI ecosystem in Sub-Saharan Africa. It advocates for increased investment in capacity-building initiatives, improved infrastructure, and the establishment of ethical governance frameworks to support the responsible development of AI. Through strategic interventions and leveraging initiatives like the AI4D grant, Sub-Saharan Africa can position itself as a leader in responsible AI innovation that aligns with global best practices while addressing regional challenges.
International Reports
Report on the workshop on Artificial Intelligence and Women Empowerment
Workshop held in Paris, France, on the 2nd of November 2019
Gender Equality is the 5th United Nations Sustainable Development Goal (SDG). As with the other SDGs, Artificial Intelligence can play a role in promoting good practices or, on the contrary, can reinforce existing biases and prejudices. A recent workshop at IJCAI, in Macao, made the case for a number of projects relating the SDGs to Artificial Intelligence. In order to push forward the questions relating AI to gender equality, the Knowledge for All foundation, the Centre de Recherches Interdisciplinaires de Paris and the Unesco Chair on OER at Université de Nantes jointly organized this one-day workshop. The workshop was built around sessions on the different aspects of the question. We were glad to give a special status to our keynote speaker, Bhavani Rao, from Amrita University, Director of the Ammachi Labs and holder of the Unesco Chair on « Women’s Empowerment and Gender Equality ».
The questions identified with the help of our program committee were the following:
Bias issues: typically, AI will reproduce the bias in the data. If the data contains a prejudice, AI-based decision making can reproduce (and sometimes amplify) that prejudice (a small illustrative sketch follows this list).
Gender issues in AI projects: is it a good idea to add a “gender officer” to an AI project, someone whose role is to make sure that prejudice doesn’t creep in?
AI for Education: in what ways does educating women make special sense? What do we need to look out for?
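To make the first point concrete, here is a minimal, purely illustrative Python sketch (not part of the workshop material; the data, feature names and numbers are invented). A standard classifier is trained on synthetic "historical" decisions that favour one group at equal skill; the learned model then reproduces that prejudice when it predicts.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical hiring data: a skill score and a group attribute (0 or 1).
# Past decisions favoured group 0 even at equal skill, so the labels carry the prejudice.
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)
hired = ((skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Same skill score, different group: the model assigns a noticeably higher
# probability of being hired to group 0, reproducing the bias in its training data.
print(model.predict_proba(np.array([[0.5, 0], [0.5, 1]]))[:, 1])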
But, as the following workshop notes show, the discussion allowed us to reflect upon many different aspects too.
Colin de la Higuera
UNESCO Chair on Technologies for training teachers with open educational resources
The workshop took place at the Centre de Recherches Interdisciplinaires (CRI) – Learning Center extension, from 10am to 5pm, and was advertised online through the websites of the different partners organizing the event (UNESCO Chair at Université de Nantes, Knowledge for All Foundation and CRI).
This meeting built upon work done by a number of partners, concerning gender issues in teaching computing, fair representation of women by AI and more broadly the impact of AI on the United Nations Sustainable Development Goals (SDGs).
These questions correspond to the 5th SDG, and it is already known that AI can either increase the effect of bias or correct it, depending on how it is deployed.
Colin de la Higuera, Université de Nantes, Unesco Chair in Teacher Training Technologies with Open Educational Resources, set the scene and explained how this workshop was linked with the previous workshop organized by the Knowledge for All foundation in Macau in July 2019. He acknowledged the help of the CRI, UNESCO, Université de Nantes, the Knowledge for All Foundation and the Société informatique de France in organizing the event.
Bhavani Rao, from Amrita University, Director of the Ammachi Labs (a human-computer interaction lab) and holder of the Unesco Chair on « Women’s Empowerment and Gender Equality », presented the initiatives led in India and the spirit of the work done there around these questions, also involving Human-Computer Interfaces and Artificial Intelligence. She explained how they have used artificial intelligence to map the various factors contributing to women’s vulnerability across India and to identify “hot spots” and “cold spots” for women’s empowerment. These identified locations take into account more than 250 quantitative data measurements in combination with qualitative data, giving a comprehensive picture of the state of empowerment at each location. Bhavani Rao emphasised the need to track and monitor the progression of the women involved in Ammachi Labs’ (or, for that matter, any) vocational training programmes and to evaluate the impact they have on their communities. Furthermore, she advocated in favour of a holistic approach and warned against initiatives aimed only at solving isolated issues, as there is often unintended fallout that negatively impacts both women and their communities.
John Shawe-Taylor, Unesco Chair in Artificial Intelligence at University College London, presented the different interventions that have been implemented at UCL toward gender equality in a Computer science department. These can be summarized as the 4 As: 1) Arrive: encouraging girls to study computer science, 2) Aspire: creating a supportive environment, 3) Achieve: ensuring they realise their full potential, 4) Advance: ensuring equal opportunities for career progression. The talk also highlighted a number of ways in which AI enabled systems might further accelerate the effectiveness of these interventions.
Wendy Mackay, from Inria (the French National Institute for Research in Computer Science and Automation, ExSitu team), talked about her own experience as a woman and a researcher. She also insisted on the importance of a user-oriented approach: keeping the user in the loop at the different stages of the development of an AI project can help humans develop and learn alongside AI.
Prateek Sibal, co-author of the Unesco publication “Steering AI for Knowledge Societies”, highlighted that while a technological artefact may be neutral, the culture and practices associated with its use can be biased against women. He discussed how different AI-based tools, including facial recognition and digital voice assistants, mirror biases in society. For instance, several labelled image data sets used for training facial recognition programmes yield high error rates in recognising the gender of dark-skinned women compared to men. He pointed out that deep fakes based on Generative Adversarial Networks (GANs) are overwhelmingly used to target female celebrities by creating pornographic videos. He raised concerns around ‘technological determinism’ and advocated for an approach to the development and use of AI that is human-centred and anchored in human rights and ethics. He showed how some uses of facial recognition technology violate human rights and can have life-threatening consequences for people with diverse sexual orientations and gender identities. Vigilance by researchers, civil society and governments is enabling the detection of bias in AI systems; this presents an opportunity to influence the culture of technology by developing artefacts that are gender-equal, that respect diversity, or that even obliterate gender differences, as was demonstrated with the example of gender-neutral digital voice assistants.
A discussion with the room followed. Some of the ideas expressed during the debate were:
A goal is to design interventions while avoiding undesired side effects: ideally one would need a simulator, or even better, a causal model. Can we consider randomized controlled trials?
What if we improve women’s lives in an otherwise unchanged world? This can turn out badly; this was the key point made during Bhavani Rao’s talk.
Michèle Sebag, from CNRS (Centre national de la recherche scientifique) and Université Paris-Saclay, shared some thoughts about biases, glass ceilings, and how to handle them. Even after a first glass ceiling has been overcome (e.g. for women in selective engineering schools), biases remain in the choice of options, with an impact on careers, money, etc. Even more puzzling, the wording of an option makes a significant difference to women’s choices (e.g. “Energy for the XXIst Century” vs “Challenges of Sustainable Development”) despite their technical content being 95% the same: the bias is in the language (footnote: nothing unexpected, see Bourdieu). As both genders might be using two different languages, a debiasing action might be to build translators, and/or to display gendered versions conveying the same content. This would also be fun, which is an important part of effective actions. [Using AI to reinforce self-confidence is another great perspective; note however that undermining self-confidence feeds a multi-billion-dollar industry.]
Frédérique Krupa, Human Machine Design Lab, presented her own trajectory in the field and how, for her PhD on Girl Games: Gender Design and Technology, she studied belief systems as the principal influence, amongst numerous factors, in encouraging boys and discouraging girls from being interested in technology and pursuing a career in ICT. The family factor still determines things far too strongly today: through early choices by parents (or the family environment), little girls are deprived of the exciting activities and only get access to less interesting, less challenging, less time-consuming technological experiences. She followed up on her machine-learning postdoc at 42, noticing the absence of interest in the quality, accuracy and representativeness of data amongst homogeneous teams of coders, mostly male, white, straight and from upper social classes, who do not consider these questions in their quest for optimal performance and chances of publishing – because they are not likely to suffer from bias themselves. The issue of data quality is about having contextual information available to determine what bias may be present in the data and/or its resulting model. She calls for the development of AI UX practices drawn from quantitative social science methods.
A discussion with the room followed with some points:
Detecting known biases is a hot topic in AI (gender, race, wealth, sexual orientation…). But what are the unknown biases? Building experiments to provide evidence for biases defines a challenge to be tackled together with psychologists, neuroscientists and machine-learning researchers;
Another topic is ethical recommendation: to de-bias a recommendation one should have an idea of the targeted, ultimately fair distribution. This is a normative issue: on top of all the others, we need lawyers, politicians, citizens, …, sci-fi writers, …
Sophie Touzé, VetAgro Sup-Université de Lyon, and past President of the Open Education Consortium, presented an original approach and offbeat vision of AI and the warning role it plays. AI forces us to look at the skills unique to humanity, our added value in relation to the intelligence of machines. By challenging us, it provokes change and a salutary awareness of what we need to teach at school and university.
She insisted on which skills are essential. The 4 Cs are Collaboration, Communication, Creativity and Critical thinking, and the 3 Ss are self-awareness, self-motivation and self-regulation. Unfortunately, these skills are taught neither at school nor at university. An app could be developed to help individuals monitor and develop these critical skills throughout their lives.
Empowered by these skills, each citizen of the world could participate in consciously forging the future we want, no longer as individuals but as the human species. The narrative of humanity should not be left in the hands of a few people who present themselves to us as heroes. It’s time for women to participate in writing humanity’s epic story together!
Sophie Touzé concludes with “We are the heroes we’ve been waiting for”.
Mitja Jermol, Unesco Chair on Open Technologies for OER and Open Learning, used his experience in AI-based education projects to present what an education to AI could be. He made the point that there are three issues here: 1. developing AI, 2. using AI and 3. interacting with AI. Most discussions today are related to increasing the know-how in developing AI, which involves two very specific domains, namely mathematics and software programming. The fact is that AI has become mature enough to be taught in other domains as a tool to be used. This is why education should be concerned with the last two issues. Like Sophie Touzé, he insisted on the importance of soft skills. He also described some projects he is involved in that relate to the question, such as the X5-GON project. Opening education, with free and inclusive access to all through a global open education, could be a strong mechanism to empower not just women but any individual in the world. AI plays a major role in this by understanding the complex system of materials, competences, infrastructure and the needs of particular individuals.
Conclusion
In conclusion, it was agreed that these questions should be discussed further. Colin de la Higuera identified two different issues that had been at the core of the day’s discussions. 1. The issue of gender equality, which is just as present in the field of AI as in other fields: female researchers find it difficult to emerge, and only those who are strong or – as Frédérique Krupa remarked – who don’t follow the rules will make it. Yet everyone agrees that a more equal representation in the field is necessary. 2. The effects of AI itself on gender (in)equality, women’s vulnerability and women’s empowerment.
Actions to follow are to push the findings of this workshop forward in Unesco and elsewhere. Furthermore, the Knowledge for All Foundation will also build upon these discussions.
International Reports
Report on Education, Training Teachers and Learning Artificial Intelligence
Report by UNESCO Chair in teacher training technologies with OER
A report delivered by K4A trustees for UNESCO. As Artificial Intelligence (AI) takes an increasingly important part in our lives, the question of educating towards AI becomes increasingly relevant. We argue in this document that, although it may be premature to teach AI itself, an education built around five pillars or core questions should be of great use in the future:
Data awareness, or the capacity to build, manipulate and visualise large amounts of data;
Understanding randomness and accepting uncertainty, or the ability to live in a world where models cease to be deterministic;
Coding and computational thinking, or the skills allowing each of us to create with code and to solve problems through algorithms;
Critical thinking as adapted to the digital society; and finally
A series of questions amounting to understanding our own humanity in view of the changes AI induces.
Artificial intelligence has been described as the new electricity. As such, the belief that it will have a profound influence over many fields, Education among them, is widely shared. For instance, in its 2018 report on artificial intelligence, the French committee chaired by Cédric Villani [25] presented Transforming Education as the first “focus”. In [12], hundreds of applications of AI have been scrutinised and mapped to the relevant technologies.
More recently, the JRC report The Impact of Artificial Intelligence on Learning, Teaching, and Education, by Ilkka Tuomi [20], considered the different aspects of the questions relating artificial intelligence to learning. More generally, the question of transforming education with the help of technology is addressed by Sustainable Development Goal 4, adopted by the United Nations in September 2015, and also by the OECD [13].
In this report we study the different interactions between AI and Education with an emphasis on the following question: If we accept that artificial intelligence is an important element in tomorrow’s landscape, what are the skills and competences which should appear in the future curricula and how can we help to train the teachers so that they can play the required role?
This report is one of the first addressing these questions: as such, it is built less as a synthesis of existing reports incrementing previous work than as an analysis based on the experience of teachers, researchers, academics and practitioners. A recent exception is the work by UNESCO itself, which has been exploring the links between AI and education [15].
What is AI? Why is the issue of general interest?
The history of artificial intelligence goes back to the history of computing. Alan Turing was interested very early in the topic of machine intelligence [21]: some of the ideas he introduced 70 years ago are still extremely relevant today; he argued in favour of randomness and discussed the implications of machine learning for society. Even if Turing didn’t predict the importance of data, he did understand that the machine’s capacity for learning would be key to machine intelligence.
Another of Turing’s contributions to artificial intelligence is what became known as the Turing test: in this test an external (human) examiner can interact with both a machine and a human but, the interface being mechanical, has to judge the content of the answers rather than their form.
The examiner’s goal is to distinguish man from machine; the goal of the artificial intelligence is to confuse the examiner. This leads to the very general definition of artificial intelligence still in use today where it is less about a machine being intelligent than about a machine being able to convince the humans that it is intelligent.
The official birth of artificial intelligence is usually associated with the Dartmouth Summer Research Project on Artificial Intelligence: in 1956 researchers met at Dartmouth College to address the difficult questions to which computing had so far failed to contribute [11].
Today, because of the impact of Machine Learning, and most notably of Deep Learning, alternative definitions for artificial intelligence have been considered: a more business oriented view is that AI matches these deep learning techniques which have a strong impact on industry [12, 30].
Being able to pass the Turing test is no longer the shared goal of research and would not explain the impact of AI today. Today’s successes of AI depend on several factors including machines tailored to the needs of the algorithms and the massive increase in quantity and quality of data. Machine learning techniques work today much better than 10 years ago.
They build better models, make fewer errors in prediction, can make good use of huge volumes of data, are able to generate new realistic data, and are being tuned and adapted to an increasing variety of tasks. As such, these algorithms are no longer aiming at tricking the human into believing that they are intelligent; they are actually replacing (in part) the human in one of her more intelligent tasks: that of building algorithms.
If computing is about algorithms and data, modern AI is a data science: it relies on being able to handle and make the most of data. Whereas the natural trend for computing was to build algorithms to handle data, we may argue that artificial intelligence is about data building the algorithms that build algorithms.
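To make the idea concrete, here is a minimal, hypothetical Python sketch (ours, not the report’s; the examples and numbers are invented). Instead of a programmer writing a rule, a learning algorithm derives one from labelled data: the fitted model is itself a small algorithm that nobody coded by hand.

from sklearn.tree import DecisionTreeClassifier, export_text

# Hand-labelled examples: hours of study, and whether the exam was passed (1) or not (0).
X = [[1], [2], [3], [4], [6], [7], [8], [9]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=1).fit(X, y)

# The learned model is a small if/else rule built from the data, not written by us.
print(export_text(tree, feature_names=["hours_of_study"]))
print(tree.predict([[5]]))  # the learned rule also applies to an unseen example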
Why understanding AI matters
Artificial intelligence is influencing all parts of society where data can be made available and where there is room for improvement, either by automating tasks or by inventing new challenges and needs. In substance, this means that every human activity is being impacted or can be impacted.
For instance, all 17 United Nations sustainable development goals (SDG) are currently being scrutinised by AI experts [8]. The use of AI can lead to complex new situations, which can only be understood through an actual understanding of the technical and conceptual aspects underlying it. In many cases our physical understanding of the world is insufficient to gauge the impact or even the opportunity for AI.
When we read “for i = 1 to 1000000”, intuition is of little use: no human does anything 1,000,000 times in a lifetime! The mathematical world and its full abstraction doesn’t give an adequate answer either. People may imagine artificial intelligence as a process by which a machine does things the same way as we (humans) do, only faster and with more memory, storage space or computation power. But AI doesn’t always work that way. The algorithms will not follow the patterns of our physical world, and an understanding of what they do will not give us a realistic idea of why they work and why they don’t.
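As a small illustration of scale (ours, not the report’s), the loop from the text can be run directly in Python: the million iterations finish in a fraction of a second, which is exactly where everyday physical intuition stops helping.

import time

start = time.perf_counter()
total = 0
for i in range(1, 1_000_001):  # the "for i = 1 to 1000000" of the text
    total += i
elapsed = time.perf_counter() - start

print(total)               # 500000500000
print(f"{elapsed:.3f} s")  # typically well under a second on a modern laptop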
When it comes to training teachers, that leaves us with two approaches. The first supposes that teachers should be able to actually build simple AI systems: they should know how to code and be able to assemble blocks in order to obtain more complex systems, run artificial intelligence algorithms, build models and use them. The second approach supposes that people do not learn how to design but only how to interpret and use. They will then necessarily interpret things through their own, very limited, physical-world values.
AI and Education
The links between AI and Education are not new. They have worked in both directions, but one of these has received, up to today, much more attention than the other [15].
AI for Education
The first conference on Artificial Intelligence and Education dates back more than 20 years: the challenges have since been wide-ranging and are now addressed by strong multidisciplinary communities. Research projects have been funded by the European Union, foundations and individual countries. The goal is to make use of artificial intelligence to support education. An emerging industry has developed, covered by the name Edutech (which isn’t strictly AI), and the question has been studied in a number of reports: [23, 15, 25, 20].
Education for AI
Whereas the question of educating everyone to artificial intelligence is new, the one of training experts for industry has been dealt with for some time. Artificial intelligence has been taught in Universities around the world for more than 30 years. In most cases these topics require some strong foundational knowledge in computer science and in mathematics.
The increasing importance of data science, artificial intelligence and machine learning is currently leading to a modification of computing curricula [16]. Education in artificial intelligence prior to university is, in 2019, a whole new question. If it has not been addressed before, this may be due to several reasons:
The need is novel: while AI has been around for some time, applications for everyday life have been limited. Today, through mobile phones or ambient companions, interaction with AI occurs in a routine way, at least in the more industrialised countries [12].
The issues surrounding education as a global question are huge and AI may not be perceived as essential [14], even when the goal is that of studying education in a digital setting [22]. The efforts made in many countries to introduce an Information and Communication Technologies curriculum have not yet been able to produce results, and, with AI, it seems we are asking for even more today [24]!
It is difficult to introduce the topic without trained teachers. As an example, in 2018 the French government decided to add a new computing curriculum at high-school level (16-18 year old students) with the ambition of introducing artificial intelligence. This proved impossible due to resistance from many sides.
It is still unclear what should be taught if we aim at a teaching which retains some form of validity over the years. What will artificial intelligence be in 20 years, or even 10, or only 5?
Why should we train teachers in AI?
AI applications are going to be present in all areas [12]. As such, one could just add the usage of the applications to the individual skills required by a teacher. But this would probably limit in many ways the capacity of the teachers to adapt to new applications. One can also state that the children are going to be brought up in a world where artificial intelligence will be ambient and can therefore be called AI natives [26]. Should a teacher just know about the key ideas of AI? Or should she be more aware of other questions?
Is teaching AI an enhanced version of teaching computational thinking?
Teaching coding and computational thinking has been advocated strongly since 2012 [28, 7]. Many countries have now introduced the topic. Learning to code isn’t just about acquiring a technical skill: it is about being able to test one’s own ideas instead of just being able to run those of someone else. And of course, there is a strong dose of these ideas in artificial intelligence, which means that knowing how to code can both help one use AI in a creative way and help one understand the underlying questions and concepts. AI is an extension of computing: it encompasses it but also introduces some new ideas and concepts.
Why should a teacher be AI aware?
Many of the reasons for training teachers towards some understanding of AI are very close to those advocated for preparing them for digital skills [24]. Whereas a first goal is to make sure that teachers are digitally aware, and this goal has not been reached yet, how important and urgent is it to be AI aware? Why would that be necessary? Let us discuss some reasons.
The role toward the community. If artificial intelligence is to impact every aspect of society, as many are predicting, citizens and future citizens are going to require guidance, some help to understand and decrypt these technologies.
In order to train skilled learners. An aspect put forward by many analysts is the impact of artificial intelligence on jobs. The more optimistic analyses point out that where many jobs will be lost to robotisation, new jobs will emerge.
And even if these jobs seem to require soft skills, it would seem reasonable that many will be linked, directly or indirectly, with the technical aspects of AI. If in 2019 it seems neither relevant nor possible to train every child in AI, it would on the other hand be necessary that each child be given the principles and bases allowing her to adapt and learn at a later stage.
Because the learning environment is changing and will include AI. Intelligent tutoring systems, tools which make it possible to propose individualised learning experiences, tailored companions… are some of the projects under way which will necessarily lead to situations where the learner is helped. Understanding these tools will be an asset, if not a necessity.
Because AI is a valuable tool for teaching. AI is used today to help the teacher. For example, project X5-GON recommends open educational resources adapted to the needs of a particular teacher [29]. In the same way as today a teacher is penalised by not being able to make use of the digital tools available, tomorrow’s teacher may lose out if she cannot access AI tools in a simple way.
Towards a curriculum. The proposed five pillars.
Artificial intelligence has not reached maturity. The topic, as it was defined in 1956, studied for 40 years, and reaching spectacular results since 2012, is still difficult to understand. It is even more difficult to forecast how the technologies will evolve, even in the near future. While building a full curriculum is beyond the reach of this document, it is possible to put forward five pillars and propose to build upon these. We represent our proposal in Figure 1: we believe five pillars or core questions should be added to the training of teachers (lower part of the figure) and that, on these, in due time, AI could be taught (top of figure).
Figure 1: The AI competences
Uncertainty and Randomness
Data is inconsistent. It does not demonstrate a strict causal nature: with data, the same cause can lead to different effects. Dealing with this legitimate non-determinism in the modelled world, which is going to be used for AI-based decision making, requires the acquisition of alternative skills. Probabilistic reasoning and statistics will need to be taught, but before that, activities allowing children to understand the stochastic nature of most modelled processes, and activities encouraging them to make the best use of the imperfections of the data, are necessary.
Yet AI also means a new form of determinism which deserves our attention: when predictive systems are taken (too) seriously and we are told, through a misuse of data, that our child aged 1 will develop into a scientist or have a complicated social life, not understanding how these predictions work can cause a lot of damage.
An understanding that the forecasts proposed by AI are not ground truths but estimations, and of how these are to be interpreted, is greatly needed. Teaching this may be complicated for didactic reasons: accepting uncertainty also means teaching without implying that the teacher is omniscient and makes no mistakes.
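A minimal numerical sketch of these two points (ours, with invented numbers): the same underlying cause produces different outcomes from run to run, and what can honestly be reported is an estimate with an associated uncertainty, not a ground truth.

import numpy as np

rng = np.random.default_rng(42)
p_true = 0.3  # hidden probability of "success" for one fixed cause (invented value)

for n in (10, 100, 10_000):
    outcomes = rng.random(n) < p_true        # the same cause, n different effects
    estimate = outcomes.mean()               # the "forecast" is an estimation
    std_err = np.sqrt(estimate * (1 - estimate) / n)
    print(f"n={n:6d}  estimated p = {estimate:.3f}  +/- {std_err:.3f}")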
Coding and Computational Thinking
Coding and computational thinking are today in the curricula of many countries, following the recommendations of experts [10, 18]. In many cases, writing AI code is about using programming-language libraries which allow us to manipulate large amounts of data with very few instructions. But a proper usage of these techniques does involve some coding skills [19, 28, 6].
Furthermore, it has been argued that expert users of AI (for example doctors) will need to understand the algorithms in order to know when not to trust the machine-learning decision. Efforts have taken place in different countries to address this question and the related question of training the teachers [10]. In France, the project Class’Code [5, 4] relies upon Open Educational Resources to allow teachers and educators to learn. Computers and robots are obvious artefacts, but an alternative approach is that of Computer Science Unplugged [2].
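As an illustration of the point about libraries (ours, with simulated data), a handful of Python instructions is enough to generate and summarise a million values; actually understanding what each call does, however, is where the coding skills come in.

import numpy as np

rng = np.random.default_rng(1)
measurements = rng.normal(loc=20.0, scale=5.0, size=1_000_000)  # one million simulated values

print(measurements.mean(), measurements.std())  # two instructions summarise the whole data set
print((measurements > 30).sum())                # count how many values exceed a threshold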
Data Awareness
AI is going to rely on data. Whereas the algorithm is at the centre in computing, this is much less the case with AI where, often, most of the effort concerns the collection, preparation and organisation of the data [16]. An education to data (science) will rely on activities where data is collected, visualised, manipulated and analysed [15]. As a side effect, the large amounts of data involved justify teaching algorithms with more care, as testing becomes much more complex.
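A minimal sketch of such an activity (ours; the table, column names and values are invented): a small data set the class might have collected is summarised, a gap in the data is handled, and the rows are reordered to look for patterns.

import pandas as pd

# Data the class might have collected itself (typed in directly here).
df = pd.DataFrame({
    "day": ["Mon", "Tue", "Wed", "Thu", "Fri"],
    "temperature": [12.1, 13.4, None, 11.8, 14.2],   # one missing value: data is imperfect
    "bikes_counted": [104, 98, 120, 87, 143],
})

print(df.describe())                                                    # analyse: summary statistics
df["temperature"] = df["temperature"].fillna(df["temperature"].mean())  # prepare: handle the gap
print(df.sort_values("bikes_counted"))                                  # manipulate: reorder to spot patterns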
Critical Thinking
Social sciences can and should contribute to many of the ethical questions AI raises. Critical thinking is one important aspect, but it is essential that it rely on a real understanding of the way the technology works.
A typical example: when attempting to detect fake news and information – a truly important question – it is often suggested that the answer consists in finding the primary source or in relying on common sense. This is a great 20th-century answer, but it is of less use in many situations on the web involving AI-generated texts, images or videos. Today, the answer comes through a combination of understanding digital-literacy concepts and being able to make use of algorithms to build one’s convictions. In other words, the human alone is going to find it difficult to argue with a machine without the help of another machine.
Yet it would be just as much a mistake to only teach the technology without giving the means to understand the impacts, evaluate the risks and have a historical perception of media. In most reports on AI there is an agreement that an analysis of the ethical implications should be taken into account before deploying. Researchers from Media Sciences have worked on the question for some time [27] and should be encouraged to work with AI specialists.
Post AI Humanism
The previous four pillars can be matched to existing competences, skills and teaching profiles. The one we introduce now may be more difficult to fit in. The key idea is that the progress of AI is making us, as human beings, reconsider some ground truths. It is already known that our interaction with technology has an impact on non-technological attitudes: for example, teachers agree that children’s use of the smartphone, and the specific type of multitasking it introduces, has an effect on their capacity to study, at least in the formal settings proposed by schools. With artificial intelligence, these changes may be even more formidable. We introduce this idea through four examples.
Truth
In 2017, the Libratus system was built by researchers at Carnegie Mellon University [3]. This system beat some of the best poker players in the world. For this, the system used reinforcement learning to build its optimal policy. This included learning to bluff – a necessary skill in poker – without being explicitly trained to bluff. Libratus learned that the best strategy to win was to lie from time to time. In other words, the system was trained to win; and if this included bluffing – which is a socially acceptable form of lying – it did just that.
Experience
In 2016 the AlphaGo system beat go champion Lee Sedol by four victories to one [17]. The system, like most until then, made ample use of libraries containing thousands of games played by experts: the machine built its victories on top of human history and knowledge. Later, the new version, called AlphaGo Zero, was launched, beat AlphaGo by 100 games to 0, and was then adapted to chess. The main difference was that AlphaGo Zero discarded all the human knowledge and just used the rules of the game and the capacity of the machine to learn by playing against itself. The question this raises is: do we need to build society upon its own experience?
Creativity
This question is regularly posed. It can matter legally and intellectually. Today, artificial intelligence can compose music, write scripts, paint pictures, modify our photographs. Through artificial generation new artefacts can be created. It should be noted that in such cases where artificial intelligence is used for artistic creation it is most often reported that a human artist was part of the project. Whilst this may be in part true, it may only be that we need to be reassured.
Yet, again in the area of games, it is interesting to see specialists comment online on the games played by the latest artificial intelligence programs. Whereas some years ago the “brute force” of the machine was put forward, this is much less the case today: the creative nature of the moves is admired by the human grandmasters. A question raised here is: can a machine create without feeling or consciousness? One answer is to say that the result is what matters: if we believe there is creation, do we need the feeling? [9]
Intelligence
Intelligence itself is being impacted by the progress of AI. Each time progress is made and a machine beats man at something which was until then considered to be an activity requiring intelligence, experts invariably announce that the given activity didn’t really need intelligence. More and more, the goal seems to be to define intelligence in such a way as to make it unachievable by a machine.
Extending the model
The pillars described in the previous section must be understood as being able to support a larger framework of competences that teachers and learners will need to master in order to use and create AI systems (see Fig. 1). But they can also be seen as self-contained 21st-century skills which would allow them to make better use of the technologies introduced by and with artificial intelligence.
Linking with the UNESCO ICT-CFT
AI will have to be taught by teachers. There are big differences between countries in teacher training, for example in the basic ICT skills we can rely on when setting up an AI teaching agenda. As a uniform starting point and framework we take the approach promoted by UNESCO, namely the UNESCO ICT Competency Framework for Teachers (ICT-CFT), reflected in a series of documents that have evolved over the past 10 years [24].
In [24], the six aspects of a teacher’s work are scrutinised with respect to the goal of making use of ICT for better teaching:
A1 Understand ICT in Education: how ICT can help teachers better understand the education policies and align their classroom practices accordingly;
A2 Curriculum and assessment: how ICT can allow teachers to better understand these questions but also intervene and propose new modalities;
A3 Pedagogy: how the actual teaching itself can be positively impacted through the informed use of ICT;
A4 Application of Digital Skills: how to make use of the new skills acquired by the learners to support higher-order thinking and problem-solving skills;
A5 Organisation and administration: how to participate in the development of technology strategies at different levels;
A6 Teacher professional learning: how to use technology to interact with communities of teachers and share best practices.
Each aspect is then analysed, with regard to ICT, at three stages: technology literacy (.1), knowledge deepening (.2) and knowledge creation (.3). The report raises two questions:
To what extent does the ICT-CFT offer a good framework for training teachers in AI?
To what extent would the ICT-CFT benefit from the impact of AI?
Figure 2: The alignment between pillars and aspects. Full lines indicate an impact of an aspect on a pillar; dashed lines indicate a contribution of a pillar to an aspect; double lines work both ways.
How the ICT-CFT allows teachers to move on to AI
The ICT-CFT aims at enabling teachers to use ICT in an experienced way and to develop new ideas, learning materials and curricula through ICT. As the different AI pillars presented in this paper all rely on an understanding of how computers, algorithms and data work, the ICT-CFT will be an important stepping stone towards AI. Teachers who are able to master the different tools, strategies and skills proposed in the ICT-CFT will be better able to engage with the questions raised by the proposed pillars. We represent, in Fig. 2, with a full line, the main contributions of the ICT-CFT aspects to the five pillars.
The impact of AI on the ICT-CFT
The arrival of new AI tools for education (like [29]) will probably help better motivate teachers to use ICT: the advantages will become clearer and we can hope that usage will be simplified. Typically, OER are today difficult to construct, to offer and to find. AI and related technologies should make them much more accessible, which would ensure their wider adoption.
On the other hand, a better understanding of the key questions raised by the proposed pillars will have a positive effect on the motivation of the researchers to progress in levels across the different aspects proposed in the ICT-CFT.
For example, a better understanding of the social and ethical implications (Critical Thinking and Post-AI Humanism) would have a positive impact on the way teachers react and on their motivation in training.
The ambitions proposed with the five pillars can also positively impact the ICT-CFT by requiring extra ambition: coding and computational thinking are mentioned there but not recommended, whereas for AI the proposal is that these are necessary skills. Learning to code would, on its own, make the ICT-CFT much easier to fulfil.
We represent, in Fig. 2, with a dashed line, the main contributions of the five pillars to the ICT-CFT.
Artificial Intelligence and Open Educational Resources
ICT Teachers are very much the forerunners of sharing open educational resources (OER): as they naturally use the computer as an object of learning and a learning artefact, they have been very active in producing and sharing OER. This is also the case for AI: one can predict a great benefit for all.
Artificial intelligence is also being used today to provide better tools to publish, share and access OER [29]. Therefore, the education in AI or towards AI proposed in this document should make ample use of OER.
Conclusion
We have presented in this preliminary report five competences or pillars which should take on increasing importance given the penetration of AI in society. Further work should follow, to better understand at what age and in what way the relevant concepts should be introduced, studied and mastered; around these pillars we expect to explain how AI should be taught, both to teachers and to learners.
Acknowledgements
We would like to thank the following persons for their help and expertise: Neil Butcher, Victor Connes, Jaco Dutoit, Maria Fasli, Françoise Soulié Fogelman, Marko Grobelnik, James Hodson, Francois Jacquenet, Stefan Knerr, Bastien Masse, Luisa Mico, Jose Oncina, John Shawe-Taylor, Zeynep Varoglu, Emine Yilmaz.
References
[1] Fathers of the deep learning revolution receive ACM A.M. Turing award. News, 2019. https://www.acm.org/media-center/2019/march/turing-award-2018.
[2] T. Bell, I. H. Witten, M. Fellows, R. Adams, J. McKenzie, M. Powell, and S. Jarman. CS Unplugged. csunplugged.org, 2015.
[3] Noam Brown and Tuomas Sandholm. Superhuman AI for heads-up no-limit poker: Libratus beats top professionals. Science, 359(6374):418–424, 2017.
[4] Class’Code. Class’code, ses principes et leviers. Position paper, 2015. https://project.inria.fr/classcode/classcode-ses-valeurs/.
[5] Class’Code. Maaison : Maîtriser et accompagner l’apprentissage de l’informatique pour notre société numérique. Position paper, 2015. https://drive.google.com/drive/folders/0B42D-mwhUovqQ1RyOW01WUtyR1k.
[6] Informatics Europe. Informatics education: Europe cannot afford to miss the boat. Technical report, 2015. http://www.informatics-europe.org/images/documents/informatics-education-acm-ie.pdf.
[7] A. Fluck, M. Webb, M. Cox, C. Angeli, J. Malyn-Smith, J. Voogt, and J. Zagami. Arguing for computer science in the school curriculum. Educational Technology & Society, 19(3):38–46, 2016.
[8] Knowledge for All Foundation. IJCAI workshop on Artificial Intelligence and United Nations sustainable development goals. Workshop, 2019. https://www.k4all.org/event/ijcai19/.
[9] Yuval Noah Harari. Homo Deus: A Brief History of Tomorrow. Harvill Secker, 2016.
[10] L’Académie des Sciences. L’enseignement de l’informatique en France – il est urgent de ne plus attendre. Position document, 2013. http://www.academie-sciences.fr/fr/activite/rapport/rads_0513.pdf.
[11] John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. A proposal for the dartmouth summer research project on artificial intelligence, august 31, 1955. AI magazine, 27(4):12–12, 2006.
[12] McKinsey. Notes from the AI frontier. insights from hundreds of use cases. Discussion paper, 2018. https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning.
[13] L. Nedelkoska and G. Quintini. Automation, skills use and training. Position document, 2018. Documents de travail de l’OCDE sur les questions sociales, l’emploi et les migrations, n 202, Éditions OCDE, Paris, https://doi.org/10.1787/2e2f4eea-en. https://www.oecd-ilibrary.org/employment/automation-skills-use-and-training_2e2f4eea-en.
[14] High Level Committee on Programmes. Towards a UN system-wide strategic approach for achieving inclusive, equitable and innovative education and learning for all. Report, 2019.
[15] UNESCO Education Sector. Artificial intelligence in education: Challenges and opportunities in sustainable development. Report, 2019.
[16] R. Benjamin Shapiro, Rebecca Fiebrink, and Peter Norvig. How machine learning impacts the undergraduate computing curriculum. Commun. ACM, 61(11):27–29, 2018.
[17] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484–489, 2016.
[18] The Royal Society. After the reboot: computing education in UK schools. Position paper, 2017. https://royalsociety.org/-/media/policy/projects/computing-education/computing-education-report.pdf.
[19] P. Tchounikine. Initier les élèves à la pensée informatique et à la programmation avec scratch. Research paper, 2016. http://lig-membres.imag.fr/tchounikine/PenseeInformatiqueEcole.html.
[20] Ilkka Tuomi. The impact of artificial intelligence on learning, teaching, and education. JRC report, 2018. http://publications.jrc.ec.europa.eu/repository/bitstream/JRC113226/jrc113226_jrcb4_the_impact_of_artificial_intelligence_on_learning_final_2.pdf.
[21] Alan Turing. Computing machinery and intelligence. Mind, 59(236):433–460, 1950.
[22] The UNESCO/Netexplo Advisory Board (UNAB). Human learning in the digital era. Report, 2019. https://unesdoc.unesco.org/ark:/48223/pf0000367761.locale=en.
[23] UNESCO. UNESCO and sustainable development goals. Policy document, 2015. http://en.unesco.org/sdgs.
[24] UNESCO. Unesco ICT competency framework for teachers. Report, 2018. https://unesdoc.unesco.org/ark:/48223/pf0000265721.
[25] Cédric Villani. Donner un sens à l’intelligence artificielle. Position document, 2018. https://www.aiforhumanity.fr/pdfs/9782111457089_Rapport_Villani_accessible.pdf.
[26] Randi Williams, Hae Won Park, and Cynthia Breazeal. A is for artificial intelligence: The impact of artificial intelligence activities on young children’s perceptions of robots. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 2019. https://doi.org/10.1145/3290605.3300677.
[27] Carolyn Wilson, Alton Grizzle, Ramon Tuazon, Kwame Akyempong, and Chi Kim Cheung. Media and information literacy curriculum for teachers. Report, 2014. https://unesdoc.unesco.org/ark:/48223/pf0000192971.locale=en.
[28] Jeannette M. Wing. Computational thinking. Communications of the ACM, 49(3):33–35, 2006.
[29] X5GON. Cross Modal, Cross Cultural, Cross Lingual, Cross Domain, and Cross Site Global OER Network: artificial intelligence and open educational resources. European H2020 project, 2017. https://x5gon.org.
[30] Les Échos. AI for business: Comment faire des entreprises françaises des championnes de l’IA ? White paper, 2019. https://www.lesechos-events.fr/think-tank/ai-business/#fndtn-recommendations.