Gender Equality is the 5th United Nations Sustainable Development Goal (SDG). As with the other SDGs, Artificial Intelligence can play a role in promoting good practices or, on the contrary, can reinforce existing biases and prejudices. A recent workshop at IJCAI, in Macao, made the case for a number of projects relating the SDGs to Artificial Intelligence. To push forward the questions relating AI and gender equality, the Knowledge for All Foundation, the Centre de Recherches Interdisciplinaires de Paris and the UNESCO Chair on OER at Université de Nantes jointly organized this one-day workshop. The workshop was built around sessions on the different aspects of the question. We were glad to give a special status to our keynote speaker, Bhavani Rao, from Amrita University, Director of the Ammachi Labs and holder of the UNESCO Chair on « Women’s Empowerment and Gender Equality ».

The questions identified with the help of our program committee were the following:

  • Bias issues: typically, AI will reproduce the biases in its data. If the data contains a prejudice, decision making based on AI can reproduce (and sometimes amplify) that prejudice.
  • Gender issues in AI projects: is it a good idea to add a “gender officer” to an AI project, someone who can watch out so that prejudice doesn’t creep in?
  • AI for Education: in what ways does educating women call for a special approach? What do we need to look out for?

But, as the following workshop notes show, the discussion allowed us to reflect upon many different aspects too.

Colin de la Higuera

UNESCO Chair on Technologies for training teachers with open educational resources

Université de Nantes

Download the report here or scroll down to read

About the workshop

The workshop took place at the Centre de Recherches Interdisciplinaires (CRI – Learning Center extension) from 10am to 5pm and was advertised online through the websites of the different partners organizing the event (the UNESCO Chair at Université de Nantes, the Knowledge for All Foundation and the CRI).

This meeting built upon work done by a number of partners, concerning gender issues in teaching computing, fair representation of women by AI and more broadly the impact of AI on the United Nations Sustainable Development Goals (SDGs).

These questions correspond to the 5th SDG, and it is already known that AI can either amplify the effect of bias or correct it, depending on how it is deployed.

Colin de la Higuera (Université de Nantes, UNESCO Chair in Teacher Training Technologies with Open Educational Resources) set the scene and explained how this workshop was linked to the previous workshop organized by the Knowledge for All Foundation in Macau in July 2019. He acknowledged the help of the CRI, UNESCO, Université de Nantes, the Knowledge for All Foundation and the Société informatique de France in organizing the event.

Bhavani Rao, from Amrita University, Director of the Ammachi Labs (a human–computer interaction lab) and holder of the UNESCO Chair on « Women’s Empowerment and Gender Equality », presented the initiatives led in India and the spirit of the work done there around these questions, also involving Human Computer Interfaces and Artificial Intelligence. She explained how they have used artificial intelligence to map the various factors contributing to women’s vulnerability across India and to identify “hot spots” and “cold spots” for women’s empowerment. The identified locations take into account more than 250 quantitative data measurements in combination with qualitative data, to represent a comprehensive understanding of the state of empowerment at each location. Bhavani Rao emphasised the need to track and monitor the progression of the women involved in Ammachi Labs’ (or any, for that matter) vocational training programmes and to evaluate the impact they have on their communities. Furthermore, she advocated in favour of a holistic approach and warned against initiatives aimed only at solving isolated issues, as there is often unintended fallout that negatively impacts both women and their communities.
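The report does not detail Ammachi Labs’ actual method, but the core idea of combining many quantitative indicators into a composite score per location and flagging the extremes can be sketched as follows. The districts, indicator values and scoring rule below are invented for illustration; the real pipeline combines over 250 measurements with qualitative data:

```python
def min_max_normalise(values):
    """Rescale a list of numbers to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def empowerment_scores(districts):
    """districts maps a name to a list of indicators (higher = better).
    Returns a composite score per district: the mean of its normalised
    indicators."""
    names = list(districts)
    columns = list(zip(*(districts[n] for n in names)))
    normalised = [min_max_normalise(list(col)) for col in columns]
    return {name: sum(col[i] for col in normalised) / len(normalised)
            for i, name in enumerate(names)}

def spots(scores, k=1):
    """Flag the k lowest-scoring districts as cold spots and the k
    highest as hot spots."""
    ranked = sorted(scores, key=scores.get)
    return {"cold": ranked[:k], "hot": ranked[-k:]}

# Invented sample data, e.g. literacy rate and workforce participation.
districts = {
    "District A": [0.91, 0.78],
    "District B": [0.23, 0.12],
    "District C": [0.55, 0.49],
}
scores = empowerment_scores(districts)
result = spots(scores)
```

In practice the weighting of indicators is itself a normative choice, which is one reason a holistic, qualitative complement matters.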

John Shawe-Taylor, Unesco Chair in Artificial Intelligence at University College London, presented the different interventions that have been implemented at UCL toward gender equality in a Computer science department. These can be summarized as the 4 As: 1) Arrive: encouraging girls to study computer science, 2) Aspire: creating a supportive environment, 3) Achieve: ensuring they realise their full potential, 4) Advance: ensuring equal opportunities for career progression. The talk also highlighted a number of ways in which AI enabled systems might further accelerate the effectiveness of these interventions.

Wendy Mackay (Inria, the French National Institute for Research in Computer Science and Automation, ExSitu team) talked about her own experience as a woman and a researcher. She also insisted on the importance of a user-oriented approach: keeping the user in the loop at the different stages of an AI project’s development could help humans develop and learn alongside AI.

Prateek Sibal, co-author of the UNESCO publication “Steering AI for Knowledge Societies”, highlighted that while a technological artefact may be neutral, the culture and practices associated with its use are often biased against women. He discussed how different AI-based tools, including facial recognition and digital voice assistants, mirror biases in society. For instance, several labelled image data sets used for training facial recognition programmes yield higher error rates when recognising the gender of dark-skinned women than of men. He pointed out that deep fakes based on Generative Adversarial Networks (GANs) are overwhelmingly used to target female celebrities by creating pornographic videos. He raised concerns around ‘technological determinism’ and advocated for an approach to the development and use of AI that is human-centred and anchored in human rights and ethics. He showed that some uses of facial recognition technology violate human rights and can have life-threatening consequences for people with diverse sexual orientations and gender identities. Vigilance by researchers, civil society and governments is enabling the detection of bias in AI systems; this presents an opportunity to influence the culture of technology by developing artefacts that are gender-equal, that respect diversity, or that even obliterate gender differences, as was demonstrated with the example of gender-neutral digital voice assistants.
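The kind of disparity described here can be made concrete with a simple audit: compute a system’s error rate separately for each demographic group and compare. This is only a minimal sketch; the groups, labels and numbers below are fabricated for illustration and are not taken from any real evaluation:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) triples.
    Returns the fraction of misclassified examples per group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Fabricated predictions from a hypothetical gender classifier.
records = [
    ("darker-skinned women", "F", "M"),
    ("darker-skinned women", "F", "M"),
    ("darker-skinned women", "F", "F"),
    ("darker-skinned women", "F", "F"),
    ("lighter-skinned men", "M", "M"),
    ("lighter-skinned men", "M", "M"),
    ("lighter-skinned men", "M", "M"),
    ("lighter-skinned men", "M", "F"),
]
rates = error_rates_by_group(records)
```

Disaggregating metrics in this way is the basic move behind published audits of commercial gender classifiers: an overall accuracy figure can hide a large gap between groups.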

A discussion with the room followed. Some of the ideas expressed during the debate were:

  • A goal is to design interventions while avoiding undesired side effects: ideally one would need a simulator, or even better a causal model. Can we consider randomized controlled trials?
  • What if we improve women’s lives in an otherwise unchanged world? This can turn out badly; this was a key point of Bhavani Rao’s talk.

Michèle Sebag, from CNRS (Centre national de la recherche scientifique) and Université Paris-Saclay, shared some thoughts about biases, glass ceilings, and how to handle them. Even after a first glass ceiling has been overcome (e.g. for women in selective engineering schools), biases remain as to the choice of options, with an impact on careers, money, etc. Even more puzzling, the wording of an option makes a significant difference to women’s choices (e.g. “Energy for the XXIst Century” vs “Challenges of Sustainable Development”) despite their technical content being 95% the same: the bias is in the language (footnote: nothing unexpected, see Bourdieu). As the two genders might be using two different languages, a debiasing action might be to build translators, and/or to display gendered versions conveying the same content. This would also be fun, which is an important part of effective actions. [Using AI to reinforce self-confidence is another great perspective; note however that undermining self-confidence feeds a multi-billion-dollar industry.]

Frédérique Krupa, Human Machine Design Lab, presented her own trajectory in the field and how, for her PhD on Girl Games: Gender Design and Technology, she studied belief systems as the principal influence, amongst numerous factors, encouraging boys and discouraging girls to take an interest in technology and pursue a career in ICT. The family factor still plays far too strong a determining role today: through early choices made by the parents (or the family environment), little girls were deprived of the exciting activities and only given access to less interesting, less challenging, less time-consuming technological experiences. She followed up with her machine learning postdoc at 42, noting the lack of interest in the quality, accuracy and representativity of data amongst homogeneous teams of coders, mostly male, white, straight and from upper social classes, who do not consider these questions in their quest for optimal performance and chances of publishing, because they themselves are not likely to suffer from bias. The issue of data quality is about having contextual information available to determine what bias may be present in the data and/or the resulting model. She called for the development of AI UX practices, built on quantitative social science methods.

A discussion with the room followed with some points:

  • Detecting known biases (gender, race, wealth, sexual orientation…) is a hot topic in AI. But what are the unknown biases? Building experiments to provide evidence for biases defines a challenge to be tackled together with psychologists, neuroscientists and machine learning researchers;
  • Another topic is ethical recommendation: to de-bias a recommendation, one should have an idea of the targeted, ultimately fair distribution. This is a normative issue: we need (on top of all the others) lawyers, politicians, citizens, …, sci-fi writers, …
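One way to make “having an idea of the targeted fair distribution” operational is to measure how far the distribution of recommendations actually served is from a chosen target. A minimal sketch, using total variation distance and made-up numbers (the categories and the 50/50 target are illustrative assumptions, not a prescription):

```python
def total_variation(observed, target):
    """Half the L1 distance between two categorical distributions,
    given as dicts mapping category -> probability. Ranges from 0
    (identical) to 1 (disjoint support)."""
    categories = set(observed) | set(target)
    return 0.5 * sum(abs(observed.get(c, 0.0) - target.get(c, 0.0))
                     for c in categories)

# Made-up example: share of job ads recommended to each gender,
# compared against a 50/50 target chosen as the normative goal.
observed = {"women": 0.2, "men": 0.8}
target = {"women": 0.5, "men": 0.5}
gap = total_variation(observed, target)
```

The code is the easy part; choosing the target distribution is exactly the normative question the discussion pointed at, which is why it cannot be left to engineers alone.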

Sophie Touzé, VetAgro Sup-Université de Lyon, and past President of the Open Education Consortium, presented an original, offbeat vision of AI and of the warning it represents. AI forces us to look at the skills unique to humanity, our added value with respect to the intelligence of machines. By challenging us, it provokes change and a salutary awareness of what we need to teach at school and university.

She insisted on the skills that are essential: the 4 Cs (Collaboration, Communication, Creativity and Critical thinking) and the 3 Ss (self-awareness, self-motivation and self-regulation). Unfortunately, these skills are taught neither at school nor at university. An app could be developed to help individuals monitor and develop these critical skills throughout their lives.

Empowered by these skills, each citizen of the world could take part in consciously forging the future we want, no longer as individuals but as the human species. The narrative of humanity should not be left in the hands of a few people who present themselves to us as heroes. It is time for women to take part in writing humanity’s epic story together!

Sophie Touzé concluded with “We are the heroes we’ve been waiting for”.

Mitja Jermol, UNESCO Chair on Open Technologies for OER and Open Learning, used his experience in AI-based education projects to present what an education to AI could be. He made the point that there are 3 issues here: 1. developing AI, 2. using AI and 3. interacting with AI. Most discussions today relate to increasing the know-how for developing AI, which involves two very specific domains, namely mathematics and software programming. But AI has become mature enough to be taught to other domains as a tool to be used, which is why education should be concerned with the last two issues. Like Sophie Touzé, he insisted on the importance of soft skills. He also described some projects related to the question in which he is involved, such as the X5-GON project. Opening education, with free and inclusive access for all through a global open education, could be a strong mechanism to empower not just women but any individual in the world. AI plays a major role in this by understanding the complex system of materials, competences, infrastructure and the needs of each particular individual.


As a conclusion, it was agreed that these questions should be further discussed. Colin de la Higuera believed that 2 different issues had been at the core of the day’s discussions. 1. The issue of gender equality, which is just as present in the field of AI as in other fields: female researchers find it difficult to emerge, and only those who are strong or, as Frédérique Krupa remarked, who don’t follow the rules, will make it. Yet everyone agrees that a more equal representation in the field is necessary. 2. The effects of AI itself on gender (in)equality, women’s vulnerability and women’s empowerment.

The actions to follow are to push the findings of this workshop forward within UNESCO and elsewhere. Furthermore, the Knowledge for All Foundation will also build upon these discussions.