News Archives

Knowledge 4 All Foundation Concludes Successful Collaboration with European AI Excellence Network ELISE

Knowledge 4 All Foundation is pleased to announce the successful completion of its participation in the European Learning and Intelligent Systems Excellence (ELISE) project, a prominent European Network of Artificial Intelligence Excellence Centres. ELISE, part of the EU Horizon 2020 ICT-48 portfolio, originated from the European Laboratory for Learning and Intelligent Systems (ELLIS) and concluded in August 2024.

The European Learning and Intelligent Systems Excellence (ELISE) project, funded under the EU’s Horizon 2020 programme, aimed to position Europe at the forefront of artificial intelligence (AI) and machine learning research.

Throughout the project, Knowledge 4 All Foundation collaborated with leading AI research hubs and associated fellows to advance high-level research and disseminate knowledge across academia, industry, and society. The Foundation contributed to various initiatives, including mobility programs, research workshops, and policy development, aligning with ELISE’s mission to promote explainable and trustworthy AI outcomes.

The Foundation’s involvement in ELISE has reinforced its commitment to fostering innovation and excellence in artificial intelligence research. By engaging in this collaborative network, Knowledge 4 All Foundation has played a role in positioning Europe at the forefront of AI advancements, ensuring that AI research continues to thrive within open societies.

Knowledge 4 All Foundation Completes Successful Engagement in European AI Excellence Network HumanE-AI-Net

Knowledge 4 All Foundation (K4A) is pleased to announce the successful completion of its engagement in the HumanE AI Network, a prominent European Network of Artificial Intelligence (AI) Excellence Centres. This initiative has been instrumental in advancing human-centric AI research and fostering collaboration across Europe.

Both HumaneAI-Net and ELISE were part of the H2020 ICT-48-2020 call, fostering AI research excellence in Europe.

The HumanE AI Network, comprising leading European research centres, universities, and industrial enterprises, has focused on developing AI technologies that align with European ethical values and societal norms. K4A’s participation in this network has contributed to shaping AI research directions, methods, and results, ensuring that AI advancements are beneficial to individuals and society as a whole.

K4A remains committed to advancing AI research and development, building upon the foundations established through these collaborations. The foundation looks forward to future opportunities to contribute to the global AI community and to promote the responsible and ethical development of AI technologies.

Knowledge 4 All Foundation Completes NLP Projects with Lacuna Fund, Transitions Efforts to Deep Learning Indaba Charity


The Knowledge 4 All Foundation is pleased to announce the successful completion of its Natural Language Processing (NLP) projects under the Lacuna Fund initiative. These projects aimed to develop open and accessible datasets for machine learning applications, focusing on low-resource languages and cultures in Africa and Latin America.

The portfolio includes impactful initiatives such as NaijaVoice, which focuses on creating datasets for Nigerian languages, Masakhane Natural Language Understanding, which advances NLU capabilities for African languages, and Masakhane Domain Adaptation in Machine Translation, targeting improved domain-specific machine translation systems. The Foundation’s efforts have significantly contributed to assisting African researchers and research institutions in creating inclusive datasets that address critical needs in these regions.

As part of a strategic transition, the Foundation has entrusted the continuation and expansion of these initiatives to the Deep Learning Indaba charity. The Deep Learning Indaba, dedicated to strengthening machine learning and artificial intelligence across Africa, is well-positioned to build upon the groundwork laid by Knowledge 4 All. The Foundation extends its gratitude to the Deep Learning Indaba charity for taking over these projects and is confident that their expertise will further the mission of fostering inclusive and representative AI development in the future.

Knowledge 4 All Foundation Celebrates Successful Completion of Erasmus+ ENCORE+ Project, Advancing Open Education Across Europe

The Knowledge 4 All Foundation (K4A) has successfully concluded its participation in a significant European Erasmus+ project: the European Network for Catalysing Open Resources in Education (ENCORE+). The project aimed to enhance the adoption and innovation of Open Educational Resources (OER) across Europe, fostering collaboration between higher education institutions and businesses.

The ENCORE+ project was part of the Erasmus+ programme, call EAC/A02/2019, KA2: Cooperation for innovation and the exchange of good practices – Knowledge Alliances.

Throughout its involvement, K4A played a pivotal role in developing a European OER innovation area by connecting stakeholder communities and fostering knowledge exchange. The foundation contributed to establishing open, distributed, and trusted community review strategies for OER, engaging businesses and higher education institutions in dialogues on quality and innovation. Additionally, K4A supported the integration of organizational strategies for OER in both business and academia, encouraging co-learning from successful implementations.

The successful completion of this project marks a significant milestone in K4A’s mission to promote open education and knowledge sharing. The foundation remains committed to advancing OER initiatives and looks forward to future collaborations that will continue to drive innovation and inclusivity in education and training across Europe.

HumaneAI-Net results and project legacy

The HumaneAI-Net project has significantly advanced human-centered artificial intelligence by developing innovative resources and fostering collaboration across Europe. Key achievements include the creation of the Humane AI Database, a comprehensive repository summarizing essential project outputs, and the establishment of the Hybrid Human Artificial Intelligence (HHAI) conference, which serves as a platform for interdisciplinary AI research. Additionally, the project has produced diverse datasets, such as the SOMTUME dataset, containing textual information from social media and news sites, and the DIASER corpus, comprising over 37,000 annotated dialogues. These contributions have been instrumental in promoting ethical AI practices and enhancing human-AI collaboration.

For a comprehensive overview of the project’s legacy and access to these resources, check the following list:


Core Legacy Items

  • HHAI conference (Hybrid Human Artificial Intelligence): https://hhai-conference.org/
  • ADR Topic Group – Generative AI for Human-AI Collaboration: coming soon
  • Springer handbook on Human-AI Collaboration: coming soon (email haimgmt@dfki.de if you are interested in collaborating 🙂)


Datasets

ID | Microproject producing the dataset (linked to HAI Net page) | Description | Link
DS-001 | TMP-003 | Available on GitHub | link
DS-002 | TMP-007 | Reviewed Papers and Coding Spreadsheet | link
DS-003 | TMP-016 | (Dataset 1) Example Jupyter Notebooks – Uwe Köckemann, Fabrizio Detassis, Michele Lombardi | link
DS-004 | TMP-016 | (Dataset 2) Example Jupyter Notebooks – Uwe Köckemann, Fabrizio Detassis, Michele Lombardi | link
DS-005 | TMP-016 | (Dataset 3) Example Jupyter Notebooks – Uwe Köckemann, Fabrizio Detassis, Michele Lombardi | link
DS-006 | TMP-022 | Dataset: pilot dataset – Kunal Gupta & Mark Billinghurst | link
DS-007 | TMP-022 | Dataset: eye-tracking data during the encoding phase | link
DS-008 | TMP-023 | The SOMTUME dataset contains textual information gathered from social media and news sites, segment: Trustworthiness Information Content (TIC). The texts pertain to the migration of Ukrainians to the European Union from February 2022 to August 2023 | link
DS-009 | TMP-023 | The SOMTUME dataset contains textual information gathered from social media and news sites, segment: Trustworthiness Uncertain Information Content (UIC). The texts pertain to the migration of Ukrainians to the European Union from February 2022 to August 2023 | link
DS-010 | TMP-036 | Dataset: DIASER corpus – Ondrej Dusek: a corpus of 37,173 annotated dialogues with unified and enhanced annotations, built from existing open dialogue resources | link
DS-011 | TMP-037 | Loan Approval 1: dataset | link
DS-012 | TMP-037 | Loan Approval 2: dataset | link
DS-013 | TMP-039 | Dataset: PEEK Dataset – Sahan Bulathwela | link
DS-014 | TMP-059 | A unified multi-domain dialogue dataset, introduced and released along with the paper “Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction” (Burdisso et al., EMNLP 2024 main conference) | link
DS-015 | TMP-060 | A list of relevant datasets | link
DS-016 | TMP-060 | Survey of Point Processes resources | link
DS-017 | TMP-062 | GitHub repository of datasets and software | link
DS-018 | TMP-068 | PET: a new human-annotated dataset of processes in a corpus of process descriptions | link
DS-019 | TMP-081 | Evaluation and development data sets for speech translation for meetings (English->Latvian, Latvian->English, and Lithuanian->English) | link
DS-020 | TMP-081 | ELITR minuting corpus: an automatic minuting test set for the AutoMin 2023 shared task on automatic creation of meeting summaries (“minutes”) for English and Czech | link
DS-021 | TMP-084 | SynSemClass 3.5 dataset | link
DS-022 | TMP-086 | A survey of tools and datasets for multimodal perception with transformers | link
DS-023 | TMP-091 | Data | link
DS-024 | TMP-094 | Generator for preference data – Bruno Veloso, Luciano Caroprese, Matthias Konig, Sonia Teixeira, Giuseppe Manco, Holger H. Hoos, and Joao Gama | link
DS-025 | TMP-096 | Dataset without topics | link
DS-026 | TMP-096 | Dataset with topics | link
DS-027 | TMP-099 | Individual subject trajectories – Annalisa Bosco | link
DS-028 | TMP-102 | A GitHub repository with a detailed analysis of 36 existing datasets and papers according to our desiderata and checklist | link
DS-029 | TMP-103 | Data can be found here | link
DS-030 | TMP-107 | HCN dataset: news articles in the domain of health and climate change, annotated with the major claim, claimer(s), and claim object(s) | link
DS-031 | TMP-117 | Dataset of EEG recordings corresponding to easy and difficult decisions | link
DS-032 | TMP-118 | 3 extensions of the HeLiS ontology – Mauro Dragoni | link
DS-033 | TMP-124 | Dataset showing the evaluated VAAs and the frameworks used to evaluate them | link
DS-034 | TMP-130 | A cleaned and curated dataset of 70 years of Senate activities from the year 1333 | link

Software and Tools

ID | Microproject producing the tool (linked to HAI Net page) | Description | Link
TL-001 | TMP-001 | Labelling tool repository | link
TL-002 | TMP-003 | This repository contains the implementation of the model presented in the paper “Modelling Concept Drift in Dynamic Data Streams for Recommender Systems” | link
TL-003 | TMP-004 | C# code for the Geometry Friends human-AI collaboration study | link
TL-004 | TMP-005 | The code of the ABM can be found in this repository | link
TL-005 | TMP-008 | Program/code: Crowdnalysis Python package | link
TL-006 | TMP-009 | Interpretable Fair Abstaining Classifier | link
TL-007 | TMP-010 | Chatbot code | link
TL-008 | TMP-012 | Open-source tool for training ASR models for dysarthric speech. The repository contains a baseline recipe to train a TDNN-CNN hybrid ASR system, prepared for the TORGO dataset, and an end-to-end model using the ESPnet framework, prepared for the UASpeech dataset | link
TL-009 | TMP-016 | Program/code: Python library: Moving Targets via AIDDL | link
TL-010 | TMP-018 | Mass Media Impact on Opinion Evolution in Biased Digital Environments: a Bounded Confidence Model | link
TL-011 | TMP-025 | Methods and Tools for Causal Discovery and Causal Inference | link
TL-012 | TMP-031 | Meta-control decision-making experiment | link
TL-013 | TMP-037 | Discovery Framework (program/code) | link
TL-014 | TMP-037 | 2 Experiments | link
TL-015 | TMP-039 | Program/code: TrueLearn Model | link
TL-016 | TMP-039 | Program/code: Semantic Networks for Narratives | link
TL-017 | TMP-044 | EvalSubtitle: tool for reference-based evaluation of subtitle segmentation | link
TL-018 | TMP-051 | Backend code | link
TL-019 | TMP-053 | Patent under review for FPGA-based prototype | link
TL-020 | TMP-055 | The base game | link
TL-021 | TMP-055 | The extended game | link
TL-022 | TMP-057 | Python package providing a grey-box NLP model to assist qualitative analysts | link
TL-023 | TMP-058 | Package page at the Python Package Index | link
TL-024 | TMP-059 | Source code comes with tool-like scripts to convert any collection of dialogs to a dialog flow automatically | link
TL-025 | TMP-059 | The code repository for long-context ASR is public | link
TL-026 | TMP-065 | A software library to help analyze crowdsourcing results (2024) | link
TL-027 | TMP-071 | Web-services library | link
TL-028 | TMP-082 | Prototype implementation | link
TL-029 | TMP-083 | T-KEIR | link
TL-030 | TMP-083 | erc-unibo-module | link
TL-031 | TMP-084 | SynSemClass 3.5 browser | link
TL-032 | TMP-089 | A bundle to replicate a SUMO simulation over Milano with 15k vehicles, 40% of them routed | link
TL-033 | TMP-090 | Dataset | link
TL-034 | TMP-091 | PyTorch implementation of the Iterative Local Refinement (ILR) algorithm | link
TL-035 | TMP-094 | Self hyper-parameter tuning | link
TL-036 | TMP-095 | Contribution to a computational theory called POSG, a multi-agent framework for human-AI interaction | link
TL-037 | TMP-096 | Repository with the code used to build and study the datasets | link
TL-038 | TMP-097 | GitHub link to the code of the simulator for the new dynamic model | link
TL-039 | TMP-099 | Program/code: recurrent neural network codes | link
TL-040 | TMP-101 | Program/code: Proactive Behavior Generation – open-source system | link
TL-041 | TMP-101 | Program/code: Playground, Jupyter Notebook / Google Colab | link
TL-042 | TMP-104 | CKR Datalog Rewriter | link
TL-043 | TMP-107 | Website demo | link
TL-044 | TMP-107 | Services for claim identification and the retrieval engine | link
TL-045 | TMP-107 | Service for text simplification | link
TL-046 | TMP-108/TMP-034 | SAI Simulator for Social AI Gossiping | link
TL-047 | TMP-109 | Pest Control Game demo | link
TL-048 | TMP-109 | The Pest Control Game experimental platform | link
TL-049 | TMP-113 | Prototype of a dialogue system that deliberates on top of the social context, in which the dialogue scenarios are easy to author | link
TL-050 | TMP-114 | Prototype | link
TL-051 | TMP-120 | Diurnal Patterns in the Spread of COVID-19 Misinformation on Twitter within Italy | link
TL-052 | TMP-124 | Trustworthiness of Voting Advice Applications in Europe | link
TL-053 | TMP-126 | Code for audio data collection | link
TL-054 | TMP-126 | Code for end-to-end response generation | link
TL-055 | TMP-130 | VLD Series Viewer | link
TL-056 | TMP-133 | X5Learn Platform | link
TL-057 | TMP-133 | TrueLearn codebase | link
TL-058 | TMP-133 | TrueLearn Python library | link

Tutorials and Reports

ID | Microproject producing the resource (linked to HAI Net page) | Description | Link
TR-001 | TMP-016 | Tutorial: Moving Targets tutorial | link
TR-002 | TMP-038 | EduCourse: open lectures and hands-on practicals | link
TR-003 | TMP-042 | Seminar: research seminar on Ethics and AI for PhD students, postdoctoral scholars, and research fellows at the University of Kaiserslautern-Landau (Winter 2023–2024) | Reach out to the project contact person for access
TR-004 | TMP-058 | Tutorial: tutorial page documenting how to use the package | link
TR-005 | TMP-059 | Tutorial: a Jupyter notebook tutorial for joint speech-text embeddings for spoken language understanding | link
TR-006 | TMP-059 | Tutorial: part 1 (on dialogue modelling) | link
TR-007 | TMP-059 | Tutorial: part 2 (on LLMs) | link
TR-008 | TMP-082 | Report: arXiv technical report on formalization | link
TR-009 | TMP-086 | Tutorial: a tutorial on the use of transformers for multimodal perception | link
TR-010 | TMP-086 | Report: report on challenges for the use of transformers for multimodal perception and interaction | link
TR-011 | TMP-096 | Report: report summarizing the detailed results | link
TR-012 | TMP-103 | A pre-registration for the demographic study | link
TR-013 | TMP-104 | Report: technical report | link
TR-014 | TMP-121 | Seminar: the Mossos d’Esquadra, the police authority in Barcelona | Reach out to the project contact person for access
TR-015 | TMP-121 | Seminar: the police education unit at Umeå, Sweden | Reach out to the project contact person for access
TR-016 | TMP-123 | Report: report of applicable mechanisms and formats for AI innovation; report of the initial workshop | link
TR-017 | TMP-123 | Report: white paper – Methods for AI Implementation | link
TR-018 | TMP-126 | Report: report for end-to-end response generation | link
TR-019 | TMP-131 | EduCourse: slides | link

K4A-supported research projects in African natural language processing now featured in published study


K4A has been instrumental in contributing to the roadmap for African language technologies. The new study investigates the motivations, focus and challenges faced by stakeholders at the core of the NLP ecosystem in an African context.

By identifying and interviewing core stakeholders in the NLP process, the study proposes a number of recommendations for policymakers, AI researchers, and other relevant stakeholders to improve the development of language content and language technology.

Graphical abstract of the study published in Patterns 4, 100820, August 11, 2023

The K4A grantees have put forward the following recommendations for stakeholders working in the African language ecosystem:

  • Language acquisition of Indigenous African languages, primarily by Africans, should be better supported, and technology is a means to do this, as has been the case for many other non-African languages.
  • Basic tooling to support content creation on digital platforms, such as digital dictionaries, thesauruses, keyboards supporting diacritics where relevant, and spell checkers that recognize African names and places without error, should be prioritized.
  • Language tools and processes for content moderation and to catch and control the spread of misinformation online in Indigenous African languages should be developed and actively used.
  • Language careers and the professional opportunities available, particularly as pertains to Indigenous African languages, should be made more visible to students earlier in their education so as to generate greater interest in these fields in tertiary education.
  • AI language tools that augment human activities as opposed to tools seeking to replace them should be the intentional design choice, especially given the current dearth of tooling and data for African languages.
  • Computational linguistics components should be introduced into the educational curricula of disciplines adjacent to and working with language, e.g., linguistics and journalism, with an emphasis on the role they can play in the development of ethical and inclusive AI so as to encourage a pipeline of cross-discipline stakeholders working to build language technology.
  • Professional training opportunities to enable multilingual individuals to venture into language careers should be increased.
  • The study of contemporary use of language in Africa should be emphasized, given increasing urbanization and the multicultural nature of the continent.
  • Funding for dataset creation and annotation, both of which can be time-consuming and expensive tasks, should be increased.
  • African language policies, particularly those pertaining to education and provision of government services, should be better implemented with the aid of emerging language tools and technologies.
  • Digital licensing and funding should be made suitable to support legal cases against non-African corporations that use open African data.
  • An “ethical data curation toolkit,” which is informed by information scientists, data privacy experts, and machine learning bias experts, would empower dataset curators with the knowledge and skills to perform informed data curation.
  • The toolkit should be accompanied by a workshop in which practical training and discussions can take place.
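The recommendation on basic tooling above can be made concrete with a small illustration. The following Python sketch (hypothetical, not from the study) shows why diacritic-aware tooling matters: the naive "ASCII folding" normalization common in search and spell-check pipelines silently strips the tone-marking diacritics that carry meaning in languages such as Yorùbá.

```python
import unicodedata

def ascii_fold(text: str) -> str:
    """Naive normalization often used in text pipelines:
    decompose characters (NFD) and drop all combining marks."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# Tone marks in Yorùbá distinguish words; folding destroys them.
print(ascii_fold("Yorùbá"))  # -> "Yoruba": the grave and acute accents are lost
```

Tooling of the kind the study recommends (dictionaries, keyboards, spell checkers) would instead treat such marks as first-class characters rather than noise to be stripped.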

NAIXUS founding partners meet at the Deep Learning Indaba 2023

The NAIXUS project (Network of Excellence on AI and the United Nations SDGs) convened a significant meeting during the Deep Learning Indaba 2023. The purpose of this meeting was to discuss the progress of the project, share insights, and plan future actions to strengthen the network’s impact on advancing the United Nations Sustainable Development Goals (SDGs) through artificial intelligence (AI) research. K4A is a founding partner of the NAIXUS network.

Key Discussion Points:

  1. Project Overview: The meeting began with an overview of the NAIXUS project’s objectives and milestones achieved since its inception. The project aims to connect AI researchers worldwide to collaborate on SDG-related research, advocate for ethical AI practices, and contribute to policy discussions.
  2. Membership and Collaboration: Members discussed the growth of the NAIXUS network and the importance of expanding its reach to include researchers from diverse geographical regions and fields of expertise. Participants emphasized the need for cross-disciplinary collaboration to address complex sustainability challenges effectively.
  3. Research Focus Areas: The meeting highlighted the key research focus areas within the project, including healthcare, climate action, poverty reduction, and education. Members presented their ongoing research projects related to these themes and shared preliminary findings.
  4. Data Accessibility: Data accessibility emerged as a critical topic of discussion. Members discussed strategies for promoting open data sharing, building partnerships with organizations possessing relevant data sets, and ensuring data privacy and security in AI research.
  5. Policy and Advocacy: The NAIXUS project’s advocacy efforts were discussed, including engagement with policymakers and international organizations. Members shared success stories and outlined plans for future advocacy initiatives to influence AI policy in alignment with the SDGs.
  6. Ethical AI Guidelines: The meeting acknowledged the importance of ethical considerations in AI research. A working group was established to draft a set of ethical guidelines for AI researchers working on SDG-related projects, with a commitment to responsible AI development.

Outcomes and Next Steps:

  1. Continued Collaboration: Members further committed to strengthening collaboration within the NAIXUS network. Plans were made to organize webinars, workshops, and virtual conferences to facilitate knowledge exchange and foster partnerships.
  2. Research Publication: The network emphasized the significance of publishing research findings in reputable journals and conferences to contribute to the academic discourse on AI for SDGs. Members pledged to submit their work to relevant outlets.
  3. Academic Journal: Plans were discussed to finalise and launch a centralized repository accessible to researchers working on SDG projects. This resource will help address academic excellence in the field of machine learning and development.
  4. Advocacy Campaign: A dedicated advocacy campaign was proposed to engage with policymakers and raise awareness about the role of AI in advancing the SDGs. Members will work together to create policy briefs and position papers.

The NAIXUS project meeting at Deep Learning Indaba 2023 was a productive and collaborative gathering of AI researchers committed to addressing the United Nations’ SDGs through responsible and ethical AI research. The outcomes of the meeting, including continued collaboration, research publication, advocacy efforts, and data accessibility initiatives, will contribute significantly to the project’s mission and its impact on global sustainable development efforts. The project remains dedicated to fostering a global network of researchers working together to harness the power of AI for the betterment of society and the achievement of the SDGs.

Workshop at the Deep Learning Indaba 2023: “Building a Global Network of AI Researchers On AI and the United Nations SDGs”

K4A, as a core member of the Network of Excellence NAIXUS, a multi-stakeholder initiative aimed at bridging the gap between AI and sustainable development, co-hosted a meeting and a workshop in Accra, Ghana. Both events were held as part of the Deep Learning Indaba 2023 Forum on September 8 and 9 and were co-hosted by the NAIXUS members from Africa.

The Deep Learning Indaba 2023 featured a dynamic workshop titled “Building a Global Network of AI Researchers on AI and the United Nations SDGs.” The workshop aimed to foster collaboration among AI researchers and practitioners to address the challenges posed by the United Nations Sustainable Development Goals (SDGs).

Held at the Indaba, this workshop brought together Indaba attendees from diverse backgrounds to discuss, share insights, and develop strategies for harnessing the power of artificial intelligence to advance the SDGs.

Key Workshop Themes and Discussions:

  1. The Role of AI in SDGs: The workshop began with an exploration of the fundamental role AI can play in achieving the SDGs. Participants discussed how AI can be applied to areas such as healthcare, education, poverty alleviation, and climate change mitigation. The consensus was that AI has immense potential to drive progress in these critical areas.
  2. Challenges and Ethical Considerations: A substantial part of the workshop focused on the challenges and ethical considerations associated with AI in SDG-related projects. Concerns related to bias in AI algorithms, data privacy, and transparency were addressed. Participants stressed the importance of adhering to ethical AI principles to avoid exacerbating existing inequalities.
  3. Cross-Disciplinary Collaboration: The workshop emphasized the need for cross-disciplinary collaboration. It highlighted that addressing the complex challenges of the SDGs requires expertise from various fields, including computer science, social sciences, and policy-making. Building a global network of researchers from these diverse backgrounds was recognized as essential.
  4. Data Sharing and Accessibility: Ensuring access to high-quality data emerged as a critical issue. Participants discussed the importance of open data-sharing initiatives and the development of AI models that can work with limited data resources, especially in underserved regions.
  5. Policy and Advocacy: Policymaking and advocacy for AI in SDG implementation were also key topics. Participants discussed the importance of influencing policy frameworks to ensure AI is used responsibly to advance the SDGs. Advocacy efforts and partnerships with governments and international organizations were encouraged.

The “Building a Global Network of AI Researchers on AI and the United Nations SDGs” workshop at Deep Learning Indaba 2023 provided a platform for robust discussions and concrete actions. It underscored the importance of AI in addressing the SDGs and the need for interdisciplinary collaboration, ethical considerations, and responsible policymaking. The outcomes of the workshop pave the way for a more coordinated and impactful approach to utilizing AI for sustainable development globally.

K4A supports IRCAI and AWS climate fellowship with research network expertise

K4A will contribute research capacity from its EU projects to a new program that selects and fully funds proofs of concept for new ideas leveraging advanced cloud computing and AI to solve some of the biggest challenges in the fight against climate change. The program funds Climate Tech startups’ R&D projects that require a great deal of cloud computing. Startups at any stage can apply; they just need a tech team capable of building with advanced computing services.


The program supports entrepreneurs and startups applying advanced cloud computing and artificial intelligence (AI) to create new solutions that address the climate crisis. The Compute for Climate Fellowship will select innovative ideas and fully fund the design and building of their proofs of concept (PoCs).

When a startup is selected for the fellowship, it will engage in a 2–3 month build with 1:1 advice from mentors and AWS credits to cover the AWS service costs of the build. Both IRCAI and AWS will provide selected startups with a team of mentors who are experts in AI, sustainability, and ethics.

Startups will get access to advanced computing services, such as quantum computing, high-performance computing (HPC), artificial intelligence and machine learning (AI/ML), and AWS credits to cover the build of the PoC. In addition, all PoCs will be designed under the guidelines of UNESCO’s Ethics Impact Assessment for Artificial Intelligence to ensure that each solution is built with safe, trustworthy technology.

Run by:

The Compute for Climate Fellowship is a global program run by the International Research Centre on Artificial Intelligence (IRCAI), an organization under the auspices of UNESCO, and Amazon Web Services, Inc. (AWS).

Supported by:

European Learning and Intelligent Systems Excellence – Making Europe competitive in AI technology
HumanE AI Network – Making artificial intelligence human-centric

Conference of the UK UNITWIN/UNESCO Chairs Programme

On 30–31 May 2023, the UK National Commission for UNESCO hosted the UK UNESCO Chairs Conference to mark the anniversary of the UNITWIN/UNESCO Chairs Programme. The event brought together over 20 participants representing some 22 UNESCO Chairs and UNITWIN Networks in the UK. This global network encourages inter-university cooperation, collaboration, and information sharing. Today, the Programme involves over 700 institutions in 126 countries.

The two days of knowledge sharing inspired new ideas, partnerships, and opportunities that highlighted the value of intellectual collaboration across the network and beyond. The value of transdisciplinarity, future-oriented approaches and the need for increased North-South-South and South-South cooperation were emphasized throughout the event.