Why AI matters
Artificial Intelligence is already part of our lives, helping citizens, scientists, businesses, and public administrations. It powers everyday tools like search engines, personal assistants, translation and navigation apps, cybersecurity systems, and many others.
Along with its benefits, the development and use of AI bring challenges and potential risks. The results and impact of this technology depend on how it is designed, what data it uses and how it is ultimately implemented. All these aspects can be intentionally or unintentionally biased, and they can endanger fundamental rights and democracy, intellectual property rights, the job market, transparency, and security.
Making AI central to a safe and ethical digital transformation of society is an EU priority.
Europe’s approach to AI
As part of its digital strategy, the EU is regulating AI to ensure better and safer conditions for the development and use of this technology.
Key milestones of the EU’s AI initiatives and policies:
- 2017
The European Council calls for an AI strategy. Leaders invite the Commission and Member States to work on a coordinated approach to AI.
- 2018
- The European Commission releases Artificial Intelligence for Europe, its first-ever strategy focusing on the opportunities and challenges posed by AI.
- The Coordinated Plan on AI is presented to the Member States to accelerate investment in AI, implement AI strategies and programmes, and align AI policy to prevent fragmentation within Europe.
- A High-Level Expert Group on AI is appointed and the European AI Alliance is launched.
- 2019
The High-Level Expert Group on AI, set up by the European Commission, releases its Ethics Guidelines for Trustworthy AI.
- 2020
The White Paper on AI is released, laying out options for an EU regulatory framework.
- 2021
The AI Act is proposed - the world’s first comprehensive AI law. It aims to make the development and use of AI safe, transparent, traceable, non-discriminatory and environmentally friendly, while fostering innovation.
- 2023
Political agreement on the AI Act is reached.
- 2024
- The AI innovation package is announced with the aim of supporting Artificial Intelligence startups and SMEs. It includes the launch of the European AI Office and the GenAI4EU funding opportunities.
- At the request of the European Commission, the Scientific Advice Mechanism provides recommendations on the uptake of AI in research.
- The Living guidelines on the responsible use of generative AI in research are launched.
- The AI Act is adopted and enters into force.
- The first consortia are selected to establish AI Factories, which will boost AI innovation in the EU.
- 2025
- The AI Continent Action Plan is released, turning EU strengths, such as talent and strong traditional industries, into AI accelerators. It aims to shape the next phase of AI development, boosting economic growth and strengthening Europe’s competitiveness in areas such as healthcare, cars, science and more.
- The Commission publishes guidelines for providers of General-Purpose AI.
- An online public consultation on AI in science is held with the aim of shaping the future European strategy for Artificial Intelligence in Science.
- Two strategies are launched to speed up AI uptake in European industry and science, including the AI in Science Strategy.
AI in EU-funded research projects
Since 2014, the number of AI-related projects managed by the European Research Executive Agency has increased every year. With the help of over 230 AI experts from around the world, the Agency has selected and implemented more than 1,200 projects* that focus on AI or use AI tools, with half of those being potentially very relevant for research practices in this field.
The projects received over €1.7 billion in EU funding under the Horizon 2020 and Horizon Europe programmes. Around 75% of the projects are funded through the Marie Skłodowska-Curie Actions, targeting doctoral education and postdoctoral training of researchers.
Whether by advancing AI technologies or by integrating AI into their work for optimisation purposes, these research initiatives leverage AI while actively mitigating its risks.
Many of the more recent projects focus on areas central to the ethical questions and challenges raised by the AI Act – health, environmental protection, democracy, security, and education – while also aiming to deliver impactful results in their fields and establish good practices in AI-related research.
| MOBILITY |
| The IVORY project is developing a framework for optimal integration of AI in road safety. The research touches on human-vehicle interaction, novel scalable and equitable AI for proactive infrastructure safety management, and a sustainable knowledge sharing network on AI. The project also aims to provide efficient AI solutions for disadvantaged groups (i.e., vulnerable road users and users in low-to-middle-income countries). |
| DECISION MAKING |
| The AI4Gov and AI4POL projects advance the use of AI and Big Data for governance and regulation. AI4Gov provides policymakers with tools for evidence-based, transparent, and unbiased decision-making while addressing ethics, bias, and public trust. AI4POL supports European regulators by developing frameworks for trustworthy, rights-aligned AI, using data science to monitor and enforce rules, particularly in finance and citizen feedback. It also builds early-warning systems and an AI Threat Index to help policymakers address AI risks in autocratic contexts. |
| DEMOCRACY |
| Generative AI can produce realistic images, videos and voice outputs, which have the potential to create convincing ‘deepfakes’ that can spread disinformation and erode trust. The SOLARIS project explores the circulation of deepfakes and their impact on democratic processes. The project also examines the positive aspects of generative AI models, such as Generative Adversarial Networks, through a co-creation process involving citizen science, raising awareness of global issues like climate change, gender equality and human migration. |
| SECURITY |
| With new digital risks and tech-driven crimes on the rise, it’s becoming harder for authorities to keep citizens safe. The STARLIGHT project strengthens European security by equipping law enforcement agencies with trustworthy AI, safeguarding their systems, and combating AI-enabled crime and terrorism, while fostering a sustainable AI ecosystem for long-term resilience. |
| HEALTH |
| Cancer remains one of Europe’s biggest health challenges, with fragmented data slowing research and treatment progress. The EOSC4Cancer project tackles this by using AI to securely connect cancer data across Europe, applying machine learning throughout the patient journey to accelerate discoveries and improve care in line with the European Cancer Mission. Millions of people struggle with hearing loss, yet current tools are limited. The VoCS project responds with AI-driven solutions to enhance speech perception, analyse vocal health markers and design natural synthetic voices, improving communication and personalised healthcare. Food safety, waste, and sustainability are pressing concerns as Europe faces climate change and shifting diets. The HOLiFOOD and FoodDataQuest projects use AI to predict food hazards, strengthen supply chains, cut waste, and promote healthier, more transparent food systems, advancing the EU Green Deal and global sustainability goals. |
| ENVIRONMENT |
| Biodiversity loss and environmental crime threaten Europe’s ecosystems. The PERIVALLON project detects and prevents pollution, wildlife trafficking, and other environmental crimes, empowering authorities and promoting sustainable development. The MAMBO project tracks species and habitats, standardises monitoring, and automates data analysis to support conservation policies. The initiative delivers tools and infrastructure for automated, cost-efficient wildlife monitoring by integrating knowledge in sensor development, deep learning, computer vision, acoustics, ecology, remote sensing, biodiversity monitoring, citizen science, data pipelines, and ecological modelling. |
| EMPLOYMENT |
| As AI and automation reshape jobs, workers and businesses need guidance. The AI4LABOUR project predicts emerging roles and skills, providing AI-driven tools, training, and a skills portal to help individuals, companies, and policymakers adapt to the evolving labour market and the rise of advanced technologies. |
| EDUCATION |
| As education faces challenges in inclusivity, personalisation, and bias, AI offers new ways to support learners and educators. The EMPOWER and AUGMENTOR projects developed AI-driven platforms to enhance learning for all students, including those with neurodevelopmental disorders, by improving executive function, emotional regulation, and 21st-century skills, while supporting teachers and informing policy. The RAINBOW project combats online bias and stereotypes against LGBTQIA+ youth, using AI-powered multilingual text analysis to create educational tools that foster inclusivity and a safer digital environment. The COMPTEACH project applies AI and cognitive science to model how teachers transfer knowledge to learners, enhancing understanding of teaching strategies with applications in education and healthcare. |
Explore more AI-driven projects
CORDIS Results Pack - How AI applications are facilitating new life science discoveries
EU research and innovation success stories on artificial intelligence
Marie Skłodowska-Curie Actions research in AI
*Note:
The projects selected for this informative webpage received EU funding and are managed by the European Research Executive Agency (REA). There are numerous other EU-funded projects working in this field.
The 1,200 above-mentioned projects specify Artificial Intelligence in their scope. They were identified through a text search on project metadata (title, abstract, keywords) using more than fifty terms associated with AI. Potentially very relevant projects are the subset whose matches appear in the title and keywords; their abstracts were then checked manually.
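For illustration, the sketch below shows how such a keyword-based screening step could look in code. It is a minimal example only: the term list, metadata field names and matching rules are assumptions made for this illustration and do not reproduce the Agency's actual methodology or term set.

```python
# Minimal sketch of keyword-based project screening.
# The term list, field names and matching rules are illustrative
# assumptions; they are not the Agency's actual methodology.
import re

AI_TERMS = [
    "artificial intelligence", "machine learning", "deep learning",
    "neural network", "natural language processing", "computer vision",
    # ...in practice, more than fifty AI-associated terms
]

def matches(text: str, terms: list[str]) -> bool:
    """Case-insensitive whole-word match of any term in the text."""
    text = text.lower()
    return any(re.search(rf"\b{re.escape(t)}\b", text) for t in terms)

def screen_project(project: dict) -> str | None:
    """Classify a project record with 'title', 'abstract' and 'keywords'.

    Returns 'candidate for manual review' when a term appears in the
    title or keywords (the subset whose abstracts would then be checked
    by hand), 'AI-related' when the match is only in the abstract, and
    None when no term matches at all.
    """
    in_title_or_keywords = matches(project["title"], AI_TERMS) or matches(
        " ".join(project["keywords"]), AI_TERMS
    )
    in_abstract = matches(project["abstract"], AI_TERMS)

    if in_title_or_keywords:
        return "candidate for manual review"
    if in_abstract:
        return "AI-related"
    return None

# Hypothetical example record
example = {
    "title": "Trustworthy machine learning for road safety",
    "abstract": "The project develops AI tools for infrastructure management.",
    "keywords": ["road safety", "artificial intelligence"],
}
print(screen_project(example))  # -> candidate for manual review
```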
