The AI Action Summit: A Defining Moment for AI Governance
The Artificial Intelligence Action Summit, held in Paris on February 10-11, 2025, brought together global leaders, policymakers, and industry experts to address one of the most pressing issues of our time: the future of artificial intelligence. As I followed the discussions, I was struck by the strong leadership presence and the growing consensus that AI must be developed in a way that serves humanity.
The event highlighted five key themes that will shape AI’s trajectory in the coming years: public interest AI, the future of work, innovation and culture, trust in AI, and global AI governance. Each of these areas is critical, but the most urgent takeaway from the Summit was clear to me: AI governance must be addressed on a global scale.
AI for the Public Interest
A major focus of the event was public interest AI – essentially, how AI can serve society rather than being driven solely by private profit. France, the host nation, proposed the creation of a global platform that could act as an incubator for AI solutions designed for public benefit. The initiative emphasizes independent AI with open or controlled access, to ensure the technology remains sovereign and broadly available rather than monopolized by a handful of entities.
This vision is ambitious, but necessary. AI’s potential to improve the energy industry (for one) is extraordinary, but only if we ensure equitable access.
The Future of Work: AI as a Collaborative Effort
Another heavily discussed topic was the future of work and the impact of AI on global labor markets. There was strong agreement that governments, private enterprises, and public sector organizations need to work together to ensure that AI is developed in a way that protects workers and promotes socially responsible adoption.
Of course, some of this is easier said than done. The fear of job displacement due to automation is real, and different industries will be affected in different ways. But the answer is not to block AI; it is to implement policies that support the transition and ensure that AI tools enhance – rather than replace – the human workforce.
Innovation, Culture and AI Infrastructure
There is a clear need to develop strong AI ecosystems on a global scale. The Summit emphasized the need for investment in AI infrastructure and research, both of which are essential to AI’s continued advancement.
And speaking of investment, the week of the Summit saw major financial commitments to AI development announced in parallel with the event. The EU, for example, unveiled a EUR 200 billion investment in AI. Funding on this scale signals that AI is no longer just an emerging technology; it’s a fundamental pillar of future economic growth.
Trust in AI: The Need for Global Guardrails
A recurring topic – and one that I’ve explored in a previous blog – is trustworthy AI. As a global community, we have to work comprehensively and inclusively to address the risks associated with the development of this technology.
There is already significant concern about biases in AI models, false or fabricated information, and the potential for misuse. If we fail to build AI systems that people can trust, adoption will slow and skepticism will harden. This is why regulation is critical – not to hinder innovation, but to guarantee accountability and prevent AI from being exploited in harmful ways.
The biggest concern? If we allow AI to develop unchecked, it could follow the path we’re seeing with social media, which outgrew regulatory frameworks so quickly that we’re now struggling to control its impact. We cannot afford to make the same mistake twice.
The Case for Global AI Governance
This brings us to what I consider the biggest achievement of this event: the discussions around global AI governance. The conversations centered on the Global Partnership on Artificial Intelligence (GPAI) and the United Nations’ AI Advisory Body, both of which aim to establish international cooperation on AI policy. In fact, the UN has published its Governing AI for Humanity report, which lays out this foundational work.
Here's the fundamental challenge: we can’t operate in silos. Consider nuclear technology, which is already governed globally. The difference is that nuclear is hardware – a tangible, detectable source – whereas AI is software, which makes it far harder to monitor and regulate.
That’s why global governance is essential. We need international AI regulations to prevent misuse while still allowing AI to be shared and used for the benefit of humanity. But how do we do that without restricting innovation?
a) AI should be accessible – but with proper credentials and authentication systems in place
b) There must be safeguards to prevent AI from falling into the wrong hands
c) Open-source AI sounds good in theory, but if left unchecked, it could either be exploited by bad actors or become dominated by a few powerful players – creating a power monopoly
d) We need to prioritize people and planet before profit
But what I’m sharing here is really just a starting point.
The Road Ahead
The fact that AI is being formally addressed at a global level is a source of incredible motivation for me. Leaders from around the world are recognizing that AI is not just a technological advancement, but a transformative force that will shape our future. This is only the beginning. AI development and governance will require ongoing adaptation, collaboration, and oversight. Just as there are global frameworks for climate action, trade, and nuclear policy, AI requires an equally robust governance model and strategy that not only promotes innovation but also protects human interests.
The AI Action Summit set the stage for discussion. Now it's up to governments, industries, and civil society to turn these ideas into action. The future of AI isn't just about what we build, but about who it serves. Let's make sure it serves us all.