
The dramatic speed of artificial intelligence’s development over the past two years has captured the public’s attention and ignited debates around a new technological revolution. Thanks to developments such as ChatGPT and increasingly human-like robots, most of us can now picture what a world with integrated artificial intelligence might look like. Our daily lives may become less tedious thanks to household helpers that can do our laundry, prepare our meals, and keep our houses tidy, and we should expect significant transformations in the world of work. Artificial intelligence’s promise to humanity is that it can make us more efficient, freeing our minds for greater pursuits.
Yet there are many reasons not to let this idealised version of AI’s development linger too long in one’s mind. Academics, scientists and even the developers of AI themselves have warned of the great risks that its rapid development presents to humankind. As we create new forms of AI and feed them ever-increasing quantities of data, their abilities are likely to sharpen: they may learn to train themselves on the data they ingest, and potentially to devise their own manipulative techniques to use against humans. Artificial intelligence does not need to be sentient to pose a risk to humanity. Its mastery of language, achieved far more quickly than many expected, is enough to polarise our societies by manipulating the information we absorb online. Not knowing whether an image is real or AI-generated, or whether a politician’s speech is a deepfake rather than a genuine video, could have immense consequences for what people believe and for the relationships we build with one another, and could create drastically different understandings of the world around us.
Faced with these frightening prospects, one response currently on the table is the EU’s Artificial Intelligence Act. The European Commission initially released the proposal for the Artificial Intelligence Act (AI Act) in 2021. In the early hours of Saturday, 9th December 2023, the EU institutions reached an agreement, finalising the AI Act after thirty-six long hours of negotiations. The AI Act is a landmark piece of legislation that should set a global benchmark for years, mainly thanks to an adaptable regulatory framework designed to be amended as the technology progresses. Its flexibility is not the only aspect that makes the AI Act a valuable framework – the legislation is crafted around the use of the technology rather than regulating the technology itself. This means that specific applications, such as the use of remote biometric identification by law enforcement, will be heavily restricted, while research and development in AI can continue. To set that benchmark, however, the Act must balance regulation and innovation while also leaving the EU open to cooperation with third countries.
What is in the AI Act?
The purpose of the AI Act, as stated by the EU, is to create rules for AI systems that reflect European values. The deployment and use of AI should be safe, transparent, traceable, non-discriminatory, and environmentally friendly. The EU is concerned that, were the Union not to establish its own rules, many of the fundamental rights of EU citizens would come under threat from artificial intelligence – rights such as freedom of expression, human dignity and the right to personal data protection and privacy. The core tenet of the AI Act is to ensure that this new technology, and all the societal change it entails, remains human-centric.
This is achieved mainly through a risk-based classification of AI systems: the riskier an AI system is considered to be, the more regulation it must comply with. Systems deemed to present an unacceptable risk to society, such as social scoring or manipulative voice-activated toys, are prohibited outright under the AI Act, with very few exceptions. High-risk AI systems are allowed, but their developers must adhere to strict requirements such as rigorous testing, proper documentation of the data used, and solid accountability frameworks with human oversight. AI systems count as high-risk when they are thought to negatively affect the safety or fundamental rights of users, as is the case with AI used in law enforcement, education or the operation of critical infrastructure; such systems must be thoroughly assessed before they can be used. Limited-risk systems face only a few transparency requirements, for instance ensuring that users know they are interacting with an AI or viewing AI-generated content. Minimal-risk systems, such as AI features in video games, carry no obligations.
The big picture
The very risk-centred structure of this legislation has come under sharp criticism, especially from the business community. Most significant was an open letter signed by 150 business leaders, which warned that the AI Act would deter European startups and businesses from investing in or developing their own AI systems. Their argument is that new developers would face high entry costs into the AI market because of the effort required for compliance, and that a broader legislative framework would be preferable, especially since the modern applications of AI are so new and will evolve rapidly in the years, or even months, to come. Should AI startups cease operating in the EU, or fail to attract sufficient investment because of an unfavourable regulatory environment, the EU’s international competitiveness compared with its American and Chinese counterparts could be seriously impeded. Relying on foreign technologies is already a reality within the EU, one the Union is now actively trying to reverse through its rhetoric on digital sovereignty and its screening of foreign investment. Critics maintain that a strict regulatory regime would disadvantage the EU and likely cause the Union to fall further behind the U.S. and China in terms of technological capacity.
Critics and proponents’ views
Concerns about the European Union’s capacity to innovate are certainly justified. European officials constantly lament the lack of large tech companies on the Continent and warn of “lagging behind” other big global players. The rapid development of artificial intelligence, however, may present a different scenario. Unlike with the growth of the Internet in the early 1990s and of online platforms in the 2000s, the EU is regulating while the technology is still gaining global usage and prominence. As the AI market begins to boom and models mature, the European Union has anticipated the technology’s importance and revolutionary character. The regulation aims to protect against potential harms, something that arguably came too late in the case of the Internet and the rise of large social media networks. The Digital Services Act and the Digital Markets Act, passed in 2022, were attempts to rein in Big Tech companies whose detrimental algorithmic practices and business models were already deeply entrenched. This time, the EU has an opportunity to create a healthy environment for companies to innovate and grow within this new sector.
Not just regulation
Even if companies predominantly invest in AI in the U.S., the EU is likely to remain an attractive landscape for the development and deployment of AI systems. The EU’s main asset is the Single Market, which groups together some of the world’s most powerful economies and guarantees the free movement of people, goods, and services between its twenty-seven Member States. Access to such a populous and wealthy market is not something companies can afford to pass up. Alongside the AI Act’s stringent standards to ensure that AI is human-centric, the EU also offers significant incentives to companies. In January 2024, the European Commission released its AI Innovation Package, which aims to support AI startups and SMEs in various ways, most significantly through access to one of the EU’s eight supercomputers.
The AI Act also aims to give European companies a boost in developing this new technology, chiefly by encouraging ethically built systems trained, for example, on high-quality data. European companies can certainly leverage the fact that their AI systems are trained with EU values in mind, but whether this can rival the speed at which Big Tech is able to develop an AI model is uncertain at best. Creating and training new models is exceptionally costly, and Big Tech companies, with ample resources to invest in new technologies, already have a head start in the AI race. This year alone, Google, Meta, and Microsoft have released their own LLM-powered chatbots, competing with one another to release the best-performing version. Big Tech has immense advantages over small developers, including the vast amounts of data used to train its foundation models and the computing power at its disposal. This is worrying for the EU, notably given Big Tech’s track record of disregarding fair competition and neglecting to protect users’ fundamental rights online. Nor does it bode well for European newcomers, whom Big Tech can shut out of an already highly competitive market. The AI Act is therefore timely in pushing back against these companies’ entrenched positions, hopefully leaving enough room for EU startups such as Mistral and Aleph Alpha to scale up and compete globally.
Beyond the continent
The provisions in the AI Act, and its system of risk-based classification, also present a unique opportunity for the EU’s like-minded partners to build on the Union’s work. Indeed, the EU is fond of spearheading regulation in the technological realm: take the General Data Protection Regulation and the Digital Services and Digital Markets Acts as examples. These regulations have inspired similar legislation around the globe, a phenomenon termed ‘the Brussels Effect’. Coined by Anu Bradford, the concept refers to the European Union’s ability to set global standards through its human rights, environmental and technology regulations. Given that the AI Act is the first of its kind in the world, a similar effect should not be ruled out. The U.S. has already put forth its ‘AI Bill of Rights’, and the UK is also planning to regulate AI in some form. At the OECD, forty-two countries have adopted the organisation’s principles on AI governance, pledging to ensure that AI systems are ‘robust, safe, fair, and trustworthy’, principles enshrined at the core of the EU’s AI Act. As a global first, the AI Act gives the EU a first-mover advantage that could translate into setting the trend for European-inspired, high normative standards for AI governance across the globe.
The EU should not limit itself to engaging only with like-minded countries on AI. Dialogue, cooperation, and understanding should continue to be fostered with actors as influential as China. Common standards, such as the trustworthiness and human-centricity of AI systems, could form a foundation for partnership between China and the EU. The EU’s mission is to ensure that any artificial intelligence put on the market within the EU is safe and trustworthy, but that should not be the end of the story. According to Yuval Noah Harari, AI will change how we perceive our reality: it will be able to mass-produce intimacy and use this power to influence our relationships, our beliefs and, ultimately, how we live our lives. We are only at the beginning of humanity’s relationship with artificial intelligence. While the AI Act is a brave and bold first step, it should only be the start of thinking about, dissecting, and regulating our relationship with this new technology. European policymakers have recognised that the use and commodification of AI should by no means be a rushed affair. The AI Act gives the EU, and perhaps the rest of the world, the opportunity to understand AI and all that it means for humanity. Hopefully, this first effort can inspire others around the world. When it comes to artificial intelligence, we will be far better off safe than sorry.
The European Leadership Network itself as an institution holds no formal policy positions. The opinions articulated in this policy brief represent the views of the author rather than the European Leadership Network or its members. The ELN aims to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time, to further its charitable purposes.
Image credit: Wikimedia Commons / Techfiesta, edited with Wikimedia Commons / Rjd0060