What is the AI Act and why does it matter?

We all know the acronym “AI”. It has quickly become a constant presence for office workers, so omnipresent that FOMO (Fear Of Missing Out) has given way to FOBO, the Fear Of Becoming Obsolete, the new term in circulation at the World Economic Forum this year. Are all our jobs at risk? The tech giants joining Donald Trump’s entourage is certainly not helping.
But while societies have been getting acquainted with how artificial intelligence can make their lives easier, or threaten their livelihoods, the EU regulators have been working hard to ensure its compliance with EU values, defying threats from Washington on retaliation for imposing rules on Big Tech.
After extended discussions between the Council of the EU and the European Parliament, the EU has finally published its first piece of legislation on AI: the EU Artificial Intelligence Act. It came into force on 1 August 2024. Its implementation, however, is being staggered.
Although it is good news that the first global standard on AI has been passed, the instrument itself has not been warmly welcomed by the EU tech community, mostly for fear of stifling innovation and the increased burden of compliance. People feel their creativity is being limited. And it isn’t only EU companies that are affected: the legislation applies to any company operating in the EU or exporting products to the bloc.
This reluctance was clearly illustrated in 2023, when over 150 executives signed an open letter arguing that the legislation would jeopardise competition and technological sovereignty “without effectively tackling the challenges we are and will be facing.”[1] Competitiveness will be particularly crucial for the EU in the years to come, especially since both the US[2] and the UK[3] have already put the emphasis on growth and innovation in their own AI strategies. And that’s before we even start talking about China.
Since President Trump’s inauguration, the narrative around the Act has shifted further. It is now about balancing the ambition of building a European AI hub against the need to avoid conflict with the US, whose Big Tech companies oppose the Act’s provisions requiring greater data transparency.
This is only the first of what will surely be many attempts to regulate AI. As the technology increasingly impacts our work and non-work lives, we all need to ensure we are fully up to date both on how best we can use AI and on what the regulations permit.
Why do we have it?
The EU AI Act is intended to:
- promote the uptake of human-centric and trustworthy AI
- ensure a high level of protection of health, safety, fundamental rights, democracy, and rule of law from harmful effects of AI systems
- support innovation and the functioning of the internal market.[4]
What does the Act do?
The AI Act assigns applications of AI to four risk categories:
- Unacceptable risk (banned), such as government-run social scoring of the type used in China
- High-risk applications (subject to specific legal requirements), such as CV-scanning tools used in recruitment
- Limited risk (associated with a possible lack of transparency), such as chatbots
- Minimal risk (free use), such as AI-enabled video games or spam filters.
If an AI system is deemed to pose an unacceptable risk, it will be considered a threat and banned, with limited exceptions for law enforcement purposes.
AI systems deemed high risk will be divided into two categories:
- Those used in products falling under the EU’s product safety legislation, which can include toys, aviation, cars, medical devices and lifts.
- Those falling into specific areas that will need to be registered, such as the management and operation of critical infrastructure, education and vocational training, and law enforcement.
All high-risk AI systems will be assessed before being placed on the market and will be continuously assessed throughout their lifecycle. At any point, consumers will have the right to file complaints about AI systems with designated national authorities across the EU.
Although generative AI systems, like ChatGPT or Gemini, will not be classified as high-risk, they will have to comply with transparency requirements and EU copyright law, meaning their providers will need to:
- Disclose that content was generated by AI, so users are fully aware of where it comes from
- Design the model to prevent it from generating illegal content
- Publish summaries of copyrighted data used for training
Apart from assigning risk, the Act also regulates issues connected to the use of AI, such as an obligation to ensure AI literacy.
If you have any questions on how to use AI effectively in your communications or public affairs activities, contact us using the form below.
[1] Open letter to the representatives of the European Commission, the European Council and the European Parliament “Artificial Intelligence: Europe’s chance to rejoin the technological avant-garde” <https://drive.google.com/file/d/1wrtxfvcD9FwfNfWGDL37Q6Nd8wBKXCkn/view>
[2] FACT SHEET: Biden-Harris Administration Outlines Coordinated Approach to Harness Power of AI for U.S. National Security <https://www.whitehouse.gov/briefing-room/statements-releases/2024/10/24/fact-sheet-biden-harris-administration-outlines-coordinated-approach-to-harness-power-of-ai-for-u-s-national-security/>
[3] HM Government, “National AI Strategy” <https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf>
[4] AI Act, Whereas (176) <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689>
Aleksandra Krol
Senior Consultant, Advocacy and Public Affairs, based in Geneva
Aleksandra is a Public Affairs Senior Consultant based in Geneva. She is a legal professional with experience in cross-border work, gained across both public and private sectors.