Man vs. Machine: The world's first attempt to regulate AI is underway
Everything you need to know about the EU's ambitious new AI Act, which comes into effect this week.
This month, Meta announced that it doesn't intend to launch its advanced future AI models in the European Union due to what it calls "the unpredictable nature of the regulatory environment in Europe." By doing so, the company joined Apple, which announced about a month ago that it would not launch its new AI system, Apple Intelligence, in the EU for similar reasons. And it's not just tech giants that are worried: many AI startups in Europe fear they will soon be burdened with exorbitant new costs and mountains of bureaucracy due to the EU's new AI law, set to take effect this week.
This ambitious law is the first global attempt to regulate the AI revolution, and, in a groundbreaking approach to tech regulation, it does so while the revolution is still unfolding rather than years after the fact, as has typically been the case. Some believe this is an important move that will position Europe at the forefront of the field. "This law turns the EU into the world's AI police," says Dr. Tehilla Shwartz Altshuler of the Israel Democracy Institute. "Europe is setting international standards. In a world where everyone talks about the need to regulate AI but no one acts, Europe will have a clear advantage."
However, others warn that the law will create higher barriers for European startups, hindering the continent's competitiveness. "The AI law is a good idea, but I'm afraid it will make it harder for deep tech entrepreneurs to succeed in Europe," Andreas Cleve, CEO of Danish startup Corti, told the Financial Times. "Compliance costs can reach six figures for a company of fifty employees. This is an additional tax on small companies."
From a voluntary code of conduct to an outright ban
At the center of the law is the classification of AI systems according to their risk levels, beginning with "minimal risk" systems, such as photo filters or video games. These systems aren't regulated, and EU member states cannot impose their own regulations on them, but the law includes a voluntary code of conduct for them. Most existing AI systems are expected to fall into this category.
The next category, "limited risk," includes AI systems that interact directly with humans, such as chatbots, emotion recognition systems, biometric systems, and systems that allow the creation or manipulation of images, sound, or video (such as deepfakes). These are subject to limited transparency requirements, intended to ensure, among other things, that users know they are interacting with AI and understand the possible risks involved.
The next level is "high risk" systems, which can negatively impact people's safety, health, or fundamental rights. This category includes AI systems in healthcare, education, employment, critical infrastructure, finance, immigration and asylum procedures, law enforcement, and the judicial system. These are subject to transparency, quality, human rights, and safety requirements, and in some cases will need to undergo a "fundamental rights impact assessment" before deployment. These systems will be assessed both before entering the market and throughout their lifecycle.
At the top of the pyramid are "unacceptable risk" systems, which pose a clear threat to people's safety, livelihood, and rights. This category includes systems capable of manipulating human behavior (e.g., voice-activated toys that encourage dangerous behavior among children), real-time biometric recognition systems in public spaces, biometric systems for the identification and categorization of people, and social scoring systems that rate people based on personal, socioeconomic, or behavioral characteristics. The use of these systems will be completely banned, except in a few exceptional cases, such as the use of biometric recognition systems to prosecute serious crimes, subject to court approval.
Fear of fading relevance
After the launch of ChatGPT in 2022, the EU quickly updated the law to include a "general-purpose AI" category, which regulates generative AI (GenAI) models and applications such as ChatGPT. These applications will not be classified as high-risk systems but will be required to meet transparency requirements and comply with EU copyright law. Among other things, they will be required to clearly mark content created or modified by AI, design the model to avoid generating illegal content, and publish summaries of the copyrighted material used to train the model. High-impact models that could pose a systemic risk, such as GPT-4, will need to undergo a thorough evaluation and report any serious incidents to the EU Commission.
Although the law will come into effect this week, its most significant restrictions will be implemented gradually over the next two years. From February 2025, the use of unacceptable risk systems will be banned; in August 2025, regulation on general-purpose systems will take effect; and in August 2026, the restrictions on high-risk systems will be enforced.
"This may be the first time in history that such a forward-looking law comes into effect, in the sense that it will apply to the unknown," said Shwartz Altshuler. "It creates a new kind of experiment: preemptive rather than retrospective regulation of technology. One significant argument against it is that in the race to get ahead of tech regulation, some of these approaches may not be relevant in a few years. On the other hand, there are enough AI-based systems to understand what the law applies to today: machine learning systems that produce analysis, prediction, and inference, in medicine, policing, and workplace management; large language models; autonomous and robotic devices; and the world of customer service and chatbots. Additionally, it's worth remembering that regulatory waiting and restraint led to very high costs in previous revolutions, in terms of social networks, privacy, and cybersecurity."
‘Compliance lawyers will profit’
Although it is difficult to predict the law's impact on the AI market, Shwartz Altshuler identifies one industry that is sure to benefit from it: compliance. "Those who will profit are lawyers and consulting firms. How do you make operational changes to comply with the legislation, what is required to meet the standards, how do you create assessment, management, and risk-handling processes? Compliance costs are high and often hinder innovation, especially for small companies, but they manage to level the market efficiently. Also, most of the legal obligations don't fall on the entire market but only on products defined as high-risk and on giant companies."
These possible costs are now at the heart of the criticism of the law. "This is regulation that will harm Europe's ability to compete with the U.S. in creating future AI companies," Cecilia Bonefeld-Dahl, director of DigitalEurope, which represents local and international tech companies, including Amazon, Apple, Google, Meta, Microsoft, and NVIDIA, told the Financial Times. "The compliance costs for companies in Europe will hurt us. We will have to hire lawyers while the rest of the world hires programmers."
Critics say that the legislation was hastily passed, leading to many problems. "The law is quite vague," said Kai Zenner, a parliamentary aide involved in drafting the law. "The time pressure led to a situation where many aspects remained open. Lawmakers couldn’t agree on them and it was easier to compromise. It was a shot in the dark."
Shwartz Altshuler, on the other hand, believes that the law's impact will be felt well beyond the European Union. "Companies will have to comply with the regulation if they offer AI-based products or services to EU customers or operate within the European market," she said. "This will affect data handling and cybersecurity. The fact that the regulation also applies to components within a product will only deepen the commitment to comply with the law."
However, many companies may simply choose to avoid entering the EU and focus on markets with less stringent regulation, or markets where the government lowers barriers instead of raising them. Meta and Apple's decisions not to launch advanced AI systems in the EU until the fog clears regarding the law's impact are proof that companies, especially giants, can get by without Europe for a while. They prefer to invest in system development and penetrate other markets rather than deal with new compliance requirements.
If Donald Trump wins the presidential election in November, the gap between Europe and the U.S. in this field may widen. American regulation of AI systems is already very lax, consisting mainly of an executive order signed by President Joe Biden last October that requires certain safety tests for advanced systems. According to the Washington Post, Trump's associates are planning an executive order that would create a series of "Manhattan Projects" in AI. The proposed order would repeal Biden's order and call for an immediate review of the "onerous and unnecessary regulatory burden" on AI development.
Such a policy is expected to gain significant support from Trump's vice-presidential candidate, J.D. Vance, who has close ties to key figures in Silicon Valley and is considered a staunch opponent of AI regulation. In a Senate debate, Vance attacked the tech giants' doomsday scenarios and opposed heavy regulation that only they could comply with. "Such regulation will entrench existing tech companies and make it harder for new competitors to create the innovation that will drive the next generation of American growth," he said.
A possible outcome is that while Europe moves towards tight, heavy regulation, the U.S. and other countries will move towards a market without significant barriers, allowing companies to do almost whatever they want. Europe is probably right to create regulation that prevents the use of AI systems that could cause significant damage, but is it a smart move if the end result is that Europe is left without AI companies and products while the rest of the world races forward?
Israel must also comply with EU law
Europe is Israel's largest trading partner. This means that, unlike American companies, local players do not have the privilege of ignoring the EU or withholding their products, or parts of them, from its market. In other words, Israeli companies will be required to meet the law's requirements, and to help them, the Israeli government needs to mobilize immediately to bridge the gaps with EU legislation.
"The Israeli perception so far has been not to adopt the European approach, and in practice, to do nothing," said Shwartz Altshuler. "If Israel wants to be compatible with Europe in a way that will allow the local industry to sell AI products and services to the European market, it will have to adopt at least some of the laws, and it is better to do this early and thoughtfully rather than later and under pressure. Israel needs to closely monitor enforcement of the AI law, fill the existing gaps — cyber regulation, privacy regulation, social network and search engine regulation — with the global standard, and start working. 'Regulatory restraint' or 'ethics' are no substitute for legislation and policy work, regulation, and rights protection. They also do not encourage long-term innovation precisely because compliance with European law will now be required. If it exists in Israel, it will mainly ease the burden on startups and small companies."