The EU AI Act: A Template for the Future of AI Legislation

Blue Brain

November 26, 2025

Despite being a new technology, AI has already made it difficult to imagine a world without it. While our daily lives accommodate AI almost seamlessly, local, national, and international governments struggle with the legal ambiguities introduced by its rapid development, a pace egged on by competition among the various AI corporations. However amusing we may find it, one can easily grasp, for example, the burden that AI’s ability to churn out images in this or that style places on the copyright law meant to protect intellectual property. ChatGPT is officially prohibited from producing images in protected styles, such as those of Disney or Studio Ghibli. Yet asking it to draw in those styles prompts ChatGPT to help users bend the rules and produce images that closely resemble those IPs.

Another concern regards the way “AI tools are inherently borderless, operating transnationally and posing significant challenges to enforcing criminal sanctions for misconduct” (Morrow), meaning that it is no simple matter to prevent the flow of criminal AI activity from one nation to the next. A copyright-infringing image produced in Russia cannot (easily) be prevented from entering the United States, and vice versa. The borderless nature of the problem implies a borderless, international response. Yet “the global AI regulatory landscape remains a fragmented patchwork of domestic approaches that limit international cooperation. Consequently, holding companies or individuals criminally liable for AI-generated content proves difficult” (Morrow). The first major step in this patchwork of regulatory frameworks was the European Union Artificial Intelligence Act (EU AI Act).

The EU AI Act was proposed in 2021 and went into effect this past August. Built into the legislation was the foreknowledge that AI technology would develop much faster than regulations could be passed. To address this, the act creates a system of risk assessment (Klein/Campisi). The intention is to speed up the rate at which regulations can put the brakes on high-risk technological developments while creating few or no barriers to low-risk uses of AI. In other words, the ways in which AI stimulates technological innovation and business productivity ought not to be hindered by governments, while the ways in which AI threatens those things, and human societies as a whole, ought to be reined in. The main risk levels are defined as follows:

Unacceptable-risk AI: These are AI systems that pose an actual threat to individuals and their freedoms, such as systems used for cognitive behavioral manipulation, social scoring, and large-scale real-time tracking. AI systems that fall into this category are strictly prohibited.

High-risk AI: AI at this risk level has the capacity to negatively affect the safety or fundamental rights of consumers, such as systems used in mass transportation, health care, medical devices, children's toys, the management of critical infrastructure, employment, or law enforcement. The use of high-risk AI systems is subject to authorization by a judicial or other independent body, along with transparency, security, and risk-assessment obligations.

Limited-risk AI: These are AI systems, such as chatbots, where the primary risk to individuals comes from a lack of transparency about the fact that an AI system is being used. The use of limited-risk AI systems is generally permitted when fully disclosed to consumers.

Minimal- or no-risk AI: This level includes AI systems that pose minimal or no risk to the rights and freedoms of individuals, such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU are expected to fall into this category. Minimal- or no-risk AI systems are generally permitted without enhanced restrictions (Klein/Campisi, EU AI Act Summary).

As we will see, this framework has inspired some legislation in the United States. However, one can easily spot a glaring issue: ChatGPT, the form of AI we are most familiar with, belongs everywhere and nowhere on this schema. AI models like ChatGPT are referred to as “general-purpose AI” (GPAI) and carry their own regulations. In short, “these additional GPAI requirements fall primarily on the providers of GPAI and include mandatory technical disclosures as well as applicable copyright protections” (Klein/Campisi).

As it stands, no federal framework for the regulation of AI exists in the United States. In 2023, President Biden issued Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which played this role for a while, but President Trump rescinded it on his first day in office.

The President turns out not to be the only one opposed to wide-reaching AI regulations in the United States. California, home to many of Silicon Valley’s AI startups, passed at least 17 AI bills through its state legislature in 2024 (Klein/Campisi), the most significant being SB 1047. Taking cues from the EU AI Act, the bill would have:

imposed various requirements on developers of covered models, such as requiring implementation of administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of a covered model; development and implementation of safety and security protocols, including testing procedures; and implementation of capabilities to promptly enact a full shutdown of the model. The bill also required developers to assess and report the risks of critical harms posed by covered models and covered model derivatives, and to refrain from using or making them available for commercial or public use if there is an unreasonable risk of critical harm. Furthermore, the bill required developers to annually retain a third-party auditor to conduct an independent audit of compliance with the bill's requirements, and to submit a statement of compliance and a report of any AI safety incidents to the attorney general (Klein/Campisi). 

In September 2024, Governor Gavin Newsom vetoed the bill, stating that it was “a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities” and that it needed to “take into account whether an AI system is deployed in high-risk environments, involves critical decision making or the use of sensitive data” (Klein/Campisi).

AI is still in its infancy and, as such, has become the repository of our dreams and nightmares about the future of society. Will it save or destroy humanity? The legal battle over regulation is also in its infancy. Given the above snapshot of its fits and starts, the development of AI can seem simply beyond our control. The one thing that is certain, then, is that the future of AI is deeply uncertain.

Sources:  

Both articles cited above are available on Lexis+, accessible onsite at any of the Riverside County Law Library’s locations.

Further Reading:  

Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity: https://arxiv.org/pdf/2401.07348 

Written by Yanis Ait Kaci Azzou, Library Assistant