For many organizations in the U.S., it’s about to get a lot harder — and potentially more expensive — to deploy AI. This time, the technology itself has very little to do with it.
That’s because while business leaders have been focused on the productivity-enhancing and revenue-generating potential of AI, state, federal, and even international governments have been busy passing legislation designed to mitigate AI’s potential to cause harm: laws regulating how companies use AI, when and where they must disclose its use to the public, and more.
These laws don’t just apply to what most consider “AI companies,” either. Most apply to organizations that use or incorporate AI into normal business functions, while some extend as far as hardware manufacturers. And while these laws can be cumbersome and expensive to comply with, the steep financial penalties for violations make them even more expensive to ignore.
To make compliance easier, we’ve compiled a list of the AI regulations — both active and upcoming — that belong on every tech leader’s radar today.
EU AI Act
Who it impacts: Any organization offering AI products or services in the EU or whose AI output can be used within the EU
Purpose: Consumer protection
The European Union passed the world’s first comprehensive AI legislation in early 2024. The EU AI Act seeks to limit AI’s potential for harm and misuse through a sliding scale of risks, each of which comes with its own rules and restrictions. The act categorizes these risks into four bands, ranging from “unacceptable” risks that are banned outright to “low or minimal risk” cases that are left unrestricted.
Healthcare companies, for example, may be classified as high-risk use cases due to AI use in diagnostic tools, software embedded in medical devices, and more. Companies that fall into this high-risk bracket are required to audit their systems for fairness, implement risk controls, and maintain detailed documentation.
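For a sense of how that triage works in practice, here’s a minimal sketch of mapping the act’s four risk bands to compliance checklists. The tier names track the act; the obligations listed are simplified illustrations, not legal text.

```python
# Hypothetical triage of AI use cases against the EU AI Act's four
# risk bands. Tier names track the Act; the obligations listed are
# simplified illustrations, not legal requirements.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # audits, risk controls, documentation
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional restrictions

CHECKLISTS = {
    RiskTier.UNACCEPTABLE: ["prohibit deployment"],
    RiskTier.HIGH: ["fairness audit", "risk management controls",
                    "detailed technical documentation"],
    RiskTier.LIMITED: ["disclose AI use to end users"],
    RiskTier.MINIMAL: [],
}

def checklist(tier: RiskTier) -> list[str]:
    """Return the simplified compliance checklist for a risk tier."""
    return CHECKLISTS[tier]

# A diagnostic tool embedded in a medical device would likely land
# in the high-risk band.
print(checklist(RiskTier.HIGH))
```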
The AI Act is also by far the most punitive AI legislation to date: Penalties for the most severe violations can reach €35M (approximately $41M) or 7% of global annual revenue, whichever is higher. Penalties for other violations can reach €15M or 3% of revenue.
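To put those numbers in perspective, the “whichever is higher” rule is easy to model. Here’s a quick sketch with an invented revenue figure, assuming (as under the act) the same rule applies to the lower penalty tier:

```python
# The EU AI Act's penalty ceiling: a flat cap or a percentage of
# global annual revenue, whichever is higher. The caps and percentages
# come from the tiers cited above; the revenue input is invented.

def max_penalty_eur(global_annual_revenue_eur: float, severe: bool) -> float:
    """Return the fine ceiling for a violation."""
    flat_cap, pct = (35_000_000, 0.07) if severe else (15_000_000, 0.03)
    return max(flat_cap, pct * global_annual_revenue_eur)

# A company with EUR 2B in global annual revenue:
print(f"{max_penalty_eur(2_000_000_000, severe=True):,.0f}")   # 140,000,000
print(f"{max_penalty_eur(2_000_000_000, severe=False):,.0f}")  # 60,000,000
```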
California SB 942, AB 853, and AB 2013
Who it impacts: AI makers, social platforms, and device manufacturers doing business in California
Purpose: Countering deepfakes, consumer protection
Amid a raft of AI-related laws passed in California, three entries — SB 942, AB 853, and AB 2013 — may have the most significant impact.
SB 942 and AB 853 apply to large providers of generative AI systems, social media platforms, and device manufacturers, and are largely designed to counter deepfakes and other deceptive AI-derived practices. SB 942 — known as the California AI Transparency Act (CAITA) — and the amendments in AB 853 require these organizations to provide a free, public tool to help users determine whether images, videos, or audio were created or altered by AI. The legislation also requires invisible “latent” labels (think: watermarks) on any imagery, video, or audio created or altered by AI.
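CAITA dictates what a latent disclosure must convey, not how providers should encode it. As a purely conceptual sketch (the field names and serialization format below are assumptions, not anything the statute prescribes), a provider’s disclosure record might look something like this:

```python
# Conceptual sketch of a CAITA-style latent disclosure record.
# Field names and the serialization format are assumptions for this
# sketch; the statute specifies the substance, not a wire format.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class LatentDisclosure:
    provider: str       # name of the GenAI system's provider
    ai_generated: bool  # was the content created or altered by AI?
    created_at: str     # ISO-8601 timestamp

def build_manifest(provider: str, ai_generated: bool) -> bytes:
    """Serialize a disclosure record for embedding in file metadata."""
    record = LatentDisclosure(
        provider, ai_generated, datetime.now(timezone.utc).isoformat()
    )
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    # A content hash lets a public verification tool detect tampering.
    return payload + b"\n" + hashlib.sha256(payload).hexdigest().encode()

print(build_manifest("ExampleAI", ai_generated=True).decode())
```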
“Capture device manufacturers” (phones, tablets, and anything else that can capture images or videos), meanwhile, must “[e]mbed latent disclosures in content captured by the device by default.” The purpose of this requirement is to establish clear provenance for images and video and to mitigate the use of AI-derived deepfakes.

AB 2013 takes a different tack, focusing on training data: It requires developers of GenAI tools to supply “high-level” information about their training data, including data sources and ownership, collection history, copyright information, and disclosures about any personal data that may be included.
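AB 2013’s “high-level” disclosure lends itself to a structured record. In this equally hypothetical sketch, the fields mirror the categories named above but are otherwise our own invention:

```python
# Hypothetical model of an AB 2013 training-data disclosure. The
# fields mirror the categories named in the law's summary above but
# are otherwise illustrative.
from dataclasses import dataclass

@dataclass
class TrainingDataDisclosure:
    sources: list[str]                   # where the data came from
    owned_or_licensed: bool              # ownership/licensing status
    collection_period: str               # when the data was collected
    contains_copyrighted_material: bool
    contains_personal_data: bool

disclosure = TrainingDataDisclosure(
    sources=["public web crawl", "licensed news archive"],
    owned_or_licensed=True,
    collection_period="2019-2024",
    contains_copyrighted_material=True,
    contains_personal_data=False,
)
print(disclosure)
```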
AB 2013 took effect on January 1, 2026. SB 942 has been in effect throughout most of 2026, but its expanded regulations carry operative dates of January 1, 2027 for social platforms and AI companies and January 1, 2028 for device manufacturers.
Utah Artificial Intelligence Policy Act (UAIPA)
Who it impacts: Companies using AI to communicate autonomously with consumers, companies doing business with Utah-based customers, companies in regulated industries
Purpose: Consumer protection, transparency
Utah became the first state to regulate generative AI when SB 149, known as the UAIPA, passed in 2024. The act extends existing consumer protections to cover deceptive or fraudulent practices performed by or derived from AI systems, and amendments to the UAIPA also include protections against deepfakes and restrictions on and disclosure requirements for chatbots used for mental health or legal advice. Despite these requirements, the UAIPA remains one of the least restrictive pieces of AI legislation active in the U.S. today.
In 2025, SB 226 and SB 332 narrowed the scope of the UAIPA’s disclosure requirements: Companies now need to disclose AI use only when a consumer explicitly asks or when AI is used in “high-risk” interactions (like healthcare or financial advice).
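In code terms, the amended trigger reduces to a simple rule. Here’s a hedged sketch, with an illustrative (not statutory) set of high-risk interaction types:

```python
# Sketch of the amended UAIPA disclosure trigger: disclose on explicit
# request, or automatically in "high-risk" interactions. The set of
# high-risk interaction types below is an illustrative assumption.
HIGH_RISK_INTERACTIONS = {"healthcare", "financial_advice",
                          "mental_health", "legal_advice"}

def must_disclose_ai(interaction_type: str, consumer_asked: bool) -> bool:
    """Return True if the disclosure obligation is triggered."""
    return consumer_asked or interaction_type in HIGH_RISK_INTERACTIONS

assert must_disclose_ai("retail_support", consumer_asked=True)
assert must_disclose_ai("healthcare", consumer_asked=False)
assert not must_disclose_ai("retail_support", consumer_asked=False)
```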
Texas Responsible Artificial Intelligence Governance Act (TRAIGA)
Who it impacts: Companies doing business in Texas or with Texas-based customers
Purpose: Consumer protection, transparency
Texas HB 149 initially contained some of the same disclosure requirements present in California’s CAITA legislation, but the enacted version is more limited in scope. In its final form, TRAIGA prohibits AI use for the purposes of manipulating behavior, discrimination, creation of deepfakes or other illegal material, biometric data capture, and constitutional rights infringement.
It’s worth noting that TRAIGA’s definition of AI is much broader than most accepted definitions, and may extend to business automation systems, virtual assistants, content or recommendation algorithms, grammar check systems, and more.
New York RAISE Act
Who it impacts: Companies doing business in New York or with New York-based customers
Purpose: Transparency, public safety, consumer protection
When it takes effect: January 1, 2027
Like many pieces of legislation on this list, New York’s S6953-B/A6453-B, known as RAISE, has been pared down from its initial scope. But unlike many of these laws, RAISE is limited to the largest developers of advanced “frontier” AI models.
RAISE is designed largely to prevent the use of advanced AI models in the planning or commission of acts of violence and extremism, including chemical, biological, and cyber attacks. To that end, it requires organizations to establish a range of guardrails, governance, and cybersecurity practices. It also sets out an aggressive 72-hour reporting deadline following a “critical safety incident.” Violations can draw penalties of up to $1 million for a first offense and $3 million for each subsequent one, making the law expensive to ignore.
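Teams folding RAISE into their incident-response runbooks can model the reporting clock and penalty tiers straightforwardly. Only the 72-hour window and the dollar figures below come from the law’s summary; everything else is an assumption for the sketch:

```python
# RAISE's reporting clock and penalty tiers, as summarized above.
# Function and variable names are our own; only the 72-hour window
# and the dollar figures come from the law's summary.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def report_deadline(incident_detected_at: datetime) -> datetime:
    """Latest time a 'critical safety incident' report can be filed."""
    return incident_detected_at + REPORTING_WINDOW

def max_penalty_usd(prior_violations: int) -> int:
    """$1M ceiling for a first violation, $3M for subsequent ones."""
    return 1_000_000 if prior_violations == 0 else 3_000_000

detected = datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc)
print(report_deadline(detected))               # 2027-03-04 09:00:00+00:00
print(max_penalty_usd(0), max_penalty_usd(1))  # 1000000 3000000
```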
Colorado AI Act (SB24-205)
Who it impacts: Developers and deployers of high-risk AI systems used to make consequential decisions about Colorado consumers
Purpose: Consumer protection, transparency
When it takes effect: Unclear. Possibly June 30, 2026 or January 1, 2027
As one of the first wide-ranging efforts toward AI regulation in the United States, Colorado’s SB24-205 seeks to regulate how consumers are impacted by AI-driven decisions. The law requires companies using “high-risk artificial intelligence systems” to make “consequential decisions” — like those regarding education, employment, financial or lending services, government representation, housing, healthcare, insurance, or legal services — to have documented risk management policies, conduct impact assessments, and provide transparency to consumers.
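A deployer could screen its systems against those consequential-decision domains with something like the sketch below; the domain strings and the duties returned are illustrative assumptions, not statutory language:

```python
# Illustrative screen for SB24-205's "high-risk" designation based on
# the consequential-decision domains listed above. Domain strings and
# the duties returned are assumptions for this sketch.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_services", "lending",
    "government_services", "housing", "healthcare", "insurance",
    "legal_services",
}

DUTIES = ["documented risk management policy",
          "impact assessment",
          "consumer transparency notices"]

def duties_for(decision_domain: str) -> list[str]:
    """Duties triggered when a system makes consequential decisions
    in a covered domain; an empty list otherwise."""
    return DUTIES if decision_domain in CONSEQUENTIAL_DOMAINS else []

print(duties_for("housing"))  # all three duties apply
print(duties_for("gaming"))   # []
```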
It’s unclear when, or whether, SB24-205 will take effect. The law’s enforcement date has been pushed back from February 2026 to June 2026, and potentially even to January 2027, amid significant debate, a range of proposed amendments, and at least one high-profile lawsuit.
Federal action on AI
Executive Order 14179, signed in early 2025, revoked the previous administration’s AI safety mandates with the stated goal of solidifying U.S. dominance in AI. At the end of 2025, the White House created the Artificial Intelligence Litigation Task Force to challenge laws — like those listed above — that would make AI use “onerous” for U.S. companies.
In 2026, meanwhile, the White House released its National Policy Framework for Artificial Intelligence, a set of legislative proposals that would preempt state-level controls and establish federal oversight for AI.
While none of these actions have succeeded in upending the state-level laws discussed here, they strongly indicate that the current administration is invested in challenging any legislation that restricts AI use by U.S. companies.
AI is going to be regulated. At this point, it’s not a question of if, but when, how much, and by whom. To get ahead of these regulations, tech leaders should begin planning — and acting — now to avoid compliance violations and costly penalties.
In addition to ensuring compliance with legislation from individual states, complying with the EU AI Act is a must for any organization whose products or AI output reach the EU. And while its heavy penalties create a powerful incentive for compliance, its clear risk framework also offers a practical way forward: Classify each AI use case by risk tier, then apply the controls, audits, and documentation that tier demands.
There’s no established roadmap to follow with AI regulations yet. Until there is, the only viable way forward is to ensure that your organization is compliant with existing laws while keeping an eye on future developments.
To get more expert insights like these on AI, innovation, security, and more, visit our resources page today.