
What the EU AI Act Means for US Companies

Posted: March 27, 2026 in Cybersecurity.

Tags: Digital Forensics, Cybersecurity


The EU AI Act is the world's first comprehensive artificial intelligence regulation, and its reach extends far beyond the borders of the European Union. Like GDPR before it, which transformed data privacy practices globally, the AI Act applies extraterritorially to any organization that deploys AI systems affecting people in the EU, regardless of where the company is headquartered or where its servers are located.

For US companies with European customers, employees, partners, or users, this regulation is not a distant European concern. It is an operational compliance requirement with substantial penalties for non-compliance, including fines that can reach 7 percent of global annual turnover. The organizations that prepare now will have a competitive advantage; those that wait will face the same scramble that characterized the post-GDPR enforcement period.

Extraterritorial Scope: Why US Companies Are Covered

The AI Act applies to your organization if any of the following conditions are met:

  • Your AI systems are used by people in the EU. If your SaaS product includes AI features (recommendations, chatbots, content generation, automated decisions) and EU residents use it, you are within scope. It does not matter that your servers are in Virginia or that your company is incorporated in Delaware.
  • You develop AI systems placed on the EU market. Selling, licensing, or providing AI-powered products or services to EU customers triggers the Act's requirements for AI system providers.
  • You deploy AI systems whose outputs affect EU residents. Using AI internally to make decisions about EU-based employees (hiring, performance evaluation, termination), EU-based customers (credit decisions, insurance pricing, service eligibility), or EU-based individuals in any capacity brings you within scope.
  • You import AI systems into the EU. If you distribute AI products through EU-based resellers or partners, importer obligations may apply.

The practical implication: if your company has any EU business exposure and uses AI in any customer-facing or employee-affecting capacity, you should assume the AI Act applies to at least some of your AI systems.

The Risk-Based Classification System

The AI Act categorizes AI systems into four risk tiers, each with different compliance obligations. This classification determines what you must do and by when.

Unacceptable Risk: Prohibited Practices

These AI applications are banned outright, with no compliance path:

  • Social scoring systems that evaluate individuals based on social behavior or personal characteristics for general purposes
  • Real-time biometric identification in publicly accessible spaces for law enforcement (with narrow, court-authorized exceptions)
  • AI systems that use subliminal, manipulative, or deceptive techniques to distort behavior in ways that cause significant harm
  • Systems that exploit vulnerabilities related to age, disability, or socioeconomic situation
  • Biometric categorization systems that infer sensitive attributes (race, political opinions, sexual orientation) from biometric data
  • Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
  • Emotion recognition in workplaces and educational institutions (except for medical or safety purposes)

US companies should audit their AI portfolio for any systems that could fall into these categories, particularly emotion recognition tools used in employee monitoring and any behavioral scoring or manipulation systems.

High Risk: Strictest Compliance Requirements

The following AI use cases are classified as high-risk and subject to the most demanding compliance requirements:

  • Employment and workforce management: AI used in recruitment (resume screening, candidate ranking), performance evaluation, task assignment and monitoring, promotion decisions, and termination decisions
  • Education: AI used for student admissions, grading, exam proctoring, and learning assessment
  • Financial services: AI for credit scoring, creditworthiness assessment, and insurance risk assessment and pricing
  • Essential services: AI that determines access to essential private or public services (benefits, emergency services)
  • Law enforcement: AI used in criminal risk assessment, polygraph and emotion detection, evidence evaluation
  • Migration and border control: AI for asylum application assessment, security risk evaluation, document authentication
  • Critical infrastructure: AI managing water, gas, heating, electricity, and transport systems
  • Safety components: AI used as safety components in products already covered by EU product safety legislation (medical devices, automotive, aviation)

For most US companies, the employment/HR and financial services categories are the most relevant. If you use AI tools for any aspect of hiring, performance management, or financial decision-making that affects EU individuals, those systems are likely classified as high-risk.

Limited Risk: Transparency Obligations

AI systems that interact with people or generate content must meet transparency requirements:

  • Chatbots and virtual assistants must clearly disclose to users that they are interacting with AI, not a human
  • AI-generated or manipulated content (deepfakes, synthetic media, AI-written text) must be labeled as AI-generated when distributed publicly
  • Emotion recognition systems (where not prohibited) must inform individuals that their emotions are being analyzed
  • AI systems that generate or manipulate image, audio, or video content that could appear authentic must include machine-readable metadata identifying it as AI-generated
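The machine-readable labeling requirement mandates the disclosure, not a specific format; emerging standards such as C2PA content credentials are one way to meet it. A minimal sketch of what such a label might contain (the schema and function name here are illustrative, not prescribed by the Act):

```python
import json
from datetime import datetime, timezone

def provenance_manifest(generator: str, model: str) -> str:
    """Build a minimal machine-readable AI-generation label.

    Illustrative schema only: the AI Act requires that synthetic
    content be identifiable as AI-generated, but does not fix the
    metadata format.
    """
    return json.dumps({
        "ai_generated": True,
        "generator": generator,
        "model": model,
        "created": datetime.now(timezone.utc).isoformat(),
    })

# Hypothetical generator and model names for illustration
manifest = provenance_manifest("acme-imagegen", "imagegen-v3")
```

In practice this metadata would be embedded in the asset itself (image EXIF/XMP, video container metadata) rather than shipped as a sidecar, so that it survives distribution.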

Minimal Risk: No Specific Obligations

Most AI applications fall here: spam filters, AI-powered search, recommendation systems for entertainment, AI in video games, inventory management systems, and AI-assisted software development tools. No specific AI Act obligations apply, though general EU laws (GDPR, consumer protection) still do.

Compliance Requirements for High-Risk Systems

If your AI system is classified as high-risk, you must implement comprehensive compliance measures:

  1. Risk management system: Establish and maintain a risk management process throughout the AI system's lifecycle. Identify, analyze, and mitigate risks. Document risk management decisions and residual risks.
  2. Data governance: Training, validation, and testing data must be relevant, representative, sufficiently accurate, and as complete as necessary. Document data collection sources, preparation methods, and any data gaps or limitations. Address bias in training data explicitly.
  3. Technical documentation: Maintain detailed documentation of the AI system's design, development, capabilities, limitations, and intended purpose. This documentation must be sufficient for authorities to assess the system's compliance.
  4. Record-keeping and logging: Implement automatic logging of the AI system's operations to ensure traceability. Logs must be sufficient to reconstruct the system's decision-making for individual cases.
  5. Transparency and user information: Provide clear, comprehensive instructions for use including the system's intended purpose, level of accuracy, known limitations, and circumstances where it may produce errors.
  6. Human oversight: Design systems to be effectively overseen by humans. This includes the ability to understand the system's capabilities and limitations, correctly interpret outputs, decide not to use the system or override its output, and intervene or stop the system in real-time.
  7. Accuracy, robustness, and cybersecurity: High-risk AI systems must achieve appropriate levels of accuracy and be resilient to errors, faults, and attempts at manipulation. Cybersecurity measures must protect against unauthorized access and data poisoning.
  8. Conformity assessment: Before deployment, conduct a conformity assessment (self-assessment for most categories, or third-party assessment for biometric systems) and register the system in an EU database.
  9. Post-market monitoring: Implement ongoing monitoring of the AI system's performance in production, including accuracy degradation, bias emergence, and incident tracking.
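The record-keeping requirement (item 4) calls for logs detailed enough to reconstruct individual decisions after the fact. A minimal sketch of what one decision-log entry might capture; the field names are illustrative, not mandated by the Act, and hashing the input rather than storing it raw keeps the log aligned with GDPR data minimization:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(system_id, model_version, input_data, output,
                 operator, overridden=False):
    """Build one traceability record for a high-risk AI decision.

    Illustrative schema: the Act requires automatic logging
    sufficient to reconstruct decisions, not this exact layout.
    """
    record = {
        "system_id": system_id,
        "model_version": model_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash of the canonicalized input, so the log proves what
        # was processed without duplicating personal data
        "input_hash": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_operator": operator,
        "human_override": overridden,
    }
    return json.dumps(record)

# Hypothetical resume-screening decision for illustration
entry = log_decision(
    system_id="resume-screener-v2",
    model_version="2.4.1",
    input_data={"candidate_id": "c-1042", "role": "analyst"},
    output={"shortlisted": False, "score": 0.41},
    operator="recruiter-07",
)
```

Recording the model version and the human operator alongside each output is what makes the human-oversight requirement (item 6) auditable rather than merely asserted.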

General-Purpose AI (GPAI) and Foundation Model Obligations

The AI Act includes specific obligations for providers of general-purpose AI models (large language models, image generation models, etc.), even when these models are not classified as high-risk themselves:

  • Maintain and provide technical documentation about the model
  • Provide information and documentation to downstream providers who integrate the model into their applications
  • Comply with EU copyright law, including transparency about training data sources
  • Publish a publicly available summary of the training data used

For GPAI models classified as posing "systemic risk" (currently presumed when cumulative training compute exceeds 10^25 floating-point operations, or FLOPs), additional obligations include adversarial testing, incident reporting, energy consumption reporting, and cybersecurity protections.
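The 10^25 figure refers to cumulative training compute. The Act does not prescribe an estimation method, but a widely used back-of-the-envelope approximation is roughly 6 FLOPs per parameter per training token; a sketch, with a hypothetical model size and token count:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough cumulative training compute: ~6 FLOPs per parameter
    per training token (common approximation, not from the Act)."""
    return 6 * params * tokens

# Presumption threshold for systemic-risk GPAI models
SYSTEMIC_RISK_THRESHOLD = 1e25

# Hypothetical model: 70B parameters, 2 trillion training tokens
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e}")                    # 8.40e+23
print(flops >= SYSTEMIC_RISK_THRESHOLD)  # False -- below the threshold
```

By this estimate, a 70B-parameter model trained on 2T tokens lands about an order of magnitude below the threshold, which is why the systemic-risk tier currently captures only the largest frontier models.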

Timeline and Enforcement

The AI Act entered into force on August 1, 2024, with phased implementation designed to give organizations time to comply:

  • February 2, 2025: Prohibitions on unacceptable-risk AI practices take effect. If you have any prohibited AI systems, they must be discontinued now.
  • August 2, 2025: General-purpose AI obligations apply. GPAI model providers must comply with documentation, transparency, and copyright requirements. Governance structures (AI Office, advisory bodies) become operational.
  • August 2, 2026: Full enforcement of high-risk AI system requirements. This is the primary compliance deadline for most organizations.
  • August 2, 2027: Requirements for AI embedded in products covered by existing EU product safety legislation (medical devices, automotive, etc.).

Penalties scale by severity:

  • Violations of prohibited AI practices: up to 35 million euros or 7% of global annual turnover, whichever is higher
  • Non-compliance with high-risk AI requirements: up to 15 million euros or 3% of global annual turnover
  • Providing incorrect information to authorities: up to 7.5 million euros or 1.5% of global annual turnover
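The "whichever is higher" structure means exposure scales with revenue rather than capping at the fixed amount. A quick sketch of the arithmetic using the tiers above (the turnover figure is hypothetical):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """EU AI Act fine ceiling: the higher of a fixed amount or a
    percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# (fixed cap in EUR, share of global turnover) per violation tier
TIERS = {
    "prohibited_practice":      (35_000_000, 0.07),
    "high_risk_noncompliance":  (15_000_000, 0.03),
    "incorrect_information":    (7_500_000, 0.015),
}

turnover = 2_000_000_000  # hypothetical 2B EUR global annual turnover
for violation, (cap, pct) in TIERS.items():
    print(f"{violation}: up to {max_fine(turnover, cap, pct):,.0f} EUR")
# At 2B turnover the percentage dominates every tier:
# roughly 140M, 60M, and 30M EUR respectively
```

For any company with global turnover above 500 million euros, the percentage prong of the top tier already exceeds the 35 million euro fixed cap.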

What US Companies Should Do Now

  1. Inventory all AI systems: Catalog every AI tool, model, API, and automated decision-making system in your organization. Include third-party AI services you integrate or resell. Many organizations are surprised by the number and variety of AI systems in use.
  2. Map EU exposure: For each AI system, determine whether it interacts with, makes decisions about, or produces outputs affecting EU residents. Map the data flows to understand where EU personal data enters AI processing.
  3. Classify risk levels: Assign each in-scope AI system to the appropriate risk tier. Pay special attention to HR/recruitment AI and financial decision-making AI, which are commonly high-risk.
  4. Conduct gap analysis: For high-risk systems, compare current documentation, transparency, human oversight, and monitoring practices against the Act's requirements. Documentation and human oversight gaps are typically the largest.
  5. Build your AI governance framework: The Act requires documented governance structures, policies, and accountability. This is the most important long-term investment, as it provides the foundation for complying with AI regulations globally.
  6. Engage qualified legal counsel: Preferably EU-qualified lawyers experienced in technology regulation. The Act's interaction with GDPR, the Digital Services Act, and sector-specific regulations creates complexity that requires specialized expertise.
  7. Budget for compliance: High-risk system compliance requires investment in documentation, testing, monitoring infrastructure, and ongoing governance. Start budget planning now for the August 2026 enforcement deadline.

The GDPR Parallel

GDPR enforcement provides a predictive model for how the AI Act will unfold. Many US companies initially ignored GDPR as a "European problem," then scrambled to comply when enforcement actions began generating headline fines. The companies that prepared early captured competitive advantage: they could immediately serve EU customers while competitors were still building compliance programs.

The AI Act will follow a similar trajectory, with an important difference: AI systems are more technically complex than data processing workflows, which means remediation takes longer. An organization that starts GDPR compliance in a rush can potentially catch up in 6 to 12 months. An organization that needs to retrofit high-risk AI system compliance, including documentation, testing, monitoring, and human oversight mechanisms, may need 18 to 24 months.

The European Commission's AI Act portal provides the full regulatory text, implementation guidance, FAQ documents, and updates on enforcement body establishment.

For organizations also managing US compliance obligations like CMMC, HIPAA, or SOC 2, the AI Act adds another compliance layer but shares common elements: risk assessment, documentation, monitoring, accountability structures, and incident reporting. A unified compliance approach that addresses multiple frameworks through shared governance infrastructure is significantly more efficient than building separate compliance programs for each regulation.

Frequently Asked Questions

Does the EU AI Act apply if we have no EU office or employees?
Yes. The Act applies based on where the AI system's effects are felt, not where the company is located. If your AI-powered product or service is used by EU residents, or if your AI systems make decisions affecting EU individuals, the Act applies regardless of your corporate location. This mirrors GDPR's extraterritorial scope, which has been enforced against many US companies with no EU physical presence.
Are chatbots considered high-risk under the AI Act?
Most chatbots fall under limited risk (transparency obligations), not high risk. You must disclose to users that they are interacting with AI. However, if a chatbot makes or directly influences decisions that fall into high-risk categories (employment screening, financial service decisions, access to essential services), that specific use would be classified as high-risk regardless of the underlying technology being a chatbot.
How does the AI Act interact with GDPR?
The AI Act complements rather than replaces GDPR. If your AI system processes personal data of EU residents, both regulations apply simultaneously. GDPR governs the data processing aspects (consent, data minimization, data subject rights), while the AI Act governs the AI system aspects (risk management, transparency, human oversight, accuracy). Organizations already GDPR-compliant have a head start on several AI Act requirements.
What counts as an AI system under the Act?
The Act defines AI systems broadly as machine-based systems that, for a given set of objectives, generate outputs such as predictions, recommendations, decisions, or content that can influence physical or virtual environments. This includes machine learning models, deep learning, natural language processing, computer vision, recommendation engines, and automated decision-making systems. Simple rule-based automation (if-then logic without learning) is generally excluded.
What is the penalty for non-compliance?
Maximum penalties scale by violation type: up to 35 million euros or 7% of global annual turnover for prohibited AI practices, up to 15 million euros or 3% for high-risk system non-compliance, and up to 7.5 million euros or 1.5% for providing incorrect information to authorities. For SMEs and startups, proportionate lower caps apply, but the financial exposure remains significant for any company with EU revenue.
Should we wait for US AI regulation instead?
No. The EU AI Act is already in force with enforcement dates approaching. US federal AI regulation is still evolving and will likely take a different approach (sector-specific rather than comprehensive). However, several US states (Colorado, Illinois, California) are enacting AI-specific laws, particularly around employment and consumer protection. Building a governance framework for the EU AI Act positions you well for emerging US regulations. The compliance investment is not wasted regardless of how US regulation develops.

Need Help with AI Compliance?

Petronella Technology Group helps organizations navigate AI governance and compliance across US and international frameworks. Schedule a free consultation or call 919-348-4912.


About the Author

Craig Petronella, CEO and Founder of Petronella Technology Group
CEO, Founder & AI Architect, Petronella Technology Group

Craig Petronella founded Petronella Technology Group in 2002 and has spent more than 30 years working at the intersection of cybersecurity, AI, compliance, and digital forensics. He holds the CMMC Registered Practitioner credential (RP-1372) issued by the Cyber AB, is an NC Licensed Digital Forensics Examiner (License #604180-DFE), and completed MIT Professional Education programs in AI, Blockchain, and Cybersecurity. Craig also holds CompTIA Security+, CCNA, and Hyperledger certifications.

He is an Amazon #1 Best-Selling Author of 15+ books on cybersecurity and compliance, host of the Encrypted Ambition podcast (95+ episodes on Apple Podcasts, Spotify, and Amazon), and a cybersecurity keynote speaker with 200+ engagements at conferences, law firms, and corporate boardrooms. Craig serves as Contributing Editor for Cybersecurity at NC Triangle Attorney at Law Magazine and is a guest lecturer at NCCU School of Law. He has served as a digital forensics expert witness in federal and state court cases involving cybercrime, cryptocurrency fraud, SIM-swap attacks, and data breaches.

Under his leadership, Petronella Technology Group has served 2,500+ clients, maintained a zero-breach record among compliant clients, earned a BBB A+ rating every year since 2003, and been featured as a cybersecurity authority on CBS, ABC, NBC, FOX, and WRAL. The company leverages SOC 2 Type II certified platforms and specializes in AI implementation, managed cybersecurity, CMMC/HIPAA/SOC 2 compliance, and digital forensics for businesses across the United States.
