What the EU AI Act Means for US Companies
Posted March 27, 2026 in Cybersecurity.
The EU AI Act is the world's first comprehensive artificial intelligence regulation, and its reach extends far beyond the borders of the European Union. Like GDPR before it, which transformed data privacy practices globally, the AI Act applies extraterritorially to any organization that deploys AI systems affecting people in the EU, regardless of where the company is headquartered or where its servers are located.
For US companies with European customers, employees, partners, or users, this regulation is not a distant European concern. It is an operational compliance requirement with substantial penalties for non-compliance, including fines that can reach 7 percent of global annual turnover. The organizations that prepare now will have a competitive advantage; those that wait will face the same scramble that characterized the post-GDPR enforcement period.
Extraterritorial Scope: Why US Companies Are Covered
The AI Act applies to your organization if any of the following conditions are met:
- Your AI systems are used by people in the EU. If your SaaS product includes AI features (recommendations, chatbots, content generation, automated decisions) and EU residents use it, you are within scope. It does not matter that your servers are in Virginia or that your company is incorporated in Delaware.
- You develop AI systems placed on the EU market. Selling, licensing, or providing AI-powered products or services to EU customers triggers the Act's requirements for AI system providers.
- You deploy AI systems whose outputs affect EU residents. Using AI internally to make decisions about EU-based employees (hiring, performance evaluation, termination), EU-based customers (credit decisions, insurance pricing, service eligibility), or EU-based individuals in any capacity brings you within scope.
- You import AI systems into the EU. If you distribute AI products through EU-based resellers or partners, importer obligations may apply.
The practical implication: if your company has any EU business exposure and uses AI in any customer-facing or employee-affecting capacity, you should assume the AI Act applies to at least some of your AI systems.
The Risk-Based Classification System
The AI Act categorizes AI systems into four risk tiers, each with different compliance obligations. This classification determines what you must do and by when.
Unacceptable Risk: Prohibited Practices
These AI applications are banned outright, with no compliance path:
- Social scoring systems that evaluate individuals based on social behavior or personal characteristics for general purposes
- Real-time biometric identification in publicly accessible spaces for law enforcement (with narrow, court-authorized exceptions)
- AI systems that use subliminal, manipulative, or deceptive techniques to distort behavior in ways that cause significant harm
- Systems that exploit vulnerabilities related to age, disability, or socioeconomic situation
- Biometric categorization systems that infer sensitive attributes (race, political opinions, sexual orientation) from biometric data
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition in workplaces and educational institutions (except for medical or safety purposes)
US companies should audit their AI portfolio for any systems that could fall into these categories, particularly emotion recognition tools used in employee monitoring and any behavioral scoring or manipulation systems.
High Risk: Strictest Compliance Requirements
The following AI use cases are classified as high-risk and subject to the most demanding compliance requirements:
- Employment and workforce management: AI used in recruitment (resume screening, candidate ranking), performance evaluation, task assignment and monitoring, promotion decisions, and termination decisions
- Education: AI used for student admissions, grading, exam proctoring, and learning assessment
- Financial services: AI for credit scoring, creditworthiness assessment, and insurance risk assessment and pricing
- Essential services: AI that determines access to essential private or public services (benefits, emergency services)
- Law enforcement: AI used in criminal risk assessment, polygraph and emotion detection, evidence evaluation
- Migration and border control: AI for asylum application assessment, security risk evaluation, document authentication
- Critical infrastructure: AI managing water, gas, heating, electricity, and transport systems
- Safety components: AI used as safety components in products already covered by EU product safety legislation (medical devices, automotive, aviation)
For most US companies, the employment/HR and financial services categories are the most relevant. If you use AI tools for any aspect of hiring, performance management, or financial decision-making that affects EU individuals, those systems are likely classified as high-risk.
Limited Risk: Transparency Obligations
AI systems that interact with people or generate content must meet transparency requirements:
- Chatbots and virtual assistants must clearly disclose to users that they are interacting with AI, not a human
- AI-generated or manipulated content (deepfakes, synthetic media, AI-written text) must be labeled as AI-generated when distributed publicly
- Emotion recognition systems (where not prohibited) must inform individuals that their emotions are being analyzed
- AI systems that generate or manipulate image, audio, or video content that could appear authentic must include machine-readable metadata identifying it as AI-generated
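The Act does not prescribe a single labeling format; industry standards such as C2PA content credentials are the direction this is heading. Purely as a minimal sketch of the concept, the following embeds an AI-generation flag in a PNG's metadata using Pillow. The key names ("ai_generated", "generator") are illustrative assumptions, not an official schema.

```python
# Minimal sketch: embed a machine-readable AI-generation flag in PNG metadata.
# Key names are illustrative assumptions, not an official schema; production
# systems should follow an adopted standard such as C2PA content credentials.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, model_name: str) -> None:
    """Re-save a PNG with metadata marking it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable flag
    meta.add_text("generator", model_name)  # which system produced the content
    img.save(dst_path, pnginfo=meta)        # dst_path must be a .png

def is_labeled_ai_generated(path: str) -> bool:
    """Check the flag when ingesting or redistributing content."""
    return Image.open(path).info.get("ai_generated") == "true"
```

Metadata alone is fragile, since it can be stripped when content is re-encoded or shared; in practice it would be paired with visible disclosure and, where feasible, watermarking.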
Minimal Risk: No Specific Obligations
Most AI applications fall here: spam filters, AI-powered search, recommendation systems for entertainment, AI in video games, inventory management systems, and AI-assisted software development tools. No specific AI Act obligations apply, though general EU laws (GDPR, consumer protection) still do.
Compliance Requirements for High-Risk Systems
If your AI system is classified as high-risk, you must implement comprehensive compliance measures:
- Risk management system: Establish and maintain a risk management process throughout the AI system's lifecycle. Identify, analyze, and mitigate risks. Document risk management decisions and residual risks.
- Data governance: Training, validation, and testing data must be relevant, representative, sufficiently accurate, and as complete as necessary. Document data collection sources, preparation methods, and any data gaps or limitations. Address bias in training data explicitly.
- Technical documentation: Maintain detailed documentation of the AI system's design, development, capabilities, limitations, and intended purpose. This documentation must be sufficient for authorities to assess the system's compliance.
- Record-keeping and logging: Implement automatic logging of the AI system's operations to ensure traceability. Logs must be sufficient to reconstruct the system's decision-making for individual cases. A minimal logging sketch, covering this and the human-oversight item below, follows this list.
- Transparency and user information: Provide clear, comprehensive instructions for use including the system's intended purpose, level of accuracy, known limitations, and circumstances where it may produce errors.
- Human oversight: Design systems to be effectively overseen by humans. This includes the ability to understand the system's capabilities and limitations, correctly interpret outputs, decide not to use the system or override its output, and intervene or stop the system in real-time.
- Accuracy, robustness, and cybersecurity: High-risk AI systems must achieve appropriate levels of accuracy and be resilient to errors, faults, and attempts at manipulation. Cybersecurity measures must protect against unauthorized access and data poisoning.
- Conformity assessment: Before deployment, conduct a conformity assessment (self-assessment for most categories, or third-party assessment for biometric systems) and register the system in an EU database.
- Post-market monitoring: Implement ongoing monitoring of the AI system's performance in production, including accuracy degradation, bias emergence, and incident tracking. A drift-check sketch also follows this list.
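To make the record-keeping and human-oversight requirements concrete, here is a minimal, hypothetical sketch of a decision-logging wrapper: every automated decision is written out with its inputs, output, model version, and any human override, so individual decisions can be reconstructed later. The field names and JSONL destination are assumptions for illustration, not a schema the Act prescribes.

```python
# Minimal sketch: traceable decision logging with a human-override trail.
# Field names and the JSONL destination are illustrative assumptions; the Act
# requires logs sufficient to reconstruct individual decisions, not this schema.
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str, inputs: dict, output: dict,
                 human_reviewer: str | None = None,
                 human_override: dict | None = None,
                 log_path: str = "ai_decision_log.jsonl") -> str:
    """Append one decision record; returns its ID for cross-referencing."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,    # ties the decision to a model build
        "inputs": inputs,                  # what the system saw
        "output": output,                  # what the system decided
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
        "human_override": human_override,  # final decision if the reviewer disagreed
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]
```

A real deployment would write to append-only, access-controlled storage and minimize personal data in the logs in line with GDPR; the essential point is that each record carries enough context to reconstruct the decision and shows where a human intervened.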
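For post-market monitoring, one simple starting point is comparing production accuracy against the level documented at conformity assessment and alerting when it degrades. The window size and drop threshold below are assumed values for illustration:

```python
# Minimal sketch: flag production accuracy degradation against the accuracy
# level documented at conformity assessment. Window and threshold are assumed.
from collections import deque

class AccuracyDriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 500,
                 max_drop: float = 0.05):
        self.baseline = baseline_accuracy
        self.max_drop = max_drop
        self.outcomes = deque(maxlen=window)  # rolling window of 1 (hit) / 0 (miss)

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def degraded(self) -> bool:
        """True once accuracy has dropped more than max_drop below baseline."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough production data yet
        current = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - current) > self.max_drop
```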
General-Purpose AI (GPAI) and Foundation Model Obligations
The AI Act includes specific obligations for providers of general-purpose AI models (large language models, image generation models, etc.), even when these models are not classified as high-risk themselves:
- Maintain and provide technical documentation about the model
- Provide information and documentation to downstream providers who integrate the model into their applications
- Comply with EU copyright law, including transparency about training data sources
- Publish a publicly available summary of the training data used
For GPAI models classified as posing "systemic risk" (currently based on a training compute threshold of 10^25 floating-point operations, or FLOPs), additional obligations include adversarial testing, incident reporting, energy consumption reporting, and cybersecurity protections.
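To get a feel for where the 10^25 FLOPs threshold sits, a widely used back-of-the-envelope estimate puts training compute at roughly 6 × parameters × training tokens. Under that heuristic (a community rule of thumb, not an AI Act formula, and with hypothetical model sizes):

```python
# Back-of-the-envelope training-compute estimate: ~6 * params * tokens.
# The 6ND heuristic is a common approximation, not an AI Act formula;
# the model sizes and token counts below are hypothetical examples.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the Act's GPAI provisions

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for params, tokens in [(7e9, 2e12), (70e9, 15e12), (400e9, 15e12)]:
    flops = training_flops(params, tokens)
    verdict = "systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{params/1e9:.0f}B params x {tokens/1e12:.0f}T tokens: "
          f"{flops:.1e} FLOPs -> {verdict}")
```

On this rough estimate, only frontier-scale training runs cross the line, which is consistent with the provision's focus on the largest general-purpose models.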
Timeline and Enforcement
The AI Act entered into force on August 1, 2024, with phased implementation designed to give organizations time to comply:
- February 2, 2025: Prohibitions on unacceptable-risk AI practices take effect. If you have any prohibited AI systems, they must be discontinued now.
- August 2, 2025: General-purpose AI obligations apply. GPAI model providers must comply with documentation, transparency, and copyright requirements. Governance structures (AI Office, advisory bodies) become operational.
- August 2, 2026: Full enforcement of high-risk AI system requirements. This is the primary compliance deadline for most organizations.
- August 2, 2027: Requirements for AI embedded in products covered by existing EU product safety legislation (medical devices, automotive, etc.).
Penalties scale by severity:
- Violations of prohibited AI practices: up to 35 million euros or 7% of global annual turnover, whichever is higher
- Non-compliance with high-risk AI requirements: up to 15 million euros or 3% of global annual turnover
- Providing incorrect, incomplete, or misleading information to authorities: up to 7.5 million euros or 1% of global annual turnover
What US Companies Should Do Now
- Inventory all AI systems: Catalog every AI tool, model, API, and automated decision-making system in your organization. Include third-party AI services you integrate or resell. Many organizations are surprised by the number and variety of AI systems in use. A minimal inventory sketch follows this list.
- Map EU exposure: For each AI system, determine whether it interacts with, makes decisions about, or produces outputs affecting EU residents. Map the data flows to understand where EU personal data enters AI processing.
- Classify risk levels: Assign each in-scope AI system to the appropriate risk tier. Pay special attention to HR/recruitment AI and financial decision-making AI, which are commonly high-risk.
- Conduct gap analysis: For high-risk systems, compare current documentation, transparency, human oversight, and monitoring practices against the Act's requirements. Documentation and human oversight gaps are typically the largest.
- Build your AI governance framework: The Act requires documented governance structures, policies, and accountability. This is the most important long-term investment, as it provides the foundation for complying with AI regulations globally.
- Engage qualified legal counsel: Preferably EU-qualified lawyers experienced in technology regulation. The Act's interaction with GDPR, the Digital Services Act, and sector-specific regulations creates complexity that requires specialized expertise.
- Budget for compliance: High-risk system compliance requires investment in documentation, testing, monitoring infrastructure, and ongoing governance. Start budget planning now for the August 2026 enforcement deadline.
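As a concrete starting point for the inventory and classification steps above, here is a minimal sketch of what an AI-system inventory record might look like. The fields, tiers, and example entries are illustrative assumptions; actual risk-tier assignments require legal review.

```python
# Minimal sketch of an AI-system inventory record with an AI Act risk tier.
# Fields and example entries are illustrative; classification needs legal review.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency)"
    MINIMAL = "minimal risk"
    UNCLASSIFIED = "needs review"

@dataclass
class AISystem:
    name: str
    vendor: str                   # internal build or third-party service
    purpose: str                  # e.g. "resume screening"
    affects_eu_individuals: bool  # drives extraterritorial scope
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED
    owner: str = ""               # accountable team or person
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystem("resume-screener", "third-party SaaS", "candidate ranking",
             affects_eu_individuals=True, risk_tier=RiskTier.HIGH, owner="HR ops"),
    AISystem("support-chatbot", "internal", "customer support chat",
             affects_eu_individuals=True, risk_tier=RiskTier.LIMITED),
]
in_scope = [s for s in inventory if s.affects_eu_individuals]
```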
The GDPR Parallel
GDPR enforcement provides a predictive model for how the AI Act will unfold. Many US companies initially ignored GDPR as a "European problem," then scrambled to comply when enforcement actions began generating headline fines. The companies that prepared early captured competitive advantage: they could immediately serve EU customers while competitors were still building compliance programs.
The AI Act will follow a similar trajectory, with an important difference: AI systems are more technically complex than data processing workflows, which means remediation takes longer. An organization that scrambles into GDPR compliance late can potentially catch up in 6 to 12 months. An organization that needs to retrofit high-risk AI system compliance, including documentation, testing, monitoring, and human oversight mechanisms, may need 18 to 24 months.
The European Commission's AI Act portal provides the full regulatory text, implementation guidance, FAQ documents, and updates on enforcement body establishment.
For organizations also managing US compliance obligations like CMMC, HIPAA, or SOC 2, the AI Act adds another compliance layer but shares common elements: risk assessment, documentation, monitoring, accountability structures, and incident reporting. A unified compliance approach that addresses multiple frameworks through shared governance infrastructure is significantly more efficient than building separate compliance programs for each regulation.
Frequently Asked Questions
- Does the EU AI Act apply if we have no EU office or employees?
- Are chatbots considered high-risk under the AI Act?
- How does the AI Act interact with GDPR?
- What counts as an AI system under the Act?
- What is the penalty for non-compliance?
- Should we wait for US AI regulation instead?
Need Help with AI Compliance?
Petronella Technology Group helps organizations navigate AI governance and compliance across US and international frameworks. Schedule a free consultation or call 919-348-4912.