
EU AI Act for Startups and SMEs: The Practical Compliance Guide

There is no size exemption under the EU AI Act. If your AI system affects people in the European Union, you must comply — whether you have 5 employees or 5,000. A two-person startup selling an AI hiring tool to a single German client has the same core legal obligations as a Fortune 500 enterprise.

But the regulation is not blind to startup realities. Regulation (EU) 2024/1689 includes specific support measures for startups, SMEs, and microenterprises — reduced fines, regulatory sandboxes, and simplified documentation. If you plan early, compliance becomes a manageable engineering problem rather than a company-ending legal risk.

This guide covers every SME-relevant provision with cost estimates, prioritisation frameworks, and real-world scenarios. Start with our EU AI Act compliance checklist if you need a primer on the Act's structure.

TL;DR — AI Act compliance for startups

  • The AI Act applies to all organisations regardless of size. There is no blanket SME exemption.
  • Reduced penalties: SMEs and startups pay the lower of the fixed euro amount or the percentage of turnover — the inverse of the formula applied to large enterprises. See our penalties guide for the full breakdown.
  • Regulatory sandboxes (Articles 57–62) let startups develop and test AI systems under supervisory guidance, with potential immunity from fines for good-faith participation.
  • Microenterprises (<10 employees, ≤EUR 2 million turnover) benefit from simplified quality management and technical documentation requirements under Article 63.
  • AI regulatory sandboxes must prioritise SME access — member states are legally required to give small businesses preferential treatment.
  • Budget realistically: a high-risk AI system at a lean startup costs EUR 50,000–150,000 to bring into compliance; minimal-risk systems may cost almost nothing.
  • Start with classification: use a free risk classification tool to determine your obligations before spending anything.

No size exemption — but important support measures

The AI Act does not contain a general exemption for small businesses. Article 3 defines "provider" and "deployer" without reference to company size — if you develop or use an AI system in the EU market, you are subject to the Act regardless of headcount.

However, the legislators recognised the disproportionate burden on small companies. The Act includes several SME-specific provisions:

1. Proportionate penalties (Article 99(6))

For SMEs, including startups, fines are calculated as the lower of the fixed euro amount or the revenue percentage. For large enterprises, it is the higher of the two. This inversion is significant — a pre-revenue startup cannot be fined based on a percentage of zero turnover, and even a profitable small company will almost always pay the fixed amount rather than the percentage.

2. Mandatory regulatory sandboxes (Articles 57–62)

Every EU member state must establish at least one AI regulatory sandbox by 2 August 2026. These sandboxes must provide priority access to SMEs and startups, including those established in other member states. Participation gives you direct access to regulatory guidance and — critically — protection from enforcement action for good-faith compliance efforts during the sandbox period.

3. Simplified documentation for microenterprises (Article 63)

Microenterprises and startups are permitted to use simplified forms for quality management systems and technical documentation. The European Commission is mandated to develop standardised templates specifically designed for small providers.

4. Standardised compliance templates

The AI Office is developing standardised templates and toolkits to reduce the cost and complexity of compliance documentation. These are being designed with input from SME representatives and will be freely available.

5. Priority access to testing facilities

National competence centres and European Digital Innovation Hubs must provide preferential access to AI testing infrastructure for SMEs. This includes access to computational resources, datasets, and technical expertise that would otherwise be unaffordable.

6. Codes of conduct for non-high-risk systems

Article 95 encourages the development of voluntary codes of conduct for AI systems that are not classified as high-risk. For startups operating in the minimal or limited risk tiers, following an industry code of conduct may be the most efficient path to demonstrating responsible AI practices without the overhead of full high-risk compliance.

Common startup AI use cases by risk tier

Risk classification determines 90% of your obligations and 95% of your costs. Use our free classification tool or read the detailed guide to determine where your system falls.

Use case | Risk tier | Key obligations
AI-powered spam filter | Minimal | None under the AI Act
Internal analytics dashboard with ML | Minimal | None under the AI Act
Product recommendation engine | Minimal | None under the AI Act
AI chatbot (customer-facing) | Limited | Transparency: disclose AI interaction (Article 50)
AI-generated content (text, images) | Limited | Label content as AI-generated
Deepfake or synthetic media tools | Limited | Mandatory disclosure of artificial generation
AI-based CV screening / recruitment | High-risk | Full compliance: risk management, documentation, testing, monitoring (Article 6, Annex III)
AI credit scoring / lending decisions | High-risk | Full compliance as above
AI for insurance pricing | High-risk | Full compliance as above
AI diagnostic tool (medical device) | High-risk | Full compliance + sector-specific regulation
Student assessment / exam proctoring AI | High-risk | Full compliance as above
AI emotion recognition in workplace | Prohibited | Banned outright under Article 5
Social scoring system | Prohibited | Banned outright
Subliminal manipulation techniques | Prohibited | Banned outright

Minimal risk (no obligations)

Most startup AI falls here. If your system does not interact directly with natural persons in a way that requires transparency, and is not listed in Annex III as a high-risk use case, you have zero mandatory obligations under the AI Act. Internal tools, analytics, recommendation engines, search algorithms, and optimisation systems are typically minimal risk.

What to do: Document your classification reasoning. A written rationale is strong evidence of good faith if a regulator ever questions your system.
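What that rationale can look like in practice: a short structured record kept in version control. The sketch below is illustrative only; the ClassificationRationale class and its field names are our own, not anything the Act prescribes.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ClassificationRationale:
    """One record per AI system; field names are illustrative, not prescribed."""
    system_name: str
    risk_tier: str                 # "minimal", "limited", "high-risk", "prohibited"
    annex_iii_match: bool          # does any Annex III use case apply?
    interacts_with_persons: bool   # would trigger Article 50 transparency
    reasoning: str
    assessed_on: str

rationale = ClassificationRationale(
    system_name="internal-analytics-dashboard",
    risk_tier="minimal",
    annex_iii_match=False,
    interacts_with_persons=False,
    reasoning="Internal ML dashboard; no Annex III use case applies and the "
              "system never interacts with natural persons, so no Article 50 duty.",
    assessed_on=date.today().isoformat(),
)

# Keep this file in version control: it is your good-faith evidence.
print(json.dumps(asdict(rationale), indent=2))
```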

Limited risk (transparency only)

AI systems that interact directly with people — chatbots, virtual assistants, content generators — fall under limited risk with transparency obligations under Article 50. The requirements are straightforward:

  • Chatbots: clearly inform users they are interacting with an AI system (unless it is obvious from the circumstances).
  • AI-generated content: mark outputs (text, audio, images, video) as artificially generated or manipulated.
  • Emotion recognition / biometric categorisation: inform the person being analysed.

What to do: Add a visible AI disclosure to your user interface. Update your terms of service. Label AI-generated content with metadata and visible indicators. The compliance cost is negligible.
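As an illustration, the disclosure and labelling layer in a chat backend can be a few lines. The sketch below assumes a simple message-dict format; the function names and banner wording are hypothetical, not required phrasing.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def start_chat_session() -> list[dict]:
    """Open every conversation with the Article 50 disclosure,
    before the first AI-generated reply."""
    return [{"role": "system_notice", "text": AI_DISCLOSURE}]

def label_generated_text(text: str) -> dict:
    """Pair the visible label with a machine-readable marker."""
    return {
        "text": text,
        "visible_label": "AI-generated",
        "metadata": {"ai_generated": True, "generator": "support-bot-v1"},
    }

messages = start_chat_session()
messages.append(label_generated_text("Your return label is on its way."))
```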

High-risk (full compliance)

If your AI system falls within one of the use cases listed in Annex III — employment, creditworthiness, education, law enforcement, critical infrastructure, and others — or is a safety component of a product covered by EU harmonised legislation, you face the full compliance regime: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), human oversight, conformity assessment, EU database registration, and post-market monitoring. This is where startup compliance becomes a serious project. See our full compliance checklist for every step.

Prohibited (already banned)

Article 5 bans specific AI practices outright, regardless of risk mitigation measures. These prohibitions have been enforceable since 2 February 2025. If your product relies on subliminal manipulation, social scoring, real-time remote biometric identification in public spaces (with narrow exceptions), exploitation of vulnerable groups, or workplace/educational emotion recognition, you must pivot or shut down that feature. There is no sandbox exception for prohibited practices.

Reduced penalties for SMEs and startups

The penalty structure in Article 99 treats SMEs fundamentally differently from large enterprises.

The three fine tiers

Violation type | Fixed cap | Revenue percentage
Prohibited practices (Article 5) | EUR 35 million | 7% of global annual turnover
High-risk / transparency violations | EUR 15 million | 3% of global annual turnover
Supplying incorrect information to authorities | EUR 7.5 million | 1.5% of global annual turnover

The SME inversion formula

For large enterprises, the applicable fine is whichever is higher — the fixed amount or the revenue percentage. For SMEs, including startups, the formula is inverted: the fine is whichever is lower.

Why this matters in practice:

  • A startup with EUR 500,000 annual turnover that commits a high-risk violation faces a maximum fine of EUR 15,000 (3% of EUR 500,000), not EUR 15 million — because 3% of turnover is far lower than the EUR 15 million cap.
  • A pre-revenue startup faces an even more interesting calculation. With zero turnover, the percentage-based amount is EUR 0. Whether a supervisory authority would apply a zero fine is untested, but the legal formula clearly favours the startup.
  • A large enterprise with EUR 10 billion turnover faces the full EUR 300 million (3% of EUR 10 billion) for the same violation.
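These worked examples translate directly into code. Below is a minimal sketch using the caps and percentages from the table above; the helper function itself is our own illustration:

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float,
                 pct_of_turnover: float, is_sme: bool) -> float:
    """Article 99 ceiling: SMEs and startups pay whichever amount is LOWER,
    large enterprises whichever is HIGHER."""
    pct_amount = turnover_eur * pct_of_turnover
    if is_sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)

# High-risk violation tier: EUR 15 million cap or 3% of turnover.
print(max_fine_eur(500_000, 15_000_000, 0.03, is_sme=True))          # 15000.0
print(max_fine_eur(10_000_000_000, 15_000_000, 0.03, is_sme=False))  # 300000000.0
```

The same violation produces a four-order-of-magnitude difference in exposure depending only on which branch of the formula applies.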

This proportionality mechanism is one of the most important SME protections in the entire regulation. For a deeper analysis including comparison with GDPR fines, read our full penalties and fines guide.

Mitigating factors

Beyond the SME formula, Article 99(7) requires authorities to consider the nature and gravity of the infringement, whether it was intentional or negligent, actions taken to mitigate harm, degree of cooperation, and whether the organisation voluntarily reported the issue. For a startup that classifies its systems, documents its compliance efforts, and cooperates fully, the practical fine risk is substantially lower than the headline numbers suggest.

AI regulatory sandboxes

Articles 57–62 establish AI regulatory sandboxes — controlled environments where startups develop, train, and test AI systems under supervisory guidance before market placement. Every member state must establish at least one by August 2026.

What a regulatory sandbox provides

  • Direct access to regulatory guidance: supervisory staff review your system and provide feedback on compliance.
  • Controlled testing environment: you can test your AI system on real data (with appropriate safeguards) under supervised conditions.
  • Good-faith protection: if you comply with the sandbox's terms and act in good faith, you will not face enforcement action for issues identified during the sandbox period.
  • Structured compliance pathway: the sandbox gives you a clear roadmap from prototype to market-ready compliant product.
  • Accelerated market access: successful sandbox completion may streamline your conformity assessment.

How to apply

Each member state manages its own sandbox, typically through the national market surveillance authority or a designated AI competence centre. The general process: submit a description of your AI system and its development stage, agree on a sandbox plan with the authority (scope, timeline, data safeguards), develop under supervised conditions with periodic reviews, and receive an exit report on compliance status that supports your conformity assessment.

Article 62(1)(a) explicitly requires that SMEs and startups receive priority access to the sandboxes, including cross-border participants. A startup based in one member state can apply to a sandbox in another.

Current sandbox status across the EU

As of early 2026, several member states have launched or announced sandbox programmes: Spain (AESIA — operational since late 2024, focused on financial services, healthcare, and employment), France (CNIL/ARCEP joint sandbox for generative AI), the Netherlands (RDI sandbox with SME fast-track), Germany (BNetzA expected mid-2026), and Finland (Traficom with national AI competence centre support). The European AI Office is coordinating these initiatives for interoperability and mutual recognition of sandbox outcomes.

The EU has also funded the EUSAiR (European Sandbox for AI Regulation) pilot programme under Horizon Europe, providing a pan-European sandbox framework for cross-border AI systems. Participation is free.

Simplified documentation for microenterprises

Article 63 introduces simplified compliance obligations for providers that qualify as microenterprises under the EU SME definition.

What qualifies as a microenterprise?

Under Commission Recommendation 2003/361/EC:

  • Fewer than 10 employees, AND
  • Annual turnover or balance sheet total does not exceed EUR 2 million

Both the headcount condition and the financial condition must be met. A startup with 12 employees and EUR 1 million in revenue does not qualify. A startup with 8 employees and EUR 5 million in turnover qualifies only if its balance sheet total still stays within EUR 2 million.

What simplifications apply?

For microenterprises providing high-risk AI systems, the AI Act permits:

  • Simplified quality management system: instead of the full Article 17 quality management system, microenterprises may use a simplified form that covers essential elements proportionately.
  • Simplified technical documentation: the European Commission is developing standardised fill-in-the-blank templates that reduce the documentation burden while meeting Annex IV minimum requirements.
  • Lighter record-keeping: logging obligations are preserved, but the format and depth may be reduced.

What is NOT simplified

Even for microenterprises, certain obligations cannot be reduced:

  • Risk management (Article 9): you must still identify, analyse, and mitigate risks. The process can be lighter, but cannot be skipped.
  • Data governance (Article 10): training and validation data must still be relevant, representative, and as free of errors as possible.
  • Conformity assessment (Article 43): you must still complete a conformity assessment before placing your system on the market.
  • Transparency to deployers (Article 13): you must still provide clear information about your system's capabilities and limitations.
  • Post-market monitoring (Article 72): you must still monitor your system after deployment and report serious incidents.

The simplifications reduce the format and depth of documentation, not the substance of your obligations. A microenterprise still needs a compliant AI system; it just has less paperwork to prove it.

Practical budget planning by risk tier

The answer to "how much will this cost?" depends almost entirely on your risk tier. Below are realistic estimates based on early compliance projects as of early 2026.

Minimal / limited risk: EUR 0–2,000

Cost item | Estimate
Risk classification assessment | EUR 0 (use Legalithm's free tool)
Transparency disclosures (chatbot / content labelling) | EUR 0–500 (internal development time)
Legal review of AI disclosure language | EUR 500–1,500
Total | EUR 0–2,000

For minimal-risk systems, the only cost is documenting your classification reasoning. For limited-risk systems, add a few hours of frontend work and a brief legal review.

High-risk (lean startup approach): EUR 50,000–150,000

Cost item | Estimate
Detailed risk classification and gap analysis | EUR 3,000–8,000
Risk management system design and documentation | EUR 8,000–20,000
Technical documentation (Annex IV) | EUR 10,000–30,000
Data governance framework | EUR 5,000–15,000
Bias testing and validation | EUR 8,000–25,000
Quality management system | EUR 5,000–15,000
Conformity assessment (self-assessment) | EUR 3,000–10,000
Ongoing monitoring infrastructure | EUR 5,000–15,000
External legal counsel | EUR 5,000–15,000
Total | EUR 50,000–150,000

Lower end assumes internal handling with targeted external support; upper range assumes extensive outsourcing.

High-risk (full enterprise approach): EUR 200,000–500,000

This range applies to organisations with complex AI systems, multiple high-risk deployments, or systems requiring notified body assessment (biometric identification under Article 43). Major cost drivers: external consultancy (EUR 30,000–80,000), independent bias audits (EUR 20,000–50,000), notified body fees (EUR 15,000–40,000), ongoing legal counsel (EUR 25,000–60,000), and monitoring infrastructure (EUR 20,000–50,000).

Budget planning tips for startups

  • Classify first, spend later: use Legalithm's free AI Act assessment before allocating budget.
  • Build compliance into development: retrofitting is 3–5x more expensive than designing it in from the start.
  • Leverage open-source tools: bias testing frameworks (Fairlearn, AI Fairness 360) and documentation templates are free; see the sketch after this list.
  • Pool resources: join codes of conduct or industry consortia to share costs.
  • Apply for sandbox access: free regulatory guidance and testing infrastructure.
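As a starting point for the bias-testing line item, here is a minimal sketch using Fairlearn's MetricFrame on synthetic data; in practice you would substitute your real screening outcomes and protected attributes:

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(seed=0)
n = 1_000

# Synthetic stand-ins: 1 = candidate passes the automated screen.
y_pred = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)            # e.g. later hiring outcome
gender = rng.choice(["female", "male"], size=n)

mf = MetricFrame(
    metrics={"selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.by_group)      # selection rate per group
print(mf.difference())  # gap between groups; record it in your test documentation
```

A large difference() value is exactly the kind of finding your documentation should record, together with the mitigation you applied.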

Compliance prioritisation for lean teams

Below is a 90-day framework for a startup with 2–15 people and no dedicated compliance staff, sequencing the most impactful steps first.

Days 1–30: Classify and assess

  1. Build your AI systems inventory: list every AI system you develop, deploy, or use as a third-party tool. Follow our inventory guide for a step-by-step process; a minimal record format is sketched after this list.
  2. Classify each system: use the free risk classification tool or Legalithm's assessment to determine the risk tier.
  3. Determine your role: are you a provider or deployer? The obligations differ significantly.
  4. Check for prohibited practices: review your AI systems against Article 5. If any system uses prohibited techniques, stop using it immediately.
  5. Document your classification reasoning: write a one-page rationale for each system explaining why it falls in its risk tier. This is your first compliance artifact.
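What that inventory can look like as a flat CSV; the columns and example rows below are our own suggestion, since the Act prescribes no particular format:

```python
import csv

# Columns are illustrative; the Act prescribes no inventory format.
inventory = [
    {"system": "support-chatbot", "role": "provider", "tier": "limited",
     "basis": "Article 50 transparency; no Annex III match"},
    {"system": "cv-screening-api", "role": "deployer", "tier": "high-risk",
     "basis": "Annex III 4(a): screening and filtering of applications"},
    {"system": "spam-filter", "role": "deployer", "tier": "minimal",
     "basis": "No Annex III match; no interaction with natural persons"},
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(inventory[0].keys()))
    writer.writeheader()
    writer.writerows(inventory)
```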

Days 31–60: Address high-priority obligations

  1. Implement transparency measures: for limited-risk systems, add AI disclosures to your user interfaces. This is quick and removes one entire category of legal risk.
  2. Start your risk management system: for high-risk systems, begin documenting risks, mitigation measures, and testing plans per Article 9.
  3. Begin technical documentation: start writing your Annex IV technical documentation. Use the European Commission's standardised templates when available, or follow the structure in our Annex IV template guide.
  4. Establish data governance: document your training data sources, quality measures, bias assessments, and data protection safeguards.
  5. Apply for a regulatory sandbox: submit your application to the relevant national authority. Even if you are not accepted immediately, the application demonstrates proactive compliance.

Days 61–90: Formalise and prepare for assessment

  1. Complete your quality management system: document your processes for design, testing, monitoring, and incident response per Article 17.
  2. Run bias and accuracy testing: conduct systematic testing for bias, accuracy, robustness, and cybersecurity. Document results.
  3. Prepare your conformity assessment: gather all documentation, test results, and process descriptions needed for self-assessment or notified body review.
  4. Register in the EU database: prepare your registration information for Article 49 submission.
  5. Set up post-market monitoring: establish processes for ongoing monitoring, incident detection, and reporting.

This is an aggressive timeline. If your system is complex or you are resource-constrained, extend to 120–180 days — but do not delay the start. The 2 August 2026 deadline is fixed.

Real-world startup scenarios

Scenario 1: SaaS chatbot startup (limited risk)

Real-world example: A 5-person SaaS startup has built a customer support chatbot for e-commerce companies. The chatbot uses a fine-tuned large language model to answer product questions, process returns, and escalate complex issues to human agents. The startup has 200 B2B customers across Germany, France, and the Netherlands. Annual revenue is EUR 800,000.

Risk classification: Limited risk. The chatbot interacts directly with natural persons and must comply with Article 50 transparency obligations. It is not listed in Annex III and does not make decisions with legal or similarly significant effects.

Obligations:

  • Ensure every user is clearly informed they are interacting with an AI system before or at the start of the interaction.
  • If the chatbot generates text that could be mistaken for human-authored content, label it as AI-generated.
  • No risk management system, technical documentation, or conformity assessment required.

Estimated compliance cost: EUR 500–1,500. This covers adding a visible "You are chatting with an AI assistant" banner, updating the terms of service, and a brief legal review. The engineering work is a single sprint task.

Penalty exposure (worst case): Under the SME formula, the maximum fine for a transparency violation is the lower of EUR 15 million or 3% of EUR 800,000 = EUR 24,000. The realistic penalty for a first-time, good-faith omission would be far lower — likely a warning or corrective order.

Scenario 2: HR tech startup with AI CV screening (high-risk)

Real-world example: An HR tech startup with 12 employees has built an AI-powered applicant tracking system. The core feature is automated CV screening: the system scores candidates based on qualifications, experience, and skills, then ranks them for recruiter review. The startup sells to mid-sized companies across the EU. Annual revenue is EUR 2.5 million.

Risk classification: High-risk. AI systems used in recruitment and selection of natural persons — specifically for screening or filtering applications, and for evaluating candidates — are explicitly listed in Annex III, area 4(a). This is not debatable; it is specifically enumerated.

Obligations (as a provider): Full risk management system documenting bias risks (gender, ethnicity, age, disability), comprehensive Annex IV technical documentation, systematic bias testing across protected characteristics, human oversight design so recruiters can override rankings, quality management per Article 17, self-assessment conformity assessment (permitted for employment AI under Article 43), EU database registration, and post-market monitoring.
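On the human oversight point, the design principle is that no candidate outcome becomes final until a recruiter has explicitly confirmed or overridden the AI ranking. A minimal sketch of that gate, with hypothetical names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    candidate_id: str
    model_score: float
    model_rank: int
    recruiter_decision: Optional[str] = None  # set by a human, never by the model

def finalise(result: ScreeningResult, recruiter_decision: str) -> ScreeningResult:
    """Nothing becomes final until a recruiter confirms or overrides the ranking."""
    if recruiter_decision not in {"advance", "reject"}:
        raise ValueError("a human must make an explicit decision")
    result.recruiter_decision = recruiter_decision
    return result

ranked = ScreeningResult(candidate_id="cand-042", model_score=0.81, model_rank=3)
final = finalise(ranked, recruiter_decision="advance")  # the human may also reject
```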

Estimated compliance cost: EUR 80,000–120,000 using a lean approach. The largest cost items are bias testing (EUR 15,000–25,000), technical documentation (EUR 15,000–25,000), and external legal counsel (EUR 10,000–15,000). The startup can handle risk management, quality management, and monitoring infrastructure internally.

Penalty exposure (worst case): The lower of EUR 15 million or 3% of EUR 2.5 million = EUR 75,000. This is the absolute maximum for a high-risk compliance failure — meaningful but survivable for a company at this revenue level.

Scenario 3: Fintech startup using third-party credit scoring API (deployer)

Real-world example: A fintech startup with 8 employees has built a lending platform. The startup does not build its own AI — it integrates a third-party credit scoring API from an established provider. The API takes applicant financial data as input and returns a credit risk score. The startup uses this score as one factor in its lending decisions. Annual revenue is EUR 1.2 million.

Risk classification: High-risk deployer. AI-based credit scoring is listed in Annex III, area 5(b). Even though the startup did not build the AI system, using it under its authority makes it a deployer with its own set of obligations. See our provider vs deployer guide for the distinction.

Obligations (as a deployer): Verify provider conformity assessment (CE marking, declaration of conformity, instructions for use), implement human oversight so a human reviews and can override credit decisions, conduct a fundamental rights impact assessment (FRIA) before deployment, maintain logs for at least six months, inform affected persons about AI involvement, and monitor performance with incident reporting.
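The logging and oversight duties can be wired into the integration itself. A minimal sketch assuming a JSONL audit log; the function and field names are our own:

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # "at least six months"; keep longer if unsure

def log_credit_decision(applicant_id: str, api_score: float, human_decision: str):
    """One auditable record per call to the third-party scoring API."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "applicant": applicant_id,
        "third_party_score": api_score,
        "human_decision": human_decision,  # oversight: a person decided, not the score
    }
    with open("credit_decision_log.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

def past_retention(entry_ts: str) -> bool:
    """True once a log entry has aged past the six-month floor."""
    return datetime.fromisoformat(entry_ts) < datetime.now(timezone.utc) - RETENTION
```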

Estimated compliance cost: EUR 15,000–40,000. The deployer burden is lighter than the provider burden because the startup does not need to produce technical documentation or conduct conformity assessment — but it must still verify the provider's compliance, implement oversight, conduct a FRIA, and maintain monitoring. Read our FRIA guide for detailed instructions.

Penalty exposure (worst case): The lower of EUR 15 million or 3% of EUR 1.2 million = EUR 36,000.

Funding and support resources

The EU has established several support mechanisms specifically for startups and SMEs navigating AI regulation:

European Digital Innovation Hubs (EDIHs)

The EU's network of over 200 European Digital Innovation Hubs provides free or subsidised services to SMEs, including:

  • AI compliance assessments and gap analyses
  • Access to testing and experimentation facilities
  • Technical training on AI development best practices
  • Networking with other startups and regulatory experts

Find your nearest EDIH at the European Commission's EDIH catalogue.

National AI competence centres

Each member state is establishing national AI competence centres providing guidance on national AI Act implementation, technical support for documentation, connection to sandboxes, and sector-specific advice.

EU AI Office resources

The European AI Office publishes guidance documents, standardised compliance templates, FAQs, and enforcement timeline updates — all freely available on the Commission's website.

Free tools

Legalithm's free risk classification tool, referenced throughout this guide, gives you an initial risk-tier assessment in minutes with no signup. For bias testing, the open-source frameworks mentioned above (Fairlearn, AI Fairness 360) cost nothing to adopt.

EU funding programmes

Several EU funding instruments support AI compliance for SMEs:

  • Digital Europe Programme: grants for SME digitalisation, including AI compliance tooling.
  • Horizon Europe: research and innovation funding that can cover compliance-related R&D.
  • InvestEU: financing and guarantees for SME investments in AI infrastructure and compliance.
  • National recovery and resilience plans: many member states have allocated funds for AI adoption and compliance under their national plans.

Frequently Asked Questions

Can I delay compliance if I'm pre-revenue?

No. The AI Act's obligations are triggered by placing an AI system on the market or putting it into service in the EU — not by revenue. A pre-revenue startup that launches a beta version of a high-risk AI system to users in the EU must comply from the moment of launch. However, the reduced penalty formula means your financial exposure is minimal during the pre-revenue phase. The practical advice: use the pre-revenue period to build compliance into your product so you launch compliant from day one.

Does the AI Act apply outside the EU?

Yes. Article 2 establishes extraterritorial scope: the AI Act applies to providers placing AI systems on the EU market regardless of where they are established. It also applies when AI outputs are used in the EU. A US startup selling to EU customers or an Israeli company whose AI outputs reach EU businesses must comply.

What if I only use third-party AI tools?

You are likely a deployer. Deployers have their own obligations under the AI Act, including verifying provider compliance, implementing human oversight, maintaining logs, and informing affected persons. The obligations are lighter than provider obligations, but they are not zero. If the third-party tool is classified as high-risk, your deployer obligations may also include conducting a fundamental rights impact assessment. See our provider vs deployer guide for the full breakdown.

My AI system is not listed in Annex III — am I safe?

Probably, but not automatically. While Annex III lists the specific use cases classified as high-risk, Article 6 also captures AI systems that are safety components of products covered by EU harmonised legislation (medical devices, machinery, toys, etc.). Additionally, the European Commission has the power to expand Annex III over time. Your classification assessment should be reviewed periodically — at least annually and after any significant change in your system's functionality or intended purpose.

How long does compliance take for a startup?

Minimal/limited risk: days to a few weeks. High-risk: 3–6 months for a focused team with engineering resources and external legal support. The longest lead items are bias testing, technical documentation, and conformity assessment. For complex high-risk systems with multiple use cases: 6–12 months — consider applying for a regulatory sandbox.

What happens if I'm acquired — does compliance transfer?

If the acquirer offers the system under its own name, it becomes the new provider and inherits all obligations. If the system continues under the original name, existing compliance documentation remains valid. AI Act compliance due diligence is becoming standard in M&A — getting compliant before acquisition increases your valuation and reduces deal friction.

Next steps

  1. Classify your AI systems using Legalithm's free assessment tool.
  2. Determine your role: provider or deployer.
  3. Follow the compliance checklist in our 2026 step-by-step guide.
  4. Apply for a regulatory sandbox if you are developing a high-risk system.
  5. Build compliance into your product from the start — it is cheaper, faster, and produces a better product.

The companies that invest in compliance early will have a competitive advantage in a market where trust, transparency, and legal certainty are increasingly valuable to enterprise buyers. Start today. The deadline is fixed — your head start is not.

AI Act
Startups
SMEs
Compliance
Regulatory Sandbox
Small Business
Budget
