
Article 60: Testing of High-Risk AI Systems in Real-World Conditions Outside AI Regulatory Sandboxes

Applies from 2 Aug 2026 · EUR-Lex verified Apr 2026

Article 60 allows providers of high-risk AI systems to test their systems in real-world conditions outside AI regulatory sandboxes, subject to specific safeguards. Requirements include: a real-world testing plan approved by a market surveillance authority, informed consent from test subjects (see Article 61), the ability to halt testing if serious incidents occur, time and scale limitations, and appropriate safeguards for affected persons. This complements the sandbox framework by allowing broader controlled testing where sandbox conditions are insufficient.

Who does this apply to?

  • Providers of high-risk AI systems who need to test in real-world conditions that cannot be replicated in a sandbox or laboratory environment
  • Market surveillance authorities approving real-world testing plans and monitoring their execution
  • Test subjects, who must give informed consent before participating in real-world testing

Scenarios

A provider of an AI-based traffic management system needs to test the system in a live urban environment to validate safety performance, because simulations cannot capture the full complexity of real traffic patterns. The provider submits a testing plan to the market surveillance authority specifying a 3-month test period, a bounded geographic zone, informed consent from participating municipal operators, and a kill-switch protocol.

The market surveillance authority approves the plan with conditions. The provider can test in real-world conditions while maintaining safeguards — results feed into the Article 9 risk management system and conformity documentation.
Ref. Art. 60(1)–(4)

During real-world testing of a high-risk AI system used for predictive maintenance in railway infrastructure, the system produces an incorrect safety assessment that could have led to a missed defect. The provider immediately halts the test, reports the incident, and reviews the root cause before resuming.

The provider's monitoring and halt capability — required by Article 60 — prevented a real safety incident. The incident is documented and feeds back into the risk management system.
Ref. Art. 60(5)

What Article 60 does (in plain terms)

Article 60 recognises that some high-risk AI systems cannot be adequately tested in sandboxes or laboratory environments alone — real-world complexity may be essential to validating safety and performance. The article creates a structured pathway for such testing with the following requirements:

1. Real-world testing plan — the provider must prepare a detailed plan and submit it to the relevant market surveillance authority for approval before testing begins.
2. Informed consent — all persons who are subjects of the testing must give informed consent in accordance with Article 61. Consent must be freely given, specific, informed, and unambiguous.
3. Monitoring and halt capability — the provider must continuously monitor outcomes during testing and must be able to immediately halt the test if a serious incident occurs or if risks to health, safety, or fundamental rights materialise.
4. Time and scale limitations — testing must be limited in duration and scope to what is necessary for the validation objectives.
5. Safeguards for test subjects — appropriate measures must protect the rights and safety of affected persons throughout the test.

The plan must also address what happens to data collected during testing and how results are incorporated into the conformity assessment process.
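The Act does not prescribe any particular format for the plan. Purely as an illustration, the required elements could be captured in a structured record like the hypothetical Python sketch below; every field name here is invented for the example, not a term defined by Article 60.

```python
# Hypothetical sketch of a real-world testing plan record (illustrative only;
# the AI Act does not define these field names or mandate this structure).
from dataclasses import dataclass
from datetime import date

@dataclass
class RealWorldTestingPlan:
    system_name: str
    validation_objectives: list[str]   # what the test must demonstrate
    start_date: date
    end_date: date                     # time limitation
    geographic_scope: str              # bounded test zone
    max_subjects: int                  # scale limitation
    consent_procedure: str             # how Article 61 consent is obtained and recorded
    monitoring_measures: list[str]     # continuous monitoring mechanisms
    halt_criteria: list[str]           # conditions triggering an immediate stop
    data_handling: str                 # what happens to data collected during testing
    conformity_use: str                # how results feed Art. 9 / Art. 11 documentation
    authority_approved: bool = False   # set once the market surveillance authority approves
```

Keeping the halt criteria and the data-handling answer inside the same record as the scope and duration makes it harder to submit a plan that silently omits one of the required elements.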

Relationship to sandboxes and conformity assessment

Article 60 is a complement to the sandbox framework, not a replacement:

  • Sandboxes (Article 57) offer a supervised environment within which authorities guide development. They are best suited for early-stage testing and compliance preparation.
  • Real-world testing under Article 60 addresses later-stage validation where the AI system needs exposure to live conditions to demonstrate safety and efficacy.

Importantly, real-world testing under Article 60 does not constitute placing on the market or putting into service. It is a pre-market activity. However, the results are expected to feed directly into the provider's risk management system (Article 9) and technical documentation (Article 11).

How Article 60 connects to the rest of the Act

  • Article 57 — sandbox framework (complementary pathway for earlier-stage testing).
  • Article 61 — informed consent requirements for real-world testing participants.
  • Article 73 — serious incident reporting obligations (applicable if incidents occur during testing).
  • Article 76 — confidentiality obligations for authorities overseeing testing.
  • Article 9 — risk management system (real-world test results feed back into risk analysis).
  • Article 113 — application dates.

Practical guidance for providers planning real-world tests

1. Start with the testing plan — define clear validation objectives, the specific real-world conditions needed, duration, geographic scope, number of subjects, and the metrics you will measure.
2. Engage the market surveillance authority early — submit your plan well in advance and be prepared for iterative review. Authorities may impose additional conditions.
3. Build informed consent into your design — work with Article 61 requirements from the start. Consent forms should explain the AI system, the testing conditions, the risks, and the right to withdraw.
4. Implement a kill switch — you must be able to halt the test immediately at any point. Design the technical and operational procedures for this before testing begins (see the monitoring sketch after this list).
5. Document everything — test results, incidents, consent records, and authority correspondence all become part of your conformity evidence (a minimal evidence-log sketch follows the compliance checklist below).
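To make point 4 concrete, here is a minimal, hypothetical monitoring-and-halt sketch. It assumes nothing beyond the Article 60 requirement itself: the functions (check_for_serious_incident, halt_test), the telemetry format, and the threshold are placeholders for a provider's own systems, not anything specified by the Act.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rw-test-monitor")

def check_for_serious_incident(telemetry: dict) -> bool:
    """Placeholder: apply the halt criteria from the approved testing plan."""
    return telemetry.get("safety_margin", 1.0) < 0.2  # purely illustrative threshold

def halt_test() -> None:
    """Placeholder: trigger the kill-switch procedure designed before testing began."""
    log.critical("Test halted: serious-incident criterion met.")

def run_monitoring_loop(read_telemetry, poll_seconds: float = 1.0) -> None:
    """Continuously monitor the live test and halt immediately on a serious incident."""
    while True:
        telemetry = read_telemetry()
        if check_for_serious_incident(telemetry):
            halt_test()
            break  # testing resumes only after review and mitigation (see FAQ below)
        time.sleep(poll_seconds)

# Example run: simulated telemetry degrades until the halt criterion fires.
readings = iter([{"safety_margin": m} for m in (0.9, 0.6, 0.3, 0.1)])
run_monitoring_loop(lambda: next(readings), poll_seconds=0.0)
```

The design point is that the halt path is unconditional and local: no network round-trip or human approval sits between detecting the halt criterion and stopping the test.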

Compliance checklist

  • Prepare a detailed real-world testing plan covering objectives, scope, duration, safeguards, monitoring mechanisms, and halt criteria.
  • Submit the testing plan to the relevant market surveillance authority and obtain approval before commencing any real-world testing.
  • Obtain informed consent from all test subjects in compliance with Article 61 — consent must be freely given, specific, informed, and unambiguous.
  • Implement continuous monitoring during testing with the technical capability to halt the test immediately if serious risks materialise.
  • Limit testing to the minimum duration and scale necessary for the validation objectives — document the justification for scope.
  • Report any serious incidents during testing in accordance with Article 73 obligations.
  • Feed all testing results back into the risk management system (Article 9) and technical documentation (Article 11/Annex IV).
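Several checklist items turn on record-keeping: consent records, incident reports, and authority correspondence. As an illustration only, an append-only log such as the following hypothetical sketch keeps a timestamped evidence trail; the file name, fields, and event kinds are all invented for the example.

```python
# Hypothetical append-only evidence log for consent records, incidents, and
# authority correspondence; file name, fields, and event kinds are illustrative.
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_LOG = Path("rw_testing_evidence.jsonl")

def record_event(kind: str, detail: dict) -> None:
    """Append one timestamped evidence entry; never rewrite earlier entries."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "kind": kind,  # e.g. "consent", "incident", "authority_correspondence"
        "detail": detail,
    }
    with EVIDENCE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example entries mirroring the checklist above.
record_event("consent", {"subject_id": "S-017", "withdrawable": True})
record_event("incident", {"severity": "serious", "action": "test halted, Art. 73 report filed"})
```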


Frequently asked questions

Does real-world testing under Article 60 count as placing on the market?

No. Real-world testing under Article 60 is a pre-market activity. The AI system is not considered placed on the market or put into service during testing. However, all testing safeguards under Article 60 apply, and the results must inform your conformity assessment.

Can I test in real-world conditions without market surveillance authority approval?

No. Article 60 requires the testing plan to be approved by the relevant market surveillance authority before testing begins. Unapproved real-world testing of high-risk AI systems would be a compliance violation.

What happens if a serious incident occurs during testing?

The provider must immediately halt the test, take corrective action to protect test subjects, and report the incident to the market surveillance authority. Article 73 serious incident reporting obligations apply. Testing may only resume after the authority is satisfied that adequate mitigation measures are in place.