Annex 11 Is Changing: What Validation Teams Should Fix Before the Update Arrives

Author: Omer Cimen, CEO & Co-Founder

Annex 11 is no longer something validation teams can treat as a familiar old document sitting quietly in the regulatory cupboard.

The European Commission opened a stakeholder consultation in July 2025 for revisions to EU GMP Chapter 4, Annex 11 on Computerised Systems, and a new Annex 22 on Artificial Intelligence. The consultation closed on October 7, 2025, and the Commission explained that the update is driven by rapid advances in digital technologies and AI systems in pharmaceutical manufacturing. The stated aim is to keep GMP guidance clear, practical, relevant, and harmonized for manufacturers and competent authorities. (Public Health)

That is the important signal. Annex 11 is changing because the environment around computerized systems has changed.

Cloud systems, SaaS applications, connected manufacturing platforms, laboratory integrations, automated workflows, AI-supported processes, and hybrid documentation models are now part of daily regulated operations. Validation teams are no longer proving control over a neat system in a neat room with a neat binder beside it. They are proving control over living digital ecosystems.

The final text should not be treated as a starting gun. The draft already shows where expectations are heading, and the EMA Inspectors Working Group work plan lists Annex 11, Chapter 4, and Annex 22 with target dates in Q4 2026. (European Medicines Agency (EMA))

That gives validation teams a clear opportunity: fix the weak spots now.

Why the Annex 11 Revision Matters

The current Annex 11 revision is not just a clerical refresh. The draft states that the GMP/GDP Inspectors Working Group and PIC/S Committee recommended revising Annex 11 to reflect changes in regulatory and manufacturing environments, clarify requirements, and remove ambiguity and inconsistencies.

The draft also makes the direction clear from the beginning. Computerized systems should be validated before use and maintained in a validated state throughout their lifecycle. Quality Risk Management should apply throughout all lifecycle phases, considering process complexity, the level and novelty of automation, and the impact on product quality, patient safety, and data integrity.

That language matters because it moves the conversation away from validation as a one-time project. It places more weight on lifecycle control, continuous oversight, current requirements, data integrity, supplier accountability, security, access management, audit trail review, and periodic review.

In simpler terms: the system is not “validated” because a package was completed once. It remains validated only if the organization can show that control is being maintained.

Fix 1: Requirements That Actually Reflect the Implemented System

One of the most important areas to fix is requirements management.

The draft Annex 11 says system requirements should describe the functionality the regulated user has automated and relies on when performing GMP activities. It also says these requirements should be documented and kept updated to reflect the implemented system and its intended use, forming the basis for qualification and validation.

That is a quiet thunderclap.

Many organizations still treat requirements as early-stage project documents. They are written, approved, tested, and then slowly become stale as configurations change, vendor releases arrive, integrations shift, and business processes evolve. By the time a periodic review or audit arrives, the system may have changed more than the requirements admit.

Validation teams should start by reviewing whether their URS and related specifications still describe the system as it actually operates today. This includes operational requirements, functional requirements, data integrity requirements, technical requirements, interface requirements, performance expectations, availability needs, security controls, and regulatory requirements. The draft specifically notes that requirements should be detailed enough to support risk analysis, specification, design, purchase, configuration, qualification, and validation.

The practical fix is simple to describe and harder to execute: stop treating requirements as a launch artifact. Treat them as a controlled, living description of intended use.
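
One lightweight way to picture that is a requirements register where every requirement carries its own review status and a pointer to the system version it was last verified against. The sketch below is illustrative only, assuming a simple in-house data model rather than any specific tool; field names such as verified_against_version are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Requirement:
    """One controlled requirement in a living URS (illustrative model)."""
    req_id: str                    # e.g. "URS-012"
    category: str                  # functional, data integrity, interface, security, ...
    text: str                      # the requirement statement itself
    verified_against_version: str  # system version/configuration it was last confirmed against
    last_reviewed: date            # when someone last checked it still reflects intended use

def stale_requirements(reqs: list[Requirement], current_version: str) -> list[Requirement]:
    """Flag requirements that no longer claim to describe the implemented system."""
    return [r for r in reqs if r.verified_against_version != current_version]

reqs = [
    Requirement("URS-001", "functional", "System enforces electronic signature on batch release.",
                verified_against_version="4.2", last_reviewed=date(2025, 3, 1)),
    Requirement("URS-014", "data integrity", "Audit trail records old and new values for all GMP data.",
                verified_against_version="4.0", last_reviewed=date(2023, 9, 15)),
]

for r in stale_requirements(reqs, current_version="4.2"):
    print(f"{r.req_id} was last verified against version {r.verified_against_version}; review needed.")
```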

Fix 2: Traceability That Survives Change

Traceability is another area where many organizations have a paper tiger in the filing room.

The draft Annex 11 calls for documented traceability between individual requirements, underlying design specifications, and corresponding qualification and validation test cases. It also encourages the use of effective tools to capture and hold requirements and facilitate traceability.

The key expectation is that traceability is maintained, not merely created.

Traceability should not be rebuilt at the end of a project like a crime scene diagram. It should remain current as the system changes. When a requirement is revised, the linked design and tests should remain visible. When a change request is opened, impacted requirements and tests should be clear. When a deviation occurs, teams should know which validated functions may be affected.
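
As an illustration of what maintained traceability can mean in practice, the sketch below models requirement-to-test links as data, so that revising a requirement immediately surfaces the linked tests for review. It is a minimal, hypothetical example, not a description of any particular validation tool; the identifiers are invented.

```python
from collections import defaultdict

# Hypothetical traceability links: requirement -> design spec -> test case
links = [
    ("URS-001", "DS-010", "TC-101"),
    ("URS-001", "DS-010", "TC-102"),
    ("URS-014", "DS-042", "TC-230"),
]

tests_by_requirement = defaultdict(set)
for req, _design, test in links:
    tests_by_requirement[req].add(test)

def impacted_tests(changed_requirements: set[str]) -> set[str]:
    """Tests whose validated status needs review because a linked requirement changed."""
    return set().union(*(tests_by_requirement[r] for r in changed_requirements))

# A change request revises URS-001: the linked tests surface immediately,
# instead of being reconstructed during inspection preparation.
print(sorted(impacted_tests({"URS-001"})))  # ['TC-101', 'TC-102']
```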

Weak traceability creates delayed risk. The organization may not feel the pain immediately, but it appears during change impact assessment, deviation investigation, periodic review, or inspection preparation.

This is where AI-Native Validation Infrastructure, or ANVI, becomes relevant. ANVI is not just a category phrase. It describes the kind of validation foundation needed when traceability must remain alive across systems, requirements, designs, tests, evidence, deviations, changes, and approvals. Annex 11’s direction strengthens the case for infrastructure that maintains validation relationships continuously rather than forcing teams to reconstruct them manually.

Fix 3: Quality Risk Management Across the Full Lifecycle

Quality Risk Management is not new, but the revised Annex 11 draft gives it sharper teeth.

The draft says QRM should be applied throughout the lifecycle of computerized systems, considering impact on product quality, patient safety, and data integrity. It also says the validation strategy and effort should be based on intended use and potential risks.

That means teams should not simply perform a risk assessment at the start of validation and then let it gather dust like an elderly spreadsheet ghost.

Risk needs to inform validation scope, test depth, change impact, access controls, audit trail review, periodic review frequency, supplier oversight, backup strategy, cybersecurity controls, and revalidation decisions. A system that touches critical GMP data should not receive the same validation depth as a low-impact administrative tool. A high-risk interface should not be tested with the same casual glance as a non-critical report.

The practical fix is to connect risk assessment to actual downstream decisions. Risk should determine what gets tested, how deeply it gets tested, how often controls are reviewed, and what evidence is needed to defend the system.
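
A simple way to make that connection explicit is to let the documented risk rating drive concrete parameters such as test depth and review frequency. The mapping below is a hedged illustration with made-up categories and intervals; actual values belong in the organization's own QRM procedure.

```python
# Hypothetical mapping from risk rating to downstream validation decisions.
# The ratings and intervals are examples only, not regulatory values.
RISK_POLICY = {
    "high":   {"test_depth": "full functional, negative, and data integrity cases",
               "audit_trail_review": "monthly",
               "periodic_review_months": 12},
    "medium": {"test_depth": "functional cases for GMP-impacting features",
               "audit_trail_review": "quarterly",
               "periodic_review_months": 24},
    "low":    {"test_depth": "installation and configuration verification",
               "audit_trail_review": "annually",
               "periodic_review_months": 36},
}

def validation_plan(system: str, risk: str) -> str:
    p = RISK_POLICY[risk]
    return (f"{system}: {p['test_depth']}; audit trail review {p['audit_trail_review']}; "
            f"periodic review every {p['periodic_review_months']} months")

print(validation_plan("LIMS interface", "high"))
print(validation_plan("Training tracker", "low"))
```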

Fix 4: Supplier and Cloud Oversight That Goes Beyond Vendor Trust

The draft makes another point that teams should address before the final update arrives: outsourcing does not outsource responsibility.

The draft says that when a regulated user relies on a vendor, service provider, or internal IT department for qualification or operation of a GMP computerized system, the regulated user remains fully responsible based on the risk to product quality, patient safety, and data integrity. It also says documentation must be accessible and explainable from the regulated user’s facility.

This is especially important for SaaS and cloud-based systems.

Too many organizations still rely on vendor documentation without enough review of whether it covers the implemented version, company-specific configuration, intended use, and GMP process. The draft is explicit that vendor documentation may be provided in part or whole, but the regulated user remains accountable and should carefully review and authorize its use.

Validation teams should strengthen supplier oversight now. That means reviewing contracts, SLAs, KPIs, version release processes, inspection support obligations, incident reporting expectations, supplier audit conditions, and exit strategies. The draft specifically highlights contracts or approved procedures that define activities and documentation, reporting and oversight, audit conditions, inspection support, issue resolution, communication of quality and security issues, exit strategy, and the process for release of new system versions.

The fix is not distrust. The fix is governed trust.

Fix 5: Audit Trails That Are Reviewable, Not Just Present

Having an audit trail is no longer enough. The more important question is whether the audit trail can be meaningfully reviewed.

The draft Annex 11 says systems should automatically log manual user interactions wherever users create, modify, or delete data, settings, or access privileges, acknowledge alarms, or execute electronic signatures. It also says the audit trail should capture who made the change, what changed, the old and new values, and when the change occurred, and that systems should prompt users to register why a change was made.

This shifts the focus from audit trail existence to audit trail usability.

The draft also says systems should accommodate effective and efficient audit trail reviews, including the ability to sort and search audit trail data or export it to a tool where this is possible. It says reviews should follow a documented procedure, significant variations should be investigated, and review scope should be targeted based on risk and adapted to local manufacturing processes.

Validation teams should assess whether audit trail review is actually workable. Can users search by who, what, when, and why? Can reviewers identify repeated activities, omissions, unauthorized deviations, suspicious changes, or data integrity risks? Is review frequency risk-based? Are review outcomes documented? Are reviews performed by sufficiently independent personnel?
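
To make the usability point concrete, here is a minimal sketch of a searchable audit trail record and a targeted review query. The field names mirror the who, what, old value, new value, when, and why expectation described above; the data and the filter logic are hypothetical.

```python
from datetime import datetime

# Each entry captures who, what, old/new values, when, and why.
audit_trail = [
    {"who": "j.doe", "what": "batch_record.yield", "old": "98.2", "new": "99.1",
     "when": datetime(2025, 6, 3, 14, 20), "why": "transcription correction per deviation DEV-0132"},
    {"who": "admin1", "what": "user.role", "old": "operator", "new": "administrator",
     "when": datetime(2025, 6, 4, 2, 11), "why": ""},
]

def review(entries, who=None, what_contains=None, missing_reason=False):
    """Filter audit trail entries so a reviewer can target risk instead of reading everything."""
    result = entries
    if who:
        result = [e for e in result if e["who"] == who]
    if what_contains:
        result = [e for e in result if what_contains in e["what"]]
    if missing_reason:
        result = [e for e in result if not e["why"]]
    return result

# Example targeted review: privilege changes made without a recorded reason.
for entry in review(audit_trail, what_contains="role", missing_reason=True):
    print(f"{entry['when']:%Y-%m-%d %H:%M}  {entry['who']} changed {entry['what']}: "
          f"{entry['old']} -> {entry['new']} (no reason recorded)")
```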

An audit trail that exists but cannot be efficiently reviewed is a locked library with no catalogue. It may contain the truth, but good luck finding it before the inspector gets restless.

Fix 6: Access Management, Segregation of Duties, and Role Reviews

Access control is one of the easiest places for small weaknesses to become large findings.

The draft Annex 11 says all users should have unique and personal accounts, and that shared accounts, except limited read-only access, constitute a violation of data integrity. It also says access and roles should be granted, modified, and revoked in a timely manner as users join, change, or end their involvement in GMP activities.

The draft also emphasizes segregation of duties and least privilege. Users involved in GMP activities should not have administrative privileges, and users should not have higher access than necessary for their job function. It also calls for recurrent reviews of user accounts and roles, documented with appropriate action taken.

Validation and quality teams should review role design now. Common questions include:

Are any shared accounts still used for GMP-impacting actions?

Do administrators also perform GMP operational activities?

Are inactive users removed or deactivated promptly?

Are role reviews documented?

Are access privileges aligned with current responsibilities?

Are remote access controls, MFA, session timeout, and failed login controls appropriate for system risk?

The draft also says remote authentication on critical systems from outside controlled perimeters should include multifactor authentication, and that systems should include inactivity logout and access logs that are sortable, searchable, or exportable.
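
Several of the questions above can be answered mechanically if account data is exported in a reviewable form. The sketch below is a hypothetical periodic access review check; the account fields and the 90-day inactivity threshold are assumptions, not a prescribed format.

```python
from datetime import date, timedelta

# Hypothetical export of user accounts from a GMP system.
accounts = [
    {"user": "j.doe",   "shared": False, "roles": ["operator"],          "last_login": date(2025, 9, 1),  "active": True},
    {"user": "lab_pc1", "shared": True,  "roles": ["operator"],          "last_login": date(2025, 9, 20), "active": True},
    {"user": "a.smith", "shared": False, "roles": ["operator", "admin"], "last_login": date(2025, 2, 1),  "active": True},
]

def access_review_findings(accounts, today=date(2025, 10, 1), inactive_after_days=90):
    """Flag shared accounts, combined admin/operator roles, and stale active accounts."""
    findings = []
    for a in accounts:
        if a["shared"]:
            findings.append(f"{a['user']}: shared account used for GMP activities")
        if "admin" in a["roles"] and "operator" in a["roles"]:
            findings.append(f"{a['user']}: administrative and operational roles combined")
        if a["active"] and (today - a["last_login"]) > timedelta(days=inactive_after_days):
            findings.append(f"{a['user']}: inactive account not deactivated")
    return findings

for finding in access_review_findings(accounts):
    print(finding)
```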

The fix is to make access governance boring, repeatable, and visible. In compliance, boring is usually a strength.

Fix 7: Change Control That Knows When Revalidation Is Needed

The revised Annex 11 draft puts strong emphasis on controlled change. It says any change to a computerized system, including configuration, hardware, software components, platform, or operating system, should be made in a controlled manner and according to defined procedures. Significant changes that may impact product quality, patient safety, or data integrity should be subject to requalification and validation.

This is where many teams need to close the gap between change management and validation.

A change request should not merely describe what changed. It should help teams understand what the change touches, which requirements are affected, whether test coverage is still sufficient, whether data integrity controls are impacted, whether security controls change, and whether revalidation is required.
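
One way to close that gap is to have every change request resolve its impact through the same traceability links used during validation. The sketch below is illustrative: it reuses a hypothetical requirement-to-test mapping like the one shown under Fix 2, and the decision rule is an assumption, not the draft's wording.

```python
# Hypothetical change request that resolves its own validation impact
# through the requirement-to-test links maintained during validation.
tests_by_requirement = {
    "URS-001": {"TC-101", "TC-102"},
    "URS-014": {"TC-230"},
}

change_request = {
    "id": "CR-2025-041",
    "description": "Vendor release 4.3 changes the electronic signature workflow",
    "affected_requirements": ["URS-001"],
    "impacts_data_integrity": True,
}

def assess(cr):
    """Derive which tests need rerunning and whether revalidation is needed."""
    tests = set().union(*(tests_by_requirement.get(r, set()) for r in cr["affected_requirements"]))
    revalidation_needed = bool(tests) or cr["impacts_data_integrity"]
    return {"change": cr["id"], "tests_to_rerun": sorted(tests), "revalidation_needed": revalidation_needed}

print(assess(change_request))
# {'change': 'CR-2025-041', 'tests_to_rerun': ['TC-101', 'TC-102'], 'revalidation_needed': True}
```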

The draft’s glossary defines change control as ongoing evaluation and documentation of system operations and changes to determine whether actual changes might affect the validated status of the system, with the intent of determining actions needed to maintain a validated state.

That definition is a useful north star. Change control should protect the validated state, not just record that someone changed something.

Fix 8: Periodic Reviews That Actually Review the Validated State

Periodic review is another area where the draft is direct.

After a system is validated and put into operation, periodic reviews should verify whether the system remains fit for intended use and in a validated state, or whether changes and revalidation are required. Findings should be analyzed for consequences on product quality, patient safety, and data integrity.

The draft gives a broad scope for periodic reviews. It includes changes to hardware, software, configuration, platform, infrastructure, interfaces, documentation, user guides, SOPs, and the combined effect of multiple changes in the system and other systems. It also includes actions from previous reviews, audits, inspections, CAPA, audit trail reviews, access reviews, risk assessments, incidents, deviations, security threats, maintenance, SLAs, vendor KPIs, backup restore testing, archival, data integrity assessments, and regulatory changes.
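
In practice, much of that scope can be pre-assembled from existing records rather than hunted down at review time. The snippet below is a minimal, hypothetical aggregation of review inputs since the last review date; the record sources and field names are assumptions.

```python
from datetime import date

last_review = date(2024, 10, 1)

# Hypothetical records pulled from change control, deviation, and CAPA logs.
changes    = [{"id": "CR-040", "opened": date(2025, 2, 3), "closed": True},
              {"id": "CR-041", "opened": date(2025, 9, 10), "closed": False}]
deviations = [{"id": "DEV-0132", "opened": date(2025, 6, 3)}]
capas      = [{"id": "CAPA-019", "due": date(2025, 8, 1), "closed": False}]

summary = {
    "changes since last review":    [c["id"] for c in changes if c["opened"] > last_review],
    "open changes":                 [c["id"] for c in changes if not c["closed"]],
    "deviations since last review": [d["id"] for d in deviations if d["opened"] > last_review],
    "overdue CAPAs":                [c["id"] for c in capas if not c["closed"] and c["due"] < date.today()],
}

for topic, items in summary.items():
    print(f"{topic}: {', '.join(items) if items else 'none'}")
```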

That is not a tick-box exercise. That is operational governance.

Validation teams should review whether periodic reviews are deep enough to answer the real question: does the system remain controlled today?

This is another place where ANVI becomes useful as a concept. AI-Native Validation Infrastructure supports the kind of connected oversight that makes periodic review less dependent on manual archaeology. If changes, deviations, tests, audit trail reviews, access reviews, and evidence are connected throughout the system lifecycle, periodic review becomes a living control mechanism instead of a desperate end-of-cycle treasure hunt.

Fix 9: Security, Backup, and Restore Testing

The draft Annex 11 makes cybersecurity part of the computerized system control conversation, not a distant IT concern.

It says regulated users should keep updated about new security threats and implement or improve protective measures in a timely manner where needed. It also says an effective information security management system should safeguard authorized access and detect and prevent unauthorized access to GMP systems and data.

The draft includes expectations around network segmentation, firewall rules, firewall reviews, timely platform updates, unsupported platforms, security patching, USB control, antivirus, penetration testing for critical internet-facing systems, and encrypted remote connections.

Backups also receive detailed attention. Data and metadata should be regularly backed up to prevent loss from accidental or deliberate change, deletion, malfunction, corruption, or cyberattack. Frequency, retention, and storage should be risk-based, backups should be physically and logically separated, and restore testing should be documented based on risk during validation and after changes to backup or restore processes.
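
Restore testing in particular benefits from being reduced to a documented, repeatable check. The sketch below is a hypothetical illustration of verifying that a restored copy matches the original by checksum; the file paths and the demo file are assumptions, and scheduling and scope would come from local procedures.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to confirm the restored file is bit-identical to the source."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test(source: Path, backup_dir: Path, restore_dir: Path) -> dict:
    """Back up a file, restore it elsewhere, and record whether content was preserved."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    restore_dir.mkdir(parents=True, exist_ok=True)
    backup_copy = shutil.copy2(source, backup_dir / source.name)
    restored_copy = Path(shutil.copy2(backup_copy, restore_dir / source.name))
    return {
        "file": source.name,
        "source_sha256": sha256(source),
        "restored_sha256": sha256(restored_copy),
        "match": sha256(source) == sha256(restored_copy),
    }

# Example: exercise the check on a throwaway demo file.
demo = Path("demo_batch_record.txt")
demo.write_text("batch 1234, yield 99.1%")
print(restore_test(demo, Path("backup"), Path("restore")))
```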

The practical fix is to connect cybersecurity and backup controls to validation evidence. Do not let security live in a separate universe. For critical GMP systems, validation teams should be able to show how security, backup, restore, and disaster recovery controls support data integrity and continued validated state.

Fix 10: AI Governance Before Annex 22 Becomes Real

Annex 11 is not changing alone. The European Commission consultation also included a new Annex 22 on Artificial Intelligence. The Commission says Annex 22 establishes requirements for AI and machine learning in manufacturing of active substances and medicinal products, including selection, training, and validation of AI models, intended use, performance metrics, training data quality, test data management, continuous oversight, change control, model performance monitoring, and human review procedures when necessary. (Public Health)

Even if a team is focused mainly on Annex 11 today, it should not ignore Annex 22. The two are connected. The EMA Inspectors Working Group work plan lists Annex 22 with a Q4 2026 target and says it is intended to assure the use of artificial intelligence in the context of GMP, in parallel with updates to Annex 11 and Chapter 4. (European Medicines Agency (EMA))

That means AI governance should not sit outside validation governance.

Teams should start defining where AI is used, what the intended use is, what data is involved, what human review is required, how outputs are approved, how performance is monitored, and how changes to AI-enabled functionality are controlled. This is especially important when AI supports requirements, risk assessment, test generation, deviation handling, documentation, or any other GxP-relevant activity.
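
A good first step is a simple AI-use inventory that records, per use case, the elements listed above. The structure below is a hypothetical starting point, not a reading of the Annex 22 draft itself; the defaults, such as human review being required before any GMP decision, are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI-use inventory for GxP-relevant activities."""
    name: str
    intended_use: str
    gxp_relevant: bool
    data_involved: list[str] = field(default_factory=list)
    human_review: str = "required before any GMP decision"  # assumption: human-in-the-loop by default
    performance_monitoring: str = "not yet defined"
    change_control_ref: str = "not yet linked"

inventory = [
    AIUseCase(name="Draft test case generation",
              intended_use="Suggest test steps from approved requirements",
              gxp_relevant=True,
              data_involved=["requirements", "risk assessments"],
              performance_monitoring="reviewer acceptance rate tracked quarterly",
              change_control_ref="CR-2025-052"),
]

# Flag entries where governance details are still missing.
for uc in inventory:
    gaps = [f for f in ("performance_monitoring", "change_control_ref") if getattr(uc, f).startswith("not yet")]
    print(uc.name, "-> gaps:", gaps or "none")
```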

This is where AI-Native Validation Infrastructure becomes especially important. ANVI is the logical response to a world where computerized systems, validation evidence, AI-assisted workflows, audit trails, risk, and change control all need to operate as one connected control layer.

The Bigger Shift: From Validation Packages to Validation Infrastructure

The Annex 11 revision points toward a larger category shift.

Traditional validation systems helped teams complete projects, assemble documents, and manage approvals. That remains useful, but it is no longer enough for dynamic digital environments. The draft Annex 11 language repeatedly points toward lifecycle maintenance, ongoing risk management, supplier oversight, current requirements, traceability, periodic review, security, and auditability.

That is infrastructure thinking.

AI-Native Validation Infrastructure, or ANVI, fits this direction because it treats validation as an active operating layer rather than a static documentation process. In an ANVI model, requirements, risks, tests, evidence, changes, deviations, audit trails, and reviews remain connected and reviewable over time. AI can support productivity, but the deeper value is maintaining control as systems evolve.

Annex 11 may not use the term ANVI, but the regulatory direction strengthens the need for it.

What Validation Teams Should Do Now

Validation teams should not wait passively for the final Annex 11 text. The smartest preparation is to start closing obvious gaps now.

Start with a system inventory and identify GMP-impacting computerized systems, including SaaS platforms and outsourced services.

Review whether requirements reflect current intended use and implemented configuration.

Check whether traceability is complete and maintained.

Strengthen supplier oversight and review vendor documentation against actual use.

Evaluate audit trail review procedures and usability.

Confirm access control, segregation of duties, and periodic access reviews.

Connect change control to revalidation decisions.

Make periodic reviews meaningful.

Validate backup and restore processes.

Bring cybersecurity evidence into the validation conversation.

Map AI use before it becomes a surprise.

This is not about regulatory fortune-telling. It is about reading the direction of travel.

The direction is clear.

Conclusion

Annex 11 is changing because regulated digital environments have changed.

The draft revision and related EMA work plan point toward stronger expectations for lifecycle management, Quality Risk Management, supplier oversight, current requirements, maintained traceability, audit trail review, access governance, periodic review, security, backup, and AI-related control. (Public Health)

For validation teams, the best response is not to wait for final publication and then scramble. The best response is to strengthen the foundations that are already visible in the draft.

Because the organizations that prepare now will not simply be more Annex 11-ready. They will be better equipped for the next era of digital validation: continuous, connected, risk-based, and defensible.
