Personal data, breach reporting, and AI governance

EDPB and EDPS weigh in on the Digital Omnibus.

The European Union’s move to modernise its digital legal framework is currently centred on the Digital Omnibus, a legislative package aimed at reducing administrative burdens and enhancing the continent’s economic competitiveness. In early 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion evaluating these proposals. While the regulators support the objective of streamlining compliance, they have identified several areas where the proposed changes could conflict with existing data protection standards, particularly regarding the definition of personal data and the transparency of automated systems.

Redefining the boundaries of personal data

A central point of concern for information governance and eDiscovery professionals is the proposed reinterpretation of ‘personal data’. The Commission suggests moving toward an entity-specific test of identifiability, focusing on whether a given controller has means ‘reasonably likely’ to identify an individual, and presents this as a clarification of existing Court of Justice of the European Union (CJEU) case law. In their joint opinion, however, the EDPB and EDPS argue that the proposal does not simply codify CJEU jurisprudence but would significantly narrow the scope of EU data protection law by redefining what constitutes personal data.

For eDiscovery teams, this shift could complicate the execution of legal holds and cross-border data transfers. If certain datasets—such as anonymised telemetry or obfuscated log files—are reclassified as non-personal for a given controller at the point of collection, they might bypass standard privacy-shielding protocols during the early stages of a matter. Practitioners should evaluate their cross-border transfer impact assessments and data-mapping assumptions to ensure that data categorised as “anonymous” under any new rules still meets the higher GDPR bar for international litigation if identification becomes possible later in the lifecycle.

Streamlining incident response and reporting

The Omnibus also proposes adjustments to the procedural aspects of cybersecurity management. The EDPB and EDPS are in favour of increasing the threshold of risk that triggers an obligation to notify a data breach to supervisory authorities and extending the deadline for such notifications, noting that this can reduce administrative burden without undermining protection for individuals. They also welcome the introduction of common EU-level templates and lists for data breaches and data protection impact assessments as a positive step toward harmonisation.

For cybersecurity incident response teams, the direction of travel points toward greater standardisation in how breaches are documented and reported. While the precise legal form of the templates remains under negotiation, organisations should anticipate the need to align forensic logging and reporting so that the ‘artefacts of discovery’—the logs, headers, and metadata collected during investigations—can be efficiently mapped to these common formats across jurisdictions. Updating internal incident response playbooks to reflect this anticipated structure can help ensure that any extended reporting window does not lead to a degradation in the quality or admissibility of evidence preserved for regulatory or legal scrutiny.
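To make the mapping concrete, the sketch below shows one way an incident response team might structure preserved artefacts so they can later be projected onto a common reporting template. The field names and the `to_notification_record` function are purely illustrative assumptions; the actual EU-level templates do not yet exist and their fields remain under negotiation.

```python
from dataclasses import dataclass, asdict

@dataclass
class BreachArtefact:
    """One item of forensic evidence preserved during an investigation."""
    source: str          # e.g. "firewall logs", "SMTP headers"
    collected_at: str    # ISO 8601 timestamp of preservation
    sha256: str          # hash recorded at collection, for chain of custody

def to_notification_record(incident_id: str,
                           artefacts: list[BreachArtefact]) -> dict:
    """Project internally preserved artefacts onto a flat, template-style
    record (hypothetical structure, not an official EU form)."""
    return {
        "incident_id": incident_id,
        "artefact_count": len(artefacts),
        "artefacts": [asdict(a) for a in artefacts],
    }
```

The point of the structure is that evidence quality (timestamps, hashes) is captured once at collection, so an extended reporting window changes when a notification is sent, not what was preserved.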

Information governance in the age of AI training

Regarding information governance and artificial intelligence, the proposal introduces a specific ‘legitimate interest’ provision for training AI models, intended to recognise AI development more explicitly as a legitimate interest basis under Article 6(1)(f) GDPR. The EDPB and EDPS acknowledge that AI development can constitute a legitimate interest but question the necessity of adding an AI-specific clause, pointing out that their existing guidance already confirms that AI training may be pursued on this basis, subject to the three-step test.

For governance teams, this means that even if the Omnibus is adopted, the fundamental requirement for a rigorous legitimate interest assessment (LIA) remains unchanged. Frameworks should be updated to ensure that the provenance of training data is auditable and that opt-out or objection mechanisms are operational, particularly in scenarios involving large-scale web scraping or mixed-use datasets. This documentation will be critical for responding to AI-related access and transparency requests under the AI Act and the GDPR, especially when automated decision-making significantly affects individuals.
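As a minimal sketch of what auditable provenance might look like in practice, the record type below ties each training dataset to its legitimate interest assessment and its opt-out handling. All names here (`TrainingDatasetRecord`, `lia_reference`, `audit_gaps`) are hypothetical illustrations, not terms from the Omnibus or any guidance.

```python
from dataclasses import dataclass

@dataclass
class TrainingDatasetRecord:
    """Provenance entry for one dataset used in AI model training."""
    dataset_id: str
    source: str              # e.g. "licensed corpus", "web scrape"
    lia_reference: str       # pointer to the legitimate interest assessment
    opt_out_honoured: bool   # whether objection/opt-out signals were applied
    special_category: bool = False

def audit_gaps(records: list[TrainingDatasetRecord]) -> list[str]:
    """Return dataset IDs lacking an LIA reference or opt-out handling —
    the entries a transparency request would expose first."""
    return [r.dataset_id for r in records
            if not r.lia_reference or not r.opt_out_honoured]
```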

Accountability for low-risk AI systems

The registration and documentation of AI systems remain areas of evolving regulatory focus under the AI Act and its proposed amendments via the Digital Omnibus. Discussions around the Omnibus include proposals to streamline obligations for AI systems that are not classified as high-risk, potentially reducing external registration or notification burdens for certain categories of tools.

From a governance perspective, this trend underscores the need for robust internal oversight of ‘shadow AI’. If formal registration requirements are relaxed for lower-risk systems, the internal burden of proof regarding risk classification and lifecycle controls will increase. Maintaining a comprehensive internal inventory of all AI assets—regardless of perceived risk—will be essential to defending classification decisions in the event of supervisory inquiries or litigation.
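The inventory idea can be sketched as follows. This is an assumed internal data model, not anything prescribed by the AI Act: the risk tiers, field names, and the `defensibility_gaps` check are illustrative only.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AIAsset:
    """One entry in a comprehensive internal AI inventory."""
    name: str
    owner: str
    risk_tier: RiskTier
    classification_rationale: str  # documented basis for the risk decision
    registered_externally: bool    # often False for non-high-risk tools

def defensibility_gaps(assets: list[AIAsset]) -> list[str]:
    """Flag assets whose risk classification is undocumented — the weak
    point a supervisory inquiry would probe if registration is relaxed."""
    return [a.name for a in assets if not a.classification_rationale.strip()]
```

Keeping the rationale field mandatory in practice is what converts a relaxed external obligation into a defensible internal record.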

Data processing for bias detection

Finally, the Omnibus addresses the processing of special categories of personal data—such as health or ethnic information—for the purpose of bias detection and correction in AI models. The regulators recognise that undetected bias in AI systems can pose serious risks, but stress that processing special-category data remains, in principle, prohibited, and that any exception for bias monitoring must be narrowly circumscribed, strictly necessary, and subject to enhanced safeguards.

For eDiscovery and compliance professionals conducting internal bias audits, this creates a narrow but strategically important legal pathway. Practitioners should ensure that any use of sensitive data for bias testing is siloed from general processing, supported by clear legal justification, and that the data is purged once corrective measures are validated. This purpose-limited processing demands strict access controls, minimisation measures, and a detailed audit trail to prevent sensitive data from being repurposed for unauthorised secondary uses, which could attract heightened regulatory scrutiny and sanctions.
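The siloing, audit-trail, and purge requirements described above can be sketched as a small purpose-limited store. This is a toy illustration of the control pattern under the assumptions stated in its docstring, not a compliant implementation; real deployments would add encryption, role-based access, and retention policy enforcement.

```python
import hashlib

class BiasAuditSilo:
    """Purpose-limited store for special-category data used in bias testing.

    Illustrative sketch: records events in an audit trail and purges all
    contents once corrective measures have been validated.
    """

    def __init__(self, purpose: str):
        self.purpose = purpose
        self._records: dict[str, dict] = {}
        self.audit_trail: list[str] = []

    def add(self, subject_ref: str, attributes: dict) -> None:
        # Store under a pseudonymous key; log the event, never the data.
        key = hashlib.sha256(subject_ref.encode()).hexdigest()
        self._records[key] = attributes
        self.audit_trail.append(f"ADD {key[:12]} purpose={self.purpose}")

    def purge(self) -> int:
        """Delete all records after validation; return the count purged."""
        n = len(self._records)
        self._records.clear()
        self.audit_trail.append(f"PURGE {n} records")
        return n
```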

Timelines, AI Act deferrals, and compliance burden

As the legislative process enters trilogue negotiations, the focus remains on balancing the pace of digital innovation with the integrity of the European privacy and fundamental-rights model. Discussions around the Digital Omnibus include proposals to link the application of certain high-risk AI obligations under the AI Act to the availability of harmonised standards, which could, in practice, push some compliance deadlines beyond 2026, with possible long-stop dates in the 2027–28 timeframe still under negotiation. The Commission is therefore pushing for timely adoption so that these adjustments can take legal effect before existing deadlines trigger a concentrated compliance crunch.

For those managing data and AI at scale, the primary takeaway is that ‘simplification’ often shifts the locus of compliance from external reporting toward internal documentation and governance. Even where notification thresholds are raised, or registration obligations are relaxed, organisations can expect greater emphasis on demonstrable accountability—through records of processing, data and model inventories, legitimate interest assessments, and evidence that technical and organisational measures are calibrated to the evolving EU standard.

Read the complete article at EDPB and EDPS Weigh In on the Digital Omnibus: Personal Data, Breach Reporting, and AI Governance.

Photo: Dreamstime.
