The European Commission has introduced the Digital Omnibus, a legislative package designed to simplify and modernize the EU’s complex digital rules. It brings together two significant proposals: one streamlines data, cybersecurity, and privacy regulation, while the other fine-tunes the implementation of the EU AI Act. The Commission expects the reform to generate over €4 billion in savings by 2029 for businesses, public authorities, and citizens.
For the EU AI Act, the package introduces targeted adjustments to support a smoother rollout, ensuring that obligations align with the availability of technical standards. At the same time, a key element of the package is the consolidation of the EU’s data framework: existing laws would merge into just two core instruments, the Data Act and the GDPR.
The Digital Omnibus also introduces a unified reporting channel for cybersecurity incidents. Using a “report once, share many” approach, companies will be able to meet multiple reporting obligations under NIS2, GDPR, DORA, CER, and the EU Digital Identity Regulation through a single interface.
Delayed High-Risk Obligations: One of the most notable changes is the postponement of rules for high-risk AI systems used in areas such as biometrics, hiring, education, healthcare, law enforcement, and credit assessments. Instead of applying from August 2026, the requirements would take effect in December 2027 for Annex III systems and in August 2028 for Annex I systems. However, the Commission may activate these rules six to twelve months earlier if it concludes that sufficient standards and tools are available. Critics say this mechanism gives the Commission wide discretion over the final timeline.
Additional Time for AI-Generated Content Rules: Providers will receive a one-year grace period to meet new obligations for labelling and watermarking AI-generated content, giving companies more time to adjust their technology.
Stronger Role for the European AI Office: The Omnibus expands the powers of the AI Office, which will now oversee all AI systems built on general-purpose AI models and carry out conformity assessments for certain high-risk systems.
Support for SMEs and Mid-Caps: Compliance relief previously available only to SMEs will now extend to small mid-cap companies, easing documentation and monitoring duties for more businesses.
Simplified Compliance with Data Protection: The Digital Omnibus clarifies how AI developers can process personal data, including special categories of data, for bias detection and correction purposes, subject to appropriate safeguards. It also broadens access to AI regulatory sandboxes and real-world testing facilities. There are plans to establish an EU-level regulatory sandbox from 2028.
A Narrower Definition of Personal Data: The proposal refines the definition of personal data by focusing on whether an entity has “reasonably likely” means of identifying an individual. This entity-specific approach reflects recent case law but raises practical questions: information that one organization cannot link to a person may be readily linkable by another. In today’s data ecosystems, identifiers such as ad IDs, cookies, or device fingerprints circulate widely. This shift could therefore influence how much data ultimately falls within GDPR protection.
AI Training as Legitimate Interest: The proposal introduces a new provision establishing that the development and operation of AI systems or models constitute a “legitimate interest” of the controller under Article 6(1)(f) GDPR. This would provide an explicit legal basis for processing personal data for AI training purposes, provided the processing is necessary and does not override individuals’ rights and freedoms.
EU-Wide DPIA Rules: The proposal also calls for harmonised criteria across the EU for when a Data Protection Impact Assessment is needed. Uniform lists and templates developed at EU level would replace divergent national approaches, offering consistency but reducing local flexibility.
Updated Rules on Automated Decision-Making: Changes to Article 22 clarify that decisions necessary for a contract may rely on automation even if a non-automated alternative exists. This could broaden the use of automated tools in everyday commercial relationships.
Higher Threshold for Breach Notifications: Data breach reporting would be required only when a high risk to individuals is likely, aligning the obligation to notify authorities with the existing duty to notify data subjects. The notification window would extend from 72 to 96 hours. A single EU entry point would be created to streamline submissions.
Shift in Cookie and Tracking Rules: By moving the ePrivacy rule into the GDPR as Article 88a, websites could rely on broader legal bases (including legitimate interest) rather than consent in most cases. This shifts the model from opt-in to opt-out, with tracking enabled unless users object. A new Article 88b introduces standardized, automated privacy signals set at browser or system level. Websites must recognize these signals once the relevant technical standards are developed, as sketched below. Certain media services would be exempt from honoring these signals for two years after those standards take effect.
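Article 88b does not yet define the technical format of these signals. As an illustrative sketch only, the snippet below assumes a GPC-style “Sec-GPC: 1” request header (borrowed from the existing Global Privacy Control proposal, not from the Omnibus text) and shows how a site might check for such a signal before enabling non-essential tracking.

```typescript
// Illustrative sketch, not a specification from the Digital Omnibus:
// a server that honors a GPC-style automated privacy signal.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

function privacySignalPresent(req: IncomingMessage): boolean {
  // Node lower-cases header names; "sec-gpc: 1" signals that the user objects
  // to non-essential tracking. The header name is an assumption for illustration.
  return req.headers["sec-gpc"] === "1";
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  // Under an opt-out model, tracking is enabled unless an objection signal is present.
  const trackingAllowed = !privacySignalPresent(req);
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ trackingAllowed }));
});

server.listen(8080);
```

The design point is that the objection is evaluated once, machine-to-machine, at the transport layer, rather than through per-site cookie banners.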
Lighter Transparency Obligations: Controllers would no longer need to provide privacy notices when it is reasonable to assume that individuals already know the relevant information, except in specific risk-sensitive situations. This aims to reduce repetitive disclosures. However, it may also mean individuals receive fewer direct explanations about how their data is collected and used.
The Digital Omnibus aims to position Europe as a more agile digital market, reducing fragmentation and easing compliance burdens. Yet the combined effect of the proposed changes raises a deeper question. How far should simplification go when it touches the foundations of data protection?
The introduction of a legitimate-interest basis for AI development would settle a long-standing debate within the EU. It would confirm that personal data may be used to train and operate AI models. A narrower reading of sensitive data would limit heightened protections to information directly revealing protected traits. Much of today’s behavioural and inferred data would fall outside enhanced protection.
The Digital Omnibus therefore represents more than a technical adjustment. It reflects a broader recalibration of priorities at a time when the ability to derive sensitive insights from seemingly ordinary data has never been stronger. Whether the balance ultimately favors economic efficiency or individual safeguards will depend on the final wording of the legislation, as well as on how these new concepts are interpreted and applied in practice.