The European Union’s Artificial Intelligence Act (EU AI Act) represents the world’s first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, with its provisions applying in stages; the latest of these, the governance rules and the obligations for general-purpose AI (GPAI) models, apply from August 2, 2025.
GPAI models are sophisticated AI systems that demonstrate significant generality and can competently perform a wide range of distinct tasks across different domains, regardless of how they are released to the market. Unlike specialized AI systems designed for specific purposes such as fraud detection or applicant screening, GPAI models possess broad capabilities that allow them to be adapted and integrated into numerous downstream systems and applications. These include large language models like ChatGPT, Claude, and Llama.
Starting August 2, 2025, a layered supervisory framework took effect across the EU. The European AI Office assumed a key role in coordinating the implementation and enforcement of the AI Act, overseeing the most powerful AI models and ensuring consistency across Member States. At the same time, the European Artificial Intelligence Board became fully operational, bringing together representatives from each Member State, with the European Data Protection Supervisor as an observer. This board works to ensure uniform application of the AI Act, coordinate national authorities, and support regulatory sandboxes, while also issuing recommendations and opinions to the Commission.
Each Member State had to designate national authorities by this date, including market surveillance bodies responsible for enforcement and notifying authorities that supervise conformity assessment bodies. These entities are the main enforcement arms within their respective jurisdictions.
GPAI model providers became subject to transparency rules requiring up-to-date technical documentation detailing training, testing, and evaluation processes. This must be accessible to downstream providers and authorities. Providers must also comply with EU copyright laws by implementing measures that ensure lawful training data use, respecting rights reservations via mechanisms like robots.txt or metadata, and publishing summaries of the data sources used, following templates provided by the AI Office.
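As an illustration of a machine-readable rights reservation of the kind mentioned above, a website operator can disallow known AI training crawlers in its robots.txt file. The sketch below is illustrative only; GPTBot (OpenAI) and CCBot (Common Crawl) are examples of publicly documented crawler names, and other crawlers use different identifiers.

```
# robots.txt — example opt-out from AI training crawlers
# (illustrative; crawler names shown are publicly documented examples)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers remain allowed
User-agent: *
Disallow:
```

Under the Act’s copyright obligations, GPAI providers are expected to identify and respect such machine-readable reservations when assembling training data.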
For GPAI models exceeding the systemic risk threshold of 10²⁵ floating point operations, stricter safety measures apply. These include thorough evaluations such as adversarial testing, risk assessments, and mitigation protocols addressing risks from chemical, biological, radiological, and nuclear threats, cybersecurity breaches, and manipulation. Providers must also have incident reporting systems and enforce strong cybersecurity protections for both models and infrastructure, given the high potential risks these models pose if misused or compromised.
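To put the 10²⁵ FLOP threshold in perspective, training compute for dense transformer models is commonly approximated with the heuristic FLOPs ≈ 6 × (parameter count) × (training tokens). The sketch below applies that heuristic; the model sizes and token counts are illustrative assumptions, not figures for any actual model.

```python
# Rough training-compute estimate using the common heuristic
# FLOPs ≈ 6 × parameters × training tokens (dense transformers).
# The threshold value comes from the EU AI Act's systemic-risk presumption.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training FLOPs

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating point operations."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical 70B-parameter model trained on 15T tokens: ~6.3e24 FLOPs
print(presumed_systemic_risk(70e9, 15e12))   # below the threshold
# Hypothetical 400B-parameter model trained on 15T tokens: ~3.6e25 FLOPs
print(presumed_systemic_risk(400e9, 15e12))  # above the threshold
```

The heuristic is a rough order-of-magnitude check only; the Act looks at the cumulative compute actually used for training, which providers must measure and document.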
Given the EU AI Act’s potential cross-border implications, Serbia has likewise intensified its efforts to establish a robust legal framework for AI and has positioned itself as a regional leader in this field, becoming the first country in Southeast Europe to establish a comprehensive AI development framework. Building upon its pioneering 2019 strategy, the government adopted the Strategy for the Development of Artificial Intelligence for the period 2025-2030 in January 2025.
Serbia’s strategic position in AI governance has been significantly strengthened through its leadership role in international organizations. The country assumed the presidency of the Global Partnership on Artificial Intelligence (GPAI) for the period 2025-2027. This leadership culminated in the adoption of the Belgrade Ministerial Declaration on Artificial Intelligence in December 2024, which was endorsed by 44 GPAI member states and the European Union.
Building on this international leadership, Serbia is now actively preparing its first comprehensive national AI Act. Together with the 2025-2030 strategy, this forthcoming legislation underscores Serbia’s proactive approach to AI regulation and positions the country well for both domestic innovation and international compliance.
In this broader context, the EU AI Act represents a paradigm shift in AI governance, establishing comprehensive rules that extend far beyond EU borders. For Serbian law firms, understanding this regulation is essential given Serbia’s EU accession aspirations and the Act’s extraterritorial application. The risk-based approach, combined with specific obligations for different actors in the AI value chain, creates a complex but manageable compliance framework.
For more information about the EU AI Act, please see our EU AI Act Guide.
Authors: Uroš Rajić, Žarko Popović
Image generated by Midjourney.