
Client Alert

December 11, 2025

EU & UK AI Round-up - December 2025


The past year has marked a pivotal stage in the EU’s approach to AI governance, which has moved from conceptual frameworks to concrete law, targeted regulatory guidance, institutional initiatives and real-world applications that are expected to influence global standards for years to come, much as the GDPR has done since 2018.

In the European Union, the EU AI Act (“AI Act”) has entered a new operational phase, with the rules on general-purpose AI (“GPAI”) models taking effect in August 2025. This development has increased the focus on transparency and accountability across complex AI supply chains. Complementing the legislative changes are the strategic efforts of the European Commission (“EC”), such as the ‘Apply AI Strategy’, aimed at accelerating responsible AI adoption across industry and public services, and the ‘AI in Science Strategy’, which focuses on harnessing AI to strengthen European research capacity and scientific discovery. Supporting these policy pillars, the recently launched AI Act Single Information Platform centralises official guidance, national implementation updates, and compliance resources, while the EC’s draft guidance on the reporting of serious AI incidents (and accompanying report template) – for which the consultation process closed on 7 November 2025 – offers welcome clarity for providers navigating post-market reporting obligations under the AI Act. Infrastructure developments follow the same expansive trend: the EU announced six new AI Factories on 10 October 2025, increasing the total number of AI Factories to 19. A few days later, a further announcement launched “AI Factories Antennas”, an initiative to provide remote access to the supercomputing resources made available by the AI Factories. The AI Factories Antennas are not limited to Member States but extend to ‘partner countries’ as well, including the United Kingdom. These efforts aim to reduce the EU’s reliance on US-owned and operated AI and tech infrastructure across both public and private sectors. The message from the EU is clear: it will not be left behind.

Across the Channel, the United Kingdom is advancing along a more flexible, sector-led governance path but remains deeply engaged in global AI questions. The Digital Regulation Cooperation Forum’s (“DRCF”) recent call for views on “agentic AI” reflects growing attention to systems capable of autonomous goal-setting and decision-making.  This is an area that may inform the UK government’s own priorities and preferences when it comes to regulating for the safe use of AI. Meanwhile, a wave of government partnerships and investment deals with major AI firms underscores the UK’s ambition to remain a competitive and secure hub for innovation in an increasingly rules-based international environment.

This fourth AI Round-up for 2025 will explore the following key updates:

  1. European Commission Initiates Guidelines and Code of Practice for AI System Transparency
  2. European Commission Proposes Changes to the EU AI Regulatory Landscape through the EU Digital Omnibus
  3. European Parliament Publishes Study on Interplay between AI Act and the EU Digital Legislative Framework
  4. UK Law Commission Publishes AI Discussion Paper
  5. UK Digital Regulation Cooperation Forum Publishes Insights from AI Hub Pilot
  6. UK Government Calls for Evidence in setting up AI Growth Lab
  7. Litigation Trends

You can read each of our previous instalments of the AI Round-up at www.kslaw.com  

European Commission Initiates Guidelines and Code of Practice for AI System Transparency  

The EC is developing EU-wide guidelines (“Transparency Guidelines”) and a voluntary code of practice (“Transparency Code of Practice”) to operationalise the transparency obligations for certain AI systems under the AI Act, with a particular focus on generative and interactive AI and deepfakes. The obligations in Article 50 of the AI Act (as spelled out below) are currently due to take effect from 2 August 2026, but if the proposals under the Digital Omnibus are approved (see below), compliance with these legal requirements would be delayed until 2 February 2027. On 5 November 2025, after a joint public consultation on the Transparency Guidelines and Transparency Code of Practice ended on 9 October 2025, the Commission officially kicked off the drafting process. Publication of the final Transparency Code of Practice is expected in May-June 2026.

Purpose and legal basis

Article 50 of the AI Act requires providers and deployers of certain AI systems to ensure that people know when they (i) are interacting with AI or (ii) are exposed to AI-generated or manipulated content, including deepfakes and specific forms of AI-generated text. These rules aim to reduce deception, impersonation and misinformation, and to bolster trust and integrity by enabling users to make informed decisions about the AI they use.​

Scope of Article 50 obligations

The EC explains that Article 50 covers four categories of AI systems: (1) interactive AI that converses directly with people; (2) systems that generate or manipulate audio, image, video or text content; (3) emotion recognition and biometric categorisation systems; and (4) systems producing deepfake content or AI-generated or manipulated text intended to inform the public on matters of public interest. In each case, providers or deployers must clearly disclose the artificial origin of the interaction or content, subject to narrowly defined exceptions (for example, some law enforcement uses or certain artistic works).

Guidelines on Article 50

Under the AI Act, the EC must issue guidelines on the practical implementation of Article 50, and the forthcoming Transparency Guidelines will address this. According to the EC’s Q&A section, the Transparency Guidelines will clarify scope, legal definitions, how the transparency obligations apply in practice, the exceptions, and issues such as how Article 50 interacts with other EU rules, including data protection and the transparency regime for general-purpose AI models.​

Code of Practice on transparent generative AI

In parallel, the AI Office is coordinating a multistakeholder process to draft the Transparency Code of Practice on transparent generative AI systems, which will be a voluntary tool for implementing the obligations in Article 50(2) and (4) of the AI Act on labelling and detecting AI-generated or manipulated content and deepfakes. If the EC later approves this Code as adequate, providers and deployers will be able to use it to demonstrate compliance; enforcement for these actors will focus on monitoring adherence to the Code. Note that the Transparency Code of Practice is distinct from the recent Code of Practice on GPAI models.

European Commission Proposes Changes to the EU AI Regulatory Landscape through the EU Digital Omnibus

On 19 November 2025, the European Union released its much anticipated Digital Package, consisting of: (i) the Digital Omnibus, aimed at simplifying EU rules on data, cyber and AI; (ii) the Data Union Strategy, aimed at unlocking the data available for AI; and (iii) EU Business Wallets, a digital identification, authentication and exchange system aimed at reducing administrative processes for business within the EU.

The Digital Omnibus itself is split into two proposed regulations: one to amend EU digital legislation broadly (the “Digital Omnibus”) and another to specifically target the AI Act (the “Digital Omnibus on AI”). That approach may allow the amendments to the AI Act to come into force sooner, as both draft regulations will be subject to the EU legislative procedure, including a reading by both the Council of the European Union and the European Parliament.

In short, and limiting our analysis to changes that are relevant to AI, the EU’s Digital Package aims to promote innovation and the development of AI technologies by streamlining the legislative landscape that AI developers and providers have to navigate. Those objectives fit squarely with the EU’s larger AI strategy: to become a global leader in AI.

As the AI Act was only enacted last year, the Digital Omnibus on AI proposals come while the AI Act is not yet fully operational. Since our last EU & UK AI Round-up, published on 24 July 2025, the obligations for GPAI models have become applicable, shortly after the EU published the GPAI Code of Practice.

While the Digital Omnibus on AI proposals are still in draft form, they would introduce meaningful changes for AI developers and providers in the EU. We have set out the key provisions below:

  1. Delay: Recognising that current standards and support tools are behind schedule, the Commission has proposed to delay the entry into force of Chapter III Section 1 (Classification of AI systems as high-risk), Section 2 (Requirements for high-risk AI systems) and Section 3 (Obligations of providers and deployers of high-risk AI systems). The Commission will adopt a decision confirming that adequate measures in support of compliance are in place, which would trigger implementation periods of six and 12 months for high-risk AI systems as defined by Article 6(2) and Article 6(1) respectively. The clock is still ticking, however, as the Commission has also proposed longstop dates of 2 December 2027 (for Article 6(2) high-risk AI systems) and 2 August 2028 (for Article 6(1) high-risk AI systems) for those sections to enter into force. At the same time, obligations to mark AI-generated synthetic audio, image, video or text content as such have been pushed back to 2 February 2027 for providers who will have placed their systems on the market before 2 August 2026.
  2. SMEs and SMCs: Under the proposed amendments, certain “regulatory privileges” that are available to SMEs will be extended to small mid-cap companies (“SMCs”). Those privileges include simplified technical documentation requirements and proportionate quality management systems for high-risk AI systems, as well as privileges relating to penalties under the current Article 99 of the EU AI Act. Generally, Member States will be required to take into account the interests of SMEs and SMCs when laying down the rules on penalties and enforcement measures; specifically, SMCs (like SMEs now) will be fined at the lower of the fine cap or the percentage of worldwide annual turnover, rather than the higher, which applies to other offenders. Accordingly, a new definition of “SMC” would be added to the EU AI Act by reference to the Annex of Commission Recommendation (EU) 2025/1099.
  3. AI Literacy: The current wording of Article 4 of the EU AI Act places the responsibility on providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems. The Digital Omnibus on AI, however, would amend Article 4 of the EU AI Act to state that ‘The Commission and Member States shall encourage providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy …’, shifting the burden away from AI providers and deployers.
  4. Data Privacy: In the Digital Omnibus package, a proposed amendment to the GDPR would clarify that processing of personal data in the context of the development and operation of an AI system “may be pursued for legitimate interests”. A further amendment to Article 9 of the GDPR would remove the prohibition on processing special category data in the same context, i.e., the development and operation of an AI system. The Digital Omnibus on AI would allow providers and deployers of AI systems and AI models to process special category data to detect and correct bias, introducing a new legal basis for processing special category data. However, processing would be subject to certain conditions, including necessity, organisational and technical measures being in place and a record being kept of the processing activities. For AI developers and providers, these changes would provide legal certainty at the crucial stage of training AI models.
  5. Post-market monitoring: Whereas the current text of the EU AI Act requires the Commission to prescribe detailed provisions establishing a template for a post-market monitoring plan, the Digital Omnibus on AI proposes that the Commission would adopt guidance instead.
  6. Registration: If an AI provider determines that an AI system is not a high-risk application – relying on the derogation available in Article 6(3) of the EU AI Act – it will not be required to register the application in the EU database, whereas the current form of the EU AI Act requires registration in any case. The objective of this change is to streamline compliance and reduce the costs associated with registration.
  7. Centralised oversight: The Digital Omnibus on AI proposal would also centralise oversight in the EU AI Office of AI systems based on GPAI models where the model and system are developed by the same provider. Centralising oversight for specific AI systems with cross-border operations would, in the Commission’s view, provide greater legal certainty for those developers than national enforcement actions. The EU AI Office’s exclusive oversight would also extend to AI systems integrated into a designated very large online platform or very large online search engine.
  8. Sandboxes: With a new Article 57(3a), an amended EU AI Act would allow the EU AI Office to establish an AI regulatory sandbox at the EU level. A small but meaningful change would see national authorities and the EU AI Office design and implement AI regulatory sandboxes in a way that facilitates cross-border cooperation in every case, not just “where relevant” as per the current text of the EU AI Act (Article 57(13)).

Despite the draft nature of the Digital Omnibus, the wheels are now in motion for a simplified legislative framework that aims to provide greater clarity and legal certainty for AI developers and providers. We will be actively monitoring this space as the proposals advance through the EU’s legislative procedure.

European Parliament Publishes Study on the Interplay between the AI Act and the EU Digital Legislative Framework

On 30 October 2025, the European Parliament published a study on the “Interplay between the AI Act and the EU digital legislative framework” (the “Study”), commissioned by the Committee on Industry, Research and Energy. In a similar vein to the EU Digital Omnibus, the Study examines the ways in which the EU AI Act interacts with other EU digital legislation, namely, the EU GDPR, the EU Data Act, the EU Cyber Resilience Act, the Digital Services Act, Digital Markets Act and NIS 2 Directive. The Study and its recommendations, to the extent any are implemented, would further alter the AI regulatory landscape in the EU.

With such a large net of legislation, it is no surprise that the Committee on Industry, Research and Energy was concerned about whether this body of legislation represents a coherent system or one that introduces “overlapping obligations, inconsistencies, or undue burdens that could fragment the internal market and undermine the development of a globally competitive European AI industry”. Ultimately, the Study leans towards the latter, finding that, however justified each piece of legislation may be in isolation, taken as a whole they “deter uptake, delay time to market, and introduce compliance asymmetries across Member States”. Recommendations from the Study include:

  • Strengthening interaction and coordination among regulators, including joint guidance on areas of overlap which could, for example, lead to aligning transparency and documentation obligations under the Digital Services Act, EU Cyber Resilience Act and EU AI Act, presented in a single template; and
  • Simplifying the obligations of deployers especially when deploying AI systems for the same purposes and in the same manner.

As the Study was published before the release of the EU Digital Omnibus, the Omnibus does not feature in its analysis, despite some of the Study’s medium-term recommendations involving legislative amendments. The extent to which the proposals in the Digital Omnibus mitigate any inconsistencies, overlaps or gaps identified in the Study has therefore not been addressed.

UK Law Commission Publishes AI Discussion Paper

The UK Law Commission (“Law Commission”) published its discussion paper on 31 July 2025 (“AI Paper”), which distils the debate around how AI interacts with the law into three recurring and overarching issues: (i) AI autonomy and adaptiveness, (ii) oversight and reliance on AI, and (iii) model training and data. The AI Paper is intended to be a high-level analysis of these issues and of how they present challenges for the UK’s existing legal doctrines and frameworks.

The AI Paper discusses three key areas with a particular focus on the first:

  1. Autonomy and adaptiveness: AI systems are designed to evolve based on the data they are fed, which naturally creates moving-target risks. The AI Paper considers how to apportion responsibility when AI system behaviour shifts over time: what counts as unsafe, which risks were reasonably foreseeable, and what level of ongoing monitoring is expected. It also explores difficulties in assessing how responsibility should be shared across the value chain, from foundation model developers to deployers and end users, so that design choices, updates and real-world use are each accounted for. In addition, AI is becoming increasingly autonomous, with agentic AI being used by many organisations to fulfil complex multi-step tasks without ongoing human prompting (unlike common AI chatbot assistants). The AI Paper describes that, with greater autonomy, “liability gaps” are revealed where “no natural or legal person is liable for the harms caused by, or the other conduct of, an AI system”. This is more apparent where AI is used in, and is a product of, complex (cross-border) supply chains, and the Law Commission highlights the difficulties of identifying whether there is causation between the output of an AI system and the harm created, or of proving the necessary knowledge of, for example, an AI provider to establish liability. In a supply chain that comprises, for example, the data collector, the creator of the foundation model, and the software developer and distributor, it is very difficult to identify who owes a duty of care to the end user. Further, by their nature, AI systems are often ‘opaque’ – a phenomenon known as the ‘black box problem’. This means there is often no way of knowing how an AI system arrived at a particular decision, and therefore no way to unpack who is responsible. The autonomous and adaptive characteristics of AI arguably pose the greatest conundrum for our existing liability regimes.
  2. Oversight and reliance on AI: When AI informs decisions about people’s rights or access to services, the basics still matter: clear accountability, understandable reasons and meaningful human oversight. The AI Paper points to practical guardrails such as documentation, audit trails, testing and incident reporting, and how they support fair process in the public sector and risk management in the private sector. It also flags that criminal liability might arise where organisations over-rely on automated outputs or allow “human in the loop” checks to become formalities rather than critical parts of the process.
  3. Training and data: Reliable, relevant and accurate data underpins trustworthy AI. The Law Commission highlights duties around data sourcing, quality, bias and updating, as well as managing drift and feedback loops as systems learn from new inputs. It suggests keeping reliable records of training data and model governance so that issues can be tested in disputes. The direction of travel is alignment: UK rules should sit sensibly alongside international frameworks, including the AI Act, to reduce fragmentation while supporting innovation.

Although simply a hypothetical scenario at this stage, the AI Paper proposes a novel solution as an attempt to address the above risks and liability gaps: “the option of granting some AI systems legal personality is likely increasingly to be considered”. It provides arguments for and against this approach; an obvious argument against is that AI systems may be used as “liability shields”, protecting developers from reasonable accountability. Additionally, the AI Paper notes the complexity of enabling AI systems to hold funds and assets and of ensuring they would be meaningfully accountable, for example, when claims are brought against them. Conversely, separating the legal identity of the AI system from that of the entity behind it may encourage AI innovation and research (by giving AI developers separation in terms of liability) and provide fertile ground to test ideas and technologies. The approach may also enable AI systems to be incentivised to avoid liability as part of their programming, ensuring they operate and adapt safely and lawfully. What is clear from the AI Paper is that AI is causing us to refocus on various legal doctrines and approaches to liability, but for any substantive legal changes, we first need to arrive at some form of consensus on the definition of ‘AI’ in the UK.

UK Digital Regulation Cooperation Forum Publishes Insights from AI Hub Pilot

The DRCF is a collaborative body comprising the UK’s key digital regulators: the Competition and Markets Authority (“CMA”), the Information Commissioner’s Office (“ICO”), Ofcom, and the Financial Conduct Authority (“FCA”). From April 2024, the DRCF ran a one-year pilot of its new AI & Digital Hub, a multi-agency advice service assisting AI and digital innovators with cross-cutting regulatory questions (“Hub”). Following the pilot, the DRCF published a report on 10 October 2025 summarising insights from the pilot which, in short, show that coordinated, multi-regulator engagement can materially de-risk and accelerate AI deployment in the UK (“Report”). The Report positions cross-regulatory hubs and thematic initiatives as core practical tools for delivering the UK’s “pro-innovation” AI regulatory strategy, rather than creating a new single AI regulator.

The Report explains how the pilot operated (free, informal, non‑binding advice; cross‑regulatory scope; eligibility criteria around innovation, AI use, public benefit and multi‑regulator remit) and summarises feedback from participating innovators (20 in total) and regulators.

The Report states that innovators gained a clearer understanding of overlapping regimes (e.g. data protection, consumer, financial services, online safety), increased confidence in compliance strategies, and cost and time savings in bringing products to market. The relevant regulators highlighted that the model strengthened cross-regulatory collaboration, built internal capability on AI, and prompted cultural change towards more joined-up, anticipatory supervision.

The Report aligns closely with the UK government’s “pro-innovation” white paper, published on 29 March 2023, which relies on existing sectoral regulators, coordinated by bodies like the DRCF, rather than creating a horizontal AI statute or standalone regulator at this stage. It also anticipates wider government initiatives, such as the work of the Regulatory Innovation Office – which sits within the Department for Science, Innovation and Technology and works with regulators to ensure that regulation enables innovation in science and technology – on smarter tools and unified digital regulatory guidance, signalling that cross-regulatory innovation support will be scaled and embedded rather than treated as a one-off experiment.

UK Government Calls for Evidence in setting up AI Growth Lab

In the first edition of our EU & UK AI Round-up, published on 15 January 2025, we noted that the UK government had unveiled its AI Opportunities Action Plan (“Action Plan”). One of the three pillars of the Action Plan was to “lay the foundations to enable AI”, including by “Work[ing] with regulators to accelerate AI in priority sectors and implement pro-innovation initiatives like regulatory sandboxes”. The UK government has now called for evidence to set up an AI Growth Lab, a regulatory sandbox for AI (“Call for Evidence”). Issued on 21 October 2025, the Call for Evidence builds on that Action Plan recommendation, encouraged by other successful sandboxes such as the FCA’s regulatory sandbox as part of its Innovate Project.

At this preparatory stage, the UK government is seeking input from individuals and organisations who are interested in using, or are likely to be affected by, the AI Growth Lab, as well as those with expert views on implementing sandboxes. Specifically, the UK government is seeking input on foundational aspects, such as which sectors and AI applications should be prioritised, oversight models, eligibility criteria for participation in the AI Growth Lab, and institutional models. The two models suggested in the Call for Evidence are “centrally operated”, run by the UK government across sectors, and “regulator-operated”, run by a lead regulator appointed for each sandbox, although the paper states the government is “exploring a range of options”. A regulator-operated model would reflect the current sector-based approach to AI regulation in the UK and allow regulators, integrated in the markets they serve, to account for sector-specific risks.

The Call for Evidence also seeks input on which regulations should be temporarily relaxed or suspended as part of the AI Growth Lab, and on proposals for “red lines” – regulations that should never be modified or disapplied during a pilot. Examples of “red lines” identified by the UK government itself include regulations relating to human rights, consumer rights and intellectual property rights (among others). Submissions will be accepted until 2 January 2026.

Litigation Trends

Litigation in the UK and EU over the last quarter underscores how courts are starting to test the boundaries of existing IP and data protection frameworks as applied to generative and surveillance AI. In the UK, the High Court’s November 2025 decision in Getty Images v Stability AI dismissed a landmark copyright claim on essentially territorial grounds, holding that Getty had not shown that infringing training acts occurred in the UK and confirming that supplying into the UK a trained model, which did not store infringing copies, did not amount to secondary infringement; i.e., a model trained on copyright-protected works is not itself an infringing copy. This has been widely read as a short-term win for model developers, but also as a signal that future claimants will focus on pinpointing where training dataset curation and fine-tuning have taken place. In parallel, the UK Upper Tribunal’s ruling in the Clearview AI facial recognition case revived the ICO’s enforcement action by confirming that Clearview’s large-scale scraping, storage and analysis of images of UK residents fell within the UK GDPR’s territorial scope notwithstanding the law enforcement context of many client uses, reinforcing that non-EU and non-UK providers training models on EU/UK data may still face regulatory and civil exposure where their processing is sufficiently connected to individuals in these markets.

Across the EU, there is an increasingly dense regulatory backdrop, with Member State initiatives (such as Italy’s “EU plus” national AI law, which builds out additional obligations on top of the AI Act baseline) prompting expectations of more private and public enforcement actions that combine AI-specific rules with traditional causes of action in copyright, data protection, product liability and consumer law. Elsewhere, EU litigation – including cases before the Court of Justice of the European Union (“CJEU”) – is bringing to the fore legal risks for foundation model providers, tackling the principles of transparency and fairness in the use of third-party material within AI systems and AI-generated content.

Conclusion

With the next international AI Impact Summit set to take place in India in February 2026 following the AI Action Summit held in Paris, France last February, policymakers, regulators, and industry leaders will have another opportunity to build on the momentum generated this year and to refine a shared approach at an international level towards responsible AI innovation. Many commentators argue that international alignment is the only way to bring about AI use that is safe for users while creating the legal certainty businesses and innovators need.

In the EU, 2026 will no doubt be marked by the Digital Omnibus package and the extent to which its final form amends or delays the EU AI Act. Any delay in the implementation of the EU AI Act means a delay in enforcement actions, and although that state of flux may create uncertainty for organisations, they may be encouraged by the practical guidance and codes of conduct promised by the EC.

In the UK, the ICO’s forthcoming review of its guidance on automated decision-making underscores the regulator’s commitment to keeping pace with technological and ethical developments. Meanwhile, the deadline for the UK government’s obligation under the Data Use and Access Act 2025 to prepare an economic impact assessment of different policy options relating to the use of copyright works in the development of AI systems is fast approaching – the assessment is due nine months after the Data Use and Access Act 2025 received Royal Assent, i.e., by March 2026, with a progress statement due imminently.

Organisations should remain alert, adaptive and proactive in assessing their AI frameworks and engage with the various regulatory consultations and sandboxes available to them to turn compliance into a strategic advantage in an increasingly complex regulatory landscape.
