Overview
On the first anniversary of the AI Continent Action Plan, the European Commission released two new Joint Research Centre (JRC) publications: a Science for Policy Brief titled Unpacking the Path to Trust and Excellence in AI (“JRC Brief”) and a full report titled Advancing AI Adoption in EU Public Administrations (“Public Sector Report”). Together, they offer both a retrospective on a decade of EU AI policymaking and a forward-looking framework for accelerating public sector AI adoption across Europe.
The tone is confident, presenting the EU's AI governance apparatus as a model of coherent, values-based regulation. Non-EU AI companies, however, should look past the institutional narrative. Beneath the surface lies a set of policy signals with real implications for market access, procurement strategy, and competitive positioning in Europe.
The Commission's Story: Exploration, Transition, Maturation
The JRC Brief traces the EU's AI policy arc in three phases, from the 2018 Communication on AI for Europe to the current trifecta of the EU AI Act, the AI Continent Action Plan, and the Apply AI Strategy (COM(2025) 723).
The AI Act is cast as the foundation of an "ecosystem of trust," the Continent Action Plan as the infrastructure for "excellence," and the Apply AI Strategy as the push toward real-world adoption. The Public Sector Report builds on this last pillar, proposing an operational framework structured around anchoring AI in EU values, adapting organizational capacity, and applying AI in high-impact domains.
The overarching message: the framework is in place, scientific evidence has informed every stage, and the hard work of implementation lies ahead.
What the Reports Do Not Say
While both publications map the internal architecture of EU AI governance thoroughly, they offer little by way of comparative analysis or self-criticism. The JRC Brief acknowledges three "challenges ahead" (the gap between legislation and practice, scaling difficulties, and geopolitical tensions) but treats each as a matter of fine-tuning rather than a structural concern. Tellingly, the JRC Brief concedes that "only implementation can answer empirical questions, such as how requirements are met in practice, how compliance burdens are distributed across firms of different sizes and whether the regulatory architecture produces the intended outcomes." This is a frank admission that sits in some tension with the document's otherwise assured tone.
The Public Sector Report is similarly inward-looking. It recognizes that AI "hype" may drive premature adoption and that administrations "may face pressure to rapidly adopt AI due to external influences," but does not meaningfully consider whether the EU's layered regulatory approach might itself be contributing to slower adoption rates relative to other jurisdictions.
Four Signals Non-EU AI Players Should Watch
- Non-EU Technology Framed as a Governance and Sovereignty Risk. The Public Sector Report explicitly frames reliance on non-EU AI as a threat to democratic governance, warning of "a shift in decision-making power to entities beyond the EU's borders." It links this concern to the broader concept of digital sovereignty, defined as "the EU capacity to exercise its independence in the digital domain while remaining open and connected to global networks," and cites a 2026 European Parliament resolution calling for measures to "guarantee EU independence and security by protecting our strategic infrastructure and reducing our dependence on non-European technology providers." The report further argues that reducing dependence on foreign technologies can result in "greater control over data and infrastructure, improved alignment with EU regulatory standards and reduced dependency on opaque AI models developed by private companies outside the EU." This framing does more than signal a market preference for domestic solutions; it provides a policy rationale for layering additional compliance requirements onto providers whose technology originates outside the EU. Non-EU providers should take note: when official policy documents treat foreign technology as an inherent governance risk, the compliance obligations that follow tend to expand over time.
- The AI Act as a Framework Whose Burdens Remain Untested. The JRC Brief describes the AI Act as a risk-based framework that "creates the conditions for a single market for AI systems and general-purpose AI models" through a layered set of obligations: prohibitions on unacceptable practices, requirements for high-risk systems, and transparency obligations for general-purpose AI models. The JRC Brief's concession, quoted above, that only implementation can reveal how requirements are met in practice, how compliance burdens fall across firms of different sizes, and whether the architecture produces the intended outcomes is not merely a caveat; it is an indictment of the regulatory approach itself. Enacting binding obligations whose practical effects, distributional burdens, and operational feasibility remain empirically unknown is not sound policymaking; it is legislating by hypothesis. The AI Act's own implementation trajectory confirms the concern: the Commission missed its statutory February 2026 deadline for Article 6 high-risk classification guidance, harmonized standards remain unfinished well past their original delivery dates, and the Digital Omnibus proposal to delay the high-risk regime exists precisely because the compliance infrastructure was not ready. When a regulator must seek a legislative extension before its own rules take effect, the framework's "untested" quality is no longer an open question; it is a structural deficiency.
- The AI Act's Staged Rollout: Uncertainty and Opportunity. The AI Act's obligations are entering into force in stages: the first rules have applied since February and August 2025, and further enforcement begins in August 2026. The Commission has launched a service desk offering tailored compliance guidance, reflecting its "iterative" approach. For non-EU providers, this staging creates both compliance uncertainty and a window to engage early with the AI Office and national authorities. The ongoing simplification effort, including the Digital Omnibus initiative, signals flexibility, though providers should be cautious about expecting meaningful near-term reductions in compliance costs.
- An Institutional Infrastructure That Non-EU Providers Must Navigate from the Outside. Both publications describe a dense and growing institutional apparatus supporting AI adoption and governance in Europe: AI Factories, AI Gigafactories, Data Labs, the AI Skills Academy, the AI Observatory, the Apply AI Alliance, the AI Board, European Digital Innovation Hubs rebranded as "Experience Centres for AI," the GenAI4EU initiative, and the Public Sector Tech Watch. The Public Sector Report recommends that administrations use procurement strategically to "strengthen the European GovTech ecosystem and EU digital sovereignty" and suggests that public administrations "consider measures such as targeted funding, incentives and regulatory support for European AI GovTech companies." For non-EU providers, this institutional landscape presents a practical challenge. Compliance with the AI Act requires engagement with the AI Office, national market surveillance authorities, and evolving standards processes, all of which are designed primarily with EU-based stakeholders in mind. The JRC Brief's acknowledgment that the gap between legislation and practice remains the "most immediate challenge" suggests that the compliance pathway will be shaped through iterative dialogue between regulators and stakeholders, a process in which EU-based participants will have a structural advantage.
Practical Takeaways
Non-EU AI companies should keep three things in mind.
First, both publications treat the AI Act as the settled foundation of EU AI governance, but the JRC Brief's own acknowledgment that compliance burdens remain empirically untested should prompt non-EU providers to engage early rather than wait. The regulation applies to any company whose AI system is placed on the EU market, and the compliance pathway will be shaped through iterative implementation. Companies that participate in that process, through the AI Office's service desk and standards consultations, will be better positioned than those that react to settled rules after the fact.
Second, the AI Act's staged implementation means that compliance obligations will continue to expand through August 2026 and beyond. As the JRC Brief notes, general-purpose AI models are subject to transparency obligations, with further requirements for models posing systemic risks, a regulatory layer that applies most directly to the large-scale foundation models disproportionately provided by non-EU companies. Providers should engage proactively with the AI Office and participate in standards development processes rather than waiting for final enforcement timelines to crystallize.
Third, the Public Sector Report's framing of non-EU technological dependence as a risk to democratic governance and digital sovereignty is not merely rhetorical. It provides a policy foundation, reinforced by a European Parliament resolution on technological sovereignty, for further measures that could progressively raise the compliance bar for foreign AI providers. Non-EU companies should monitor these policy signals closely and consider whether establishing deeper European partnerships or local presence could reduce both regulatory and commercial friction.
Conclusion
The Commission's anniversary publications present a European AI governance framework that is mature, coherent, and values-driven. That narrative is not without merit, but it also reflects a degree of institutional confidence that understates both the real gap between regulatory ambition and adoption on the ground and the tension between "digital sovereignty" rhetoric and Europe's continued reliance on non-EU AI technologies.
For non-EU AI players, the key takeaway is not the Commission's celebratory tone but the compliance and institutional architecture being assembled beneath it: one that, through its risk classifications, sovereignty framing, and institutional infrastructure, places a structurally heavier burden on non-EU providers, and one that shows every sign of continuing to expand. For those providers seeking to maintain and grow their presence in the European market, early and informed engagement with this evolving framework is essential.