Legal briefing

Travers Smith's Alternative Insights: The EU AI Act


Overview

A regular briefing for the alternative asset management industry. 

The EU's AI Act will affect most alternative asset managers operating in Europe, and many of their portfolio companies.  And, although most of the provisions will be effective from August 2026, some requirements are more imminent – including an obligation for certain staff to be "AI literate" by 2 February next year.  (We can provide that literacy training – let us know if we can help.)

But could the Act be good news for European investors?  Should it affect their investment preferences?

Businesses generally regard regulation as an unwelcome cost.  Certainly, recent EU attempts to regulate sustainable finance have not helped the reputation of rulebooks – even when firms have worked hard to embrace the sustainability agenda, they have felt the disproportionate cost of well-intended regulatory interventions. 

So it is easy to forget that regulation is essential for the operation of efficient markets.  And intelligent regulation can be a competitive advantage, especially when new technologies are emerging. 

With that in mind, the European Commission has set its sights on using the single market rulebook to burnish its leadership in Artificial Intelligence.  The Commission hopes that consistent AI regulation across all 27 member states, the EU's first-mover advantage, and the Act's extra-territorial effect, will shape emerging international standards (the so-called "Brussels Effect"), and attract innovators to the bloc.  It wants the EU to be a "world-class hub for AI".

Of course, competitive advantage is not the only objective of the Act; most importantly, the Commission wants to ensure that AI "works for people and is a force for good in society", protecting EU citizens from the harms that many have predicted.  

But the (secondary) competitiveness objective is welcome – and there are some encouraging signs.

There is certainly some reason to believe that the EU's approach will shape international norms; indeed, the influence of the Commission's advisory expert group on a parallel global initiative, the 2019 OECD AI Principles, seems clear.  And the AI Act's risk-based framework is highly permissive towards limited or minimal risk technologies.  That permissive approach will apply across the entire single market – with more restrictive national regulations largely prohibited – providing a degree of certainty that innovators will find attractive.  

"… the EU's ambition for its regulatory framework to help it establish global leadership seems overly optimistic."

As many alternative asset managers will be "deployers" (to use the Act's terminology), and some portfolio companies will also be "providers", firms will need to familiarise themselves with the impending regulation.  They will also need to bear in mind the confirmations by the EU and UK regulators that their current financial services rulebooks apply to the deployment of AI, just as much as they do to human actions in regulated firms.  So responsible AI frameworks are essential, and should take full account of these regulatory requirements. 

But the regulation is manageable and, in general, if the technology is not "unacceptable" or "high risk", developers have significant latitude.

However, the EU's ambition for its regulatory framework to help it establish global leadership seems overly optimistic.  Its rules-based approach means it will be slow to respond to a rapidly changing technological landscape. 

For instance, some argue that its provisions on generative AI were something of an afterthought, introduced following the surge in popularity of large language models like ChatGPT.  They are concerned that the Act's threshold for treating these models as posing "systemic" risk, which is based on computational power, is outdated and overlooks smaller (yet highly capable) models, raising questions about the effectiveness of the EU's approach to regulating higher-performing systems.

Conscious of these and other risks, the US and the UK are deliberately adopting a principles-based and explicitly pro-innovation approach to their emerging regulatory interventions, hoping that these will provide more flexibility. 

At the same time, some commentators are pushing for a truly international approach to AI regulation – especially important given that the impacts of AI will not respect national borders.  Comparisons with the GDPR, the EU's 2016 data protection law, are apposite.  The GDPR became the de facto standard for data privacy around the world, even though there was no global agreement that its provisions were optimal.  And, in this area – as in so many others – global agreement may be a pipedream.

The main problem for the EU, however, is that its rules-based approach will replace one type of uncertainty with another: as technologies develop it will be hard to apply the rulebook to novel applications.  There are mechanisms built into the Act to accommodate changes, but it remains to be seen whether changes will be fast enough – the Commission's legislative process can be slow and cumbersome.  And the price of getting it wrong could be high: innovators and their investors might reasonably worry about the substantial penalties which can be imposed for non-compliance – including fines of up to €35 million, or 7% of global annual turnover, whichever is higher.

Firms weighing up where to invest will no doubt take regulatory interventions into account, even if that will not (yet) be the primary driver. And the EU's competitiveness ambition is laudable.  But the jury is out on whether the first mover will gain sustainable advantage.

Simon Witney
