Regulating AI – which approach will prevail?

Overview

In February 2021 Travers Smith and techUK led a webinar which, amongst other issues, examined how the growing need for greater transparency – and possibly formal reporting – on the use of AI and algorithms could lead to the development of an effective audit and assurance framework.

Since then, the EU has published its proposals, in April 2021, for a regulation to harmonise the rules on artificial intelligence across EU member states. As Carly Kind from the Ada Lovelace Institute said in a blog on these EU proposals:

“The draft AI regulation published by the European Union last week is significant because it’s the first of its kind in the world – a comprehensive, cross-sectoral, supranational attempt to regulate artificial intelligence (AI) and algorithmic products across a range of ‘high-risk’ sectors. While only a week old, the Commission’s proposal has already achieved an impressive feat: it has shifted the policy window away from a conversation about whether to regulate artificial intelligence, opening up a new discourse about how to regulate artificial intelligence.”

In addition, in April 2021, the US Federal Trade Commission published a blog on “Aiming for truth, fairness, and equity in your company’s use of AI”, and the new Biden White House Administration has signalled its intent to take a lead in the global debate around AI, with a particular focus on not being outflanked by China – as laid out in the March 2021 final report of the US National Security Commission on AI.

Meanwhile the UK is awaiting the Government's publication of a National AI Strategy, announced as part of the UK’s Ten Tech Priorities, which will be based on the 16 recommendations published by the UK AI Council in January 2021.

Why regulate AI?

It now seems to be generally accepted that AI needs to be regulated because it can make decisions which affect human lives, and there needs to be confidence that those decisions are made safely, ethically and free from bias and discrimination.

The recent Report from the UK Commission on Race and Ethnic Disparities included a recommendation that:

"… supports the recommendations of the Centre for Data Ethics and Innovation (CDEI) and calls on the government to:

  • place a mandatory transparency obligation on all public sector organisations applying algorithms that have an impact on significant decisions affecting individuals 
  • ask the Equality and Human Rights Commission to issue guidance that clarifies how to apply the Equality Act to algorithmic decision-making, which should include guidance on the collection of data to measure bias, and the lawfulness of bias mitigation techniques"

It is possible to regulate in different ways according to the context within which AI is being used.  For example, the EU proposes imposing the strictest regulatory requirements on biometric AI and AI that manipulates human behaviour.  AI using facial recognition has attracted particular attention for some time now, including a Court of Appeal decision in the UK about the use of such technology by South Wales Police.

There may well be different criteria applied to AI depending on who is using it.  For example, the EU proposals allow greater leniency to law enforcement agencies using AI in very specific contexts.

How to regulate AI?

So, the debate now seems to focus on what the regulation of AI should look like and how it should work.  Should it be restrictive, even to the extent of prohibiting certain uses of AI from the outset, with the aim of protecting consumers? Or should it be light touch to enable innovation, but give consumers as much information as possible so they can understand how the AI works and the data it uses, and then potentially object to a decision made by the AI?

The EU approach

The EU has clearly gone for the first option.  The draft regulation sets AI deployment within the long-established EU fundamental rights framework.  It defines what ‘high-risk’ AI is and sets out a system for the registration of stand-alone high-risk AI applications in a public EU-wide database.  AI providers must provide ‘meaningful information’ about systems and prepare conformity assessments.  The EU has decided that ‘comprehensive ex-ante conformity assessments’ are the most effective solution to the problems it sees, given that AI auditing expertise ‘is only now being accumulated’.

AI providers must also notify the relevant national supervisory authority of any serious incidents or malfunctions that result in a breach of fundamental rights obligations.

The draft regulation proposes a list of prohibited AI – systems that distort human behaviour, systems that result in ‘social-scoring’, and those used for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement, subject to three narrowly defined exemptions.  It is worth noting that, as currently drafted and unless clarified later, this could also potentially catch systems that are thought of as relatively benign, e.g. posts on Instagram suggested to a user on the basis of content previously viewed.

Other ‘high-risk’ AI which is not prohibited, but which must follow mandatory requirements, includes:

  • AI systems that are safety components of products already regulated by the EU, or that are included within critical infrastructure;
  • ‘real-time’ or ‘post’ remote biometric identification systems;
  • systems used in education or vocational training;
  • systems used in employment or self-employment, notably ‘for the recruitment and selection of persons’;
  • systems used to evaluate someone’s credit score or creditworthiness;
  • surveillance systems;
  • systems used in migration, asylum and border control; and
  • systems used for the administration of ‘justice and democratic processes’.

The mandatory requirements which will apply to such ‘high-risk’ AI systems include provisions on the following (a hypothetical record-keeping sketch follows the list):

  • the quality of the data sets used;
  • technical documentation and record-keeping;
  • transparency and the provision of information to those using the systems;[1]
  • human oversight;
  • robustness;
  • accuracy; and
  • cyber resilience.
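
For providers, these heads of requirement translate naturally into internal record-keeping. The sketch below is a purely hypothetical illustration – the draft regulation prescribes no such structure – of how a provider might track evidence against each requirement in code; all field names and example values are assumptions for illustration.

```python
# Hypothetical internal compliance record loosely mirroring the draft
# regulation's heads of requirement for 'high-risk' AI systems.
# Field names and values are illustrative, not taken from the regulation.
from dataclasses import dataclass, field

@dataclass
class HighRiskAIRecord:
    system_name: str
    data_quality_notes: str       # provenance, representativeness, known gaps
    technical_documentation: str  # pointer to design docs and decision logs
    transparency_measures: str    # information given to users of the system
    human_oversight: str          # how and when a human can intervene
    robustness_evidence: str      # stress and degradation testing performed
    accuracy_metrics: dict = field(default_factory=dict)
    cyber_resilience_notes: str = ""

record = HighRiskAIRecord(
    system_name="example-credit-scoring-model",
    data_quality_notes="Training data reviewed for coverage across age bands.",
    technical_documentation="docs/model_card_v3.md",
    transparency_measures="Applicants informed that a model informs decisions.",
    human_oversight="Adverse decisions are reviewed by a human underwriter.",
    robustness_evidence="Tested against missing fields and drifted inputs.",
    accuracy_metrics={"auc": 0.87},
)
print(record)
```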

Where necessary, in order to assess the conformity of the AI system with the mandatory requirements, the national supervisory authority may be granted access to the source code of the AI system.

Familiar concepts

The EU's draft regulation contains many requirements, such as conformity assessment, that are common to existing EU harmonised product rules. Not least of these is CE marking, which indicates that a product conforms to all applicable mandatory requirements – or, perhaps more straightforwardly for consumers, that a product is "safe". No product containing AI technology will be allowed to use the CE mark unless all the AI requirements are also met.

For a range of high-volume products which incorporate AI systems, from toys to medical devices, product manufacturers will be deemed responsible for the AI's conformity as if they themselves were the provider of it. Considering the very significant penalties available to regulators for non-compliance, this could make product manufacturers hesitant to incorporate AI into products, or at least present a challenge for AI developers, who could be asked to accept onerous commercial terms to absorb the additional risk placed on the product manufacturer. On the other hand, AI systems, or products containing them, which comply with the EU rules may well be viewed as the "gold standard", with user/consumer trust driving demand and a price premium over competing products.

AI systems will, like other harmonised products, be able to demonstrate conformity by applying harmonised standards which set out practical requirements used in product development and engineering. As yet, these standards do not exist. The EU has a clear opportunity to take a leading position in this space, and it will be interesting to see whether "the Brussels effect" results in the EU's AI standards becoming de facto international rules.

A lighter-touch approach

It isn’t yet apparent which approach the UK or the US will follow – except to note that neither country has rushed to endorse the EU approach, although they are no doubt watching reaction to the EU proposals carefully.  

The protection of consumers and the desire to prevent or correct any bias in an AI system will undoubtedly be important objectives for any government or regulator seeking to limit the potential harms of an AI system.  The UK may well follow the approach suggested in a House of Lords December 2020 report, which proposed that different regulators would each address issues specific to their sector, in coordination with each other, rather than adopt the EU’s cross-cutting approach.  An early example of this could be the May 2021 UK Government Ethics, Transparency and Accountability Framework for Automated Decision-Making, which is intended to “help government departments with the safe, sustainable and ethical use of automated or algorithmic decision-making systems.”

So far the US approach, as captured in the FTC’s April 2021 note, calls on those building AI systems to build in ‘truth, fairness and equity’ from the start – almost an ‘ethical by design’ approach, without prohibiting particular systems.

The FTC reminds its audience that it already tackles unfair and deceptive practices and draws attention to specific legislation outlawing credit discrimination or discrimination in the allocation of other benefits such as housing.  This includes the Equal Credit Opportunity Act, under which the use of AI has already led to the US Consumer Financial Protection Bureau issuing formal requests for information to businesses to explain their use of AI/machine learning.

It also draws attention to the following general principles:

  • start with the right foundation – think about ways to improve your data set, design your model to account for data gaps, and – in light of any shortcomings – limit where or how you use the model;
  • watch out for discriminatory outcomes (a minimal illustration of such a check follows this list);
  • embrace transparency and independence;
  • don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results;
  • tell the truth about how you use data;
  • do more good than harm – if your model causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition, the FTC can challenge the use of that model as unfair;
  • hold yourself accountable – or be ready for the FTC to do it for you.
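
The ‘discriminatory outcomes’ point is, in practice, a measurement exercise. As a purely illustrative sketch – not something taken from the FTC’s note – the Python below compares a model’s approval rates across demographic groups and applies the ‘four-fifths’ heuristic familiar from US employment discrimination practice; the column names, data and the 0.8 threshold are all assumptions for illustration.

```python
# Illustrative check for uneven outcomes across groups ("disparate impact").
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Proportion of favourable outcomes (1 = approved) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest."""
    return rates.min() / rates.max()

# Hypothetical decision log from a credit model (illustrative data only).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 0],
})

rates = selection_rates(decisions, "group", "approved")
print(rates)
ratio = disparate_impact_ratio(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# The 0.8 cut-off is the common 'four-fifths' heuristic, not a legal safe
# harbour; a low ratio is a prompt to investigate, not proof of unlawfulness.
if ratio < 0.8:
    print("Warning: outcomes differ markedly across groups - investigate.")
```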

The UK’s National AI Strategy is expected to be published at some point in 2021.  So far, the UK AI Council’s Roadmap makes 16 high-level recommendations to help shape the strategy.  In relation to the building and retention of public trust, the AI Council also focuses on data governance and public scrutiny.

The Council believes “…the UK has a crucial opportunity to become a global lead in good governance, standards and frameworks for AI and enhance bilateral cooperation with key actors” and should “…ensure public trust through public scrutiny. The UK must lead in finding ways to enable public scrutiny of, and input to, automated decision-making and help ensure that the public can trust AI.”

Transparency and creating space for public scrutiny

What all the above requirements have in common is that they require an AI system provider to be transparent about how the AI works, how it makes decisions and on what basis.

Such transparency or explainability can be achieved either by following the EU’s ex-ante conformity assessment approach or by making provision for a formal assurance or audit process.  The latter is what was discussed at the Travers Smith webinar on AI in February 2021.  As Travers Smith Commercial, IP & Technology Partner, James Longster, said, "the laws relevant to AI and algorithms are many and varied although there is not, at this stage, a law of algorithms per se." James advised that organisations need to look at the context in which algorithms are being used to identify the relevant laws for any legal assurance or audit programme. He focused on two main areas – the laws and regulations relating to equalities and the laws around the processing of personal data.

A regulator already operating in this space is the UK’s Information Commissioner.  At our webinar, Alister Pearson from the Information Commissioner's Office (the ICO) made clear that the ICO’s interest in the use of AI stems from the fact that, in a vast proportion of cases where AI and algorithms are being used to make a decision, personal data is being processed. As the data protection regulator, the ICO has a responsibility to regulate how personal data is being processed in that context. In early 2019 the ICO started to develop an AI auditing framework to provide a clear methodology for the auditing of AI systems, to set out how the ICO thinks data collection and processing laws apply to the use of AI, and to support the ICO’s investigations and assurance teams with their work. The guidance was published in July 2020 and the ICO has recently closed a consultation on its AI Risk Toolkit.

On the question of AI audit, Maria Axente, Responsible AI and AI for Good Lead at PwC, believes that “…we need to be very specific about what we audit for and what we audit against. I think having that clarity and also support from the legislature will give AI audit or assurance …a boost”.

It is possible that a strong assurance/audit framework will head off the need for any jurisdiction to adopt the EU’s stringent approach to AI, whereby some uses are prohibited from the outset. Those seeking an alternative approach might argue that such a strict approach is likely to stifle innovation – although the EU would point to its support for allowing innovation within a sandbox structure supported by a new European AI Board.  But an outright ban on facial recognition technology has already been opposed by the UK’s new Biometrics Watchdog.

What does this mean for businesses now?

At our webinar, Maria argued that the entire C-suite in a business is critical in assessing how that business handles AI, and that it is a competitive advantage to have executive leadership who understand how to ensure AI is used ethically and responsibly.  Handling AI well is not only important from a safeguarding and brand reputation standpoint, but also positively impacts business performance. The challenge is to bring together the conversations on governance of AI, behaving ethically and managing AI risk.

Maria pointed to the maturity of conversations about the use of AI in financial services businesses and, in particular, their understanding that every part of the business, from compliance to frontline business to HR, has to work together on deploying AI successfully and responsibly. As Maria said, the use of AI “triggers a profound organisational change.”

The use of AI within financial services is already the subject of attention from both the Bank of England and the FCA. In 2020 the FCA announced a year-long collaboration with the UK’s Alan Turing Institute.  The project:

“… will examine current and future uses of AI across the financial services sector, analyse ethical and regulatory questions that arise in this context, and advise on potential strategies for addressing them. In doing so, the joint project will place a special focus on considerations of transparency and explainability. One of its aims will be to move from discussions of general principles for the development and use of algorithmic systems towards a better understanding of the concrete and specific challenges associated with AI use cases in the financial sector, including their implications from a regulatory perspective.”

The Bank of England’s Deputy Governor for Markets & Banking, Sir Dave Ramsden, said in a speech in October 2020 that, in a recent survey the Bank had conducted, “…around 45% of (the banks and insurers we regulate) that participated reported that the (Covid) crisis has led to an increase in the importance of AI and data science applications for their future operations…”

The conclusion from all of this work is that, although regulatory frameworks are yet to be finalised (in the EU) or even formally defined (outside the EU), there can be no doubt that the regulation of the use and design of AI is heading our way.

Businesses can prepare by: ensuring the Board and senior management are fully briefed on the use of AI and the data it relies on; thinking through how they explain publicly the AI used in the business; ensuring their risk profile, systems and governance address the risks that AI brings; and considering what they will do if their AI falls within the EU’s ‘high-risk’ categorisation.  This is likely to stand all relevant businesses in good stead in complying with the eventual EU Regulation or meeting any future requirements for assurance or a formal AI audit.

References

[1] For example ‘when persons interact with an AI system or their emotions or characteristics are recognised through automated means, people must be informed of that circumstance. If an AI system is used to generate or manipulate image, audio or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content is generated through automated means, subject to exceptions for legitimate purposes (law enforcement, freedom of expression). This allows persons to make informed choices or step back from a given situation.’

For further information, please contact

Sarah-Jane Denton