AI in service supply and outsourcing contracts: managing the risks

Overview

Artificial intelligence tools can be a game changer in outsourcing and other contracts for services - promising big wins in terms of costs, time, accuracy, scalability and productivity, to benefit both sides of the negotiating table. To reap those benefits, it is important to stay on top of the "new" risks associated with the use of AI in these arrangements. As we explain in this briefing, some of these risks might be entirely new, but many are a recasting of familiar challenges.
1. Flush out the service provider's uses of AI
Many service providers will be falling over themselves to point out their use of AI technology in marketing material. Customers will nevertheless want to flush out a complete picture of the service provider's use of AI, including where AI is being used behind the scenes. This should be done through due diligence and within the contract itself, so that AI risks can be properly managed (much as open-source software provisions and policies have developed over the years to manage open-source risks). The customer may also have an Acceptable AI Use policy for its supply chain, with which it may require the service provider to comply.
2. Data traps for the unwary
Training data
Many uses of AI in a services context will involve inputting data into the AI tool, either for training the AI or for processing purposes. It will be important to establish whether the customer is to provide the training data (and, if so, whether it has the requisite dataset) and/or whether the AI tool is to be trained on third party data.
Protecting customer data
Service providers can significantly improve their offering by training AI on an aggregation of their customers' data, and many will look for opportunities through the contract to do so. Customers, on the other hand, will want to protect commercially sensitive information and may be less willing to share their data in ways that could benefit their service provider's other customers, including potentially competitors. Confidentiality (and the definition of "Confidential Information"), data, intellectual property, information security and post-termination provisions, in particular, will require close scrutiny to check that customer data is protected as intended. Similar issues arise in relation to which party owns the rights to improvements in the AI that result from the training data. There are various approaches here. At one end of the spectrum is a completely separate instance of the AI, where customer data is segregated from all other data and the customer owns all rights in improvements (so that other customers of the service provider, including the customer's competitors, do not benefit from them). At the other is "AI as a service", where the customer contributes to a data lake and the service provider owns the AI and any improvements. More nuanced approaches sit in between, for example to protect particularly high-value data.
Personal data
If the relevant data is personal data, it will be necessary to ensure compliance with data protection legislation. Again, this is not a new risk - but processing personal data using AI may require additional Data Protection Impact Assessments to be carried out, and may mean revisiting measures relating to transparency (see section 5 below) and the legal basis for processing (such as consent or legitimate interests), particularly if individuals might not expect their data to be processed using AI. Additional considerations will also apply if the AI involves automated decision-making that has a legal or similarly significant effect on individuals (e.g. AI screening CVs as part of an HR outsourcing).
Third party data
In response to the emergence of products such as ChatGPT, many businesses, particularly those whose business model relies heavily on the supply of data, are imposing new restrictions on the processing of their data by tools which use artificial intelligence - and in some cases such use is prohibited altogether. Whilst this is unlikely to be an issue where the relevant data belongs to the customer, it may need to be addressed where the data is sourced partly or wholly from third parties.
Exit and data migration
As ever, it is crucial to prepare for the end of the relationship right at the beginning of it. On exit, there will be similar "lock-in" risks to those that apply to other proprietary software tools - the service provider may be unwilling to provide information or license its AI tool to a replacement provider - but there may be additional data migration issues to consider in this context as well. It may not be possible to provide training data (e.g. belonging to third parties) to a replacement provider, nor to extract customer data that is held in a data lake. It will be important to mitigate these issues in the contract.
3. Tackling accuracy and reliability issues
Reliability: a salutary tale
While it does not concern an AI system, Bates v Post Office Ltd and the surrounding Post Office Horizon scandal are an extreme and poignant demonstration that IT systems are fallible and that this can have disastrous consequences when the humans in charge of them steadfastly close their eyes to that fact. The risk of overreliance on technology is therefore not a 'new' risk – but it is arguably more acute with AI, given the lack of transparency, explainability and auditability of certain types of AI (see further section 5 below).
Before relying on AI tools, businesses will want to know, for example:
- whether the original data on which the AI was trained was itself reliable (which is not something that has typically needed to be considered with software in the past)
- whether its outputs can be trusted
- whether there have been any independent evaluations of the tool's reliability and accuracy
- the extent to which there will be a "human in the loop" to check for anomalous outputs
- the extent to which the service provider is prepared to accept liability for errors caused by AI
- what remedies the service provider will offer if outputs are wrong, given that the workings of AI are often not transparent (and establishing a root cause for incidents could be difficult in such circumstances).
Technical standards: the difference between "compliance" and "certification"
Some technical standards exist in relation to AI – for example, see page 19 of this UK Government guidance on AI assurance. These may be helpful in providing some degree of comfort that the supplier's approach to AI is sufficiently robust. However, it is important to note that when a supplier says it is "compliant" with a particular standard, this is not the same as saying that it has been through a full certification process. Certification requires auditing by an independent third party, which involves a rigorous procedure and can be a lengthy and expensive process. "Compliant", by contrast, means that the supplier adheres to the relevant standard; whilst this can be evidenced (perhaps confusingly) by a "certificate of compliance", such a certificate relies on internal audits and self-assessments. So, whilst compliance offers some degree of assurance, much depends on how much you trust the supplier to have been rigorous in its own self-assessment.
Allocating responsibility for AI's outputs in the contract
We can expect to see creative arguments in future around liability for AI - the novel argument that Air Canada ran, for example, when its chatbot gave a passenger bad advice. Its attempt to side-step responsibility by arguing that the chatbot was akin to an agent, servant or representative, and therefore responsible for its own actions, was (unsurprisingly, in the context of that business-to-consumer relationship) robustly rejected by the Canadian tribunal. But in a B2B context, where the parties' bargaining positions are more evenly balanced, it may be less certain where a court's sympathy would lie, and this uncertainty is better headed off by a careful allocation of responsibility in the contract.
The service provider, for example, may look to rely on "relief"/"excusing cause" provisions if the services are negatively impacted by customer-provided training data that is of a substandard quality.
4. Intellectual property risks
While the focus has traditionally been on the ownership and licensing of IP in the software tool itself, where generative AI is used in the provision of services it will also be important to address IP rights in the AI tool's output.
As with other types of software, customers will typically want some form of contractual protection against third parties claiming that their intellectual property is being infringed. However, in the past, that risk has come primarily from competing software vendors claiming that their technology has been copied in some way or utilised without permission. With AI, as we explain in this article, the more significant challenge may come from holders of copyright in the data on which the AI product has been trained. If the training data infringes third party intellectual property rights, there is a risk of the output also infringing if it is a substantial copy of that training data. In addition, if material used to train the AI is open source and subject to "copyleft" licences, this could also create an infringement risk, because "copyleft" licences require that derivative works (which may include the AI's output) are in turn licensed on terms that are no less restrictive.
Customers will want to ask questions about the sources of training data and are likely to seek protection in the contract against third party infringement claims. Finally, if the AI is being used to interact with other third party software tools, customers will want to check that the relevant third party licences support such use (e.g. if they are licensed on the basis of human users, additional licences may be required for use by AI).
The "black box" conundrum
Even some AI vendors admit that they do not yet fully understand how their own technology produces the results that it does. That lack of transparency and explainability may be a significant impediment to building trust, when trust is usually a key success factor in longer term services contracts – whether that is the trust of the customer's stakeholders or, where outputs impact individuals (e.g. in an HR outsourcing), the trust of end users. Transparency and explainability are also important requirements for data protection compliance (as many services contracts involve processing personal data) and are set to be an important plank in other regulatory frameworks for AI, such as the EU's AI Act. Moreover, being able to explain the AI's logic and why its outputs are as they are will help in the allocation of responsibility between the parties.
To address these issues, customers will need to build into the contract sufficient information-provision, testing and audit requirements to ensure that the AI used in the provision of the services is explainable, at least to the extent that it interacts with, or impacts, humans or makes important decisions.
6. Avoiding bias and discrimination
Where AI is being used to assist in decision-making processes relating to individuals (e.g. when processing CVs, insurance claims or credit card applications), there is also an increased risk, compared with other types of software, that it may exhibit bias. This in turn could give rise to breaches of the Equality Act 2010, such as its provisions prohibiting discrimination in the supply of goods or services. These are risks which businesses have not typically had to devote much attention to in the past when evaluating other types of software tools. As noted in section 3 above, there are technical standards relating to bias in AI, which may help to provide some level of comfort.
7. Regulatory risk
To date, there has been little in the way of legislation which seeks to regulate the provision of software. However, governments worldwide are now looking seriously at how to regulate AI. A key emerging risk for businesses is how that regulation could affect their use of AI and what, if anything, they can do to prepare.
So far, broadly speaking, two different approaches have emerged:
- Build on existing regulatory frameworks: for example, the current UK Government is proposing to rely on existing regulators and regulatory structures, rather than establishing broadly applicable AI-specific regulations or a dedicated AI regulator. This approach is designed to encourage investment in AI in the UK: it aims to reduce the regulatory burden on businesses and to follow a pro-innovation path – although the Government acknowledges that, for the framework to be successful, there will need to be regulatory co-ordination, as well as central support from Government. Of course, with a general election around the corner, that approach may shift; the Labour Party appears to have a greater appetite for legislating, on a targeted basis at least.
- Create a new regulatory regime for AI: the EU's approach (as set out in the EU's AI Act) is far more prescriptive and sets out a specific regulatory regime for the use of AI. This will impose a higher regulatory burden, particularly for AI systems designated as "high risk" under the Regulation (e.g. certain employment-related and credit-scoring use cases), which may be relevant to some business process outsourcings.
Episode 1 of our AI Insights podcast series explains these approaches in more detail and looks at what businesses can realistically do now to prepare for increased regulation in future, despite the current uncertainty.
8. Flexible commercial models and assessing value
AI can deliver efficiencies partly because it replaces people. Charging structures based on staff costs (which are fairly common in business process outsourcings) are likely to be replaced by alternative charging metrics - licence fees, volume-based transaction charges, gain-sharing mechanisms - all of which will need careful definition in the contract. These charging mechanisms are already familiar territory to many, but as AI disrupts the connection between the service provider's costs and the charges, there is likely to be increased focus on provisions in the contract that are intended to ensure that the customer continues to receive value – financial reporting and benchmarking provisions for example. In relation to benchmarking, with technology changing so quickly, careful thought should be given to how best to achieve a benchmarking sample that is sufficiently comparable to be fair to the service provider, while not being so restrictive that it prevents the benchmarking from occurring in practice. See our briefing on benchmarking here.
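By way of illustration only (the figures and the 50/50 split below are hypothetical, and not drawn from any particular contract or from this briefing), a simple gain-sharing mechanism might be expressed as: gain-share payment = s x (baseline cost - actual cost). On that basis, with a baseline cost of £1,000,000, an actual cost of £800,000 once the AI-enabled service is live, and s = 0.5, £100,000 of the £200,000 saving would be shared with the service provider. Even this simple example shows why the baseline, the measurement of actual cost and the treatment of savings attributable to the AI (as opposed to other factors) all need careful definition in the contract.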
9. Addressing these risks now
While AI technology continues to develop at pace, and the extent of the risks it poses is not yet fully known, there are familiar contractual mechanisms in contracts for services which the parties can employ and develop now to guard against known risks, and which they can use to build in flexibility to cater for risks that are not yet fully understood. Businesses should aim to get on the front foot here.
The Technology & Commercial Transactions team at Travers Smith has considerable expertise and experience in advising businesses on the risks of deploying artificial intelligence and drafting, negotiating and advising on services and outsourcing contracts. Please feel free to get in touch.
Get in touch
Louisa Chambers
Head of Technology & Commercial Transactions
+44 20 7295 3344