The rise and risks of AI contract drafting and analysis


Authored by ARAG and written by their partners at Ashfords LLP

There has recently been an increase in the number of startups bringing artificial intelligence (AI) contract drafting and analysis tools to market. These platforms use large language models (LLMs), a specialised form of generative AI, to produce advice notes, letters, contracts and other legal documents. The models powering these applications are trained on vast datasets to identify patterns in text, interpret the meaning of language and generate coherent content that addresses the user's prompts.

The underlying technology is transformational and in time may have a profound impact on the legal profession. It's no secret that traditional methods of contract drafting are time-consuming and expensive, because they rely on the expertise of lawyers who understand the client's objectives and the commercial context in which the client operates, and who can produce robust agreements accordingly. Some commentators believe AI could significantly reduce the time and expense of this exercise whilst enhancing the quality of the output.

Reducing human error and contractual inconsistencies, and automating more routine tasks such as the insertion of appropriate boilerplate clauses, referencing and formatting, seem like low-hanging fruit. However, it is claimed that some applications can even reduce contractual risk by suggesting drafting improvements based on an analysis of their expansive repositories of documents.

Given the increasing excitement about this technology, we decided to test some of the free tools available by asking them to produce a commercial settlement agreement. The quality of the results varied dramatically. However, we were impressed by the level of relevant detail that some of the draft documents included after we refined our prompts. In a matter of seconds, these applications were able to produce basic agreements that included many of the clauses you would expect to see in a template document, and we expect the strength and relevance of the output to improve over time.

It was, however, noticeable (but perhaps not unexpected) that the applications performed far better at including relevant boilerplate and standard clauses than at drafting the substantive sections of the agreements they produced. The applications we looked at were unable to suggest sensible payment mechanisms or include coherent 'release' provisions, the latter being particularly vital as these clauses set out the claim(s) the parties have agreed to waive in entering the agreement.

A much more comprehensive examination would be required to test the merits of these platforms fairly. However, whilst in time they may make the contract drafting process more efficient and enable the user to focus their efforts on the most impactful sections of an agreement, there are risks in relying on LLMs at present. Law firms will rightly require significant improvements in LLMs before using these tools to produce template documents, rather than adapting trusted in-house precedents or amending competently drafted prior examples, both of which far outstrip what AI can produce today.

For the same reasons, non-lawyers should be cautious when using agreements produced by LLMs. Despite the encouraging signs, the risks are manifest. Perhaps the most obvious, overarching risk is overreliance. Our (albeit brief) assessment of the free tools available demonstrated that the quality of the output was heavily dependent on the user's prompts - indeed, 'prompt engineering', the art of instructing LLMs effectively, has become a widely discussed topic.

All the solutions we looked at required detailed, legally insightful inputs to generate anything close to a workable starting point for further tailoring, and even then the drafts contained basic errors and omitted notable clauses. Ultimately, you don't know what you don't know, so using the output of these solutions to contract with other parties may be a significant (and costly) mistake if done without the supervision of legally trained subject matter experts.

Provenance and traceability are also a particular concern. Whilst this may change over time, for those using AI tools to generate draft contracts, user inputs effectively go into a black box which spits out some suggested content. It is impossible to trace the origin of a particular clause suggestion, which limits any assessment of its suitability. An AI's access to an extensive library of existing legal documents will only be beneficial if some trust can be placed in the quality and relevance of those documents.

In this regard, AI 'hallucinations', where LLMs generate false information, could be another major drawback for the use of this technology. This issue recently attracted widespread media coverage when two lawyers used ChatGPT, an AI chatbot developed by OpenAI, to assist them with a legal brief, and the tool fabricated several case authorities. Setting aside for a moment the lawyers' failure to verify their source material, this does raise questions about the suitability of generative AI for drafting legal documents. What is to say that AI won't suggest a clause that is unenforceable or that fails to take account of recent precedent?

For the same reasons, the use of AI for contract analysis also carries inherent risks.

The issues arise in particular for LLMs because they are designed to create logical, plausible responses based upon the inputs of their users. When dealing with complex subject matter, hallucinations may be difficult to spot, even for experienced professionals. Further, the full impact of these inaccuracies may not become apparent unless and until a dispute arises. Some commentators believe the occurrence of hallucinations will decrease dramatically over time; others claim it is an inherent quirk of LLMs which cannot be ironed out entirely. Time may tell. For the time being, the words of OpenAI's CEO Sam Altman may be instructive:

"I probably trust the answers that come out of ChatGPT the least of anybody on Earth."

Confidentiality, data protection and liability for AI drafting errors are other areas which businesses may wish to consider before adopting these platforms. Nonetheless, organisations will need to keep a close eye on this rapidly advancing area of technology, as it may ultimately revolutionise the way in which legal documents are produced.

For more information, please contact Hugh Houlston.


About ARAG

ARAG UK is a specialist legal insurance provider and part of the internationally recognised ARAG Group, which serves 14 countries worldwide.

Providing several emergency assistance insurance products and an innovative range of Before-the-Event and After-the-Event legal insurance products and services, ARAG UK prides itself on its client-focused approach. This has been recognised by the industry in the Insurance 360 Legal Expenses Insurers Study, in which ARAG UK was voted 'best legal insurance provider'.
