Disclaimer: This post is a personal reflection and does not constitute legal advice. Every situation is different — if you need legal advice, email me at rupen.shah@gpslegal.asia.
"I used ChatGPT" How many times have you heard this recently? A while back I was consulting for someone who needed "a quick once-over" on a contract. I asked the question about who drafted it, and the answer was they reached out to AI. The document looked polished. It read well. Then I did the deep dive.
A very brief, non-technical explanation of how LLMs work (and why it matters)
I'm not an AI expert; my understanding comes from high-level reading and a few basic courses, so what follows is a rudimentary picture, and I'm happy to be educated by smarter people in the room. But it is important to set the scene for what AI is and how it is typically used in a consumer context.
Most of the popular consumer AI tools are powered by large language models (LLMs). An LLM is trained on huge volumes of text and learns patterns in language. When you ask it to draft something, it generates words that are statistically likely to follow from your prompt. It does not "look up" the correct answer the way a lawyer or search engine would.
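For the technically curious, here is a toy sketch in Python. It is purely illustrative (the words and probabilities are made up, and real models operate at a vastly larger scale), but it captures the basic mechanic: the next word is sampled from learned probabilities, not retrieved from a source of truth.

```python
import random

# Toy illustration only: a real LLM scores a huge vocabulary of possible
# next tokens using billions of learned parameters. The made-up numbers
# below just show the principle: the next word is *sampled* from a
# probability distribution, not looked up from an authoritative source.
next_word_probabilities = {
    "indemnify": 0.45,   # most likely continuation after the prompt
    "reimburse": 0.35,
    "notify":    0.20,   # less likely, but still possible
}

def pick_next_word(probabilities):
    words = list(probabilities.keys())
    weights = list(probabilities.values())
    # random.choices samples according to the weights, so repeated
    # calls can return different words for the same prompt.
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The Supplier shall"
print(prompt, pick_next_word(next_word_probabilities), "...")
```

A plausible word comes out every time; whether the resulting sentence is legally correct is simply not part of the calculation.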
Because of this, these tools can also produce:
- Confident sounding statements that can be factually incorrect
- Clauses that look standard but aren't relevant
- Internally inconsistent definitions and obligations
- "Legal-sounding" drafting that can mean nothing
In essence, they produce plausible-sounding output, not necessarily correct output, and they carry no professional responsibility.
There are now professional AI tools being developed specifically for the legal field. These are designed to address some of the gaps in consumer tools—better legal training data, more rigorous outputs, integration with legal workflows. I have not tested these myself, so I cannot comment on how effective they actually are. But the market is moving in that direction and it is worth watching.
Why you cannot treat AI output as your lawyer (even if you think you are being careful)
Even if you use the same AI tool and ask it the exact same question twice, you will often get different answers. The model does not retrieve information the way a database does. It generates a new response each time, based on probabilities. Same prompt, different output.
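To make that concrete, here is the same kind of toy sketch as before (again purely illustrative, with invented clause wording): ask it the same question twice and the two "drafts" can diverge.

```python
import random

# Illustrative only: ask the same "toy model" the same question twice.
# Because each answer is sampled, the two drafts will often differ.
clause_endings = {
    "shall indemnify the Client against all losses": 0.5,
    "shall use reasonable endeavours to assist the Client": 0.5,
}

def toy_model_reply(prompt, probabilities):
    endings = list(probabilities.keys())
    weights = list(probabilities.values())
    return prompt + " " + random.choices(endings, weights=weights, k=1)[0]

first_draft = toy_model_reply("The Supplier", clause_endings)
second_draft = toy_model_reply("The Supplier", clause_endings)

print(first_draft)
print(second_draft)
```

Roughly half the time the two printed drafts will not match, and the difference between "shall indemnify" and "reasonable endeavours" is not cosmetic.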
This matters for any business document, because consistency is key. If you ask AI to redraft a clause, or to check whether two sections align, you could get completely different wording each time. That creates inconsistency.
In addition, how you frame your question (your prompt) largely determines what you get back. A simple prompt produces good-sounding output, but probably with a lot of holes; a poorly specified prompt can miss critical detail. Is prompting an art, a science or something in between? I don't know, but it is complicated. Most users don't know what information to include or how to structure a request for precision, so even if the model is capable, they get an unoptimised result.
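By way of illustration only (the clause details below are invented for the example, not drafting guidance), compare how much the model is left to guess when the prompt is vague versus specific:

```python
# Purely illustrative: the same request framed two ways.
vague_prompt = "Draft me an NDA."

detailed_prompt = (
    "Draft a mutual NDA between a software vendor and a distributor, "
    "with a three-year confidentiality period, carve-outs for information "
    "that is already public or independently developed, and no licence "
    "granted over either party's intellectual property."
)
# The second prompt gives the model far more to work with, but even a
# carefully specified prompt does not make the output reliable.
```

Most people, understandably, type something much closer to the first prompt than the second.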
Add to this: different AI models are trained differently and behave differently. ChatGPT, Claude, Microsoft Copilot and Gemini will each produce different answers to the same question. Even the same underlying model can be fine-tuned differently depending on which version you are using or which platform is running it (ChatGPT and Microsoft Copilot, for example, can sit on the same underlying models and still behave differently). Users often do not realise this and assume "AI" is one consistent product. It is not.
Consumer AI output isn't reliable or consistent, and it is heavily dependent on how skilled you are at using it. For professional documents, you have to be very careful.
Further reading on prompt engineering.
Here is my attempt at using an AI tool (in this case Copilot) to generate an image. I used the following prompt: "Can you draw me an image of using AI to draft a document and it going horribly wrong."

What I found: the fundamental issues you do not spot in a quick skim
Going back to the document I reviewed, here are some examples of what I found:
1) Internal inconsistency (definitions, scope, cross-references)
The contract may define a term one way in one clause, use it differently elsewhere and then reference a clause number that does not exist or does not say what it is supposed to say.
2) Missing risk allocation
AI drafts often miss the commercial heart of the deal: who carries what risk, in what scenarios, with what caps, carve-outs and remedies.
Common gaps include:
- Limitation of liability provisions that are incomplete or commercially unrealistic (either missing entirely or providing no cap at all)
- Missing indemnities, or indemnities that do not match the risk (such as full, uncapped indemnification)
- No proper IP ownership and licensing position
- Vague service levels, acceptance criteria, milestones or delivery obligations
3) "Standard" clauses that are not fit for purpose
Boilerplate is not boilerplate if it is wrong for your transaction.
AI can insert confidentiality, data protection, governing law and dispute resolution clauses that look plausible but do not fit the specific scenario, the parties' negotiating leverage or basic enforceability requirements.
4) False confidence and "legal voice"
The contract looks polished. The tone is authoritative. It feels like it's the real deal.
The practical reality: reviewing can cost more than drafting properly
If the starting draft is structurally wrong, the reviewer has to:
- Re-take full instructions (because the document does not reflect the deal)
- Map the commercial terms properly
- Fix internal references and defined terms
- Revise/redraft fundamental terms
At that point, "reviewing" becomes "drafting", except you are not working from a fresh document: you are papering over the cracks.
Add to that, if the document has already gone out to the other party, your negotiating position becomes even harder, because clauses that are needed to protect you are already missing.
That is why the "AI first, professional later" approach can be a false economy.
Where AI genuinely adds value (when used properly)
AI is not useless. Far from it. When deployed with the right guardrails and oversight, it can be a useful tool.
Here are sensible use cases:
- Brainstorming, bouncing ideas around
- Turning bullet points into a structured first outline (as a starting template, not a final contract)
- Summarising long documents for high-level thinking
- Generating checklists
- Producing plain-English explanations for non-lawyers
- Helping spot inconsistencies (defined terms, clause references) as a second pair of eyes
A simple rule
Use AI to accelerate work you will still verify line-by-line, with professional oversight.
Do not use AI as a substitute for professional judgement, negotiation experience, or accountability.
Why you should instruct a professional from the beginning (even if you want to keep costs down)
If cost efficiency is the goal, there is a better model than "AI first, lawyer later".
A professional can:
1) Take proper instructions and translate them into enforceable terms
Contracts are documented decisions setting out the commercial intentions of the parties. A professional ensures the document reflects the actual deal, not a generic version of it.
2) Spot risk you are not thinking about yet
A professional can highlight risks that may not be on your radar yet. If you get an LLM to draft the document, it will not know about these risks either, because it only knows what you put in the prompt.
3) Draft with negotiation and enforceability in mind
A contract has to survive negotiation and protect your interests. Professionals draft (and negotiate) with an eye on what the other side will push back on and how your position will hold up.
4) Provide accountability
If something goes wrong, you have someone to hold responsible and someone who understands the problem deeply enough to help fix it.
The better approach (if you want efficiency and protection)
- Involve a professional early for the core structure and risk allocation
- If cost is a real constraint, discuss a scoped, phased approach with your adviser
- Avoid the false economy of "AI first, check later"
A short checklist if you do use AI at any stage
If AI is involved in producing a professional document:
[ ] Do not paste confidential deal data into consumer AI tools unless you have reviewed and accepted the confidentiality and data retention terms
[ ] Assume the output may be wrong even if it sounds right
[ ] Verify every defined term, cross-reference, date, number and condition before it goes to the other side
[ ] Treat citations, legal statements and "standard positions" as untrusted until independently verified
[ ] Ensure the draft reflects the real commercial deal, not generic wording
[ ] Get a professional review before signature, ideally before negotiation starts
Closing thought
AI is powerful and it is probably here to stay. Used with proper oversight, it can support efficiency.
But if you rely on it to produce professional documents without verification, the risk is not that the document looks amateur. The risk is that it looks professional while potentially failing to protect you.
The client I mentioned at the start learned this the hard way. The rework cost more than the original professional fee would have been.
If you have an AI-drafted contract you want reviewed, or you want to discuss a sensible workflow that balances cost and protection, reach out.
Also, I would genuinely like to hear both your AI horror stories and your success stories. What did a tool get confidently wrong? Comment below by subscribing, or contact me directly.
Disclaimer: This post is a personal reflection and does not constitute legal advice. Every situation is different — if you need legal advice, email me at rupen.shah@gpslegal.asia.