Can you trust an AI chatbot's personalised financial advice?

Introduction

Artificial intelligence now sits in the background of everyday financial life, particularly in the form of chatbots such as ChatGPT, Gemini and Copilot. They are easy to access and, in many cases, have replaced search engines as people’s first port of call for research.

They can summarise investment concepts, explain the tax system, and draft market commentary that reads as though it belongs in a newspaper column. For many people, the first “conversation” they have about a financial question is no longer with a person but with a chatbot.

The attraction is obvious. It is fast, calm, always available and often free. It also speaks with a kind of unbroken certainty that feels reassuring, particularly when the subject is unfamiliar.

Yet it is this unwavering confidence that is precisely the point of contention. The practical question is not whether AI can produce an answer, but whether the answer deserves your trust when your decisions have consequences and when the details matter.


Is AI the digital equivalent of the man in the pub?

We have all encountered situations where confidence outweighs knowledge. Whether it is the man in the pub telling you pensions are a scam and that the markets will crash next week, or your drunk aunt giving everyone tax advice over Sunday lunch, it is clear that a little knowledge can be a dangerous thing and can breed overconfidence (see the Dunning-Kruger effect).

Many people know someone like this: articulate, definite, and willing to opine on anything from property markets to geopolitics. The delivery is typically persuasive, but the details can be patchy.

AI often feels like that. Except it is faster, and it never runs out of opinions.

The best way to calibrate a healthy scepticism about AI is to talk to it about a subject you know inside out and watch for the obvious flaws in the chatbot’s arguments.

One of our clients, who wants to expand his plant nursery, asked an AI tool which species of hedging he should grow to fit his business requirements. The chatbot reeled off a sensible list and created a business plan that could easily be mistaken for expertise.

However, as an expert in his field, he quickly realised that the chatbot had completely missed a crucial factor: the very specific countryside stewardship scheme requirements that determine eligibility for landowner grants and therefore drive demand for his products.

In other words, the model gave a plausible answer to the wrong question. It answered his question but did not address the bigger picture.

This is exactly how AI can mislead in personal finance.


What are AI chatbots actually doing when they “answer”?

AI chatbots are an interface to large language models (LLMs), and it is important to realise that an LLM is not a database that retrieves verified facts. It is a system trained to generate plausible text from patterns in the material it has been exposed to. It is good at coherence and tone, but less reliable at distinguishing what is generally true from what is specifically true, or what used to be true from what remains true under current rules.

That gap is easy to miss because the output is written in fluent, human-like prose, and it’s this style that mimics competence. The LLM can sound like a subject expert, even when it is operating at the level of a generalist.

In finance, generalist answers can be helpful, but they can also be damaging when they are outdated, miss crucial details or misunderstand how different rules interact.

The Bank of England’s work on AI adoption in financial services repeatedly returns to this point: the benefits of AI tend to be operational, but the risks can scale if outputs are relied on without controls, challenge, and accountability.  


Why does it feel so convincing?

There is a behavioural hazard here that has nothing to do with finance and everything to do with psychology. People tend to place undue weight on outputs that are delivered confidently, particularly when those outputs are presented as the result of a sophisticated process. This is often discussed as “automation bias”, the tendency to over-rely on automated outputs even when independent judgment would call for scrutiny.  

Chatbots are, in effect, confidence machines. They do not pause. They rarely express uncertainty unless prompted. They do not naturally flag where information is incomplete. They produce a clean, finished answer, and they do so in a voice that resembles a competent person.


Does AI hold onto what was agreed earlier in a conversation?

A recurring problem with consumer chatbots is conversational recency bias. As the exchange evolves, LLMs tend to give greater weight to the most recent prompts and can “forget” earlier constraints or agreements, unless they are restated. This is not forgetfulness in a human sense. It is an artefact of how such systems prioritise context.

That has direct relevance to financial decision-making, because coherent planning depends on consistent constraints. People make decisions under a set of assumptions and priorities: the tolerance for drawdowns, the importance of other family members, the need for liquidity, the acceptable trade-offs between control and tax efficiency, and the emotional reaction to uncertainty. If those constraints are lost, the advice becomes a different plan without anyone noticing.

Human financial advisers behave differently. They remember which topics carry the most weight. They remember the moment a client became uneasy, the point at which risk stopped being an abstract concept and became a visceral reaction to a market fall, or the explicit agreement that a particular goal matters more than performance. A professional financial adviser remembers these cues because they shape the relationship and the advice, and determine what questions should be asked next.

AI chatbots can suffer from a subtle drift in direction. In a long exchange, something agreed explicitly a while ago can be diluted by a later question phrased in a different way. If you are not vigilant as a user, the output moves on while you assume the conversation is still anchored in what has already been said.


What does AI miss when it misses the point?

General financial principles are often straightforward: diversify, think long-term, avoid needless friction, keep risk aligned with your tolerance, and so on.

The real work of a financial adviser is dealing with people’s actual lives, most of which are far from simple. Many people have multiple, conflicting objectives, changing incomes, family obligations, uncertain health and tax complexity, all of which sit against a backdrop of rules and allowances that evolve over time.

In practice, the pivotal questions are often not the ones clients ask first. They are uncovered by experienced financial advisers by probing, reframing and getting to the bottom of what is not being said.


Can AI be trusted with numbers and precision?

If you have ever asked a chatbot to do something as mundane as a character or word count and marvelled as it stated the wrong answer with complete confidence, you will appreciate that it cannot always be trusted with numbers, even the basics.

This is not only a computational failure but also a failure of trust, as LLMs do not naturally signal that they might be wrong. Without checking all of their work all the time, how do you know when they are right or wrong?

This matters because financial decisions are built from ongoing sequences of calculations and decisions. A small error in a contribution figure, an incorrect assumption about timing, a misunderstanding of a rule or a sloppy interpretation of what is “allowed” under current rules can compound over time and even cause people to fall foul of the rules.

Nonetheless, the correct response is not complete paranoia but a healthy dose of scepticism. AI can help draft, summarise and organise, but it should not be the last step before taking action.


Is information the same as advice?

In the UK’s regulatory environment, advice is not merely an explanation. It is a personal recommendation tailored to an individual’s circumstances. It sits within a framework of responsibility, documentation, and accountability.

The FCA’s work on AI makes clear that current rules continue to apply and that governance and accountability remain central. The Bank of England and the FCA’s joint survey work similarly emphasises the importance of control frameworks and oversight.

The point is, AI can generate “an answer”, but it does not carry responsibility for whether it is suitable. A regulated adviser does, and must be able to evidence why.

The structural difference is that a professional financial adviser is incentivised to uncover risks and constraints that a chatbot will simply not discover by itself.


What can a human adviser do that AI cannot?

The simplest answer is that a human adviser can interpret people, not just their words.

Financial planning is riddled with moments where the question asked is not the real question. Someone asking about “tax efficiency” may actually be worried about control. Someone asking about “risk” may really be afraid of regret. Someone asking “how much can I withdraw?” could really be asking about sustainability under stress and the behavioural discipline to stick to a plan.

A competent financial adviser will probe for the crux of the issue. They will challenge inconsistent statements gently. They will notice the difference between a client who can tolerate volatility in theory and one who will change course at precisely the wrong moment. They will remember the reactions, not just the data points. They will ask the awkward questions that prevent a plan from being built on polite assumptions.

AI cannot read the room. It cannot detect the pause before a client answers. It cannot distinguish a confident statement from a defensive one. It cannot build a relationship in which the client’s behaviour becomes part of the evidence.

An AI chatbot does not lie in bed at night, thinking over your conversation and coming up with new solutions; it simply closes its book and is done.

That is why, even if AI improves dramatically, it cannot replace the human work that turns information into a coherent, liveable financial plan.


Where does AI genuinely help, if used properly?

AI chatbots can help clients get their bearings, particularly where the language of finance is new and complicated. Chatbots can also be used to generate questions for a financial adviser and to quickly summarise the information given, provided the output is checked against primary sources where accuracy matters.

Within firms themselves, AI and machine learning have more established use cases, particularly in operational areas such as fraud detection, triage, and document handling. The FCA’s research on machine learning in financial services highlights adoption across functions such as fraud and customer-facing applications.  The Bank of England and the FCA’s work reinforces that the challenge is governance, not the mere existence of the technology.  

The red line is obvious: assistance is different from delegation. The moment a tool begins to drive decisions without challenge, the risk profile changes substantially.


So, should we trust AI with financial decisions?

In the UK, the regulators have been explicit that technological change does not loosen responsibility. The Financial Conduct Authority’s work on AI stresses that the existing expectations still apply, including standards around clear communications and oversight.  

The Bank of England and FCA’s joint work on AI and machine learning in UK financial services also emphasises governance, controls, and risk management, rather than treating AI as a neutral layer.

These signals matter because they point to a sober truth: AI is an impressive tool; however, it is not automatically a trustworthy one.

AI can be a competent assistant, but it is not a responsible decision-maker. It can produce fluent answers that miss the bigger picture, it can drift from earlier agreements as the conversation evolves, and it can be wrong on basic precision tasks, all while sounding certain.

The proper way to treat AI, and chatbots specifically, is as a source of draft thinking, not final or definitive judgment. If you want a practical test, ask one about your own industry and watch what it confidently overlooks. That experience should quickly recalibrate your trust.

When it comes to personal finance, the big impacts are rarely felt after a single wrong answer, although this is still possible. They are more likely to come from following a chain of poor advice over time, under rules that evolve and conditions that change. As such, the value of a human financial adviser is less about access to information and more about keeping the conversation going, reacting to changes and asking the questions that keep the plan coherent.


What’s next?

If you have a healthy scepticism of the advice AI chatbots offer and want to speak to a human who gets to the bottom of what you are really asking, the next step is to speak to one of our Independent Financial Advisers.

Based in Tunbridge Wells, we advise clients across the UK. Your initial consultation is free, confidential, and comes with no obligation to proceed.

This article provides general information and should not be taken as personal financial advice. Investments can go down as well as up, and you may get back less than you invest. Tax rules can change, and benefits depend on individual circumstances.
