Your website has a dirty little secret: it sucks at training AI about your brand
We have all experienced it: sometimes Generative AI loses the plot and spits out dodgy content that is entertainingly incorrect, untruthful, or just plain weird. This is known as AI hallucination, and it can be highly problematic for your brand. AI is starting to change the way people buy products and services, and as marketers we need to work hard to minimise the inherent brand risks that AI presents. First, let’s look at some common misconceptions about Generative AI, drawing on Australian research we conducted in April this year, to help set some context.
In short, AI is starting to disrupt the decision-making and buying behaviours of mainstream Australians, and it works in their favour across a myriad of emerging use cases: comparing products, summarising product reviews, comparing prices and asking questions of brands directly by getting Generative AI to interrogate brand websites. And here is why AI hallucination is problematic for brands: if your digital and content footprint is not AI-friendly, you are inadvertently training AI to get it wrong about your brand, your product features and your value for money.
So, why does AI hallucinate? There are two core reasons:
1. Compression sometimes causes confusion
This one is technical, but in essence: when knowledge learned from roughly 40 GB of text is compressed into a model that fits in about 3 GB of weights (around 13 times compression), some of that content inevitably gets jumbled, and the AI gets confused and trippy.
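As a purely illustrative bit of arithmetic (the 40 GB and 3 GB figures above are ballpark numbers, not measurements of any specific model):

```python
# Illustrative only: the sizes are the ballpark figures quoted above,
# not measurements of any particular model.
training_text_gb = 40   # raw text ingested during training
model_weights_gb = 3    # approximate size of the trained model's weights

ratio = training_text_gb / model_weights_gb
print(f"Compression ratio: ~{ratio:.0f}x")  # prints: Compression ratio: ~13x

# Because this compression is lossy, the model stores patterns and
# statistics rather than the original text, so some facts get blurred
# or blended together - one source of hallucination.
```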
2. Low-quality training data about your brand
This is the one you have more control over. Hallucination also results from limitations and inherent biases in the LLM’s training data. Internet content trains Generative AI to answer questions, so how clean and accessible is your brand’s training data?
Firstly, on brands’ own websites: Generative AI cannot reliably extract training data from video content, PDFs or poorly structured HTML. If the content is not AI-optimised, brands and businesses lose control of the narrative. And their sales funnel.
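One widely supported way to make product facts machine-readable is structured data, such as schema.org Product markup. Here is a minimal sketch that generates that markup; the product name, price and description are placeholders, not real data:

```python
import json

# Hypothetical product details - placeholders, not real data.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "description": "A plain-text product description that crawlers can parse.",
    "offers": {
        "@type": "Offer",
        "price": "49.95",
        "priceCurrency": "AUD",
    },
}

# Embed this inside a <script type="application/ld+json"> tag on the
# product page so crawlers get clean, structured facts instead of
# having to guess from video or PDF content.
print(json.dumps(product, indent=2))
```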
On non-owned assets, if Generative AI cannot be trained on your content, it will find training data on a myriad of other (sometimes dodgy) websites, and suddenly there is an inaccurate and potentially dangerous set of ‘facts’ circulating about your brand, products and services.
So where can brands start to minimise the threat of AI hallucination? A good first step is to prompt a Generative AI chatbot with questions about your website in isolation from the rest of the internet: take typical questions sourced from Google search queries and your call centre, and see what answers the AI gives back. A sketch of this kind of audit follows.
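Here is a minimal sketch of such an audit, assuming the openai, requests and beautifulsoup4 Python packages; the URL, model name and questions are placeholders you would swap for your own:

```python
import requests
from bs4 import BeautifulSoup
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

# Placeholders - swap in your own site and real customer questions.
PAGE_URL = "https://www.example.com/products"
QUESTIONS = [
    "What does this product cost?",
    "What warranty is offered?",
]

# Pull the visible text from one page of your site.
html = requests.get(PAGE_URL, timeout=30).text
page_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

client = OpenAI()
for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using ONLY the website text provided. "
                    "If the answer is not in the text, say so."
                ),
            },
            {
                "role": "user",
                # Truncate the page text to keep the prompt a manageable size.
                "content": f"Website text:\n{page_text[:8000]}\n\nQuestion: {question}",
            },
        ],
    )
    print(question, "->", response.choices[0].message.content)
```

Comparing these grounded answers against what a chatbot says when it draws on the open internet shows you exactly where your site’s content is failing to tell your story.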
Chances are there will be some hallucination (and chronic inaccuracy about your product story) in those answers, and you will quickly discover why this threat needs to be taken seriously.
From there you need to identify, understand, and monitor how customers and prospects are using AI to make their purchase decisions in new ways. If you understand the behaviours, you can start to figure out solutions.
At this moment, your brand can choose to be on the front foot at this significant inflection point in marketing, or it can do nothing and fall victim to the consequences of AI hallucination.
We have developed a related product called Accelerated AI Answers (AAA). It is a simple add-on to your website that gives your website visitors fast answers to questions about your products and services, and it will support you in making your website content match fit for AI.