DPD and the perils of doing chatbots badly
26 Jan 2024
6 min read
If you missed it, last week the delivery company DPD landed in some hot water after a post on X (formerly Twitter) from musician Ashley Beauchamp about its chatbot went viral. The story was picked up by multiple publications, including the BBC. Here’s what happened.
Ashley was trying to track a parcel, but the bot was unable to help. It also failed to connect him to human customer service to resolve the query.
So Ashley decided to have some fun with the bot. In the course of a few more messages the DPD bot:
Was asked to tell a joke
Was asked to write a poem
Responded with "fuck yeah" when prompted
Swore at Ashley
Commented on how useless DPD was, and told Ashley not to call them
Called itself a useless chatbot and described DPD as the worst delivery firm in the world, with terrible service

(Source: Ashley Beauchamp, X)
This was clearly not a good look for DPD! The firm explained that a system update the previous day had caused the AI element of its customer service chat to behave unexpectedly – swearing and making inappropriate comments – and that the element was disabled as soon as the error was spotted.
Either way, it is not a good look for AI chatbots that use large language models. As we know a thing or two about this area, we thought we’d weigh in.
How did this happen in the first place? And what can firms do to stop it happening to them when embracing AI? We have a few ideas and tips that may help.

(Source: Ashley Beauchamp, X)
The benefits of AI-powered chatbots
It is clear why plenty of companies across a range of industries are using AI-powered chatbots. Provided the chatbots are able to perform the same tasks and answer the same questions as humans, the main advantages are:
Speed – customers or users get answers to their questions much faster.
Availability – AI never sleeps, so users can get answers around the clock, and not just during office hours.
Accuracy – computers shouldn’t get things wrong in the way that humans sometimes do.
Consistency – routine queries get the same quick, reliable response every time.
You may have noticed the massive caveat above: provided chatbots are able to perform the same tasks and answer the same questions as humans.
The truth is that there is a divide between humans and AI in what they’re both good at. In customer service chat, artificial intelligence can provide quick, factual responses, but human operators are still essential for sensitivity, creativity, and nuance. The gap between what an AI can do and what a human can do is narrowing, but it still exists.
That’s why we would always advocate that AI works alongside humans. Which brings us to our first problem.
AI should always be able to pass to a human
Well-designed chatbots should be able to answer at least some basic questions – the FAQs, if you will – that come up time and again. When they can’t, the default should be to pass the issue to a human to solve.
This DPD bot failed at that first hurdle. It was unable to help, and then it could not connect Ashley to a human agent. This is CX 101 stuff: if you are going to have a customer service channel, it ought to be able to actually serve and help customers – and when the bot cannot, a human should pick up where it left off.
One retailer we spoke to gave us an example of sensitive conversations that they’d rather a human always answered: specifically, when a customer mentioned a death or illness that had affected them. In such cases, the bot should hand over to a human agent immediately. That is a situation you’d always want the sensitivity of a human to handle.
There should be limits on the questions a chatbot can answer
The vast capabilities of generative AI and Large Language Models mean that people are reluctant to put limitations on what they can do. But without limits, the AI is prone to go off in very different directions.
Take the swearing. If you really want your bot to be able to swear at customers, then you can allow it – but why would you give it that capability? The AI can instead be trained to ignore certain questions or requests, or to refuse to answer them. Here’s what our generative AI does when you ask it to swear:
Here, we see it refusing to action the request and, recognising that this is beyond its remit, passing to an agent. That’s exactly what should happen. The AI answers what it is trained to answer, and doesn’t engage with the rest.
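One simple way to impose those limits is to screen each message against a short allow-list of supported intents before any generative model is invoked, and refuse everything else. The intent patterns and refusal wording below are hypothetical placeholders:

```python
import re

# Hypothetical allow-list: only these intents may reach the language model.
ALLOWED_INTENTS = {
    "track_parcel": re.compile(r"\b(track|where is)\b.*\bparcel\b", re.I),
    "delivery_time": re.compile(r"\bwhen\b.*\b(deliver|arrive)\b", re.I),
}

REFUSAL = "Sorry, I can only help with delivery questions. Let me pass you to an agent."

def route(message: str) -> str:
    for intent, pattern in ALLOWED_INTENTS.items():
        if pattern.search(message):
            return intent  # safe to hand to the model with this intent's prompt
    return REFUSAL  # jokes, poems, and requests to swear all land here
```

Anything the patterns don’t recognise – jokes, poems, requests to swear – never reaches the language model at all, which closes off most of the prompt games Ashley was playing.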
The bot’s writing abilities can be manipulated in the same way. Ashley, for example, prompted the DPD bot to write a poem – a demonstration of how easily an unconstrained chatbot can be led into producing creative content in whatever form a user asks for, sometimes including inappropriate language.
Another factor here is cost. A lot of firms build their bots on top of a licensed Large Language Model, such as one of OpenAI’s models, which means that every time they use the generative capabilities through the API, they are paying a fee to do so.
So in essence, DPD are likely paying a bot to play stupid games with customers. While this might be small in the grand scheme of things, it does seem like a bad use of money.
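A back-of-the-envelope calculation shows how those fees add up. The per-token prices below are illustrative placeholders, not any provider’s actual rates:

```python
# Illustrative per-token prices -- placeholders, not any provider's real rates.
PRICE_PER_1K_INPUT = 0.01   # USD per 1,000 input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.03  # USD per 1,000 output tokens (hypothetical)

def chat_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one chat session at the hypothetical rates above."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# A long joke-and-poem session: say 5,000 tokens in and 2,000 tokens out --
# roughly $0.11 at these rates. Trivial on its own, but across 10,000 such
# chats a day it becomes a four-figure daily bill for zero customer value.
daily_cost = chat_cost(5000, 2000) * 10_000
```

The exact numbers don’t matter; the point is that every off-topic exchange is metered, so scope limits are a cost control as well as a brand-safety control.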
A chatbot should not be allowed to insult the company!
I’m guessing that if a human agent called the company they represented “the worst”, they would likely not have a job the next day. But this AI bot was clearly not trained to prevent that from happening.
If an AI is trained on vast amounts of information from a wide range of sources, then when you ask it a question it may pull from one particular source, or from a different, completely contradictory one. This is what produces the “hallucinations” that generative AI has become known for.
If instead it is trained on a narrow set of sources (say, a company’s internal documentation), and told to exclude everything else, it is far more likely to give the right answer to questions in that domain.
Once again, I’m guessing that nowhere in DPD’s internal documents does it say that DPD is the worst! So the AI has pulled that from somewhere else – or has been “trained” through its conversation with Ashley and is essentially mirroring what it has been told by a disgruntled customer.
How to avoid these mistakes yourself
If you are looking to build AI chatbots like this, but want to avoid the mistakes, here are some tips:
Ensure you set limits on what your AI can answer so it doesn’t go off on tangents answering things that are not in your domain.
Use your own proprietary and internal data to train your AI, don’t rely on external sources. For example, you could use actual agent transcripts to train your AI.
Know when to use AI, and when to pass to humans. You may need to test and optimise this, but even if AI cuts out 5–10% of the tickets coming to your agents, that can have a big impact – and the figure can easily grow as you get better.
If you are looking to buy: look for a provider that specialises in your industry. Many of the same use cases will come up across your organisation and others like it, and a specialist’s tool can be adapted to your business. Why use a tool that wasn’t built for your purposes?
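The second tip – training on your own data, such as agent transcripts – usually means converting those transcripts into whatever example format your model provider expects. The transcript structure and the chat-style JSONL output below are illustrative only; match the format your provider actually documents:

```python
import json

# Hypothetical historical chats pulled from a customer-service system.
transcripts = [
    {"customer": "Where is my parcel?",
     "agent": "You can track it using the reference in your dispatch email."},
    {"customer": "Can I change my delivery address?",
     "agent": "Yes - use the 'manage delivery' link before 10pm the day before."},
]

def to_training_example(chat: dict) -> str:
    """One JSONL line per customer/agent exchange, chat-completion style."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": "You are a delivery-firm support agent."},
            {"role": "user", "content": chat["customer"]},
            {"role": "assistant", "content": chat["agent"]},
        ]
    })

jsonl = "\n".join(to_training_example(c) for c in transcripts)
```

Training on answers your best agents actually gave keeps the bot’s voice, policies, and facts grounded in your business rather than in the open internet.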
If you want to find out how to create a good generative-AI-powered chatbot, watch our webinar with air up here.