Five ways you're quietly damaging your brand with AI
(I bet you're doing at least one of these)
We’ve been told again and again that AI will make everything faster, easier and cheaper. But not enough people are talking about how it can also quietly erode your brand, damage your reputation and break your relationships - sometimes without you even noticing.
Because while most AI horror stories focus on grand failures - racist recruitment algorithms, killer robots, massive data leaks - for most businesses the damage is quieter, slower and more subtle. It creeps in (or has already crept in) through poor implementation decisions, a lack of management curiosity, and tools we were told would save us time.
Here are five ways AI might already be damaging your business.
1. You’re recording calls without consent
It starts with good(ish) intentions. You want to remember what was said in meetings. You want to save your team time on note-taking. (And let’s be real, CEO: at the back of your mind you’re thinking, if I record all these calls I can use them to train agents in the future so I don’t need these employees anymore.) So you roll out AI note-takers - Otter, Fireflies, Fathom, whatever your flavour - across internal and external calls.
But did you put a proper consent process in place?
Because if you didn’t, you might be recording people illegally. And even if it’s technically legal (I’m not convinced it is - in the UK you would need to prove proportionality to record every call by default, and I think you would struggle in most employment contexts), it might still be a breach of trust: not only between the business and its customers, but with employees too. When your employees clock that this is you processing their personal data (and that you can use it to train models in the future), you’re going to have some interesting questions to answer. And who doesn’t love answering questions at an employment tribunal?
This is widespread. Every time I join a call and an AI note-taker joins too, I insist that the note-taker is removed before I turn on my camera and before the meeting starts. I’m amazed at how frequently the person whose note-taker has joined has no idea how to remove it (or tells me it’s not possible - it is). That means these employees have not been informed and have not specifically consented to being recorded. Which means their employers are at risk - of legal complaints, regulatory breaches, and reputational fallout. This is exactly the sort of “little issue” that grows teeth in a dispute or audit.
If you’ve been so seduced by the power of AI that you have forgotten to embed respect, transparency and consent into your tech stack, then it might be time to start working on your crisis management strategy.
2. You’re pretending your AI bot is a human
Most of your customers aren’t idiots.
So when they chat to a bot that claims to be “Sophie from Support” but can’t answer basic questions or pauses for awkward lengths of time, they know it’s not Sophie. They just think you’re insulting their intelligence and wasting their time.
And even if Sophie can pass the Turing test, is it ethical to trick your customers into thinking she’s real? (The answer is no.) And if anyone on your board was involved in, or aware of, the decision to deceive customers, they may well have breached their fiduciary duty.
There’s nothing wrong with using AI to triage support requests or answer FAQs. What’s wrong is pretending it’s a person. That damages trust - and once that’s lost, it’s very hard to win back.
This kind of deception creates what I call AI dissonance (well I do now, after ChatGPT suggested it): that moment when your customer realises you're not being straight with them. And in a world where trust is already fragile, that moment matters.
If you want to use bots, go ahead - but be honest about it. Transparency is a brand asset.
3. Your staff are using free tools without guardrails
This one is happening in thousands of companies right now.
An employee uses ChatGPT to draft a report, brainstorm customer messaging, or summarise meeting notes. Seems harmless. But if they’re using the free version, their inputs may be logged, stored - and used to train the model.
That means they could be feeding in:
confidential client data
customer details
contract clauses
unreleased product info
...and handing it all over to a third party, without realising.
The reputational and legal risk here is massive. You wouldn’t let your team publish private data on a public forum. But in effect, that’s what’s happening - quietly, invisibly, and probably with the best of intentions.
If you’re not offering access to secure, company-approved AI tools (and training on how to use them), you’re leaving the back door wide open.
4. You’re suddenly publishing way more content
Generative AI makes it absurdly easy to produce content. You can ask ChatGPT for 100 blog posts, 1,000 tweets, a year of LinkedIn posts - and it will deliver. But if you haven’t taken the time to think about what you want to say, all you’re doing is flooding the internet with more of what’s already out there.
And you’re diluting your brand voice in the process.
All but the dimmest consumers can spot this. So if you want to reach thinking people, this strategy isn’t going to work. And I think you know that already, because it’s not working, is it?
If you can’t add to the conversation, you don’t deserve to be in the conversation. So step back, and give us all a break from the noise pollution.
5. You’re using one of those personalised LinkedIn outreach tools…
…and you’ve just switched it on without any fine-tuning. AI tools that promise “hyper-personalised” outreach are sh*te. They scrape LinkedIn, grab a job title, spot an irrelevant keyword and churn out cold InMails that are all some version of: “Hi [First Name], I love what you’re doing as [Sometimes Incorrect Role] at [Company]…”
Obviously, that’s not personal. That’s template theatre. It’s annoying. It’s transparent. And it’s not going to work.
And just like that, AI has made your sales team less human, less effective, and more annoying.
Lazy AI is risky AI
None of these examples are about AI going rogue. They’re about businesses using AI without thinking hard enough. And by businesses, I mean leaders.
Because AI isn’t plug-and-play. It’s not a shortcut to trust or strategy or empathy.
If you want AI to help you grow your business - not quietly sabotage it - you have to make sure the humans are still doing the thinking.