Why do good people end up in AI ethics crises?
Because you can't retrofit ethics.
While there's no shortage of bad people in the world, most people don't deliberately set out to do bad things. But if you're a leader, not intentionally doing bad things is only part of your job. You also have to make sure you don't unintentionally do bad things. And that's much harder because you have to be able to imagine all the potential consequences of your decisions.
That's why leaders earn the big bucks. Not for the strategy decks or the town halls. For the accountability. You carry the weight of decisions - including the ones you didn't realise you were making.
AI just multiplied that burden. Because now your decisions delegate to systems that make their own decisions. And you're accountable for those too.
You can’t blame the chatbot…
Replace customer service agents with a chatbot? That’s a smart, money-saving move… until the chatbot invents a refund policy and you have to honour it.
Enthusiastically slash your headcount to make way for AI? Enjoy the short-term wins, before customer satisfaction craters and you have to reverse course.
Liberate your team from having to take meeting notes? Until a customer decides to sue you for allowing an AI to eavesdrop on their calls.
Pass off your chatbot as a human? Be prepared to defend that decision on LinkedIn.
Equip your team with an AI tool to make them more efficient? Be ready to issue a refund when it hallucinates.
…and you can’t imagine all the ways AI can go wrong
Your AI tool hallucinates a statistic in a client proposal. Nobody catches it and it informs your client’s decision.
Your chatbot greets a customer by name and references their medical history after it was trained on personal data without the customer’s consent.
Your AI hiring tool screens out every candidate over 50 and you don’t notice for six months.
A team member pastes confidential contract terms into ChatGPT and now that data is sitting on someone else’s servers.
Your AI-generated marketing copy lifts a sentence from a copyrighted source and the rights holder notices before you do.
A report full of hallucinated facts gets published, promoted and shared.
None of these requires bad intentions. They happen in a thought vacuum.
The AI doesn’t have ethics. But you do
And when something goes wrong, nobody will ask what the AI was thinking. They will ask what you were thinking. And “I wasn’t” isn’t a good answer.
Ethics is about making decisions you can stand behind, even when the rules haven’t caught up yet. With AI, that’s most of the time.
That means asking in advance: who is affected by this decision? What happens if the output is wrong? Who owns the mistake? What data is being used, and does that use respect the people it came from? What are we optimising for - and is that actually what we should be optimising for?
These questions are not difficult to ask. They’re just easy to skip when you’re moving fast.
Hey Reader. If you know anyone who would find this useful, I’d be grateful for a share!
AI ethics for leaders
I’m running a live session on Wednesday 11 March at 12:30 - AI Ethics for Leaders.
We’ll cover how to recognise ethical risk before it becomes a crisis, and how to stay accountable without paralysing every decision.
If you’re making decisions about AI in your business - and you are, whether you realise it or not - this one is worth your time.
Register here or join The AI Edit membership (£20/month if you join by Easter) for access to this and all upcoming sessions.