AI is moving faster than most leadership teams can keep up with.
Not because the technology is complex. But because the cultural decisions behind it are unclear.
We’re seeing it every week:
The question isn’t “Should we use AI?” It’s “How do we use it without eroding the very culture we’ve worked hard to build?”
This is where most organisations get stuck.
Patagonia is one of the few organisations that didn’t rush.
They paused. They questioned. They challenged whether AI actually belonged in their business.
Not because they are anti-technology. But because they are pro-purpose.
Their mission, “to save our home planet,” creates a very different filter for decision-making. And that’s where this gets interesting for leaders.
When Patagonia explored AI, four tensions showed up. They’re the same ones we see in fast-growing companies.
The first tension is efficiency versus meaning.
AI can optimise for efficiency.
But it doesn’t always optimise for meaning.
If your culture isn’t clearly defined, AI starts making decisions that quietly pull you away from your purpose.
The second is transparency. AI algorithms, especially black-box models, can undermine transparency in decision-making, raising concerns about fairness and accountability.
For organisations like Patagonia that have built their brand on trust and authenticity, this creates a gap between how decisions are made and how they are experienced.
The third is data privacy. AI needs data. Lots of it.
However, Patagonia prides itself on respecting customer privacy and avoiding exploitative practices. Introducing AI tools risked clashing with its ethical stance on minimal data collection.
The fourth is sustainability. AI isn’t neutral. It consumes energy. It shapes behaviour.
For Patagonia, this wasn’t a technical issue. It was a values issue.
This is where most organisations can learn something practical. Patagonia didn’t simply say yes or no to AI.
They got clear on how decisions would be made.
Before any tool was introduced, it had to align with purpose, values, sustainability and transparency.
AI was only used where it strengthened their values.
Not everywhere. Just where it mattered.
Employees, customers and partners were part of the conversation.
Not after the fact. During the decision-making.
They didn’t just apply ethical AI internally. They advocated for it externally. That’s what alignment looks like.
AI doesn’t create culture issues. It amplifies what’s already there.
If your culture is clear, AI accelerates you.
If your culture is unclear, AI exposes it.
This is the part many teams miss.
Before you roll out another AI tool, pause here.
If you’re unsure where AI fits in your organisation, don’t start with tools.
Start with clarity.
If this is something you’re navigating right now, it’s worth getting clear early.
The organisations doing this well are having these conversations sooner than others.