2025: The year AI split in two
A noisy year. Here's what we saw from inside the work.
Something shifted in 2025. Not in the technology – that just kept on advancing. The shift was in how people talked about it.
At the start of the year, AI was still a curiosity. By the end, it had become a position. The public conversation polarised, and the nuance started to drain out. And underneath it all, a strange undercurrent: people waiting for the whole thing to collapse, almost willing it to. In other words, the vibe changed.
But inside organisations, a different story was unfolding. We saw many teams stop waiting for consensus and start learning anyway – for many different reasons. People arrived at our workshops uncertain, then left surprised – not by what AI could do, but by what they could do with it once they had a framework.
2025 wasn’t the year AI proved itself or failed. It was the year the conversation split in two: the one happening in public, and the one happening in the work. This piece is a reflection on what it felt like to work in this space while the world made up its mind.
What we paid attention to
DeepSeek and the early shockwave
The year began with DeepSeek dominating headlines. Its performance and price landed like a provocation: AI capability was no longer only the domain of a small group of Western labs. The competitive map entered a broader, more fluid phase. For us, it was the first signal that 2025 would be a year of recalibration – assumptions about who leads, how fast things move, and what ‘cutting edge’ even means were all up for grabs. We are well and truly in a race now, and the participants are not who many expected.
The money story
Every month brought news of record capital expenditure: data centres, GPU clusters, energy deals, cooling systems. The figures and valuations were, and remain, staggering. Chips went from a purely technical detail to being seen as geopolitical assets. AI became central to the very fabric of the economy.
And yet the market pattern wasn’t linear. An early surge in optimism, a sharp trough when Trump’s tariffs hit chip imports, then a reversal: one of the steepest rises in tech valuations any category has ever seen. This volatility wasn’t just financial – it shaped sentiment inside organisations. Most leaders we spoke to weren’t anxious about AI as a technology, but they were anxious about keeping up. Eisenhower’s line took on fresh meaning: “Plans are worthless, but planning is everything.”
Claude and ChatGPT: two rivals diverge
Both models grew this year, but in different directions – and with different cultures forming around them. Anthropic’s revenue is roughly 80% enterprise and API usage; that’s the world we work in when we use Claude Code to build systems, workflows and internal tools. OpenAI sits at the other end: around 75% consumer, built around ChatGPT as a one-on-one personal chatbot and assistant.
This divergence will likely sharpen in 2026 – in branding, positioning, and how organisations choose which ecosystem to build around. Our advice: don’t pick a side yet. Treat it like a mobile network. Go with whoever offers the best deal, stay ready to move, and keep experimenting with both. Loyalty is a luxury this market hasn’t yet earned.
The GEO/AEO moment
This was our most-read newsletter of the year, and the response told us something. Search has been stable for two decades. Entire industries have been built on that stability – agencies, content teams, measurement frameworks. When something that fundamental starts to wobble, people pay attention.
The conversation resonated because it voiced an anxiety we were hearing everywhere:
What happens if the value chain built around search begins to erode?
What does ‘organic discovery’ even look like in an AI-first world?
What happens to the people who’ve built careers around this?
The conversation around GEO hinted at where things might go next: not a list of links, but a layer of AI that surfaces what we need without us asking. Something closer to Google Recommends than Google Search. For organisations, it raised real questions about how people will find products, content and brands in the years ahead.
What this meant for organisations
The mood changed
If you followed the headlines, you’d assume organisations were pulling back. As usual, the reality behind the headlines was more nuanced. When the MIT study found that over 95% of AI pilots fail, many read it as proof that AI is a failed technology. To us, as an organisation with deep expertise in organisational change, it said nothing new: we’re only in business because embedding a new technology into an organisation is really hard, and teams need a structured way to approach it.
In our work and in our client teams, what we saw wasn’t retreat or an abandonment of AI – it was seriousness. The hype had done its job. Now the questions were harder: how do we actually embed this? What skills do we need? Who’s going to lead it?
The organisations that stalled were the ones waiting for certainty – for the discourse to settle, for a clear winner to emerge, for someone to tell them what to do. The ones that moved were the ones who treated uncertainty as the operating condition and got on with learning anyway.
AI literacy became the quiet differentiator
As models advanced, organisations felt the distance between ambition and ability widening. The pace of releases outstripped internal understanding. Most teams weren’t lacking tools – they were lacking fluency.
This is where we saw the gap open up. Teams that invested in literacy moved faster, made better decisions, avoided the usual traps. They weren’t necessarily the ones with the biggest budgets or the most technical talent. They were the ones who’d built the confidence to experiment.
Capability-building went from optional to essential. Not because the technology demanded it, but because without it, organisations couldn’t tell the difference between a real opportunity and a shiny distraction.
Where we stand
Here’s the honest version: no one knows where this is going. Not the labs, not the analysts, not the consultants. Including us.
But we’ve noticed something. The teams that are finding their way aren’t the ones with the most certainty. They’re the ones with the most willingness to learn. They approach AI experimentally – not as a problem to be solved, but as a capability to be developed. They try things, pay attention to what works, and adjust.
A critical mindset is always essential; whether you call yourself an early adopter or a sceptic matters far less. That debate is a distraction. We advocate staying curious while everyone else picks a side.
We think the organisations that thrive in the next few years won’t be the ones who predicted correctly. They’ll be the ones who built the capacity to keep learning, even when the ground keeps shifting.
Our philosophy for 2026 is simple. We’ll keep imagining, learning, testing, experimenting and building with this technology. If that sounds like you, we’d love to work with you.