Opinion piece
Four AI mindsets I keep meeting (and why every team needs all four)
On Trailblazers, Guides, Adventurers, and Observers, and the quiet thing that actually decides whether AI lands in your organisation.
In my work, I meet a lot of leaders. Different industries, different companies, very different challenges. But over the last two years, whenever the topic turns to AI, I've noticed the same four people in almost every room.
One of them is leaning forward, already trying things. Another is leaning back, asking the harder question. A third is curious, eager, willing to set off in a direction nobody has mapped yet. A fourth is quieter, taking it all in, sometimes unsure whether they belong in the conversation at all.
They aren't job titles. They aren't seniority levels. They show up in the CEO and the analyst, in marketing and in operations, in twenty-something engineers and in fifty-something directors. They are AI mindsets. And once you start seeing them, you can't unsee them.
Why mindset, not tools
For a long time I thought AI adoption was a tools problem. Or a strategy problem. Or a budget problem. I was wrong.
The tools are there. The strategies are there. The budgets, in most companies I work with, are there too. And still, in many of the organisations I walk into, AI is somehow not landing. Pilots stall. Manifestos get written and then quietly shelved. Whole teams attend the training and a month later you cannot tell anything has changed.
What's actually missing is something quieter. The way the people in those rooms relate to AI. What they believe is possible. What they're afraid of. What they think they're allowed to try. What they think will happen if they admit they don't know.
That is mindset. And mindset is where adoption either takes root or doesn't.
Why the fear is rational
There's an idea I want to push back on, because it shows up a lot in conversations about AI. It's the idea that the fear people feel is somehow irrational. A mindset deficit to be coached out of them.
I don't think it is.
Technology should be a means to something. In a healthier version of our world, that's exactly what it would be. But this is not how digital technology has actually arrived in our lives over the last twenty years. We took social media to be a means of connection, and what we got back was a polarisation engine, an attention extraction industry, and a documented mental health emergency in our young people. We did not plan for any of that. It happened because we built and deployed those technologies without enough lenses in the room. The engineers had one view. The growth teams had another. The voices that would have asked "what about the users, the kids, the democratic fabric?" were either absent or unheard.
That is the precedent we are looking at when we look at AI. The fear people carry into AI conversations is not a deficit. In many cases it is a memory. A reasonable response to a real pattern.
So the question of AI adoption is not just "can we move faster?" It is also: have we learned anything from social media? Are we, this time, going to bring enough lenses to the table to actually see what we are doing?
What our research kept telling us
Between 2023 and 2025, I worked on a multi-year EU-backed research project called REED, Responsible Education in the Era of Digitalisation, funded under the Erasmus+ programme and run with partners across five European universities. The starting question was deceptively simple: what does it mean to engage responsibly with digital technology, and why do most of our institutions still struggle to do it?
The finding that has stayed with me, the one that shapes everything we do at Untangle Lab now, is this: the reason current problems with digital technology stay unresolved is not a lack of intelligence in the room. It is a lack of lenses.
Most of our educational, business, and decision-making structures look at technology through a single perspective, usually a technical or commercial one, and that single perspective cannot see what it cannot see. The ethical questions get treated as soft. The environmental costs get rendered invisible. The human consequences get noted in passing and then forgotten. The reflexive question, "what assumptions am I making, whose interests does this serve, who is missing from this conversation," barely gets asked at all.
REED's response, in the educational context, was to argue for ethics, responsibility, and sustainability not as add-on modules but as embedded lenses, present across all teaching about technology. What we are doing at Untangle Lab is the operational version of that finding for businesses and leadership teams. A way to bring those different lenses into the room before decisions get made, rather than discovering, later, which lens you needed and didn't have.
The four lenses
The four AI mindsets we work with are not the result of an afternoon's whiteboard session and a tidy 2×2. They are the operational answer to the diverse-lenses problem REED surfaced, and they are built on top of a serious theoretical foundation: research on growth and fixed mindsets, on how organisations actually make sense of new realities, on how teams sustain dynamic equilibrium between competing demands, and on what responsible engagement with technology actually requires. We use validated scales adapted for AI alongside conceptual checks, and we keep the underlying scoring logic deliberately to ourselves.
Two factors, in our research and practice, most reliably predict whether someone moves on AI and whether their movement lands well: how willing they are to experiment, and how clearly they understand what AI actually is. Not whether they can build a model. Whether they can have a clear-headed conversation about what is happening under the hood, what is hype, and what is real.
Those two factors give you four people. Four AI mindsets. And in my experience, every team has some of them, and every team is also missing at least one.
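The two-factor framing above can be sketched as a simple quadrant map. The function and thresholds below are a toy illustration of the idea only; the scores and cut-offs are invented for this sketch and are not Untangle Lab's actual (deliberately unpublished) scoring logic.

```python
# Toy sketch: two factors -> four AI mindsets.
# The 0-1 scores and the 0.5 thresholds are hypothetical,
# chosen purely to illustrate the quadrant structure.

def mindset(experimentation: float, understanding: float) -> str:
    """Map a willingness-to-experiment score and an
    understanding-of-AI score to one of the four mindsets."""
    willing = experimentation >= 0.5  # willing to try things
    clear = understanding >= 0.5      # clear-headed about what AI is
    if willing and clear:
        return "Trailblazer"   # moves fast, with technical confidence
    if clear:
        return "Guide"         # deep understanding, more deliberate
    if willing:
        return "Adventurer"    # eager to try, still building depth
    return "Observer"          # watching closely, not yet moving

print(mindset(0.9, 0.8))  # Trailblazer
print(mindset(0.2, 0.9))  # Guide
print(mindset(0.8, 0.3))  # Adventurer
print(mindset(0.1, 0.2))  # Observer
```

The point of the sketch is only that two independent dimensions, not one "AI maturity" scale, are what produce the four distinct profiles the rest of the piece describes.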
The Trailblazer
The Trailblazer is usually the first person in the room to actually try a new AI tool. The first to demo something to colleagues. The first to suggest a project that will use it. They have technical confidence. They have the willingness too. They like moving.
You probably know one. They are the colleague who has already tried three tools you have not heard of, who has opinions about which lab to watch, who is impatient with conversations they feel are going slowly.
Their gift to a team is momentum. Without them, no one starts. With them, the team has someone proving that things are possible.
Their risk is the speed itself. A Trailblazer left alone, with no one to slow them down, can ship something fast and clever and miss the question that actually mattered: who does this affect, who has not been considered, what happens when it touches the real world. That is where ethics, regulation, and stakeholder care can quietly become afterthoughts. It is where the next public AI misstep gets born.
If this is you: keep going. You are not the problem. The work is to bring others with you, and to listen carefully when someone with a different lens, especially a Guide, raises a question. Their question is not a brake. It is an upgrade to what you are building.
The Guide
The Guide is in the room too, usually quieter than the Trailblazer. They have the technical depth, often more of it. But they are more deliberate. They have watched a few technology waves before this one, and they know the pattern: the breathless promises, the missing details, the projects that ship with everyone smiling and then quietly fall apart six months later.
They are often the colleague who asks "where is the data coming from?" or "what happens to the people whose work this changes?", right at the moment everyone else is excited and ready to move.
Their gift to a team is clarity. They see the system, not just the tool. They translate between the engineer and the boardroom. They notice the risk before it becomes a headline.
Their risk is isolation. Many Guides talk only to other Guides, to their technical peers, to their fellow careful thinkers, while the rest of the organisation makes decisions without their lens. That is how creative organisations end up shipping AI work that lands badly with their audience: not because no one in the building saw the issue, but because the Guides who saw it were not in the room.
If this is you: please come into the rooms where the Trailblazers and the Adventurers are working. They need you. You do not need to slow them down. You need to make their work better. Your caution is care, and care is what makes ambition trustworthy.
The Adventurer
The Adventurer is the colleague who tries the tool first too, but for different reasons than the Trailblazer. They do not yet have the same technical depth. What they have is openness. A willingness to look a bit silly while learning. A low ego about not knowing.
They are the person who, in a meeting full of confident-sounding statements, says "wait, can I just try this on Tuesday?" and then actually does. They learn by doing. They bring others into the doing. They are often the unexpected source of adoption inside teams that have stalled, because they make AI feel possible, casual, normal.
Their gift to a team is energy. The willingness to start, when starting is the hardest part.
Their risk is going too far without enough knowledge. An Adventurer alone, without someone to ground them, can lead a small group of colleagues somewhere none of them are equipped to handle. They get sold things by vendors. They believe the hype demo. They build the wrong thing with great enthusiasm.
If this is you: stay open. The world needs your willingness more than it usually admits. But pair before you launch. Find a Guide or an Observer on your team and ask them, before you press go, what could go wrong that you have not thought of yet. Their answer is your free upgrade. Your enthusiasm plus their depth is one of the most powerful combinations a team can have.
The Observer
The Observer is the quietest one in the room. Often the most thoughtful. They are paying close attention, not because they are disengaged, but because the conversation is moving fast and the gap between what is being claimed and what they understand feels uncomfortably wide.
They are noticing what nobody else is saying. The colleague who has gone quiet. The customer impact that has been glossed over. The human consequence sitting just outside the spotlight.
Their gift to a team is exactly this attentiveness. They hold the long view. They ask the human questions. In rooms full of momentum, they are often the one who notices what the rest of the room has missed.
Their risk is becoming invisible. When the technical conversation feels too dense to enter, the easiest thing is to stay quiet and hope it resolves itself. It will not. The conversation is not going to slow down on its own. And the cost of disengagement is not paid now. It is paid later, when the gap has widened and catching up has become harder.
If this is you: you are not behind. You are reading the room, and your reading is more accurate than you think. The work is small, structured, and yours. One thirty-minute experiment with a generative AI tool on a task you already know well. One short sentence in the next meeting that names what you have noticed. The path is walkable from where you are. It just needs a first step.
Why every team needs all four
You might recognise yourself in one of these. Or you might recognise a colleague: the Trailblazer in marketing who is racing ahead, the Guide who keeps getting left out of the right conversations, the Adventurer who is about to launch something a little too quickly, the Observer in the corner whose perspective you needed two meetings ago.
Here is what I want to say, clearly: none of these four is better than the others. Each is a lens, and each lens guards something the others tend to miss.
When teams adopt AI well, all four are present, and the friction between them is part of the work. The Trailblazer's speed becomes more credible because the Guide has challenged it. The Adventurer's enthusiasm becomes durable because the Observer has noticed what would otherwise be overlooked. The Guide's caution becomes useful because the Trailblazer has given them something real to assess. The conflict between these lenses is not a problem. It is a signal that the team is doing the work it should be doing.
When teams adopt AI badly, it is usually because one or two of the lenses are missing. Lots of Trailblazers and Adventurers without Guides ship fast, break things, and become the next cautionary tale. Lots of Guides and Observers without Trailblazers and Adventurers produce excellent slides about decisions that never get made.
This is why we work the way we work. Not by trying to make everyone into a Trailblazer. Not by trying to slow everyone down to a Guide's pace. By helping organisations see their mix, surface the friction productively, and get the four lenses talking to each other in the same room.
Where to from here
If you have been reading this and wondering which one you are, that itself is a good sign. The four mindsets are not fixed identities. They are starting points, shaped by what you currently believe and the conditions you have been working in. With the right teammates, the right structure, and a small first step, every one of them can move.
We are building an AI Mindset Assessment that gives you a personal picture of your lens in about fifteen minutes. If you want to know yours, that is the place to start. And if you want to bring this conversation into your team or organisation, that is exactly the work an Untangle Lab expedition is designed for.
Either way, thank you for reading. I hope you saw yourself in here somewhere. And I hope, when you are next in a room talking about AI, you notice the other three lenses around you, and let them do their work.