Opinion paper
We're asking the wrong question about AI
An opinion piece on responsibility, leadership, and the question I left consulting to take seriously.
I left Accenture in part because of a question that kept going unasked.
After years of helping organisations adopt emerging technology (blockchain, AI, automation), I noticed the same pattern in room after room. We asked, with great sophistication, what the technology could do. We almost never asked, with equal seriousness, what it should do. Or what it should not. Or who decides.
The first kind of question is technical, and we are good at it. The second kind is harder. It requires us to slow down. It requires more than one lens in the room. It requires us to admit that technology is not neutral, that it shapes how we live, and that the people who shape it carry a responsibility larger than their quarterly numbers.
I have come to believe that this second question is the most consequential question of the next decade.
What should AI do? What should it not? And who decides?
What I kept seeing
I worked across industries: financial services, retail, public sector, energy. I saw scale. I saw acceleration. I saw the operational genius that good consultancies bring to good clients. And I saw, repeatedly, what happens when a powerful technology is implemented through a single lens. Usually a commercial one. Sometimes a technical one. Occasionally both. Almost never a human one.
The clearest example of that pattern is one we are all still living inside. We took social media to be a means of connection. What we got back was a polarisation engine, an attention extraction industry, and a youth mental health emergency that the platforms themselves now acknowledge. We did not plan for any of that. It happened because we built and deployed those technologies without enough lenses in the room. Engineers had one view. Growth teams had another. The voices that would have asked "what about the kids? What about democracy? What about loneliness?" were either absent or unheard.
I do not want us to do this again with AI. I do not think we have to.
What our research kept telling us
In 2023 I started working on a multi-year EU-backed research project under the Erasmus+ programme called REED (Responsible Education in the Era of Digitalisation), with partners across five European universities. The starting question was simple. Why do we keep struggling to engage responsibly with new technology, even when we know we should?
The finding that has reshaped my thinking is this. We do not struggle because we lack intelligence. We struggle because we lack lenses.
Most of our institutions (businesses, schools, governments) look at technology through a single perspective at a time. That single perspective is competent at what it sees, and blind to what it does not. The ethical questions get treated as soft. The environmental costs get rendered invisible. The human consequences get noted in passing and then forgotten. The reflexive questions, "what assumptions am I making, whose interests does this serve, who is missing from this conversation," barely get asked at all.
REED's argument, in its educational form, is that ethics, responsibility, and sustainability cannot be add-on modules. They must be embedded lenses, present across all teaching about technology. I think the same is true for organisations. You cannot run a values workshop in March and expect it to fix a roadmap built in February through a purely commercial lens. The lenses have to be in the room when the decisions are being made.
In September 2026 I begin a PhD on technology mindset and responsible leadership, deepening this work. The reason is not academic ambition. It is that what I see across boardrooms is data the field does not yet have, and I think we need it.
Quickly, and rightly
I am not anti-speed. I worked at the kind of place where speed is a virtue, and there is nothing wrong with that. AI is moving fast. The geopolitics are moving fast. Your competitors are moving fast. And slowing down for its own sake is its own kind of irresponsibility, especially in Europe, where we cannot afford it.
But somewhere along the way, "quickly" became the only verb that mattered. We stopped putting "rightly" next to it.
My conviction is that those two words belong together. Not either. Both.
If you are a CxO at an organisation that already shapes the world we live in, and many of you are, you carry a responsibility I do not think the leadership conversation has fully metabolised yet. You have leverage. The decisions you make about AI in the next two to three years will compound, into your organisations, into the markets you operate in, into the daily lives of the people who use what you build. That kind of impact is not free. The price of it is the obligation to lead well. Not just fast. Right.
What we do at Untangle Lab
Untangle Lab is what I built to take this seriously, in practice, with leaders.
We start by measuring something most organisations have never made visible. The mindset quietly shaping their AI decisions. Not their strategy. Not their tooling. The assumptions, the hopes, the hesitations of the actual humans in the room. Drawing on our EU-funded research, we map how leaders and their teams actually think about AI. The measurement is a mirror. It tells you something true about your team that no round of consultant interviews can give you. The expedition that follows is sharper, because the team can finally see what they are working with.
Then we go on expeditions together: sessions where leadership teams sit with the questions the mirror has surfaced. What should we do with AI? What should we never do? How do we want to live? How do we want to lead? These are not workshop questions. They are the real questions that good leaders avoid by accident, because they are uncomfortable, and because no one has built the structure that holds the conversation long enough to answer them.
The methodology is grounded in research, not opinion. The PhD I am beginning will deepen it, turning what we see across boardrooms into evidence the field does not yet have.
The research generates the insights. The insights become talks. And I host events at the intersection of AI, business, education, and society, because the most honest conversations I have sat in have always happened in rooms where those four worlds meet. Untangle Lab is the engine that holds it all together.
A personal note
I do not believe in shortcuts. Not in sport, not in research, not in leadership. As a former elite rower, I learned in a boat that the seconds you skip in training show up in the race, every single time. I think the same is true of leadership in a complex era. The lenses you skip in the early decisions become the headlines in the later ones.
I want a life that is rich and layered and deep. Hard work, and the tiredness that comes with it. To do it all and feel it all.
I believe in business that is profitable and good. I am not interested in the framing that treats those as a trade-off, because I have seen, up close, that the best companies do not treat them that way. The future is not here yet. But the leaders shaping it now have an immense opportunity, and an immense responsibility. To move quickly, and to move rightly. Both, not either.
If this resonates
If you are a CxO trying to lead AI adoption you will still be proud of in five years, or you are organising an event where the AI conversation deserves more depth than it usually gets, let's connect and talk.
The next decade will be shaped by the quality of the questions we ask. Let us ask better ones, together.