Nicholas Clooney

What's Worth Keeping: On Humanness in the Age of AI

This post was co-written with AI — but the thinking, opinions, and anxieties are entirely my own.


I went to an interesting event recently. A thought experiment session — three short stories, each one a quiet provocation about AI, skill, and what it means to stay human in the middle of all this change.

I've been turning them over in my head since. As a software engineer who works with AI every single day — for code, for research, for posts like this one — I have a front-row seat to the tensions these stories describe. So here are my thoughts.


The thread running through all of it: humanness

The first story follows a junior lawyer who discovers that an AI has essentially dissolved the grunt work that was supposed to build her expertise. The third imagines a group of people gathering in a pub to ask themselves: is there anything you used to be able to think about that you can't anymore?

Different settings. Same underlying anxiety.

What's at stake isn't productivity, or even jobs — it's something harder to name. Human instinct. Human taste. Human judgement. The capacity to form a thought yourself before reaching for something that forms it for you.

I think these things are worth preserving. And I think the conversation about AI often skips over them too quickly.


On the job market — the short and long view

There's an uncomfortable economic reality sitting underneath these philosophical questions: AI is not just augmenting people; it is replacing roles. Entry-level and junior positions in particular — the kind of work that used to be how you broke in — are quietly disappearing.

In the short term, that means people without jobs. It hits juniors hardest, because so much of early-career office work — drafting, summarising, researching, reviewing — is exactly what AI does well and cheaply.

But here's what worries me more in the long term: a knowledge vacuum.

The senior people who carry deep expertise, hard-won instinct, and institutional memory will eventually retire. If there's no pipeline of junior and mid-level people who have gone through the process of building that expertise — because the grunt work that built it was automated away — who fills the gap? You can't shortcut your way to twenty years of pattern recognition.

I don't have a clean solution. But I think business, education, and government need to work through this together. There may be a short-term cost to investing in training juniors properly, in an age when AI makes it tempting not to bother. Long term, I think that cost is worth paying — for society, and for the businesses themselves.


Using AI is itself a skill

One of the junior lawyers in the first story, Omar, calls himself "the Centaur" — half-human, half-AI, the combination being more capable than either alone. I like that framing, and it maps onto my own experience.

Working effectively with AI is a genuine skill. It's not just prompting. It involves:

  • Providing the right amount of context, at the right level of specificity
  • Breaking bigger problems into sub-tasks that can be delegated to agents
  • Using different models or approaches to cross-check outputs
  • Building a knowledge foundation — standards, constraints, a framework — that the AI works within, so outputs align with your actual intentions

None of this happens automatically. It requires judgement, structure, and a critical eye. It is, in its own way, a craft.
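The cross-checking practice above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not a real integration: `model_a`, `model_b`, and `cross_check` are hypothetical names, and the two "models" are stubbed placeholder functions standing in for calls to two independent models or providers.

```python
# Toy sketch of cross-checking: ask two independent models the same
# question and flag disagreement for human review. The "models" here
# are hypothetical stubs, not real API calls.

def model_a(prompt: str) -> str:
    # Stand-in for a call to one model/provider.
    return "42" if "answer" in prompt else "unsure"

def model_b(prompt: str) -> str:
    # Stand-in for a call to a second, independent model.
    return "42" if "answer" in prompt else "unknown"

def cross_check(prompt: str) -> tuple[str, bool]:
    """Query both models; agreement passes through, conflict is surfaced."""
    a, b = model_a(prompt), model_b(prompt)
    agreed = a == b
    # When outputs diverge, the human (not the AI) makes the call.
    return (a if agreed else f"CONFLICT: {a!r} vs {b!r}", agreed)

answer, agreed = cross_check("What is the answer?")
```

In practice the stubs would be replaced with real calls to different models, and any disagreement would be routed back to your own judgement rather than resolved automatically, which is the point of the exercise.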

But — and this is the part I want to hold onto — it only works if the human behind it is still actually thinking.


Critical thinking is the thing I worry most about

I can live with the idea that some skills atrophy with disuse. I grew up in China as part of the keyboard generation. I can type Chinese fluently, but my ability to write characters by hand has genuinely degraded. I notice it. I accept it. Writing by hand is a beautiful skill, but losing it hasn't hollowed me out.

I'm not sure I feel the same way about critical thinking.

Most of us who are my age had already formed our ways of thinking, researching, questioning, and creating before AI became what it is now. The habits were in place. We're now choosing — consciously or not — how much to lean on AI tools, and we have a baseline to return to.

But what about the generation coming up now?

If you grow up reaching for AI every time you need to learn something, form an argument, or produce creative work — does the underlying capacity still develop? Or does the shortcut quietly become the only route?

If those capacities get replaced — or never form properly in the first place — I'm not sure what's left.


What I try to do

I use AI every day. I'm not going to pretend otherwise, or dress it up as something more romantic than it is.

But I try to stay the author of my own work. That means reading AI output critically, not just accepting it. It means forming my own view before I ask for help with it. It means asking whether something sounds like me — and caring about the answer.

The human behind the AI still matters. I believe that. I just think we need to be more deliberate about keeping it true.


Thanks to the Thought Experiment session for the provocation. The stories referenced are "The Centaur" and "The Thinking Gym", both by FG.