A Parable for Anyone Thinking About AI and Their Future

Let me tell you a story about a foosball player.

Not the person gripping the handles. Not the people leaning over the table. Not the ones watching from the side, reacting to every near miss and lucky bounce.

I mean the little player on the rod.

The one fixed in place. The one locked into one line. The one who can slide back and forth, but only so far. The one who can affect the game, but only if the ball comes close enough to matter.

They don’t choose the strategy. They don’t choose the timing. They don’t choose the pace.

Most of the time, they wait.

Then the ball comes their way, and suddenly everything matters. Angle. Timing. Readiness. Contact.

That sounds a little like work to me.

A lot of people spend their days in roles that aren’t all that different. They work inside boundaries they didn’t create. They carry responsibility inside systems they don’t control. They try to do their part well, even when they can’t see the whole field or understand everything that sent the work their way.

They may not know the whole game, or how the score is being kept. They may not even know what happened two lines back that sent the ball in their direction.

Still, when it reaches them, their moment is real.

There’s something important in that.

We don’t need to control the whole table to be responsible for our part of the play. We don’t have that kind of control in most of life. We’re asked something simpler and harder. Be ready. Pay attention. Do the best you can with what reaches you.

That alone is worth contemplating.

But what if we add artificial intelligence to the picture?

Imagine that same foosball player being given access to a system that sees patterns faster. A system that recognizes angles sooner. A system that can suggest where the ball is likely to go before the player fully sees it unfold.

At first, that sounds like help. And often it is.

The player reacts faster. The contact gets cleaner. The scoring chances improve.

AI helps people create faster, sort faster, summarize faster, and respond faster. It removes friction. It can make a capable person more effective inside the lane they’ve always occupied.

That is the promising side of it.

But there is also an uncomfortable part.

Once the system starts seeing faster and suggesting more accurately, someone above the table is eventually going to wonder why they still need the player. That question doesn’t always get asked out loud. But it’s there. You can feel it. Pretending otherwise doesn’t make it go away.

That unease is legitimate.

The question is what to do with it.

Here’s where I think the real work begins.

What separates a great foosball player from an automated one isn’t reaction time. Machines will win that contest.

The deeper difference is harder to name. Knowing when not to take the obvious shot. Recognizing that the ball coming from a certain direction is a trap, not an opportunity. Sensing that something is off and adjusting before the moment fully reveals why. Coordinating with the players on the other rods in ways that don’t require a word.

That’s judgment. That’s situational awareness. That’s the kind of thing that lives in the player, not the system.

AI can help with speed. It can help with prediction. It can surface options. But it doesn’t carry responsibility the way a person does. It doesn’t feel the weight of consequences. It doesn’t care about the human being on the other end of the decision. It doesn’t wrestle with what should be done. Only what can be done.

That still belongs to us.

I want to be honest about the limits of that claim. The argument that human judgment is safe from automation isn’t permanently settled. AI is advancing in that direction too. Anyone who draws that line with total confidence is claiming more than anyone can know.

But if I define my value only by output and routine execution, I’ll always be vulnerable to something faster.

If my value includes judgment, trust, discernment, adaptability, and the ability to connect my small part of the field to a larger purpose, then the picture changes. AI becomes a tool I use, not a definition of who I am, or an immediate replacement for the work I do.

For some people, this reframing will feel like genuine good news. Their roles have always required judgment, and AI can finally free them from the parts that didn’t.

For others, the harder truth is that their role may need to change. Some work is primarily mechanical. Some lanes will be redesigned or eliminated in this process.

The courage in that moment isn’t pretending the role is something it isn’t. It’s being willing to grow. To move toward the parts of the field where human judgment still has the most to offer.

That is a hard ask. Unfortunately, for many people, it’s becoming a necessary one.

I also want to be honest about who this reframing fits best. If you have domain knowledge, a network, and some runway, the opportunities ahead are genuine. If you are mid-career in a role that has been primarily mechanical, the path from insight to action is longer and steeper. That doesn’t make the direction wrong. It means the journey depends on where you’re starting from.

But here’s something else worth considering, especially if uncertainty feels more like a threat than an opportunity.

The same tools raising these questions are also lowering barriers in ways we have never really seen before. Starting something new used to require capital, staff, infrastructure, and years of groundwork before the first real result.

That is still true for some things. But for many others, the gap between “I have an idea” and “I have something real” has collapsed in ways that are genuinely new.

The foosball player who spent years developing judgment, domain knowledge, and an instinct for the game now has access to tools that can help them build something of their own, not just execute better inside someone else’s system.

That’s a different kind of power than speed or efficiency.

It’s agency, if we choose to use it.

And it doesn’t have to be a solo venture. Some of the most interesting things happening right now involve small groups of people — two, three, maybe five — who share domain knowledge, complementary judgment, and a problem worth solving. With the help of these AI tools, they can pool their capabilities in ways that would have required a full company to attempt a decade ago.

Not everyone will go this route. Not everyone should.

But the option is more available than it has ever been. And for the person who has been quietly wondering whether there’s a different game they should be playing, this moment may be less of a threat and more of an opening.

The foosball player is still fixed to the rod. Still limited. Still dependent on timing. Still part of a game they don’t fully control.

That hasn’t changed.

What may need to change is the story the player tells about themselves. A bigger, truer one. One with more possibilities.

Use the AI tools. Learn how to maximize your position with them.

But don’t let AI reduce you.

You were never only the motion. You were never only the output. You were never only the kick.

You were the one responsible for what to do when the ball came your way, and that’s still true.

And now, for the first time, you may have more say than ever in choosing your table.

Photo by Stefan Steinbauer on Unsplash – I’ve only played foosball a few times. I’m terrible at it and haven’t played it enough to feel like the game is anything more than randomness and chaos. Funny thing is that lots of workers have a similar perspective on the job they’re doing for their employer.

When the Disruptors Get Disrupted

For most people in IT, change is constant.

New platforms arrive. Old tools fade. Processes are reworked. Skills must evolve.

In that sense, disruption has long been part of the job description.

Software developers create new and improved tools. They streamline workflows. They automate tasks that once required entire teams. Over time, they have reshaped and disrupted how work gets done across nearly every industry.

This pattern has been in place for decades.

For software developers, something different is happening now.

With the arrival of AI-assisted development tools such as Anthropic’s Claude Code, disruption has begun to turn inward. These tools are reshaping how developers approach their own work.

For many in the profession, this feels unfamiliar.

Software development continues, but the definition and details of the role are shifting. Tasks that once required sustained manual effort can now be generated, refactored, tested, and explained with remarkable speed.

A developer who once spent an afternoon writing API integration code might now spend fifteen minutes directing an AI to produce it, followed by an hour reviewing edge cases and security implications. The center of gravity moves toward judgment and direction rather than execution and production.
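That review step can be made concrete. Here is a minimal, hypothetical sketch (the function, URL, and scenario are invented for illustration, not taken from any real project): an AI-drafted URL builder for an API call, with the edge cases a human reviewer caught written in as comments.

```python
import urllib.parse

def build_request_url(base_url, user_id):
    """Build the URL for a hypothetical users API endpoint."""
    # The AI draft simply concatenated strings; human review caught two issues:
    # 1. An unencoded user_id allows path injection (e.g. "../admin").
    # 2. No scheme check lets "http://" slip into a TLS-only service.
    if not base_url.startswith("https://"):
        raise ValueError("refusing non-HTTPS base URL")
    # Percent-encode everything, including "/" (safe="" means no exceptions).
    safe_id = urllib.parse.quote(str(user_id), safe="")
    return f"{base_url.rstrip('/')}/users/{safe_id}"

# A malicious-looking id stays harmlessly encoded in the path.
print(build_request_url("https://api.example.com", "42/../admin"))
```

The generation took seconds; the two comments are where the hour of human judgment went.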

When job roles experience disruption, responses tend to follow predictable patterns. Some people dismiss the change as temporary or overhyped. Others push back, trying to protect familiar and comfortable ways of working. Still others approach the change with curiosity and engagement, interested in how new capabilities can expand what’s possible.

Intent Makes the Difference

An important distinction often gets overlooked when discussing pushback.

Some resistance grows from denial. It spends energy cataloging flaws, defending established workflows, or hoping new tools disappear. That approach drains effort without shaping new outcomes. It preserves little and teaches even less.

Other forms of resistance grow from professional judgment.

Experienced developers often notice risks that early enthusiasm misses. Fragile abstractions, security gaps, maintenance burdens, and failures that appear only at scale become visible through lived experience. When developers raise concerns in the service of quality, safety, and long-term viability, their input strengthens the eventual solution. This kind of resistance shapes progress rather than attempting to stop it.

The most effective developers recognize this shift and respond deliberately. They move away from opposing new tools and toward advocating for their effective use. They ask better questions. They redesign workflows. They establish guardrails. They apply experience where judgment continues to matter.

In doing so, they follow the same guidance developers have offered others for years.

Embrace new tools.
Continually re-engineer how work gets done.
Move upstream toward problem framing, system design, and decision-making.

Greater Emphasis on Judgment

AI generates code with increasing competence. Decisions about what should be built, which tradeoffs make sense, and how systems must evolve over time still require human judgment. As automation accelerates, these responsibilities grow more visible and more critical.

This opportunity in front of developers calls for leadership.

Developers who work fluently with these tools, guide their thoughtful adoption, and help their teams and organizations navigate the transition become trusted guides through change. Their leadership shows up in practical ways:

- pairing new capabilities with healthy skepticism
- putting review processes in place to catch subtle errors
- mentoring junior developers in how to evaluate results rather than simply generating them
- exercising judgment to prioritize tasks that benefit most from automation

Disruption has always been part of the work.

The open question is whether we meet disruption as participants or step forward as guides.

Photo by AltumCode on Unsplash