A Parable for Anyone Thinking About AI and Their Future

Let me tell you a story about a foosball player.

Not the person gripping the handles. Not the people leaning over the table. Not the ones watching from the side, reacting to every near miss and lucky bounce.

I mean the little player on the rod.

The one fixed in place. The one locked into one line. The one who can slide back and forth, but only so far. The one who can affect the game, but only if the ball comes close enough to matter.

They don’t choose the strategy. They don’t choose the timing. They don’t choose the pace.

Most of the time, they wait.

Then the ball comes their way, and suddenly everything matters. Angle. Timing. Readiness. Contact.

That sounds a little like work to me.

A lot of people spend their days in roles that aren’t all that different. They work inside boundaries they didn’t create. They carry responsibility inside systems they don’t control. They try to do their part well, even when they can’t see the whole field or understand everything that sent the work their way.

They may not know the whole game, or how the score is being kept. They may not even know what happened two lines back that sent the ball in their direction.

Still, when it reaches them, their moment is real.

There’s something important in that.

We don’t need to control the whole table to be responsible for our part of the play. We don’t have that kind of control in most of life. We’re asked something simpler and harder. Be ready. Pay attention. Do the best you can with what reaches you.

That alone is worth contemplating.

But what if we add artificial intelligence to the picture?

Imagine that same foosball player being given access to a system that sees patterns faster. A system that recognizes angles sooner. A system that can suggest where the ball is likely to go before the player fully sees it unfold.

At first, that sounds like help. And often it is.

The player reacts faster. The contact gets cleaner. The scoring chances improve.

AI helps people create faster, sort faster, summarize faster, and respond faster. It removes friction. It can make a capable person more effective inside the lane they’ve always occupied.

That is the promising side of it.

But there is also an uncomfortable part.

Once the system starts seeing faster and suggesting more accurately, someone above the table is eventually going to wonder why they still need the player. That question doesn’t always get asked out loud. But it’s there. You can feel it. Pretending otherwise doesn’t make it go away.

That unease is legitimate.

The question is what to do with it.

Here’s where I think the real work begins.

What separates a great foosball player from an automated one isn’t reaction time. Machines will win that contest.

The deeper difference is harder to name. Knowing when not to take the obvious shot. Recognizing that the ball coming from a certain direction is a trap, not an opportunity. Sensing that something is off and adjusting before the moment fully reveals why. Coordinating with the players on the other rods in ways that don’t require a word.

That’s judgment. That’s situational awareness. That’s the kind of thing that lives in the player, not the system.

AI can help with speed. It can help with prediction. It can surface options. But it doesn’t carry responsibility the way a person does. It doesn’t feel the weight of consequences. It doesn’t care about the human being on the other end of the decision. It doesn’t wrestle with what should be done. Only what can be done.

That still belongs to us.

I want to be honest about the limits of that claim. The argument that human judgment is safe from automation isn’t permanently settled. AI is advancing in that direction too. Anyone who draws that line with complete confidence is overconfident.

But if I define my value only by output and routine execution, I’ll always be vulnerable to something faster.

If my value includes judgment, trust, discernment, adaptability, and the ability to connect my small part of the field to a larger purpose, then the picture changes. AI becomes a tool I use, not a definition of who I am, or an immediate replacement for the work I do.

For some people, this reframing will feel like genuine good news. Their roles have always required judgment, and AI can finally free them from the parts that didn’t.

For others, the harder truth is that their role may need to change. Some work is primarily mechanical. Some lanes will be redesigned or eliminated in this process.

The courage in that moment isn’t pretending the role is something it isn’t. It’s being willing to grow. To move toward the parts of the field where human judgment still has the most to offer.

That is a hard ask. Unfortunately, for many people, it’s becoming a necessary one.

I also want to be honest about who this reframing fits best. If you have domain knowledge, a network, and some runway, the opportunities ahead are genuine. If you are mid-career in a role that has been primarily mechanical, the path from insight to action is steeper. That doesn’t make the direction wrong. It means the journey depends on where you’re starting from.

But here’s something else worth considering, especially if uncertainty feels more like a threat than an opportunity.

The same tools raising these questions are also lowering barriers in ways we have never really seen before. Starting something new used to require capital, staff, infrastructure, and years of groundwork before the first real result.

That is still true for some things. But for many others, the gap between “I have an idea” and “I have something real” has collapsed in ways that are genuinely new.

The foosball player who spent years developing judgment, domain knowledge, and an instinct for the game now has access to tools that can help them build something of their own…not just execute better inside someone else’s system.

That’s a different kind of power than speed or efficiency.

It’s agency, if we choose to use it.

And it doesn’t have to be a solo venture. Some of the most interesting things happening right now involve small groups of people — two, three, maybe five — who share domain knowledge, complementary judgment, and a problem worth solving. With the help of these AI tools, they can pool their capabilities in ways that would have required a full company to attempt a decade ago.

Not everyone will go this route. Not everyone should.

But the option is more available than it has ever been. And for the person who has been quietly wondering whether there’s a different game they should be playing, this moment may be less of a threat and more of an opening.

The foosball player is still fixed to the rod. Still limited. Still dependent on timing. Still part of a game they don’t fully control.

That hasn’t changed.

What may need to change is the story the player tells about themselves. A bigger, truer one. One with more possibilities.

Use the AI tools. Learn how to maximize your position with them.

But don’t let AI reduce you.

You were never only the motion. You were never only the output. You were never only the kick.

You were the one responsible for what to do when the ball came your way, and that’s still true.

And now, for the first time, you may have more say than ever in choosing your table.

Photo by Stefan Steinbauer on Unsplash – I’ve only played foosball a few times. I’m terrible at it and haven’t played it enough to feel like the game is anything more than randomness and chaos. Funny thing is that lots of workers have a similar perspective on the job they’re doing for their employer.

The Adoption Curve in Real Life (It’s Messier than the Textbooks Say)

You’ve probably seen it happen. A new tool explodes across your social media feeds, your team starts asking questions, and you’re left wondering whether to embrace it or ignore it. Last month’s OpenClaw rollout is the latest reminder of how chaotic technology adoption really is.

Technology adoption curves are depicted as neat, predictable diagrams: a smooth line moving from innovators to early adopters, through the early majority and late majority, and on to the laggards.

In textbooks, the curve looks calm. In real life, it feels more like a storm.

Watching the recent surge of interest around OpenClaw, an open-source AI automation tool that lets developers and non-developers build custom autonomous agents, highlights this contrast clearly.

The tool was renamed rapidly, moving from Clawdbot to MoltBot to OpenClaw. While its identity was in motion, innovators and early adopters embraced it with enthusiasm. Within days, countless articles and YouTube videos appeared with reviews, tutorials, and predictions about how it would reshape everything.

Within another week, we began hearing a more complete message. People still praised its power, but they also surfaced significant security vulnerabilities that accompany those capabilities.

My goal in this post is less about celebrating OpenClaw itself and more about understanding the real-world adoption pattern that I’ve seen countless times.


Phase 1: The Enthusiasts Light the Fuse

Early adopters jump in first. They’re curious, energetic, and quick to celebrate what they’ve discovered.

They imagine what could be, long before most people fully understand what exists today. They test edge cases, build experiments, share demos, and push boundaries simply because the possibility fascinates them.

This group rarely waits for permission. Their momentum gives a new idea its initial lift.


Phase 2: Quiet Experimenters Emerge

Close behind them comes a second tier of users who watch carefully and learn before speaking.

They begin to explore the tool in private, trying things on their own terms rather than joining the public conversation. Their silence can look like hesitation but usually signals careful attention and research.

They want confidence before committing.


Phase 3: The Tribalization of Opinion

At the same time, people who barely understand the technology start lining up on all sides of the debate as if it were a political issue.

Some declare that it will transform everything. Others warn that it is reckless or dangerous. Still others dismiss it as a passing fad.

Much of this reaction grows from identity, fear, or ideology rather than direct experience. The conversation gets louder while genuine clarity is harder to find.


Phase 4: Rapid Evolution and Ecosystem Growth

If the tool has real potential, the surrounding environment begins to move quickly.

The creators ship frequent updates to the product. Early adopters invent new uses that nobody predicted. Supporting products (like Cloudflare services or the Mac Mini in the case of OpenClaw’s recent meteoric growth) suddenly see rising demand because they pair well with the new capability. Other companies look for ways to add integrations that make the new tool easier to plug into existing systems.

At this stage, the story shifts from a single product to an emerging ecosystem that amplifies its reach.


Phase 5: The Backlash from the Pioneers

Then a familiar turn arrives.

Some early adopters start getting bored and even a little disillusioned. Others start pointing out limitations, rough edges, and frustrations that were overlooked during their initial excitement. Sometimes they simply move on to the next shiny thing. Other times, sustained use reveals real constraints that only time can expose.

Ironically, the quieter second-wave adopters are just beginning to feel comfortable. Enthusiasm and skepticism overlap in the marketplace.


Phase 6: Corporations Hit the Brakes

Meanwhile, large organizations watch from the sidelines while asking serious questions about security, governance, and risk. They focus on oversight, accountability, and long-term stability.

From a leadership perspective, this cautious approach seems safe. They can’t risk the family jewels on a promise of something amazing. At least, not yet.


Phase 7: The Safe Version Arrives

If the capability truly matters and maintains momentum, a major platform provider such as Microsoft, Google, Amazon, or (nowadays) OpenAI or Anthropic eventually releases something comparable inside their own infrastructure.

This can happen through acquisition, partnership, or independent development. When it does, the risk profile shifts almost overnight.

What once felt experimental and dangerous now feels enterprise-ready. It’s the signal that many CIOs and CISOs were waiting for.


Phase 8: The Irony of Timing

By the time most corporations adopt the new “safer version” of the capability, the original pioneers have already moved on.

They’re chasing the next breakthrough and speaking about the earlier tool as if it belongs to another era. Six months earlier it felt magical. Now it feels ordinary, in part because that earlier innovation did its job of pushing the frontier outward.


What This Means for Leaders

For leaders who care about both capability and security, sprinting toward the bleeding edge rarely makes sense.

Waiting for stability, clear governance, and trusted integration usually serves organizations better. In practice, that means allowing major, “trusted” platforms to bring new capabilities inside their own secure environments before moving at scale.

At the same time, leaders can’t afford to look only inward. Something important is always unfolding beyond the walls of their organization. Entrepreneurs are experimenting. Startups are forming. New approaches and new possibilities are taking shape. If a company becomes too passive or too comfortable, it risks being outpaced rather than protected.

The real leadership challenge is learning to tell the difference between waves that will reshape an industry and those that will fade.

Some signs of staying power: multiple independent developers building on top of a new technology, respected technologists moving beyond flashy demos into real production use cases, and serious enterprise concerns about security and governance being addressed rather than dismissed.

We don’t need to chase every new wave.

The real test is recognizing the waves that matter before they feel safe enough to bring inside our organization.

Photo by Nat on Unsplash – Innovation is easy to see. Truth is harder to judge.     

AI as Iteration (at Scale)

We call it Artificial Intelligence, but large language models don’t think, reason, or understand in human terms.

A more accurate description might be Artificial Idea Iteration, since these tools dramatically compress the cycles of research, drafting, testing, and revision.

SpaceX didn’t transform spaceflight by having perfect ideas. They collapsed the time between ideas and reality. Failing fast, learning quickly, and iterating relentlessly.

AI creates the same dynamic for knowledge work, letting us move from intuition to articulation to revision in hours instead of weeks.

Engineers rely on wind tunnels to test aircraft designs before committing real materials and lives. AI does this for thinking.

Iteration itself isn’t new. What’s new is the scale of iteration now at our fingertips. We can explore multiple paths, abandon weak directions quickly, and refine promising ones without the time, coordination, and risk that once kept ideas locked in our heads.
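To make that concrete, here is a minimal sketch of cheap iteration using the OpenAI Python SDK. Everything in it is an illustrative assumption rather than a prescription: the model name, the prompt, and the number of variants could all be swapped for your own.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

IDEA = "Why does cheap iteration change how teams develop ideas?"

# Cheap iteration: generate several independent drafts of the same idea,
# then let human judgment discard weak directions and refine promising ones.
drafts = []
for _ in range(4):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        temperature=1.0,      # encourage variation between drafts
        messages=[{"role": "user", "content": f"Write a short paragraph: {IDEA}"}],
    )
    drafts.append(response.choices[0].message.content)

for i, draft in enumerate(drafts, 1):
    print(f"--- Draft {i} ---\n{draft}\n")
```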

When iteration becomes inexpensive, we can take more intellectual risks and shift from trying to always be right to trying to always get better.

It’s ironic that as iteration becomes cheaper and faster with AI tools, human judgment becomes more valuable. Someone still needs to know what’s worth developing, what deserves refinement, and when something is complete rather than exhausted.

The intelligence was never in the machine. AI simply gives us the capacity to develop ideas, test them against reality, and learn from the results at a scale and speed we’ve never had before.

Iteration at scale changes what’s possible. Judgment determines what’s worth pursuing.

Photo by SpaceX on Unsplash – when SpaceX proposed the idea of landing and reusing their rocket boosters after each launch, the idea seemed impossible. Now it’s happening about 3 times per week…and they’re just getting started. 

Please share this post if you found it valuable…

What If Jarvis Is Available to Each of Us?

One of the best parts of the Iron Man movies is Jarvis, the ever-present AI system that acts as an extension of Tony Stark’s mind. Jarvis is a collaborator. A research analyst. A pattern finder. A problem solver. He handles logistics, runs calculations, surfaces insights, and stays ready in the background until Tony needs him.

Jarvis amplifies and extends Tony’s genius.

Recently, I introduced a friend to ChatGPT. He hadn’t jumped into any AI tools yet, but he could see that people around him were finding real value in them. Like many thoughtful people, his first questions weren’t about features. They were about data privacy. About whether these tools were simply repackaging other people’s work. About what was really going on under the hood.

At one point, he asked a simple question:

Is it like having Jarvis around whenever you need him?

To me, the honest answer is yes.

But it’s also important to realize that Jarvis isn’t perfect. And neither are the AI tools available to us today.

The First Questions Matter. Almost every serious conversation about AI tools begins in the same place.

Is my data safe?

Who owns the output?

Can I trust what I’m getting back?

These are the same questions we ask whenever a new digital tool emerges.

At a basic level, tools like ChatGPT let us keep our conversations out of public model training, whether by default on business plans or through a settings opt-out. Even with that protection in place, I still guard my data carefully. If I’m asking questions related to finances, health, or legal matters, I use hypothetical scenarios rather than personal specifics. I’m the first line of defense when it comes to my personal information.

In professional and commercial environments, organizations using business or enterprise versions gain additional protections around data isolation, encryption, access controls, and audit logging. At the enterprise level, some platforms even allow customers to manage their own encryption keys on top of the platform’s security.

The tool doesn’t decide what’s appropriate to share. We do.

Who Owns the Output? We do. The tool doesn’t claim authorship. It doesn’t retain ownership of what it produces for you. The output becomes yours because you directed the work, supplied the context, and decided how the result would be used.

But ownership is only part of the story. Responsibility matters just as much.

The tool doesn’t know your intent. It doesn’t understand your audience. And it doesn’t bear the consequences of getting something wrong. That responsibility stays with the human in the loop. That’s us.

In that sense, using AI isn’t fundamentally different from working with many other analytical tools we may have used for decades. The work becomes yours because you shape it, refine it, and ultimately stand behind it.

A Note on Sources and Attribution. Owning the output also means owning the responsibility for its accuracy and integrity. This is especially important when it comes to research and citations.

AI tools can pull together large volumes of information, synthesize ideas across many inputs, and present them in clean, compelling language. That capability is incredibly useful. But it doesn’t remove the author’s responsibility to understand where ideas come from and how they’re represented.

The tool may summarize research. It may surface commonly known concepts. It may produce language that sounds authoritative and polished. What it doesn’t guarantee is proper attribution, or any assurance that the content isn’t mirroring a specific source too closely.

That responsibility stays with the human.

When I use AI for research or writing, I treat it as a starting point. I ask it to surface each source. I follow links. I read original material. And when an idea, quote, or framework belongs to someone else, I make sure it’s credited appropriately. This step also helps catch hallucinations that sound convincingly accurate.

Ownership requires standing behind the integrity of the work to the best of your ability.

Can I Trust What I’m Getting Back? Usually, but only with supervision. AI tools are very good at consuming information, identifying patterns, and accelerating first drafts. They are less reliable when precision, nuance, or real-world verification is required.

They can be confidently wrong. They can lose context. They can blend accurate information with outdated or incomplete details.

AI tools hallucinate regularly, though this tendency improves with each new model release. These aren’t reasons to dismiss AI as a tool. They’re reminders to understand what AI is and what it isn’t.

Trust paired with skepticism is the right approach. AI tools are fast-thinking assistants, never the final authority.

Verification still matters. Judgment still matters. Experience still matters. In fact, the better your judgment, the more valuable these tools become.

Why Memory Changes the Equation. Most people use AI tools like a smart search engine. Ask a question. Get an answer. Move on.

That works. But it barely scratches the surface of what’s possible. The real multiplier happens when the tool is allowed to remember context.

ChatGPT includes a memory capability that lets you intentionally store preferences, patterns, and reference material across conversations. Used well, this transforms the tool from something you query into something you can collaborate with.

Over the past year and across hundreds of prompt conversations, I’ve shared:

- My writing voice and stylistic preferences

- A digital copy of a leadership book I wrote over a decade ago (about 65,000 words)

- An autobiography I wrote for my children and grandchildren (about 90,000 words)

- Hundreds of blog posts published over the past 13 years (roughly 240,000 words)

- How I like to structure projects and approach new work

In total, I’ve trained the tool with nearly 400,000 words of my original content. This began as an experiment to see if I could reduce generic responses and encourage the tool to approach questions from my foundational perspective.

The difference is tangible. Early on, whether I was drafting communication, analyzing problems, or organizing ideas, the tool would produce polished but generic output that required extensive rewriting. Now, it reflects my priorities, uses frameworks I’ve shared, and produces work that feels aligned with how I think. I still edit quite a bit, but I’m refining rather than rebuilding.
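ChatGPT’s memory is a built-in product feature rather than something you wire up yourself, but for readers who work through the API instead of the chat interface, a rough approximation is to carry stored preferences into each request as a system message. Here’s a minimal sketch, assuming the OpenAI Python SDK and a hypothetical local file of accumulated preferences:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical file of voice, preferences, and frameworks gathered over time,
# standing in for what ChatGPT's memory feature stores across conversations.
with open("my_preferences.txt", encoding="utf-8") as f:
    stored_context = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": f"Follow these writing preferences and frameworks:\n{stored_context}"},
        {"role": "user",
         "content": "Draft an opening paragraph on technology adoption."},
    ],
)
print(response.choices[0].message.content)
```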

Collaboration Requires Judgment. My friend asked me another important question.

Do you still feel like the writing you produce with it is yours?

Yes. Completely.

Every project I’ve worked on with these tools begins with my original content, reinforced by reference material I created long before AI entered the picture. Hundreds of thousands of words written over more than a decade. Clear intent about audience and purpose, using a defined process I’ve established before drafting anything.

The tool supports rather than replaces my judgment. Drafts usually require significant edits, shifts in tone, and sometimes complete rewrites.

Where it excels is in synthesis. In retrieval. In pattern recognition across large bodies of work. In accelerating first drafts that already have direction.

Large projects require constant supervision. Threads get crossed. Context gets muddled. The tool needs redirection, clarification, and sometimes retraining as the work evolves.

This is simply the nature of collaboration.

Why the Hype Misses the Point. There’s a popular narrative circulating that anyone can now write a book, write a complex software application, create a website, start a business, or become an expert with just a few well-written prompts.

This misunderstands both the tools and the craft associated with each of these tasks.

I think of AI the way I think of a great camera. We can all buy the same equipment. That doesn’t guarantee an amazing photo. The quality still depends on the eye behind the lens, the patience and skills to frame the shot, and the willingness to edit ruthlessly afterward.

Ansel Adams once said that asking him what camera he used was like asking a writer what typewriter he used. The tool matters. But it has never been the point.

The same is true with AI tools.

Without intent, taste, and care, straight AI output feels flat and formulaic. Readers will notice. Substance can’t be faked. Depth doesn’t appear by accident.

These tools reflect the discipline of the person using them.

Hitting the Ground Running. For someone just getting started, the biggest mistake is expecting magic. The better approach is to build understanding and training into the process (for you and the AI tool).

Explain what you’re trying to do.

Tell the tool how you think.

Correct it when it’s wrong.

Guide it when it drifts.

Treat it like a junior collaborator. One that’s fast, tireless, and remarkably capable…but still dependent on direction and context.

If you’re looking for a practical first step, try this. Find an article you’ve read recently and ask the tool to summarize it. Compare that summary to the original. Notice what it captured, what it missed, and what it misunderstood. This simple exercise reveals both the tool’s strengths and its limitations in a low-stakes way.

From there, you might ask it to help you draft an email, outline a presentation, or brainstorm solutions to a problem you’re facing. Start with tasks where you can easily evaluate the quality of the output and provide feedback on what the tool provides. 
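And if you’d rather run that first summarization exercise through the API than the chat window, a minimal sketch might look like this; the model name and the local file path are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical local copy of an article you have already read carefully.
with open("article.txt", encoding="utf-8") as f:
    article = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": f"Summarize this article in five sentences:\n\n{article}",
    }],
)
print(response.choices[0].message.content)
# Compare the output to the original: what was captured, missed, misunderstood?
```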

Over time, you’ll notice the quality improves. That’s when the tool begins to resemble the Jarvis we imagined. It isn’t perfect, but it becomes more aligned with what you value most and how you like to approach your work. At the same time, your understanding of its strengths and limitations becomes clearer through consistent use.

AI doesn’t replace thinking. It requires it.

Used carelessly, it produces noise. Used deliberately, it sharpens your insights.

The question is whether we’re willing to slow down at the beginning, set expectations, and engage AI tools with proper intention.

Only then can these tools truly serve us well.

Photo by Chris Haws on Unsplash – photographers often say, “It’s about the photographer, not the camera.”

If this post was helpful, please feel free to share it.

Why Curiosity Is the New Competitive Advantage

Imagine two managers sitting at their desks, both using the same AI tool.

The first asks it to write the same weekly report, just faster. Three hours saved. Nothing new learned. Box checked.

The second uses the AI differently. She asks it to analyze six months of data and search for hidden patterns. It reveals that half the metrics everyone tracks have no real connection to success. Two new questions emerge. She rebuilds the entire process from scratch.
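As a hedged illustration only, a pattern search like hers might start with something as simple as the following Python sketch; the file name and the “success_score” outcome column are invented stand-ins:

```python
import pandas as pd

# Hypothetical export: six months of tracked metrics, one row per week.
df = pd.read_csv("six_months_of_metrics.csv")

# Correlate every tracked metric against the outcome that actually matters.
# "success_score" is a made-up placeholder for whatever success means here.
correlations = df.corr(numeric_only=True)["success_score"].drop("success_score")

# Flag the metrics with little statistical connection to that outcome.
weak = correlations[correlations.abs() < 0.2].sort_values()
print("Tracked metrics with no real connection to success:")
print(weak)
```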

Same tool. Different questions. One finds speed. The other finds wisdom.

This is the divide that will define the next decade of work.

For a long time, leadership revolved around structure and repetition. The best organizations built systems that ran like clockwork. Discipline became an art. Efficiency became a mantra.

Books like Good to Great showed how rigorous process could transform good companies into great ones through consistent execution. When competitive advantage came from doing the same thing better and faster than everyone else, process was power.

AI changes this equation entirely. It makes these processes faster, yes, but it also asks a more unsettling question. Why are you doing this at all?

Speed alone means little when the racetrack itself is disappearing.

Curiosity in the age of AI means something specific. It asks “why” when everyone else asks “how.” It uses AI to question assumptions rather than simply execute them. It treats every automated task as an opportunity to rethink the underlying goal. And it accepts the possibility that your job, as you currently do it, might need to change entirely.

That last part is uncomfortable. Many people fear AI will replace them. Paradoxically, the people most at risk are those who refuse to use AI to reimagine their own work. The curious ones are already replacing themselves with something better.

Many organizations speak of innovation, but their true values show in what they celebrate. Do they promote the person who completes fifty tasks efficiently, or the one who eliminates thirty through reinvention? Most choose the first. They reward throughput. They measure activity. They praise the person who worked late rather than the one who made late nights unnecessary.

This worked when efficiency was scarce. Now it can be abundant. AI will handle the efficiency. What remains scarce is the imagination to ask what we should be doing instead. Organizations that thrive will use AI to do entirely different things. Things that were impossible or invisible before.

Working with AI requires more than technical skills. The syntax is easy. The prompts are learnable. Connecting AI to our applications isn’t the challenge. The difficulty is our mindset. Having the patience to experiment when you could just execute. The humility to see that the way you’ve always done things may no longer be the best way. The courage to ask “what if” when your entire career has been built on knowing “how to.”

This is why curiosity has become a competitive advantage. The willingness to probe, to question, to let AI reveal what you’ve been missing. Because AI is a mirror. It reflects whatever you bring to it, amplified. Bring efficiency-seeking and get marginal gains. Bring genuine curiosity and discover new possibilities.

Here’s something to try this week. Take your most routine task. The report, the analysis, the update you’ve done a hundred times. Before asking AI to replicate it, ask a different question. What would make this unnecessary? What question should we be asking instead?

You might discover the task still matters. Or you might realize you’ve been generating reports nobody reads, tracking metrics nobody uses, or solving problems that stopped being relevant two years ago.

Efficiency fades. What feels efficient today becomes everyone’s baseline tomorrow. But invention endures. The capacity to see what others miss, to ask what others skip, to build what nobody else imagines yet.

The curious will see opportunity. The creative will see possibility. The courageous will see permission. Together they will build what comes next.

The tools are here. The door is open. Work we haven’t imagined yet waits on the other side. Solving problems not yet seen, creating value in ways that don’t exist today.

But only if you’re willing to ask better questions.

Photo by Subhasish Dutta on Unsplash – the path to reinvention