When Effort Isn’t What’s Missing

The engine gets louder as the RPMs climb, but the car isn’t moving.
More activity, more motion. But no movement.

The constraint holding everything back was overlooked.
Until that changes, no amount of throttle will help.

Nothing’s broken. It’s just stuck in neutral.

Sometimes the system isn’t broken.

It’s in the wrong gear.

Photo by Vadym Kudriavtsev on Unsplash

Solving the Right Problem

Elon Musk once said he challenges requirements because they’re usually wrong. His warning is simple.

Don’t work hard to get the perfect answer to the wrong problem.

This idea goes far beyond engineering. It shows up in leadership, careers, relationships, and the quiet choices that shape our lives.

We’re trained to value effort. Be disciplined. Follow through. Execute well.

All great instincts, but we can spend months optimizing something that never really mattered.

We inherit assumptions, accept the framing, and start solving before asking whether we understand the problem.

Strong leaders question the premise.

What are we trying to accomplish?

If we succeed, what actually changes?

What are the real constraints?

There’s a related engineering mindset that captures this perfectly: the best part is no part at all.

Before improving something, ask whether it should exist in the first place.

This creates a simple hierarchy:

Delete — try to remove the requirement or part

Simplify — if it must exist, make it simpler

Optimize — only after you’re sure it belongs

Automate — last step, not first

Most organizations do this in reverse. They automate and optimize things that never needed to exist.

This is what gives us tools to manage our tools instead of time to do the work.
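To make the hierarchy concrete, here's a minimal sketch in Python. The nightly report job and every name in it are hypothetical, invented only to show the order of operations.

```python
# A hypothetical nightly report job, invented only to illustrate
# the Delete -> Simplify -> Optimize -> Automate order.

def write_xlsx(rows):
    print(f"writing {len(rows)} rows to report.xlsx")

# Before: three export paths existed because "someone might need them."
# 1. Delete   - the PDF and HTML exports had no users; the requirement
#               itself was removed, so that code is simply gone.
# 2. Simplify - what remains is one format and one code path.
def export_report(rows):
    write_xlsx(rows)

# 3. Optimize - only now is it worth profiling write_xlsx for speed.
# 4. Automate - scheduling the job comes last, once it clearly belongs.

if __name__ == "__main__":
    export_report([{"region": "east", "total": 42}])
```

Run the hierarchy in reverse and the same team ends up scheduling and tuning three exports nobody asked for.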

Six Questions at the End of the Day

For the next two weeks, I’ll be doing something new.

Marshall Goldsmith is encouraging people to ask themselves six questions every day. That’s the whole experiment.

Six questions. Asked at night. Answered honestly.

They all start the same way:

Did I do my best to…

The questions don’t ask what happened to me today. They ask what I did with today.

During his webinar introducing the experiment, Mr. Goldsmith referred to the Rigveda, an ancient poem from India that he described as being thousands of years old. He just mentioned it and moved on.

I had never heard of the Rigveda, so down the rabbit hole I went after his webinar ended.

The Rigveda is a collection of hymns. A lot of it is about everyday things. The sun rising. Fire. Breath. Life continuing. There’s a sense that daily life matters. That how we live each day counts.

People have been trying to figure out how to live a good life for a long time. Way before self-help and leadership books. Way before webinars and podcasts.

St. Ignatius of Loyola comes to mind. He developed something called the Daily Examen. It’s a review of the day. You look back. You notice where you were grateful. You notice where you fell short. You think about tomorrow.

Different times. Different traditions. Same basic ideas.

At the end of the day, pause and ask, “How did I live today?”

Goldsmith’s six questions fit right into that pattern.

Did I do my best to be happy today?

The question hits differently when the day is already over. I can see clearly whether I purposely enjoyed the day or just rushed through it.

Did I do my best to build positive relationships?

Now I’m thinking about the way I spoke to someone. Whether I listened. Whether I gave someone my full attention.

The questions are short. The reflections take some time.

Goldsmith describes happiness as “enjoyment with the process of life itself.” Happiness lives inside the day. It grows out of our engagement with what’s already in front of us.

The writers of the Rigveda seemed to understand that. Ignatius understood it too. They’re asking us to pay attention to our life and actively engage in it.

I’m only a few days into this experiment. Nothing dramatic has happened. No big breakthroughs.

But I know I’ll be answering these six questions later. I move through the day with more awareness. I catch myself sooner. I stay present a little longer. I think twice before reacting.

It’s a small shift…but small shifts repeated over time shape our lives.

Thousands of years have passed since the Rigveda was written. Centuries since Ignatius taught people to examine their day.

Our modern life looks very different, but the question remains the same.

How did I live today?


Here are Goldsmith’s six questions:

-Did I do my best to set clear goals today?

-Did I do my best to make progress towards my goals today?

-Did I do my best to find meaning today?

-Did I do my best to be happy today?

-Did I do my best to build positive relationships today?

-Did I do my best to be engaged today?

h/t – Marshall Goldsmith

Photo by Jonh Corner on Unsplash – looks like an awesome spot to think about these questions.

The Adoption Curve in Real Life (It’s Messier Than the Textbooks Say)

You’ve probably seen it happen. A new tool explodes across your social media feeds, your team starts asking questions, and you’re left wondering whether to embrace it or ignore it. Last month’s OpenClaw rollout is the latest reminder of how chaotic technology adoption really is.

Technology adoption curves are depicted as neat, predictable diagrams: a smooth line moving from innovators to early adopters to the early majority and eventually to late adopters.

In textbooks, the curve looks calm. In real life, it feels more like a storm.

Watching the recent surge of interest around OpenClaw, an open-source AI automation tool that lets developers and non-developers build custom autonomous agents, highlights this contrast clearly.

The tool moved rapidly from Clawdbot to MoltBot to OpenClaw. While its identity was in motion, innovators and early adopters embraced it with enthusiasm. Within days, countless articles and YouTube videos appeared with reviews, tutorials, and predictions about how it would reshape everything.

Within another week, we began hearing a more complete message. People still praised its power, but they also surfaced significant security weaknesses and vulnerabilities that accompany those capabilities.

My goal in this post is less about celebrating OpenClaw itself and more about understanding the real-world adoption pattern that I’ve seen countless times.


Phase 1: The Enthusiasts Light the Fuse

Early adopters jump in first. They’re curious, energetic, and quick to celebrate what they’ve discovered.

They imagine what could be, long before most people fully understand what exists today. They test edge cases, build experiments, share demos, and push boundaries simply because the possibility fascinates them.

This group rarely waits for permission. Their momentum gives a new idea its initial lift.


Phase 2: Quiet Experimenters Emerge

Close behind them comes a second tier of users who watch carefully and learn before speaking.

They begin to explore the tool in private, trying things on their own terms rather than joining the public conversation. Their silence can look like hesitation but usually signals careful attention and research.

They want confidence before committing.


Phase 3: The Tribalization of Opinion

At the same time, people who barely understand the technology start lining up on all sides of the debate as if it were a political issue.

Some declare that it will transform everything. Others warn that it is reckless or dangerous. Still others dismiss it as a passing fad.

Much of this reaction grows from identity, fear, or ideology rather than direct experience. The conversation gets louder while genuine clarity is harder to find.


Phase 4: Rapid Evolution and Ecosystem Growth

If the tool has real potential, the surrounding environment begins to move quickly.

The creators ship frequent updates to their new product. Early adopters invent new uses that nobody predicted. Supporting products (like Cloudflare services or the Mac Mini in the case of OpenClaw’s recent meteoric growth) suddenly see rising demand because they pair well with the new capability. Other companies look for ways to add integrations that make the new tool easier to plug into existing systems.

At this stage, the story shifts from a single product to an emerging ecosystem that amplifies its reach.


Phase 5: The Backlash from the Pioneers

Then a familiar turn arrives.

Some early adopters start getting bored and even a little disillusioned. Others start pointing out limitations, rough edges, and frustrations that were overlooked during their initial excitement. Sometimes they simply move on to the next shiny thing. Other times, sustained use reveals real constraints that only time can expose.

Ironically, the quieter second-wave adopters are just beginning to feel comfortable. Enthusiasm and skepticism overlap in the marketplace.


Phase 6: Corporations Hit the Brakes

Meanwhile, large organizations watch from the sidelines while asking serious questions about security, governance, and risk. They focus on oversight, accountability, and long-term stability.

From a leadership perspective, this cautious approach seems safe. They can’t risk the crown jewels on the promise of something amazing. At least, not yet.


Phase 7: The Safe Version Arrives

If the capability truly matters and maintains momentum, a major platform provider such as Microsoft, Google, Amazon, or (nowadays) OpenAI or Anthropic eventually releases something comparable inside its own infrastructure.

This can happen through acquisition, partnership, or independent development. When it does, the risk profile shifts almost overnight.

What once felt experimental and dangerous now feels enterprise-ready. It’s the signal that many CIOs and CISOs were waiting for.


Phase 8: The Irony of Timing

By the time most corporations adopt the new “safer version” of the capability, the original pioneers have already moved on.

They’re chasing the next breakthrough and speaking about the earlier tool as if it belongs to another era. Six months earlier it felt magical. Now it feels ordinary, in part because that earlier innovation did its job of pushing the frontier outward.


What This Means for Leaders

For leaders who care about both capability and security, sprinting toward the bleeding edge rarely makes sense.

Waiting for stability, clear governance, and trusted integration usually serves organizations better. In practice, that means allowing major, “trusted” platforms to bring new capabilities inside their own secure environments before moving at scale.

At the same time, leaders can’t afford to look inward only. Something important is always unfolding beyond the walls of their organization. Entrepreneurs are experimenting. Startups are forming. New approaches and new possibilities are taking shape. If a company becomes too passive or too comfortable, it risks being outpaced rather than protected.

The real leadership challenge is learning to tell the difference between waves that will reshape an industry and those that will fade.

Some signs of staying power:

-multiple independent developers building on top of a new technology

-respected technologists moving beyond flashy demos into real production use cases

-serious enterprise concerns about security and governance being addressed rather than dismissed

We don’t need to chase every new wave.

The real test is recognizing the waves that matter before they feel safe enough to bring inside our organization.

Photo by Nat on Unsplash – Innovation is easy to see. Truth is harder to judge.     

The Second Generation Is Where It Gets Real

The first version of almost anything is an act of discovery. We’re learning in real time, usually without understanding what we’re building. We don’t yet know which parts will matter, which ones deserve less attention, or where the challenges are.

The first version is shaped by assumptions. Some accurate, others incomplete. It’s often held together by optimism and a willingness to learn as we go.

The first generation isn’t meant to be polished or permanent. Its purpose is proof of life.

Does this idea work at all?
Do we enjoy pursuing it?
Is there something here worth continuing once the novelty fades?

Many ideas never move beyond that first stage. Excitement gives way to routine. Maintenance enters the picture. It’s decision time.

Is this something I’m willing to own, or was I simply exploring an interesting possibility?

If the answer leans toward exploration alone, the idea stalls, usually forever. It never makes the leap from curiosity to commitment.

That leap matters.

William Hutchison Murray said it well: “Until one is committed, there is hesitancy…the moment one definitely commits oneself, then Providence moves too.”

The second generation begins at that moment of commitment.

If we choose to begin version two, everything changes.

We’re no longer experimenting or testing whether this idea works. We’re deciding that it matters enough to carry forward.

We’re operating with experience now. We’ve seen where effort was misdirected and where the momentum came from. We understand which details carry lasting value and which ones only seemed important at first.

More importantly, we own it now.

That’s why the second generation feels heavier. The weight of responsibility belongs to us. We know too much to pretend otherwise.

An idea that survives long enough to earn a second version has already passed an important test. It has encountered reality and endured.

The first generation asks whether something can exist. The second generation answers whether it should continue.

From there, our work evolves. Spontaneous ideation turns into direction. The purpose becomes clearer than the feature set. Identity begins to emerge.

This is how we do it.
This is what matters.
This is what we’re willing to stand behind.

The second generation is the foundation for everything that follows…far more than the first. It establishes patterns, standards, and expectations for what comes next.

Tackling version one takes courage. But finishing that version is only part of the journey.

The deeper test lies in beginning again. This time with clearer eyes, better judgment, and full ownership of what we’re building.

We move from discovering what we could build to owning what’s truly worth building.

Photo by Ivan Aleksic on Unsplash

If you know someone standing at the edge of a second generation, feel free to pass this along to them.

When the Disruptors Get Disrupted

For most people in IT, change is constant.

New platforms arrive. Old tools fade. Processes are reworked. Skills must evolve.

In that sense, disruption has long been part of the job description.

Software developers create new and improved tools. They streamline workflows. They automate tasks that once required entire teams. Over time, they have reshaped and disrupted how work gets done across nearly every industry.

This pattern has been in place for decades.

For software developers, something different is happening now.

With the arrival of AI-assisted development tools, including systems like Anthropic’s Claude Code, disruption has begun to turn inward. These tools are reshaping how developers approach their own work.

For many in the profession, this feels unfamiliar.

Software development continues, but the definition and details of the role are shifting. Tasks that once required sustained manual effort can now be generated, refactored, tested, and explained with remarkable speed.

A developer who once spent an afternoon writing API integration code might now spend fifteen minutes directing an AI to produce it, followed by an hour reviewing edge cases and security implications. The center of gravity moves toward judgment and direction rather than execution and production.

When job roles experience disruption, responses tend to follow predictable patterns. Some people dismiss the change as temporary or overhyped. Others push back, trying to protect familiar and comfortable ways of working. Still others approach the change with curiosity and engagement, interested in how new capabilities can expand what’s possible.

Intent Makes the Difference

An important distinction often gets overlooked when discussing pushback.

Some resistance grows from denial. It spends energy cataloging flaws, defending established workflows, or hoping new tools disappear. That approach drains effort without shaping new outcomes. It preserves little and teaches even less.

Other forms of resistance grow from professional judgment.

Experienced developers often notice risks that early enthusiasm misses. Fragile abstractions, security gaps, maintenance burdens, and failures that appear only at scale become visible through lived experience. When developers raise concerns in the service of quality, safety, and long-term viability, their input strengthens the eventual solution. This kind of resistance shapes progress rather than attempting to stop it.

The most effective developers recognize this shift and respond deliberately. They move away from opposing new tools and toward advocating for their effective use. They ask better questions. They redesign workflows. They establish guardrails. They apply experience where judgment continues to matter.

In doing so, they follow the same guidance developers have offered others for years.

Embrace new tools.
Continually re-engineer how work gets done.
Move upstream toward problem framing, system design, and decision-making.

Greater Emphasis on Judgment

AI generates code with increasing competence. Decisions about what should be built, which tradeoffs make sense, and how systems must evolve over time still require human judgment. As automation accelerates, these responsibilities grow more visible and more critical.

This opportunity in front of developers calls for leadership.

Developers who work fluently with these tools, guide their thoughtful adoption, and help their teams and organizations navigate the transition become trusted guides through change. Their leadership shows up in practical ways:

-pairing new capabilities with healthy skepticism

-putting review processes in place to catch subtle errors

-mentoring junior developers in how to evaluate results rather than simply generating them

-exercising judgment to prioritize tasks that benefit most from automation

Disruption has always been part of the work.

The open question is whether we meet disruption as participants, or step forward as guides.

Photo by AltumCode on Unsplash

Grandpa Bob Encouraging Leadership — A New Podcast

Over the last 15 years, I’ve written a lot of words.

Words shaped by work and leadership challenges.

Words that grew out of quiet reflection or things that caught my attention at just the right moment.

Many of them were also shaped by family, faith, mistakes, and moments that stayed with me longer than I expected.

More than a few people have suggested I start a podcast. They’d tell me it’s a lot easier to listen than it is to keep up with a bunch of new reading assignments each week.

While my mom was still alive and living with significant vision loss from macular degeneration, I gave the idea serious thought. Listening would have been the only practical way for her to “read” my posts.

Unfortunately, that “serious thought” didn’t turn into action in time for her to benefit.

Ironically, for someone who usually believes in starting, then figuring things out along the way, I let all the steps required to set up a podcast get in the way of beginning.

Until now.

So today, I’m launching a new podcast:

Grandpa Bob Encouraging Leadership

This podcast is a series of short reflections on leadership, life, and learning. I’m sharing them first and foremost with my grandchildren…and with anyone else who might be listening in.

The episodes are intentionally brief, thoughtful, and unhurried.

They’re the kind of reflections you can listen to on a walk, during a quiet drive, or at the start or end of your day.

They’re meant to create space, not fill it.

Who it’s for

At its heart, this podcast is for my grandkids.

Someday, years from now, I want them to be able to hear my voice and know what mattered to me.

The things I noticed. What I learned the hard way. What I hope they carry with them as they find their own way in the world.

But leadership lessons rarely belong to just one audience.

So, if you’re listening, as a parent, a leader, a teacher, or simply someone trying to live well, you’re welcome here too.

What we’ll talk about

Each episode explores a simple idea. Here are some examples:

-Showing up when progress feels slow

-Letting go of certainty

-Choosing gratitude over entitlement

-Learning to wait without drifting

-Leading with trust, humility, and patience

-Paying attention to what’s quietly shaping us

There won’t be hype. There won’t be slogans. There certainly won’t be any fancy edits.

I’ll discuss questions worth talking about, and observations a loving grandfather hopes to pass along to his grandkids.

An invitation

You can find Grandpa Bob Encouraging Leadership wherever you listen to podcasts.

Don’t worry if you can’t listen to every episode.

Please feel free to disagree with anything I say. I don’t have a monopoly on the right answers.

If even one episode helps you pause, notice something new, or steady yourself a little, then it’s doing what it was meant to do.

Thanks for listening.

And if you’re one of my grandkids reading this someday, know that I believe in you and I’m always rooting for you.

If you’re listening alongside them, the same is true for you.

Photo by Patrick Fore on Unsplash

Just Show Up

As we enter 2026, it’s tempting to look for a new system, a better plan, or the perfect moment to begin.

Most of the time, the real answer is simpler.

Just show up.

The secret to progress isn’t brilliance or motivation. It isn’t certainty or confidence. It’s presence.

Show up every day.
Show up when it’s easy.
Show up when it’s uncomfortable.
Show up when you don’t know what comes next.

Show up and be present.
Show up and handle your business.
Show up and figure it out as you go.
Show up for the people you love.
Show up for the work that matters.
Show up for yourself.

When you’re unsure what to do next, don’t overthink it. Show up and take the next step. Clarity usually follows movement.

The alternative is standing down. Waiting. Drifting. Quietly giving up ground you were meant to claim.

You’re stronger than that.

Progress is rarely dramatic. It’s built through consistency. Through ordinary days stacked on top of each other. Choosing to show up when no one is watching.

The hard things happen because you showed up.
The meaningful things happen because you stayed.
The impossible things only happen when you refuse to disappear.

There’s another truth hidden in showing up.

When you show up, you give others permission to do the same. Your presence becomes proof. Your consistency becomes encouragement. People notice. They realize they can take the next step too.

So how do you crush your goals in 2026?

You don’t wait for the perfect plan.
You don’t wait to feel ready.

You show up.
You make it happen.

Because that’s what you do.
And this is how things get done.

Photo by NEOM on Unsplash

Please share this post with someone if you found it helpful. Thanks!

Decision Time

A decision sits in front of us, waiting.

We turn it over in our head. We ask a few more questions. We look for one more data point. We check with another person whose opinion we respect. We wait for the timing to feel right.

And still, we hesitate.

We tell ourselves we need more information. More time. More certainty.

Indecision usually grows from very human places. Fear of being wrong. Fear of being blamed. Fear of choosing a path that can’t be undone. Fear of embarrassment.

Add decision fatigue to the mix and postponement starts to feel reasonable.

Meanwhile, the cost of waiting accumulates quietly. Teams stall. Momentum fades. Confidence erodes. What began as a thoughtful pause turns into drift.

Most leadership decisions are made without perfect information. Progress rarely waits for certainty.

So, what is our hesitation really telling us?

Sometimes, it’s a clear no. A request pulls us away from what matters most. We don’t like what we see, but we’re not sure why. Maybe a partnership doesn’t sit right with our values. In these moments, extended thinking isn’t searching for clarity. It’s searching for a way to explain our decision.

Other times, we hesitate because the decision stretches us. It introduces uncertainty. It raises our visibility. It asks more of us than we feel ready to give. Growth decisions usually feel uncomfortable before they feel right.

At some point, the data stops improving and the waiting stops helping.

Start small. Take a step that tests the decision rather than locking it in. Forward motion reveals new information…something thinking alone can’t.

A decision that turns out to be wrong isn’t failure.

It’s feedback.

And feedback points us toward our next decision.

“Whenever you see a successful business, someone once made a courageous decision.”
— Peter F. Drucker

Photo by ChatGPT’s new image generator, which is way better than prior versions of the tool.

What If Jarvis Is Available to Each of Us?

One of the best parts of the Iron Man movies is Jarvis, the ever-present AI system that acts as an extension of Tony Stark’s mind. Jarvis is a collaborator. A research analyst. A pattern finder. A problem solver. He handles logistics, runs calculations, surfaces insights, and stays ready in the background until Tony needs him.

Jarvis amplifies and extends Tony’s genius.

Recently, I introduced a friend to ChatGPT. He hasn’t jumped into any AI tools yet, but he can see that people around him are finding real value in them. Like many thoughtful people, his first questions weren’t about features. They were about data privacy. About whether these tools were simply repackaging other people’s work. About what was really going on under the hood.

At one point, he asked a simple question:

Is it like having Jarvis around whenever you need him?

To me, the honest answer is yes.

But it’s also important to realize that Jarvis isn’t perfect. And neither are the AI tools available to us today.

The First Questions Matter

Almost every serious conversation about AI tools begins in the same place.

Is my data safe?

Who owns the output?

Can I trust what I’m getting back?

These are the same questions we ask whenever a new digital tool emerges.

At a basic level, paid versions of tools like ChatGPT don’t use our conversations to train public models. Even with that protection in place, I still guard my data carefully. If I’m asking questions related to finances, health, or legal matters, I use hypothetical scenarios rather than personal specifics. I’m the first line of defense when it comes to my personal information.

In professional and commercial environments, organizations using business or enterprise versions gain additional protections around data isolation, encryption, access controls, and audit logging. At the enterprise level, some platforms even allow customers to manage their own encryption keys on top of the platform’s security.

The tool doesn’t decide what’s appropriate to share. We do.

Who Owns the Output?

We do. The tool doesn’t claim authorship. It doesn’t retain ownership of what it produces for you. The output becomes yours because you directed the work, supplied the context, and decided how the result would be used.

But ownership is only part of the story. Responsibility matters just as much.

The tool doesn’t know your intent. It doesn’t understand your audience. And it doesn’t bear the consequences of getting something wrong. That responsibility stays with the human in the loop. That’s us.

In that sense, using AI isn’t fundamentally different from working with many other analytical tools we may have used for decades. The work becomes yours because you shape it, refine it, and ultimately stand behind it.

A Note on Sources and Attribution

Owning the output also means owning the responsibility for its accuracy and integrity. This is especially important when it comes to research and citations.

AI tools can pull together large volumes of information, synthesize ideas across many inputs, and present them in clean, compelling language. That capability is incredibly useful. But it doesn’t remove the author’s responsibility to understand where ideas come from and how they’re represented.

The tool may summarize research. It may surface commonly known concepts. It may produce language that sounds authoritative and polished. What it doesn’t guarantee is proper attribution or assurance that content isn’t too closely mirroring a specific source.

That responsibility stays with the human.

When I use AI for research or writing, I treat it as a starting point. I ask it to surface each source. I follow links. I read original material. And when an idea, quote, or framework belongs to someone else, I make sure it’s credited appropriately. This step also helps catch hallucinations that sound convincingly accurate.

Ownership requires standing behind the integrity of the work to the best of your ability.

Can I Trust What I’m Getting Back?

Usually, but only with supervision. AI tools are very good at consuming information, identifying patterns, and accelerating first drafts. They are less reliable when precision, nuance, or real-world verification is required.

They can be confidently wrong. They can lose context. They can blend accurate information with outdated or incomplete details.

AI tools hallucinate regularly, though this tendency improves with each new model release. These aren’t reasons to dismiss AI as a tool. They’re reminders to understand what AI is and what it isn’t.

Trust paired with skepticism is the right approach. AI tools are fast-thinking assistants, never the final authority.

Verification still matters. Judgment still matters. Experience still matters. In fact, the better your judgment, the more valuable these tools become.

Why Memory Changes the Equation

Most people use AI tools like a smart search engine. Ask a question. Get an answer. Move on.

That works. But it barely scratches the surface of what’s possible. The real multiplier happens when the tool is allowed to remember context.

ChatGPT includes a memory capability that lets you intentionally store preferences, patterns, and reference material across conversations. Used well, this transforms the tool from something you query into something you can collaborate with.

Over the past year and across hundreds of prompt conversations, I’ve shared:

-My writing voice and stylistic preferences

-A digital copy of a leadership book I wrote over a decade ago (about 65,000 words)

-An autobiography I wrote for my children and grandchildren (about 90,000 words)

-Hundreds of blog posts published over the past 13 years (roughly 240,000 words)

-How I like to structure projects and approach new work

In total, I’ve trained the tool with nearly 400,000 words of my original content. This began as an experiment to see if I could reduce generic responses and encourage the tool to approach questions from my foundational perspective.

The difference is tangible. Early on, whether I was drafting communication, analyzing problems, or organizing ideas, the tool would produce polished but generic output that required extensive rewriting. Now, it reflects my priorities, uses frameworks I’ve shared, and produces work that feels aligned with how I think. I still edit quite a bit, but I’m refining rather than rebuilding.

Collaboration Requires Judgment

My friend asked me another important question.

Do you still feel like the writing you produce with it is yours?

Yes. Completely.

Every project I’ve worked on with these tools begins with my original content, reinforced by reference material I created long before AI entered the picture. Hundreds of thousands of words written over more than a decade. Clear intent about audience and purpose, using a defined process I’ve established before drafting anything.

The tool supports rather than replaces my judgment. Drafts usually require significant edits, shifts in tone, and sometimes complete rewrites.

Where it excels is in synthesis. In retrieval. In pattern recognition across large bodies of work. In accelerating first drafts that already have direction.

Large projects require constant supervision. Threads get crossed. Context gets muddled. The tool needs redirection, clarification, and sometimes retraining as the work evolves.

This is simply the nature of collaboration.

Why the Hype Misses the Point

There’s a popular narrative circulating that anyone can now write a book, build a complex software application, create a website, start a business, or become an expert with just a few well-written prompts.

This misunderstands both the tools and the craft associated with each of these tasks.

I think of AI the way I think of a great camera. We can all buy the same equipment. That doesn’t guarantee an amazing photo. The quality still depends on the eye behind the lens, the patience and skills to frame the shot, and the willingness to edit ruthlessly afterward.

Ansel Adams once said that asking him what camera he used was like asking a writer what typewriter he used. The tool matters. But it has never been the point.

The same is true with AI tools.

Without intent, taste, and care, straight AI output feels flat and formulaic. Readers will notice. Substance can’t be faked. Depth doesn’t appear by accident.

These tools reflect the discipline of the person using them.

Hitting the Ground Running

For someone just getting started, the biggest mistake is expecting magic. The better approach is to build understanding and training into the process (for you and the AI tool).

Explain what you’re trying to do.

Tell the tool how you think.

Correct it when it’s wrong.

Guide it when it drifts.

Treat it like a junior collaborator. One that’s fast, tireless, and remarkably capable…but still dependent on direction and context.

If you’re looking for a practical first step, try this. Find an article you’ve read recently and ask the tool to summarize it. Compare that summary to the original. Notice what it captured, what it missed, and what it misunderstood. This simple exercise reveals both the tool’s strengths and its limitations in a low-stakes way.

From there, you might ask it to help you draft an email, outline a presentation, or brainstorm solutions to a problem you’re facing. Start with tasks where you can easily evaluate the quality of the output and provide feedback on what the tool provides.
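If you’d rather see the same exercise in code form, here’s a minimal sketch using the OpenAI Python client. Everything specific in it is my assumption for illustration: the model name, the prompt wording, and the article.txt file you’d supply yourself.

```python
# A minimal sketch of the "summarize and compare" exercise.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# An article you've already read, saved locally (hypothetical file).
article_text = open("article.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption, not a recommendation
    messages=[
        {"role": "system",
         "content": "Summarize the following article in five sentences."},
        {"role": "user", "content": article_text},
    ],
)

print(response.choices[0].message.content)
# Then compare against the original: what did it capture, miss, or misread?
```

The point isn’t the code. It’s the comparison step at the end, where your own judgment does the work.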

Over time, you’ll notice the quality improves. That’s when the tool begins to resemble the Jarvis we imagined. It isn’t perfect, but it becomes more aligned with what you value most and how you like to approach your work. At the same time, your understanding of its strengths and limitations becomes clearer through consistent use.

AI doesn’t replace thinking. It requires it.

Used carelessly, it produces noise. Used deliberately, it sharpens your insights.

The question is whether we’re willing to slow down at the beginning, set expectations, and engage AI tools with proper intention.

Only then can these tools truly serve us well.

Photo by Chris Haws on Unsplash – photographers often say, “It’s about the photographer, not the camera.”

If this post was helpful, please feel free to share it.