New platforms arrive. Old tools fade. Processes are reworked. Skills must evolve.
In that sense, disruption has long been part of the job description.
Software developers create new and improved tools. They streamline workflows. They automate tasks that once required entire teams. Over time, they have reshaped and disrupted how work gets done across nearly every industry.
This pattern has been in place for decades.
For software developers, something different is happening now.
With the arrival of AI-assisted development tools, including systems like Anthropic’s Claude Code, disruption has begun to turn inward. These tools are reshaping how developers approach their own work.
For many in the profession, this feels unfamiliar.
Software development continues, but the definition and details of the role are shifting. Tasks that once required sustained manual effort can now be generated, refactored, tested, and explained with remarkable speed.
A developer who once spent an afternoon writing API integration code might now spend fifteen minutes directing an AI to produce it, followed by an hour reviewing edge cases and security implications. The center of gravity moves toward judgment and direction rather than execution and production.
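To make that shift concrete, here is a minimal sketch of what the review-heavy half of that workflow might look like: an AI-drafted integration function with the hardening a human reviewer typically insists on called out in comments. The endpoint, payload, and environment variable are hypothetical; this is an illustration, not a template.

```python
import os

import requests  # assumes the requests library is installed

# Hypothetical endpoint and credential, used only for illustration.
API_URL = "https://api.example.com/v1/orders"
API_KEY = os.environ["EXAMPLE_API_KEY"]  # reviewer: never hard-code secrets


def create_order(payload: dict) -> dict:
    """Send an order to the hypothetical API and return the parsed response.

    The happy path is the part an AI assistant drafts in seconds; the comments
    mark the details a human reviewer still has to insist on.
    """
    response = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,  # reviewer: bound every network call; generated drafts often omit this
    )
    response.raise_for_status()  # reviewer: surface HTTP errors instead of returning bad data
    return response.json()
```

Retries, idempotency, and logging follow the same pattern: fast to generate, trustworthy only after deliberate review.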
When job roles experience disruption, responses tend to follow predictable patterns. Some people dismiss the change as temporary or overhyped. Others push back, trying to protect familiar and comfortable ways of working. Still others approach the change with curiosity and engagement, interested in how new capabilities can expand what’s possible.
Intent Makes the Difference
An important distinction often gets overlooked when discussing pushback.
Some resistance grows from denial. It spends energy cataloging flaws, defending established workflows, or hoping new tools disappear. That approach drains effort without shaping new outcomes. It preserves little and teaches even less.
Other forms of resistance grow from professional judgment.
Experienced developers often notice risks that early enthusiasm misses. Fragile abstractions, security gaps, maintenance burdens, and failures that appear only at scale become visible through lived experience. When developers raise concerns in the service of quality, safety, and long-term viability, their input strengthens the eventual solution. This kind of resistance shapes progress rather than attempting to stop it.
The most effective developers recognize this shift and respond deliberately. They move away from opposing new tools and toward advocating for their effective use. They ask better questions. They redesign workflows. They establish guardrails. They apply experience where judgment continues to matter.
In doing so, they follow the same guidance developers have offered others for years.
Embrace new tools. Continually re-engineer how work gets done. Move upstream toward problem framing, system design, and decision-making.
Greater Emphasis on Judgment
AI generates code with increasing competence. Decisions about what should be built, which tradeoffs make sense, and how systems must evolve over time still require human judgment. As automation accelerates, these responsibilities grow more visible and more critical.
This opportunity in front of developers calls for leadership.
Developers who work fluently with these tools, guide their thoughtful adoption, and help their teams and organizations navigate the transition become trusted guides through change. Their leadership shows up in practical ways:
- pairing new capabilities with healthy skepticism
- putting review processes in place to catch subtle errors (see the sketch after this list)
- mentoring junior developers in how to evaluate results rather than simply generating them
- exercising judgment to prioritize tasks that benefit most from automation
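As one example of what that kind of review process can look like, here is a small, hedged sketch: a few edge-case tests written against a hypothetical AI-generated helper. The function and the cases are invented for illustration, not a prescribed standard.

```python
# test_slugify.py - run with `pytest`; slugify() stands in for AI-generated code.
import pytest


def slugify(title: str) -> str:
    """Hypothetical AI-generated helper: turn a title into a URL slug."""
    return "-".join(title.lower().split())


@pytest.mark.parametrize(
    "title, expected",
    [
        ("Hello World", "hello-world"),            # the case the prompt described
        ("  padded   spaces  ", "padded-spaces"),  # whitespace edge case
        ("", ""),                                  # empty input
    ],
)
def test_slugify_edge_cases(title, expected):
    # Reviewers add cases like these because generated code tends to handle
    # the example it was asked about and little else.
    assert slugify(title) == expected
```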
Disruption has always been part of the work.
The open question is whether we meet disruption as participants, or step forward as guides.
One of the best parts of the Iron Man movies is Jarvis, the ever-present AI system that acts as an extension of Tony Stark’s mind. Jarvis is a collaborator. A research analyst. A pattern finder. A problem solver. He handles logistics, runs calculations, surfaces insights, and stays ready in the background until Tony needs him.
Jarvis amplifies and extends Tony’s genius.
Recently, I introduced a friend to ChatGPT. He hadn’t jumped into any AI tools yet, but he could see that people around him were finding real value in them. Like many thoughtful people, his first questions weren’t about features. They were about data privacy. About whether these tools were simply repackaging other people’s work. About what was really going on under the hood.
At one point, he asked a simple question:
Is it like having Jarvis around whenever you need him?
To me, the honest answer is yes.
But it’s also important to realize that Jarvis isn’t perfect. And neither are the AI tools available to us today.
The First Questions Matter
Almost every serious conversation about AI tools begins in the same place.
Is my data safe?
Who owns the output?
Can I trust what I’m getting back?
These are the same questions we ask whenever a new digital tool emerges.
At a basic level, the business and enterprise versions of tools like ChatGPT don’t use our conversations to train their public models, and consumer plans offer settings to opt out of training. Even with those protections in place, I still guard my data carefully. If I’m asking questions related to finances, health, or legal matters, I use hypothetical scenarios rather than personal specifics. I’m the first line of defense when it comes to my personal information.
In professional and commercial environments, organizations using business or enterprise versions gain additional protections around data isolation, encryption, access controls, and audit logging. At the enterprise level, some platforms even allow customers to manage their own encryption keys on top of the platform’s security.
The tool doesn’t decide what’s appropriate to share. We do.
Who Owns the Output?
We do. The tool doesn’t claim authorship. It doesn’t retain ownership of what it produces for you. The output becomes yours because you directed the work, supplied the context, and decided how the result would be used.
But ownership is only part of the story. Responsibility matters just as much.
The tool doesn’t know your intent. It doesn’t understand your audience. And it doesn’t bear the consequences of getting something wrong. That responsibility stays with the human in the loop. That’s us.
In that sense, using AI isn’t fundamentally different from working with many other analytical tools we may have used for decades. The work becomes yours because you shape it, refine it, and ultimately stand behind it.
A Note on Sources and Attribution
Owning the output also means owning the responsibility for its accuracy and integrity. This is especially important when it comes to research and citations.
AI tools can pull together large volumes of information, synthesize ideas across many inputs, and present them in clean, compelling language. That capability is incredibly useful. But it doesn’t remove the author’s responsibility to understand where ideas come from and how they’re represented.
The tool may summarize research. It may surface commonly known concepts. It may produce language that sounds authoritative and polished. What it doesn’t guarantee is proper attribution, or any assurance that the content doesn’t mirror a specific source too closely.
That responsibility stays with the human.
When I use AI for research or writing, I treat it as a starting point. I ask it to surface each source. I follow links. I read original material. And when an idea, quote, or framework belongs to someone else, I make sure it’s credited appropriately. This step also helps catch hallucinations that sound entirely convincing.
Ownership requires standing behind the integrity of the work to the best of your ability.
Can I Trust What I’m Getting Back?
Usually, but only with supervision. AI tools are very good at consuming information, identifying patterns, and accelerating first drafts. They are less reliable when precision, nuance, or real-world verification is required.
They can be confidently wrong. They can lose context. They can blend accurate information with outdated or incomplete details.
AI tools hallucinate regularly, though this tendency improves with each new model release. These aren’t reasons to dismiss AI as a tool. They’re reminders to understand what AI is and what it isn’t.
Trust paired with skepticism is the right approach. AI tools are fast-thinking assistants, never the final authority.
Verification still matters. Judgment still matters. Experience still matters. In fact, the better your judgment, the more valuable these tools become.
Why Memory Changes the Equation
Most people use AI tools like a smart search engine. Ask a question. Get an answer. Move on.
That works. But it barely scratches the surface of what’s possible. The real multiplier happens when the tool is allowed to remember context.
ChatGPT includes a memory capability that lets you intentionally store preferences, patterns, and reference material across conversations. Used well, this transforms the tool from something you query into something you can collaborate with.
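ChatGPT’s memory feature lives inside the app itself, but for developers a rough analogy is carrying a standing set of preferences into every request. The sketch below is only that analogy: it assumes the official openai Python client, uses a placeholder model name, and the stored preferences are invented for illustration.

```python
from openai import OpenAI  # assumes the official openai Python package (v1+)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A rough stand-in for "memory": preferences restated on every request.
# In the ChatGPT app, the memory feature carries this kind of context for you.
STANDING_CONTEXT = (
    "Writing preferences: plain language, short paragraphs, no filler. "
    "When summarizing, list any assumptions you are making."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model your account provides
    messages=[
        {"role": "system", "content": STANDING_CONTEXT},
        {"role": "user", "content": "Draft a two-paragraph update about a delayed product launch."},
    ],
)
print(response.choices[0].message.content)
```

The point isn’t the API call. It’s that the more durable context the tool holds about you, the less generic its output becomes.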
Over the past year and across hundreds of prompt conversations, I’ve shared:
- My writing voice and stylistic preferences
- A digital copy of a leadership book I wrote over a decade ago (about 65,000 words)
- An autobiography I wrote for my children and grandchildren (about 90,000 words)
- Hundreds of blog posts published over the past 13 years (roughly 240,000 words)
- How I like to structure projects and approach new work
In total, I’ve supplied the tool with nearly 400,000 words of my original content to draw on. This began as an experiment to see if I could reduce generic responses and encourage the tool to approach questions from my foundational perspective.
The difference is tangible. Early on, whether I was drafting communication, analyzing problems, or organizing ideas, the tool would produce polished but generic output that required extensive rewriting. Now, it reflects my priorities, uses frameworks I’ve shared, and produces work that feels aligned with how I think. I still edit quite a bit, but I’m refining rather than rebuilding.
Collaboration Requires Judgment
My friend asked me another important question.
Do you still feel like the writing you produce with it is yours?
Yes. Completely.
Every project I’ve worked on with these tools begins with my original content, reinforced by reference material I created long before AI entered the picture. Hundreds of thousands of words written over more than a decade. Clear intent about audience and purpose, using a defined process I’ve established before drafting anything.
The tool supports rather than replaces my judgment. Drafts usually require significant edits, shifts in tone, and sometimes complete rewrites.
Where it excels is in synthesis. In retrieval. In pattern recognition across large bodies of work. In accelerating first drafts that already have direction.
Large projects require constant supervision. Threads get crossed. Context gets muddled. The tool needs redirection, clarification, and sometimes retraining as the work evolves.
This is simply the nature of collaboration.
Why the Hype Misses the Point
There’s a popular narrative circulating that anyone can now write a book, build a complex software application, create a website, start a business, or become an expert with just a few well-written prompts.
This misunderstands both the tools and the craft associated with each of these tasks.
I think of AI the way I think of a great camera. We can all buy the same equipment. That doesn’t guarantee an amazing photo. The quality still depends on the eye behind the lens, the patience and skills to frame the shot, and the willingness to edit ruthlessly afterward.
Ansel Adams once said that asking him what camera he used was like asking a writer what typewriter he used. The tool matters. But it has never been the point.
The same is true with AI tools.
Without intent, taste, and care, straight AI output feels flat and formulaic. Readers will notice. Substance can’t be faked. Depth doesn’t appear by accident.
These tools reflect the discipline of the person using them.
Hitting the Ground Running
For someone just getting started, the biggest mistake is expecting magic. The better approach is to build understanding and training into the process (for you and the AI tool).
Explain what you’re trying to do.
Tell the tool how you think.
Correct it when it’s wrong.
Guide it when it drifts.
Treat it like a junior collaborator. One that’s fast, tireless, and remarkably capable…but still dependent on direction and context.
If you’re looking for a practical first step, try this. Find an article you’ve read recently and ask the tool to summarize it. Compare that summary to the original. Notice what it captured, what it missed, and what it misunderstood. This simple exercise reveals both the tool’s strengths and its limitations in a low-stakes way.
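If you’re a developer and would rather script that first exercise than run it in the chat window, a minimal version might look like the sketch below. It assumes the official openai Python client, a placeholder model name, and an article you’ve pasted into a local text file.

```python
from pathlib import Path

from openai import OpenAI  # assumes the official openai Python package (v1+)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Paste the article you just read into article.txt (the path is illustrative).
article = Path("article.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model your account provides
    messages=[
        {
            "role": "user",
            "content": "Summarize this article in five bullet points:\n\n" + article,
        },
    ],
)

# Compare the output to the original: what did it capture, miss, or misread?
print(response.choices[0].message.content)
```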
From there, you might ask it to help you draft an email, outline a presentation, or brainstorm solutions to a problem you’re facing. Start with tasks where you can easily evaluate the quality of the output and provide feedback on what the tool provides.
Over time, you’ll notice the quality improves. That’s when the tool begins to resemble the Jarvis we imagined. It isn’t perfect, but it becomes more aligned with what you value most and how you like to approach your work. At the same time, your understanding of its strengths and limitations becomes clearer through consistent use.
AI doesn’t replace thinking. It requires it.
Used carelessly, it produces noise. Used deliberately, it sharpens your insights.
The question is whether we’re willing to slow down at the beginning, set expectations, and engage AI tools with proper intention.
Only then can these tools truly serve us well.
Photo by Chris Haws on Unsplash – photographers often say, “It’s about the photographer, not the camera.”