When the Disruptors Get Disrupted

For most people in IT, change is constant.

New platforms arrive. Old tools fade. Processes are reworked. Skills must evolve.

In that sense, disruption has long been part of the job description.

Software developers create new and improved tools. They streamline workflows. They automate tasks that once required entire teams. Over time, they have reshaped and disrupted how work gets done across nearly every industry.

This pattern has been in place for decades.

For software developers, something different is happening now.

With the arrival of AI-assisted development tools, including systems like Anthropic’s Claude Code, disruption has begun to turn inward. These tools are reshaping how developers approach their own work.

For many in the profession, this feels unfamiliar.

Software development continues, but the definition and details of the role are shifting. Tasks that once required sustained manual effort can now be generated, refactored, tested, and explained with remarkable speed.

A developer who once spent an afternoon writing API integration code might now spend fifteen minutes directing an AI to produce it, followed by an hour reviewing edge cases and security implications. The center of gravity moves toward judgment and direction rather than execution and production.
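
To make that concrete, here is a hedged sketch of what that review hour often catches in generated integration code. The endpoint path, function name, and retry policy are assumptions for illustration, not any particular tool’s output.

```python
# Hypothetical review pass over AI-generated API integration code.
# The endpoint, token handling, and retry policy are invented for illustration.
import requests
from requests.adapters import HTTPAdapter, Retry

def fetch_orders(base_url: str, api_token: str) -> list[dict]:
    session = requests.Session()
    # Review finding: use bounded retries with backoff instead of a bare call
    session.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.5)))
    resp = session.get(
        f"{base_url}/v1/orders",
        # Review finding: the token belongs in a secret store, never hard-coded
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,  # Review finding: the generated draft omitted a timeout
    )
    resp.raise_for_status()  # Review finding: surface 4xx/5xx instead of returning bad data
    return resp.json()["orders"]
```

None of these fixes require typing much code. They require knowing which failure modes matter, which is exactly the judgment the paragraph above describes.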

When job roles experience disruption, responses tend to follow predictable patterns. Some people dismiss the change as temporary or overhyped. Others push back, trying to protect familiar and comfortable ways of working. Still others approach the change with curiosity and engagement, interested in how new capabilities can expand what’s possible.

Intent Makes the Difference

An important distinction often gets overlooked in discussions of pushback.

Some resistance grows from denial. It spends energy cataloging flaws, defending established workflows, or hoping new tools disappear. That approach drains effort without shaping new outcomes. It preserves little and teaches even less.

Other forms of resistance grow from professional judgment.

Experienced developers often notice risks that early enthusiasm misses. Fragile abstractions, security gaps, maintenance burdens, and failures that appear only at scale become visible through lived experience. When developers raise concerns in the service of quality, safety, and long-term viability, their input strengthens the eventual solution. This kind of resistance shapes progress rather than attempting to stop it.

The most effective developers recognize this shift and respond deliberately. They move away from opposing new tools and toward advocating for their effective use. They ask better questions. They redesign workflows. They establish guardrails. They apply experience where judgment continues to matter.

In doing so, they follow the same guidance developers have offered others for years.

Embrace new tools.
Continually re-engineer how work gets done.
Move upstream toward problem framing, system design, and decision-making.

Greater Emphasis on Judgment

AI generates code with increasing competence. Decisions about what should be built, which tradeoffs make sense, and how systems must evolve over time still require human judgment. As automation accelerates, these responsibilities grow more visible and more critical.

The opportunity in front of developers calls for leadership.

Developers who work fluently with these tools, guide their thoughtful adoption, and help their teams and organizations navigate the transition become trusted guides through change. Their leadership shows up in practical ways:

- pairing new capabilities with healthy skepticism
- putting review processes in place to catch subtle errors
- mentoring junior developers in how to evaluate results rather than simply generating them
- exercising judgment to prioritize tasks that benefit most from automation

Disruption has always been part of the work.

The open question is whether we meet disruption as participants or step forward as guides.

Photo by AltumCode on Unsplash

Strategy First. AI Second.

Eighty-eight percent of AI pilots fail to reach production, according to IDC research. Most fail because organizations chase the tool instead of defining the outcome. They ask, “How do we use AI?” rather than “What problem are we solving?”

A little perspective

I’m old enough to remember when VisiCalc and SuperCalc came out. That was before Lotus 1-2-3, and way before Microsoft Excel. VisiCalc and SuperCalc were just ahead of my time, but I was a big user of Lotus 1-2-3 version 1. Back then, everyone focused on how to harness the power of spreadsheets to change the way they did business.

Teams built massive (for that time) databases inside spreadsheets to manage product lines, inventory, billing, and even entire accounting systems. If you didn’t know how to use a spreadsheet, you were last year’s news.

The same shift happened with word processing. Microsoft Word replaced WordPerfect and its maze of Ctrl and Alt key combinations. Then the World Wide Web arrived in the early 1990s and opened a new set of doors.

I could go on with databases, client-server, cloud computing, etc. Each technology wave creates new winners but also leaves some behind.

The lesson is the same each time. New tools expand possibilities. Strategy gives those tools a purpose.

The point today

AI is a modern toolkit that can read, reason (think?), write, summarize, classify, predict, and create. It shines when you give it a clear job. Your strategy defines that job. If your aim is faster cycle times, higher service quality, or new revenue, AI can be the lever that helps you reach those outcomes faster.

Three traps to avoid

Tool chasing. This looks like collecting models and platforms without a target outcome. Teams spin up ChatGPT accounts, experiment with image generators, and build proofs of concept that fail to connect to real business value. The result is pilot fatigue: endless demonstrations with no measurable impact.

Shadow projects. Well-meaning teams launch skunkworks AI experiments without governance or oversight. They use unapproved tools, expose sensitive data, or build solutions that struggle to integrate with existing systems. What starts as innovation becomes a compliance nightmare that stalls broader adoption.

Fear-driven paralysis. Some organizations wait for perfect clarity about AI’s impact, regulations, or competitive implications before acting. This creates missed opportunities and learning delays while competitors gain experience and market advantage.

An AI enablement playbook

Name your outcomes. Pick three measurable goals tied to customers, cost, or growth. Examples: reduce loan processing time by 30 percent, cut customer service response time from 4 hours to 30 minutes, or increase content production by 50 percent without adding headcount.

Map the work. List the steps where people read, write, search, decide, or hand off. These tasks sit squarely in AI’s wheelhouse. Look for tasks involving document review, email responses, data analysis, report generation, or quality checks.

Run small experiments. Two to four weeks. One team. One KPI. Ship something tangible and useful. Test AI-powered invoice processing with the accounting team, or AI-assisted internal help desk with support staff.

Measure and compare. Track speed, quality, cost, and satisfaction before and after. Keep what moves the needle. If AI cuts proposal writing time by 60 percent but reduces win rates by 20 percent, you need to adjust the approach.
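
A minimal sketch of that before-and-after check, using the hypothetical proposal-writing numbers above (all figures invented):

```python
# Hypothetical before/after comparison for one KPI pair.
def pct_change(before: float, after: float) -> float:
    """Percent change from before to after (negative = decrease)."""
    return (after - before) / before * 100

# Proposal writing: time per proposal drops, but so does the win rate.
hours_before, hours_after = 10.0, 4.0  # 60 percent faster
win_before, win_after = 0.25, 0.20     # 20 percent relative drop

print(f"Writing time: {pct_change(hours_before, hours_after):+.0f}%")  # -60%
print(f"Win rate:     {pct_change(win_before, win_after):+.0f}%")      # -20%
# The paired view is the point: a speed gain that costs you wins is not a win,
# and only measuring both sides of the trade tells you whether to keep,
# adjust, or roll back the change.
```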

Harden and scale. Add access controls, audit trails, curated prompt libraries, and playbooks. Move from a cool demo to a dependable tool that works consistently across teams and use cases.

Address the human element. Most resistance comes from fear of displacement rather than technology aversion. Show people how AI handles routine tasks so they can focus on relationship building, creative problem-solving, and strategic work. Provide concrete examples of career advancement opportunities that AI creates.

Upskill your team. Run short training sessions built around real tasks. Provide templates and examples in their daily tools. Make AI fluency a job requirement for new hires and a development goal for existing staff.

Close the loop with customers. Ask what improved. Watch behavior and survey scores, with extra weight on what people actually do versus what they say.

Governance that speeds you up

Good guardrails create confidence and help you scale.

Access and roles. Limit sensitive data exposure and log usage by role. Marketing might get broad access to content generation tools while finance operates under stricter controls. The concept of least privilege applies. 
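
As a small sketch of least privilege in this setting, with role names and tool scopes invented for illustration:

```python
# Sketch of least-privilege AI tool access by role; names are illustrative.
ROLE_TOOL_ACCESS = {
    "marketing": {"content-drafting", "image-generation"},
    "finance":   {"document-summarization"},  # tighter scope, sensitive data
}

def can_use(role: str, tool: str) -> bool:
    """Default-deny: unknown roles and unlisted tools get no access."""
    return tool in ROLE_TOOL_ACCESS.get(role, set())

assert can_use("marketing", "image-generation")
assert not can_use("finance", "image-generation")
```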

Data handling. Define red, yellow, and green data. Keep red data (customer SSNs, proprietary algorithms, confidential contracts) away from general-purpose, public-facing tools. Yellow data needs approval and monitoring. Green data can flow freely.
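
A minimal sketch of how the traffic-light scheme can become an enforceable gate in front of external tools, with invented patterns standing in for a real policy:

```python
# Sketch of a red/yellow/green gate in front of an external AI tool.
# The patterns and examples are illustrative assumptions, not a real policy.
import re

RED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN format
    re.compile(r"confidential", re.IGNORECASE),
]
YELLOW_PATTERNS = [
    re.compile(r"\b(customer|client)\b", re.IGNORECASE),
]

def classify(text: str) -> str:
    """Return 'red', 'yellow', or 'green' for a piece of outbound text."""
    if any(p.search(text) for p in RED_PATTERNS):
        return "red"     # never leaves the building
    if any(p.search(text) for p in YELLOW_PATTERNS):
        return "yellow"  # route through approval and monitoring
    return "green"       # free to flow to approved tools

assert classify("Summarize our confidential merger terms") == "red"
assert classify("Draft a reply to this customer email") == "yellow"
assert classify("Rewrite this public blog post intro") == "green"
```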

Prompt and output standards. Save proven prompts in shared libraries. Require human review for customer-facing outputs, financial projections, or legal documents. Create templates that teams can adapt rather than starting from scratch.
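
One way a shared library entry might look; the keys and review flag are illustrative assumptions, not a standard schema:

```python
# Sketch of one shared prompt-library entry with a human-review flag.
PROMPT_LIBRARY = {
    "renewal-reminder-email": {
        "template": (
            "Write a friendly renewal reminder for {customer_name}, whose "
            "{product} subscription ends on {end_date}. Keep it under 120 words."
        ),
        "owner": "marketing",
        "requires_human_review": True,  # customer-facing output
    },
}

entry = PROMPT_LIBRARY["renewal-reminder-email"]
prompt = entry["template"].format(
    customer_name="Acme Co", product="analytics", end_date="2026-03-31",
)
# requires_human_review tells the workflow to route the drafted email
# through a person before it reaches the customer.
```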

Audit and monitoring. Capture prompts, outputs, and sources for key use cases. Build processes to detect bias, errors, or inappropriate content before it reaches customers.
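
A lightweight sketch of that capture step; the record schema and file name are assumptions for illustration:

```python
# Sketch of an audit record written for each AI interaction.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, use_case: str, prompt: str,
                 output: str, sources: list[str]) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "use_case": use_case,
        # Hash the full text so reviewers can verify what was sent without
        # the log becoming a second copy of sensitive content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "sources": sources,
        "human_reviewed": False,  # flipped by the review step for key outputs
    }

with open("ai_audit.log", "a") as log:
    record = audit_record(
        "a.analyst", "invoice-summary",
        "Summarize invoice 123...", "Total due...", ["erp-export.csv"],
    )
    log.write(json.dumps(record) + "\n")
```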

Vendor review. Check security, uptime, and exit paths before heavy adoption. Understand data residency, model training practices, and integration capabilities. Consider making Bring-Your-Own-Key (BYOK) encryption the minimum standard for allowing your organization’s data to pass through or be stored on any AI vendor’s environment.

Questions for leaders

Which customer moments would benefit most from faster response or clearer guidance? Think about your highest-value interactions and biggest pain points.

Which workflows have the most repetitive reading or writing? These offer the quickest wins and clearest ROI calculations.

Which decisions would improve with better summaries or predictions? AI excels at processing large amounts of information and identifying patterns humans might miss.

Do we have the data infrastructure to support AI initiatives? Clean, accessible data is essential for most AI applications to work effectively. Solid data governance and curation are critical.

What risks must we manage as usage grows, and who owns that plan? Assign clear accountability for AI governance before problems emerge.

What will we stop doing once AI handles the routine? Define how you’ll reallocate human effort toward higher-value activities.

Who will champion AI adoption when the inevitable setbacks occur? Identify executives who understand both the potential and the challenges.

What to measure

Cycle time. Minutes or days saved per transaction.

Throughput. Work items per person per day.

Quality. Rework rate, error rate, compliance findings.

Experience. Customer effort score, employee satisfaction, NPS.

Unit cost. Cost per ticket, per claim, per application.

AI is the enabler

Strategy sets direction. AI supplies leverage. Give your people clear goals, safe guardrails, and permission to experiment and fail along the way.

Then let the tools do what tools do best. They multiply effort. They shorten the distance between intent and execution. They help you serve today’s customers better and reach customers you couldn’t reach in the past.

The question isn’t whether AI will transform your industry.

The question is whether you’ll lead that transformation or react to it.

Which will you choose?

Photo by Jen Theodore on Unsplash – I love this old-school compass, showing the way as it always has, the same way a solid strategy and set of goals should lead our thinking about leveraging the latest AI tools.