The Vintage-Aged Developer’s Guide to Not Panicking About AI
Why I’m not afraid of AI, and why experienced engineers should stop treating it like either magic or the apocalypse
by Gary Worthington, More Than Monkeys

There is a particular kind of online conversation about AI that seems designed to be as tedious as possible.
On one side, you have the evangelists declaring that software engineering has fundamentally changed forever and that anyone not handing over their keyboard to a chatbot by Thursday is basically a village blacksmith watching the first Ford roll past.
On the other side, you have the doom chorus insisting that AI is all nonsense, all hype, all hallucination, and all beneath the dignity of Serious Engineers Who Remember How To Write SQL By Hand.
As a vintage-aged developer, I find both camps a bit exhausting.
I am not afraid of AI. That is not because I think it is flawless. It is not.
It is not because I think it can replace engineering judgement. It cannot.
It is not because I think it understands my systems better than I do. That would be hilarious.
I am not afraid of AI because I have been around long enough to see this pattern before. New tools arrive. Some are overhyped. Some are underwhelming. Some quietly become part of everyday work until nobody can remember how we managed without them. The trick is not to worship the tool, and it is not to sneer at it either. The trick is to understand where it helps, where it lies, and where it still needs an adult in the room.
That last part, incidentally, is where experienced developers still earn their keep.
We have seen this film before
AI is being presented as though it is the first force multiplier engineering has ever encountered. It is not, and it will not be the last.
Version control was a force multiplier. Automated testing was a force multiplier. CI/CD was a force multiplier. Cloud platforms were a force multiplier. Stack Overflow was a force multiplier, despite being held together for years by equal parts brilliance, bitterness, and passive-aggressive comment threads.
None of these tools removed the need for competence. They changed the shape of competent work.
Before good debuggers, people printed things and squinted at them. Before mature frameworks, people wrote mountains of plumbing. Before searchable documentation and forums, people relied on manuals, tribal knowledge, and the office sage who seemed to know everything but was also somehow never in a meeting.
Every serious productivity jump in software development has come from reducing friction. AI is part of that same story.
It can remove some friction.
That is useful, but it does not remove the need to think.
That is also useful, because thinking is still quite important.
The mistake is treating AI as all or nothing
A lot of the anxiety around AI seems to come from binary thinking.
Either it is the future and must be embraced wholesale, or it is nonsense and should be dismissed entirely.
That is not how grown-up engineering works.
No sensible engineer looks at a library, a framework, a cloud service, or an automation tool and says, “I shall now trust this blindly with every aspect of my professional judgement.” Equally, no sensible engineer says, “I refuse to use this because real engineers manually write everything from first principles in a cave.”
AI belongs in the same category as every other tool. It has strengths, weaknesses, failure modes, and a tendency to be marketed by people who have never had to support a production incident at 2am.
The right question is not “Is AI good or bad?”
The right question is “What parts of my work does this actually help with, and what parts still require experience, context, and responsibility?”
That question is far less dramatic, which is probably why it does not trend as well.
Where AI genuinely helps
Used properly, AI can be a very good accelerant.
Not a replacement for engineering. An accelerant.
It is useful for getting from blank page to first draft faster. That matters more than people admit. A lot of engineering time disappears not into solving hard problems, but into setting up the scaffolding around them. Boilerplate. Structure. Initial test cases. Documentation drafts. Terraform skeletons. Regex you really do not want to write from scratch unless you enjoy pain as a lifestyle choice.
It is useful for exploring unfamiliar areas quickly. If I am looking at a library I have not touched before, or trying to understand the shape of an API, or mapping out implementation options, AI can compress that initial orientation period. That does not mean I trust the answer blindly. It means I get to the important questions faster.
It is useful for summarising code, especially legacy code, where the original author has long since vanished into the mist and left behind a service called something like OrderProcessingFinalV2New. AI can often provide a decent first-pass explanation of what a block of code appears to be doing, which is far better than staring at it like it has personally insulted you.
It is useful for generating tests, draft refactors, migration scripts, internal documentation, and those tedious but necessary bits of work that do not require genius, just time and attention.
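To make that concrete, here is the kind of thing I mean. Suppose the assistant drafts a validation regex and a handful of checks for it. The format, the function name, and the pattern below are invented purely for illustration, and the draft is only a starting point; reviewing the edge cases is still your job.

```python
import re

# Hypothetical order-reference format, invented purely for illustration:
# "ORD-" followed by six digits, e.g. "ORD-123456".
# A first draft often anchors with ^...$, but in Python $ also matches just
# before a trailing newline, so fullmatch is the safer choice here. Spotting
# that sort of thing is the reviewer's job, not the assistant's.
ORDER_REF = re.compile(r"ORD-\d{6}")

def is_valid_order_ref(ref: str) -> bool:
    """Return True only if ref matches the assumed ORD-nnnnnn format exactly."""
    return ORDER_REF.fullmatch(ref) is not None

# The tedious-but-necessary part: pin the behaviour down with examples,
# including the awkward ones a first draft tends to miss.
assert is_valid_order_ref("ORD-123456")
assert not is_valid_order_ref("ORD-12345")      # too few digits
assert not is_valid_order_ref("ord-123456")     # wrong case
assert not is_valid_order_ref("ORD-123456\n")   # trailing newline
assert not is_valid_order_ref(" ORD-123456")    # leading whitespace
print("all checks passed")
```

The point is not the regex. The point is that the boring scaffolding arrives quickly, and the judgement about whether it is actually right stays with you.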
It is also very useful as a sparring partner.
Not because it is always right. Far from it.
But because the act of interrogating a generated answer often sharpens your own thinking. You spot the missing edge case. You notice the dangerous assumption. You realise the real problem is not the function in front of you but the design behind it.
I often refer to my mate Claude as the most overconfident, overzealous junior developer imaginable, one who needs to be treated as such.
That kind of interaction can be genuinely productive.
Where AI absolutely does not let you off the hook
This is the part that gets lost in the noise.
AI is not accountability.
If an AI-generated solution introduces a security flaw, the AI does not attend the incident review. You do.
If it writes brittle tests, poor infrastructure, or code that technically works but makes the system worse, the AI does not have to maintain it six months later. Your team does.
If it confidently invents an API method, misunderstands business rules, or produces something elegant-looking and dangerously wrong, it will not feel the pain. You will.
That means experienced engineers still need to do what experienced engineers have always done.
You still have to understand the domain.
You still have to evaluate trade-offs.
You still have to know whether a suggestion is sensible in the context of your architecture, your data, your compliance obligations, your operational reality, and your team’s capability.
You still need judgement.
In fact, the more AI enters day-to-day engineering, the more valuable judgement becomes.
Because once generating plausible-looking output becomes cheap, the ability to distinguish good from bad becomes more important, not less.
This is not a small detail. It is the whole game.
Why experience matters more, not less
There is a strange idea floating around that AI somehow devalues experience.
I think the opposite is true.
Experienced developers are often better placed to use AI well because they have context. They have scars. They have made enough mistakes to recognise new ones more quickly.
When AI produces code, an experienced engineer is more likely to spot the hidden coupling, the poor boundary, the false abstraction, the accidental inefficiency, or the missing operational consideration.
When AI suggests an architecture, an experienced engineer is more likely to ask the awkward questions. How does this fail? How do we deploy it? How do we observe it? Who supports it? What happens when the third-party service rate-limits us on a Monday morning because of course it does?
When AI confidently states something that sounds reasonable but is in fact complete nonsense, experience provides a useful internal alarm bell.
That matters.
Junior engineers can absolutely benefit from AI too, and they should. But they need guidance on how to use it properly, how to verify it, and how not to outsource their thinking. Otherwise, you end up with very fast production of very polished rubbish, which is not progress however shiny the demo may look.
The real risk is not using it badly. It is refusing to learn it at all
I understand some of the resistance.
A lot of AI output is mediocre.
A lot of AI marketing is absurd.
A lot of the people speaking most loudly about software development with AI have not spent much time building or maintaining serious systems.
Fair enough.
But none of that changes the practical reality that these tools are getting woven into the everyday engineering environment. Editors, code hosts, documentation tools, support workflows, observability platforms, test tooling, and internal knowledge systems are all being shaped by this shift.
You do not have to like every aspect of it.
You do not have to become an evangelist.
You do not have to pretend it is a sentient colleague called Nigel who just happens to be trapped inside your IDE.
But you do need to understand it.
Because opting out completely is not some noble stand for craftsmanship. In many cases, it is just choosing to be slower, more stubborn, and less effective than you need to be.
That is not a principled engineering position. That is professional cosplay.
The correct posture is pragmatic adoption
The sensible response to AI is the same sensible response we should apply to most technology decisions.
Use it where it adds value.
Do not use it where it adds risk.
Verify outputs.
Keep humans accountable.
Teach teams how to use it properly.
Treat it as a tool, not a theology.
For me, that means using AI for acceleration, exploration, summarisation, drafting, and occasionally as a way to interrogate my own thinking. It does not mean letting it make architectural decisions unsupervised. It does not mean copying code into production without understanding it. It does not mean pretending confidence and correctness are the same thing.
They are not.
A fluent liar is still a liar.
A fast assistant is still only an assistant.
A force multiplier still needs a force worth multiplying.
We should be more honest about what great engineers actually do
The mythology of software engineering still clings to the idea of the lone genius producing perfect systems through raw intellect and caffeine. This has always been a bit silly.
Great engineers are not great because they type every line manually.
They are great because they solve the right problems, make sound decisions, reduce risk, improve systems, help teams move faster, and leave things in a better state than they found them.
If AI helps with that, use it.
If it gets in the way, do not.
But let us stop pretending that refusing to use modern tools is somehow evidence of depth. Usually it is evidence of ego, insecurity, or a touching nostalgia for a version of the industry that was not actually as good as people remember.
We did not become better engineers by clinging to old constraints. We became better by learning, adapting, and keeping the useful bits.
That was, and still is, the job.
Final thought from one ageing keyboard-basher to another
If you are a more experienced developer and AI makes you uneasy, I would suggest this.
Do not ask whether it is coming for your job.
Ask whether you understand it well enough to put it to work.
Because that is the real dividing line. Not young versus old. Not traditional versus modern.
Not proper engineers versus AI engineers, whatever that is supposed to mean.
The dividing line is between people who learn new tools and apply judgement, and people who stand at the side muttering that it was all better in 2008 while the rest of the world gets on with it.
And to be fair, some things probably were better in 2008. (n.b. The only thing I can think of from 2008 that was better was the Man Utd side that won the Champions League, but I digress. :-) )
But not enough of them to build a strategy around.
Gary Worthington is a software engineer, delivery consultant, and fractional CTO who helps teams move fast, learn faster, and scale when it matters. He writes about modern engineering, product thinking, and helping teams ship things that matter.
Through his consultancy, More Than Monkeys, Gary helps startups and scaleups improve how they build software, from tech strategy and agile delivery to product validation and team development.
Visit morethanmonkeys.co.uk to learn how we can help you build better, faster.
Follow Gary on LinkedIn for practical insights into engineering leadership, agile delivery, and team performance.