AI Can Code, But Can It Worry Like a Mother?
- Paul Meyrick
- Jun 23
- 4 min read
When I was growing up, I had an insatiable curiosity about how technology worked. I believed that the only way to truly understand something was to take it apart and see what lay inside, then challenge myself to put it back together again. This curiosity, however, did not always sit well with my mum. She worked part-time at the local university and was the sole breadwinner in our family. Anything new we brought into the house was hard-earned and often involved a careful trade-off.
I still remember the genuine pride on my mum’s face when she brought home our first PC. It was more than just a computer; it was a symbol of everything she’d worked for. She’d toiled away and juggled bills to give my sister and me a head start. That machine, blinking to life in the corner of our living room, wasn’t just for typing essays in WordPerfect or learning to code in BASIC. It was her way of helping us escape the kind of unskilled work she’d had to endure. She was giving us a fighting chance to ride the wave of the digital age and hopefully never look back.
I can also vividly recall the look of sheer terror and maternal despair when she came home from work one afternoon to find the computer completely disassembled, its innards meticulously arranged across the lounge table like some kind of techno-sacrificial offering. I was desperate to understand how it worked, and in my mind, there was only one logical course of action: take it apart. After all, this method had served me well before. I was convinced that true understanding could only be found somewhere between the processor and the power supply.
You see, it wasn’t the first time she’d worn that look. I’d earned a similar reaction the day she found the family phone in pieces, neatly arranged on the same table. For those too young to remember: phones once had actual mechanical parts. I may not look it, but I’m old enough to recall the rotary dial. You’d stick your finger in a hole and spin it for each digit. I think telesales workers of that era were the unsung pioneers of repetitive strain injury, long before it became a modern desk-job epidemic.

Anyway, back to the computer story. My mum, of course, needn’t have worried. Much like many an overconfident engineer today, I had everything I needed to put it all back together: blind optimism that all problems can be solved with enough trial and error, the computer manual serving as my target architecture, and a highly demanding stakeholder providing fast feedback on what would happen if I didn’t meet my success criteria.
These days, engineers have tools like AI to accelerate development, churn out code, and even debug at scale. It’s undeniably powerful, and if I’d had something like it back then, it might’ve saved me a few hours (and maybe a few of my mum’s nerves). But speed is not understanding. AI can help you go faster, but it can’t define the destination. It doesn’t inherently know what problem you’re trying to solve. True problem solving starts with breaking things down, examining them piece by piece, and figuring out how those pieces can work together. That messy, slightly chaotic process of learning by doing, with wrong turns and hard-earned lessons, is what teaches you not just how to build, but why you’re building in the first place. And that’s something no shortcut can teach you. I didn’t just learn about computers with a screwdriver. I learned how to think, how to define outcomes, and how to fix what wasn’t working. One rotary dial at a time. I am, as always, with Dave Farley on this.
The takeaway? Utilising AI to assist engineers is wise, but it’s vital to also cultivate critical thinking and reasoning about its outputs. Failure to do so risks faster teams that lack a complete understanding of what they’ve built, with unintended consequences for other parts of your organisation, like Security and SRE. Productivity without a clear objective is merely velocity without vision.
How are we thinking about AI in the Hierarchy of Needs?
Integrating AI guidance across our engineering needs is a key focus. We're drawing inspiration from resources like DEFRA's AI lifecycle maturity assessment, alongside other available models, to create guidance for our existing needs. Look out for a follow-up article and some updated definitions from Stuart Collins.
Looking back, that early experiment holds a valuable lesson for today’s tech leaders: confidence and tools are not enough without oversight and accountability. When it comes to AI, modern tech leaders can’t rely solely on optimism and a user manual, no matter how sleek the solution or persuasive the promised cost savings might be. Governance needs to be more than an afterthought or a checkbox. Leaders must set clear expectations and standards around how AI is used, ensuring it’s not just a blunt instrument for cutting costs or squeezing out extra productivity, and staying mindful of risks and impacts. Like any powerful tool, AI needs oversight: transparent decision-making frameworks, ethical guardrails, and clearly defined success criteria that consider more than just business outcomes. And just like my mum watching over my teenage attempts at computer surgery, there needs to be an engaged, informed stakeholder ready to step in if things start going sideways.
One thing I am fairly confident of, though, is that AI will reduce the number of cases of repetitive strain injury. And yes, it worked when I put it back together.