AI Can Code, But Can It Worry Like a Mother?
- Paul Meyrick
- Jun 23
- 4 min read
Updated: Jul 23
When I was growing up, I had an insatiable curiosity about technology. I believed that to understand something fully, I had to take it apart and see what it was made of. Then, I would challenge myself to put it back together. My curiosity, however, often worried my mum. She worked part-time at the local university and was the sole breadwinner in our family. Anything new we brought into the house was hard-earned and usually required careful consideration.
A Memorable Milestone
I still remember the genuine pride on my mum’s face when she brought home our first PC. It meant more than just having a new gadget; it was a symbol of her hard work and sacrifice. She juggled bills to give my sister and me a head start. That machine, blinking to life in the corner of our living room, was not just for typing essays in WordPerfect or learning to code in BASIC. My mum was giving us a fighting chance to escape the kind of unskilled work she had endured.
The Computer Catastrophe
I can also vividly recall the sheer terror and maternal despair in her eyes when she came home one afternoon. She found the computer completely disassembled, its innards meticulously arranged on the lounge table like some kind of techno-sacrificial offering. I was desperate to understand how it worked. In my mind, there was only one logical course of action: take it apart. After all, this method had served me well before. I was convinced that true understanding lay somewhere between the processor and the power supply.
It wasn’t the first time she had that look. I had earned a similar reaction when she discovered the family phone in pieces, neatly arranged on the same table. For those too young to remember, phones once had actual mechanical parts. I may not look it, but I’m old enough to recall the rotary dial. You’d stick your finger in a hole and spin it for each digit. Telesales workers of that era were the unsung pioneers of repetitive strain injury, long before it became an epidemic in modern desk jobs.
Reassembling with Hope
Anyway, back to the computer story. My mum needn't have worried. Much like many overconfident engineers today, I had everything I needed to put it all back together: blind optimism that trial and error could solve any problem, the computer manual as my guide, and a demanding stakeholder giving me fast feedback.
These days, engineers have tools like AI to accelerate development, churn out code, and even debug at scale. It is undeniably powerful. If I’d had something like that back then, it might've saved me a few hours (and maybe a few of my mum’s nerves). But speed is not the same as understanding. AI can help you go faster, but it doesn’t define the destination. It doesn’t inherently know what problem you are trying to solve.
The Value of Learning
True problem-solving begins with breaking things down, examining them piece by piece, and figuring out how those pieces fit together. That messy, slightly chaotic process of learning through doing—complete with wrong turns and hard-earned lessons—is what teaches you not just how to build, but why you're building in the first place. That’s a lesson no shortcut can teach you.
I didn’t just learn about computers with a screwdriver; I learned how to think, define outcomes, and fix what wasn’t working, one rotary dial at a time. I am, as always, with Dave Farley on this.
The Takeaway: Balancing Speed with Understanding
The takeaway? Using AI to assist engineers is useful, but it’s crucial to cultivate critical thinking alongside it. Skip that, and you end up with faster teams that don’t fully understand what they’ve built, which can have unintended consequences for other parts of your organization, like Security and SRE (Site Reliability Engineering). Productivity without a clear objective is merely velocity without vision.
How Are We Thinking About AI?
Integrating AI guidance across our engineering practice is a key focus. We’re drawing inspiration from resources like DEFRA’s AI lifecycle maturity assessment and, alongside other models, creating guidance tailored to our existing needs. Look out for follow-up articles and some updated definitions from Stuart Collins.
Lessons for Today's Tech Leaders
Looking back, that early experiment holds valuable lessons for today’s tech leaders. Confidence and tools are not sufficient without oversight and accountability. When it comes to AI, modern tech leaders can’t rely solely on optimism and user manuals, no matter how sleek the solution or cost-effective the improvements may seem. Governance needs to be more than an afterthought or a box-ticking exercise.
Leaders must set clear expectations and standards around how AI is used, and ensure it is not merely a blunt instrument for cutting costs or driving productivity. Like any powerful tool, AI requires oversight: transparent decision-making frameworks, ethical guardrails, and clearly defined success criteria that consider not just business outcomes but also the wider risks and impacts.
As with my mum watching over my teenage attempts at computer surgery, there must be an engaged and informed stakeholder ready to step in if things begin to go sideways.
One thing I can confidently say is that AI will likely reduce the number of cases of repetitive strain injury. And yes, it worked when I put it back together.