This post is the second in a three-part series on the top things I wish I had learned sooner and still apply today, after 15 years across 50+ software teams:
Software testing - QC for speed, QA for scaling.
Going Serverless-first. It's easier to make simple things complex.
Engineering Culture - choosing the best improvements.
In my last post about quality control (QC) for speed and quality assurance (QA) for scaling, I touched on the importance of testing the software change and how both QC and QA can contribute to engineering productivity.
The technology we introduce significantly helps or hinders engineering productivity, from the pace you deliver software to the team's capability and capacity to own services over time. Technologies influence many long-term considerations, such as local development, hiring, onboarding, quality engineering, incidents, observability, deployment flows, static analysis, compliance, templating and above all else, time spent on customer value.
Part 2: Going serverless-first
ℹ️ When I discuss 'Serverless', I generally mean applying technologies that work for the team instead of the other way around.
🙏 I talk about Kubernetes as a problem for teams. It is a compelling technology, and I highlight it only because it is the most common example I see of groups over-complicating their lives, swapping dependencies on today's DB admins for tomorrow's cluster admins.* Beware of anything with lots of training certifications.
My story with engineering productivity started with my adoption of serverless architecture almost seven years ago. It has been very successful for both early ventures and modernisation projects. While I now accept that some applications don't suit serverless, almost all applications can start serverless.
Serverless architecture is a step change: developers build and run software without managing IT operations and configuration, so they can focus on building key features and an impeccable user experience. By going serverless, we eliminate dependencies on server or cluster provisioning, patching, OS maintenance, and capacity planning. Cloud providers and vendors handle these tasks, which were once administrative loads requiring dedicated expertise in infrastructure management.
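To make "just the function" concrete, here is a minimal sketch of what that developer surface looks like. The handler signature follows the common AWS Lambda convention (`event`, `context`), but the event shape and field names are illustrative assumptions, not something from a real deployment:

```python
import json

def handler(event, context):
    """Entry point the platform invokes; no server, OS, or capacity code here.

    `event` carries the request payload; `context` carries runtime metadata.
    Everything outside this function (scaling, patching, provisioning) is the
    cloud provider's job.
    """
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Locally we can call the handler directly; in production the platform does.
    print(handler({"name": "serverless"}, None))
```

The whole deployable unit is the function itself; there is no Dockerfile, cluster manifest, or instance sizing to get right before shipping.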
Serverless-first improves engineering productivity by making it easy to focus on critical features without worrying about what's under the hood. If those features attract customers and volume, then a time may come when components outgrow a serverless architecture. However, in my experience, this is the minority of cases. What is more likely is one or more of the following:
Spiky demand with far less baseline request volume than anticipated
The feature isn't popular
The organisation changes priorities, and the service is no longer required
Vendors offer it as a service
Cloud providers supply a native implementation
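The first of those outcomes, spiky demand over a low baseline, is exactly where pay-per-request pricing shines, and a back-of-envelope comparison shows why. All of the rates below are illustrative assumptions, not current provider pricing:

```python
# Back-of-envelope: always-on instance vs pay-per-request for spiky traffic.
# All prices are illustrative assumptions; check your provider's current rates.

ALWAYS_ON_MONTHLY = 70.0           # assumed cost of one modest always-on VM
PRICE_PER_MILLION_REQUESTS = 0.20  # assumed per-request rate
PRICE_PER_GB_SECOND = 0.0000167    # assumed compute rate

def serverless_monthly_cost(requests, avg_duration_s=0.2, memory_gb=0.5):
    """Estimate monthly pay-per-use cost for a given request volume."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A spiky service: one million requests in bursts, near-zero baseline otherwise.
spiky = serverless_monthly_cost(1_000_000)
print(f"serverless: ${spiky:.2f}/month vs always-on: ${ALWAYS_ON_MONTHLY:.2f}/month")
```

With a low baseline you pay cents for the quiet periods instead of a flat fee for idle capacity; the maths flips only at sustained high volume.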
How does it help?
Serverless architecture has many benefits when it comes to scaling for engineering productivity. We touched on eliminating dependencies on infrastructure management, which can benefit software teams through:
Less organisational complexity with less need for dedicated infrastructure engineers;
Efficient scaling;
Higher availability;
Lower development costs and a faster release cycle;
Ease of creating microservices.
Leapfrog is an excellent case study of a journey toward serverless architecture, concluding that it is "very well suited for adopting a microservice architecture without the hassle of maintaining the servers, scalability, and availability headaches". Leapfrog had already scaled out a microservices architecture model but found that one of their biggest challenges to solve was right-sized provisioning of EC2 instances to minimise idle application overhead costs, which led to their experiment in adopting serverless architecture.
Going serverless gave them all the benefits I just described, but serverless computing doesn't come without its limitations. Those to consider include latency and memory ceilings that constrain heavy compute workloads, and the one that is often the most daunting: billing shock, or exceeded throughput limits, when usage isn't carefully monitored.
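On the billing-shock point, even a crude estimator run against your invocation metrics can catch runaway spend before the invoice does. A minimal sketch, with an assumed compute rate and thresholds (not real pricing):

```python
PRICE_PER_GB_SECOND = 0.0000167  # assumed rate; check your provider's pricing

def spend_alert(invocations, avg_duration_s, memory_gb, monthly_budget):
    """Return a warning string if projected compute spend exceeds the budget,
    or None if spend is within budget."""
    projected = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    if projected > monthly_budget:
        return f"projected ${projected:.2f} exceeds budget ${monthly_budget:.2f}"
    return None

# A retry storm: 50M invocations at 1s each on a 1 GB function blows well past
# a $100 budget, even though each individual invocation costs a fraction of a cent.
print(spend_alert(50_000_000, 1.0, 1.0, monthly_budget=100.0))
```

In practice you would wire this logic into the provider's billing alarms or concurrency caps rather than roll your own, but the principle is the same: set a ceiling before the traffic arrives.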
"It's a lot easier to make a simple thing complex".
Myles Henaghan
Kubernetes appears to be becoming the de facto platform for scaling on the cloud. It is a compelling technology with its origins in 'planet-scale' organisations. With its popularity and all its features for deploying and scaling microservices, it is easy to overlook the commitment involved. To put it simply, Kubernetes is a construction kit, not a ready-made solution: on top of writing and pushing code, developers still need to build and push containers, define and test deployments, and then monitor and scale as needed. Storage, security, and networking issues remain top concerns for Kubernetes-based deployments.
It's a way to scale at the expense of taking on more complexity. However, in my experience, there is a higher chance of the component or service becoming obsolete or refactored than of it reaching the volume and latency requirements teams anticipate. Building too much too soon is quite an old problem; in programming, we refer to this as YAGNI ("You Aren't Gonna Need It").
Leapfrog and go serverless-first
Every team will go through a growth curve with strategic inflection points where they must make fundamental shifts to maintain and capitalise on the momentum. One of these inflection points may be a decision to shift to a serverless architecture, and how to implement it.
You don’t have to repeat Leapfrog’s journey from a monolithic stack to a microservice architecture before landing on serverless. Each of those changes brings a mountain of change management on top of the architectural shift. If you’re looking to go cloud-native, consider leapfrogging straight to serverless architecture (pun intended). While not everything can be serverless, you’d be surprised at how much can be.
Here are a couple of things to consider while going through this journey.
Evolutionary architecture - evolve with growth
Leapfrog’s journey is a great example of how incremental developments in core software engineering practices have paved the way to rethink how architecture can be changed and adapted incrementally over time, moving from monolithic applications to microservices, and finally exploring serverless computing.
It’s important to know that new technologies create new capabilities and disruptions that require software to keep pace. Evolutionary architecture supports guided incremental change and building evolvability into the software architecture and practices to facilitate this incremental change.
Going serverless doesn’t need to mean refactoring everything you have immediately into functions, and the reality is not everything can be broken down into a function. You can start small, with the bare minimum needed to prove your hypothesis on whether an architectural decision is right for your business at a point in time. Just do the minimum needed to get you started on the journey, then let it evolve and make decisions along the way in the series of inflection points as your business scales.
Healthy constraints
Healthy constraints are boundaries or design principles deliberately set to drive good practices and innovation.
Building in some healthy constraints of serverless architecture can help you build highly resilient applications, and take a fast track to understanding distributed event-driven systems.
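One such constraint worth embracing: serverless platforms typically deliver events at least once, so handlers must tolerate duplicates. A minimal idempotency sketch, where the event shape and in-memory store are illustrative assumptions (production code would use a durable store such as a database table):

```python
# At-least-once delivery means the same event can arrive twice; make the
# handler idempotent by recording processed event IDs before acting.

processed_ids = set()  # illustrative; use a durable store in production

def handle_event(event):
    """Process an event at most once, skipping redeliveries by ID."""
    event_id = event["id"]
    if event_id in processed_ids:
        return "duplicate-skipped"
    processed_ids.add(event_id)
    # ... real side effects (charge a card, send an email) would go here ...
    return "processed"

print(handle_event({"id": "evt-1"}))  # first delivery
print(handle_event({"id": "evt-1"}))  # redelivery of the same event
```

Accepting the constraint up front, rather than fighting it, is precisely the fast track to understanding distributed event-driven systems that this section describes.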
Challenge the requirements
One of the best ways to improve your chances of success with serverless architecture is to constantly challenge your requirements and keep things simple. Challenge whether the original requirements of your software application still stand, so you can make the most of your serverless application.
The recipe I follow when requirements get in the way is to work on deleting things.
Make your requirements less dumb.
Try very, very hard to delete part of the process.
Simplify and optimise what’s left.
Accelerate cycle time if you’re going too slowly.
Automate.
Talk to people and get help
The best way to go serverless is simply to talk to people and learn from them. The numerous communities built around serverless have likely already faced the challenges you can anticipate, and can shed light on those you didn't. The content is out there, whether you want to learn how to start, test, or scale.
Some references to get you started:
There is a growing industry of ~27 million software developers worldwide who naturally try to solve things themselves, but why bother solving solved problems, especially when it comes to going serverless?
* Cluster administration. I acknowledge that cloud providers continue to lessen the operational complexity of cluster operations; however, containerisation remains a fundamentally more complex model than functions as a service, or someone else providing the functionality to you.
Originally published on LinkedIn