Optimised DevOps
Helen Beal
Fractional CEO | Chair | Strategic Advisor | Keynote Speaker | Writer | Author | Researcher | Coach | Purpose: bringing joy to work.
When we talk about DevOps, we talk about visualising desired future states and what DevOps utopia looks like - how a culture feels where mastery, autonomy and purpose are the norm, where blame has no place, failure is smart, and work is joyful and rewarding. We work towards creating these environments and building the processes and toolchains that enable continuous delivery, so that innovation can be delivered to the market immediately, at any time. Most enterprises we work with are still on their way there; DevOps is definitely not done yet. Here's the maturity model we use to discuss current state at a high level:
We've been working with organisations for decades on improving software development lifecycles, and have long preached the mantra 'people, process, THEN tools' - more recently updated, through practising DevOps, to 'culture, interactions, THEN automation'. DevOps IS about culture, but to be fully optimised, automation must happen. Based on what we've learned with our customers, we've focussed our DevOps tools-for-change efforts around three core application automation areas:
- Release and deployment
- Testing and service virtualization
- Performance management
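These three areas chain together naturally in a delivery pipeline: a build only reaches release once testing and performance checks have passed. Here's a minimal, hypothetical sketch in Python - the stage names and checks are illustrative, not any particular tool's API:

```python
# Illustrative pipeline runner: each stage gates the next, so a build
# only reaches release once the earlier automated gates are green.
def run_pipeline(build, stages):
    results = {}
    for name, check in stages:
        passed = check(build)
        results[name] = passed
        if not passed:
            break  # stop the pipeline at the first failing stage
    return results

# Hypothetical stage checks covering the three automation areas.
stages = [
    ("testing", lambda b: b["tests_pass"]),
    ("performance", lambda b: b["p95_latency_ms"] < 500),
    ("release", lambda b: True),  # deploy once the gates are green
]
```

A build with passing tests and acceptable latency flows through all three stages; a failing build stops at the first red gate and never reaches release.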
Imagine our delight, then, when we read the 2014 State of DevOps Report and turned to page 14, which looks at IT performance and measurement in some detail:
"IT performance is measured in terms of throughput and stability, two attributes that seem to be opposed, yet are both essential to achieving IT that’s a real strategic asset. The individual measures that make up IT performance are deployment frequency, lead time for changes, and mean time to recover from failure. Throughput is measured by deployment frequency and lead time for changes, while stability is measured by mean time to recover. To increase IT performance, you need to invest in practices that increase these throughput and stability measures."
What we found particularly interesting was what followed - that is, the top practices that correlate with the three IT performance measures identified:
1) Deployment Frequency
"Continuous Delivery
Continuous delivery ensures that your software is always in a releasable state, turning deployment into a non-event that can be performed on demand.
Use of version control for all production artifacts
When it’s easy to recreate environments for testing and troubleshooting, throughput goes up."
2) Lead Time for Changes
"Use of version control for all production artifacts
The ability to get changes into production repeatedly in a reliable, low-risk way depends on the comprehensive use of version control.
Automated testing
With a reliable and comprehensive set of automated tests, you can quickly gain confidence that your code is releasable without lengthy integrations and manual regression testing cycles."
3) Mean Time To Recover (MTTR)
"Use of version control for all production artifacts
When an error is identified in production, you can quickly either redeploy the last good state or fix the problem and roll forward, reducing the time to recover.
Monitoring system and application health
Logging and monitoring systems make it easy to detect failures and identify the events that contributed to them. Proactive monitoring of system health based on threshold and rate-of-change warnings enables us to preemptively detect and mitigate problems."
Uh-huh, we said. We totally see that. When we've 'done' DevOps, we've created environments where, thanks to deployment and test automation and advanced monitoring tools:
- Deployments happen when they are needed, at high speed (on demand)
- Performance issues and outages are proactively identified
- The last known working version of an application can instantly be redeployed
- While the problem, precisely identified by the monitoring solution, is fixed by development
- Automatically tested
- Redeployed at speed when it's ready
Like this:
The application travels through the route to live reliably and consistently. All versions and activities are recorded, and prebuilt, reusable templates make it fast. No manual installations, no scripts, and a fully auditable process. Tick that compliance box.
There are a lot of alerting tools out there, but does yours monitor your business transactions and tell you exactly where the problem is? That's invaluable information for your developers, who instantly know what to fix - no need to blame anyone, just work as a team to get the thing live again. And don't stress about the outage: you've already been able to instantly redeploy the last known working version, so there's been no service interruption to your users, and that new bit of innovation will be back in the market in a minute.
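"Instantly redeploy the last known working version" boils down to keeping a record of versions that have proven healthy in production. A hedged sketch, with names made up for illustration rather than taken from any real deployment tool:

```python
class Deployer:
    """Illustrative version tracker: remember which deployed builds
    proved healthy so the last known good one can be restored
    instantly while a fix rolls forward."""

    def __init__(self):
        self.known_good = []  # versions that ran cleanly in production
        self.current = None

    def deploy(self, version):
        self.current = version

    def mark_healthy(self):
        # Called once monitoring confirms the running version is sound.
        self.known_good.append(self.current)

    def rollback(self):
        # Redeploy the last known good version, if there is one.
        if self.known_good:
            self.current = self.known_good[-1]
        return self.current
```

When a freshly deployed version misbehaves, `rollback()` restores service immediately; development fixes the precisely identified problem and redeploys when the automated tests pass.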
Automated testing and service virtualization make it quick and easy to create your testing environments, run your tests, confirm you're good to go, and hand over to the deployment tool for immediate go-live of your new features.
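At its simplest, service virtualization means standing in for a downstream dependency with a stub that answers the way the real system would, so tests can run without the real thing being available. A minimal illustration using only the Python standard library - the endpoint and payload are invented for the example:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubBackend(BaseHTTPRequestHandler):
    """Stub for a downstream service: every GET returns a canned
    JSON response, standing in for the real dependency."""

    def do_GET(self):
        body = json.dumps({"status": "ok", "stubbed": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StubBackend)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A test suite points the application at the stub's address instead of the real dependency, getting fast, repeatable responses regardless of whether the real system is up.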
That's DevOpstastic.