It still amazes me how, after all the time and effort spent figuring out how to manage and execute technology projects, some fundamental issues still remain. The foremost of these is: did we achieve the business value, when we said we would and for what we said it would cost? Waterfall wasn’t great at this, Agile got better, and now it’s on to DevOps. The problem is that most of us can cite examples where each of these approaches has failed and, even when they haven’t failed, there were still challenges with what the project delivered to the business.
In my experience, very few IT projects articulate the measurable business value – or outcomes – that they will produce when completed. Review project charters from actual projects (not textbook examples) and you’ll often find only implied business value, with nothing concrete to measure. A typical example: “Improve access to organizational data.” I think there’s value there, but how can I measure the value it will bring to the business?
But what if there were a way to change how IT practitioners view projects? And what if, at the same time, you had a way to increase your value proposition to the business and be more successful at execution, irrespective of whether you use Agile, DevOps, or your super-secret project approach?
There is. Enter Accountable Care for IT.
Continue reading “4 Steps to Maximize Your DevOps and Agile Results”
So here’s our deal: We’ve created what we’re calling “PaaS-Containers” in our IT production environment. It consists of core technologies like RHEL Atomic Host, Kubernetes, and Docker, along with supporting CI/CD components like Jenkins, offered as a service that supports end-to-end automated deployment of applications: from a code-commit event through automated testing and rollout across multiple environments (dev, QA, stage, prod). Oh, did I mention that it’s also integrated with our enterprise logging and monitoring, as well as our change management process and tooling, so that we have a complete audit trail?
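For illustration only, a pipeline like the one described might be expressed as a declarative Jenkinsfile along these lines; the stage names, helper scripts, and image tag are assumptions, not our actual configuration:

```groovy
// Hypothetical sketch: deploy.sh, run-tests.sh, and the image name
// are placeholders, not part of the real environment described above.
pipeline {
    agent any
    stages {
        stage('Build') {
            // Triggered by the code-commit event; produces an immutable image.
            steps { sh 'docker build -t myapp:${GIT_COMMIT} .' }
        }
        stage('Automated tests') {
            steps { sh './run-tests.sh myapp:${GIT_COMMIT}' }
        }
        stage('Deploy dev/QA/stage') {
            steps { sh './deploy.sh qa myapp:${GIT_COMMIT}' }
        }
        stage('Promote to prod') {
            // Hooks into change management and logging/monitoring would
            // live here to preserve the complete audit trail.
            steps { sh './deploy.sh prod myapp:${GIT_COMMIT}' }
        }
    }
}
```

The key design point is that the image is built exactly once, in the first stage; every later stage only tests or moves it.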
Everyone wants to jump on the bandwagon – they see the benefits of rapid deployment, atomicity, and enabling business capabilities faster through technology. But as we learned in the 90-day initiative to stand it up and deploy an existing application on it, not all applications are ready for containers, and some may never be, given their current architectures.
Here’s how we think about the deployment options in an enterprise context, allowing us to enable innovation while managing enterprise risk…
Continue reading “Containers in the enterprise – Are you ready for this?”
Imagine this: deploy an application from code-commit to QA, validate it through automated testing, and then push the same image into production with no manual intervention, no outage, no configuration changes, and with full auditability through change records. A month and a half ago, we formed a tiger team and gave them less than 90 days to do it. How? Build an end-to-end CI/CD environment leveraging RHEL Atomic 7.1 as the core platform, integrating with key technologies like git, Jenkins, and packer.io, in a hybrid deployment model and in accordance with our enterprise standards. Oh, and make sure the application keeps running in production even if we lose a couple of nodes in the cluster.
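The “build once, promote the same image” idea at the heart of this can be sketched as a dry run in shell; the registry, image name, and digest below are hypothetical placeholders, not values from our environment:

```shell
# Sketch of digest-based image promotion (dry run: commands are
# printed, not executed). The image is built once; the *same digest*
# is then tagged for each environment, so the bits tested in QA are
# exactly the bits that reach production.
REGISTRY="registry.example.com"   # hypothetical registry
IMAGE="myapp"                     # hypothetical image name
DIGEST="sha256:abc123"            # captured from the one-and-only build

# Print the promotion commands for a given environment.
promote_cmd() {
  env="$1"
  echo "docker tag ${REGISTRY}/${IMAGE}@${DIGEST} ${REGISTRY}/${IMAGE}:${env}"
  echo "docker push ${REGISTRY}/${IMAGE}:${env}"
}

for env in dev qa stage prod; do
  promote_cmd "$env"
done
```

Because every environment references the same image digest, what was validated in QA is byte-for-byte what runs in production, and configuration differences have to live outside the image.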
Disruptive technology that spawns disruptive business architecture. And it all starts with imagining the life of this thing called an image.
Continue reading “Imagine this – the life of an image”
Deeper In the Engine Room
Fundamentally changing how people work isn’t easy. When you’re midstream on a large strategic integration initiative, it’s even more difficult. (See here to get up to speed on how far we’ve come.) Given these challenges, we kept a couple of things in mind as we progressed, and learned a few more along the way. So we wanted to dive a bit deeper into the engine room and share two of the biggest lessons: how to operate with globally distributed teams (I’ll Follow the Sun) and keeping in mind what really matters (The Importance of Being Earnest).
Continue reading “Pivoting at Speed to Scaled Agile and DevOps – Chapter 3b”
Chapter 3 – In the Engine Room
The old idiom “the devil’s in the details” couldn’t apply more to our initiative. We have six global development teams that were executing in waterfall; we’ve essentially restructured them and told them to do agile. To catch up on how we got here, see Chapter 1 and Chapter 2.
Sound like a recipe for disaster? It could very well be, but seeing that we’re in a position where we can only improve, the gamble may just pay off.
Coal Power before Steam
Remember the Integrated Delivery Teams from last time? We now have six IDTs, each with its own set of outcomes to achieve. The outcomes are discrete, end-to-end business capabilities, potentially spanning multiple systems. This alone is a huge benefit: we’ve removed the artificial barriers to cross-team communication and, at the same time, given a group of people a common mission that, if met, demonstrates complete business capability and value. But how do we actually get to a state where we can execute?
Continue reading “Pivoting at Speed to Scaled Agile and DevOps – Chapter 3”
Chapter 2 – Turning on a Dime
Being able to respond quickly to a changing environment is what businesses aspire to, but it’s also something ingrained in our humanity. For a large program that needs to adjust course in order to avoid disaster, though, that’s easier said than done. Last time we fixed the radar and realized that, without changing course, we were likely headed for exactly that: disaster.
Of course, knowing you need to do something and actually doing it are two very different things. For this, we used an agile approach and moved forward one step at a time. We developed a “get healthy” plan to make the necessary changes, broken down into 30-, 60-, and 90-day incremental changes so we could show improvement over time, validate, and then adjust as needed.
Continue reading “Pivoting at Speed to Scaled Agile and DevOps – Chapter 2”
Typically, the two biggest impediments to scaled agile and DevOps are over-planning (including over-thinking) and dogmatism. Which projects should we “pilot” for scaled agile? How long should the pilot run? What are the details of how we will implement? Etc., etc. This over-thinking ultimately impedes exactly the kind of progress that scaled agile hopes to achieve in driving business results.
But what if you don’t over think and intentionally decide to be agile about being agile? How much faster can you move? What if I don’t have all of my rituals figured out to the nth degree and risk forgetting something crucial? Is it too risky?
Over the next several posts, we’ll share our experiences taking a large strategic initiative from a waterfall approach to scaled agile in just a few weeks. The initiative was a targeted nine-month system and process integration effort to drive business transparency and, ultimately, business value. It impacted five business systems, six independent global development teams, and numerous business stakeholders spanning the globe.
Continue reading “Pivoting at Speed to Scaled Agile and DevOps”
From Barry Manilow to Kiss to Mary J. Blige and everyone in between, people have been singing “Talk to Me” for decades. And is it any wonder? Often we don’t feel heard, or we feel that we don’t understand what people are trying to say or do. In the fast pace of today’s business, effective communication that gets everyone on the same page quickly is essential. Needless to say, this isn’t always easy.
Given that we have to communicate effectively, and that most of us hate writing documentation, we need a better approach. So what I’ve started to do with the groups I work with is move beyond ineffective textual documentation to simply creating illustrations. There are four basic types we use, depending on the situation and audience:
Continue reading “DevOps: Talk to me… Say what?”
On our road to Delivery Efficiency, we’ve covered knowing and understanding your thing, and we’ve talked about making our thing better over time and where that fits in the delivery continuum. As we continue our journey, the next key element is relationships.
Continue reading “DevOps as a Hermit or Socialite?”
Last time we started on our Journey to Delivery Efficiency with a conversation on The Thing, The Life, and The Who (see the post here). In other words, we need to know what the thing is and that it has a lifecycle during which different people care about it (the who). These basic elements set us up to care for these things throughout their lifecycle and ultimately be Faster, Better, Stronger, and Wiser in their delivery.
Continue reading “Being Better – a Delivery Efficiency Continuum”