How to Explain DevOps to Your Boss


Michael Churchman
April 4, 2018

"What do you know about DevOps? Is it something we should be using? Would it do us any good? Is it something we need in our operations, or is it just another new management theory? I'm not in the mood for any more new management theories—at least not this week."

What would you say if your boss (or any other non-technical person) asked questions like these, or wanted you to explain DevOps? The answers may seem difficult to produce because DevOps is so deeply technical. It can be hard (though not impossible!) to translate the DevOps concept for non-technical folks, including but not limited to your boss.

In this post, I offer tips on communicating the meaning and value of DevOps in a way that is clear and easily comprehensible to a non-technical person.

 

Give the Basic Version First

Here's a tip: When you explain anything, it's usually best to start with a very quick introduction—just enough to give your audience some idea of what you're talking about. For DevOps, such an introduction might sound something like this:

"Basically, it's a method for developing and deploying software that is very fast and very responsive, and it allows the people who develop the software and the people who keep it running to work together."

At this point, one of the more obvious questions might be:

"What makes it so special? People have been developing software for a long time. Is this really something new under the sun?"

 

It Really Is New

What do you say then? Start with what makes DevOps different, along with enough background to show why it's different and why that difference matters:

Yes, it is something new under the sun, at least in terms of software development.

 

The Hard Old Days

Older development methodologies like waterfall made sense in their time, because they fit the way that computing was done. Software was developed for mainframes, local networks, or desktop systems, and new versions and upgrades had to be intermittent. Just the logistics of distributing a new release (let alone developing it) were complicated enough to require a battle plan. You had to package and ship distribution disks, or send people on-site to install the software. Installation could be complicated and risky, and rollbacks could be difficult, if they were even possible. There were good reasons for developers to lean heavily toward caution, and to allow significant time between releases and upgrades.

 

A Different World

But technologically, we're in a different world now. Some software is still being developed the old-fashioned way, but more and more major applications are designed as Internet- or cloud-based services. And the underlying technology in that kind of deployment is so different that the old ways of doing things simply don't make sense. What does make sense is to develop software in ways that fit the new technology, and that take advantage of its capabilities.

What do we mean when we talk about new technologies and new ways of doing things?

 

Deployment as Uploading

Many major applications now run on the Internet, on intranets, or in private cloud environments on local networks. In effect, this means that deployment consists of uploading new software, updates, or bug fixes to a single location, or at most, to a handful of platforms. Uploading is quick (actual transfer time is generally trivial), and configuring for one or a few well-understood platforms involves much less overhead than would be required for software running on desktop systems or individual networks.

 

Virtual is Real

The platforms themselves are largely virtual. This can't be emphasized enough: hosted deployment generally means deployment on a system that is separated from the underlying hardware by several layers of abstraction (typically a cloud). This means that the host environment itself is as configurable as the application being uploaded. The infrastructure is just more software, and can be treated like software.
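
To make "infrastructure is just more software" concrete, here is a deliberately simplified sketch using a hypothetical Kubernetes manifest; the application name and container image are placeholders, not a real project. The entire environment, including how many copies of the application are running, is declared in a short text file that can be versioned, reviewed, and rolled back like any other code:

    # Hypothetical Kubernetes manifest; all names and the image are placeholders.
    # It declares three running copies of an application, entirely in text.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3                      # "how many servers" is just a number here
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: example-app
              image: example/app:1.0.0    # placeholder container image
              ports:
                - containerPort: 8080

Scaling from three servers to ten is a one-line edit to this file, which is exactly what it means to treat infrastructure like software.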

 

Put it in a Script

Most of the steps required for development, testing, and deployment can be run by scripts, and thus automated. Development and release have always been at least partly scriptable, and as more formerly bare-metal infrastructure elements are replaced by software, more of the delivery chain becomes scriptable. The scripts that control these various processes can be quite sophisticated, and can make intelligent decisions within the scope of their functionality.
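
As a rough illustration, here is what such scripting can look like when it lives in a CI configuration file. This sketch is loosely modeled on a wercker.yml pipeline definition; the base box and the commands are placeholder assumptions, not a real project's setup:

    # A simplified, hypothetical pipeline configuration (loosely wercker.yml-style).
    # The box and the commands are placeholders.
    box: golang:1.9
    build:
      steps:
        - script:
            name: compile
            code: go build ./...
        - script:
            name: run unit tests
            code: go test ./...

Because each step is a plain script, it runs the same way every time, and the pipeline can decide what happens next based on whether the step succeeds or fails.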

 

Automated Testing, Automated Analytics

Deployment on virtualized platforms quite naturally leads to virtualized testing. If your production software will be running on virtual machines or in equally virtual container systems, you should be testing it on virtualized systems as well. This means that testing can be fully (or almost fully) automated, and that much of it can be done in parallel.

In a virtual environment that can be run automatically by scripts, tasks such as software monitoring and collection and analysis of logs generated by the various processes can also be automated. Analysis tools are now sophisticated enough to automatically detect potential problems and alert the appropriate technical staff.
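
For example, alerting on a problem can be just a few more lines of configuration. The sketch below uses a hypothetical Prometheus-style alerting rule; the metric name and the thresholds are invented for illustration:

    # Hypothetical Prometheus-style alerting rule; the metric and thresholds
    # are invented for illustration.
    groups:
      - name: example-alerts
        rules:
          - alert: HighErrorRate
            expr: rate(http_errors_total[5m]) > 5   # over 5 errors/sec, averaged over 5 min
            for: 10m                                # sustained for 10 minutes before alerting
            labels:
              severity: page
            annotations:
              summary: Error rate has stayed above 5 errors/second for 10 minutes

Once a rule like this is in place, no one has to watch the logs; the monitoring system pages the appropriate technical staff on its own.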

 

DevOps as a Framework

What does this all add up to? Basically, it adds up to a framework for a new way of developing and deploying software. And that new way is what we call DevOps.

Consider what happens when most of the software delivery chain can be run by intelligent scripts (a configuration sketch of such a pipeline follows the list):

 

  1. Developers write code. That part is still done manually, but producing the build is automated.
  2. When the build is ready, it's automatically sent to QA, where it's run through automated, highly parallel tests.
  3. If the automated testing system finds any problems, they're reported back to development; otherwise, the build goes on to the next stage.
  4. The build is automatically packaged with any necessary libraries or other resources, then uploaded for limited-deployment testing.
  5. If it passes limited-deployment testing, it goes into full deployment; the scripts that control this process also handle all of the configuration details.
  6. After the software is deployed, it is automatically monitored, and any problems are reported to the operations personnel.
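
Here is a minimal sketch of how those six steps might be expressed as a pipeline configuration. The stage and step names are purely illustrative assumptions, not any particular CI system's syntax:

    # Purely illustrative pipeline definition; stage and step names are invented.
    pipeline:
      build:          # step 1: code is written by hand, the build is automated
        - compile
      test:           # steps 2-3: automated, highly parallel QA with feedback to dev
        - unit-tests
        - integration-tests
      package:        # step 4: bundle the build with its libraries and resources
        - bundle-artifacts
      staging:        # step 5 (first half): limited-deployment testing
        - deploy-to-staging
        - smoke-tests
      production:     # step 5 (second half): full deployment, configured by scripts
        - deploy-to-production
      monitor:        # step 6: automated monitoring, problems reported to operations
        - watch-metrics-and-logs

Each stage runs only if the one before it succeeds, so a failed test stops the release automatically and the problem goes straight back to development.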

 

The Move to Microservices

Now consider what happens when the software in question is designed so that it is broken down into small parts (or "microservices"), which work together, but can be deployed independently, in individual containers. The entire process described above can be applied to a single microservice; since microservices are small, very often each of the steps can be completed quickly, and the time required for the entire cycle may be very short.

When a program is broken down into microservices in this manner, an update involving internal changes to individual microservices means that only those microservices need to go through the production cycle. The rest of the program can continue to run while the individual microservices are being updated.

With a very fast production cycle and little or no downtime required for updates, there is no reason why deployment shouldn't be an ongoing process. Continuous delivery, one of the basic features of DevOps, arises naturally out of this new capability.

 

The Merger of Development and Operations

The virtualization and scriptability of infrastructure means that there is no longer any natural division between development and operations; creating software and deploying it are now part of the same process, and more often than not, use the same basic set of tools. The traditional sharp division between programmers and IT staff simply doesn't make sense anymore. It is now much more natural for them to work together, and in fact, in order for continuous deployment to proceed smoothly, they need to work together as a close team.

 

Lean and Flexible

The result is a system of software development and delivery that is very fast, very flexible, extremely responsive to both technical issues and changing market requirements, and which can significantly cut expenses by streamlining processes which were previously complex and labor-intensive.

This system of automated, continuous delivery of software by an integrated development/operations team is the core of DevOps. It arose naturally, in response to the new capabilities and challenges of a rapidly changing IT environment, not as a top-down application of management theory.

Why should you move over to DevOps in your workplace? Because it really is the best set of practices for the contemporary IT world. It's not a new fashion. It's new technology, with new capabilities, and new ways of using those capabilities.

 

About the Author

Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ’90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.

 

Like Wercker?

We’re hiring! Check out the careers page for open positions in Amsterdam, London and San Francisco.

As usual, if you want to stay in the loop, follow us on Twitter @wercker or hop on our public Slack channel. If it’s your first time using Wercker, be sure to tweet out your #greenbuilds, and we’ll send you some swag!

 
