If you live or work in technology, as I do, then you already know that change in IT is constant. We are all faced with the reality that the technology we use today will eventually be superseded by something better/faster/smaller, and we will need to re-educate ourselves in the latest, newest, coolest thing. Taking a page from The Innovator’s Dilemma, it is far better to do this proactively and ‘disrupt’ yourself than to wait for market forces to push you into it. In fact, I have spent my career trying to, as Wayne Gretzky allegedly once told a reporter, “…skate to where the puck is going,” rather than where it is now.
For the past several years, those of us working in infrastructure have been somewhat preoccupied with virtualization, automation, cloud business models, and sassy-sounding acronyms like ITaaS, IaaS, PaaS, and SaaS. I use the term preoccupied because those of us hawking the concepts of ‘cloud’ and ‘IaaS’ are about to be disrupted again, which is a euphemism for what happens when you are so focused on doing your own thing that you don’t see the large, unstoppable force careening toward you from an unanticipated direction…
Ten to fifteen years ago, if a company needed to roll out a new service or application, the IT department was empowered to make its own decisions on how best to serve the needs of the business and deploy the appropriate technology for that solution. Over time, with the ever-increasing ubiquity of the internet, cloud, and mobile applications, that power has started to shift away from IT. I have seen many IT departments placed under serious pressure to build an infrastructure that supports a consumer-driven mindset leveraging self-service, chargeback, immediate response, ubiquitous access, and always-on availability. All this while under ever-increasing budget constraints. Virtualization and cloud are supposed to fix, or at least help, this, right?
While IT has been trying to figure all this out, the Development Community (I am using capitals to represent the collective effect of millions of independent developers) has been working to solve these problems as well. How to deliver quality software to customers and the business with ever-increasing frequency and quality? How to deploy the workloads necessary to support an application or service as quickly as possible?
As a result of these efforts on the part of the Development Community, we have seen the rise of “Continuous Integration” (CI) as a practice to speed the build and test of software. The Community is now trying to conceptually extend CI all the way through to production, leveraging the same principles of automated deployment, testing, and lifecycle management of the entire application stack through “Continuous Delivery” (CD). Together this is referred to as CI/CD.
The Development Community is also making extensive use of APIs as a technique to maximize its ability to deploy applications and manage workloads once in production. This has significant repercussions for many infrastructures, as it appears OpenStack is gaining the mind-share in this space. More on that in another post…
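To illustrate the API-driven style, the sketch below models a provisioning endpoint as a plain Python class. The class and its method names are hypothetical stand-ins for the kind of REST calls a platform such as OpenStack exposes; they do not mirror any real API.

```python
# Sketch of API-driven workload deployment. CloudAPI is a hypothetical
# stand-in for a provider's REST endpoints; the comments note the kind
# of HTTP call each method would correspond to.

class CloudAPI:
    def __init__(self):
        self._instances = {}
        self._next_id = 1

    def create_instance(self, image, flavor):
        # Analogous to POST /servers: provision a new workload.
        instance_id = self._next_id
        self._next_id += 1
        self._instances[instance_id] = {
            "image": image, "flavor": flavor, "status": "ACTIVE",
        }
        return instance_id

    def get_status(self, instance_id):
        # Analogous to GET /servers/{id}: inspect a workload.
        return self._instances[instance_id]["status"]

    def delete_instance(self, instance_id):
        # Analogous to DELETE /servers/{id}: retire a workload.
        self._instances[instance_id]["status"] = "DELETED"

# A developer script can now deploy and retire workloads without a ticket:
api = CloudAPI()
web = api.create_instance(image="ubuntu-22.04", flavor="small")
print(api.get_status(web))  # ACTIVE
api.delete_instance(web)
```

The point is that the entire lifecycle of a workload becomes scriptable, which is exactly what a CI/CD pipeline needs in order to deploy on demand.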
Simultaneously, a separate movement has arisen focused on aligning the efforts of developers with the efforts of those in operations. Rather than a shift in technology, this is more of a shift in the mindset and approach to IT operations, and it illustrates that while ‘cloud computing’ may be a part of the puzzle, it is a prime target for disruption, even though the market is only a few years old.
What is DevOps?
DevOps is something of an IT-cultural movement encompassing people and process, in which joint collaboration occurs between all parties (development teams and operational teams), working toward a common goal: increasing the business’ responsiveness and value to its customers. In today’s world, that means:
- Developers own their code from inception to production
- Developers and operations share the responsibility of deploying applications and running the environment, leading to greater appreciation of each other’s roles
- More communication between teams, earlier in the process of releasing new applications / features
- More responsibility on development teams to ensure operational readiness
- More responsibility on operational teams to be service oriented
- Agreement that failure will occur, and everyone must be willing to take responsibility for it
What is interesting to me is that this movement has arisen independently of the ‘cloud’ and CI/CD movements, both of which are heavily weighted toward technology. Cloud computing tries to address the infrastructure underpinning services and applications, while CI/CD tries to address the delivery of software onto infrastructure. Neither sufficiently addresses the organizational changes to people and process necessary to realize the benefits of either. The success of the DevOps movement in promoting organizational change is a severe indictment of how clearly ‘cloud’ has failed to deliver on its promises.
While DevOps may be acknowledged as a trend toward a loosely defined operating model, it carries with it specific and identifiable – though not formalized – goals, as mentioned above. These are all focused on increasing developer productivity, improving operational readiness, and increasing environmental resilience. Often, the implementation of a DevOps-enabled system focuses on the application release pipeline – from the development of the code, through testing and QA, and finally promotion to a production environment. We may infer, then, that there are certain characteristics of a DevOps-enabled datacenter (“DEDC”) that we might find advantageous.
First and foremost, we can identify as our primary requirement for our DEDC a high degree of automation for the application release pipeline itself. Often referred to as “continuous integration and continuous delivery” (or CI/CD), a highly automated release pipeline has the potential to have the greatest impact on developer productivity and therefore the greatest impact on responsiveness to the business. Of course, CI/CD carries with it its own requirements, including:
- A hyper-standardized infrastructure, potentially yielding:
  - Greater parity of environments across dev, test, QA, and production
  - Shorter mean time to resolution (MTTR) for troubleshooting
  - Highly optimized and efficient procedures for the deployment and replacement of equipment
- A high degree of automation in the infrastructure for deployment of workloads
- Proper instrumentation for operational transparency, monitoring, alerting, reporting, etc.
Note that the characteristics described above are also found in highly virtualized environments, as well as cloud-based infrastructures.
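One way to read the hyper-standardization requirement is that dev, test, QA, and production should all be generated from a single shared definition, differing only in parameters such as scale. A minimal sketch of that idea, with all template fields and environment names hypothetical:

```python
# Sketch of environment parity through one standardized template.
# Every environment is rendered from the same definition; only the
# parameters (here, instance count) vary. All names are hypothetical.

BASE_TEMPLATE = {
    "os": "ubuntu-22.04",
    "runtime": "python3.11",
    "monitoring_agent": True,  # instrumentation is part of the standard
}

SCALE = {"dev": 1, "test": 2, "qa": 2, "prod": 8}

def render(environment):
    """Produce an environment spec from the shared template."""
    spec = dict(BASE_TEMPLATE)
    spec["instances"] = SCALE[environment]
    return spec

# dev and prod differ only in scale, never in configuration:
dev, prod = render("dev"), render("prod")
assert {k: v for k, v in dev.items() if k != "instances"} == \
       {k: v for k, v in prod.items() if k != "instances"}
```

Because every environment is a rendering of the same template, “it worked in dev but not in prod” problems shrink to differences you chose deliberately, which is where the shorter MTTR comes from.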
What is the Difference Between DevOps and Cloud?
Similar to DevOps, the cloud model also encompasses people, process, and technology, but it is more tightly defined, with an emphasis on infrastructure rather than applications. I personally view ‘cloud’ as a business model that addresses the budgeting, procurement, implementation, consumption, and maintenance of IT assets. Virtualization and consolidation by themselves are not cloud, as virtualization really only addresses the implementation and consumption of those assets. Cloud must encompass the entire IT deployment lifecycle, including the budgeting and procurement (and ultimately chargeback) of those assets as well as the implementation and consumption of the infrastructure. The entire business of IT changes to support a cloud-enabled business.
The National Institute of Standards and Technology (NIST) actually has a definition of cloud computing (which you can read here), but I prefer a simpler, shorter definition (from VMware’s definition, circa 2010):
Cloud Computing is an approach to computing that leverages the efficient pooling of on-demand, self-managed virtual infrastructure, consumed as a service.
Unfortunately, companies that have failed to incorporate cloud business models into their operational procedures are finding themselves falling behind their competition at increasing rates. And because cloud computing addresses only the infrastructure portion of the technology value chain, the industry is coming to recognize that while cloud computing is a step in the right direction, it ultimately falls short of truly meeting the needs of today’s businesses.
So How Do We Enable DevOps in the Datacenter?
Traditional infrastructures with heavy reliance on physical systems or those lacking programmable interfaces are inherently brittle and cumbersome. The greater the effort necessary to make configuration changes to layers of the stack, the less flexible and responsive the infrastructure (and by extension, the operations team) will be. As we discussed in the last section, cloud computing attempts to address these issues through a highly virtualized environment coupled with a service-oriented mentality. We now know, however, that hyper-standardization (which is almost a prerequisite for virtualization, and certainly a best practice) and virtualization together are not enough. Ideally, we need automated deployment of physical or virtual servers. We need deployment of those workloads on demand to the environment (dev/test/prod) of our choosing. We need a finely tuned continuous integration and continuous delivery application release pipeline. In short, the entire environment is designed with streamlined delivery of new applications / code / features in mind. We need a software-defined enterprise.
A software-defined enterprise allows for changes to the infrastructure and application(s) on the fly, as ideally the entire application and its requisite SLA(s) are encapsulated in software, defined as code, governed by policy, and therefore inherently flexible. It only makes sense, then, that a software-defined enterprise is better equipped to embrace DevOps.
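“Defined as code, governed by policy” can be made concrete: the application’s shape and its SLA live in a data structure, and a policy check gates any proposed change before it reaches the infrastructure. The field names and thresholds below are hypothetical illustrations, not any vendor’s schema.

```python
# Sketch of a software-defined application: the app and its SLA are
# plain data, and a policy function gates proposed changes. Fields and
# thresholds are hypothetical.

app = {
    "name": "orders-service",
    "replicas": 3,
    "sla": {"min_replicas": 2, "max_latency_ms": 200},
}

def policy_allows(app, change):
    """Reject any change that would violate the encoded SLA."""
    proposed = {**app, **change}
    return proposed["replicas"] >= app["sla"]["min_replicas"]

print(policy_allows(app, {"replicas": 4}))  # True: scale-up is fine
print(policy_allows(app, {"replicas": 1}))  # False: breaches the SLA
```

Because the SLA is data rather than tribal knowledge, the same check can run in the release pipeline, at deployment time, and during day-2 operations, which is what makes the environment safe to change on the fly.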
In a future post, I will discuss how VMware and the vRealize Suite may be utilized to enable DevOps in the Software-Defined-Datacenter.