Archive for February, 2010

Q&A: Erik Troan on the role of version control in Operations


Damon Edwards


“Just logging into a box and changing the software stack or its configuration is an idea whose time has passed.”

                                                                  -Erik Troan 

 

I recently spoke with Erik Troan about a topic he’s passionate about: bringing version control concepts and tools to IT Operations. Below is the lightly edited transcript of the highlights of the conversation. 

 

Erik Troan is the Founder and CTO of rPath. Erik previously served in various roles at Red Hat including Vice President of Product Engineering, Senior Director of Marketing, and chief developer for Red Hat Software. You might also know him as one of the two original engineers who wrote RPM.

 

Damon:
You were one of the original engineers who wrote RPM. Since your RPM days, how has your thinking about the role of version control in operations evolved?

Erik:
RPM was originally written to let two guys in an apartment build a Linux distribution without losing their minds. We really focused on the build side. Pristine source and patches were very important to us. The idea of sets of packages was not important to us. Probably the biggest shift came as I started working with large companies and large deployments — originally at Red Hat and now with rPath. Package management wasn’t about moving around individual packages; it was about installing a consistent set of software on machines and making sure those machines stayed consistent. So that’s where this idea of strong system version control came from.

It’s interesting to have a version of a package, but it’s more interesting to be able to have a version of a deployed system. One helps the developers at a Linux distribution shop; the other helps systems administrators out in the wild. When you look at how RPM dependencies work, dependencies are solved against whatever package happens to be newest (which can change tomorrow), while tools in the version control universe solve dependencies against a versioned repository with richer and finer-grained dependency discovery and resolution. When you look at version control from that perspective, you get a sys admin tool rather than a developer tool.

Version control for system definitions is one piece of the scheme — you still have to have configuration and orchestration processes around that. But version control is a great underpinning to all of it because your orchestration tools and your configuration tools get smarter when you have consistent sets of software on client boxes and a consistent way to maintain those over time. It’s the difference between Puppet making sure Apache is installed and it making sure the right version of Apache is installed along with the system components that Apache has been validated against. You put Puppet policies into a version control system; shouldn’t configuration versioning be matched by a version control system for provisioning software?
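To make that contrast concrete, here is a minimal Python sketch, with invented package names and versions, of the difference between resolving against whatever is newest today and resolving against a pinned repository snapshot. It illustrates the idea only; it is not how RPM or rPath's products actually resolve dependencies.

```python
# Hypothetical illustration: resolving a dependency against a pinned,
# versioned repository snapshot rather than "whatever is newest today".

# An immutable snapshot of the repository captured at a point in time.
SNAPSHOT_2010_02_01 = {
    "httpd": "2.2.14-1",
    "openssl": "0.9.8k-7",
    "mod_ssl": "2.2.14-1",
}

# Today's "latest" view of the same repository -- it can change tomorrow.
LATEST = {
    "httpd": "2.2.15-1",
    "openssl": "1.0.0-0.beta5",
    "mod_ssl": "2.2.15-1",
}

def resolve(package: str, repo: dict) -> str:
    """Return the version a dependency solver would pick from `repo`."""
    return f"{package}-{repo[package]}"

# Resolving against "latest" can give a different answer every day, so two
# machines built a week apart can silently diverge.
print(resolve("httpd", LATEST))

# Resolving against a named snapshot gives the same answer every time,
# which is what makes the resulting system reproducible and auditable.
print(resolve("httpd", SNAPSHOT_2010_02_01))
```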

 

Damon: 
From a modern software development point of view, keeping everything in version control is almost a foregone conclusion. However, in IT Operations the role of version control is fairly new, and it remains an underused strategy. Why do you think that is?

Erik:
I think that version control is a response to complexity. If you go back 30 years, who would use version control for software development? The guys who built UNIX used it. People at IBM used it a little bit. But most people had a directory full of source code and they just lived with it. In the ’80s, source code became complicated. Projects became bigger. All of a sudden you had a whole bunch of fingers in the pot and things were changing all the time. You needed the ability to understand what was going on. To track what was going on. To do bisection when something broke: when did it break? Which patch broke it? All of this really just arose out of the complexity of source code development.

I think you are seeing the same thing happen in the IT arena. Again, if you go back to the ’80s, minicomputers had just come out and people only had three computers to maintain. That’s just not that hard. Then the ’90s came along and people had 25 or 30 Sun machines to maintain. In today’s world, people have tens of thousands of machines to maintain. I don’t talk to very many companies now who have fewer than 5,000 machines. They have them in cloud environments, datacenters, virtual environments — just the sheer complexity and scale of that is making version control an important part of the process. It lets you ask questions like “how is this machine configured today?”, “how is it going to be configured tomorrow?”, “how is it different from how it was yesterday?”. It adds that time dimension to systems management so you understand where you are, how you got there, and where you are going next. That’s really what version control is all about. And then of course there is the ability to go backwards — if something breaks, how can I undo that and get back to something that worked?

Just logging into a box and changing the software stack or its configuration is an idea whose time has passed.

 

Damon: 
What is the value of using version control in an operational environment from an individual’s point of view? What’s the value from an organizational point of view?

Erik: 
One of the things I should emphasize is that when we talk about version control, we are really talking about a version control system — the whole system for how you do version control. You can think about it like CVS or Subversion. As an individual systems administrator, I can log into a box and see what version of what software is on the box, which you can do with something like RPM. But I should also be able to see what version of the entire manifest is on there. So instead of just versioning individual packages, you’ve versioned a set of packages. If box #1 and box #2 are running the same manifest version, then I know that the boxes are probably the same. Having the version numbers of 1,000 RPMs is information overload; collapsing that to a single version number for the complete system manifest makes things comprehensible.

But you also want to do things like tell how a system is different from the version that is supposed to be on there. So you can go to the box and say, show me how you are different from the manifest that is assigned to this box. Or, there’s a new manifest — show me how this system differs from that new manifest. The other thing you can do is say that this box is broken — it was working yesterday but it is not working today — so let me look at my version history and know exactly what has changed. What packages have changed? What configuration files have changed? Who made the changes? This box needs to be online right now, so let me undo those changes — go back in time and put the box back to how it was. This idea of being able to examine forward, move backwards, and have all of your configurations and everything else under a version control system is really a strong tool in a system admin’s quiver.
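To illustrate that idea, here is a rough Python sketch, using made-up package data and helper names (not any particular product's format), of collapsing a box's package set into a single manifest version and diffing a running box against the manifest assigned to it:

```python
# Hypothetical sketch: versioning the *set* of packages on a box instead of
# tracking 1,000 individual package versions, then diffing a box against
# the manifest it is supposed to be running.
import hashlib

def manifest_version(manifest: dict) -> str:
    """Collapse a full package manifest into one comparable version string."""
    canonical = "\n".join(f"{name}={ver}" for name, ver in sorted(manifest.items()))
    return hashlib.sha1(canonical.encode()).hexdigest()[:12]

def diff(expected: dict, actual: dict) -> dict:
    """Show exactly how a running box differs from its assigned manifest."""
    changes = {}
    for name in sorted(set(expected) | set(actual)):
        if expected.get(name) != actual.get(name):
            changes[name] = (expected.get(name), actual.get(name))
    return changes

assigned = {"httpd": "2.2.14-1", "openssl": "0.9.8k-7", "bash": "3.2-33"}
running  = {"httpd": "2.2.14-1", "openssl": "0.9.8k-9", "bash": "3.2-33"}

# One version number answers "are box #1 and box #2 the same?"
print(manifest_version(assigned), manifest_version(running))

# The diff answers "what changed since yesterday?" -- the starting point
# for fixing the drift or rolling back to the previous manifest.
print(diff(assigned, running))  # {'openssl': ('0.9.8k-7', '0.9.8k-9')}
```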

On the enterprise or departmental level, this really becomes valuable by enabling automation. The Visible Ops handbook talks a lot about the idea of a Definitive Software Library, which is a versioned repository where all of your software comes from. The reason for that is that for all of the software artifacts going onto machines in your infrastructure, you want to know where they came from and how they got there. You want to be able to go back and say, “I need to know everywhere this version of SSL is running because there is a security hole in the library.” Version control systems let me ask that kind of question. For automation, version control becomes critical. If you are deploying 1,000 machines, you’re not deploying 1,000 individual machines. You’re deploying 1,000 cookie-cutter machines. If they are all the same, then that definition ought to be in a version control system so you can view the definition over time and understand how it has changed — from yesterday to today to tomorrow. Not to mention deploying another 1,000 next week that are exact replicas of the last 1,000.
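As a small illustration of the kind of question a versioned repository of system manifests makes cheap to answer, here is a hedged Python sketch with invented host names and version numbers:

```python
# Hypothetical sketch: with every machine's manifest held in a versioned
# repository, "where is this version of SSL running?" becomes a query
# instead of a box-by-box investigation.

fleet_manifests = {
    "web01": {"openssl": "0.9.8k-7", "httpd": "2.2.14-1"},
    "web02": {"openssl": "0.9.8m-1", "httpd": "2.2.14-1"},
    "db01":  {"openssl": "0.9.8k-7", "postgresql": "8.4.2-1"},
}

def hosts_running(package: str, vulnerable_versions: set) -> list:
    """Return every host whose manifest pins a vulnerable version."""
    return [host for host, manifest in sorted(fleet_manifests.items())
            if manifest.get(package) in vulnerable_versions]

print(hosts_running("openssl", {"0.9.8k-7"}))  # ['db01', 'web01']
```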

Version control systems also bring structure and predictability to the release lifecycle. Systems can be easily and consistently recreated as they move through the development, test, and production lifecycle, and that eliminates the risk of configuration drift as the system progresses through those stages.

 

Damon:
As with all ideas in technology, I’m sure there are those who will come out on the other side of this issue. What are some of the arguments you hear against the idea of advanced use of version control in operations, or objections as to why it won’t work in someone’s specific situation?

Erik: 
There are really two arguments. One is “that’s not the way we do it now… what we do now is OK… it’s too hard to change and we just don’t want to do it”. I get that a lot — just that inertia — people don’t want to change their systems. The other argument you get is “every machine we have is different so we can’t make our machines the same… every artifact is different so why would you version control them?” My answer to that is that if you have 1,000 machines in your business and they are all different, then you are doing something extremely wrong. Your version control system can help you understand why they are different. It can represent why they are different. But it can also help you eliminate those differences.

And then there is also just the prioritization — “how much of a priority is this?” and “do we have other things we need to solve today?”. It’s unusual to talk to anyone who says that using version control to deploy systems is a bad idea. It’s just finding the people who feel like it’s a problem they have to solve right now because they have a compliance issue, or a repeatability issue, or an automation issue. In IT, it’s the problems that get fixed. You don’t do a lot of things without a problem to solve.

 

Damon:
How would you go to a company’s executive-level management and explain that they need to be dedicating resources to bringing strong version control practices to their operations?

Erik: 
A lot of this comes down to cost control or reduction. Everywhere I look the number of servers under management is growing. Virtualization is putting four managed instances on each physical box. Cloud technologies are provisioning new instances in thirty seconds; each of those needs to be managed. Most shops try to hold a constant ratio of machines per sysadmin. If you let the number of instances grow by five times thanks to virtualization and cloud, are you planning on growing your IT team by five times? If not, you better automate everything you can see. Complete system version control makes system administrators more efficient.

There are two external drivers as well. One is risk: when systems are being hand-assembled and configured — you’ll see a Puppet, a Chef, or a cfengine used in only a small percentage of enterprises — how do you know they are going to work tomorrow? If you lose an employee, is the knowledge that person had about how a system was put together — and why it was put together that way — captured anywhere? Or is it just in their head? Version control systems capture that information.

Compliance, such as security compliance standards, is another great motivator. You may have to comply with an external standard like PCI or Sarbanes-Oxley. Can you audit your systems to know that they are right? If all of your systems are hand-assembled, then you don’t even have a definition of what correct looks like. So what are you measuring against if people are just allowed to log into a box and change things willy-nilly? For example, with PCI compliance — the standard you have to adhere to if you are going to hold onto credit card numbers for subscription billing — you are going to have auditors come in and ask “Where does this package come from?”, “Who put it there?”, “Why is it there?”, “How do you get updates on there?”, “How do you know which boxes need updates?”. Those are the kinds of questions that can be answered by a good version control system. But if you have hand-assembled boxes, then they are pretty much impossible to answer.
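Those audit questions get much easier when every change is recorded against a versioned definition of the box. A minimal sketch of that idea, with invented hosts, users, and ticket numbers:

```python
# Hypothetical sketch: an append-only change history per box, so audit
# questions like "who put this package here, and when?" become lookups.
from datetime import date

history = [
    {"host": "web01", "when": date(2010, 1, 12), "who": "jdoe",
     "change": {"openssl": ("0.9.8k-7", "0.9.8k-9")}, "ticket": "CHG-1041"},
    {"host": "web01", "when": date(2010, 2, 3), "who": "asmith",
     "change": {"httpd": ("2.2.14-1", "2.2.15-1")}, "ticket": "CHG-1107"},
]

def audit(host: str, package: str) -> list:
    """Answer 'where did this package version come from?' for one box."""
    return [entry for entry in history
            if entry["host"] == host and package in entry["change"]]

for entry in audit("web01", "openssl"):
    print(entry["when"], entry["who"], entry["ticket"], entry["change"])
```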

People over Process over Tools


Alex Honor

Working as a consultant in the trenches, I find myself practicing an implicit methodology that helps me focus on short term improvements while working towards long term gains. The methodology applies to many aspects of the software development life cycle that spans development, test and operations.

The methodology begins by looking for the right person or people to fulfill a needed role. Secondly, ensure that the role’s process is correct and optimal. Lastly, improve the process’s efficiency by reducing steps through some tool-based automation. Notice that tools and automation come last! I call this methodology People over Process over Tools. It’s my mantra, and I sound like a broken record about it.

 

The people I work with are typically techies, and like many techies, they often prefer to react to process problems with tools first. The thinking goes like this: tools institute a methodology, embody a process, and through their use enforce a policy. Tools can seem like a magic bullet: address all the issues at once in a new tool project, the thinking goes.

What I have found is this: selecting and instituting a tool is almost always the worst first step in improving an important function, especially when the function is spread over more than one team. It is almost always simpler to first look at how the function is being done — who does it and how. Choosing, developing, and rolling out a tool is an investment, and one that might not pay off if the right people and process are not put into place first.

Aligning the team to the function and then fixing or optimizing the process they use is almost always much more straightforward and is a prerequisite to the use of new tools or automation. 

People

Often the easiest step to improvement is making sure there is a person responsible for the function at hand. This seems obvious, but organizational dislocation often results in ambiguity about which person or group is actually responsible. For example, a process might be repeated inside several groups. Roughly the same process may occur in each group, but there is no sharing of knowledge or procedure between them. To address this kind of dislocation, the organization might centralize that function into one role or set up a meeting to ensure common practices. In other cases, perhaps the needed skill set does not exist for that function. The solution then would be to define the needed skills for the role and offer training or fill the position accordingly.

Process

The right people might be on the job, but how the job is done may be incorrect or needlessly difficult. Incorrect processes are those that do not end with the desired result. Incorrect processes might have undocumented assumptions or preconditions. Steps might be explained inaccurately or presented in the wrong order. Often processes evolve over time and become dogma, even if they present many hurdles or exceptions to the people who perform them.

In these cases, an eye towards process (re-)engineering can help. Accuracy and order are the first priorities; optimization and reducing duration must come later.

Tools

Once the right people are performing the right process, one can look for further gains by using tools to automate steps… assuming it is worth the cost and effort. I always like to emphasize the cost and effort side of the decision. It may be obvious to tool developers how automation will make drastic improvements, but they might not build the business case needed to support their effort over the long haul.

For tool makers with the right skills, a development project that yields productivity gains always seems like a no-brainer. But how do they prove it to their management? More importantly, how can they increase the chances that their tool will be adopted by the people who will use it rather than worked around? I have found that building a relationship with the future end users is always the first and most important step.

 

Rather than begin with code, begin by looking at the process from an organizational perspective. Make sure there isn’t dislocation at the hand off points. Make sure the procedure is correct and manually repeatable. Only then look for available tools or a justification to write your own.

What is DevOps?

Damon Edwards

Update 1: Wikipedia now has a pretty good DevOps page

Update 2: Follow-up posts on the business problems that DevOps solves and the competitive business advantage that DevOps can provide.


 

If you are interested in IT management — and web operations in particular — you might have recently heard the term “DevOps” being tossed around. The #DevOps tag pops up regularly on Twitter. DevOps meetups and DevOpsDays conferences are gaining steam.

DevOps is, in many ways, an umbrella concept that refers to anything that smooths out the interaction between development and operations. However, the ideas behind DevOps run much deeper than that.

 

What is DevOps all about?

DevOps is a response to the growing awareness that there is a disconnect between what is traditionally considered development activity and what is traditionally considered operations activity. This disconnect often manifests itself as conflict and inefficiency.

As Lee Thompson and Andrew Shafer like to put it, there is a “Wall of Confusion” between development and operations. This “Wall” is caused by a combination of conflicting motivations, processes, and tooling.

 

Development-centric folks tend to come from a mindset where change is the thing that they are paid to accomplish. The business depends on them to respond to changing needs. Because of this relationship, they are often incentivized to create as much change as possible.

Operations folks tend to come from a mindset where change is the enemy.  The business depends on them to keep the lights on and deliver the services that make the business money today. Operations is motivated to resist change as it undermines stability and reliability. How many times have we heard the statistic that 80% of all downtime is due to those self-inflicted wounds known as changes?

Both development and operations fundamentally see the world, and their respective roles in it, differently. Each believes that they are doing the right thing for the business… and in isolation they are both correct!

To make matters worse, development and operations teams tend to fall into different parts of a company’s organizational structure (often with different managers and competing corporate politics) and often work at different geographic locations.

Adding to the Wall of Confusion is the all too common mismatch in development and operations tooling. Take a look at the popular tools that developers request and use on a daily basis. Then take a look at the popular tools that systems administrators request and use on a daily basis. With a few notable exceptions, like bug trackers and maybe SCM, it’s doubtful you’ll see much interest in using each other’s tools or significant integration between them. Even if there is some overlap in types of tools, often the implementations will be different in each group.

Nowhere is the Wall of Confusion more obvious than when it comes time for application changes to be pushed from development to operations. Some organizations call it a “release”, some call it a “deployment”, but one thing they can all agree on is that trouble is likely to ensue. The following scenario is generalized, but if you’ve ever played a part in this process it should ring true.

Development kicks things off by “tossing” a software release “over the wall” to Operations. Operations picks up the release artifacts and begins preparing for their deployment. Operations manually hacks the deployment scripts provided by the developers or creates its own scripts. They also hand-edit configuration files to reflect the production environment, which is significantly different from the Development or QA environments. At best they are duplicating work that was already done in previous environments; at worst they are about to introduce or uncover new bugs.

Operations then embarks on what it understands to be the currently correct deployment process, which at this point is essentially being performed for the first time due to the script, configuration, process, and environment differences between Development and Operations. Of course, somewhere along the way a problem occurs and the developers are called in to help troubleshoot. Operations claims that Development gave them faulty artifacts. Developers respond by pointing out that it worked just fine in their environments, so it must be the case that Operations did something wrong. Developers have a difficult time even diagnosing the problem because the configuration, file locations, and procedure used to get into this state are different from what they expect (if security policies even allow them to access the production servers!).

Time is running out on the change window and, of course, there isn’t a reliable way to roll the environment back to a previously known good state. So what should have been an uneventful deployment ends up as an all-hands-on-deck fire drill, where a lot of trial and error finally hacks the production environment into a usable state.

While deployment is the most obvious pain point, it is only one part of the need for DevOps. As John Allspaw points out, the need for cooperation between development and operations starts well before and continues long after deployment.

 

What’s the benefit of DevOps?

DevOps is a powerful idea because it resonates on so many different levels.

From the perspective of individuals toiling in hands-on development or operational roles, DevOps points towards a life that is free from the source of so many of their hassles. It’s by no means a magical panacea, but if you can make DevOps work you are removing barriers that are both a significant time-sink and a source of morale-killing frustration. It’s a simple calculation to make: invest in making DevOps a reality and we should all be more efficient, increasingly nimble, and less frustrated. Some may argue that DevOps is a lofty or even farfetched goal, but it’s difficult to argue that you shouldn’t try.

 

For the business, DevOps contributes directly to enabling two powerful and strategic business qualities, “business agility” and “IT alignment”. These may not be terms that the troops in the IT trenches worry about on a daily basis, but they should definitely get the attention of the executives who approve the budgets and sign the checks.

A simple definition of IT alignment is “a desired state in which a business organization is able to use information technology (IT) effectively to achieve business objectives — typically improved financial performance or marketplace competitiveness” [source].

DevOps helps to enable IT alignment by aligning development and operations roles and processes in the context of shared business objectives. Both development and operations need to understand that they are part of a unified business process. DevOps thinking ensures that individual decisions and actions strive to support and improve that unified business process, regardless of organizational structure.

A simple definition of agility in a business context is the “ability of an organization to rapidly adapt to market and environmental changes in productive and cost-effective ways” [source].

Of course, developers also have their own specialized meaning of the word “agile”, but the goals are very similar. Agile development methodologies are designed to keep software development efforts aligned with customer/company goals and produce high-quality software despite changing requirements. For most organizations, Scrum, the iterative project management methodology, is the face of Agile.

Agile promises close interaction and fast feedback between the business stakeholders making the decisions and the developers acting on those decisions. If you look at the output of a well-functioning Agile development group you should see a steady stream of improvement that is in tune with business needs.

However, when you step back and look at the entire development-to-operations lifecycle from an enterprise point of view, that Agile stream and its associated benefits are often obscured. The Wall of Confusion leads to a dissociation of the application lifecycle. Development works at one pace and Operations works at another. The long intervals between production deployments, in effect, turn the Agile efforts of an organization right back into the waterfall lifecycle it was trying to avoid. No matter how Agile the development organization is, it’s exceedingly difficult to change the slow and lumbering nature of a business while the Wall of Confusion is in place. Andrew Rendell has a great post that tells the anecdotal story of how an organization’s cumbersome release processes turned its agile development efforts right back into a waterfall.

DevOps enables the benefits of Agile development to be felt at the organizational level. DevOps does this by allowing for fast and responsive, yet stable, operations that can be kept in sync with the pace of innovation coming out of the development process.

If you are seeking to establish a DevOps project within your organization, be sure to keep the terms “IT alignment” and “business agility” in mind.

 

How do we bring DevOps to life?

Like most emerging topics, it’s easier to find a consensus about the problem than it is about the solution.

If you listen to the current DevOps conversations, there appear to be three areas of focus for DevOps-related solutions:

1. Measurement and incentives to change culture – Changing culture and reward systems is never easy. However, if you don’t change your organization’s culture, fulfilling the promise of DevOps will be difficult, if not impossible.  When looking to influence culture in a business organization, you need to pay close attention to how you measure and judge performance. What you measure influences and incentivizes behavior. All parties across the development-to-operations lifecycle need to understand their stake in the larger business process of which they are a part. The success of both individuals and groups needs to be measured within the context of the success of the entire development-to-operations lifecycle. For many organizations this is a shift from more of a siloed approach to performance measurement, where each group measures and judges performance based on what matters to that specific group. This previous post I wrote dives deeper into the process for getting the correct end-to-end view of measurement into place.

2. Unified processes – The important theme of DevOps is that the entire development-to-operations lifecycle must be viewed as one end-to-end process. Individual methodologies can be followed for individual segments of that process (such as Agile on one end and Visible Ops on the other), so long as those processes can be plugged together to form a unified process (and, in turn, be managed from that unified point-of-view). Much like the question of measurement and incentives, each organization will have slightly different requirements for achieving that unified process. Here is an excellent post by Six Sigma Blackbelt Ray Riescher on his experience bridging Scrum and ITIL.

3. Unified tooling –  This is the area in which most of the DevOps discussion has been focused. This isn’t surprising since it seems to be the natural reflex of technologists, for better or for worse, to jump straight into tooling discussions when looking to solve a problem. If you follow the communities of tools like Puppet, Chef, or ControlTier then you are probably already aware of the significant focus on bridging development and operations tooling. “Infrastructure as code”, “model driven automation”, and “continuous deployment” are all concepts that would fall under the DevOps banner. Alex Honor wrote a good post about some of the design patterns that toolsmiths working on DevOps tools need to worry about.

Jake Sorofman does a great job with the following overview of what types of tooling are required to make DevOps a reality:

A version-controlled software library—which ensures all system artifacts are well defined, consistently shared, and up to date across the release lifecycle. Development and QA organizations draw from the same platform version, and production groups deploy the exact same version that has been certified by QA.

Deeply modeled systems—where a versioned system manifest describes all of the components, policies and dependencies related to a software system, making it simple to reproduce a system on demand or to introduce change without conflicts.

Automation of manual tasks—taking the manual effort out of processes like dependency discovery and resolution, system construction, provisioning, update and rollback. Automation—not hordes of people—becomes the basis for command and control of high-velocity, conflict-free and massive-scale system administration.

It’s essential that all individual tools be considered part of a larger toolchain that spans the entire Development to Operations lifecycle (even if tight technical integration isn’t an option). Tool choice and implementation decisions (on both the toolchain and individual tool levels) need to be made in the context of their impact on that end-to-end lifecycle. If you are wondering how that is done, take a look at this example of an open source fully automated provisioning toolchain that can be plugged into a larger Development to Operations toolchain.
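To give a rough sense of the version-controlled software library idea from the list above, here is a small Python sketch; the release names, the certification flag, and the promote function are invented for illustration and do not refer to any specific tool:

```python
# Hypothetical sketch of a version-controlled software library: every
# environment runs a named, immutable release, and production can only
# receive a version that QA has already certified.

library = {
    "myapp-1.4.1": {"certified_by_qa": True},
    "myapp-1.4.2": {"certified_by_qa": True},
    "myapp-1.5.0": {"certified_by_qa": False},
}

deployed = {"dev": "myapp-1.5.0", "qa": "myapp-1.4.2", "prod": "myapp-1.4.1"}

def promote_to_prod(release: str) -> None:
    """Deploy to production only what the library records as QA-certified."""
    if not library.get(release, {}).get("certified_by_qa"):
        raise RuntimeError(f"{release} has not been certified by QA")
    deployed["prod"] = release  # the exact same artifact, not a rebuild

promote_to_prod("myapp-1.4.2")
print(deployed)  # prod now runs the same version QA certified
```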

 

What DevOps is not!

At the recent OpsCamp Austin, Adam Jacob from Opscode/Chef railed against the idea that some system administrators were now seeking to change their job title to “DevOps”. I have to admit that, at the time, I was a bit skeptical that this was actually happening. However, I have since witnessed people on multiple occasions expressing this desire to rewrite job titles or establish DevOps as some sort of new role to be filled.

For example, Stephen Nelson-Smith wrote an excellent post about DevOps. While I agree with almost everything he said, I have to strongly disagree with the idea that DevOps should be a unique position or job title.

Turning “DevOps” into a new job title or special role sets a dangerous precedent. This makes DevOps someone else’s problem. You’re a DBA? Don’t worry about DevOps, that’s the DevOps team’s problem. You’re a security expert? Don’t worry about DevOps, that’s the DevOps team’s problem.

Think of it this way. You wouldn’t say “I need to hire an Agile” or “I need to hire a Scrum” or “I need to hire an ITIL” would you? No, you would just say I need to hire developers, project managers, testers, or systems administrators who understand these concepts and methodologies. DevOps is no different.

 

Why the name “DevOps”?

Probably because it’s catchy. It’s also a good mental image of the concept at the widest scale — when you bring Dev and Ops together you get DevOps. There have been other terms for this idea, such as Agile Operations, Agile Infrastructure, and Dev2Ops (a term we’ve been using on this blog since 2007). There are also plenty of examples of people arriving at the idea of DevOps on their own, without calling it “DevOps”. For an excellent example of this, read this recent post by Ernest Mueller or watch John Allspaw and Paul Hammond’s seminal presentation “10+ Deploys Per Day: Dev and Ops Cooperation at Flickr” from Velocity 2009.

For better or for worse, DevOps seems to be the name that is catching people’s imaginations. I credit the efforts of Patrick Debois for championing the term “DevOps”, bringing the first DevOpsDays conference to a (successful) reality, and maintaining the devops.info site.

Be sure to join in the DevOps conversation at the upcoming DevOps Day USA conference on June 25, 2010 in Mountain View, CA. It’s the day after O’Reilly’s Velocity 2010 conference, so be sure to hit both!

 

Deployment management design patterns for DevOps


Alex Honor

If you are an application developer, you are probably accustomed to drawing from established design patterns. A system of design patterns can play the role of a playbook, offering solutions based on combining complementary approaches. Awareness of design anti-patterns can also be helpful in avoiding future problems arising from typical pitfalls. Ideally, design patterns can be composed together to form new solutions. Patterns can also provide an effective vocabulary for architects, developers, and administrators to discuss problems and weigh possible solutions.

It’s a topic I have discussed before, but what happens once the application code is completed and must run in integrated operational environments? For companies that run their business over the web, the act of deploying, configuring, and operating applications is arguably as important as writing the software itself. If an organization cannot efficiently and reliably deploy and operate the software, it won’t matter how good the application software is.

But where are the design patterns embodying best practices for managing software operations? Where is the catalog of design patterns for managing software deployments? What is needed is a set of design patterns for managing the operation of a software system in the large. Design patterns like these would be useful to those who automate any of these tasks, and they would help the tool developers who have adopted the “infrastructure as code” philosophy.

So what are typical design problems in the world of software operation?

The challenges faced by software operations groups include:

  • Application deployments are complex: they are based on technologies from different vendors, are spread out over numerous machines in multiple environments, use different architectures, and are arranged in different topologies.
  • Management interfaces are inconsistent: every application component and supporting piece of infrastructure has a different way of being managed. This includes both how components are controlled and how they are configured.
  • Administrative management is hard to scale: As the layers of software components increase, so does the difficulty of coordinating actions across them. This is especially hard when one instance of an application can be set up to run in a minimal footprint while another is designed to support massive load.
  • Infrastructure size differences: Software deployments must run in different-sized environments. Infrastructure used for early integration testing is smaller than the infrastructure supporting production. Infrastructure based on virtualization platforms also introduces the possibility of environments that can be re-scaled based on capacity needs.

Facing these challenges first-hand, I have evolved a set of deployment management design patterns using a “divide and conquer” strategy. This strategy helps identify minimal domain-specific solutions (i.e., the patterns) and how to combine them in different contexts (i.e., using the patterns systematically). The set of design patterns also includes anti-patterns. I call the system of design patterns “PAGODA”. The name is really not important, but as an acronym it can mean:

  • PAtterns GOod-for Deployment Administration
  • PAckaGe-Oriented Deployment Administration
  • Patterns for Application and General Operation for Deployment Administrators
  • Patterns for Applications, Operations, and Deployment Administration

Pagoda as an acronym might be a bit of a stretch, but the image of a pagoda just strikes me as a picture of how the set of patterns can be combined to form a layered structure.

 

Here is a diagram of the set of design patterns arranged by how they interrelate.

 

The diagram style is inspired by a great reference book, Release It!. You can see that the anti-patterns are colored red while the design patterns that mitigate them are in green.

Here is a brief description of each design pattern:

  • Command Dispatcher: A mechanism used to look up and execute logically organized, named procedures within a data context, permitting environment abstraction within the implementations. Mitigates: Too Many Tools. Also known as: Command Framework.
  • Lifecycle: A formalized series of operational stages through which the resources comprising application software systems must pass. Mitigates: Control Hairball.
  • Orchestrator: Encapsulates a multi-step activity that spans a set of administrative steps and/or other process workflows. Mitigates: Control Hairball, Too Many Cooks. Also known as: Process Workflow, Control Mediator.
  • Composable Service: A set of independent deployments that can be assembled together to support new patterns of integrated software systems. Mitigates: Monolithic Environment. Also known as: Composable Deployments.
  • Adaptive Deployment: The practice of using an environment-independent abstraction along with a set of template-based automation that customizes software and configuration at deployment time. Mitigates: Control Hairball, Configuration Bird Nest, Unmet Integration. Also known as: Environment Adaptation.
  • Code-Data Split: The practice of separating the executable files (the product) from the environment-specific deployment files, such as configuration and data files, which facilitates product upgrades and co-resident deployments. Mitigates: Service Monolith. Also known as: Software-Instance Split.
  • Packaged Artifact: A structured archive of files used for distributing any software release during the deployment process. Mitigates: Adhoc Release.
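To give a rough sense of how one of these patterns might look in practice, here is a minimal Python sketch of the Command Dispatcher idea; the command name, context keys, and behavior are invented for the example and are not taken from any particular tool:

```python
# Hypothetical sketch of the Command Dispatcher pattern: named procedures
# are looked up and executed within a data context (the environment), so
# callers never hard-code environment-specific details.

class CommandDispatcher:
    def __init__(self, context: dict):
        self.context = context        # the environment abstraction
        self.commands = {}

    def register(self, name: str):
        """Decorator that registers a procedure under a logical name."""
        def wrapper(fn):
            self.commands[name] = fn
            return fn
        return wrapper

    def dispatch(self, name: str, **kwargs):
        """Look up a named procedure and run it within the context."""
        return self.commands[name](self.context, **kwargs)

dispatcher = CommandDispatcher({"env": "qa", "httpd_conf": "/etc/httpd/conf"})

@dispatcher.register("restart-web")
def restart_web(ctx, graceful=True):
    # A real implementation would shell out to the service manager here.
    return f"restarting httpd in {ctx['env']} (graceful={graceful})"

print(dispatcher.dispatch("restart-web", graceful=False))
```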

The anti-patterns might be more interesting since they represent practices that have definite disadvantages:

  • Too Many Tools: Each technology and process activity needs its own tool, resulting in a multitude of syntaxes and semantics that must each be understood by the operator, which makes automation across them difficult to achieve. Mitigated by: Command Dispatcher. Also known as: Tool Mishmash, Heterogeneous Interfaces.
  • Too Many Cooks: A common infrastructure must be maintained by various disciplines, but each uses its own tools to effect change, increasing the chances for conflicts and overall negative effects. Mitigated by: Control Mediator. Also known as: Unmediated Action.
  • Control Hairball: A process that spans activities occurring across various tools and locations in the network is implemented in a single piece of code for convenience, but turns out to be very inflexible, opaque, and hard to maintain and modify. Mitigated by: Control Mediator, Adaptive Deployment, Workflow.
  • Configuration Bird Nest: A network of circuitous indirections used to manage configuration that intertwines like a labyrinth of straw in a bird nest. People often construct a bird nest in order to provide a consistent location for an external dependency. Mitigated by: Environment Adaptation.
  • Service Monolith: Complex integrated software systems end up being maintained as a single opaque mass, with no one understanding entirely how it was put together, what elements it comprises, or how they interact. Mitigated by: Code-Data Split, Composable Service. Also known as: House Of Cards, Monolithic Environment.
  • Adhoc Release: The lack of standard practices and distribution mechanisms for releasing application changes. Mitigated by: Packaged Artifact.
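Similarly, here is a hedged Python sketch of the Adaptive Deployment idea (and, implicitly, the Code-Data Split): a single environment-independent template is customized with environment-specific data at deployment time. The configuration keys and environment values are made up for the example:

```python
# Hypothetical sketch of Adaptive Deployment (with a nod to Code-Data
# Split): one environment-independent template is customized with
# environment-specific data at deployment time, instead of hand-editing
# configuration files for each environment.
from string import Template

APP_CONFIG = Template("db_host=$db_host\nthreads=$threads\nlog_level=$log_level\n")

ENVIRONMENTS = {
    "test": {"db_host": "testdb01", "threads": 4, "log_level": "debug"},
    "prod": {"db_host": "proddb01", "threads": 64, "log_level": "warn"},
}

def render(env: str) -> str:
    """Produce the environment-specific config from the shared template."""
    return APP_CONFIG.substitute(ENVIRONMENTS[env])

# The template (the "code") stays the same; only the environment data varies.
print(render("test"))
print(render("prod"))
```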

 

Of course, this isn’t the definitive set of deployment management patterns. No doubt you have discovered and developed your own. It is useful to identify and catalog them so they can be shared with others who will face scenarios already examined and resolved. Perhaps the set offered here will spur a greater effort.

 

 

Videos: Michael Coté, Travis Campbell, Erica Brescia, Andrew Shafer at OpsCamp Austin 2010

Damon Edwards

Here’s the second round of “3 Questions” style interviews I filmed at OpsCamp Austin 2010. For the first round, go here.

Michael Coté (Redmonk), Travis Campbell (University of Texas / LOPSA), Erica Brescia (BitRock / BitNami.org), and Andrew Shafer (Agile Infrastructure developer and speaker) were asked:

1. What brought you to OpsCamp?
2. What excites you in 2010?
3. Wildcard question!…

 

Michael Coté is a noted analyst and blogger/podcaster at RedMonk, covering primarily enterprise software.
Wildcard question: How can open source projects better interface with analysts?

 

 

Travis Campbell is a Senior Systems Administrator at UT Austin and is on the board of directors of LOPSA.
Wildcard question: What does the HPC community think of “cloud computing”? 

 

 

Erica Brescia is the CEO of BitRock (sponsors of BitNami.org).
Wildcard question: How does BitRock’s unique legacy set it up for success today? 

 

 

Andrew Shafer is a blogger, speaker, and developer currently focused on Agile Infrastructure. 
Wildcard question: What advice would you give people looking into the topic of Agile Operations? 

 

 

Videos: Luke Kanies, Bill Karpovich, Ernest Mueller at OpsCamp Austin 2010

Damon Edwards

Here’s the first round of “3 Questions” style interviews I filmed at OpsCamp Austin 2010.

Luke Kanies (Reductive Labs / Puppet), Bill Karpovich (Zenoss), and Ernest Mueller (National Instruments) were asked:

1. What brought you to OpsCamp?
2. What excites you in 2010?
3. Wildcard question!…

 

Luke Kanies is the CEO of Reductive Labs and the original author of Puppet.
Wildcard question: How did you enable the Puppet Community’s early success?

 

 

Bill Karpovich is the CEO & Co-Founder of Zenoss.
Wildcard question: Where is the innovation happening in monitoring?

 

 

Ernest Mueller is a Web Systems Architect at National Instruments.
Wildcard question: What advice do you have for open source toolmakers?