Archive for March, 2010

Videos: Jesse Robbins, Ezra Zygmuntowicz, Colleen Smith at Cloud Connect 2010

Damon Edwards

Here’s another round of “3 Questions” interviews that I shot at Cloud Connect 2010 in San Jose, CA on March 17, 2010.
Jesse Robbins (Opscode / Chef), Ezra Zygmuntowicz (Engine Yard), and Colleen Smith (Symantec) were asked:
1. What brought you to Cloud Connect?
2. What aspect of the Cloud excites you the most these days?
3. Wildcard question!…
Jesse Robbins is the CEO of Opscode and one of the creators of Chef.
Wildcard question: How does “infrastructure as code” unlock the promise of the Cloud?


Ezra Zygmuntowicz is a Senior Fellow and Co-Founder of Engine Yard.
Wildcard question: What tooling changes are needed to make DevOps a reality?  


Colleen Smith is an Information Technology Architect at Symantec.
Wildcard question: How will Clouds impact the culture of internal enterprise IT?


Thanks to all for playing along!

Criteria for Fully Automated Provisioning


Damon Edwards

“Done” is one of those interesting words. Everyone knows what it means in the abstract sense. However, look at how much effort has to go into getting developers to agree that done really does mean 100% done (no testing, docs, formatting, acceptance, etc. left to do).

“Fully” is a similarly interesting word. I can’t tell you how many times I’ve encountered a situation where someone says that they’ve “fully automated” their deployments. Then when they walk me through the steps involved in a typical deployment, it’s full of just-in-time hand-editing of scripts, copying and pasting, fetching of artifacts, manual “finishing” or “verification” steps, and things of that nature. Even worse, if you ask two different people to walk you through the same process you might get two completely different versions of it, with “fully” definitely not meaning “fully”.

Just as Agile developers use the mantra “done means done“, operations needs the mantra “fully automated means fully automated“. Without a clear definition of what “fully automated” means, it’s going to be difficult to come up with any kind of consensus around solutions.

As part of the original “Web Ops 2.0: Achieving Fully Automated Provisioning” whitepaper, we listed criteria for “Fully Automated Provisioning”. I’ve taken that content and posted it to the new DevOps Toolchain project. Hopefully it will spur some discussion on what “fully automated” actually means.

Here’s the initial list of criteria:

1. Be able to automatically provision an entire environment — from “bare-metal” to running business services — completely from specification

Starting with bare metal (or stock virtual machine images), can you provide a specification to your provisioning tools and have them automatically deploy, configure, and start up your entire system and application stack? This means not leaving runtime decisions or “hand-tweaking” to the operator. The specification may vary from release to release or be broken down into individual parts provided to specific tools, but the calls to the tools and the automation itself should not vary from release to release (barring a significant architectural change).
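A rough sketch of the idea: one versioned specification drives the whole run, and the call into the tooling never changes from release to release, only the spec does. (The spec format and function names below are invented for illustration; they don’t come from Chef or any particular tool.)

```python
# Hypothetical specification for one release; this document, not the
# operator's hands, is what changes between releases.
SPEC = {
    "release": "2.4.1",
    "roles": {
        "web": {"packages": ["nginx"], "services": ["nginx"]},
        "app": {"packages": ["myapp-2.4.1"], "services": ["myapp"]},
    },
}

def provision(spec):
    """Expand the spec into the ordered actions a real tool would execute."""
    actions = []
    for role in sorted(spec["roles"]):
        config = spec["roles"][role]
        for pkg in config["packages"]:
            actions.append(("install", role, pkg))
        for svc in config["services"]:
            actions.append(("start", role, svc))
    return actions
```

The point of the sketch: `provision(SPEC)` is the same call every time, so there is no room for runtime decisions or hand-tweaking between releases.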

2. No direct management of individual boxes

This is as much a cultural change as it is a question of tooling. Access to individual machines for purposes other than diagnostics or performance analysis should be highly frowned upon and strictly controlled. All deployments, updates, and fixes must be deployed only through specification-driven provisioning tools that in turn manage each individual server to achieve the desired result.

3. Be able to revert to a “previously known good” state at any time

Many web operations lack the capability to roll back to a “previously known good” state. Once an upgrade process has begun, they are forced to push forward and firefight until they reach a functionally acceptable state. With fully automated provisioning, you should be able to supply your provisioning system with a previously known good specification that will automatically return your applications to a functionally acceptable state. The most successful rollback strategy is what can be described as “rolling forward to a previous version”. Database issues are generally the primary complication with any rollback strategy, but it is rare to find a situation where a workable strategy can’t be achieved.
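“Rolling forward to a previous version” can be sketched as nothing more than feeding the last known-good spec back through the same deploy path used for any release. (The history structure and function names here are hypothetical, for illustration only.)

```python
# Hypothetical record of deployed specifications, oldest first.
SPEC_HISTORY = [
    {"release": "2.4.0", "known_good": True},
    {"release": "2.4.1", "known_good": False},  # the release being rolled back
]

def last_known_good(history):
    """Most recent spec that was recorded as known-good."""
    for spec in reversed(history):
        if spec["known_good"]:
            return spec
    raise LookupError("no known-good release on record")

def roll_forward_to_previous(history):
    # A rollback is just another deploy, targeting the old spec;
    # nothing is "undone" in place.
    target = last_known_good(history)
    return "deploy release " + target["release"]
```

Because the rollback is an ordinary deploy of an ordinary spec, it exercises the same automation you trust for every release, rather than a rarely tested "undo" path.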

4. It’s easier to re-provision than it is to repair

This is a litmus test. If your automation is implemented correctly, you will find it is easier to re-provision your applications than it is to attempt to repair them in place. “Re-provisioning” could simply mean an automated cycle of validating and regenerating application and system configurations or it could mean a full provisioning cycle from the base OS up to running business applications.
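The lighter-weight form of re-provisioning, validating and regenerating configurations, can be sketched as a converge step: instead of hand-repairing a drifted config file, regenerate it from the spec and replace it only when it no longer matches. (The config template and function names are invented for illustration.)

```python
def render_config(spec):
    # Illustrative template; a real system would render real config files.
    return "listen_port={}\nworkers={}\n".format(spec["port"], spec["workers"])

def converge(current_contents, spec):
    """Return (changed, contents): detect drift and regenerate wholesale."""
    desired = render_config(spec)
    if current_contents == desired:
        return (False, current_contents)  # already correct; nothing to repair
    return (True, desired)               # drifted; replace, don't patch
```

If this cycle is cheap and trusted, reaching for it becomes easier than diagnosing and patching a box by hand, which is exactly the litmus test described above.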

5. Anyone on your team with minimal domain specific knowledge can deploy or update an environment

You don’t always want your most junior staff to be handling provisioning, but with a fully automated provisioning system they should be able to do just that. Once your domain-specific experts collaborate on the specification for that release, anyone familiar with a few basic commands (and with the correct security permissions) should be able to deploy that release to any integrated development, test, or production environment.


DevOps Toolchain project announced at O’Reilly’s Velocity online conference

Lee Thompson

If you are the type who gets distracted at work while trying to stay plugged into the industry, yesterday was a big big problem.  In Austin, you had SXSW going on; in San Francisco, you had OSBC; in San Jose you had Cloud Connect; and on the internet you had the O’Reilly Velocity Online Conference.  Wow!

The dev2ops guys were busy.  Damon and Alex were presenting at Cloud Connect while I was presenting at Velocity OLC.  I’m an Austin resident, but SXSW really isn’t the DevOps hang-out, at least yet! (heh). 

At Velocity, it was my privilege to announce the next generation of the provisioning toolchain project. Some of the feedback we received on the original toolchain paper came from the front lines of DevOps: “yeah that’s pretty interesting, but there is a lot more to a datacenter than just provisioning”. Good point.

So we scope creeped the hell out of the automated provisioning paper and started the devops-toolchain project dedicated to defining best practices in DevOps and open source tools available to accomplish those practices. 


So this time, the devops-toolchain project is an open source, community-driven project, which by its nature will need to be revved frequently given the constantly shifting nature of “best practices”. We’ve kick-started some of the content and formed a Google Group for the discussion. Come join the conversation!

Here are the slides from my presentation:



The Velocity team did a great job hosting the conference! An example of the great content presented came from Ward Spangenberg of Zynga, who updated us on the latest on security in cloud deployments. Getting security worked out gets more compute into the cloud:


I’m an OSBC alumnus. If you’re into vintage conferences or need to find a way to get over insomnia, check this out from 2007…

contributors presenting at Cloud Connect and Velocity Online Conferences

Damon Edwards

Perhaps proving that you can be in two places at the same time, contributors will be presenting at two different conferences tomorrow (Wednesday, March 17):



Alex Honor and Damon Edwards will be presenting at Cloud Connect in San Jose, CA at 8:30am (PDT):
“Orchestration: The Next Frontier for Cloud Applications”






Lee Thompson will be presenting on the Open Source Toolchain Project at O’Reilly’s Velocity Online at 9:00am (PDT). Registration for this online event is FREE!