
Archive for the ‘News’ Category

Puppet and Chef Rock. Doh. What about all these shell scripts?!


Alex Honor / 

Incorporating a next generation CM tool like Puppet or Chef into your application or system operations is a great way to wrap control around your key administrative processes.

Of course, to make the move to a new CM tool, you need to adapt your current processes into the paradigm defined by the new CM tool. There is an upfront cost to retool (and sometimes to rethink) but later on the rewards will come in the form of great time savings and consistency. 

Seems like an easy argument. Why can’t everybody just start working that way? 

If you are in a startup or a greenfield environment, it is just as simple as deciding to work that way and then individually learning some new skills.


In an enterprise or legacy environment, it is not so simple. A lot of things can get in the way and the difficulty becomes apparent when you consider that you are asking an organization to make some pretty big changes:
  • It’s something new: It’s a new tool and a new process.
  • It changes the way people work: There’s a new methodology on how one manages change through a CM process and how teams will work together.
  • Skill base not there yet: The CM model and implementation languages need to be institutionalized across the organization.
  • It’s a strategic technology choice: Choosing a CM tool isn’t just a matter of which one you pick (e.g., Puppet vs. Chef). It’s a commitment to a new way of working and to redesigning how infrastructure and operations are managed.
Moving to a next generation CM tool like Chef or Puppet is a big decision, and in organizations already at scale it usually can’t be done whole hog in one mammoth step. I’ve seen it happen all too often: organizations realize that the move to CM is a more complicated task than they thought and subsequently procrastinate.

So what are some blocking and tackling moves you can use to make progress?

Begin by asking the question, how are these activities being done right now?
I bet you’ll find that most activities are handled by shell scripts of various sorts: old ones, well written ones, hokey rickety hairballs, true works of art. You’ll see a huge continuum of quality and style. You’ll also find lots of people very comfortable creating automation using shell scripts. Many of those people have built comfortable careers on those skills.


This brings me to the next question: how do you get these people involved in your move to CM? Ultimately, these are the people who will own and manage a CM-based environment, so you need their participation. It might be obvious by this point, but any adoption plan should consider how to incorporate the work of the script writers. How long will it take to build up expertise for a new solution anyway? How can one bridge between the old and new paradigms?

The pragmatic answer is to start with what got you there. Start with the scripts, but figure out a way to cleanly plug them into a CM management paradigm. Plan for the two styles of automation (procedural scripting vs. CM) to coexist. Big enterprises can’t throw out all the old and bring in the new in one shot. From political, project management, education, and technology points of view, it’s got to be staged.
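As one small illustration of "plugging a script in" to a CM paradigm, here is a hedged sketch (the function name, stamp file, and legacy script are all hypothetical): a guard wrapper that lets a CM tool call an existing, non-idempotent script on every converge without re-doing its work.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: run a legacy (non-idempotent) script at most once,
# recording success in a stamp file so repeated CM runs stay idempotent.
converge_once() {
  local stamp="$1"; shift
  if [ ! -f "$stamp" ]; then
    "$@" && touch "$stamp"   # run the legacy script; record success
  fi
}
```

A CM resource can then shell out to something like `converge_once /var/tmp/app.stamp ./legacy_configure.sh`, keeping the old hairball intact while the converge behaves like a proper CM step.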

To facilitate this pragmatic move towards full CM, script writers need:
  • A clean, consistent interface. Make integration easy.
  • Modularity, so new pieces can be swapped or plugged in later.
  • A familiar environment. It must be comfortable for shell scripters.
  • Easy distribution. Make it easy for a shell scripter to hand off a tool to a CM user (or anybody else, for that matter).
Having these capabilities drives the early collaboration that is critical to the success of later CM projects. From the shell scripter’s point of view, these capabilities put some sanity, convention and a bit of a framework around how scripting is done. 
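To make those capabilities concrete, here is a tiny sketch of such a framework in shell: a dispatcher that gives every legacy script a consistent "module command" calling convention. This is illustrative only, with made-up names and layout; it is not rerun's actual API.

```shell
#!/usr/bin/env bash
# Illustrative dispatcher (not rerun's real layout): each "module" is a
# directory of scripts, and every command is invoked the same way.
set -u

MODULE_DIR="${MODULE_DIR:-./modules}"

run_command() {
  local module="$1" command="$2"; shift 2
  local script="$MODULE_DIR/$module/$command.sh"
  if [ ! -x "$script" ]; then
    echo "no such command: $module $command" >&2
    return 1
  fi
  "$script" "$@"   # pass remaining args through to the legacy script
}
```

A scripter drops `restart.sh` into `modules/web/` and anyone can call `run_command web restart app01`: a clean interface, modular swapping, a familiar shell environment, and a trivial hand-off, which is exactly the bridge the bullet list above asks for.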

I know this mismatch between the old shell script way and the new CM way all too well. I’ve had to tackle this problem in several large enterprises. After a while, a solution pattern emerged. 

Since I think this is an important problem that the DevOps community needs to address, I created a GitHub project to document the pattern and provide a reference implementation. The project is called rerun. It’s extremely simple but I think it drives home the point. I’m looking forward to the feedback and hearing from others who have found themselves in similar situations.


For more explanation of the ideas behind this, see the “Why rerun?” page.
DevOps Chicago and Camp DevOps


Dev2ops / 

Martin J. Logan @martinjlogan is the founder of the DevOps Chicago meetup group. He is also an Erlang hacker and co-author of Erlang and OTP in Action. In his spare time, he serves as Director of Merchandising, SEO, and Mobile Technology at Orbitz Worldwide. In a former life, I had the pleasure of working with Martin on what would now be called a Platform as a Service (PaaS) team at Orbitz.

> @mattokeefe: Martin, when did you first hear about “DevOps”?

> @martinjlogan: For me it was about 10 years ago. I was working at a place called Vail Systems for one of the Camp DevOps speakers, Hal Snyder. He implemented CFEngine back then and got the company to a state of rather high production environment automation. I subsequently left Vail and thought for sure that, moving on to larger companies, I would witness amazing automation compared with what I had seen at little ol’ Vail with Hal. I was shocked to find out that this was definitely not the case. This was the genesis of my discovery of DevOps and formal Ops automation, which at the time wasn’t called anything of the sort.

> @mattokeefe: Hal is working with you again now at Orbitz. Did you help to attract him with the mission of implementing DevOps-style automation?

> @martinjlogan: Indeed I did. I brought Hal in to Orbitz for a presentation on CM automation about three years ago. Everyone was impressed with the talk, but the organization at the time was not quite ready for what he was showing us. Well, since then Orbitz has come a long way. Hal impressed our head of operations, Lou Arthur, that first time, and I of course kept his name fresh in people’s heads. Some time later, Lou, Hal, and I had lunch, and Hal elaborated on some very exciting concepts in automation; I think it was a done deal after that, and an offer went out.

> @mattokeefe: Suppose you were to make another hire… would you look for a developer seeking to learn more about Ops, or a sysadmin looking to learn more about development?

> @martinjlogan: Well, that is an interesting question indeed. Companies tend to invest more heavily on the development side, looking at Ops as more of a cost center than a driver for returns. DevOps is working to change this mentality. Spending money on broadening Ops is in line with treating Ops as a revenue generator, and I think most places are a bit unbalanced in this respect. At the end of the day, though, I am looking for both. We need an engineer who is a sysadmin, or a sysadmin who is an engineer, and we need this person to help us build out our Ops as a service that is ever more sophisticated and conducive to the efficient release of software to our customers.

> @mattokeefe: Can DevOps work in a company with highly centralized Operations? ITIL?

> @martinjlogan: There is a lot of technology that underpins DevOps, and Continuous Delivery in particular, but in a lot of ways DevOps is about breaking down walls. In a large organization there are a lot of walls. Any given organization will have an appetite for, see value in, and subsequently benefit from breaking down at least some of those walls.

> @mattokeefe: In some Agile orgs, walls are broken down by co-locating teams with all disciplines seated in the same area. Is this a good idea with DevOps too, or is it enough to perhaps have a war room where you can sit down together when needed?

> @martinjlogan: I am a big believer that DevOps is an extension of Agile in many ways. It is really taking Agile to its logical conclusion. In Agile we say, done means tested. Taken to its logical conclusion, done means in production (in production, of course, implies tested). Agile is also about breaking down walls and fostering the communication and feedback loops that allow empirical process control to actually happen. If I want to be really Agile, then I want to know when things are going wrong in production. I want the whole team to feel the pain, solve the problem together, learn together, and implement controls and improvements that solve the problem moving forward. I want them to own that together. So yes, I think sitting together fosters that far more than the occasional war room does. That said, I think putting whole teams in windowless rooms that were once used for meetings is cruel and unusual.

> @mattokeefe: What are some of the tools that your teams are using today? Do you find that developers and sysadmins have different preferences for tools?

> @martinjlogan: We have quite a variety of tools here. We are certainly big users of Graphite on the monitoring front. We are Jenkins users as well. We are moving over to Git in many places throughout the company. One of the reasons being that it fits in quite nicely with many third party tools like Gerrit. There is definitely some difference in the tools Dev and Ops naturally gravitate towards. I think this is natural. For example Ops has been a proponent of Puppet while there is a strong dev contingent advocating glu.

> @mattokeefe: Which session are you looking forward to the most at Camp DevOps?

> @martinjlogan: That is honestly a tough question. There are quite a few big brains presenting. Jez Humble is amazing, and so is his book. I saw Chris Read speak a while back, and he really impressed me as well with the tremendous depth of hands-on practical knowledge he has. John Willis is also fantastic; he is quite a personality and has done so much for DevOps, not to mention the Chef expertise. It is honestly a difficult one to call. I guess if I had to pick for me right now, Pascal-Louis Perez. He was the CTO at Wealthfront, where he moved them onto Continuous Deployment. Certainly doing such a thing for a financial services company takes some serious ingenuity. Very excited to learn from him. Really though, I am looking forward to the whole thing.

Infrastructure as Code, or insights into Crowbar, Cloud Foundry, and more


Keith Hudgins / 

Note: This is part 3 of a series on Crowbar and Cloud Foundry and integrating the two. If you haven’t yet, please go back and read parts 1 and 2.

Over the last few days I’ve introduced you to Crowbar and Cloud Foundry. Both are fairly new tools in the web/cloud/DevOps orbit, and worth taking a look at, or even better, getting involved in the community efforts to flesh them out into full-featured pieces of kit that are easier to use, more stable, and closer to turnkey. Is either one ready for production? Oh heck yeah, but you’ll have to be able to read the code and follow it well in order to figure out what either project is doing under the hood: they’re both really new, and the documentation (and architecture, when it comes right down to it) is still being written.

So where are these projects going?

Let’s start with Cloud Foundry:

Cloud Foundry works, right now. It’s still rough around the edges, and there’s still some tooling and packaging to be done to make it easier to run on your own infrastructure, but the bones are very solid and they work. VMWare is actively engaging partners to help expand the capabilities of the platform. Through its Community Leads program, the project has already gained support for Python and PHP applications.

Just yesterday, the CF project announced Micro Cloud Foundry, a VMWare appliance image that you can use to set up a development box on your desktop. This is a neatly packaged box that allows you to test your applications before you deploy them into the Cloud Foundry PaaS. This is cool, but a little limited: it doesn’t yet support PHP and Python, and it’s a bit of a black box. Great for developers, but if you want to crack the hood and see the shiny engine, you’ll need to roll your own.

You can do that (sort of) with the older developer instance installer, which was documented in my last article. (Link’s here for convenience.) Very soon, VMWare will be releasing more robust Chef cookbooks that hopefully will come closer to a production style install library. The cookbooks inside that install package were the basis of the pending Cloud Foundry barclamp.

Since there are now three beta PaaS products based on Cloud Foundry, the future is looking bright. It’s fully open source, and you can either use it or help build it.

So what about Crowbar?

I’m glad you asked! Crowbar is a much lower-level tool on the stack (Your users and/or customers will never see it!) and is being put together by a smaller team (for now), but it solves a very interesting problem (how do you go from a rack of powered-down servers to a running infrastructure?) and is just beginning an important shift in focus.

Crowbar, as we’ve seen before, was originally written to do one thing: install OpenStack on Dell hardware. Very soon, it will begin to install other application stacks (Cloud Foundry, to start) and is opening up to be more easily extendable. CentOS/RHEL support is in the works for client nodes (currently, only Ubuntu Maverick is supported). The initial code has been committed to enable third-party barclamps. There are a small handful of community developers, and (I’ve been assured) the door is open for more. Fork the project on GitHub, read the docs (I’ve got some more here), and start hacking.

Bonus: I’m documenting how to create your own barclamp and the lessons I’ve learned so far. As I write this, it’s a blank page. By the time I’m done, you’ll be able to make a barclamp to deploy your own application by following my instructions.

Cloud Foundry, Crowbar, and You


Keith Hudgins / 

This is part 2 in our ongoing series on Crowbar and the new Cloud Foundry barclamp. If you haven’t read part 1, go here. We’ll wait for you. I’ve got some nice sweet tea when you get back.

If you’ve followed the “cloud world,” you’ll know that in CloudLand there are three kinds of aaS: infrastructure, platform, and software. Crowbar was built as an *ahem* lever to get your IaaS in place. With the Cloud Foundry barclamp, you can now begin to build a PaaS installation (okay, this one’s brand new and has some limitations; we’ll talk about that in just a bit…) that can host web applications built on a wide variety of frameworks. (Seriously… with the latest announcement, just about any Linux/open-source based web application can be adapted to fit.)

Cloud Foundry in a Nutshell

Before we dive into running the tutorial, let’s get acquainted with Cloud Foundry and its architecture.

Cloud Foundry is a hosting platform for web applications. It provides runtime environments for Java, Ruby, Python, and PHP, along with a handful of dependent services like MySQL, RabbitMQ, and Redis. So you can build, say, a lightweight Sinatra app that is only an API broker to other services, or a full-blown Spring-based website with a Redis cache and MySQL back end and host them in the same hands-off (from the developer’s perspective, anyway) platform.

How does that work? Magic, my friend, magic… well, really it’s an asynchronous Rails app handling the API, talking via a NATS message queue to spin up the various parts for you as needed. There’s a gem you can install, called vmc, that’ll do all the API work for you. You can always use the code inside the gem for some further automation if you need it, or just wrap it into your continuous integration loop to automatically deploy the latest version of your app on a successful build. (My team at DTO did it, roughly, a few weeks ago as a proof of concept.)
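A minimal sketch of that CI-loop idea follows: a post-build hook that wraps the vmc CLI. The `DRY_RUN` switch, the `cf`/`deploy_app` function names, and the app name are my own illustrative additions (not part of vmc); in dry-run mode the hook just prints the commands it would issue, so you can wire it into a build job before you have a target to push to.

```shell
#!/usr/bin/env bash
# Hedged sketch of a post-build deploy hook around the vmc CLI.
# DRY_RUN=1 (the default here) prints commands instead of running them.
set -eu
DRY_RUN="${DRY_RUN:-1}"

cf() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "vmc $*"            # preview mode: show the command only
  else
    vmc "$@"                 # real mode: requires vmc and a CF target
  fi
}

deploy_app() {
  local app="$1" path="$2"
  cf target api.cloudfoundry.com
  cf push "$app" --path "$path"
}
```

A Jenkins job could call `deploy_app myapp ./build` as its final step on a green build, which is roughly the proof of concept described above.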

I’ve got some links for you with much, much more in-depth information. Please wade through them at your leisure:


Enough already, on with the worky bits!

I’ve put together a tutorial on how to get your own Cloud Foundry instance running on your Crowbar environment.

Now, this barclamp is in its early stages: it’s a port of the chef-solo cookbooks VMWare put together in their development environment install script, so it only supports a self-contained single-box install at the moment. Work will proceed on shoring up the installation so that we can break the components out across multiple servers, letting you create a truly dynamic environment to host just about anything. Have questions? (Yup, their cert’s bad. They know.) Want to help? Get involved!


Introduction to Crowbar (and how you can do it, too!)



Keith Hudgins / 

Dell’s Crowbar is a provisioning tool that combines the best of Chef, OpenStack, and convention over configuration to let you easily provision and configure complex infrastructure components that would take days, if not weeks, to get up and running without such a tool.

What is Crowbar?

Crowbar was originally written as a simplified installer for Nova, OpenStack’s compute cloud control system. Because you can’t run Nova without at least three separate components (one of which requires its own physical server), creating an automated install system takes more than just running a script on a server or popping in a disk.

Crowbar consists of three major parts:


  1. Sledgehammer: A lightweight CentOS-based boot image that does hardware discovery and phones home to Crowbar for later assignment. It can also munge the BIOS settings on Dell hardware.
  2. OpenStack Manager: This is the Ruby-on-Rails based web UI for Crowbar. (Fun fact: it uses Chef as its datastore… there’s some pretty sweet Rails model hackery inside!)
  3. Barclamps: The meat and potatoes of Crowbar – these are the install modules that set up and configure your infrastructure (across multiple physical servers, even)


Dell and other contributors have been working on extending Crowbar beyond OpenStack into a generalized infrastructure provisioning tool. This article is the first in a 3-part series describing the development and build-up of a barclamp (a Crowbar install snippet) that gets Cloud Foundry running for you.

So, that’s neat and all, but what if you want to hack on this stuff and don’t have a half-rack worth of servers lying around? This is DevOps, man, you virtualize it! (Automation will come later. I promise!)

Virtualizing Crowbar

The basics of this are shown on Rob Hirschfeld’s blog and a little more on the project wiki. Dell, being Dell, is pretty much a Windows shop. We don’t hold that against them, but the cool kids chew their DevOps chops on Mactops and Linux boxen. Since VMWare Fusion isn’t that far from VMWare Workstation, it only took a half day of reading docs and forum posts to come up with a reliable way to do it on a Mac.

Those instructions are here.

Bonus homework: Get that running and read down to the bottom of the DTO Knowledge Base article I just linked, and you’ll get a sneak preview of where we’re going with this article series.

Now, go on to part 2!

DevOps HackDay at VMware featuring CloudFoundry


Dev2ops / 

Start: 2011-09-08 09:00

End: 2011-09-08 18:00

Timezone: US/Pacific

Register now to spend a day hacking in the clouds with CloudFoundry.  The focus of this hack is to get a few working applications up in the cloud the #devops way: a fully programmable infrastructure that extends from the OS through the application lifecycle.  

We will be building a pipeline that starts out in source control, flows through a build process, gets packaged as Infrastructure as Code, and ends up as a working application deployed to the CloudFoundry open source PaaS. The project will also include underlying instrumentation to collect data along the pipeline, plus an infrastructure for data storage and analysis.

The initial plan is to implement this pipeline for a sample Java application. We’ll be using Chef to automate the installation and configuration of the OS and middleware stack, then use CloudFoundry to automate application provisioning. Have a CloudFoundry-ready app that you want to try out? Bring it along. Hopefully at the end of the day we will have a few Java, Ruby on Rails, and NodeJS applications up and running.

Some of the tools we’ll be using:

  • Source Control – GIT
  • Build – Jenkins
  • Artifacts – Nexus
  • DNS – DynDNS
  • Release – Rundeck
  • Configuration – Chef
  • Deployment – IaaS Cloud
  • Data Collection – Zenoss
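As a rough map of how those tools chain together, here is a placeholder pipeline skeleton. Every stage is just an echo standing in for the real invocation (Git, Jenkins, Nexus, Chef, vmc), so treat it as an outline of the day's flow, not an implementation.

```shell
#!/usr/bin/env bash
# Placeholder pipeline: each stage echoes what the real tool would do.
set -eu

stage() { echo "[pipeline] $1"; }

run_pipeline() {
  stage "checkout: pull the application repo from Git"
  stage "build: Jenkins job compiles and runs tests"
  stage "publish: push the build artifact to Nexus"
  stage "converge: Chef configures the OS and middleware"
  stage "deploy: vmc push to the CloudFoundry PaaS"
}
```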

Whether you are coming to learn from the experts, have ideas you want to share, or just like the camaraderie of a good Hack Day — Bring your laptop and come ready to participate.  

DTO Solutions will lead this full day (9am-6pm) hands-on hackathon. We will have coffee/pastry in the morning, lunch, and a beer bash from 4-6pm with CloudFoundry/Layer 2 folks to discuss take-aways from the day.

Sign up now for your spot.

