Tag Archives: Puppet

Perhaps DevOps Misnamed?

There are only two mistakes one can make along the road to truth: not going all the way, and not starting.

–Buddha

Andi Mann posted ‘Myopic DevOps Misses the Mark’ earlier today and after reading it, I wanted to put my thoughts out there, particularly since I had hoped some of what I consider his misconceptions would have been cleared up before this post.

To be fair, Andi does ask some good questions and has clearly spent his share of time thinking about ops in general, so hopefully I can make some attempt to address them as well.

To start with, Andi asserts that DevOps is mostly about developers. I’m not entirely certain what makes him think that, but it is patently false; the majority of people involved come from operations backgrounds. That said, I do believe semantics matter, and it might just be the name itself that leads people to that conclusion.

Maybe NeoOps, or KickassOps would have been better… but it is probably too late for that now.

I may be mistaken, but I believe the credit for the term DevOps belongs to Patrick Debois when he organized the first DevOpsDays last year.

Patrick is a bit of a Renaissance man, playing many roles in the process of software delivery along the way. I’m not particularly a fan of labeling people, but Patrick has self-identified as a sysadmin on more than one occasion. I’m also not particularly a fan of certification, but Patrick’s CV lists certifications like ITIL and SANS, which I’d wager are almost exclusively taken by people in Ops/admin roles. The glaring exception is SCRUM, and I know for a fact Patrick has fought tooth and nail to get the Agile community to recognize the role of systems administrators in the process of delivering value.

Of anyone involved in what has apparently transitioned from ‘a bunch of good ideas’ to ‘a movement’, I probably have the most dev centric background.

  • Patrick Debois – IT Renaissance Man
  • John Allspaw – WebOps Master
  • James Turnbull – Author of Pro Linux System Administration, Hardening Linux, Pulling Strings with Puppet, and he apparently has a day job doing security.
  • Luke Kanies – Recovering sysadmin
  • Adam Jacob – Still calls himself a sysadmin
  • Kris Buytaert – Another Belgian Renaissance Man and a system administrator
  • I’m sure I’m missing lots of people, sorry, maybe we need a poll

Andi keeps saying DevOps is developer-centric, and I think the problem (besides maybe the name) is the fact that there is code involved in automation that isn’t a shell script. Of course, I’m only speculating, because he doesn’t actually articulate what makes him think this, but let’s move on to his questions.

Andi makes assertions about lack of control, process, compliance and security. This is ludicrous, bordering on negligent. I’ve seen Puppet deployments on 1000s of machines in what can only be classified as ‘the enterprise’ and I will guarantee those machines are more tightly controlled, compliant and secured than 99% of the machines in most organizations claiming to embrace ITIL. A solid Puppet installation is closer to a functional CMDB than anything I’ve seen in the wild with the advantage that it is both auditing and enforcing the configuration on an ongoing basis. DevOps automation and ITIL are not mutually exclusive and can coexist. (I’m not going to really get into what I think about most of ITIL… but this should help.)
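As a sketch of what that ongoing enforcement looks like (the file, module path and service name below are hypothetical, not from any real deployment):

```puppet
# Every agent run reasserts this state, reports any drift, and corrects it.
file { '/etc/ssh/sshd_config':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
  source => 'puppet:///modules/ssh/sshd_config',
}

# Restart sshd whenever the managed config changes.
service { 'sshd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ssh/sshd_config'],
}
```

Run after run, that is both the audit trail and the enforcement that ITIL shops pay good money to approximate.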

More Specific Questions (most of which are predicated on the misconception that ops somehow goes away, but there are some other bits worth addressing):

Who handles ongoing support, especially software updates, for the unrestrained sprawl of non-standard systems and components?

Ops. Unrestrained sprawl of non-standard systems is a bad assumption. First of all, the slow-moving, ITIL-loving enterprise tends to have as many problems with heterogeneous systems as anyone, if not more; second, when you start to model and automate systems, the heterogeneity becomes both more apparent and more manageable. No one I know advocates anything but pushing toward simple homogeneous systems whenever possible. No one is pretending support and software updates go away.

Who ensures each new application doesn’t interfere with existing and especially legacy systems (and networks, storage, etc.)?

Ops of course, but with the added benefit of an automated infrastructure with semantics relevant to the questions being answered.

Who handles integration with common production systems that cannot be encapsulated in a VM, like storage arrays (NAS, SAN), networking fabrics, facilities, etc.?

Yep, Ops. VMs are nice because they are typically only an API call away, and there are tools for doing API-driven provisioning on bare metal that will only get better… but VMs are just the bottom of abstraction mountain. The API-driven abstractions of storage and networking fabric are coming. That isn’t the reality today, but it will happen, and relatively soon.

Who handles impact analysis, change control and rollback planning to ensure deployment risk is understood and mitigated?

This is a good one, because frankly I don’t think Ops can do this in isolation anyway. This is a cross-cutting concern involving Ops, Dev, Product Management and the other business stakeholders, but change control and rollback are orders of magnitude easier to reason about and accomplish with a DevOps approach.

Who is responsible for cost containment and asset rationalization, when devops keeps rolling out new systems and applications?

Similar to the last question, but with the added misconception that DevOps means rolling out random stuff just because. I know I’ve personally made this point explicitly: the whole point is to enable a business, and cost containment and asset rationalization are obviously cross-cutting concerns of that business.

Who ensures reporting, compliance, data updates, log maintenance, DB administration, etc. are built into the applications, and integrated with standard management tools?

Ops doesn’t really do this now. What is the definition of ‘ensure’? Ask nicely? Write up documents? Beg? Get mad? At worst, attempts to do this are often at the root of ‘the wall of confusion’ between Ops and Dev. Again, I’m not sure where Andi got the idea that DevOps = ‘cowboys without any concern for anything but deploying stuff as fast as they can’. What are the ‘standard management tools’? As much as anything, maybe that is what DevOps is replacing, because most of them are embarrassingly poor. The best way to accomplish everything on this list is to expose sensible internal APIs. When we can get to the point that we have reasonable conventions, integration with the next generation of ‘standard management tools’ will be trivial. That might strike you as a dev-centric perspective, but really it just means the present isn’t evenly distributed.

Who will assure functional isolation, role-based access controls, change auditing, event management, and configuration control to secure applications, data, and compliance?

DevOps for the win, with the help of tools that can actually model, audit and enforce all those things programmatically.
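For instance, newer versions of Puppet can record changes to attributes without enforcing them, via the audit metaparameter (the resource below is purely illustrative):

```puppet
# Track changes to these attributes; drift shows up in reports
# without Puppet modifying the file itself.
file { '/etc/passwd':
  audit => [ owner, group, mode ],
}
```

Combine that with enforced resources and you get change auditing and configuration control as a side effect of normal operation, not a separate paperwork exercise.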

I’m sure Andi means well, but I’m not clear why he got the impressions he did of what DevOps means or is trying to accomplish. I did the best I could. (Twitter ‘lives in the now’ so that link will probably only be useful for a few days.) I guess if you use the word ‘API’ people won’t process anything further because you are obviously a cowboy developer. C’est la vie…

Finally, Andi finishes with a list of things he would like to see. The irony here is everything on his list is DevOps:

Including ops during the design process, so applications are built to work with standard ops tools.

Devops!

Taking ops input on deployment, so applications will go in cleanly without disrupting other users

Devops!

Working with ops on capacity and scalability requirements, so they can keep supporting it when it grows

Devops!

Implementing ops’ critical needs for logging, isolation, identity management, configuration needs, and secure interfaces so the app can be secure and compliant

Devops!

Giving ops some advance insight into applications, especially during test and QA, so they can start to prepare for them before they come over the wall

Tear down the wall! DevOps!

Allowing ops to contribute to better application design, deployment, and management; that ops can do more for the release cycle and ongoing management than just ‘manipulating APIs’

Allow ops to contribute to better application design, deployment, and management, in addition to manipulating APIs! DevOps!

See, there is hope for Andi yet! (I just hope he has a good sense of humor about the title… and would be willing to discuss this over a nice meal if he comes through Salt Lake or we end up in the same city soon.)


Practical Puppet at MWRC

**update** The underlying repos changed, so the demo won’t work from the user data. It will still work after running ‘apt-get update’ on the box. If I get bored I might repackage it.

*update*  Confreaks has my MWRC video up. The notes below about launching on EC2 are probably still useful for context.

These are my slides from Mountain West Ruby Conf.

Starts with a little intro about the beginning of computing, before ‘computer science’, driven by desire for computation in math and physics, when there wasn’t a division between the people that program and people that understand how the systems work. (this period probably lasted for about 10 minutes.)

Then some Puppet code…

These modules will build passenger and install rails.

Finishes with a short discussion on the tribes, trading ideas, evolutions, the opportunity of clouds and encouraging people to do something awesome.

The talk was given with a live demo of this code building rails in EC2.

You can do it too if you have an EC2 account.

Start ami-2b10f742 (littleidea/mwrcpuppetrailsdemo.manifest.xml), sending this as the user data:

#!/usr/bin/env puppet

rails::site { 'the_next_big_thing':
  servername    => "www.tnbt.com",
  rails_version => "2.2.2",
}

Set the server name and the rails version to whatever you want.

If you need more gems or packages you can add:

#for gems
package { 'gemname':
    ensure   => installed,
    provider => gem,
}

#for native packages
package { 'pkgname':
    ensure => installed,
}

If you need a certain version of a gem or package, change ‘installed’ to the version you want (the default will install the latest available.)
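For example, to pin a particular gem version (the gem name and version here are placeholders):

```puppet
#pin a specific version instead of 'installed'
package { 'rack':
    ensure   => "1.0.1",
    provider => gem,
}
```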

The image is based on ami-7cfd1a15 Ubuntu Intrepid packaged by Eric Hammond with puppet and facter installed. The puppet manifests from the slides are modules found in /usr/share/puppet/modules.

When the ami starts up it will grab the user data and run it.

You can also ssh to your instance, put the puppet code in a file (strip the #!/usr/bin/env puppet line), and run puppet with --debug to watch it build.

puppet --debug mysite.pp

Mountain West Ruby Conference

Rocky Mountain High


I’m definitely looking forward to Mountain West Ruby Conference. Last year was most excellent, and this year I’m presenting. Puppet, clouds, live demo, no net, should be a good time…

MWRC is a single track regional conference, but it brings in some of the biggest names in the Ruby world. (maybe we can get Matz to come next year) Sharing the stage with some of these guys is an honor and a privilege, if not a little intimidating. And if last year was any indication, even the guys you haven’t heard of will be awesome.  I can’t recommend this conference highly enough.

Pat Eyler of On Ruby is one of the organizers and just posted an interview with me. He is interviewing other speakers as a lead up to the conference, so keep an eye out.

Use your powers for Awesome!!!



Puppet and Capistrano

I haven’t just been slacking off.


Can you smell what the Puppet is cookin’?

Imitation is the highest form of flattery…

Things are getting really interesting in the configuration management space: the confluence of clouds, web 2.0, dev-ministration and chocolate sauce.

Chef is upping the ante, one way or another. I’m a little saddened that the majority of the ideas in Chef were discussed in the context of Puppet and implemented by someone who has made a living off of Puppet… but I’m not totally surprised.

I understand why Adam would do this, and on many levels it parallels Luke’s relationship with cfengine.  Adam has probably used Puppet to solve more real problems and build more infrastructure than anyone else, just like Luke had done consulting with cfengine for years before Puppet was born. In a sense, Adam is the embodiment of the future of system administration that Luke had envisioned and hoped to create.

Socrates, Plato, Aristotle… and so it goes.

Adam is a smart guy who thinks clearly about solving problems, had Puppet as an example and data from the front line. He strikes me as a genuine fellow and while we aren’t best friends, I have enjoyed his conversation and insights on more than one occasion.  Chef adds some nice functionality that was obvious and some pieces that are differences in philosophy.  I’ve done my best to absorb both Luke and Adam’s expressed positions, and paradoxically, in my estimation they are both right, in their context. One way or another, there are still a lot of machines out there being managed by meatclouds, so there is plenty of work to do.

Puppet was first released in 2005 and has grown in functionality and adoption since that time. Puppet is revolutionizing system administration, similarly to how Rails revolutionized web development, and Chef can only accelerate that process, by its own merits and by driving innovations in Puppet. At the heart of the story are many questions about progress, open source, community, technology, obligations and the attributions.  The storyline already has mystery, intrigue, tragic heroes, and double agents. There is bound to be some drama, just because there are humans involved, but sometimes nothing motivates like a nice punch in the mouth.

Remember when Nintendo was king of the world, then couldn’t sell anything, and now they pwn Playstation and Xbox… Or when Apple was awesome, and then wasn’t, and now we all have iPhones… Innovators innovate, ebb and flow, ebb and flow.

And rails is merb is rails, and you never know what the future holds.  Buckle up… I’m just sayin’…

Progress is impossible without change, and those who cannot change their minds cannot change anything.

–George Bernard Shaw


More Puppet Stories

These are my slides from RubyConf.  They are mostly images, so I’ll talk you through them (maybe not what I said at RubyConf, but in the same spirit)

I love that Escher’esque image of interlocking puppets (the Puppet logo is an abstraction of that). That was just there for something interesting during my 30 second introduction.

If you were at Mountain West Ruby Conference last year, the next slides mostly make sense; if not, then you should be there this year.

The next 4 slides are about the different mindsets of developers and sysadmins. I also like to point out how software changes when it isn’t shipped on CDs. If you are working on a web application, particularly one of any scale, any turbulent disharmony between these two tribes is going to cost you. I also admit that I’ve solved problems with ‘chmod 777’ so there isn’t any doubt which tribe I came from.

Yay, Puppet; yay, Ruby. In these next slides I talk about what the Puppet project is, that it is all in Ruby, and that Puppet, like Ruby, is a passion-driven project.

Then the inquisitor… great tools are opinionated. Ruby on Rails is opinionated. Puppet is opinionated. When you, or your project, resonate with those opinions those tools will make you happy. If you don’t or can’t, you might think those tools hate you. Sometimes you should rethink what you are trying to do, and sometimes you might need a different tool.

Then the sysadmin slides, into Luke’s story: most sysadmins don’t see any problems, other than the fact that most organizations are afraid to touch their machines and shudder at the thought of having to rebuild anything significant in the production environment.

This is an image of the internet, a couple years old now, but I think it is a stunning picture and gives a sense of the scale.

The obligatory cloud slide… minimizes your hardware headaches, but multiplies your configurations.

And how do people handle all those configurations?  Ahh yeah, the meat cloud…

But really, most sysadmins do the same work… not just over and over, but from organization to organization…

Now back to Luke… can cfengine… and wanting something better… way better…

And some of the people using Puppet… and we want to make Puppet like a gun in a knife fight… Pub by 4… and a quick overview of why puppet has this effect

Little slide about portability and then how once these configurations are generalized, people can share them

Then a bunch of slides showing parallels between puppet code and common sysadmin task of installing, configuring and starting services.

Perspectives was a little reflection about how much stuff is in Puppet and what to show Rubyists at a conference.

I decided to show an example of a simple Type and Provider, first explaining a bit about the model and idempotence, then finally how you can use that in the Puppet language. Code code code…

Talk about other tools and where/why you might use other tools.  Not everything is a nail.

Puppet’s Open Community

And what’s Next?


Semantics Matter (or I finally get it…)

Silence is better than unmeaning words.

-Pythagoras

Over the time I’ve been programming, I’ve come to value certain things in a code base. Without going into too much detail (which will probably be its own post soonish), I value code that is easy to understand and manipulate.

The first level of understanding, which is facilitated by the style and organization in a code base is ‘What’. If I can look at code and mentally map ‘What’ it is doing, I start to get a level of productive joy. I can’t get that if I’m trying to sort out which branch of the if-else plinko, meta-scrambled self.foo or side effect driven development is actually getting executed at any given moment. ‘What’ is really really nice.

A higher level of understanding code is ‘Why’. ‘Why’ is more subtle, but orders of magnitude more powerful than ‘What’. ‘What’ is a technique, ‘Why’ is the purpose, the driving principle. Understanding ‘Why’ gives flexibility and options to ‘What’. (Unfortunately, most code does a poor job of conveying ‘What’, let alone ‘Why’, but that is a topic for another day.)

Which brings me to ‘semantics matter’, which is something I’ve heard Luke say over and over when talking about Puppet. When I heard him before, I just nodded and thought he was talking about nifty Puppet language features like ‘require’ and ‘subscribe’ for managing relationships between services and the underlying packages and config files, because that was what he was using as the example. I was understanding at the ‘What’ level.
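The canonical shape of that example is the package/config/service triad; here is a minimal sketch (the names are generic, not necessarily Luke’s exact example):

```puppet
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  require => Package['ntp'],
}

# The service is restarted whenever its config file changes; the
# relationship itself is part of the model, not a comment in a script.
service { 'ntp':
  ensure    => running,
  subscribe => File['/etc/ntp.conf'],
}
```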

You’d think I would have figured this out faster after listening to Luke rant about it so much, but the whole ‘semantics’ thing didn’t click until a discussion with Teyo about preparing for his ‘Golden Images’ talk at Linux Plumbers Conf.

Do not make a golden image...


Puppet lets one model system configuration with code. Just like any other language, the ‘What’ and especially the ‘Why’ can be made apparent if the code is organized in certain ways and/or if one shares certain metaphors with the original author.

Until quite recently, I was having a hard time understanding why rebuilding virtual images with Puppet was superior to just versioning working images. I mean, I heard the words, but in my mind I was questioning the practical difference. In one case you start with a base image, get everything set up and save the working copy; in the other you start with the base image and let Puppet build it up. I kept thinking to myself: what is the difference? Both solutions end up in the same place, right?

Being afraid to turn off real machines where you have no idea what is running, because there might be some critical cron job that matters on the third Tuesday of the month is one thing… (this happens, for real, some of you know this is true, someone reading this works at places like this right now, guaranteed), but once everything is virtual and running what you want, what’s the harm of just making images?

The difference is the potential to encode ‘What’, and if your code is sensibly organized, ‘Why’.

I was only seeing the static state of the working system. What if you want to change things? If you have working images, you have to reconstruct ‘What’ by discovery; good luck with ‘Why’. If you are lucky, it was you that set up the systems, and it wasn’t more than 6 months ago. The ‘What’ and ‘Why’ were apparent to someone, potentially you, when the systems were first set up, but now you just have this bucket of bootable bits that ostensibly does something. If it isn’t working, or there is a need to change something significant, the choice is poking around the bucket of bits until the new ‘What’ is in place, or starting over with a new ‘Why’ that is lost as soon as the new image is finished.

If Puppet is building your services, ‘What’ and ‘Why’ can be recorded, clarified, recovered and manipulated. Version control becomes straightforward, manageable, and transparent. Services can have clear definitions and relationships. So obvious… can’t believe it took me this long to ‘get it’…
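A tiny, made-up illustration: that scary monthly cron job nobody dares touch, expressed in Puppet, carries its ‘What’ in the resource and its ‘Why’ in a comment, and both live in version control:

```puppet
# Why: finance needs this export before the monthly close.
# (Command and schedule are invented for illustration.)
cron { 'monthly_billing_export':
  command  => '/usr/local/bin/export_billing.sh',
  user     => 'root',
  monthday => 17,
  hour     => 2,
  minute   => 0,
}
```

Diff that in version control and the change is legible; diff two 500 MB images and it isn’t.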

How many 500 MB images do you want to version? Can you make any sense of the diffs? Really? Seriously?

