Category Archives: Agile

Agile Infrastructure: The Movie

In a war between cowboys and ITIL, a small resistance fights for their ideals…

Dedicated to the notion that ‘the way things are done is not the best way to do them’, our heroes struggle to liberate their comrades and add to their community of practice. Long live DevOps!

InfoQ just released this presentation, which Paul Nasrat and I gave at Agile 2009 in Chicago.

The presentation is quite long, but I hope people find something beneficial in it; I would appreciate any feedback here or on InfoQ.


Speed Chess

Is typing speed a factor in programmer productivity? Would you improve at chess if you moved the pieces faster?

Jason Gorman, who appears from his blog and his Twitter to be a fairly reflective programmer, tweeted this comment about chess a few days ago.

A bunch of people I follow retweeted it, and it caught my attention because I used to think this way, both about chess and about programming.

Now I don’t claim to be the greatest at either activity, but I’ve put in some time and I have enough ability to claim to be above average at both. (Which is to say, I’m aware of my relative mediocrity when compared with real masters.)

I’ve played chess off and on for years. I learned to play when I was quite young and I could usually beat other casual players quite easily. After losing to rated tournament players, I spent some time studying the game.

For a long time, I thought the best way to learn was to methodically look for the best move and I thought playing blitz games was somewhat degenerate. Luckily, someone convinced me to start playing blitz games regularly, and that accelerated my understanding of the game considerably.

I still play better when I take the time to be methodical, but that’s not the same thing as learning. I think the blitz games accelerated my learning for two reasons: first, playing at that speed put that many more games, positions and patterns in front of me; second, because I didn’t attach as much ego to the games, I experimented more, which led to that many more positions.

I do think there is a point of diminishing returns to just moving the pieces faster, and improvement is predicated on some reflection, but I will contend unequivocally that, unless you are a master, you will improve at chess if you spend a considerable portion of your playing time moving the pieces faster.

The same applies to code. You don’t need to type 100+ words per minute, but if you can’t touch type at least 40-50 wpm, spend 20 minutes a day for a month or so until you can. You will never regret it. (And I worked as a programmer for years before I could touch type.)

I would walk you through the arguments, but there is already a classic Yeggethon on the topic, which articulates all the positions I would and then some.

“Lose Your First 100 Games As Quickly As Possible”
–Proverb for Go Beginners

Working Together… with Technology

If you want to build a ship, don’t drum up people together to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.

-Antoine Jean-Baptiste Marie Roger de Saint Exupéry

Where to start with this one? And where to finish? The only thing I will really promise is meandering, but I think this is interesting, if not well formed…

The first thing you need is a little context about what is driving my thoughts on this:

  • I’m an XP fanboy. Even if the XP community is manifesting paranoid schizophrenia at this point (more on this in a future post), once you have experienced pairing and had a breakthrough where things start to flow, you can never go back. Everyone working in isolation becomes really painful and is frankly less efficient IMHO, especially over the life of a project.
  • I haven’t found a way to effectively pair working on Puppet‘s core, mainly because of the ~1000 miles between Luke and me. We talk on the phone, open the same files and go to line numbers, or sometimes we try sharing the screen. This certainly helps with knowledge transfer, but it feels clumsy and never achieves the flow that you need to crank out solutions together.

So how can a distributed team work together?

I’d been reflecting on a development tool called UNA, one of my personal highlights from Agile 2008, when I saw the Devunity presentation from TechCrunch 50 which set out to provide a similar environment. These types of tools facilitate possibilities for working together that I will try to illuminate, and then I’ll throw in my 2 cents plus interest.

I volunteered to participate in a ‘Coding with the Stars Competition’. This was envisioned as a sort of geek fest Agile development engineering version of ‘Dancing with the Stars’. I was paired with John De Goes. We were the first team eliminated. I admit I was disappointed just a bit, not because we were eliminated, as much as that we didn’t do a good job of portraying the power of what Una enables. In order to be true to the show, the judging was capricious, but at least I’m not bitter about that… :/ (In all seriousness, the whole thing was just for fun and while Una is a great tool when the network isn’t feeble, the collaboration Una enables probably isn’t best demonstrated in 4 minutes of software development ‘theater’ without a lot of setup and backstory.)

Resistance is Futile...

John is obviously intelligent and intense, which means I automatically like him. Marketing admittedly might not be his strong point (I do appreciate the Gigeresque imagery, though I’m not sure the proverbial ‘everyone else’ can), but he’s put a lot of time and thought into building software and into how people can do it better. Una is the result, and when you see John talk about Una, it is obviously a labor of love.

Ok, so what does Una enable? I’m not totally sure, but I know that watching John use it and what I learned using Una for two hours was compelling enough that I’m still thinking about the experience six weeks later.

Una is, for all intents and purposes, a text editor bordering on an IDE, with a networking component that lets multiple people work on the same files at the same time. I have my cursor, you have yours, and when the network packets are flowing, we can see each other’s work in real time. This is different from sharing a screen, because we can each control what we see and do independently. Whenever anyone saves a file, Una runs the tests, and everyone knows how long ago the tests were run because there is a timer which resets after each run.

I admit that this seemed a bit disconcerting in the beginning, but after a couple hours of working together with John, some really interesting things started to happen. We started working ‘together’…

How many times have you watched someone type and witnessed a typo? Usually, you point something like that out and the person will go back and fix it. No big deal, right? Five seconds of ‘hey’, ‘right there’ and it’s fixed. But what if you could just fix it? And while he (or she) could see you fix it, the break in concentration goes from five seconds to maybe one. After about the first hour of working together, this type of thing started happening, spontaneously and organically. The cadence of test-driven pairing potentially changes from syncopated keyboard switches to an almost surreal flow. The communication really started to go ‘into the code’, and the oral cues became less and less necessary. (This probably wasn’t the best thing for the ‘competition’; que sera, sera.)

And this was only after an hour of working with a tool I’d never seen before and a person I’d never met! Imagine what might happen after working this way all the time with a talented team… Honestly, I’m not really sure, but based on what I saw from the TC50 presentation Devunity also sees some of the same potential.

This is where my two cents starts, but first I need to introduce a few more ideas (I did promise ‘meandering’). I really really really like data, and I especially like figuring out interesting ways to take lots of data from different places and present it in a way that facilitates understanding.

'For he today that sheds his blood with me shall be my brother... ' --Big Will

At some point in the past, I also enjoyed playing multi-player real-time strategy games. My favorite games enable epic battles with all manner of complexity in the viable strategies (ok, and an occasional zerg rush). A critical skill one develops playing these types of games is keeping an eye on the mini-map. Players develop an intuitive notion of critical places on the map. Just one pixel of certain colors anywhere might warrant investigation, but flashes of color in certain places require immediate attention or the game is lost. The mini-map also lets you know what your allies are doing, where they might be winning or losing, and is critical for any sophisticated coordination.

In parallel, people have put a lot of energy into tools that let you analyze the quality of code, particularly Object Oriented code.

What if we had a tool that took an OO code base and generated a view of it in a ‘mini-map’, maybe several views, like the class hierarchy, the dependency/call graphs, and the file layout, then used color to superimpose more data, like cyclomatic complexity and test coverage? (I know some Java IDEs are pretty close to being able to do this.)

Now what if we can update this mini-map data in real time and use it as feedback in an environment like UNA?

We know the critical pieces of the code, and have a sense of the aesthetics and metaphors that code embodies. What if we not only get feedback when we run our tests, but we see green and red cascade across the mini map in the places other team members are working? Or we can see cyclomatic complexity increase? Or new untested code… (Too bad you can’t fix problems like that by pushing ‘A’ and clicking the mini-map to direct a screen full of your hydralisks… *shrug*)

The team will start to intuitively know ‘what’ colors ‘where’ warrant investigation, or perhaps even immediate action. The battle unfolds transparently and with a clear sense of where the team might be winning or losing.

Take it a step further: what if, instead of version control being based on file sets, the whole ‘battle’ is recorded, keystroke for keystroke? Not only can any point be recovered, but that is some serious data about how everyone works. I’m not even sure where that would lead. I do know that new information enables optimization, both specific and systemic improvements. Tactics and strategies can’t help but evolve. (Hopefully not draconian notions of productivity…)
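A minimal sketch of what keystroke-level recording could look like (the names here are hypothetical, and real collaborative editors use far more sophisticated operation logs): an append-only event log from which any past state of the buffer can be replayed.

```python
import time

class EditLog:
    """Append-only log of insert events; replays to any point in history."""

    def __init__(self):
        self.events = []  # (timestamp, author, position, text)

    def record(self, author, pos, text, timestamp=None):
        self.events.append((timestamp or time.time(), author, pos, text))

    def replay(self, upto=None):
        """Rebuild the buffer from the first `upto` events (all by default)."""
        buf = ''
        for _, _, pos, text in self.events[:upto]:
            buf = buf[:pos] + text + buf[pos:]
        return buf
```

Deletions, cursor movement and even VOIP timestamps would just be more event types in the same log, which is what makes the recorded ‘battle’ replayable and analyzable.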

While we’re at it, let’s put VOIP channels in this thing and facilitate/record dialog, too.

I can’t think of a single human endeavor that doesn’t benefit from observation, analysis and reflection. The code review process would go from looking at the static result to watching the journey. Root cause analysis potentially reveals more about ‘how’ and ‘why’, instead of just groping at ‘what’. What could this feedback reveal about how we work, and how we work together? As I stated in the beginning, I don’t know… but it sounds pretty cool, no?

I’m not sure where the line is between enabling productivity and gratuitous indulgence, but the technology is all there; it’s just a matter of hooking the right things together in interesting ways.

I’m certainly not going to build anything like this anytime soon, but if someone reading this does, and you happen to make a whole lot of money revolutionizing collaborative software development, just promise to buy me one nice dinner, and you probably owe John one too, unless you are John, in which case run with it. (I’m partial to sushi, but I’m open to other options. If you devote your life to building this and lose everything, I’ll buy you dinner. Omakase Onegaishimasu)

Miles Per Gallon

In truth, a good case could be made that if your knowledge is meagre and unsatisfactory, the last thing in the world you should do is make measurements; the chance is negligible that you will measure the right things accidentally.

George Miller

Last week I drove to Los Angeles with my wife, two kids, my mother-in-law and her sister in our not so eco-friendly SUV. Hilarity ensued, but that’s not the point of this post.

I don’t usually drive this car. Until recently, most everything in my life was in about a 5-mile radius: work, play, whatever. The SUV is ‘her’ car; my wife drives it to where she does research and home most every day. Occasionally, we all get in and go somewhere exciting, like grandma’s.

You are starting to wonder what the point is, and I’m hoping I’m about to make some.

This late model SUV has a speedometer, a tachometer, and a computer that estimates instantaneous MPG and average MPG since the last fill-up, since the beginning of the trip, since the epoch, etc.

On the way to LA, with this SUV stuffed full of people and random stuff, I averaged just over 20 MPG at ~80 MPH. On the way back, I averaged 25.6 MPG with the same people and even more stuff and nearly identical MPH.

On the way there, I kept watching the gauges and thinking ‘these rolling hills are killing my gas mileage’. On the way back, I was optimizing the speed and acceleration patterns, only dipping below 20 MPG on the steepest inclines.
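The difference is bigger than it looks. Fuel burned is distance divided by MPG, so over the same route the return leg used roughly a fifth less fuel (the only numbers here are the two averages from the trip above):

```python
def gallons(miles, mpg):
    """Fuel consumed over a distance at a given average MPG."""
    return miles / mpg

# Distance cancels in the ratio, so any value illustrates the point.
miles = 100.0
there = gallons(miles, 20.0)   # ~20 MPG on the way down
back = gallons(miles, 25.6)    # 25.6 MPG on the way back
savings = 1 - back / there     # fraction of fuel saved: 0.21875
```

Same car, same passengers, nearly identical speed; only the acceleration patterns changed.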

If you ride a bike on hills, you figure this out pretty fast. There are certain speeds at which you can hit a hill and maintain it with less effort than it would take to go slower; all of this depends on fitness, the equipment and the hill. Also, acceleration uphill is fairly expensive. Your body figures this stuff out because energy expenditure and efficiency have consequences, and the feedback is rapid.

In an automobile, I doubt I ever would have figured it out without the gadgetry.

You can’t manage what you can’t measure.

What does this have to do with software? Velocity is a commonly used idiom in project planning. I’ve seen velocity measured different ways in practice, and seen its measurement discussed in many more.

But I’ve never really seen efficiency measured with relation to velocity. The only time efficiency is considered is with respect to getting more velocity.

I’ve never seen anyone in a planning meeting ask, how can we maintain this velocity more efficiently? Is this velocity the most efficient? Over the life of any decent sized project, this seems like it would be important.
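One hypothetical way to frame the question (both the metric and the numbers are invented for illustration): treat velocity per unit of effort as the MPG, say story points per person-hour, and compare sprints that ‘went the same speed’:

```python
def efficiency(points_delivered, person_hours):
    """A made-up 'MPG for software': story points per person-hour."""
    return points_delivered / person_hours

# Two sprints with identical velocity but very different costs:
crunch = efficiency(30, 300)        # 30 points, overtime and weekends
sustainable = efficiency(30, 240)   # same 30 points, normal weeks
# sustainable > crunch: same velocity, better mileage
```

The velocity chart alone shows two identical sprints; only the second number reveals which one was rolling downhill.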

(Aside: The XPers might, as they have a notion of sustainable pace, but I’ve never been in a pure XP setting. I’m starting to think they only exist in legends, or they are like the Jedi scattered by the Clone Wars, trying to cultivate the force in swamps and deserts… I’ll be at Agile 2008 this week, so I’m hoping to come home with some new Jedi mind tricks.)

How many miles per gallon are you getting? Are you losing MPG to the rolling hills?

Perhaps the more important question, how would you know? What do you measure? (The instantaneous estimate feedback was key for me, hmm…)

Agile Infrastructure

True stability results when presumed order and presumed disorder are balanced. A truly stable system expects the unexpected, is prepared to be disrupted, waits to be transformed.

-Tom Robbins

One thing that became clear as software practices matured and self-optimized was that not being able to build a project from source in an automated fashion can bring development progress to a grinding halt, particularly as more bodies are added. Without the ability to build from source in a predictable manner, which is the predicate for any flavor of test-driven development or continuous integration, the development efforts of a growing team are like so many butterfly wings, each capable of unleashing storms on the unsuspecting halfway around the world.

But how many organizations dependent on a web application can reliably build their production servers from bare metal? Automatically? Unattended? When your application is a ‘service’ on a server, how is that fundamentally different from building a traditional application from source?

How does capacity planning change in a world where ‘Digg’ and ‘Slashdot’ are explicit goals? When Facebook can drive adoption? When adding new servers changes from a purchase order and weeks of waiting to a web service call?

If you want to participate in this ‘as a Service’ brave new world (get up in that ‘aaS’ if you will), and your plan to bring up new servers involves a meatcloud sshing their little hearts out, you might as well give up now. Seriously…

How Agile is your infrastructure?

Further, what is the plan to manage the life cycle of the servers? Most people have figured out that ‘tail -f’ is not a monitoring solution. But how many of them know exactly what is running on their machines and why? How many have servers that they are afraid to turn off because they aren’t sure what is running, but it might be important? How many configure a server, back away slowly and hope they aren’t the next one who has to touch it?

In another recent episode, doing some custom Puppet work with Luke, who has essentially crossed the Developer-Sysadmin divide (I’m not sure he is a chief of the new tribe, but he’s definitely a shaman), he became frustrated that he couldn’t write Puppet code the way he could Ruby code. (He had not written complex Puppet code for a while, since he stays pretty busy working on Puppet’s internal code.)

Sure, I guess this would be awesome if I was a sysadmin, but I can’t test this code. The only way I can have any confidence it works is to run the whole thing. I guess I just take for granted all the tools that are available to me as a developer now.

Luke Kanies

How does it change things when your infrastructure is code? Can be versioned and diffed? Can be shared and reused? Can be tested? Continuously?

How awesome will PuppetUnit or PuppetSpec be?

Test Driven Infrastructure?

It is only a matter of time…

Adopting Agile (The Art of the Start)

In theory there is no difference between theory and practice. But, in practice, there is.

-Jan L. A. van de Snepscheut

So you decided you want to be Agile, what now? First ask why? Do you really believe in the principles or has something else convinced you to become buzzword compliant? Have you read Fowler, Martin, Cockburn or Beck? Do you know the difference between XP and Scrum? Have you heard of Crystal? Do you know the pitfalls? Have you read the critiques? the rants? Do you know all the personalities involved and their biases?

How could you? Who has time for that? Oh, and the software project that the life of your company depends upon can wait for you to sort it all out? Not likely. . .

While we deliberate about beginning, it is already too late to begin


A recurring theme for teams adopting Agile practices is what may seem like an overwhelming sense of chaos as the old habits and sensibilities collide with the new. The chaos can be real or perceived, but it is most prevalent in teams where Agile is being adopted without a lot of experience, often without buy-in or organizational support, and on top of a legacy code base. Furthermore, the fledgling Agile champions, while enthusiastic, are trying to balance the notion that the methods only work if they follow all the disciplines against the fact that there is no possible way a team can adopt all the practices instantaneously.

Let me be clear, I personally think Agile, as embodied by the values of the manifesto, is about as good a way to approach the problem as one can get, but transitioning a team can often be a source of pain. In fact, let’s remind ourselves what those values actually are, because in practice, I think people often lose sight of them.

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

No river can return to its source, yet all rivers must have a beginning

-Native American Proverb

How to proceed? Well, that depends: where are you now? And where do you want to go?

There are plenty of blogs and books and consultants that offer appropriate advice, and depending on whether you have more time or money, these can be great resources.

Here is my free, unsolicited 2 cents of self-referencing advice for teams who really want to deliver; take it for what it is. . .

Focus on Trust and Results, and a lot of problems will just melt away. Standups, sprints and software will just become byproducts of commitment and accountability. Honest conflict in retrospectives optimizes the process.

I’ve got no science for you, just faith. (Unless someone can give me a way to measure ‘trust’)

Like I said, take it for what it is. . .

So where do you start?

Create trust, problem solved.

Shared Metaphor: Gnome Cloud Meat Pastries 2.0

The problem with communication … is the illusion that it has been accomplished.

– George Bernard Shaw

Underpants → ??? → Profit

One of the topics arising frequently in my conversations is the need for clear vocabulary to describe emerging concepts forced by technical evolution, and how those concepts fit into a personal understanding of the world.

Dialects arise, but until we have shared symbols and metaphors these discussions must always be clumsy. (After we have shared vocabulary, it will still mostly be clumsy.)

This is a feeble (and misguided) attempt to share some metaphor.

Gnomes: Some of you might know of venture funded technical start ups that are going to change the world. Some of those start ups might have ridiculous valuations and little or no revenue. Some just have no revenue. . . at least they have a business plan, good luck with that.

Clouds: Everyone knows about ‘the clouds‘. Or maybe they don’t? Or maybe the whole point is that they don’t? Or something? Some anonymous thing over there does something like this other thing I once knew and could touch. . .

Meat: Every organization probably has some hardware and some software, but even in sophisticated technical endeavors there are niggling little tasks that probably should have been automated a long time ago. How are those tasks accomplished? Meatware. . . but of course.

La Pâtisserie: This might be less common for some, but I fear those of you in software might have lived it in one form or another. This might also be referred to as ‘dessert tray‘ Agile, or euphemistically as frAgile development. You take one part flailing development organization, two parts buzzword compliance, cut in some shortening or lard, and then bake until golden brown, top with whipped cream and voilà! Sprinkle with legacy code if you want a real treat!!

Now comes the fun part. . . mix and match. I haven’t found a permutation that isn’t relevant, informative and amusing. (Ok, so I might be easily amused)

Cloud Gnomes – a virtualization startup

Meat Pastries – developers doing mind-numbing repetitive work with Agile razzle-dazzle fairy dust.

Meat Clouds – Uhm, like. . . built the pyramids and make your shoes

I leave the rest as an exercise for the reader:

Gnome Meat Pastry Clouds

Cloud Pastry Gnome Meats

Pastry Meat Cloud Gnomes

. . .

Pardon me, I have to get back to playing with puppets . . . and gathering underpants.

Less is More . . . doh!!

Smaller projects are more work, not because they are, but because they encourage a business to pile a bunch of them on top of each other.

Best Practices™

Perfection is attained by slow degrees; it requires the hand of time

– Voltaire

Some say there are things you never want to see being made, if you like them.

Sausage, politics and software to name a few.

Here is the dirty little secret, no one in software or IT really knows the BEST way to do anything. . . no one. . .

It’s true. Period.

I fondly remember a scolding I received from Robert Mecklenburg, when I used the term ‘Best Practices’ in the context of a code review. I credit him for forever altering my personal understanding of the concept.

‘Best’ implies a comparison, a hierarchy. There can be no ‘Best’ without a relationship to ‘Good’ and ‘Better’. How can you know the ‘Best’ solution without knowledge of all possible solutions?

How can you know anything is the ‘Best Practice’? Perhaps we should call them Seem-Like-A-Good-Idea-Today Practices. . . Or maybe Next-Time-We’ll-Know-More Practices . . . How about This-Seems-To-Work-OK Practices. . .

You can apply this from the barest metal all the way up the software stack to the tippy top of the user interface, from code conventions and software processes to feature planning and meeting agendas, pretty much anything that has to do with anything that isn’t already perfect. (that’s a lot of ground to cover)

That doesn’t mean there aren’t good practices or even great practices. There are certainly great programmers and sysadmins.

But next time you hear someone talking about ‘Best Practices’, ask them how they know. (And maybe what they are trying to sell. . . Who knows, it might be what you need, but it’s probably not the best, at least not provably so.)

I think context is also important: what works best in one context can be a disaster in what appears to be a similar context. Further, context is constantly changing. New hardware, new software, new team members, new problems to solve; that’s one reason reflection and adjustment are critical. We can’t know the best practice, but that shouldn’t stop us from trying to find it for ourselves and our teams.

Today’s ‘Best Practice’ is tomorrow’s quaint notion.


Dysfunction: Inattention to Results

Never mistake motion for action.

– Ernest Hemingway

The final dysfunction from The Five Dysfunctions of a Team is Inattention to Results. This ultimate team dysfunction occurs when members of the team value something other than the collective goals.

This dysfunction doesn’t mean team members have lives and balance outside the context of the company (that’s another post. . .), but that the members of the team are focused on highlighting their own achievements regardless of the overall outcome.

You can smell this dysfunction when individuals or groups start to point to everything they did right, even when the overall project is flailing. (maybe especially when it is flailing)

“Product management did such a good job of creating requirements; it’s those lousy developers that didn’t understand. I mean, just look at this pretty document. Don’t you love the font . . .”

“I wrote the code exactly to the spec, not that those idiot clients and product managers know what they want anyway . . . I am a programming god.”

Have you ever participated in either side of that conversation? Played Not-my-fault-I-am-teh-awesome hot potato?

(Quality Assurance gets blamed for everything, cause as everyone knows ‘kwalytee’ starts with the last people who touched the software, duh. . . You do have QA, right?)

There is nothing so useless as doing efficiently that which should not be done at all.

– Peter Drucker

Inattention to Results is most damaging to an organization when it becomes institutionalized process, when people stop trying to do the best thing and just make sure they go through the checklist.

When things go South, the checklist becomes a buzzword compliant shield, an Agile security blanket.

When I first read the model, I thought the most important dysfunction to address was trust, because it is the foundation of all the other dysfunctions. After a little reflection, I’m starting to think that you need at least as much focus on results, if not more. Sure, you need to work on trust, because if you don’t, eventually the rest of the dysfunctions grow to be monstrous, but ‘winning’ gives you the opportunity to work through a lot of trust issues, especially if working on trust (. . . conflict, commitment, accountability . . .) is an explicit goal.

This is all fine and dandy, but what does that mean? How can I use this?

There is no formula that you can just plug in to solve this problem. On some level, this is all about culture. On another, an individual who is not at the top of the food chain but recognizes these dysfunctions has to navigate between Scylla and Charybdis. One becomes faced with the prospect of being at odds with the perception of management, watching and suffering the dysfunctional culture, or finding something else to pay the mortgage.

(eh, maybe that’s why the book’s target audience is executive teams. . .)

Insanity: doing the same thing over and over again and expecting different results.

– Albert Einstein

what to do?

Reflect and adjust my friend. . . Reflect and Adjust

