Monthly Archives: September 2008

Working Together… with Technology

If you want to build a ship, don’t drum up people together to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.

-Antoine de Saint-Exupéry

Where to start with this one? And where to finish? The only thing I will really promise is meandering, but I think this is interesting, if not well formed…

The first thing you need is a little context about what is driving my thoughts on this:

  • I’m an XP fanboy. Even if the XP community is manifesting paranoid schizophrenia at this point (more on this in a future post), once you have experienced pairing and had a breakthrough where things start to flow, you can never go back. Everyone working in isolation becomes really painful and is frankly less efficient IMHO, especially over the life of a project.
  • I haven’t found a way to effectively pair while working on Puppet‘s core, mainly because of the ~1000 miles between Luke and me. We talk on the phone, open the same files and go to line numbers, or sometimes we try sharing the screen. This certainly helps with knowledge transfer, but it feels clumsy and never achieves the flow that you need to crank out solutions together.

So how can a distributed team work together?

I’d been reflecting on a development tool called Una, one of my personal highlights from Agile 2008, when I saw the Devunity presentation from TechCrunch 50, which set out to provide a similar environment. These types of tools facilitate possibilities for working together that I will try to illuminate, and then I’ll throw in my 2 cents plus interest.

I volunteered to participate in a ‘Coding with the Stars’ competition. This was envisioned as a sort of geek-fest, Agile-engineering version of ‘Dancing with the Stars’. I was paired with John De Goes. We were the first team eliminated. I admit I was disappointed just a bit, not so much because we were eliminated as because we didn’t do a good job of portraying the power of what Una enables. In order to be true to the show, the judging was capricious, but at least I’m not bitter about that… :/ (In all seriousness, the whole thing was just for fun, and while Una is a great tool when the network isn’t feeble, the collaboration Una enables probably isn’t best demonstrated in 4 minutes of software development ‘theater’ without a lot of setup and backstory.)

Resistance is Futile...

John is obviously intelligent and intense, which means I automatically like him. Marketing admittedly might not be his strong point (I do appreciate the Gigeresque imagery, though I’m not sure the proverbial ‘everyone else’ does), but he’s put a lot of time and thought into building software and how people can do it better. Una is the result, and when you see John talk about it, Una is obviously a labor of love.

Ok, so what does Una enable? I’m not totally sure, but watching John use it, and what I learned using Una for two hours, was compelling enough that I’m still thinking about the experience six weeks later.

Una is, for all intents and purposes, a text editor bordering on an IDE, with a networking component that lets multiple people work on the same files at the same time. I have my cursor, you have yours, and when the network packets are flowing, we can see each other’s work in real time. This is different from sharing a screen, because we can all control what we see and do independently. Whenever anyone saves a file, Una runs the tests, and everyone knows how long ago the tests were run because a timer resets after each run.

I admit that this seemed a bit disconcerting in the beginning, but after a couple hours of working together with John, some really interesting things started to happen. We started working ‘together’…

How many times have you watched someone type and witnessed a typo? Usually, you point something like that out and the person will go back and fix it. No big deal, right? 5 seconds of ‘hey’, ‘right there’ and it’s fixed. But what if you could just fix it? And since he (or she) can see you fix it, the break in concentration goes from 5 seconds to maybe one. After about the first hour of working together, this type of thing started happening, spontaneously and organically. The cadence of test-driven pairing potentially changes from syncopated keyboard switches to an almost surreal flow. The communication really started to go ‘into the code’, and the oral cues became less and less necessary. (This probably wasn’t the best thing for the ‘competition’; que sera, sera.)

And this was only after an hour of working with a tool I’d never seen before and a person I’d never met! Imagine what might happen after working this way all the time with a talented team… Honestly, I’m not really sure, but based on what I saw in the TC50 presentation, Devunity also sees some of the same potential.

This is where my two cents starts, but first I need to introduce a few more ideas (I did promise ‘meandering’). I really really really like data, and I especially like figuring out interesting ways to take lots of data from different places and present it in a way that facilitates understanding.

‘For he today that sheds his blood with me shall be my brother…’ –Big Will

At some point in the past, I also enjoyed playing multiplayer real-time strategy games. My favorite games enable epic battles with all manner of complexity in the viable strategies (ok, and an occasional zerg rush). A critical skill one develops playing these types of games is keeping an eye on the mini-map. Players develop an intuitive notion of critical places on the map. Just one pixel of certain colors anywhere might warrant investigation, but flashes of color in certain places require immediate attention or the game is lost. The mini-map also lets you know what your allies are doing, where they might be winning or losing, and is critical for any sophisticated coordination.

In parallel, people have put a lot of energy into tools that let you analyze the quality of code, particularly object-oriented code.

What if we had a tool that took an OO code base and generated a view of it as a ‘mini-map’, maybe several views (the class hierarchy, the dependency/call graphs, the file layout), and then used color to superimpose more data, like cyclomatic complexity and test coverage? (I know some Java IDEs are pretty close to being able to do this.)

Now what if we could update this mini-map data in real time and use it as feedback in an environment like Una?

We know the critical pieces of the code, and have a sense of the aesthetics and metaphors that code embodies. What if we not only get feedback when we run our tests, but we see green and red cascade across the mini-map in the places other team members are working? Or we can see cyclomatic complexity increase? Or new untested code… (Too bad you can’t fix problems like that by pushing ‘A’ and clicking the mini-map to direct a screen full of your hydralisks… *shrug*)

The team will start to intuitively know ‘what’ colors ‘where’ warrant investigation, or perhaps even immediate action. The battle unfolds transparently and with a clear sense of where the team might be winning or losing.

Take it a step further: what if, instead of version control being based on file sets, the whole ‘battle’ were recorded, keystroke for keystroke? Not only could any point be recovered, but that is some serious data about how everyone works. I’m not even sure where that would lead. I do know that new information enables optimization, both specific and systemic improvements. Tactics and strategies can’t help but evolve. (Hopefully not draconian notions of productivity…)

While we’re at it, let’s put VoIP channels in this thing and facilitate/record the dialog, too.

I can’t think of a single human endeavor that doesn’t benefit from observation, analysis and reflection. The code review process would go from looking at the static result to watching the journey. Root cause analysis potentially reveals more about ‘how’ and ‘why’, instead of just groping at ‘what’. What could this feedback reveal about how we work, and how we work together? As I stated in the beginning, I don’t know… but it sounds pretty cool, no?

I’m not sure where the line is between enabling productivity and gratuitous indulgence, but the technology is all there; it’s just a matter of hooking the right things together in interesting ways.

I’m certainly not going to build anything like this anytime soon, but if someone reading this does, and you happen to make a whole lot of money revolutionizing collaborative software development, just promise to buy me one nice dinner, and you probably owe John one too. Unless you are John, in which case, run with it. (I’m partial to sushi, but I’m open to other options. If you devote your life to building this and lose everything, I’ll buy you dinner. Omakase onegaishimasu.)


Shared Metaphor Encore

Recently, it became painfully clear that my ‘Shared Metaphor’ post will eventually be my most-read post of all time. Apparently, more people care about ‘gnome’ and ‘pastry’ than care about ‘software’, ‘dysfunctions of a team’, ‘agile’ and ‘puppet’ combined.

The other thing that is painfully clear: no matter what, it is always ‘meatcloud’ at the bottom, and often ‘meatcloud’ all the way down. (If you get that joke, I am sorry…)

In case you missed it the first time, without further ado… ‘Gnome Cloud Meat Pastries 2.0’

The problem with communication … is the illusion that it has been accomplished.

– George Bernard Shaw

Underpants → ??? → Profit

One of the topics arising frequently in my conversations is the need for clear vocabulary to describe emerging concepts forced by technical evolution, and how those concepts fit into a personal understanding of the world.

Dialects arise, but until we have shared symbols and metaphors, these discussions must always be clumsy. (After we have shared vocabulary, it will still mostly be clumsy.)

This is a feeble (and misguided) attempt to share some metaphor.

Gnomes: Some of you might know of venture-funded technical startups that are going to change the world. Some of those startups might have ridiculous valuations and little or no revenue. Some just have no revenue… at least they have a business plan, good luck with that.


Clouds: Everyone knows about ‘the clouds‘. Or maybe they don’t? Or maybe the whole point is that they don’t? Or something? Some anonymous thing over there does something like this other thing I once knew and could touch…


Meat: Every organization probably has some hardware and some software, but even in sophisticated technical endeavors there are niggling little tasks that probably should have been automated a long time ago. How are those tasks accomplished? Meatware… but of course.


La Pâtisserie: This might be less common for some, but I fear those of you in software might have lived it in one form or another. This might also be referred to as ‘dessert tray‘ Agile, or euphemistically as frAgile development. You take one part flailing development organization, two parts buzzword compliance, cut in some shortening or lard, bake until golden brown, top with whipped cream and voilà! Sprinkle with legacy code if you want a real treat!!

Now comes the fun part… mix and match. I haven’t found a permutation that isn’t relevant, informative and amusing. (Ok, so I might be easily amused.)

Cloud Gnomes – a virtualization startup

Meat Pastries – developers doing mind-numbing, repetitive work with Agile razzle-dazzle fairy dust.

Meat Clouds – Uhm, like… built the pyramids and make your shoes.

I leave the rest as an exercise for the reader:

Gnome Meat Pastry Clouds

Cloud Pastry Gnome Meats

Pastry Meat Cloud Gnomes

…

Pardon me, I have to get back to playing with puppets… and gathering underpants.


Semantics Matter (or I finally get it…)

Silence is better than unmeaning words.

-Pythagoras

Over the time I’ve been programming, I’ve come to value certain things in a code base. Without going into too much detail (which will probably be its own post soonish), I value code that is easy to understand and manipulate.

The first level of understanding, which is facilitated by the style and organization of a code base, is ‘What’. If I can look at code and mentally map ‘What’ it is doing, I start to get a level of productive joy. I can’t get that if I’m trying to sort out which branch of the if-else plinko, meta-scrambled self.foo, or side-effect-driven development is actually getting executed at any given moment. ‘What’ is really really nice.

A higher level of understanding code is ‘Why’. ‘Why’ is more subtle, but orders of magnitude more powerful than ‘What’. ‘What’ is a technique; ‘Why’ is the purpose, the driving principle. Understanding ‘Why’ gives flexibility and options to ‘What’. (Unfortunately, most code does a poor job of conveying ‘What’, let alone ‘Why’, but that is a topic for another day.)

Which brings me to ‘semantics matter’, which is something I’ve heard Luke say over and over when talking about Puppet. When I heard him before, I just nodded and thought he was talking about nifty Puppet language features like ‘require’ and ‘subscribe’ for managing relationships between services and the underlying packages and config files, because that was what he was using as the example. I was understanding at the ‘What’ level.
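For anyone who hasn’t seen Puppet, here is a minimal sketch of what those relationship declarations look like in a manifest (the ntp names and paths are just illustrative):

    # Install the package first.
    package { 'ntp':
      ensure => installed,
    }

    # The config file only makes sense once the package is installed.
    file { '/etc/ntp.conf':
      ensure  => file,
      source  => 'puppet:///modules/ntp/ntp.conf',
      require => Package['ntp'],
    }

    # Restart the service whenever its config file changes.
    service { 'ntpd':
      ensure    => running,
      enable    => true,
      subscribe => File['/etc/ntp.conf'],
    }

The syntax isn’t the point; the point is that the ordering and restart behavior are declared as relationships between resources instead of being buried in a script.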

You’d think I would have figured this out sooner after listening to Luke rant about it so much, but the whole ‘semantics’ thing didn’t click until a discussion with Teyo about preparing for his ‘Golden Images’ talk at Linux Plumbers Conf.

Do not make a golden image...

Puppet lets one model system configuration with code. Just like any other language, the ‘What’ and especially the ‘Why’ can be made apparent if the code is organized in certain ways and/or if one shares certain metaphors with the original author.

Until quite recently, I was having a hard time understanding why rebuilding virtual images with Puppet was superior to just versioning working images. I mean, I heard the words, but in my mind I was questioning the practical difference. In one case you start with a base image, get everything set up and save the working copy; in the other you start with the base image and let Puppet build it up. I kept thinking to myself: what is the difference? Both solutions end up in the same place, right?

Being afraid to turn off real machines where you have no idea what is running, because there might be some critical cron job that matters on the third Tuesday of the month, is one thing… (This happens, for real. Some of you know this is true; someone reading this works at a place like this right now, guaranteed.) But once everything is virtual and running what you want, what’s the harm of just making images?

The difference is the potential to encode ‘What’, and if your code is sensibly organized, ‘Why’.

I was only seeing the static state of the working system. What if you want to change things? If you have working images, you have to reconstruct ‘What’ by discovery; good luck with ‘Why’. If you are lucky, it was you who set up the systems, and it wasn’t over 6 months ago. The ‘What’ and ‘Why’ were apparent to someone, potentially you, when the systems were first set up, but now you just have this bucket of bootable bits that ostensibly does something. If it isn’t working, or there is a need to change something significant, the choice is poking around the bucket of bits until the new ‘What’ is in place, or starting over with a new ‘Why’ that is lost as soon as the new image is finished.

If Puppet is building your services, ‘What’ and ‘Why’ can be recorded, clarified, recovered and manipulated. Version control becomes straightforward, manageable, and transparent. Services can have clear definitions and relationships. So obvious… can’t believe it took me this long to ‘get it’…
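To make that concrete, here is a hedged sketch of what recording ‘What’ and ‘Why’ might look like; the billing class and its cron job are entirely hypothetical, but the intent lives in versioned code rather than in a bootable bucket of bits:

    # Why: finance reconciles accounts mid-month and needs this export first.
    # (This is exactly the kind of job that would otherwise haunt a golden image.)
    class billing {
      package { 'billing-tools':
        ensure => installed,
      }

      cron { 'billing_export':
        command  => '/usr/local/bin/billing_export.sh',
        user     => 'billing',
        monthday => 17,
        hour     => 2,
        minute   => 0,
        require  => Package['billing-tools'],
      }
    }

Now the ‘What’ is greppable, and the ‘Why’ at least has a place to live.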

How many 500 MB images do you want to version? Can you make any sense of the diffs? Really? Seriously?


Exhale…

Ok, the last few months have been insane…

August started with a week in Toronto at Agile 2008, which I started to write a post about. Then I looked up, and it was 2000 words with at least 5 clearly deserving topics. At about this point I realized I had too much real work piled up to justify untangling the topics, so I just left it as a draft, but I’m hoping to get through all those thoughts in the next few weeks.

August ended with the Utah Open Source Conference, which, all things considered, was pretty much awesome. ~550 people came out of the woodwork for a local conference in a place where Linux battles with Sex; not too shabby. I had a great time, reconnected with old friends and made a couple of new ones. I also got the opportunity to give my first hour-long Puppet talk (slides), which I think went pretty well. I underestimated the time it would take to get through some of the code examples, but that just left more time for questions, which filled out the hour.

I’ll do better next time 🙂

