thank you for your support

 

 

One of the penalties for refusing to participate in politics is that you end up being governed by your inferiors. — Plato

My last post here was an open letter to OpenStack.

The result was a number of conversations, some public and some private, about OpenStack’s history and direction.

Without solicitation, I was nominated as a candidate to the OpenStack board.

In follow-up conversations, I have been convinced to actively campaign for this position, and this is my attempt to do so.

While I believe the OpenStack board is not empowered to take actions directly, a board seat would legitimize my voice and let me provide perspectives that are underrepresented in the conversation at that level.

That is why I was nominated and I would see that as my duty.

I have opinions and perspectives that oppose the status quo. I am now 100% confident these should be given voice, because people who have bet their careers and livelihoods on OpenStack have thanked and encouraged me for articulating issues with OpenStack that they experience on a daily basis.

In the interest of full disclosure, I wanted to list my position and what I represent as a candidate:

  • I accept that the board shouldn’t be dictating technical solutions. I reject the idea that the board shouldn’t be concerned with or discuss technical quality.
  • Accepting and fully embracing the notion that the OpenStack Foundation and by extension the board are primarily concerned with marketing, I believe the best possible marketing is a high quality project that is a pleasure to use and operate.
  • I am against a proliferation of projects while core infrastructure functionality remains painfully unresolved.
  • I am not in favor of time-based releases that prioritize dates above other considerations.
  • I promise to make the experience of the front line user and operator a primary driver in all actions and decisions I’m involved in as an OpenStack board member.
  • Towards that end, I will support my positions with user and operator case studies and publicly available information from the projects.

My goal is not to focus on anything negative, but to provide a counterpoint and to recognize that the negative doesn’t get addressed by ignoring its existence.

If you believe I represent your interests enough to warrant a vote, I would be honored to have your support next week in the OpenStack Board Election.

Andrew Clay Shafer

P.S. I’m happy to respond to questions in the comments.

 


to whom it may concern

I’ve needed to write this for a long time.

I’ve struggled with how to frame everything and what tone would be best.

I would like to believe this could help OpenStack, but I have my doubts that anyone who can do anything will, or they already would have. I would like to think reading this might help someone else, but it might end up just being for me, and I’m ok with that.

For context, I was around when OpenStack was announced at OSCON in 2010 as the VP of Engineering at Cloudscaling. I have experience deploying and operating OpenStack and CloudStack implementations of some significance. I also evaluated OpenNebula, Eucalyptus and Nimbula along the way, as I really wanted a good solution to the cloud problem. I spent time consulting on OpenStack projects and last focused on OpenStack when I was hired by Jesse Andrews at Rackspace, before my whole team left to go to Nebula.

I’ve since moved on to focus on other things, but still watch OpenStack as I have a literal vested interest in the success of the project as well as many friends I care about in the community.

OpenStack was born into a vacuum created by the acceleration of AWS adoption and features, plus missteps by Eucalyptus and CloudStack with respect to the value of Open Source communities and how to cultivate them as a vendor. When OpenStack was first announced, I felt there was so much potential. I certainly made an effort to evangelize the project. Now, while many will declare victory, I’m afraid most of that potential will not be realized or worse, that OpenStack will leave a wake of bad projects that people unwittingly mistake for operable software solutions.

OpenStack has some success stories, but dead projects tell no tales. I have seen no less than 100 Million USD spent on bad OpenStack implementations that will return little or have net negative value.

Some of that has to be put on the ignorance and arrogance of some of the organizations spending that money, but OpenStack’s core competency, above all else, has been marketing, and if not culpable, OpenStack has at least been complicit.

My compulsion to record this for posterity was triggered by details in the last release and related chatter around other announcements.

I would like to highlight the ‘graduation’ of Ceilometer. Ceilometer is a tragedy masquerading as a farce. In my opinion, this project should not exist and as it exists should not be relied upon for anything, much less billing customers.

First, the idea that monitoring/metering is something that should be bolted on the side of a cloud is almost as nonsensical as bolting on reliability and scalability. Experience operating a web service that has been developed with instrumentation will quickly disabuse anyone but a masochist of the bolted-on approach. Second, Ceilometer’s implementation is such a mishmash of naive ideas and pipe dreams, without regard for corner cases and failure scenarios, that Ceilometer’s association with OpenStack should be seen as a negative, and the graduation of the project calls into question the literal foundation of OpenStack decision making. Ceilometer’s quality is bad, even by OpenStack standards.
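
To make the contrast concrete, here is a minimal sketch of what ‘developed with instrumentation’ looks like, using entirely hypothetical names for the service and message bus: the metering event is emitted in-process, at the exact point where the billable action is known to have succeeded, rather than reconstructed later by an external agent polling state.

    import time
    import uuid


    def emit_event(bus, event_type, resource_id, payload):
        """Publish a metering event on the service's own message bus.

        The bus, event names, and payload shape are illustrative only; the
        point is that the event is emitted in-process, at the moment the
        state change is known to have succeeded.
        """
        bus.publish({
            "message_id": str(uuid.uuid4()),
            "event_type": event_type,
            "resource_id": resource_id,
            "timestamp": time.time(),
            "payload": payload,
        })


    def create_instance(compute, bus, flavor, image):
        """A hypothetical 'create instance' handler with built-in metering."""
        instance = compute.boot(flavor=flavor, image=image)  # the billable action
        emit_event(bus, "compute.instance.create", instance.id,
                   {"flavor": flavor, "image": image})
        return instance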

When I was still inclined to take action on OpenStack-related things, back when Ceilometer was just a name someone proposed, I made all these same arguments, but with more detail, specifics and depth. What I came to understand is that solving metering was not the primary motive. No one really cared about that. Certainly not in the way one would approach a project they intended to deploy and operate. The primary motivation was to have a project so that someone could be a Project Technical Lead (PTL).

That precedent started at the beginning with the creation of Glance, a project that never should have existed, and the subsequent creation of PTLs. The dynamics of the perceived prestige of a PTL superseded other considerations.

This dynamic allowed for a splintering of vision and mandate. If there is any one thing I have seen waste time in implementing and operating OpenStack, it is trying to coax the disparate OpenStack services, often from the same ‘release’, to work together. Press releases trumpet a new release of OpenStack as if it were working software, and the naive ‘Enterprise buyer’ would rush headlong into that assumption, hopped up on adrenaline and hubris. I accept Conway’s Law as a truism. Organizations will build software that reflects the communication of the organization. If the people implementing Keystone don’t respect or understand the people implementing Nova, well, at least there is a PTL…

Don’t get me started on networking or storage…

I used to be hopeful, evangelistic even, about the possibility of a cloud service provider ecosystem built on open source. Now I am quite skeptical and feel that opportunity may be lost. Not that OpenStack doesn’t work, or at least that it can’t be made to, given certain competence and constraints, but that OpenStack doesn’t have the coherence or the will to do more than compromise itself for politics and vanity metrics.

People contribute to projects for a variety of reasons. OpenStack releases like to highlight the number of committers. That’s not a bad thing, but it’s not necessarily a good thing either. How many engineers do you think are working on AWS? GCE? How many of those committers will be the ones responsible for the performance and failure characteristics of their code? How many of those committers are dedicated to producing a world class service bent on dominating an industry? There has been some interesting, even impressive, work dedicated to improving code reviews and continuous integration, but that should not be confused with a unified vision and purpose. The per-project differences in emphasis, and the focus on nuances of stylistic Python over other considerations of quality, let alone architecture and failure conditions, determine OpenStack’s present and future. OpenStack wants to differentiate with features going up the stack, but it still has not solved the foundational infrastructure and is busy furiously discussing what should be considered ‘core’. Projects that prove to be unreliable and a poor experience for operators and users ultimately damage the OpenStack ‘brand’.

OpenStack loves to declare its own victory. OpenStack’s biggest success has been getting vendors excited about abdicating their cloud strategy in lieu of going it alone. As long as OpenStack succeeds at that, there will be no shortage of funds for a foundation, summits and parties. Ultimately, if that is OpenStack’s primary accomplishment, Amazon (and perhaps others) will run away with the user driven adoption as service providers until there is nothing left of OpenStack but bespoke clouds and mailing list dramas.

I’m not sure anyone wants my advice at this point, but here it is.

  • Focus on users, not vendors. If there can’t be a benevolent dictator, there has to be an overriding conscience. The project setup solves vendor political problems more than user software problems.
  • PTLs end up being trophies. Electing PTLs every cycle is distracting and impacts continuity. Everything about this hurts OpenStack. Don’t confuse the way things are being done with the best way to do them.
  • Define a core and stick to it, or acknowledge that your governance is broken and make OpenStack an explicit free for all.
  • Stop declaring victory with vanity metrics. Divide OpenStack’s Google Trends number by the number of marketing dollars spent. The total number of committers is less meaningful if OpenStack is an excuse to sponsor parties around a collection of disparate projects.
  • Make the ‘OpenStack’ brand mean something to users. Stewardship is greater than governance. If OpenStack is just a trough for the old guard IT vendors to feed slop to their existing enterprise buyers, without regard to the technology or user experience, then OpenStack has completely lost the plot. Make ‘OpenStack’ mean something and defend that.

I’ve certainly exceeded my 0.02 limit. Do what you will.

To all my friends at the OpenStack Summit, enjoy what Hong Kong has to offer… 幹杯.


incubators are a ghetto

Update: Following up on all the conversations I’ve had about incubators and Silicon Valley since last week: I didn’t mean to imply there is ‘a list’, or that there are no incubators besides YC, TechStars and 500 Startups that create value. Though, like most things, I think the tendency is a Pareto distribution. I would also like to point out that there is nothing wrong with programs that are not explicitly designed to help companies get investment; just don’t mention investment as a central feature in marketing the program.

The way for anyone to establish that value is to provide transparent expectations and data about their program. Done.

There has been an explosion of incubators in the last few years. Most of them suck. Some suck so bad that the net value created by the program is probably negative. I’m not going to name names. This is just about results.

Let’s start with a story. There are minor variations, but I’ve seen it played out in real time more than once in the last few years. The story goes like this. An incubator has a class of companies, they give them a little cash, they have a weekly session with a mentor or whatever, time goes by, demo day, no one gets funding, fail, fail, FAIL.

what’s wrong

They tried to copy the Y Combinator model, and by ‘copy’ I mean cargo cult. They performed the outwardly obvious ceremony, but didn’t understand and thus couldn’t replicate the mechanics of cause and effect.

Y Combinator has had an impact on the dynamics of startup formation and funding, but not because of the exact details of its program. The details, though, are what cargo cultists can see: three months, a dollar figure, weekly sessions, gogogo, demo day… the end, and most of the companies dissipate.

To be successful an incubator has to do two things: first, create companies that are actually fundable; second, get them an audience with investors who are interested and able to fund. That’s it. That’s all. Connect the dots. Success.

create companies that are actually fundable

To accomplish the first, you can look at details like Paul Graham’s judgment of people and ideas, or Brad Feld’s numbers and analysis, or Dave McClure’s hustle and passion. You can look at the program, and the mentors. You can talk about lean startup pivots or vision or whatever. At the end of the day, there is a fairly narrow band of the total spectrum of business opportunities that is venture fundable (though that band still represents infinite opportunities). If, through whatever process of filtering, coaching and pivoting, the resulting companies don’t represent an opportunity for plausible venture returns, then by definition those companies will not get funded.

The fundability is also a function of how the opportunity is represented. Raising a round of funding is telling the story of your company to a particular audience. If you can’t connect the right dots for the investors, they probably can’t connect them for you. I literally grimace when I meet founders who have come through these programs and don’t understand how to discuss the addressable market or the go-to-market, let alone term sheets. The point is that a company is only as fundable as its ability to tell the story that it is fundable. And that skill is something many incubators fail to teach. Which is a segue to what has to be in place to accomplish the second thing: getting them an audience with investors who are interested and able to fund.

the network: the only thing that’s real

The traditional venture model ran on warm introductions. If you have an incubator that can’t make that hand-off personally, or get the right audience in the room for demo day, then the value of the program is severely limited. So much so that I’d argue, given what you trade in equity for the amount of money, the founders’ time would be better spent reading through Venture Hacks or ‘The Business of Venture Capital’, and then hustling to social engineer introductions themselves.

As much as anything that’s what the successful incubators are leveraging. YC leverages the personal connections and reputation of Paul Graham et al. Dave McClure is PayPal Mafia. He knows people and more importantly, people know Dave. These programs, through the network of people involved, can make introductions to dozens of individuals who in many cases MUST fund startups. That is a competitive advantage.

out of the ghetto: advice for founders

What can be done?

If you are a founder looking at a program that hasn’t had at least 50% of its previous companies funded, you might reconsider the options. You may have a good experience and learn some things, but there is an opportunity cost. If your life’s mission is really to make something amazing happen with your idea, and you have the resources to devote your time to that, then what are you waiting for? If the time spent and the equity traded isn’t going to open more doors for you than just working on the code and AngelList, then that program might be a setback, because you’ll have to do those things anyway, and on your own you won’t have the added burden of sorting the good advice from the bad. You will also inevitably be judged by the quality of the companies you stand next to, at least in the context of the program.

If you can go to YC, TechStars or 500 Startups, you should. I would. You’ll learn things and get a tiny bit of money, but the connections you make to the network of founders and mentors are what will make all the difference. That’s the fat head of the incubator distribution. I’m sure there are others that add value in the middle, but I want to encourage people to be aware of what they are getting into and for what. In addition to the time and equity commitment, be sure to get more data and weigh the options and benefits.

  • How many companies have been through the program and how many got funded?
  • Who are the mentors in the program and what are their backgrounds?
  • Can you get in contact with people who have been through the program? Especially founders that failed.
  • What are the other options? (for example, building something)

unsolicited advice for the well intentioned incubator

If you run an incubator and your earnest concern is the creation of value, this is my unsolicited advice. (Remember, there are just two things you need to do: create fundable companies and get them in front of investors.)

  • If your companies aren’t getting funded, take it personally. Look at what your program creates and the relationships that are being built with investors.
  • If no one involved has ever been funded, that’s a problem. If hardly anyone involved has been funded or is in a position to fund, that’s a problem. Fix it. You have very little hope of creating fundable companies otherwise.
  • The companies should be getting as many questions from you as they are answers, probably more… a lot more. Of course, that’s predicated on knowing the right questions to ask.
  • The incubator needs to be building relationships with investors as much as or more than any of the companies. If no one running an incubator has those relationships or can build them, how can they help a company build relationships?

There will always be advantages in resources and relationships. That’s how the world works. Understanding that is the first step to gaining those advantages. Hope that helps.

tl;dr if an incubator is run by people who have never run a startup, never successfully pitched venture capital, or don’t have the cash on hand plus the risk tolerance to make considerable investments, the companies that accomplish anything will do so in spite of the program rather than because of it. Some of these incubators appear to be nothing but a hobby for individuals of relatively high net worth, often with no prior connection to venture investment, who tell their war stories, filtered through survivor bias, to founders who have to build companies in an environment different from anything their mentors have ever experienced. The resulting bad advice and misguided effort may be a net negative for everyone involved, founders and investors alike.

addendum: After writing this I found this, which could potentially be useful for collecting and comparing data. There is a link there to a post and a literal dissertation on forming seed accelerator programs, and a follow up to that. Jed Christiansen provides a thoughtful treatment of the subject with sound advice, but the primary focus is how to make an incubator appealing to the entrepreneur, while follow-on funding is mentioned in passing as a complication. Venture funding is a pull system; there have to be explicit signals to pull and someone to do it. (If you don’t know what a pull system is, see me after class.)


Goodbye 2011, I barely knew you

2011 was a blur.

I managed to fly 114K miles and spend time on 3 continents.

In the middle, we moved a family of 4 from Salt Lake City to Pittsburgh and managed to survive the first half of my wife’s first year of medical residency. I’ll spare the gore, but it isn’t for the faint of heart.

My 2011 was filled with a mix of cloud, OpenStack, devops, startups, Agile and business lessons. There were triumphs and some disappointments, but I am grateful for and humbled by all the amazing things I get to see and the people I get to work with.

I spoke at some conferences, helped organize a few events, wrote a bit of code, designed architectures, and outlined strategy, but the details I really cherish are the good friends, good conversations and good food. I’ll assume most of you know who you are, but there were plenty of people I missed in 2011, and I hope 2012 provides more opportunities in that regard.

2011 is deployed.

The arrival of 2012 brings both excitement and trepidation. Big things are already underway, but I don’t want to jinx anything. Here’s to good friends, good conversations and good food.

2012, Bring it.


Learning Machines and the Future of Academics

Institutions will try to preserve the problem to which they are the solution.

– Shirky Principle

Learning, How does it work?

There has been progress and evolution, but the roots of our academic institutions are essentially medieval. For all the progress that has been made, for a variety of technical and social reasons, the whole system is largely hierarchical and based on lineage. Expertise was always a scarce resource, and the time and investment to transfer expertise required physical proximity. While we have passed the stage where exposure to Latin and Greek served as the filter to participation, on several levels there is still a strong bias that filters on context and circumstance. Subtle and sublimated as that bias might be, these filters may be least obvious to those who benefit most and have the power to change anything. Consequently, we have not yet fully leveraged available human potential. The present is not evenly distributed.

I did the advanced track of the Artificial Intelligence and Machine Learning classes from Stanford in the last 10 weeks and wanted to share a few thoughts. Technology and efforts like these have the potential to change everything about how people learn.

Information wants to be free. The marginal cost of broadcasting the highest quality lectures from the best teachers on the planet is trending to zero. That is changing everything. Stanford is changing it. MIT is changing it. Khan Academy is changing it. Know It, Busuu, and probably a long list of education startups I don’t even know about are going to be changing it. There is a good chance that this transition disrupts the university system as we now know it. In every sense of the word disrupt.

The two Stanford classes had a slight overlap in topic, but they were qualitatively very different. There are plenty of reviews about the classes already. What I’m interested in is slightly meta.

How do people learn? What is the incentive? What is a measure of progress? And what can they do with the things they learn?

In particular, what is the most effective path to someone being productive in a deeply technical skill?

What Possibility…

Now, back to the Stanford classes. The contrast between the two approaches provoked some thoughts.

Sebastian Thrun started out by stating the purpose of the AI class is 1) to teach you the basics of artificial intelligence and 2) to excite you. They definitely delivered on that purpose. Sebastian and Peter Norvig split time covering an introduction to AI. The format was video lectures with embedded questions at the end of most videos. The format was the same for the lectures, the homework, the mid-term and the final. Watch the video, answer the questions. Done.

The ML class used a different format. This system was also video lectures. Andrew Ng’s presentation in the video medium felt natural and flowing. This class didn’t cover as many topics but almost every topic came with a programming assignment. Questions in the lectures were not graded, but there were weekly review questions and the programming assignment. You were allowed to resubmit the review or the assignment multiple times with no penalty, so you were graded, but getting 100% was really a measure of persistence. (Andrew seemed excited to be teaching people. The thank you he gave in the concluding lecture was so heartfelt, I wanted to give him a hug. Andrew made me feel like it was a true honor for him to teach this. The honor was all mine.)

At the end of AI, you had learned some things from watching videos and got graded for submitting a bunch of forms; at the end of ML, you had learned some things from watching videos and had the opportunity to have working code to train neural networks, support vector machines, k-means clusters, collaborative filtering, etc. On the one hand you have people tweeting their scores; on the other you have people BUILDING SELF-DRIVING CARS!
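
For a sense of what those programming assignments exercised (the ML class itself used Octave; this is only a rough equivalent sketched in Python/NumPy, not the course code), a bare-bones k-means loop fits in a few lines:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Minimal k-means: alternately assign points to the nearest centroid
        and move each centroid to the mean of its assigned points."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # distance from every point to every centroid, then nearest centroid
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            new_centroids = np.array([
                X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

    # Toy usage: two well-separated 2-D clusters.
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    centroids, labels = kmeans(X, k=2)
    print(centroids)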

By three methods we may learn wisdom: first, by reflection, which is noblest; second, by imitation, which is easiest; and third, by experience, which is the most bitter.

– Confucius

Take The Next Step

Which brings me to the point I really want to make. What is an education? What are academics? The pursuit of knowledge and understanding? These things people are doing and building to help people learn are amazing and inspiring, but that’s only one part of the equation… the dissemination of knowledge, understanding and skill. What about creation?

Scientific journals, which at one point served as a filter of quality and a point of aggregation, now act as a barrier to access. If the internet does anything, it disintermediates. The current system of publishing slows and prevents access to information. The ‘publish or perish’ tenure and research grant funding process also creates disincentives to open collaboration. I imagine a future where collaboration in research is open and transparent. Experiments aren’t done in secret and partially explained in publications; instead, all the methods and results are shared and updated in real time. Like a GitHub for science. If I can’t replicate results, I open an issue. If I find an interesting pattern or insight, I open a pull request. Everyone can see everything, streams of open data. This has to scare the living hell out of some people. There is a lot of time, money and personal identity tied up in the current system, but its essential inefficiencies are not beneficial or necessary.

(Aside: Resistance to this is not unlike what we are witnessing with the entertainment/media/copyright lobby that resulted in the SOPA legislation, where entrenched institutions attempt to prolong the last gasp of disrupted models of creating and capturing value. That resistance won’t fix outmoded approaches to servicing markets that no longer exist, it can only stunt the growth of emerging models. Piracy is a distraction. People always made copies and traded media, just the medium has changed. People have also never had a problem trading for something they value. People love to buy stuff they love. Compete in the market. Embrace the opportunities.)

Finally, there would be a benefit to more permeability between academics and industry. There are literally billions, maybe even trillions, of dollars worth of technology shelved in universities. Industry loses the opportunity to better utilize research and expertise, while academics often lose touch with the reality of practice in the wild. We all lose on the prospect of more abundant prosperity. In most cases there are risks and implied disincentives to transitioning between the two disjoint worlds, which in some sense don’t even respect each other’s reality. If the system facilitated a properly incentivized flow of people and information in both directions, I can’t help but believe both would be better off.

The open questions now are how quickly the transitions happen and to what extent those personally attached to the status quo resist. Same story, different stage.

tl;dr We live in amazing times. You can either understand how to build self-driving cars or you can’t. You will either help others do it, or you won’t. Get ready for the next level or, better, help make it happen. Special thanks to Stanford, Andrew Ng, Sebastian Thrun and Peter Norvig for their contributions to the future.


Dear Google, Connect the Dots

I have some little ideas.

Step 1: Be Social
Step 2: ????
Step 3: Profit

I won’t pretend I’m privy to what GOOG is moving towards internally, but this is what I would be doing if I were them.

First, social is predicated on one thing: identity. Facebook is the clear leader, then Twitter. Google should make it super simple and a value add to integrate signing in with any Gmail or Google Apps account. The next level of this game is about identity and context. It’s not too late to win that.

Whoever dominates identity will be in a great position. Someone will. Might as well be you, no? Get on it.

Also, user experience matters. You left authentication and authorization for Google Apps a discombobulated mess for how long? You know who I am, you already let me read my email, so why choke on the other accounts I’m logged into when I try to open a document? It really is the little things.

Here’s where it really gets good. Make the web social. Don’t make a service that is social, make the whole thing social. Seriously.

The first run GOOG had was making the web readable. Now make it writable. There is no reason that every site has to be a destination. Let us mark it up. Leave notes for each other. Chase each other. Gamification. Come on GOOG, let’s make this thing fun again. Let us overlay selective layers of context on the web. You can feed this data and context back into search. Win and win.

Don’t stop there, make the world searchable and writable. Break it out of the web.

You have mobile devices, you have identity, hook all that together.

Make the real time web, the real time everything.

Let’s start with an easy one. Location. Now ignore the fact that you bought Dodgeball, smothered the baby, and now someone is going to buy Foursquare from the same guy for even more. Spilt milk under the bridge… Location is context. Push-based location is a dead end, but it was a novelty that opened up some possibilities. Location needs to be more of an ambient context. Solve identity, you have Android all over the place, find the balance between privacy and role-based ambient location.

Let people filter on both sides, I only want certain people to see certain locations and I only want to see certain people in certain circumstances. Make it dynamic and easy to configure. People will love it.

Let people create contexts, and engage each other around contexts. This is what humans do. This is what social means.

Did I mention someone needs to solve identity?

That should be enough to get started.


Free, as in Fear

I have plenty of better things to do than blog right this moment, but having been in the midst of events that generated an inordinate amount of FUD, I just couldn’t contain myself.

NEWSFLASH: AN OPEN SOURCE CLOUD FRAMEWORK ACCEPTS CODE

OpenStack Nova has not stopped supporting anything. Rackspace hasn’t done anything but start contributing code to support their APIs. Considering that Rackspace is actually spending real money to support the project, that seems entirely reasonable. (PROTIP: If a community is going to use an Open Source project to interact with your product, you might want to consider a strategic investment in making that experience a good one. Just Sayin’)

THERE WAS NO STATEMENT, NO CODE, NOTHING WHATSOEVER THAT INDICATED OPENSTACK WOULD REMOVE SUPPORT FOR AWS APIS.

This whole thing started when James Watters (@wattersjames, last seen preparing himself for VMWorld with a platter of carnitas) asked the people at the Silicon Valley Cloud Computing meetup on OpenStack to settle a bet he had with Simon Wardley (@swardley) about the AWS APIs. In passing, it was noted that Nova would support the Rackspace API, but at no time was there any indication that the project would remove the existing AWS support.

@wattersjames proceeded to banter with @swardley in the twitterz, hilarity ensued, and now we are here. (+1 evil genius points to James)

OMFGWTFBBQ, Rackspace is going to implement their APIs in an open source project that they have devoted resources towards. The Clouds are falling.

Aside from people just getting their facts straight because it is the responsible thing to do, OpenStack is far and away one of the most open ‘open source’ projects by design, and definitely when compared with other open source cloud frameworks.

There is a reason there are ~150 people in #openstack on IRC. There is a reason people are submitting patches.

This isn’t because of Rackspace. This is because of how the community has been engaged and the promise of a truly open cloud framework.

There are two other things worth noting for people who haven’t followed this story and can’t be bothered to get the facts straight. First, there are other entities involved in OpenStack, not the least of which is NASA. Maybe you have heard of NASA? I don’t think NASA is in this to be beholden to Rackspace. OpenStack will evolve in the direction that is a combination of the collective utility of the community and whoever chooses to actually contribute code. Which brings me to the second point: code wins. If you think something should work a certain way, prove it with code.

Next time someone wants to talk about changes in an open source project like OpenStack, please include the revisions and/or patches.

The whole cloud API discussion is ridiculous anyway. All the APIs should be different in a few years. If we are really moving the cloud thing forward, they will be.

Seriously, is there anything left to debate?

Go build something.


Meatcloud Manifesto – The gauntlet is thrown…

No good deed goes unpunished...

This is the first year I won’t attend both Velocity Conf and Structure.

I would have gone to Structure if it didn’t overlap with Velocity, but given the choice, I’ll go with the people that build things.

Both conferences have an overlapping theme of the infrastructure renaissance.

So in the spirit of friendly competition, @ShanleyKane and I are going to see who can get the most people to sign the ‘Meatcloud Manifesto’, take a picture with it, post it to Flickr (or the photo sharing site of their choosing), and tweet a link with the tag of the conference you are at (#structure10 or #velocityconf) plus #meatcloud just for good measure.

It should look something like this:

We hold these truths to be self-evident

Or this:

So if you are a builder of things, and you love some APIs, show your support for Velocity Conf and sign the manifesto.

You are either with us or against us.

Meatcloud manifest destiny, for real this time…


New Beginning: Cloudscaling

Cloud Rising

Every new beginning comes from some other beginning's end. --Seneca

Some of you already know this, but last week I joined the Cloudscaling team full-time.

Cloudscaling helps organizations transition towards the application-centric operations models that analysts/the blogosphere/random people can’t seem to define well (or quit arguing about) but refer to in generalizations as ‘cloud computing’.

We have a few interesting developments coming and a couple big projects we can’t quite speak freely about yet, but we provide strategic consulting and implementation assistance, especially for large organizations looking to invest in internal IaaS resources or to differentiate themselves as public IaaS providers.

So far, I’ve been getting up to speed on our projects and the tools, in addition to learning some things that I’ve typically been somewhat removed from, like layer 2 networks and other details most developers (and even many sysadmins) take for granted in their day to day.

The bottom line is Cloudscaling is working on pushing the boundaries of ‘Infrastructure is Code’. We can agnostically evaluate and implement solutions using the best tools and track the evolution of the space. We have a team with both breadth and depth up and down the technology, from the datacenter to virtualization, from hardware to APIs.

I’m really excited to be part of the team (although there are some great people not on that page, like Lew Tucker, ex-Sun Cloud CTO, who just joined our board of advisors), and I’m expecting big things and a great year from us.

Look for some systems management and cloud-related thoughts from me on the Cloudscaling blog.


Cloud Standards Considered Harmful

The nice thing about standards is that there are so many of them to choose from.
– Andrew S. Tanenbaum

standard -noun

  1. something considered by an authority or by general consent as a basis of comparison; an approved model.
  2. an object that is regarded as the usual or most common size or form of its kind: We stock the deluxe models as well as the standards.
  3. a rule or principle that is used as a basis for judgment: They tried to establish standards for a new philosophical approach.
  4. an average or normal requirement, quality, quantity, level, grade, etc.: His work this week hasn’t been up to his usual standard.

SQL was first developed at IBM in the early 70s.

Many of the first database management systems were accessed through pointer operations and a user usually had to know the physical structure in order to construct sensible queries. These systems were inflexible and adding new applications or reorganizing the data was complex and difficult.

ANSI adopted SQL as a standard in 1986, after a decade of competing commercial products using SQL as the query language.

SQL became ‘the standard’ because it was open, straightforward, relatively simple and helped solve real problems.

TCP/IP emerged as the standard after a proliferation of competitive networking technology for largely the same reasons.

(another interesting story of emergent standards is POSIX, but apparently no one posts about it in any detail online, and you can only read about it if you are willing to part with $19… you know, the marginal cost of producing a PDF and all.)

People often compare cloud computing to a utility like electricity, one big happy grid of computational resources. Often those same people champion the call for ‘standards’, which makes me wonder if they have traveled much.

The call for standards is usually trumpeted with a need for ‘interoperability’ and avoiding lock-in. We all know how well SQL standards prevent vendor lock-in for databases.

In discussing the evolution of standards with @benjaminblack, I pointed out that TCP/IP was more ‘standardized’ than SQL. His perspicacious response noted that with TCP/IP ‘if you don’t interop you are useless’ and ‘if databases had to talk to each other, they’d interop, too’.

Interoperability arising from a standard is a lie. The order is wrong. Interoperability comes because everyone adopts the same thing, which becomes the standard. Don’t confuse a ‘specification’ with a ‘standard’. SQL became the de facto standard long before it was ‘officially’ a standard. SQL implementations will never be fully interoperable, and truth be told there are often real advantages in proprietary extensions to that standard. TCP/IP became the de facto network standard and interoperable because that’s just the natural order of things. Interoperability will happen because it must, or else it won’t. Interop cannot come from a committee.

Interoperability is even more of a lie when it comes to cloud computing. If we are talking about IaaS (infrastructure as a service), then the compute abstractions for starting, stopping, and querying instances are almost trivial compared to the work of configuring and coordinating instances to do something useful. Sysadmin as a Service isn’t part of the standards. This is so trivial that you can find open source implementations that abstract the abstractions to a single interface. (Seriously, libcloud is just over 4K lines of Python to abstract a dozen different clouds. At this point, supporting a new cloud with a halfway decent API is a day or two at most.) The storage abstractions are in their infancy, and networking abstractions are nearly non-existent in the context of what people consider cloud infrastructure. The APIs and formats are a distraction from the real cloud lock-in, which is all the data. You want to move to a new cloud? How fast can you move terabytes between them? Petabytes?
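
As a rough illustration of how thin that compute abstraction is, here is a short sketch against libcloud’s driver interface; the credentials are placeholders, and the exact provider constants and node attributes vary a bit by libcloud version.

    # The same handful of calls cover starting, stopping, and querying
    # instances regardless of which provider driver is loaded.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Placeholder credentials; swapping Provider.EC2 for another supported
    # constant leaves the rest of the code unchanged.
    Driver = get_driver(Provider.EC2)
    conn = Driver("ACCESS_KEY", "SECRET_KEY")

    for node in conn.list_nodes():                 # query running instances
        print(node.name, node.state, node.public_ips)

    image = conn.list_images()[0]                  # pick an image and size
    size = conn.list_sizes()[0]
    node = conn.create_node(name="demo", image=image, size=size)   # start
    conn.destroy_node(node)                        # stop/terminate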

Which brings me to PaaS (platform as a service), otherwise known as ‘locked in’ as a service. PaaS has all the same data issues, but without any common abstractions whatsoever. I mean sure, you could theoretically move a non-trivial JRuby rails app from Google App Engine to Heroku, but let’s be honest, sticking your face in a ceiling fan would probably be more fun and leave less scarring. That’s an example that is possible in theory, but in most cases, PaaS migration will mean a total rewrite.

Finally, SaaS (software as a service), which I love and use all the time, but I can’t convince myself that every web app is cloud computing. (Sorry, I just can’t.) Again, data is the lock in, please expose reasonable APIs, but standards don’t make any sense.

Committee-driven specifications get some adoption because most people like it when someone else will stand up and take responsibility for leading the way to salvation. CORBA and WS-* aren’t the worst ideas ever (I give that prize to Enterprise Java Beans), but they aren’t always simple or straightforward in comparison to other solutions. Adopting an official standard is good for three things: first, providing some semblance of interoperability; second, stifling innovation; and finally, giving power to a standards body. For cloud computing, a standard in the name of interoperability is essentially solving a non-problem and calcifying interfaces prematurely.

Frankly, I’d rather double down on more innovation. Standards will emerge.

You want to make a cloud standard? Implement the cloud platform everyone uses because it is simple, open and solves real problems.

(Thanks to Ben Black for his feedback and for telling the same story a different way last year.)