This is Part 1 of a planned three-part series that traces the evolution of the CloudIQ Platform from first idea to what it is today, then considers what it is likely to become.
Right after graduate school we spent a few years living in the foothills of a desert mountain range. I remember the first time that I hiked a narrow trail to the top of the nearest peak – standing at the bottom was rather intimidating; the circuitous ascent itself was such a tangled mixture of switchbacks, short ascents and descents, under cover and open trail that it could only be described as non-obvious (at best). However, it was only upon achieving the original goal that we gained any perspective on how, in fact, we had gotten there.
In this post I will reflect on the evolution of CloudIQ – the truly exciting (if I must say so myself!) cloud application platform that we announced a couple of weeks ago.
As some pondered the impending Y2K “crisis” and others looked for the best millennium parties, most of our founding team was deeply enmeshed in building, selling, and supporting an enterprise-grade (scalable, reliable, etc.) payment server.
Upon leaving that company I had time to reflect on my enduring frustrations -
why was it so hard to build software that we (and our customers) could rely upon?
It seemed that we were spending 60 to 70% of our engineering effort not on core functionality, but on our best attempt to ensure that the resulting application could be relied upon.
Later on, an early supporter coined the term “reliability tax” for this overhead.
As I asked friends at other companies and enterprise shops, most recognized the same problem – a few argued the overhead was actually higher, and most thought the cruel irony was that it was still very, very difficult to ensure true reliability for enterprise apps – but all agreed that this just didn’t seem right, not nearly 50 years after Grace Hopper did her most famous work.
With this question really bugging me, I had an opportunity to build the beginnings of a digital recording studio. Using completely commodity gear – no-name, cheap – I was genuinely shocked at the results. Serious performance, cheap.
So that led to the second question -
why weren’t we using commodity stuff like this for problems that we really cared about?
The answer to this seemed easy enough – who could trust this cheap stuff? What if it broke? (and it would break).
In pondering the first question it seemed to us that the core problem for software development was one of complexity – mainstream application architectures were simply too complex, and becoming inexorably more so.
The First Idea
Then it became fairly clear – we could solve both of these problems at the same time by enabling groups of commodity boxes to work together to ensure a stable platform for applications.
But what exactly did that mean? Or more to the point, could we build it? Ever MORE to the point, once we built it, how could people use it, and for what applications would this new thing be useful?
Over the course of a few months the founding team hammered out the first answers to those questions. Throughout this process we were driven by use cases – we wanted the resulting platform to be equally adept at running anything from fine grained, transactional applications to more computationally-intense enterprise applications.
This led to much refinement of the basic idea, which evolved into a self-organizing group of commodity machines that could act like one thing, reliably execute all sorts of applications, grow (and shrink) as needed without affecting any running applications, and be very simple both to write applications for and to operate.
The Hive is Alive
We decided to call this hive computing, and on June 12, 2002 we had the first successful demonstration of a running hive. We assembled a few commodity boxes on a re-purposed kitchen rack, loaded the prototype hive software, and … it worked!
We were able to (carefully) pull a few plugs and the application kept running without missing a beat – in fact, without even losing a bit of data.
Within two years we had our first paying customer (Sprint), a couple of patents filed, and a demonstration system on which we ran an eye-opening benchmark – a wall of 100 commodity computers that could legitimately double Visa’s then-current peak transaction load, for a total bill that was well under 10% of the conventional alternative.
The best part? It was arguably far more reliable as well. We were constantly amazed by the resilience and ease of use of this new type of application platform … though truth be told, we were not yet ready to use the “P” word.
In our pursuit of the possible we (the founding team) sometimes thought that economics would do all the persuading for us. Well that turned out to be sometimes yes, but mostly no.
In fact, sometimes economics actually worked against us – the combination of 90% lower costs, simpler development, easier operations, and simultaneously increased reliability and scalability simply seemed too good to be true for many people.
The fact that we required some modification of the application also made adoption more complicated. While we supported several languages and multiple operating systems (and could easily support more of each), the plain, simple truth was that you did need to modify – albeit lightly – many components in each and every application.
This raised the adoption barrier a bit higher.
Then there was the little matter of language. Without a native category to call our own, we were put into all sorts of categories – everything from grid computing to autonomic computing, with several others in between.
Early on I even told some folks that we were basically building the “Borg for applications”. While hard-core geeks loved that (and usually laughed), it didn’t exactly help us build trust with the typically non-technical executives responsible for making the final purchase decisions.
Yet, It Worked … Well
Despite these go-to-market difficulties the product itself worked well – really, really well. In fact, by mid-2002 several of us became convinced beyond a shadow of a doubt that there would come a time – say 10 or 15 years out – when most mainstream computing would be done this way.
The economics and functional advantages were simply too compelling for any other outcome.
The only real question in our minds was when and who – when this transition would begin to occur and who would help make that transition happen.
So as 2004 came to a close we pondered solutions to these issues and continued to press rapidly forward.
In Part 2 we will talk about why this worked so well, and the transition to the application fabric.
A while back my colleague Sam organized a “Cloud Forecast 2009” podcast, which was a lot of fun. Enough fun that we’re investing a bit in new audio gear and will turn out some more installments … we’ll keep you posted.
In any case, the occasion of that recording got me to thinking a bit. With the Christmas and New Year’s holidays now well behind us, and the de facto Super Bowl holiday looming (I was really glad to see Kurt Warner earn another shot at the title – the 2000 Super Bowl win by the Rams remains one of my favorite sports memories), I finally realized my theme for 2009, one that had been gnawing at me for some time and is now very clear:
I am excited.
I am excited about cloud computing, including progress in everything from virtualization to real-big-public-clouds with cool new storage facilities, from billion-core processors to dirt-cheap multi-terabyte drives … honestly, who can’t get excited about a terabyte for less than $100?
Even the debate over the shape, color, texture, and full extent of cloud computing is energizing. Of course, my favorite part of the debate is that it’ll be sorted out in the marketplace, and I like our chances … after all, does anyone need cloud-enabled applications?
I think so.
I am particularly excited about the level of market awareness amongst all sorts of folks that we’re talking with, about real customers doing real stuff with real clouds – public, private, and a mixture of both.
Agree with me or not, the development of private clouds is a great boon to enterprises and government agencies everywhere, and looks to be terrific for us in 2009.
Closer to home I am excited about some really cool stuff that we’re building into our products here at Appistry. Great stuff, and can’t wait until we can start talking more …
Heck, I am even excited about my new mouse! (Every once in a while Microsoft does something really great, and this is one of those times.)
There are a lot of individual reasons to be excited, true enough. Yet the real reason to be excited runs far deeper.
After years and years and years of tons of folks making progress in all things computing – slavishly building bigger and better nets; faster and cheaper processors; beaucoup rotgut-cheap drives; fundamentally new business models that enable some serious capital-expenditure avoidance; cloud-friendly ways to build scalable, reliable apps; and much, much more – I think that we’re finally here.
We’ve reached critical mass.
Critical mass as in all of computing is going to go Klein-bottle on us, turning inside out in many dimensions, imitating the caterpillar and coming out as something fundamentally new on the other side.
All of the right factors have come together … economic, technological, conceptual … probably even cultural.
Time to help create the Fourth Age of Computing, time to see just what is possible.
Like I said, this is a great time to be in the computing biz.
Butterfly image courtesy of The Butterfly House, a really cool contrast to the prevailing mainstream of a Midwestern winter … plus it’s close to where I live. If you’re ever in St. Louis, please go enjoy the Butterfly House. If I see you, I’ll buy you a beer!
I remember the first few days of what became Appistry very well. Not simply well as in “cool we’re starting something that’ll change the world” well, but really, really well as in “day 2 of our awesome new venture was … 9/11”.
Not just some 9/11, the 9/11.
So as our team watched that morning in the same horror and disbelief that is etched so indelibly in our collective cortex, we were faced with a very practical question – what do we do now?
Sure, we had a pretty strong hunch that we could actually make the world of commodity infrastructure safe, easy, and cheap for enterprise software – probably even creating the most reliable computing infrastructure on the planet. And sure, we had been able to raise seed funds fairly easily. But that was yesterday, even early this morning … what about today?
What about today, in a world where the markets weren’t just jittery, they were closed. Banks weren’t being sold, they were closed. Air travel wasn’t expensive and late, it was closed.
I even had a check for more than $200,000 in my pocket from an eager investor, except there was only one problem – the entire US financial system was on indefinite holiday.
Would we ever be able to raise another nickel? Would anyone ever care about what we were about to build?
A Few Initial Steps
As we slowly awoke to that new reality, with so many unanswerable questions, we took care of a few first things first. A few of us drove my brother down to Marine Corps 4th Division HQ in New Orleans, where he reported for the first of three post-9/11 tours – all combat, from mine clearing in Afghanistan to quelling instability in Fallujah – with a wife and seven kids at home (a cool story, but really best saved for another day).
We finished building our commodity development systems (hey, we’re seriously hardcore about making sure that we can use the simplest, least expensive computers for anything), screwed together our desks, and went about all of the other time-honored rituals of the New Venture that are so comforting, at least to the entrepreneurus serialius.
A little slower than usual at first, sure … but we gradually picked up steam. Before we knew it we had answered that initial question about what to do next – we built our new technology, built our new company, made the vision real. In short, we kept that original goal, the same audacious vision.
Sure, we did some things differently than we’d planned – we had a little longer to incubate the fundamental technologies, and we made even better use of funds than we’d ever thought possible – but the bottom line is that we still moved forward, still strove relentlessly to make that vision real.
And it worked well. Really, really well.
Which brings us to the interesting times in which we find ourselves now – financial commotions in every corner, gloom and doom on the street corner for free.
Should we grab everything we hold dear and run for the hills? Just give up and go back to school, waiting for some sort of “better times”?
Well that’s certainly one course, but I think there is a better way – a far better way. Let’s start by thinking about how the average enterprise (other than say, the unhappy folks at Uber-Leveraged-Bad-Debt LLC) is likely to respond to this
disappointing news chaos.
The Big Picture
There are two macro-level trends at this time. First, there will be a natural tendency to become more operationally focused, to think less about some entirely new capability and more about doing what you do today better. Historically this is at a maximum level in the earliest portions of a downturn, when organizations are still coming to grips with the life in which they are now immersed.
Like a boxer who’s just taken an unexpected gut shot, most organizations will step back, shake their heads and begin to think about how best to (tentatively) take the next few steps.
Once they wake up from the shock, the fog clears and they realize – “Oh right … I’m still here and there’s lots to do” – the second macro trend starts to become apparent. In particular, each organization will begin to think clearly about the choices before it and how best to move confidently forward.
It is within this trend that companies will begin to move beyond coping with the problem and start to think very hard about how to drive cost out of their business while increasing core capabilities. And they’ll be looking to do this at as close to rock bottom prices as possible.
This kind of economic stress can actually become a great equalizer, at least in the sense that the rules are changing for everybody and so much is up for grabs. Imagine officials stopping a World Cup match or a baseball playoff game and ordering every player to wear a 75 kg backpack for the remainder. On top of that, they decide that remaining games will be played in formal attire and kick out a few of the remaining teams for good measure.
The result would be chaotic, to say the least.
While all analogies have their limitations (and this might have more than most!), you get the idea – those who can adapt to the new rules fastest and first are going to win. Even more to the point, in the real life uncertainty in which we all find ourselves, some of the newer technologies are crucial to the quest.
In a few days I’ll post Meltdown Part 2, which is focused on what an enterprise can do with technology to thrive in these times.
… or a small adventure with the laws of physics and the vagaries of press releases.
I have been paying a lot more attention to the world of things that fly around the Earth since we have been working with the great team of folks over at GeoEye these past few years.
Before I go any further, I want to throw out my high-fives and share in the elation over the successful launch of GeoEye-1 this past Saturday.
This is a great event all around, and I think a harbinger of a whole new class of imagery-enabled, imagery-consuming services.
I’m working on a couple of posts that will explore the confluence of these sorts of uber-pipes of fresh, interesting data with our sort of arbitrarily scalable, reliable cloud-based apps. So many possibilities … but, as happens so often in my life, I first went off on a slight detour …
While thinking about these posts this little item caught my eye:
Ok, this all looks pretty cool – a new crack at using satellites to make it easier to get the ‘net out to remote parts of the world.
Their distribution model even looks pretty smart (wholesaling to ISPs), and their bandwidth admirable (a claimed 10 Gbps – cool!).
But then I got down to this statement:
O3b Networks uses parabolic antennas, which reduce latency.
There’s nothin’ the antenna shape can do to change the fact that at the speed of light it still takes about 246 msec (about 1/4 of a second) to traverse the 23,000 miles UP and then the 23,000 miles DOWN to / from the satellite.
But It Goes to 11?
So I thought maybe this was just something lost in the translation to press release. So I went to their site, and found this statement:
O3b Networks’ system virtually eliminates the delay of standard GEO satellites by reducing the round-trip transmission time from over ½ second to just 1/10 of a second. The reduced round-trip delay creates a web experience closer to terrestrial systems such as DSL or Optical Fiber.
Ok, I had to admire their persistence. So, either they had actually figured out how to jump past the speed of light, or maybe the satellites were a lot closer to the ground.
From a story in NetworkWorld today:
… O3b will be able to offer the same capacity for $500 or less by using different, cheaper medium-earth orbit (MEO) satellites.
Geosatellites orbit the earth at an altitude of 22,500 miles, while MEO satellites are around 5,000 miles. The latency, or the time it takes for a signal to make a loop between earth and the satellite, can be upwards of 600 milliseconds for a geosatellite because it is further out. For a MEO, latency is around 120 milliseconds, close to that of a fiber network
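For the curious, the speed-of-light arithmetic behind those figures is easy to check. Here’s a quick sketch (plain Python, using the altitudes quoted in the story above; it ignores ground-station processing and assumes the satellite is directly overhead, so real-world numbers run a bit higher):

```python
# Propagation delay for a signal traveling up to a satellite and back down.
SPEED_OF_LIGHT_KM_S = 299_792.458
KM_PER_MILE = 1.609344

def up_and_down_ms(altitude_miles: float) -> float:
    """One trip up to the satellite and back down, in milliseconds."""
    distance_km = 2 * altitude_miles * KM_PER_MILE
    return distance_km / SPEED_OF_LIGHT_KM_S * 1000

geo = up_and_down_ms(22_500)  # GEO altitude from the article
meo = up_and_down_ms(5_000)   # MEO altitude from the article

# A full request/response round trip makes the hop twice.
print(f"GEO: {geo:.0f} ms per hop, ~{2 * geo:.0f} ms round trip")
print(f"MEO: {meo:.0f} ms per hop, ~{2 * meo:.0f} ms round trip")
```

That works out to roughly 242 ms per hop for GEO versus about 54 ms for MEO – which is why the altitude, and not the antenna shape, is what actually moves the latency needle.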
Darn, that actually makes sense … well, back to work on figuring out how to go faster than the speed of light …
Actually, I meant to say back to work on uber-scalable, cloud-based apps that deliver on the promise of cool new stuff like GeoEye-1!
The local (St. Louis area) pitch-fest that we tried last Friday turned out to be a great success – lots of fun, some interesting ideas, and perhaps the best part was the beginning of an open, entrepreneur-oriented community event.
This is a photo of all of those who pitched their ideas – I think we had nearly 20 groups.
Btw, at the top of the list for doing better next time is bringing a legitimate camera … but the iPhone will have to do for this one.
I’m glad to see this get started. While there’s no doubt that net-based communities are the lifeblood of our business, there’s definitely real value to everyone to have a broader, deeper tech startup community within a beer’s reach … or at least a short drive!
Congrats to everyone who helped pull this together, including Jim Brasunas (ITEN), Alex Miller (Terracotta), Tom Nierman, our very own Kevin Haar and Jean Roberson and the rest of the mentors, the folks at BusyEvents (check out their photostream), and anyone else who gave us a hand.
I’m definitely looking forward to the next one.
Tonight’s St. Louis pitch event is shaping up to be a lot of fun, and has now grown to include three parts. Even if you haven’t signed up, come on down and join in the fun.
Pitch. If you’re willing to stand up and give a very short pitch (max 3 minutes), show up by 4.
Watch. If you want to watch the best of the pitches, but don’t have your own, show up at 5.
Hang Out. If you just want to hang out and meet other folks who just love starting stuff, then we’re going to be at the Schlafly Tap Room at about 7:30 or so.
We’re open to tweaking the format, of course … the main point is to bring together the startup community in and around the St. Louis area. There’s a lot of good stuff going on around the area already, and I’m excited about the prospects for more to come.
See you tonight!
The desktop of the future is going to be a hosted web service
The Browser is Going to Swallow Up the Desktop
The focus of the desktop will shift from information to attention
Users are going to shift from acting as librarians to acting as daytraders.
The Webtop will be more social and will leverage and integrate collective intelligence
The desktop of the future is going to have powerful semantic search and social search capabilities built-in
Interactive shared spaces will replace folders
Sorting through a couple of Capt. Obvious points (“The Webtop will be more social” … you’re kidding me?!?!?), he does make a few more interesting points. For example,
The focus of the desktop will shift from information to attention
is a really good point. Sure it’s just the latest way to say “drinking from a firehose”, yet it at least cleanly articulates what we all deal with daily, at levels that when we step back and think about it are nearly incomprehensible … with much more to come.
A Bit of Wishful Thinking
Yet from my perspective at least a couple of the points just fall into true blue believer wishful thinking … as in “But it JUST HAS to happen this way … doesn’t it?”
Ummmm … no.
Let’s pick one to illustrate:
The Browser is Going to Swallow Up the Desktop
That meme has been going around for quite a while. Probably the most famous, recent, and all-around hard-to-escape incarnation of that philosophy is the iPhone. So let’s take a look there and see what we can learn.
Great browser? Check.
Uber-outstanding display? Check.
Tons of mindshare with a maximum mind-control field targeted at making everyone believe that browser apps constituted everything anyone would ever need? Check.
Fast network? Check.
Ubiquitous? Check. Check. Check.
On the Road to Web-App Total Domination
Well, we all know what happened within six months of this strategy … Google unveiled Android, and Apple had no choice but to open up their platform (ok, not really very open … but at least non-Apple employees can sort of write apps!).
The marketplace is voting at a furious pace, with more than 60,000,000 apps downloaded in the first month. Yes, some folks extrapolate from their own first-month experience and say that all of this will die down soon, to be replaced by the browser alone.
Yet I just don’t see it.
The reality is that simple physics (bandwidth is NOT the same thing as latency) still dictates local responses for highly interactive tasks. No doubt much of that will be (and already is, of course) done in browser apps.
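A back-of-the-envelope sketch makes the bandwidth-vs-latency point concrete (the payload size, round-trip time, and bandwidth figures below are illustrative assumptions, not measurements): for the small payloads typical of interactive UI actions, total response time is dominated by the round trip, and no amount of extra bandwidth changes that floor.

```python
def response_time_ms(payload_kb: float, rtt_ms: float, bandwidth_mbps: float) -> float:
    """Rough time to fetch a payload: one round trip plus transfer time.
    Ignores handshakes, queuing, etc. -- illustrative only."""
    transfer_ms = payload_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return rtt_ms + transfer_ms

# A hypothetical 10 KB UI update over a 100 ms round trip:
slow_pipe = response_time_ms(10, rtt_ms=100, bandwidth_mbps=1)    # ~180 ms
fat_pipe  = response_time_ms(10, rtt_ms=100, bandwidth_mbps=100)  # ~101 ms

# A 100x fatter pipe barely helps; the round trip is the floor.
print(f"1 Mbps: {slow_pipe:.0f} ms, 100 Mbps: {fat_pipe:.0f} ms")
```

Which is exactly why local code still matters for anything that has to feel instant.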
But to contend that everything will move within the browser is just as unsupportable as saying something like “all development will be done in language <insert your favorite language / framework here>”.
How many times has that prediction been made, in one form or another?
It’s just silly, really.
All the Same
I understand that true blue believers can take exception to everything I’ve said, except for one thing … 60,000,000 apps in one month, on the best mobile web browsing platform ever … with some great tailored web apps (the newest Google Reader really is awesome).
Rather than arguing for what is effectively both the repeal of the laws of physics and universal world peace at precisely the same moment, perhaps it would be more productive to create more effective clients to use all that cloud-based services have to offer …
… and build these inside or outside the browser, as best fits the circumstances.
A quick shout-out to Reuven Cohen for noticing this particular post, which I’d overlooked in a category in my RSS reader that had over 755 unread items in it … today! Relatively ironic, wouldn’t you say?
This time last year mobile devices seemed really stale … it had been years since meaningful change. There were lots of promises and rumors of good stuff to come, yet actual new devices all seemed to be hardly any different at all from the tired ones you held in your hand.
Small improvements here and there, but the big problems remained. Then the iPhone became the first of the new mobile platforms to become more real … and in the past year it has crashed into the mobile markets, becoming the first device to be a real enabler for what innovation in clouds and cloud-based apps are beginning to deliver.
The ability of these new devices to meaningfully participate in general web-based stuff is what sets them apart.
But that was not so obvious a year ago …
Forced to Change
About that time I left my old reliable Treo 700 in the seat pocket in front of me … minutes after landing from a red-eye to Europe, groggy and incoherent, unable to go back and retrieve the phone because of the (well-posted) rules in Lisbon.
This turned out to be about a week before the iPhone intro. I’d promised to all that I wouldn’t just run out and get an iPhone, but what could I do?
What I actually did was buy the then-current Treo (750p) and the iPhone, and run my own mini-bakeoff, with the loser to be returned ignominiously. I’d been a Palm OS user for about seven or eight years (and two billion Palm corporate reincarnations), and had been part of it going from responsive, stable, reliable hand-held goodness to a frustrating, steam-driven handheld time bomb that couldn’t handle threads and regularly crashed during phone calls … ARGGGHHHH.
I tried but couldn’t get over the WM6 UI – yes, it had a stable kernel, but every time I tried a WM device it seemed like the interaction had been governed by the “5th People’s Directorate of Handheld Design”.
Pretty much the same for the Blackberries I’d tried, though they’ve definitely evolved fast and hung on to that addictive email / texting / now-twittering title. Still no substitute for real keys for that stuff.
Yet, in the end my two week trial turned into two days – the Treo went back into hiding, and I hung on to the iPhone.
Why I Picked the iPhone Last Year
In the end it was easy – the iPhone was simply the first handheld that really did a decent job at web stuff. Particularly great for keeping up with RSS feeds, blogs, etc.
It was decent at email, contacts, texting, twittering etc – decent, but not great.
But the ability to take care of a bunch of web tasks more than made the switch compelling for me. The wifi took the sting out of the slow mobile data connection, which helped. Besides, it never crashed during a call … not once. The stability alone was worth something.
Probably best of all, it turned out to be surprisingly durable. I think I dropped it hard about a gazillion times, and it kept working fine. Still does.
What has Bugged Me All Year Long
Lack of cut & paste is just stupid – still is, for that matter – but beyond that it’s been the lack of:
- an SDK
Well, all but the cut & paste are fixed with the iPhone 3G, with some cool bonus stuff as well (gps, ostensibly improved battery life, better initial price).
On the shaky list remains Apple’s irritating reluctance to approve developers and AT&T’s handling of data / voice plans.
I Am A Member Of The Cult Of iPhone (Arrington)
I heard Apple’s new iPhone can sense your thoughts … I heard the new iPhone can change diapers and release a lemony fresh scent … I heard the new iPhone eliminates entropy (tweets by Alex Miller during the intro)
it’s time to shift attention to the most important question about this device: How much money will it make for Apple and its carrier partners? (Om Malik)
look for unit sales of 14 million in calendar 2008 and 24 million in calendar 2009 (Mike Abramsky, RBC Capital)
I think the bottom-line reality is really pretty simple – Apple is going to sell about a billion iPhones, and with the release of the 3G the platform itself is reaching critical mass as a meaningful mobile device.
While it is by no means even vaguely close to perfect, it’s the first of the next-generation devices – devices that for so long seemed stuck in a maddening “coming soon” time warp – to actually become real. And it is very real.
Android could be next, with everyone else fighting it out to either stay relevant or perhaps also gain relevance. Btw, here’s a shout out to CrazyBob and the rest of the Android team for forcing Apple to accelerate the SDK release … looking forward to seeing production Android devices in the mix.
I think the iPhone is the first device to begin to match what’s happening in the web – particularly clouds & cloud-architected apps – enabling fundamentally new stuff, especially with the SDK. Of course, Apple could still screw this up, but I hope not.
Besides, it’s actually not too bad for old-school phone calls!
I’ll admit that when MarkSu first blogged about building a scalable Twitter / Twitter client about a year ago (he kept going with his second, third, and fourth posts) I just started laughing, because like Stephen Forte I just couldn’t figure why anyone would care.
Then I started looking at it closer, drawn primarily out of curiosity about why they kept going whump. Of course, most of the early adopters are fairly vocal, so the commotion over these failures was … well, still is … high-volume.
After using Twitter for a couple of weeks, I think I mostly agree with Alex Miller on why he likes twittering.
I think the “rules of engagement” will morph quite a bit as usage grows, and I do think usage will grow. Only, like a growing number of folks I’m not sure there’s any particular need for a centralized service, particularly one that isn’t stable.
Maybe Twitter will fix its scaling problems (very doable) and discover a revenue model, or maybe the whole idea of micro-blogging will go decentralized, or maybe some combination of both (which gets my vote for most probable outcome right now).
So MarkSu, I have to admit: you were right!
MarkSu also just bought a SmartCar (actually two – one that’s pictured above and obviously already here, one on the way), but I really doubt that he’ll get me to do one of those – but you never know!