This is the first of a series of excerpts from The Executive's Guide to Cloud Computing (Wiley, 2010; available in hardcover and Kindle editions), a book that I recently co-authored with Eric Marks. In particular, this series will focus on the reasons why the transition to cloud computing is simply inevitable. The excerpts themselves are slightly edited to better fit this format. Enjoy!
There have been very few fundamental changes in computing.
On the surface, that may sound like the statement of a madman, or perhaps at least someone from an alternate universe. Nonetheless, it is true.
Sure, there have been, are, and will likely continue to be a nearly incomprehensible fire hose of particular changes, some rather flashy in and of themselves. Simple things like pocket-sized flash drives that store more than the corporate mainframes of 30 years ago, or ubiquitous mobile devices for everything from the mundanely practical—e-mail, calendars, and contacts—to the cheerfully sublime. Much more complex developments such as the open source movement; the advent of relational databases; and the rise (and fall) of whole operating systems and their surrounding ecosystems, even those whose perpetual dominance once seemed assured (how many desktop machines are running CP/M these days?). These have come and gone, perhaps lingering in some niche, forgotten by all but a few fanatical devotees.
But truly fundamental change—the tectonic shift that literally changes our landscape—happens only once in a long while, perhaps every ten or more years, even in the computing business. Fundamental change of this magnitude requires a number of smaller innovations to pile up until a true nexus is reached, and we all start marching down a different road.
Of course, as historians are fond of lecturing the rest of us mere mortals, these sorts of fundamental changes are nearly impossible to recognize while we are in the middle of them, even as they loom imminently.
When researchers at the University of Pennsylvania were feverishly working on ENIAC—generally recognized as the first programmable, general-purpose electronic computer—as the future of the world hung in the balance in the midst of World War II, do you think they envisioned computers embedded in nearly everything, from greeting cards to automobiles, from microwaves to MRIs? When researchers at the University of California, Los Angeles, and elsewhere in the midst of the Cold War strove to make computer networks more resilient in the face of nuclear attack, do you think any of them envisioned the Internet as we see it today? Likewise, when Tim Berners-Lee and other researchers at CERN were trying to come up with an easy way to create and display content over this new, literally nuclear-grade network, do you think they envisioned the impact on everyday life (both personal and professional) their new creation would have, or even the simple breadth and depth of stuff—from the sublime to the silly—that would be available on this new, supercharged “Internet”? One estimate is that there are more than 500 exabytes—that’s 500 billion gigabytes—in this “digital universe,” and that this will double every 18 months.
The simple truth is that very few, if any, of the people involved in these developments had much of an idea of the consequences of their creations—of the impact on our personal lives, our culture, even the society in which we live—from how we interact with our families to how we conduct business.
Whether you are “technologically modest,” or whether by age or temperament you are not ashamed to let it be known, at least in certain circles, that you are a bit of a geek . . . either way, it is pretty much a given that developments in computing are having a big impact on our society, and more to the point, an even bigger impact on how we conduct our business.
And bigger changes—tectonic-shift scale changes—will have at least commensurate impact on our lives in every dimension, including the fields of commerce. One example, perhaps a seemingly simple one, yet central to many of the changes now underway, will suffice to illustrate this point.
An Example for All to See
Consider for a moment newspapers. We now face the very real prospect—actually the near-certainty—of at least one (and probably many) major metropolitan areas in the United States being without a traditional (local, general-purpose, print, widely circulated) newspaper. While this eventuality may be stayed—perhaps for quite some time—via government intervention, the fact that it will eventually occur is not in doubt. In a culture still echoing with such reporteresque icons as Clark Kent, or at least the more prosaic Bernstein and Woodward, this was once unthinkable. Now it is simply inevitable.
There was a time when the technology of newspapers—cheap newsprint (paper), high-volume printing presses, delivery networks including everything from trucks to kids on bicycles—was the only reasonable means for mass distribution of information. In fact, with help from some of the newer technologies, there was even a new national newspaper (USA Today) founded in the United States as late as 1982. But with the advent of alternative delivery channels—first radio, then broadcast, cable, and satellite television—increasing amounts of pressure were put on the newspapers.
The immediacy of the newer channels led to the widespread death of afternoon newspapers in most markets; anything delivered to the dinner table in a physical paper was hopelessly out of date with the evening news on television or radio. The morning papers had the advantage of broad coverage collected while most people slept, and as a result have held on longer.
However, at the same time intrinsic limitations of the newer technologies made them better for certain types of information, though not as useful for others. For example, a two-minute video from a war zone could convey the brutal reality of combat far more effectively than reams of newsprint, but did little to describe the complex strategic elements—political, economic, cultural—of the conflict itself. As a result, a certain stasis had been reached in which newspapers carved out what appeared to be a sustainable role in the delivery of news.
Then came the Internet.
In particular, the effectively free and ubiquitous—and yes, near-instantaneous—delivery of all sorts of information mortally wounded the newspaper business. As the first round of the web ecosystem grew, the only remaining stronghold of the traditional newspapers—their ad-based revenue model—was made largely irrelevant. eBay, Craigslist, and Freecycle (among others) replaced the classifieds, and online ads took out most of what was left.
Some newspapers will undoubtedly manage the transition in some manner or another, perhaps even emerging as something fairly recognizable—particularly national/international properties such as the Wall Street Journal and the previously mentioned USA Today—and perhaps even financially sound.
But those that do will largely do so without their original distribution technologies; more important, many will not make the transition at all.
What Happens Next
All of this upheaval in news delivery—the enormous changes that have already occurred and those yet to come—has been enabled by developments in computing technologies, with the widespread adoption of everything from the Internet to the iPhone. It is probably worth remembering that all of this has occurred largely without cloud computing; as a result we are probably less than 10% of the way through this transition in news delivery, and this is only one industry. One industry, one example, with entire economies yet to transform.
Even so, some things have not changed much, even in the delivery of news. The computing infrastructures range from the stodgy (server-based, even mainframe-based systems within many newspapers) to circa-2009 state of the art (which we might as well start referring to as “legacy web,” web 2.0, old-school web, something like that). By and large these systems still cost too much to acquire, do not adapt to changes in demand nearly easily enough, are not reliable enough, and remain way too complex and costly to operate. Even the few systems that do not suffer from all of these problems are not ideal, to say the least: Some are proprietary, and most are either too complex to create new application software for, or simply do not scale well enough, at least for the sort of software that researchers are hard at work developing. In particular, while the first generation of electronic news infrastructures focused on just delivering the news, the next generation will be focused on sifting through all of that content, looking for just the right stuff.
All of that sifting and sorting and searching will take orders of magnitude more computing capacity than we have anywhere today. How will we pay for hundreds, thousands, perhaps even tens of thousands of times more servers and storage than we have today—almost unimaginable quantities of computing? How will we operate them? Write new software for them? It is fair to wonder how we will even power all that gear. Assuming that all of these concerns are resolved, we will then face a larger question still, one which we presume has many answers: What sort of business models are enabled by all this, and how do we get there?
This Scarcely Seems Possible
Before we leave this example, it is probably worth considering our present circumstances just a bit more. In particular, most of the history of both economics and engineering can be understood by thinking about managing scarcity. In other words, how do I get the most done with the least stuff, or within certain limits? That underlying drive to deal with scarcity, at its core, is what drives the startup team to work harder and pay less, the Fortune 500 enterprise to optimize manufacturing processes, and entire nations to set energy policies. Allocating scarcity is just Economics 101. Of course, it is also Engineering 101. Dealing with scarcity causes engineers to develop better video compression schemes, improve CPU designs to get more done in the same amount of time, and even rethink server packaging to reduce power consumption and labor costs.
While scarcity may be the nemesis of some, it is quite literally a prime mover behind the developments that have together come to be known as cloud computing. What does this mean, and how can it be possible?
Copyright © 2010 Eric A. Marks and Roberto R. Lozano.
In the next installment we’ll look at the underlying technological flow, and how that has made cloud computing possible. If you like what you’ve seen, keep in mind that the book is available (hardcover, kindle) now!
This is a part of an ongoing series in which themes from The Executive's Guide to Cloud Computing are illustrated by events in our collective transition to “all things cloud”.
Hardly a week passes without buzz about at least one – usually more – upcoming, uber-cool new mobile device, excitement over one that is coming out right now, or perhaps digestion of the one that was hot just before. It seems like product half-lives are down to a month or two (particularly in the Android market); whether Apple can stick to an annual refresh for the iPhone is certainly debatable, but that’s a discussion for another day.
Much of this excitement is justified and easily understood … after all, who doesn’t like having what seems like a couple of billion pixels dancing in front of their eyes, or the latest <insert feature here>?
Whatever your current handset preference, it is mostly immaterial to anyone else – from the macro perspective, the rapid penetration of this class of handsets (any OS, from wherever) is a clear enabler for media and other services consumption, and therefore (of course) for their creation.
Still, this is only a small part of the larger picture.
Platforms, Infrastructure, and Contribution
In a similar manner, the movement in the ecosystem of cloud services and technologies is both rapid and continual – from pure-play public infrastructures, to private infrastructure enablement, to the emergence of strong cloud application platforms as a key to any serious enterprise strategy – the breadth and depth of progress is indeed meaningful and encouraging.
Much of this was on display at Structure last week … a real sense of inevitability is growing here as well.
For that matter, the new thinking in the other-than-relational data store world may ultimately be more individually significant than any of the rest of this … that is why a couple of years ago we (at Appistry) began extending our platforms into reliable, commodity-based storage (tightly integrated with the larger platform, of course). Conference after conference highlights newer data and app frameworks for big data. For that matter, it’s hard to enter into many conversations about actual, intensely scalable apps without quickly focusing on serious data issues.
Still, this too is only a part of the larger picture.
Fundamental changes in how cloud-based stuff is built, distributed, monetized, and otherwise sliced and diced have been, in many ways, some of the least anticipated yet most crucial features of the broad transition to cloud. For example, entire ecosystems driven, at least in part, by fine-grained advertising long ago passed from novel to given.
In another example, consider the rise of relatively controlled, device-specific application stores. Early last week Tomi Ahonen did a long, well-researched post which makes the case that device-specific app stores (and their apps) offer intrinsically bad economics for developers.
… don’t invest in it (apps and app stores) today.. Put your creativity and investment into the real money opportunities, remember Pop Idol simple SMS votes earning half a billion dollars in USA this year alone..
Upon first read this conclusion didn’t seem right, and upon second read it bugged me a bit more. Andrew Odewhan nailed what was bugging me in a good bit of analysis. From that post:
What the App Store did brilliantly is create a marketplace that anyone with the appropriate skills can enter. The development tools are free, the membership dues are cheap, and Apple’s 30 percent take seems pretty reasonable when you consider the frictionless access to a global marketplace they’re providing.
While this still is not the whole story, now we’re getting much closer to a more complete picture.
Cloud and Confluence
When embroiled in the chaos of any one of these it’s easy to lose sight of the meta-shifts that are in progress, and in particular what they mean collectively.
What is most significant in the ongoing transition to cloud is not any one of these changes – be they new infrastructure, pay-as-you-go models, eventual consistency data models, or any of thousands of other items, meaningful as they may be individually – but rather, the confluence of all of them.
It is the confluence of all of these changes that is, together, enabling an entirely new possibility for computing. From the Execs Guide:
Computing—computation, storage, communication—is relatively free, scales up or down as needed, scales as much as needed, operates itself, and always works.
which then leads to a basic shift in what is possible for the organizations that embrace “all things cloud”. In particular, this shift enables …
A reality in which the organization can be largely freed from the traditional constraints that computing has placed on all for so long—constraints based on the cost, availability, capabilities, and the difficulties of using computing-enabled stuff.
and that is a reality worth making real.
Transformations of supply channels are never pretty – emotions and corporate carnage everywhere, a few new winners – and when it all settles out we eventually arrive at a more productive and efficient marketplace.
Since I spent much of the second half of last year writing my first book, as you can well imagine I’ve been paying particular attention to the morphing of all things publishing, with a particular focus on these things we have traditionally called “books” – will such a thing remain, how will they be distributed, how will they be constructed, and so on, all culminating in the ultimate question: how best to deliver something plenty of folks will actually find valuable?
This book was done in what may soon be the (mostly) last vestiges of the old-school model, at least circa late 2009 / early 2010 – a traditional publisher (Wiley – who have been great to work with, by the way), a traditional distribution and retailing model (with Amazon playing a prominent, though by no means exclusive, part in each of these two steps), and an expected ebook edition (by Amazon for the Kindle) four to six weeks after the paper book appears.
What is interesting is that in this model (pre-iPad, more on that in a moment) Amazon has a significant amount of control over the ebook distribution channel, and of course they have been working very hard to gain more. They really stepped up their pace recently, with presumably pre-emptive moves to sweeten the pot for authors and open up the Kindle for apps in the week+ before the iPad announcement.
Apple Enters the Fray
In a peculiarly ironic turn of events I received my “page proofs” (the near-final edition of a book, with just about final formatting, content, and so forth all in place for one last review) on the very day that Apple placed a big foot right in the middle of this fight, or as the infamous Nuke LaLoosh said in Bull Durham (my second favorite baseball movie), Apple “… announced their presence with authority”.
Amazon, of course, did not stand still. In fact, they responded very forcefully by pulling all titles from Macmillan (a big publisher who is one of Apple’s launch partners), going so far as to remove all references to these now-banned Macmillan titles from all Kindle users’ wish lists, in addition to removing sample chapters from these titles from all Kindles. That’s right: Amazon reached out and de-licensed, removed, whacked, destroyed – you pick the term – content that it had either agreed to maintain on a customer’s behalf or had previously delivered to that same customer.
Charles Stross (a well known science fiction author with some titles published by Tor, a Macmillan subsidiary) clearly describes the battlefield, the combatants, and what’s at stake in an excellent post – well worth the read.
As Stross points out
This whole mess is basically about duelling supply chain models.
…It’s interesting to note that unlike the music industry who had to be pushed, the big publishers seem to be willing to grab a passing lifeline.
… But Amazon, in declaring war on Macmillan in this underhand way, have screwed me, and I tend to take that personally, because they didn’t need to do that.
To his last point, Amazon (in this battle with Apple) really angered many content creators (in this case authors) … a losing proposition when one is in the business of distributing content.
But I think there is an even more important point here, one which has already underlain and will no doubt continue to underlie many disputes as cloud computing continues its inexorable path towards dominance.
The Underlying Question of Trust
This point is very simple, really.
In their fight with Macmillan and Apple, Amazon crossed two boundaries with their customers that should never have been crossed.
In particular, they 1) deleted content that had already been delivered to the customer and 2) deleted content that they were storing for their customers (the wish lists). It is hard to say which was worse (neither was good), but the wish list content is something which Amazon encouraged customers to create; it is also part of the customer experience, part of why someone would presumably intend to stay with Amazon and the Kindle for content.
- Amazon has been here before – remember the incident last summer when they deleted Orwell titles from customers’ Kindles? – and really should have learned their lesson.
- Irony of ironies, this sort of death-grip control is precisely why Apple has received so much criticism over the iPhone App Store. Yet even when Apple has removed applications from the App Store, I’m pretty sure that they did not have the temerity to remove apps that had already been purchased by end customers.
- Even though these examples may be minor and may even be reversible, any time a provider breaks an implicit or explicit contract – no matter how minor, and particularly when this is a remote, seemingly abstract cloud provider – it is a very pointed reminder that the end customer is at the eternal whim of the provider, and most definitely is not in control.
- Customers do not like being reminded that they are not in control, and will gravitate towards service providers who understand this fact and behave accordingly.
- Deliberate removal of content is different from, and arguably more toxic than, inadvertent removal of content (as from operational or technical failures).
One last thought – once a reputation for capricious removal of a customer’s content (whether already delivered or ostensibly maintained on their behalf) is earned, it will be very, very, very hard to shed.
I have no idea how this battle to establish the next dominant book distribution channel(s) will turn out, and truthfully have not even picked a favorite yet (others have opinions, of course).
What I do know, however, is that I – speaking as both a customer and a content creator – will drop any provider / channel-enabler who thinks that it is ok to break this boundary with customers, and am fairly certain that most customers will do the same.
How many occurrences will it take? Not sure, but we just don’t want to go there.
These are exciting times to be in both content creation and consumption. Many of these changes, while not seemingly directly “cloud computing”, both are enabled by and will drive much of the near- to mid-term cloud computing adoption. As such, I will write on these topics from time to time.
A hopeless mess … has it really been less than three years since mobile devices seemed like a stale, torpid kill-zone?
Think back to the spring of 2007, when the best choices were aging PalmOS devices, messaging-centric BlackBerrys, and the occasional Windows Mobile device, with absolutely none able to render a decent web page.
On the off-chance that you could get a readable web page, chances are the device itself – particularly the PalmOS devices – would crash at really handy moments.
The whole situation was frustrating enough that many wondered whether we’d ever get to a stage where any functions of these so-called “smart phones” were reliable enough to count on, never mind capable enough to actually want to use.
Fortunately, those fears were soon to pass.
Mobile Web (mostly) Real Now
Where do we stand today?
A few months after the original iPhone announcement – say in the misty, far off ages of late ’07 – the leading edge of the change was already most readily apparent. With only a handful of devices on the market, it was already very clear that – in many ways – mobile web was just now becoming real.
A critical mass of factors – display, user interface, network speed, ubiquitous access, and extensibility, among others – made the iPhone fundamentally different from what had come before. Different enough that its web traffic was nearly always wildly disproportionate to its device population.
In other words, the iPhone was the first mobile device on which “the web didn’t suck”.
Within two years, the rapid emergence of the whole Android ecosystem (including the very interesting Nexus One), a number of other interesting competitors (if only Palm can get critical mass for WebOS), a newly-resurgent RIM, and the huge (but seemingly wandering – see update below) Nokia have utterly transformed the mobile device markets.
Mobile Without Web
But wait, there’s more … in a trend that perhaps not many outside of the true believers anticipated, ebook readers have actually caught on. Between the various Kindles, the Nook, and even a few new entries from Sony, there is now real energy in this market.
Yes, publishers have been scratching their heads trying to figure out how best to participate, but not nearly so many people are laughing now. Ebooks are not only real, but the outlines of a path to mainstream acceptance are beginning to be visible.
Mobile Web x 10?
Earlier today Apple introduced the iPad, and, as is often their habit, probably pushed the mobile web into warp 9, maybe 10.
True enough that tablets of one kind or another have been out for some time – I owned an HP 1100 five or six years ago, for example, and this is not even Apple’s first attempt (think Newton) – but for one reason or another tablets have never really caught on. Whether the iPad offers enough to change that or not remains to be seen – my betting is yes – but I’ll mostly leave that debate to others for now.
Yet there are many betting that this device really portends a new class. For example, MG Siegler posted today on TechCrunch:
[the iPad is] the best way to browse the web in a style that is likely your preferred method: by touching it
Siegler is making the case that for anyone who already owns an iPhone or iPod Touch (now more than 75 million and counting) this is rapidly becoming a very familiar, even the preferred means for using the web. Of course, the same can be said for the owners of all of the Android, WebOS, and other advanced handsets.
Siegler made these comments after some time using one, and in doing so reflected similar comments from others who also spent time driving one today.
His bottom line: this is the first of a class of devices that are entirely optimized for content consumption (which also implies a degree of interaction, of course).
The Impact on Cloud Computing
In many ways the rise of the iPhone / Android class of handheld computers has been a real driver for the growth of cloud computing. By enabling interaction with every manner of web-delivered services at nearly any time, these devices contribute mightily to the very meaning of “web-scale”.
And as has been seen in case after case after case, the only real systems architectures that have much hope of dealing with web-scale are, in fact, cloud computing architectures.
So in that context the introduction of the iPad most likely portends another front in the inexorable growth of web-scale itself, and in doing so will only accelerate the need for adoption of true cloud-computing architectures throughout.
Update: A few hours after I wrote this post, Nokia announced strong Q4 results, growing their market share in smartphones to 40% worldwide. This result “marked an end to a steady stream of market share losses for Nokia’s smartphones”. So perhaps Nokia will remain a strong factor going forward – if so, great! The real point is that – whoever the leaders are – the new standard for mobile devices is to fully utilize and participate in the web, period. That is what has profound implications for cloud computing.
It is too early to tell the precise nature of this increased demand due to these tablet-class devices, but that it will occur is nearly inevitable. At some point it would be interesting to examine demand metrics as a function of the power of the handheld devices – my guess is that it would be very revealing indeed.
The USPTO awarded Google a software method patent that covers the principle of distributed MapReduce, a strategy for parallel processing that is used extensively by the search giant. If Google chooses to aggressively enforce the patent, it could have significant implications for some open source software projects that use the technique, including the Apache Foundation’s popular Hadoop software framework.
Here are my initial thoughts (I plan on a more thorough post upon further reflection):
- This has potentially very profound implications for the rapidly developing world of “big data” application architectures.
- These “big data” problems are some of the key use cases for cloud computing adoption – public, private, or anywhere. One need look no further than the rapid uptake of Hadoop and the growth of its community and ecosystem.
- Google has definitely done the most to popularize the notion of applying map and reduce operations to processing large sets of data.
- Having said that, the roots of this approach to data manipulation are very old in the history of computer science, from the earliest days of AI / Lisp / etc.
- Map-reduce is not the last word on big data algorithms, but it is an important one.
In short, big data problems are important to cloud computing, map-reduce is an important class of algorithms for some of these big data problems, and many people have put 2 and 2 together.
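To make the idea concrete, here is a minimal, single-process sketch of the map-reduce pattern – a toy word count in Python. This is an illustration of the general technique only, not Google's patented distributed implementation or Hadoop; in a real cluster the map phase would run in parallel across many machines, and the framework would shuffle and merge the partial results.

```python
from collections import Counter
from functools import reduce

# A handful of tiny "documents" standing in for a large corpus.
documents = [
    "the cloud is inevitable",
    "the web is the cloud",
]

def map_phase(doc):
    # Map: each document independently emits its own word counts.
    # In a distributed setting, this step parallelizes trivially.
    return Counter(doc.split())

def reduce_phase(left, right):
    # Reduce: merge two partial results into one.
    # Counter addition sums the counts for matching words.
    return left + right

word_counts = reduce(reduce_phase, map(map_phase, documents))
print(word_counts["the"])    # 3
print(word_counts["cloud"])  # 2
```

The essential property on display – independent per-record work followed by an associative merge – is exactly what lets the same pattern scale from two strings on a laptop to petabytes on thousands of servers.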
As for the patent and other implications,
- I have tended to favor some level of software patents – we have a couple ourselves – so I do think that there is a time and a place for patents, yet I also understand how easily abused software patents can be in the hands of … well, people who can abuse them.
- I have absolutely no idea how enforceable this patent could be … that is a much more complex question, far better suited for the interested legal scholars and fine patent counsel out there.
- I have even less of an idea what Google intends to do with this patent. Will they benignly gift it back to the community, or twirl it around like some stealthy Sword of Damocles, always there but just beyond everyone else’s ability to perceive its precise power?
- As anyone who has ever participated in (or even been near) a patent dispute understands, in most cases the value of a patent is directly proportional to the amount of cash available to either prosecute alleged infringements or defend against said prosecutions, and
- Google has lots of cash (yes, yes, I realize that this is quite a revelation).
One more thing, and the importance of this is non-trivial – Google loves to portray themselves as “more open than thou”, with a very fine-sounding “don’t be evil” ethos that is repeated as a litany.
That is all well and good so far as it goes, but what does that mean for absolutely everyone else who ever has or ever will use a promising set of basic techniques for a serious cloud app, or perhaps wants to innovate far beyond them, but who happens to be outside of Fort Google?
One More Thing
I have been really surprised at the relatively low-key response to this patent award from the community.
When Dell famously received a trademark from the very same USPTO for “cloud computing” in April 2008 the hue and cry of the bereaved was astonishing … and effective. Within a week Dell had politely withdrawn the application, and returned the name to the community.
That was only a name.
This is far more significant, impacting the bounds and capabilities of real applications, and what do we hear so far … not much.
It’d be great to hear from others who care about the future of cloud computing in some way. There are so many aspects to this – legal, technical, operational, even psychological – all wrapped up into the idea of competitive positions, markets, and the future of cloud computing. So yes it’s a complex topic, but there may be some relatively simple answers.
Maybe Google needs look no further than Dell for some guidance on this one.
These are quick thoughts about a complex topic. Hopefully next week I’ll have a chance to reflect more deeply on this situation, see what develops a bit more, and then post further. This should be interesting.
About this time last year we had a bit of fun putting together a “2009 Predictions Webinar” … enough so that today we are repeating the event for 2010.
I should probably elaborate on the “we” … we are all colleagues at Appistry: Sam Charrington pulled this together and moderates, while Kevin Haar (CEO), Michael Groner (our Founder and Chief Architect), and I are the panelists.
We’ll start by looking at our predictions from last year, so that should be entertaining all by itself … Sam has already posted a more detailed analysis as well – you may want to check it out and see if you agree with his evaluations.
In any case, we will be taking questions, so hopefully you can join us today and fire away. I think we'll also post the recorded webinar relatively soon. Hope to “see” you soon.
Just for a little more logistical enjoyment I’ll be participating today from San Diego, where I am preparing to witness my son graduate from bootcamp at MCRD San Diego. More on that another day!
Last week’s introduction of Virtual Private Clouds by Amazon provoked, as you might expect, quite a bit of discussion in many places where the cloudisti congregate.
While much of the energy expended was more or less Brownian in nature (in the sense of lots of movement in all sorts of directions, not necessarily all that useful), there were a couple of points that were made by a few that are worth mentioning, if only because they are so persistently mentioned.
Haven’t We Heard This Before?
Some folks have a view that cloud computing = public cloud computing. For some, “if you own it, it’s not cloud”. While ownership is a significant consideration, and the idea of instant access to at least the possibility of vast infrastructure with only a modest credit card is a cool option, to then conclude that all clouds must be public is simply a non sequitur.
Even less defensible is the related notion that “if it’s on your premise, it’s not cloud”. While that may seem obvious to the startup, with everything that you own (or hope to own!) within arm’s reach, it doesn’t even begin to make sense in the case of a global enterprise, with facilities scattered – well, all over the globe – here and there, even everywhere. Of course, some stuff is bought, some leased, some fully out-sourced and so forth.
Those who are in this “all clouds are public” camp often say that nothing on premise could ever compete economically. Unfortunately, this is usually an emotional argument, almost never backed up with any meaningful analysis. Look no further than the McKinsey study earlier this year, which did look at the numbers and concluded that cloud computing didn’t really make economic sense (I don’t agree with that conclusion either, but that will have to be a post for another day).
In any case, to say that an on premise cloud will a priori always result in less economic benefit is simply not true – it may or may not be true, depending upon the total cost of operations (power, facilities, salaries, etc.) vs. the total cost of the public (or virtual private) cloud.
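The comparison that paragraph calls for is, at bottom, simple arithmetic. Here is a minimal sketch in Python – every figure below is a hypothetical placeholder for illustration, not a real quote from any provider:

```python
def monthly_private_cost(power, facilities, salaries, amortized_hw):
    """Total monthly cost of operating an on-premise cloud."""
    return power + facilities + salaries + amortized_hw

def monthly_public_cost(vm_hours, rate_per_vm_hour):
    """Total monthly cost of renting equivalent capacity from a public cloud."""
    return vm_hours * rate_per_vm_hour

# Hypothetical numbers -- the point is the comparison, not the figures.
private = monthly_private_cost(power=4_000, facilities=6_000,
                               salaries=25_000, amortized_hw=15_000)
public = monthly_public_cost(vm_hours=200 * 24 * 30,  # 200 VMs, all month
                             rate_per_vm_hour=0.40)

# Which option wins depends entirely on the inputs.
cheaper = "private" if private < public else "public"
```

Change the inputs – utilization, headcount, hardware amortization, the hourly rate – and the answer flips. That is the whole point: neither side wins a priori.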
Private Clouds are Real
Of course, there are many more considerations than simply cost, but cost alone is reason enough to take a look at what is actually happening in cloud computing.
While public cloud providers have made many notable innovations (some technical, some in business models, some cultural), for the past couple of years there has been tremendous innovation in private clouds as well … and those innovations are now beginning to deliver real benefits.
While it is true that some so-called private cloud offerings may be nothing more than re-branded legacy products, after a bit of market confusion they’ll simply be forgotten as the market develops.
Real private clouds that offer the desirable characteristics of a public cloud – scalability, elasticity, commodity infrastructure, cheap operations – are certainly possible now. Possible, and a reality for real enterprises. Real production instances exist today, with many more in progress.
Yet even the most ardent private cloud advocates absolutely know that these are not the only solution for all cases. That just wouldn’t make sense.
What Will Happen
In reality, what is happening is the development of hybrid clouds – that is, clouds that are a mix of public, private, and everything in between.
Hybrid clouds are the reality that most enterprises will go to by default. That is, they’ll take a look at all their options, including public, private (on premise), and “virtual private” (off premise, but segmented in some manner) from a variety of vendors, and pick the right mix for them.
Of course, that mix will change over time, perhaps even moment to moment.
This is where cloud application platforms come in, particularly those that enable hybrid clouds: a software, operations, and architectural approach that allows all of these options to be selected precisely when they make the most sense, in whatever combination makes the most sense for that enterprise, at that particular point in time, and for that particular set of work. It is that approach that truly enables hybrid clouds for the enterprise.
It’s about giving control over these costs and capabilities back to the enterprise customers, and letting them decide.
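To make the “pick the right mix” idea concrete, here is a toy sketch of the kind of placement decision such a platform might automate. The workload attributes and the rules are invented for illustration – a real platform would weigh cost, compliance, latency, and current capacity in far more detail:

```python
def place_workload(workload):
    """Pick a deployment target for one unit of work.

    The rules here are hypothetical, purely for illustration.
    """
    if workload.get("regulated"):          # e.g. core financial data
        return "private"                   # keep it on premise
    if workload.get("bursty"):             # spiky, short-lived demand
        return "public"                    # rent elasticity by the hour
    if workload.get("needs_isolation"):
        return "virtual-private"           # off premise, but segmented
    return "public"                        # default: cheapest available

jobs = [
    {"name": "ledger", "regulated": True},
    {"name": "render-farm", "bursty": True},
    {"name": "partner-portal", "needs_isolation": True},
]
placements = {job["name"]: place_workload(job) for job in jobs}
```

The rules belong to the enterprise, not the provider – which is exactly the control the paragraph above describes, and they can be re-evaluated as often as conditions change.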
That is a hybrid cloud, and that is the future of cloud computing.
After midnight last night I was just quietly working away on one monitor (turned vertical – some habits sure die hard, and I picked that one up way back in Xerox Alto days in graduate school). At any rate I had Twitter quietly going just outside my field of view – you know, where you can sort of pick up a rhythm from the edges of your peripheral vision – when all of a sudden there was a big spike in tweet traffic.
Well, that broke me out of my focused alter-world, so I scrolled back through the timeline to figure out what might have caused this late-night excitement. Sure enough, it was Werner Vogels' post announcing Amazon Virtual Private Cloud.
As I flipped through his post, followed by Jeff Barr’s post with a few more details, the spike made sense – Amazon was causing a commotion by using the “P word”.
A Few Observations
The “P word” – private – is one that is near and dear to the hearts of enterprise executives everywhere. Sure, they want the scale, elasticity, and cost reductions that seem to accompany every discussion of cloud, BUT …
Every executive has their own haunting fears: fears of data loss or service outages, news of which spreads from website to blog to Twitter to Facebook before they have even heard what happened, and then, if all heck breaks loose, ends up as an unwelcome front-page story in the WSJ (complete with one of those hand-drawn sketches of the suddenly well-known).
No amount of therapy can remove those fears.
Of course, there are the necessary exceptions of the startups, who first need to succeed so that they can have something to protect! But beyond the startup phase, enterprises everywhere need assurance that their operations are safe in the cloud.
Hence, the move towards private clouds of all kinds, and in this announcement a particular flavor of virtual private cloud.
In any case here are a few initial day-of thoughts:
- This is a beta of a basic service. Look at what is provided – raw VMs, accessible across a VPN, with billing and a bit more. That’s great, useful, and fine so far as it goes, but …
- Enterprises will demand much more. As my colleague Sam Charrington points out in a very interesting post, Amazon has not addressed many of the issues – such as security, control, and compliance – that will be absolutely essential for so many enterprise deployments. Imagine deploying a public company’s core financial systems across the service as it stands today, and, well, you know how that’d go.
- Private clouds are here to stay. The simple truth is that Amazon, a leader in public clouds, has seen the opportunity (some would say was driven to it, but that’s neither here nor there) at the essence of private clouds – namely, that some enterprises need, desire, or demand some level of “privateness” in their clouds. Yes, I know Werner makes his usual argument that an enterprise can’t do private clouds, but he does so by equating private clouds with virtualization. That is exactly why a real private cloud strategy doesn’t – in fact, cannot – rely solely on virtualization, but …
- Private clouds absolutely need a capable cloud application platform. Along with some operational practices (a post for another day) the reality is that most of the benefits normally associated with a cloud – elasticity, scalability, lower costs, and so on – are actually enabled by the cloud application platform.
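That last point – that elasticity is largely a platform behavior, not a property of the infrastructure underneath – can be reduced to a very small rule. This is a toy sketch, not any vendor's actual autoscaling logic:

```python
def desired_workers(queue_depth, jobs_per_worker, min_workers=1, max_workers=100):
    """Elasticity rule: size the worker pool to match queued work.

    The platform applies this continuously, growing the pool as demand
    spikes and shrinking it (down to min_workers) as demand falls.
    All parameters here are illustrative placeholders.
    """
    needed = -(-queue_depth // jobs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, needed))
```

The same rule runs unchanged whether the workers land on premise, in a public cloud, or both – which is why the platform, rather than the raw VMs, is what delivers the cloud-like behavior.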
This was a great announcement for dialing up the energy levels in the industry-wide arguments over what sort of clouds are needed by the enterprise. More than that, this certainly extends EC2 in some great ways, so that now it can be a contender for an expanded role in an enterprise’s hybrid cloud strategy.
That so much of that conversation has been done in the abstract – with only the hope of an enterprise customer – leads to some peculiar conclusions by some, ideas that will never go over with a real enterprise – but that whole topic is certainly a post for another day.
In the meantime, let’s look at the bottom line on this announcement: this is a very good day for all those who have been busy building a world in which any enterprise can choose the mix of clouds that best suits it, and get what it wants done, when it wants it done, cheaper and more reliably than it’s been done before.
Even more important is the true bottom line: this is a very good day for enterprises that are driving to gain more control over their technology operations, see the promise of cloud computing, and know that whatever they use needs to meet their enterprise-grade needs.
Their needs are being heard and, more importantly, acted upon.
And that is a very good thing.