This is Part 1 of a planned three-part series that traces the evolution of the CloudIQ Platform from first idea to what it is today, then considers what it is likely to become.
Right after graduate school we spent a few years living in the foothills of a desert mountain range. I remember the first time that I hiked a narrow trail to the top of the nearest peak – standing at the bottom was rather intimidating; the circuitous ascent itself was such a tangled mixture of switchbacks, short ascents and descents, under cover and open trail that it could only be described as non-obvious (at best). However, it was only upon achieving the original goal that we gained any perspective on how, in fact, we had gotten there.
In this post I will reflect on the evolution of CloudIQ – the truly exciting (if I must say so myself!) cloud application platform that we announced a couple of weeks ago.
As some pondered the impending Y2K “crisis” and others looked for the best millennium parties, most of our founding team was deeply enmeshed in building, selling, and supporting an enterprise-grade (scalable, reliable, etc.) payment server.
Upon leaving that company I had time to reflect on my enduring frustrations -
why was it so hard to build software that we (and our customers) could rely upon?
It seemed that we were spending 60 to 70% of our engineering efforts not on core functionality, but in our best attempt to ensure that the resulting application could be relied upon.
Later on an early supporter coined the term “reliability tax” to refer to this overhead.
As I asked friends at other companies and enterprise shops, most recognized the same problem – a few argued the overhead was actually higher, and most thought the cruel irony was that it was still very, very difficult to ensure true reliability for enterprise apps – but all agreed that this just didn’t seem right, nearly 50 years after Grace Hopper did her most famous work.
With this question really bugging me, I had an opportunity to build the beginnings of a digital recording studio. Using completely commodity gear – no-name, cheap – I was genuinely shocked at the results. Serious performance, cheap.
So that led to the second question -
why weren’t we using commodity stuff like this for problems that we really cared about?
The answer to this seemed easy enough – who could trust this cheap stuff? What if it broke? (and it would break).
In pondering the first question it seemed to us that the core problem for software development was one of complexity – mainstream application architectures were simply too complex, and becoming inexorably more so.
The First Idea
Then it became fairly clear – we could solve both of these problems at the same time by enabling groups of commodity boxes to work together to ensure a stable platform for applications.
But what exactly did that mean? Or more to the point, could we build it? Ever MORE to the point, once we built it, how could people use it, and for what applications would this new thing be useful?
Over the course of a few months the founding team hammered out the first answers to those questions. Throughout this process we were driven by use cases – we wanted the resulting platform to be equally adept at running anything from fine grained, transactional applications to more computationally-intense enterprise applications.
This led to much refinement of the basic idea, which evolved into a self-organizing group of commodity machines that could act like one thing, reliably execute all sorts of applications, grow (and shrink) as needed without affecting any running applications, and be very simple both to write applications for and to operate.
The Hive is Alive
We decided to call this hive computing, and on June 12, 2002 we had the first successful demonstration of a running hive. We assembled a few commodity boxes on a re-purposed kitchen rack, loaded the prototype hive software, and … it worked!
We were able to (carefully) pull a few plugs and the application kept running without missing a beat – in fact, without even losing a bit of data.
Within two years we had our first paying customer (Sprint), a couple of patents filed, and a demonstration system on which we ran an eye-opening benchmark – a wall of 100 commodity computers that could legitimately double Visa’s then-current peak transaction load, for a total bill that was well under 10% of the conventional alternative.
The best part? It was arguably far more reliable as well. We were constantly amazed by the resilience and ease of use of this new type of application platform … though truth be told, we were not yet ready to use the “P” word.
In our pursuit of the possible we (the founding team) sometimes thought that economics would do all the persuading for us. Well, that turned out to be sometimes yes, but mostly no.
In fact, sometimes economics actually worked against us – the combination of 90% lower costs, simpler development, easier operations, and simultaneously increased reliability and scalability simply seemed too good to be true for many people.
The fact that we required some modification of the application also made adoption more complicated. While we supported several languages and multiple operating systems (and could easily support more of each), the plain, simple truth was that you did need to modify – albeit lightly – many components in each and every application.
This raised the adoption barrier a bit higher.
Then there was the little matter of language. Without a native category to call our own, we were put into all sorts of categories – everything from grid computing to autonomic computing, with several others in between.
Early on I even told some folks that we were basically building the “Borg for applications”. While hard-core geeks loved that (and usually laughed), it didn’t exactly help us build trust with the typically non-technical executives responsible for making the final purchase decisions.
Yet, It Worked … Well
Despite these go-to-market difficulties the product itself worked well – really, really well. In fact, by mid-2002 several of us became convinced beyond a shadow of a doubt that there would come a time in the future – say 10 or 15 years out – when most mainstream computing would be done this way.
The economics and functional advantages were simply too compelling for any other outcome.
The only real question in our minds was when and who – when this transition would begin to occur and who would help make that transition happen.
So as 2004 came to a close we pondered solutions to these issues and continued to press rapidly forward.
In Part 2 we will talk about why this worked so well, and the transition to the application fabric.
A while back my colleague Sam organized a “Cloud Forecast 2009” podcast, which was a lot of fun. Enough fun that we’re investing a bit in new audio gear and will turn out some more installments … we’ll keep you posted.
In any case, the occasion of that recording got me thinking a bit. With the Christmas and New Year’s holidays now well behind us, and the de facto Super Bowl holiday looming (I was really glad to see Kurt Warner earn another shot at the title – the 2000 Super Bowl win by the Rams remains one of my favorite sports memories), I finally realized my theme for 2009, one that had been gnawing at me for some time and is now very clear:
I am excited.
I am excited about cloud computing, including progress in everything from virtualization to real-big-public-clouds with cool new storage facilities, from billion-core processors to dirt-cheap multi-terabyte drives … honestly, who can’t get excited about a terabyte for less than $100?
Even the debate over the shape, color, texture, and full extent of cloud computing is energizing. Of course, my favorite part of the debate is that it’ll be sorted out in the marketplace, and I like our chances … after all, does anyone need cloud-enabled applications?
I think so.
I am particularly excited about the level of market awareness amongst all sorts of folks that we’re talking with, about real customers doing real stuff with real clouds – public, private, and a mixture of both.
Agree with me or not, the development of private clouds is a great boon to enterprises and government agencies everywhere, and looks to be terrific for us in 2009.
Closer to home I am excited about some really cool stuff that we’re building into our products here at Appistry. Great stuff, and can’t wait until we can start talking more …
Heck, I am even excited about my new mouse! (every once in a while Microsoft does something really great, and this is one of them).
There are a lot of individual reasons to be excited, true enough. Yet the real reason to be excited runs far deeper.
After years and years and years of tons of folks making progress in all things computing – slavishly building bigger and better nets; faster and cheaper processors; beaucoup rotgut-cheap drives and more; fundamentally new business models that enable some serious capital-expenditure avoidance; cloud-friendly ways to build scalable, reliable apps; and much, much more … I think that we’re finally here.
We’ve reached critical mass.
Critical mass as in all of computing is going to go Klein-bottle on us, turning inside out in many dimensions, imitating the caterpillar and coming out as something fundamentally new on the other side.
All of the right factors have come together … economic, technological, conceptual … probably even cultural.
Time to help create the Fourth Age of Computing, time to see just what is possible.
Like I said, this is a great time to be in the computing biz.
Butterfly image courtesy of The Butterfly House, a really cool contrast to the prevailing mainstream of a Midwestern winter … plus it’s close to where I live. If you’re ever in St. Louis, please go enjoy the Butterfly House. If I see you, I’ll buy you a beer!
A while back there was a flurry of activity around a startup proposing floating datacenters – at the time I thought it was kind of a dumb idea, which I followed with another post carrying a more on-the-point headline:
“Floating Data Centers Miss the Point, Add a Bunch of Risk, and Will Keep You Up At Night; On the Other Hand, Deploying Your Applications on a Cloud of Commodity Computers With Appistry’s Application Fabric Will Deliver the Goods” (note: slight edit)
At first glance this appears to be a more far-reaching version of the floating data center notion, adding an interesting (though still fairly conceptual) energy-generation angle.
The idea of self-generated power ups the potential benefits (beyond cooling and portability) by a big step … yet the two biggest hurdles remain.
Hurdles That Remain
Leaving expense aside, connecting sufficient bandwidth to a floating data center remains an enormous challenge – whether wireless or wired, it’s just going to be difficult.
Second, this really will need to be at least as sturdy as The Unsinkable Molly Brown if it’s going to have any value beyond conceptual bantering.
Whether from terrorists, storms, or just inexplicable mistakes, the prospects of all those computers ending up wet wet wet is a sobering one, indeed.
A Big Idea
Bottom line, I think this is a “big idea” that will pop back up from time to time, and will probably even have some very flashy demos and prototypes.
But … and this is a very big but … I think its days as a practical alternative for hosting stuff that we really care about are still a long way off … if ever.
Update 1: This story also showed up on Slashdot, with a decent discussion following.
Over at the cloud computing group on Google Groups there is an interesting discussion about optimal load utilization. Along the way Tim Freeman brought up an interesting point:
Are there hidden costs at running this high in the first place? We’ve heard the opinion from someone who is in charge of buying 100s-1000s of computers a year that commodity hardware isn’t made to run at this capacity. That you’re not getting as much value for your money over time because of far higher failure rates (i.e., that failures don’t increase linearly with utilization and that there is usually a sweet spot)
So that got me to thinking …
Heat Really Does Kill
Obviously there are many factors in the failure rates of computing equipment (spinning drives, processors, etc.), but assuming you have not-horrible power cleanliness, the #1 enemy will be heat.
Heat. Heat. Heat.
So, with that in mind, one important way gear becomes server-grade (i.e., expensive, non-commodity) is by cooling better than commodity gear does. Interestingly, server-grade stuff also tends to squeeze that last hard-to-obtain chunk of performance out of its components, as well as provide varying amounts of built-in redundancy – both of which exacerbate the heat problem considerably, which demands even better heat dissipation, which requires more power, and so on.
So in that sense Tim’s contact has a point: when running at full utilization most processors throw off lots of extra heat, necessitating (at the very least) extra gear to handle it.
And there’s always the chance that the heat will be poorly dissipated, thereby resulting in increased failures … yet that does not mean that buying server-grade gear is the right way to go anymore. Far from it.
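To put a rough number on that intuition, a common reliability rule of thumb is that component failure rates roughly double for every 10°C of sustained temperature rise. The constants below (baseline temperature, doubling interval) are illustrative assumptions, not measured values – just a sketch:

```python
def relative_failure_rate(temp_c, baseline_c=40.0, doubling_c=10.0):
    """Failure rate relative to a baseline operating temperature.

    Rule-of-thumb model: the rate doubles for every `doubling_c`
    degrees of sustained temperature rise. Illustrative constants only.
    """
    return 2.0 ** ((temp_c - baseline_c) / doubling_c)

# A box running hot at full utilization vs. one kept cooler:
hot = relative_failure_rate(60.0)   # 4x the baseline failure rate
cool = relative_failure_rate(45.0)  # ~1.4x the baseline failure rate
```

Which is the whole commodity trade-off in miniature: keep the gear cool, or tolerate its failures at the application layer, and the economics stay firmly on commodity’s side.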
A Better Choice
A couple of choices come immediately to mind -
- use lower-power components (as in laptop grade stuff). These will naturally generate less heat, and thereby tend to reduce their self-inflicted failure tendencies.
- run much leaner power supplies than most vendors want to supply off the shelf.
There are other ideas – some interesting, some dumb – but those are a few for starters.
Is the Commodity Gear Today What We Need?
Interestingly enough, most of the stuff that folks have bought to build out grids has been server-grade in drag, more or less. Just look at the components and the power supplies – high energy consumption processors, big power supplies, beaucoup fans etc. Not always, of course, but that has generally been the norm.
In fact, it’s this “server in disguise” gear that passes for commodity in most enterprise data centers today … fine so far as it goes. As Cameron pointed out in the thread you can run the current commodity gear at 100% utilization with no particular increase in failure rates. True enough, but what if we think more aggressively?
In fact, let me go so far as to suggest that if we really are able to run at 100% for months without a failure, then we’ve massively overbuilt the “commodity” gear.
Back to what will be possible as we change our infrastructure during the transition to clouds – public or private.
This is the Key – Absolutamente Crucial!
Underlying all of the power / failure related infrastructure choices is an unspoken reality – the real key to using commodity at scale is to ensure that the application will survive the failure of individual computers / drives / switches / whatever without losing a darn thing.
Once you do that, at the application level, then you are free to experiment with different infrastructure choices to your heart’s content – different utilization rates, whatever comes to mind – provided that your apps don’t care.
In other words, many of the benefits that may result from cloud computing – flexibility, scalability, lower costs, reliability, and so on – are actually enabled at the application layer.
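As a toy sketch of what “the application survives” means in practice – the names and structure here are mine for illustration, not Appistry’s actual API – this is the basic re-dispatch-on-failure pattern that an application fabric automates for you:

```python
class WorkerFailed(Exception):
    """Raised when the machine running a task dies mid-flight."""

def run_task(task, worker, failed_workers):
    # Simulate dispatching a task to one machine; a dead machine
    # raises instead of returning a result.
    if worker in failed_workers:
        raise WorkerFailed(worker)
    return f"{task} done on {worker}"

def reliable_run(task, workers, failed_workers):
    """Try the task on workers until one survives long enough to finish.

    This is the essence of application-level reliability: a machine
    failure is handled by re-dispatching, so the caller never sees it.
    """
    for worker in workers:
        try:
            return run_task(task, worker, failed_workers)
        except WorkerFailed:
            continue  # that box is gone; the task simply moves on
    raise RuntimeError("all workers failed")

# Pull the plug on two of three boxes; the task still completes.
result = reliable_run("settle-batch-17", ["node-a", "node-b", "node-c"],
                      failed_workers={"node-a", "node-b"})
```

Once this pattern sits below every task, a dead box is an event for the fabric, not for the application – which is exactly what frees you to experiment with the infrastructure underneath.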
One more thing – when the failure of individual computers doesn’t matter to the application, then you can pick lower-power stuff that is also very cheap – now you’re starting to talk about a great cloud infrastructure.
So as you carry this thinking further, you can start to imagine a much more aggressive type of commodity, one as yet unrealized.
Start thinking of bare-bones, fairly dense components that are uber-cheap … sort of a Lego-block approach. Cheap as in $300 to $400 all-up. Perfectly suited for enterprise-grade clouds – public or private – at least those that play by these new rules.
There was (and will continue to be) quite a bit more conversation on this point – it’s one of the more interesting parts of commoditization. In any case, in a future post I’ll outline some more thoughts on the “new commodity” that I believe is fundamentally possible.
A couple of weeks ago I was talking with a group of (primarily) engineering students about how our need for scale is forcing all sorts of changes in our industry … some technological, some economic, some social / cultural, and so on.
As engineers the temptation is to focus on the technological changes, which is good so far as it goes … but there is so much more.
For example, think about the differences between the first bubble and now. For my money the single biggest difference is that now there are some pretty successful economic models in place … ways to monetize crowds, to actually reward investors for taking risk to build an enterprise.
Of course, because of the ability to monetize a crowd, it becomes necessary to serve the crowd … and these crowds are (hopefully) instant mobs, seething, roiling, exploding and demanding more … all the time.
In that climate there are many businesses that can be built, fortunes made, technology consumed … always a good thing for those of us who enable reliable, cheap scale!
The recent sale of Bebo.com for the better part of a gazillion dollars / euros occasioned an opinion piece by Bragg that makes a pretty reasonable case that much of Bebo’s value came from the content provided by musicians. You may be thinking “duh … that’s why it’s called user-generated content”. I’m sure that Michael Birch (a Bebo founder) would contend that he delivered plenty of value to the musicians by providing them exposure, and besides, nobody forced any particular musician to upload their stuff to Bebo.
Michael Arrington then posted a mostly-reasonable, albeit coldly analytical dismemberment of Bragg’s piece. Better than the post itself is a lengthy and at times entertaining comment thread that continued the argument for a while. In any case, Arrington’s basic case is something like “that’s just the way the world is now, so deal with it”.
Except that as Nicholas Carr observes Arrington’s whole argument revolves around one central idea:
Recorded music is nothing but marketing material to drive awareness of an artist.
Which is one way to view the world … the world as pure commerce. But doesn’t that seem to be a bit impoverished?
Maybe that’s inevitable when you’re commercializing your words by the pound … so to speak. Not much time to craft a particularly artful post – just barf and run. While that’s probably too harsh, I doubt anybody who blogs thinks that future readers will be considering the merits of any particular post forty or fifty years from now.
A week would be pretty good staying power for most posts, two weeks awesome, a month the stuff of legend.
But music is different in this regard … by a lot.
That the creation and distribution models for all sorts of content are in complete flux is obvious enough … all the economic rules are changing, and in that change is real opportunity.
The music biz is just one example, but it is instructive.
Furthermore, the rapid technological change driven by rapid commoditization – and the corresponding ubiquity of computing, bandwidth, and consequently reliable software – is adding mega-fuel to this fire.
All of this is good … even very good, but there’s much more.
That this new world (even in the transitional state that we’re in) will provide tons of opportunity is clear enough. I first realized this seven or eight years ago when one of my sons, who was in a jazz-performance program at the time, pointed out that all of his professors (all working jazz musicians) were big proponents of the music-sharing sites.
They universally hated the labels and the rest of the music distribution business, and were so taken by the opportunity for new avenues to expose people to what they did that they were happy to overlook some rough spots.
Better to be “ripped off” by individual people in the short run … after all, they might become fans someday (even attend a gig and perhaps actually buy some music … presciently agreeing with one of Arrington’s contentions!), than by a label who would be very happy to kill your career if you ever got out of line.
So new economic models, new technological means to create the art, distribute it, even to scratch out a living (or hopefully do even a bit better than that) at the same time … all of those are certainly upon us. Good enough as far as it goes …
But none of that changes the meaning of beauty, of truth … thankfully those have meaning that transcend our economic maelstrom.
I for one, am glad of it.
Robin Harris must have woken up grouchy today – he’s dumping all over cloud hysteria on this fine Monday. After throwing the obligatory it’s-all-marketing punch (the truth is that there IS a bunch of marketing, but there’s also a bunch of real substance … more on that in a minute), he gets down to business.
I am paraphrasing a bit, but here are his main points:
The only real key to Google’s low cost structure is active cluster storage – if it’s productized, anyone can be as cheap as Google (including your own datacenter).
Networks are still the thinnest resource in the computing landscape.
Consequently only low-data-rate applications are suitable for the cloud – all others will (or at least should) stay local.
Robin makes some good, albeit incomplete points, though not too sure about his conclusion. Go read his post, then let’s look at his reasoning a bit at a time.
The Main Points
The only real key to Google’s low cost structure is active cluster storage – if it’s productized, anyone can approach Google’s economics (including your own datacenter).
This is probably the biggest miss – perhaps more critical than reliable commodity storage (which is important!) are all of the applications that natively run on commodity infrastructure. Each app generally runs as well as that particular app needs to, and runs in a way that allows for some sort of operational sanity.
Google (& Amazon & others) have built a number of frameworks to make this true for their own applications, of course. Sometimes they build these sorts of capabilities directly into the applications themselves. For everyone else, there is a clear need for platforms that reliably scale applications on commodity infrastructure – that is precisely why we built the application fabric.
Simple, coherent operational capabilities are also crucial. When a commodity infrastructure can basically run itself, it becomes a lot more attractive as a deployment option for the serious enterprise.
Networks are still the thinnest resource in the computing landscape.
True beyond a shadow of a doubt! Robin makes a good point that the rate of improvement for networks lags behind other parts of computing (like his native storage land). My only caveat is that, while clearly limited, network bandwidth is just as clearly sufficient for many, many mainstream applications (particularly when structured as described below).
Consequently only low-data-rate applications are suitable for the cloud – all others will (or at least should) stay local.
I think many applications will clearly stay local – some for technical reasons, some for security, control and / or cultural reasons, some just because.
Having said that, some data-intense applications will still move to a cloud, provided that the data is stored near the corresponding computing elements. This alternative is even now beginning to play out, such as in the Amazon EC2 / S3 combo (among others). With this approach all high-bandwidth data operations are effectively local.
In the rush to argue for or against cloud computing, many infrastructure-centric folks are missing a couple of key considerations – namely the critical nature of the applications and the need for simple operations.
Good grid-enabled applications (and this includes the storage layer) can run on commodity infrastructure wherever it’s located – in a cloud or close to home – scale as needed, be both reliable and secure, operate themselves, and be far cheaper than apps today.
In reality the argument between clouds and grids / application fabrics can become simply a deployment decision – and that may be the best news of all.
I’d like to propose a simple thought experiment. Consider this question:
What if computing is free?
While we’re at it, assume that scale is always sufficient for the problem at hand, latency is acceptable, your applications always work, and that operations are cheap enough to be in the noise.
What’s the Point?
The point of this is simple enough. One answer to this thought experiment was Google … and that worked out pretty well.
Google would not be possible without commodity infrastructure, and apps that assume that they have (more or less) free, unlimited, access to that infrastructure.
Same for most of web 2.0 – after all, most bigger sites are (very loosely) built around some of the same principles. While there are some notable exceptions (eBay) and many fundamental differences exist, the common meta-trend is that commodity is the right choice for the biggest, gnarliest, most demanding applications.
Now for the Enterprise
Yet that thought has not really begun to penetrate most enterprises. Kind-of commodity may be OK in a fairly stateless web tier, and perhaps for some occasional modeling or research apps, but elsewhere the closest thing is racks of expensive, heavily-managed blade farms.
Those blade farms may help with operations, but since those farms are normally driven from the operations side of the enterprise, they don’t mean much to the apps. Consequently, these farms haven’t done much for scale for most apps.
Plus they’re still expensive.
Of course, they ARE most definitely commodity when compared with the Z-class mainframes that still dominate the batch settlement / customer service operations that are so prevalent in enterprises the world over.
A Financial Services Example
We have a financial services customer who decided to instantiate this thought experiment – they’ve implemented their settlement infrastructure on commodity. Commodity organized by an application fabric (ours!), so that it is reliable, arbitrarily scalable, and very cheap to operate.
The results? They’re matching industry norms for settlement performance on Z-class mainframes with a handful of commodity boxes … and they can keep scaling for a few hundred bucks at a time. Plus it’s reliable, and never gets more expensive to operate.
That will change their industry.
Back to the Thought Experiment
Over the past couple of years I keep running into organization after organization whose existing operations are built on the constraints of expensive, heavy, traditional computing. Constrained by state, constrained by the data tier, constrained by I/O, constrained by budgets … but mostly constrained by human nature, by organizational inertia, by just thinking about the problem the way it’s always been thought about.
Whole industries, for that matter.
Time to change that – ask yourself, what if computing is free?
Sometimes you can try to overload a few too many points into a phrase, and instead of something useful you end up with a kind of 20-thought idea pile-up.
Didn’t mean to – I was just trying to convey a simple point: floating data centers do not address the real problems facing most enterprises today, while grids make it possible to do so. While that is clearly true, my headline did its best to completely obscure the point!
Floating data centers are a novel attempt to help with energy consumption and heat problems, but there’s just too much baggage in the execution.
On the other hand, enabling your applications to run on a self-managing, fully reliable grid of commodity computers (what we call an application fabric) enables very effective improvements in heat dissipation etc., without a whole bunch of serious risk and operational concerns. Deploy the most energy efficient, commodity computers in your own data centers or use a cloud – your choice, both help.
Plus your apps become much more scalable, reliable, flexible … all while being a bunch cheaper to build, deploy, and operate.
A Better Headline
So maybe a better headline would have been something like “Floating Data Centers Miss the Point, Add a Bunch of Risk, and Will Keep You Up At Night; On the Other Hand, Deploying Your Applications on a Grid of Commodity Computers With Appistry’s Application Fabric Will Deliver the Goods”.
But maybe that might have been a bit long-winded …
Maybe the understatement of the year comes from a commenter at datacenterknowledge, who said:
I guess I’d have a few concerns.
To say the least!
As this photo from the company’s brochure shows, the plan is to have “containerized data centers” on deck, with more conventional data centers below decks. The idea is to have them more or less permanently moored at docks (so their marketing picture is a bit misleading), which I suppose would be essential for both power and bandwidth reasons.
In any case, the problems here could be enormous. For starters, I can think of concerns over
- Saltwater-Induced Corrosion.
- Commercial Extortion.
- Drunken Fishermen.
About the only things this scheme would do better are limiting physical access and (perhaps) dissipating heat. For that matter, this is really just a band-aid for the fundamental problems that plague data centers today – energy consumption, heat dissipation, and most often the simple need for more space.
This is no solution for the core problems – it simply masks them with a different (pardon the pun) container.
A Better Plan
The beginning of a real solution is to make the decision to go to a commodity infrastructure, then utilize an application fabric to provide scalability, reliability, and simple operations for the apps and their underlying (and now commoditized) infrastructure.
Then you can select for metrics like capacity-per-watt and / or capacity-for-the-budget, without compromising scale, reliability, or operational integrity in any manner.
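Once box-level failures are a non-event, hardware selection really can reduce to simple ratios like these. A tiny sketch, with made-up numbers purely for illustration:

```python
def capacity_per_watt(capacity, watts):
    # Units of work per watt of power draw.
    return capacity / watts

def capacity_per_dollar(capacity, dollars):
    # Units of work per dollar of purchase price.
    return capacity / dollars

# Hypothetical boxes: a beefy server vs. a cheap, low-power node.
server = {"capacity": 100.0, "watts": 400.0, "dollars": 4000.0}
lego   = {"capacity": 20.0,  "watts": 40.0,  "dollars": 350.0}

server_cpw = capacity_per_watt(server["capacity"], server["watts"])  # 0.25
lego_cpw   = capacity_per_watt(lego["capacity"], lego["watts"])      # 0.5
```

In this (made-up) example the cheap node wins on both metrics – the point being that when the fabric handles reliability, these simple ratios are all that’s left to optimize.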
You can even deploy in a cloud if you’d like.
The point is you’ll have the choice to do what makes the most sense, with no need to pick up a bunch of additional problems from problematic data centers.
One of my favorite parts of the role that Nicholas Carr is playing as an observer of modern computing culture, and a fomenter of useful change, is not so much what he has to say – and I think he says a lot of very insightful, very useful things – but what he triggers other people to say, think, and perhaps do.
At the very least, Carr certainly makes the conversation in our industry far more interesting.
The buzz around The Big Switch started a few months back, but really kicked into high gear just before Christmas. The book was formally released today, so I look forward to reading it soon.
Bernard Golden has a good review up at cio.com. From his review:
Carr argues, computing is moving from company-based data centers to large utility computing infrastructures run by the likes of infrastructure providers (e.g., Amazon and its EC2 offering) and centralized services run by application providers (e.g., Google Applications) …
… IT organizations will be superseded by end user organizations taking computing into their own hands, aided by the availability of centralized utilities and applications …
… The second half of the book goes in a different direction, though. Having described the advantages of centralized computing, Carr begins to methodically outline its drawbacks …
From a recent Q&A with Wired (part of the book buzz) comes this quote:
Wired: When does the big switch from the desktop to the data cloud happen?
Carr: Most people are already there. Young people in particular spend way more time using so-called cloud apps — MySpace, Flickr, Gmail — than running old-fashioned programs on their hard drives. What’s amazing is that this shift from private to public software has happened without us even noticing it.
All of these are pretty good points – not only are they hard to argue with, why would you want to?
The Sound of Inevitability
There is no doubt that clouds have been cutting a wide swath through much of the computing that people really do, and have been for the past ten years or so – quietly until recently, but now as a simple, widely-accepted fact of life.
The Whole Story?
Yet … this is definitely not the whole story for the enterprise.
For those applications that are clearly present in the cloud – salesforce.com being the most obvious current-day enterprise example – there’s no doubt that end user organizations, with or without the cooperation and assistance of their IT organization, will simply roll their own.
Beyond these core services, however, most apps will still be built by somebody and run somewhere. Sure, an app may be a standard package that’s bought and deployed in a cloud, but it may just as easily (and more likely in many cases) be a composite application built out of the best components you can live with, wherever they’re found. In the cloud, in the data center, at somebody’s house for that matter.
Anyplace that can meet the scale and data security needed for that particular app.
The point is that the stuff that runs an enterprise has two main functions – it encapsulates what that enterprise knows how to do (hopefully better than their competitors), and it enables a big chunk of that company’s competitive advantage … and this is true no matter who builds it or where it runs.
That is why it is so important to begin building and deploying apps that are truly indifferent to the number of components and locations of the physical infrastructure, that are very happy with lots of commodity computers, that can just as easily make use of cloud apps and components and proprietary apps, and in any of these combinations will simply work as intended.
If we can do this while making it much simpler to build the app – and we can (and have) – then all the better!