This is part of an ongoing series of posts in which I’ll present ideas developed in Executive’s Guide to Cloud Computing. Hope that you find these useful.
Moore’s Law has long been a friend of all in the computing biz … after all, with no particular direct action on our part, our lives tended to get better, faster, and cheaper … perhaps in fits and starts, but over time the effects are undeniable.
Yet this progress has not been uniform, and those realities are beginning to make themselves known – beyond a shadow of a doubt – in ways that we are only now beginning to appreciate.
One Small Example
For example, it’s been more than five years since the power of an individual CPU basically topped out, leading to the advent of multiple cores. Repeat after me: two, four, six, eight … all of these we do appreciate (sorry, couldn’t help that one much – probably comes from coaching so much baseball and softball!).
Still, while multiple cores certainly solved a problem for the silicon vendors – letting them keep upping the ante – the underlying limitations have some very definite consequences.
For starters, the multi-core phenomenon has really put the pressure on software developers … individual applications have to become sophisticated enough to handle multiple threads of execution well (albeit with the simplification of a physically shared memory). Absolutely anyone who has spent much time writing, debugging, or supporting multi-threaded code can attest to the plain reality that this is significantly harder than the more common single-threaded case.
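To make that concrete, here is a minimal sketch (in Java, with thread counts and names of my own choosing, purely for illustration) of the kind of hazard that makes shared-memory threading hard: two counters incremented from several threads, one a plain field and one an AtomicLong. The plain field silently loses updates; the atomic one does not, and the only difference is a detail that is easy to overlook in a real application.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterRace {
    // Plain field: increments from multiple threads can interleave and be lost.
    static long unsafeCount = 0;
    // Atomic field: each increment is applied as a single indivisible operation.
    static final AtomicLong safeCount = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    unsafeCount++;              // read-modify-write, not atomic
                    safeCount.incrementAndGet();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        // The unsafe total usually comes up short of 400,000; the atomic one is exact.
        System.out.println("unsafe: " + unsafeCount + ", atomic: " + safeCount.get());
    }
}
```

Run it a few times and the “unsafe” total will usually come up short – exactly the sort of intermittent, hard-to-reproduce behavior that makes multi-threaded bugs so expensive.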
But Wait There’s Moore
Turns out that the multi-core folks are basically constrained by their ability to get data onto and away from the physical chip – from the CPUs themselves. In other words, the total I/O bandwidth on / off the chip is a very serious design constraint, one that many very smart folks are struggling against mightily.
Yet that is not the most serious version of this problem facing us today. Our basic inability to move as much data as we want (bandwidth) as quickly as we want (latency) is even more onerous when it manifests at the macro level. Move up several levels of abstraction, from the chip to the cloud … especially clouds constructed out of aggressively commoditized infrastructure.
(For the rest of this discussion we’ll focus on bandwidth, and cover latency effects another time.)
What’s happening at the cloud level is that there is a real mismatch in the rate of improvement in three basic capabilities:
- Processing power (multi-core or otherwise)
- Storage capacity
- Network / interconnect bandwidth
This is true both individually and in aggregate.
In other words, we are not able to move data between increasingly large storage pools and increasingly powerful processing pools quickly enough … and the problem is getting worse, much worse.
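A rough back-of-the-envelope calculation (the numbers here are purely illustrative assumptions, not measurements) shows why this mismatch bites. Suppose an application needs to chew through a 10 TB working set sitting in a storage pool at the other end of a 1 Gbps link:

```java
public class TransferTime {
    public static void main(String[] args) {
        // Illustrative numbers only: a 10 TB working set and a 1 Gbps link.
        double dataTerabytes = 10.0;
        double linkGigabitsPerSecond = 1.0;

        double dataBits = dataTerabytes * 1e12 * 8;             // TB -> bits
        double seconds = dataBits / (linkGigabitsPerSecond * 1e9);

        // Roughly 80,000 seconds, i.e. about 22 hours, ignoring protocol
        // overhead and contention -- before a single byte is processed.
        System.out.printf("~%.1f hours just to move the data%n", seconds / 3600.0);
    }
}
```

Roughly 22 hours go by before the processing pool has even seen the data, no matter how many cores it has – and the gap only widens as storage grows faster than the pipes connecting it to compute.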
@Beaker (Christopher Hoff) brought out the same point in a very good post yesterday. From that post:
… there is an underlying assumption that the networking that powers it (the cloud) is magically and equally as scaleable and that you can just replicate everything you do in big iron networking and security hardware and replace it one-for-one with software in the compute stacks.
The problem is that it isn’t and you can’t.
That, of course, is a great point.
Johnnie Cochran the Geek?
Later in the post he riffs on a great line:
Abstraction has become a distraction.
Funny, except that I’ve drawn a different conclusion from the same facts. Namely, that abstractions (no matter how cool, complete, or sophisticated) still have to run on actual, physical stuff … and that actual, physical stuff matters … a lot. Or said another way:
Abstractions are great, on infrastructure they must wait.
So not as catchy as Beaker’s, but I think it makes the point ok.
The simple reality is that raising the levels of abstraction is essential to creating real cloud-native apps. While that is primarily a job for the cloud app platform, it cannot – must not – ignore the infrastructure on which it operates (physical or virtual).
What Then Can We Do?
So that seems to leave us between the proverbial rock and a hard place – customers demand apps that scale, data and application architectures are morphing to enable an entirely new grade of scale, infrastructure operations are getting way better and thereby enabling scale, access is becoming far more ubiquitous and thereby driving the need for scale, commodity is becoming far more so, thereby enabling … well, do you see the theme? Scale, scale, scale … and the network isn’t keeping up with any of this, and can’t.
It just can’t. Not now, not tomorrow, nowhere into the foreseeable future, and for that matter probably never. Ever.
Yet a solution to this conundrum is beginning to emerge.
What I believe these macro-level forces will lead to is an inevitable merging of the storage and computational pools in many physical infrastructures, and that merging will tend to become the norm rather than the exception.
In other words, SANs … no thank you. NAS … here and there. But for the big stuff a fundamentally new approach, one that is very, very cloud-native, and one to which we are surprisingly close.
In Part 2 I’ll examine at least one way this merger can occur and consider some of the consequences.
Sam Charrington, colleague and friend, spent a few days at the Gartner Application Architecture, Development, and Integration Summit last week. One of the more interesting things about Gartner shows is the analyst briefings. While there is no single place that can definitively capture what’s going on in markets as diverse as those in which we participate, these briefings are a good place to take a snapshot of what things look like today.
Besides, the quality airport time always gives you a chance for reflection, a time to ponder where the market has been, where it is today, and where it’s going. The weather must’ve been bad, because Sam came back very reflective!
When We Started
After we’d incubated a bit, had technology in hand and had started making the rounds of prospective investors, one refrain that we heard over and over again could be summarized something like this:
Why bother? Everything about application development has been settled for good. There’s no room for any more innovation in application development and deployment.
I personally heard this too many times to count. It was almost like a technology version of the French knights in Monty Python and the Holy Grail, as in “we already got one of those!” (about a minute or so into this linked video).
Folks like Massimo Pezzini (a Gartner analyst who covers this sort of thing) and a handful of others didn’t buy into this line of thinking, though at first they thought we might be talking about a new niche, something out at the edge. Massimo coined the term XTP (extreme transaction processing), and has been building on that theme.
At the summit last week Massimo observed that
By 2012, mounting user need for XTP applications and technology innovation will propel at least one new software vendor into leadership in the application platform market with more than 15% market share in the XTP platform segment.
In fact, he recently issued a report entitled The Birth of the Extreme Transaction-Processing Platform: Enabling Service-Oriented Architecture, Events and More. Great report, well worth the read.
The name alone tells the story … big innovation is here. Sam brought all this back from the summit, and that got me thinking: why is all of this happening now?
That’s a really, really good question. I think there are a number of factors, but two really stick out to me:
- Scale. Most of the status quo architectures just can’t keep up with where conditions are driving the enterprise. Call it a fire hose, customer demand, web-scale, or competition – in fact, call it what you like – the simple reality is that in the early days of the third millennium, enterprises (both new and old) need their computing infrastructures to scale. Scale really, really well, and do it simply, reliably, and cheaply.
- Desire for Simplicity. While a certain amount of complexity may be inevitable, anyone who is deeply involved in writing, deploying, or operating applications today knows that there are just too many moving parts, they’re too hard to move and arrange as needed, and they simply don’t work well enough. Stuff breaks when it shouldn’t, and it’s simply too hard for enterprises to keep it all running the way they need it to.
I think the desires for these had been forming for some time, but the rise to dominance of such high-profile players as Google, Amazon, and a few others has shown that new rules are possible. That maybe, just maybe, it might be possible for an enterprise to conceive of its application and computing infrastructures scaling as needed, working reliably, being simple to operate, and deployable on commodity infrastructure.
While the rise of the whole virtualization industry is a partial answer to the “desire for simplicity”, it’s not the whole story, not by any means. In fact, it is the inability of the existing players (you know who you are!) to shake the chains that bind them that has opened the doors for new players.
This is now cold, sober reality. Simple, reliable, easy to operate scale on commodity infrastructure. Here today, in production at serious enterprises, in the core of their operations – where it matters.
I think the next 6, 12, 18 months are going to be very exciting times indeed. As Massimo indicates, I too think we are contributing to the birth of a new platform.
While there may be others who eventually make it, we are driving hard to continue earning the trust of our customers, partners, and the community so that we are the first “new software vendor” who takes a “leadership position … with more than 15% market share”.
So that’s a reasonable next step, but more is definitely possible. Much more.
One of the often-repeated baseball truisms is that “you can never have too much pitching”. Even if you don’t know anything about baseball, you can tell that this is true just by searching on that phrase and seeing what comes up. Go ahead: I’ve made it easy!
(For the non-baseball folks out there: Bob Gibson is one of the absolute all-time greats, a pitcher’s pitcher … every baseball team that ever was or ever will be would love to have Mr. Gibson on their team.)
Simplicity Really Matters
In the world of scalable applications there is a rule above all rules – simplicity really matters. Or in tribute to the tattered, yet still great game of baseball, “you can never have too much simplicity”.
You can say this many different ways, but the reality is that in order to really build scalable systems we must strive for the simplest abstractions possible.
For a minute I thought I was reading one of our new marketing pieces (I wasn’t) … Nikita Ivanov seems to be all over the “scalability simplified” theme. Of course I agree with his basic point, but there’s more to the story.
Making It Real
Even Ivanov’s jab at Nati Shalom illustrated an underlying reality, ignored all too often – enabling a simple world can be complicated. Of course, any such complexity needs to support an elegantly simple abstraction, such as the one we present. The problems arise when that complexity is exposed, as it is in the vast majority of computing architectures.
In any case, just arguing for development simplicity (while commendable) isn’t enough. After all, somebody has to deploy and operate what you build.
The Whole Story
So yes, we must deliver simplicity to the developer … that is a key for enabling scalable applications. But don’t forget the other two legs of this stool:
- Operational Simplicity. The biggest fabrics (or grids) absolutely must be at least as simple to operate as a single server … no matter how big they get.
- Reliability. A fabric must be able to simply ensure the reliability of each operation – this is crucial for being able to rely on commodity infrastructure.
Take these together (development simplicity, operational simplicity, and reliability) and you have an approach that’s meaningful. That is exactly what people are discovering with application fabrics.
Go ‘git me some of that simplicity!
Michael Krigsman has been humming the "simplicity is good / complexity is … not so good" tune lately – check out this and this – and it sounds pretty good to me. While his focus is primarily on self-induced organizational complexity, I think the same exact points apply to architectural complexity.
In fact, this is how I tend to describe the abstraction presented by an application fabric:
each service and application scales as needed, always works as expected, and manages itself.
Makes sense, doesn’t it?
In practice, the only people who actually like architectural complexity either
- don’t think they have any choice,
- don’t have to code for or operate the resulting apps,
- are so far down into the bowels of the existing, uber-complex architectures that they’ve forgotten that there is a world above-ground that is their natural home, or
- are just trying to show off.
While Todd Fast of Sun makes an interesting point against a sort of “false simplicity”, I think that is really a different issue and a bit of a red herring (which I’ll take up again later).
For my part, I’ll choose architectural simplicity each and every time!
Note: the image is from the cover of a great book, "The Evidential Power of Beauty" by Thomas Dubay, which explores the meaning of simplicity and beauty in the physical world.
Two good mentions in this week’s edition of GRIDtoday.
First, from Editor Derek Harris’ weekly overview article comes these great comments:
… (the) dichotomy of uses illustrates the beauty of companies like Appistry who offer everything necessary to handle Web-scale and highly transactional applications, as well as the ability to handle compute-intensive applications …
the potential market is broad.
We certainly agree with that assessment!
Derek also wrote a complete article in this week’s issue discussing a GeoEye application … definitely worth a read.
It might be fair to wonder whether I’m crazy – and, well, maybe I am … but I don’t think so.
First let me say what I mean by “run its course”. By this I obviously do not mean that there is no possibility (or even probability) of increasing revenues for the traditional virtualization vendors selling traditional virtualization technology – I think these revenues will continue to grow, probably even dramatically (see Larry Dignan’s post for some thoughts on this). After all, there are many more places where traditional virtualization technology can still be applied, and plenty of that work is yet to be done. Nor do I mean that there are not additional products that will be possible around traditional virtualization technologies – there will be.
What then, do I mean?
First a few definitions. Probably the definition I like the best for virtualization is this:
Virtualization is the abstraction of some entity from the supporting physical infrastructure.
With that in mind, in its various forms traditional virtualization has been very helpful in abstracting things like instances of operating systems from the hardware on which they run. This has led to two very helpful use cases:
- Running multiple operating systems on a single physical server (helpful in server consolidation and compatibility emulation … in fact, I use this sort of virtualization every day to run both OS X and Windows XP on my MacBook Pro laptop), and
- Rapidly provisioning particular, pre-configured operating system / application stack combinations on banks of bare-metal servers. This latter form might even be the biggest growth area, and has already led to service providers beginning to offer naked-VM services (such as Amazon EC2, among others).
All of these forms of virtualization are about making one physical machine appear to be multiple virtual entities (one to many). While this is a very good thing, and helps with many operational problems faced by enterprises every day of the week, it only goes so far. In fact, it almost raises more questions than it answers.
How do I manage this VM sprawl? How do I know how this is all performing? How do I know what level of service my actual customers are receiving from their actual applications? How do I ensure that these applications are reliable and able to scale? For that matter, how do I create applications that can scale as needed, that can make use of as many resources (both virtual and physical) as I want to give them? How can I trust them to do what we need done, when we need it done?
Where We Must Go
Now we’re getting to what I mean.
Traditional virtualization is not able to address most of these questions well because it does not deal with the application directly. Only a layer which enables application virtualization will be able to provide the answers to all of these questions (and more) raised by traditional virtualization.
Application fabrics provide a simple abstraction for the application developer. With an application fabric,
Each application (service) scales as needed, always works as expected, and manages itself.
Notice that application virtualization must (for all but the simplest applications) enable an application to work across many independent resources (be they virtual or physical) without drama, without complication … it just needs to be true. True for fine-grained, transactional apps as well as coarse-grained analytical apps. Time-sensitive and batch-oriented. Data-intensive and computationally intensive. C, C++, Java, and .Net. Spring and plain POJOs.
(As a quick aside, please beware of those approaches (such as traditional grids) that leave this problem to the developer. Honestly, who do they think they’re kidding?)
So this is the essence of application virtualization – applications that can automatically consume whatever resources are needed and available, always work as expected, and manage themselves. Done as an intrinsic part of the abstraction, so it is very simple for the developer.
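As a concrete (if deliberately tiny) illustration of what that abstraction looks like from the developer’s chair, here is a sketch in Java. To be clear, the class and names below are hypothetical and are not Appistry’s actual EAF API; the point is simply that the service itself contains no threading, clustering, or failover code.

```java
// Hypothetical sketch for illustration only -- these names are NOT Appistry's
// actual EAF API; they simply show the shape of the abstraction described above.
public class PriceQuoteService {

    // A plain POJO method: no thread management, no clustering, no retry logic.
    // In the application-virtualization model, the fabric runtime -- not the
    // developer -- decides how many instances run, where they run, and how a
    // failed request gets retried on another machine.
    public double price(String orderId, int quantity) {
        return quantity * 19.99;   // stand-in business logic
    }
}
```

Everything about how many copies of this service run, where they run, and what happens when a machine dies belongs to the fabric, not to this class – that is what “done as an intrinsic part of the abstraction” means in practice.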
Traditional virtualization gives us helpful building blocks, but unfortunately it can only take us so far. To build on those blocks and move beyond the limits of traditional virtualization, true application virtualization is an absolute necessity.
That is what we do at Appistry. That is the point of EAF (the Enterprise Application Fabric).
Welcome to our world!