For the past year I’ve been having a blast helping a friend of mine with a restart – a 50-year-old company that should be data intensive by its very nature, but has been constrained by an increasingly antiquated app foundation.
So we decided to put the company “up on blocks”, build a fantastic new application foundation – think entirely shiny new, modern, APIs everywhere, web-centric, big data … and all cloud-enabled, of course!
We put together a small team and have been moving fast. The work is beginning to bear fruit, and I’ll be posting more as all this goodness is unveiled.
Been learning a bit along the way, and have been greatly encouraged to see that this cloud-centricity all of us have been advocating, building, toiling, and sweating over for so long works so very well in practice. No surprise in the fact that it all works, of course, but definitely a little surprise at just how compelling the economics are in practice.
It’s one thing to help others build their apps and change their companies, and another to do so in a more personal way.
Been grateful for the opportunity, and looking forward to reflecting on the lessons learned even as we press the advantage.
Overnight there were many widespread reports that Egypt has severed most of its ties to the Internet (WSJ, CNN, Telegraph) and has also disabled SMS (text) services. What is not clear at this time is how stable internal communications are – mobile, landline, or net.
While isolating populations and breaking internal communications is probably step 4 in the “Totalitarian Handbook: How to Crush the Opposition”, this has not been tried at this scale before. My guess is that it will have precisely the opposite effect from the one intended, mostly because the action itself is far more disruptive than anyone realizes at this time – disruptive to the economy, to the normal interactions of the residents, and, for that matter, even to the actions of the government itself.
I’ll leave the considered analysis of the sociological, political, economic, and yes religious dynamics mostly to others who are undoubtedly far more qualified to think this one through … but I do want to begin to consider what impact cutting off an entire country from the net might have on that country.
In graduate school I remember attending a seminar in which the speaker (whose name eludes me right now), who was working for the Federal Reserve Bank in San Francisco at the time, told us how the US had actually implemented the freeze of Iranian funds during the then-recent hostage crisis.
The decision to freeze the funds was made in late 1979, so it turns out that after receiving a middle-of-the-night call to freeze all Iranian funds (about $8 billion), he simply drove down to the SF Fed, walked over to the physical machine that was used as a gateway for all large funds transfers (a PDP-11/70, if I remember correctly), and used a bit of tape to stick a handwritten note on the console saying something like “don’t release any Iranian funds”.
Apparently he used pretty decent tape, because that money stayed most-definitely out of Iranian hands.
While this arguably aggravated daily life in Iran, it generally functioned as intended – it caused pressure on the target country as a whole, yet did not directly impact most people within that country, at least not in a clearly discernible manner. This was one nation to another, and when considered in that context was actually a fairly targeted, precise instrument.
By all accounts Egypt is in real turmoil right now, to say the least. While it may have seemed to make sense to the present government to cut off the Internet and the ability of people to text each other, I think that the impact of the unavoidable other stuff that will happen as a result of this action will end up making the situation far worse.
Bad enough, I think, to perhaps be the exact action that pushes the population over the edge and topples this government.
Here are a few of the unintended / at least unavoidable reasons why (in no particular order):
- External Economic Disruption. Everything from the banks to the stock markets to all sorts of companies and individuals needs to reliably communicate with counterparts around the world to function. Breaking those links will most likely hurt the Egyptian economy in ways that are hard to anticipate but likely significant.
- Internal Economic Disruption. It’s probably pretty safe to assume that much of the internal Egyptian economy will grind to a halt. Yes, part of that is because of the riots themselves, but a much larger part will be because businesses can’t communicate with their customers or each other. Pretty simple, really.
- Partial Isolation Externally. Cutting off much of the external net access will certainly create a sense of isolation; however, some flow of information will continue. That flow will spread old-school – imperfectly and rather slowly by comparison – yet it will spread.
- Government Disruption. This is pretty mundane but probably non-trivial – the actions of the government across a large country will probably be impeded by the lack of the very tools that are targeted here. While there are probably official communication channels that remain functioning, much of the day-to-day, informal business of government is probably done the way people conduct most of their lives – and that is broken now.
Too Late, We Know How to Talk With Each Other
Perhaps if Egyptians had never become accustomed to Facebook, YouTube, Twitter, email, and every other aspect of the relatively open, functional net culture and economy (or at least one that is perceived to be fairly functional – China’s net is probably far more controlled than most of us think, though in a much more precise, subtle manner), then these actions would have much less impact.
In other words, I doubt if North Korea cutting off international internet access (such as it is) would have much of an impact on that peculiar, sad country.
But Egyptians have, like most of the world, come to expect and rely upon pervasive, fairly reliable, net services for many aspects of daily life.
Take them away and people will notice – the economy suffers, society decays, people’s lives diminish. While technology is not causing the problems that Egypt is struggling with, disrupting that technology will certainly exacerbate those problems.
Let us keep the people of Egypt in our thoughts and prayers as they move through this time of extreme uncertainty and sorrow.
This is the first of a series of excerpts from The Executive’s Guide to Cloud Computing (Wiley, 2010; available in hardcover and Kindle editions), a book that I recently co-authored with Eric Marks. In particular, this series will focus on the reasons why the transition to cloud computing is simply inevitable. The excerpts themselves are slightly edited to better fit this format. Enjoy!
There have been very few fundamental changes in computing.
On the surface, that may sound like the statement of a madman, or perhaps at least someone from an alternate universe. Nonetheless, it is true.
Sure, there has been, is, and will likely continue to be a nearly incomprehensible fire hose of particular changes, some rather flashy in and of themselves. Simple things like pocket-sized flash drives that store more than the corporate mainframes of 30 years ago, or ubiquitous mobile devices for everything from the mundanely practical—e-mail, calendars, and contacts—to the cheerfully sublime. Much more complex developments such as the open source movement; the advent of relational databases; and the rise (and fall) of whole operating systems and their surrounding ecosystems, even those whose perpetual dominance once seemed assured (how many desktop machines are running CP/M these days?). These have come and gone, perhaps lingering in some niche, forgotten by all but a few fanatical devotees.
But truly fundamental change—the tectonic shift that literally changes our landscape—happens only once in a long while, perhaps every ten or more years, even in the computing business. Fundamental change of this magnitude requires a number of smaller innovations to pile up until a true nexus is reached, and we all start marching down a different road.
Of course, as historians are fond of lecturing the rest of us mere mortals, these sorts of fundamental changes are nearly impossible to recognize while we are in the middle of them, even as they loom imminently.
When researchers at the University of Pennsylvania were feverishly working on ENIAC—generally recognized as the first programmable, general-purpose electronic computer—as the future of the world hung in the balance in the midst of World War II, do you think they envisioned computers embedded in nearly everything, from greeting cards to automobiles, from microwaves to MRIs? When researchers at the University of California, Los Angeles, and elsewhere in the midst of the Cold War strove to make computer networks more resilient in the face of nuclear attack, do you think any of them envisioned the Internet as we see it today? Likewise, when Tim Berners-Lee and other researchers at CERN were trying to come up with an easy way to create and display content over this new, literally nuclear-grade network, do you think they envisioned the impact on everyday life (both personal and professional) their new creation would have, or even the simple breadth and depth of stuff—from the sublime to the silly—that would be available on this new, supercharged “Internet”? One estimate is that there are more than 500 exabytes—that’s 500 billion gigabytes—in this “digital universe,” and that this will double every 18 months.
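The growth rate in that estimate is easy to underappreciate. A quick sketch of the arithmetic (the 500 EB starting point and 18-month doubling period are the excerpt’s numbers; the function and the decade horizon are just illustration):

```python
# Back-of-envelope: if the "digital universe" is 500 exabytes and doubles
# every 18 months, how big does it get over a decade?
def projected_size(initial_eb: float, months: int, doubling_months: float = 18) -> float:
    """Size in exabytes after `months`, doubling every `doubling_months`."""
    return initial_eb * 2 ** (months / doubling_months)

decade = projected_size(500, 120)  # 120 months = 10 years
print(f"{decade:,.0f} EB")  # roughly 50,800 EB -- about 100x in ten years
```

The point is not the precise figure, but that a constant doubling period compounds into roughly two orders of magnitude per decade.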
The simple truth is that very few, if any, of the people involved in these developments had much of an idea of the consequences of their creations, of the impact on our personal lives, our culture, even the society in which we live—from how we interact with our families to how we conduct business.
Whether you are “technologically modest,” or whether by age or temperament you are not ashamed to let it be known, at least in certain circles, that you are a bit of a geek . . . either way, it is pretty much a given that developments in computing are having a big impact on our society and, more to the point, an even bigger impact on how we conduct our business.
And bigger changes—tectonic-shift scale changes—will have at least commensurate impact on our lives in every dimension, including the fields of commerce. One example, perhaps a seemingly simple one, yet central to many of the changes now underway, will suffice to illustrate this point.
An Example for All to See
Consider for a moment newspapers. We now face the very real prospect—actually the near-certainty—of at least one (and probably many) major metropolitan area in the United States without a traditional (local, general purpose, print, widely circulated) newspaper. While this eventuality may be stayed—perhaps for quite some time—via government intervention, the fact that this will eventually occur is not in doubt. In a culture still echoing with such reporteresque icons as Clark Kent, or at least the more prosaic Bernstein and Woodward, this was once unthinkable. Now it is simply inevitable.
There was a time when the technology of newspapers—cheap newsprint (paper), high volume printing presses, delivery networks including everything from trucks to kids on bicycles—was the only reasonable means for mass distribution of information. In fact, with help from some of the newer technologies there was even a new national newspaper (USA Today) founded in the United States as late as 1982. But with the advent of alternative delivery channels—first radio, then broadcast, cable, and satellite television—increasing amounts of pressure were put on the newspapers.
The immediacy of the newer channels led to the widespread death of afternoon newspapers in most markets; anything delivered to the dinner table in a physical paper was hopelessly out of date with the evening news on television or radio. The morning papers had the advantage of broad coverage collected while most people slept, and as a result have held on longer.
However, at the same time intrinsic limitations of the newer technologies made them better for certain types of information, though not as useful for others. For example, a two-minute video from a war zone could convey the brutal reality of combat far more effectively than reams of newsprint, but did little to describe the complex strategic elements—political, economic, cultural—of the conflict itself. As a result, a certain stasis had been reached in which newspapers carved out what appeared to be a sustainable role in the delivery of news.
Then came the Internet.
In particular, the effectively free and ubiquitous—and yes, near-instantaneous—delivery of all sorts of information mortally wounded the newspaper business. As the first round of the web ecosystem grew, the only remaining stronghold of the traditional newspapers—their ad-based revenue model—was made largely irrelevant. eBay, Craigslist, and freecycle (among others) replaced the classifieds, and online ads took out most of what was left.
Some newspapers will undoubtedly manage the transition in some manner or another, perhaps even emerging as something fairly recognizable—particularly national/international properties such as the Wall Street Journal and the previously mentioned USA Today—and perhaps even financially sound.
But those that do will largely do so without their original distribution technologies, and, more important, many will not make the transition at all.
What Happens Next
All of this upheaval in news delivery—the enormous changes that have already occurred and those yet to come—has been enabled by developments in computing technologies, with the widespread adoption of everything from the Internet to the iPhone. It is probably worth remembering that all of this has occurred largely without cloud computing; as a result we are probably less than 10% of the way through this transition in news delivery. And this is only one industry. One industry, one example, with entire economies yet to transform.
Even so, some things have not changed much, even in the delivery of news. The computing infrastructures range from the stodgy (server-, even mainframe-based systems within many newspapers) to circa-2009 state of the art (which we might as well start referring to as “legacy web,” web 2.0, old-school web, something like that). By and large these systems still cost too much to acquire, do not adapt to changes in demand nearly easily enough, are not reliable enough, and remain way too complex and costly to operate. Even the few systems that do not suffer from all of these problems are not ideal, to say the least: Some are proprietary, and most are either too complex to create new application software for, or simply do not scale well enough, at least for the sort of software that researchers are hard at work developing. In particular, while the first generation of electronic news infrastructures focused on just delivering the news, the next generation will be focused on sifting through all of that content, looking for just the right stuff.
All of that sifting and sorting and searching will take orders of magnitude more computing capacity than we have anywhere today. How will we pay for hundreds, thousands, perhaps even tens of thousands of times more servers and storage than we have today—almost unimaginable quantities of computing? How will we operate them? Write new software for them? It is fair to wonder how we will even power all that gear. Assuming that all of these concerns are resolved, then we will face a larger question still, one which we presume has many answers: What sort of business models are enabled by all this, and how do we get there?
This Scarcely Seems Possible
Before we leave this example, it is probably worth considering our present circumstances just a bit more. In particular, most of the history of both economics and engineering can be understood by thinking about managing scarcity. In other words, how do I get the most done with the least stuff, or within certain limits? That underlying drive to deal with scarcity is, at its core, what drives the startup team to work harder and pay less, the Fortune 500 enterprise to optimize manufacturing processes, and entire nations to set energy policies. Allocating scarcity is just Economics 101. Of course, it is also Engineering 101. Dealing with scarcity causes engineers to develop better video compression schemes, improve CPU designs to get more done in the same amount of time, and even rethink server packaging to reduce power consumption and labor costs.
While scarcity may be the nemesis of some, it is quite literally a prime mover behind the developments that have together come to be known as cloud computing. What does this mean, and how can it be possible?
Copyright © 2010 Eric A. Marks and Roberto R. Lozano.
In the next installment we’ll look at the underlying technological flow, and how that has made cloud computing possible. If you like what you’ve seen, keep in mind that the book is available (hardcover, Kindle) now!
This is a part of an ongoing series in which themes from The Executive’s Guide to Cloud Computing are illustrated by events in our collective transition to “all things cloud”.
Hardly a week passes without buzz about at least one – usually more – upcoming, uber-cool new mobile device, excitement over one that is coming out right now, or collective digestion of the one that was hot just before. It seems like product half-lives are down to a month or two (particularly in the Android market); whether Apple can stick to an annual refresh for the iPhone is certainly debatable, but that’s a discussion for another day.
Much of this excitement is justified and easily understood … after all, who doesn’t like having what seems like a couple of billion pixels dancing in front of your eyes, or the latest <insert feature here>?
Your current handset preference is mostly immaterial to anyone else – from the macro perspective the rapid penetration of this class of handsets (any OS, from wherever) is a clear enabler for media and other services consumption, and therefore (of course) for their creation.
Still, this is only a small part of the larger picture.
Platforms, Infrastructure, and Contribution
In a similar manner the movement in the ecosystem of cloud services and technologies is both rapid and continual – from pure-play public infrastructures, to private infrastructure enablement, to the emergence of strong cloud application platforms as a key for any serious enterprise strategy – the breadth and depth of progress is indeed meaningful and encouraging.
Much of this was on display at Structure last week … a real sense of inevitability is growing here as well.
For that matter the new thinking of the other-than-relational data store world may ultimately be more individually significant than any of the rest of this … that is why, a couple of years ago, we (at Appistry) began extending our platforms into reliable, commodity-based storage (tightly integrated with the larger platform, of course). Conference after conference highlights newer data and app frameworks for big data. For that matter, it’s hard to enter into many conversations about actual, intensely-scalable apps without quickly focusing on serious data issues.
Still, this too is only a part of the larger picture.
Fundamental changes in how cloud-based stuff is built, distributed, monetized, and otherwise sliced and diced have been, in many ways, some of the least anticipated yet most crucial features of the broad transition to cloud. For example, entire ecosystems driven, at least in part, by fine-grained advertising long ago passed from novel to given.
In another example, consider the rise of relatively controlled, device-specific application stores. Early last week Tomi Ahonen wrote a long, well-researched post which makes the case that device-specific app stores (and their apps) offer intrinsically bad economics for developers.
… don’t invest in it (apps and app stores) today.. Put your creativity and investment into the real money opportunities, remember Pop Idol simple SMS votes earning half a billion dollars in USA this year alone..
Upon first read this conclusion didn’t seem right, and upon second read it bugged me a bit more. Andrew Odewahn nailed what was bugging me in a good bit of analysis. From that post:
What the App Store did brilliantly is create a marketplace that anyone with the appropriate skills can enter. The development tools are free, the membership dues are cheap, and Apple’s 30 percent take seems pretty reasonable when you consider the frictionless access to a global marketplace they’re providing.
While this still is not the whole story, now we’re getting much closer to a more complete picture.
Cloud and Confluence
When embroiled in the chaos of any one of these it’s easy to lose sight of the meta-shifts that are in progress, and in particular what they mean collectively.
What is most significant in the ongoing transition to cloud is not any one of these changes – be they new infrastructure, pay-as-you-go models, eventual consistency data models, or any of thousands of other items, meaningful as they may be individually – but rather, the confluence of all of them.
It is the confluence of all of these changes that is, together, enabling an entirely new possibility for computing. From the Exec’s Guide:
Computing—computation, storage, communication—is relatively free, scales up or down as needed, scales as much as needed, operates itself, and always works.
which then leads to a basic shift in what is possible for the organizations that embrace “all things cloud”. In particular, this shift enables …
A reality in which the organization can be largely freed from the traditional constraints that computing has placed on all for so long—constraints based on the cost, availability, capabilities, and the difficulties of using computing-enabled stuff.
and that is a reality worth making real.
I’m really excited to be kicking off a four-part webinar series generally themed around Executive’s Guide to Cloud Computing.
Today’s webinar starts at 2 pm EDT / 11 am PDT, and we’ll be posting a recording soon after the completion.
There has been a significant amount of exciting news brewing in our world, and I can’t wait to share some of it in the days, weeks, and months to come. So as a bit of an initial down payment, we’re going to go have some fun on this first webinar in the new series.
We’d be honored to have you join us!
This is part of an ongoing series of posts in which I’ll present ideas developed in Executive’s Guide to Cloud Computing. Hope that you find these useful.
Moore’s Law has long been a friend of all in the computing biz … after all, with no particular direct action by most of us all of our lives tended to get better, faster, and cheaper … perhaps in fits and starts, but over time the effects are undeniable.
Yet this progress has not been uniform, and now those realities are beginning to make themselves known – beyond a shadow of doubt – in ways that we are only now beginning to appreciate.
One Small Example
For example, it’s been more than five years since the power of an individual CPU basically topped out, leading to the advent of multiple cores. Repeat after me: two, four, six, eight … all of these we do appreciate (sorry, couldn’t help that one much – probably comes from coaching so much baseball and softball!).
Still, while multiple cores certainly solved a problem for the silicon vendors – letting them keep upping the ante – there are some very definite consequences to the underlying limitations.
For starters the multi-core phenomenon has really put the pressure on software developers … individual applications have to become sophisticated enough to handle multiple threads of execution well (albeit with the simplification of a physically shared memory). Absolutely anyone who has spent much time writing, debugging, or supporting multi-threaded code can attest to the plain reality that this is significantly harder than the more common single-threaded approach.
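As a minimal illustration of why multi-threaded code is harder (a generic Python sketch, not tied to any particular product mentioned here): even incrementing a shared counter from several threads is unsafe unless every access is explicitly coordinated.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Add 1 to the shared counter n times, holding the lock each time."""
    global counter
    for _ in range(n):
        with lock:          # remove this lock and the final count becomes
            counter += 1    # nondeterministic -- some increments get lost

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- correct only because of the lock
```

The single-threaded version needs none of this machinery; the multi-threaded version silently corrupts its own state the moment the coordination is wrong, which is exactly the class of bug that makes this style of programming so much harder.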
But Wait There’s Moore
Turns out that the multi-core folks are basically constrained by their ability to get data onto and away from the physical chip – from the CPUs themselves. In other words, the total I/O bandwidth on / off the chip is a very serious design constraint, one against which many very smart folks are struggling mightily.
Yet that is not the most serious version of this problem facing us today. Our basic inability to move as much data as we want (bandwidth) as quickly as we want (latency) is even more onerous as manifest at a macro level. In particular, move up several levels of abstraction from the chip to the cloud … particularly clouds constructed out of aggressively commoditized infrastructures.
(For the rest of this discussion we’ll focus on bandwidth, and cover latency effects another time.)
What’s happening at the cloud level is that there is a real mismatch in the rate of improvement in three basic capabilities:
- Processing power (multi-core or otherwise)
- Storage capacity
- Network / interconnect bandwidth
This is true both individually and in aggregate.
In other words, we are not able to get data from increasingly large storage pools to and from increasingly powerful processing pools quickly enough … and the problem is getting worse, much worse.
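A rough back-of-envelope calculation makes the mismatch concrete. The pool size and link speed below are assumptions chosen purely for illustration, not measurements of any particular system:

```python
def transfer_hours(data_tb: float, link_gbps: float) -> float:
    """Hours needed to move data_tb terabytes over a link_gbps gigabit/s link."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 3600

# Draining a modest 100 TB storage pool over a fully saturated 10 Gb/s link:
print(f"{transfer_hours(100, 10):.1f} hours")  # ~22.2 hours
```

Double the storage pool and the drain time doubles. Since storage and processing capacities have historically grown much faster than the links between them, this gap widens over time, which is precisely the problem described above.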
@Beaker (Christopher Hoff) brought out the same point in a very good post yesterday. From that post:
… there is an underlying assumption that the networking that powers it (the cloud) is magically and equally as scaleable and that you can just replicate everything you do in big iron networking and security hardware and replace it one-for-one with software in the compute stacks.
The problem is that it isn’t and you can’t.
That, of course, is a great point.
Johnnie Cochran the Geek?
Later in the post he riffs a great line:
Abstraction has become a distraction.
Funny, except that I’ve drawn a different conclusion from the same facts. Namely, that abstractions (no matter how cool, complete, or sophisticated) still have to run on actual, physical stuff … and that actual, physical stuff matters … a lot. Or said another way
Abstractions are great, on infrastructure they must wait.
Not as catchy as Beaker’s, but I think it makes the point OK.
The simple reality is that raising the levels of abstraction is essential to creating real cloud-native apps. While that is primarily a job for the cloud app platform, it cannot – must not – ignore the infrastructure on which it operates (physical or virtual).
What Then Can We Do?
So that seems to leave us between the proverbial rock and a hard place – customers demand apps that scale, data and application architectures are morphing to enable an entirely new grade of scale, infrastructure operations are getting way better and thereby enabling scale, access is becoming far more ubiquitous thereby driving the need for scale, commodity is becoming far more so which thereby enables … well, do you see the theme? Scale, scale, scale … and the network isn’t keeping up with any of this, and can’t.
It just can’t. Not now, not tomorrow, nowhere into the foreseeable future, and for that matter probably never. Ever.
Yet a solution to this conundrum is beginning to emerge.
I believe that these macro-level forces will lead to an inevitable merging of the storage and computational pools in many physical infrastructures, and that merging will tend to become the norm rather than the exception.
In other words, SANs … no thank you. NAS … here and there. But for the big stuff a fundamentally new approach, one that is very, very cloud-native, and one to which we are surprisingly close.
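One way to picture what merging the storage and computational pools buys you (a toy Python sketch; the node names and shard contents are invented purely for illustration): send the computation to where the data lives, and move only the small per-node results across the network.

```python
# Each "node" holds a local shard of a large dataset. Rather than dragging
# the raw shards across the network to a central compute tier, each node
# runs the computation locally and ships back only a tiny summary.
shards = {
    "node-1": [3, 1, 4, 1, 5],
    "node-2": [9, 2, 6, 5, 3],
    "node-3": [5, 8, 9, 7, 9],
}

def run_local(shard: list[int]) -> tuple[int, int]:
    """The step executed where the data lives: (count, partial sum)."""
    return len(shard), sum(shard)

# Only small (count, sum) tuples cross the "network", never the raw shards.
partials = [run_local(data) for data in shards.values()]
total_count = sum(c for c, _ in partials)
total_sum = sum(s for _, s in partials)

print(total_sum / total_count)  # the global mean, computed without moving the data
```

The network traffic here is proportional to the number of nodes, not the size of the data, which is why co-locating storage and compute sidesteps the bandwidth wall described above.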
In Part 2 I’ll examine at least one way this merger can occur and consider some of the consequences.
Transformations of supply channels are never pretty – emotions and corporate carnage everywhere, a few new winners, and, when it all settles out, eventually a more productive and efficient marketplace.
Since I spent much of the second half of last year writing my first book, as you can well imagine I’ve been paying particular attention to the morphing of all things publishing, with a particular focus on these things we have traditionally called “books” – will such a thing remain, how will they be distributed, how to construct them, and so on, all culminating in the ultimate question – how best to deliver something plenty of folks will actually find valuable?
This book was done in what may soon be the (mostly) last vestiges of the old-school model, at least circa late 2009 / early 2010 – a traditional publisher (Wiley – who have been great to work with, by the way), a traditional distribution and retailing model (with Amazon playing a prominent, though by no means exclusive, part in each of these two steps), and an expected ebook edition (by Amazon for the Kindle) four to six weeks after the paper book appears.
What is interesting is that in this model (pre-iPad, more on that in a moment) Amazon has a significant amount of control over the ebook distribution channel, and of course they have been working very hard to gain more. They really stepped up their pace recently, with presumably pre-emptive moves to sweeten the pot for authors and open up the Kindle for apps in the week+ before the iPad announcement.
Apple Enters the Fray
In a peculiarly ironic turn of events I received my “page proofs” (the near-final edition of a book, with just about final formatting, content, and so forth all in place for one last review) on the very day that Apple placed a big foot right in the middle of this fight, or as the infamous Nuke LaLoosh said in Bull Durham (my second favorite baseball movie), Apple “… announced their presence with authority”.
Amazon, of course, did not stand still. In fact, they responded very forcefully by pulling all titles from Macmillan (a big publisher who is one of Apple’s launch partners), going so far as to remove all references to these now-banned Macmillan titles from all Kindle users’ wish lists, in addition to removing sample chapters from these titles from all Kindles. That’s right, Amazon reached out and de-licensed, removed, whacked, destroyed – you pick the term – content that it had either agreed to maintain on a customer’s behalf or had previously delivered to that same customer.
Charles Stross (a well known science fiction author with some titles published by Tor, a Macmillan subsidiary) clearly describes the battlefield, the combatants, and what’s at stake in an excellent post – well worth the read.
As Stross points out:
This whole mess is basically about duelling supply chain models.
…It’s interesting to note that unlike the music industry who had to be pushed, the big publishers seem to be willing to grab a passing lifeline.
… But Amazon, in declaring war on Macmillan in this underhand way, have screwed me, and I tend to take that personally, because they didn’t need to do that.
To his last point, Amazon (in this battle with Apple) really angered many content creators (in this case authors) … a losing proposition when one is in the business of distributing content.
But I think there is an even more important point here, one which already underlies, and will no doubt continue to underlie, many disputes as cloud computing continues its inexorable path towards dominance.
The Underlying Question of Trust
This point is very simple, really.
In their fight with Macmillan and Apple, Amazon crossed two boundaries with their customers that should never have been crossed.
In particular, they 1) deleted content that had already been delivered to the customer and 2) deleted content that they were storing for their customers (the wish lists). Hard to say which was worse (neither was good), but the wish-list content is something Amazon encouraged customers to create and is part of the customer experience – part of why someone would presumably intend to stay with Amazon and the Kindle for content.
- Amazon has been here before – remember the incident last summer when they remotely deleted Orwell titles from customers’ Kindles? – and really should have learned their lesson.
- Irony of ironies, this sort of death-grip control is precisely why Apple has received so much criticism over the iPhone app store. Yet even when Apple has removed applications from the app store, I’m pretty sure they did not have the temerity to remove apps that had already been purchased by end customers.
- Even though these examples may be minor and may even be reversible, any time a provider breaks an implicit or explicit contract – no matter how minor, and particularly when this is a remote, seemingly abstract cloud provider – it is a very pointed reminder that the end customer is at the eternal whim of the provider, and most definitely is not in control.
- Customers do not like being reminded that they are not in control, and will gravitate towards service providers who understand this fact and behave accordingly.
- Deliberate removal of content is different from, and arguably more toxic than, inadvertent removal of content (as from operational or technical failures).
One last thought – once a reputation for capricious removal of a customer’s content (whether already delivered or ostensibly maintained on their behalf) is earned, it will be very, very, very hard to lose.
I have no idea how this battle to establish the next dominant book distribution channel(s) will turn out, and truthfully have not even picked a favorite yet (others have opinions, of course).
What I do know, however, is that I – speaking as both a customer and a content creator – will drop any provider / channel-enabler who thinks that it is ok to break this boundary with customers, and am fairly certain that most customers will do the same.
How many occurrences will it take? Not sure, but we just don’t want to go there.
These are exciting times to be in both content creation and consumption. Many of these changes, while not “cloud computing” in any direct sense, are both enabled by cloud computing and will drive much of its near- to mid-term adoption. As such, I will write on these topics from time to time.
Here are a couple of samples:
… this excellent book is an insightful description of how cloud computing can quickly sharpen the focus of information technology and line executives onto the delivery of real value.
… will soon become a leading industry reference.
Thanks for the kind comments, Kevin.
We’re reviewing the final page proofs now and creating the indices … after that, the book passes on to production (both physical and ebook). Very much looking forward to taking that last step.
It has been very interesting to join the publishing world (in a certain sense) precisely when it is in the midst of the greatest upheaval since, well, probably since the invention of the printing press. More on that later.
A hopeless mess … has it really only been less than three years since mobile devices seemed like a hopeless, stale, torpid kill-zone?
Think back to the spring of 2007, when the best choices were aging PalmOS devices, messaging-centric Blackberries, or the occasional Windows Mobile device, with absolutely none able to render a decent web page.
On the off-chance that you could get a readable web page, chances are the device itself – particularly the PalmOS devices – would crash at the least convenient moments.
The whole situation was frustrating enough that there were many who wondered whether we’d ever get to a stage where any functions of these so-called “smart phones” were reliable enough to count upon … without even hoping for them to be capable enough to actually want to use.
Fortunately, those fears were soon to pass.
Mobile Web (mostly) Real Now
Where do we stand today?
A few months after the original iPhone announcement – say in the misty, far off ages of late ’07 – the leading edge of the change was already most readily apparent. With only a handful of devices on the market, it was already very clear that – in many ways – mobile web was just now becoming real.
A critical mass of factors – display, user interface, network speed, ubiquitous access, and extensibility, among others – made the iPhone fundamentally different from what had come before. Different enough that its web traffic was nearly always wildly disproportionate to its device population.
In other words, the iPhone was the first mobile device on which “the web didn’t suck”.
Within two years the rapid emergence of the whole Android ecosystem (including the very interesting Nexus One), a number of other interesting competitors (if only Palm can get critical mass for WebOS), a newly-resurgent RIM, and the huge (but seemingly wandering – see update below) Nokia have utterly transformed the mobile device markets.
Mobile Without Web
But wait, there’s more … in perhaps a trend that not too many people outside of the true-believers anticipated, ebook readers have actually caught on. Between various Kindles, the Nook, and even a few new entries from Sony there is now real energy in this market.
Yes publishers have been scratching their heads and trying to figure out how best to participate, but not nearly so many people are laughing now. Ebooks are not only real, but the outlines of a path to mainstream acceptance are now beginning to be visible.
Mobile Web x 10?
Earlier today Apple released the iPad, and as is often their habit, have probably pushed the mobile web into warp 9, maybe 10.
True enough that tablets of one kind or another have been out for some time – I owned an HP 1100 five or six years ago, for example, and this is not even Apple’s first attempt (think Newton) – but for one reason or another tablets have never really caught on. Whether the iPad offers enough to change that or not remains to be seen – my betting is yes – but I’ll mostly leave that debate to others for now.
Yet there are many betting that this device really portends a new class of devices. For example, MC Siegler posted today on TechCrunch:
[the iPad is] the best way to browse the web in a style that is likely your preferred method: by touching it
Siegler is making the case that for anyone who already owns an iPhone or iPod Touch (now more than 75 million and counting) this is rapidly becoming a very familiar, even the preferred means for using the web. Of course, the same can be said for the owners of all of the Android, WebOS, and other advanced handsets.
Siegler made these comments after some time using one, and in doing so reflected similar comments from others who also spent time driving one today.
His bottom line: this is the first of a class of devices that are entirely optimized for content consumption (which also implies a degree of interaction, of course).
The Impact on Cloud Computing
In many ways the rise of the iPhone / Android class of handheld computers has been a real driver for the growth of cloud computing. By enabling interaction with every manner of web-delivered services at nearly any time, these devices contribute mightily to the very meaning of “web-scale”.
And as has been seen in case after case after case, the only real systems architectures that have much hope of dealing with web-scale are, in fact, cloud computing architectures.
So in that context the introduction of the iPad most likely portends another front in the inexorable growth of web-scale itself, and in doing so will only accelerate the need for adoption of true cloud-computing architectures throughout.
Update: A few hours after I wrote this post Nokia announced strong Q4 results, growing their marketshare to 40% worldwide for smartphones. This result “marked an end to a steady stream of market share losses for Nokia’s smartphones”. So perhaps Nokia will remain a strong factor going forward – if so, great! The real point is that – whoever the leaders are – the new standard for mobile devices is to fully utilize and participate in the web, period. That is what has profound implications for cloud computing.
It is too early to tell the precise nature of the increased demand due to these tablet-class devices, but that it will occur is nearly inevitable. At some point it would be interesting to examine demand metrics as a function of the power of the handheld devices – my guess is that would be very revealing indeed.
The USPTO awarded search giant Google a software method patent that covers the principle of distributed MapReduce, a strategy for parallel processing that is used by the search giant. If Google chooses to aggressively enforce the patent, it could have significant implications for some open source software projects that use the technique, including the Apache Foundation’s popular Hadoop software framework.
Here are my initial thoughts (I plan on a more thorough post upon further reflection):
- This has potentially very profound implications for the rapidly developing world of “big data” application architectures.
- These “big data” problems are some of the key use cases for cloud computing adoption, public, private, or anywhere. One need look no farther than the rapid uptake of Hadoop and the growth of its community and ecosystem.
- Google has definitely done the most to popularize the notion of applying map and reduce operations to processing large sets of data.
- Having said that, the roots of this approach to data manipulation are very old in the history of computer science, from the earliest days of AI / Lisp / etc.
- Map-reduce is not the last word on big data algorithms, but it is an important one.
In short, big data problems are important to cloud computing, map-reduce is an important class of algorithms for some of these big data problems, and many people have put 2 and 2 together.
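The core idea is easy to illustrate. Here is a minimal, purely illustrative sketch in Python of a word-count job in the map / shuffle / reduce style – this mirrors the structure of the technique, not Google’s patented implementation or Hadoop’s actual API:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: combine the values for each key (here, sum the counts).
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the cloud is big", "big data in the cloud"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # 2
```

In a real distributed setting the map and reduce phases run in parallel across many machines, with the shuffle moving intermediate data between them – that distribution strategy, not the functional idiom itself, is what the patent concerns.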
As for the patent and other implications,
- I have tended to favor some level of software patents – we have a couple ourselves – so I do think that there is a time and a place for patents, yet I also understand how easily abused software patents can be in the hands of … well, people who can abuse them.
- I have absolutely no idea how enforceable this patent could be … that is a much more complex question far better suited for those interested legal scholars and fine patent counsel out there.
- I have even less of an idea what Google intends to do with this patent. Will they benignly gift it back to the community, or twirl it around like some stealthy Sword of Damocles, always there but just outside of everyone else’s ability to perceive its precise power?
- As anyone who has ever participated in (or even been near) a patent dispute understands, in most cases the value of the patent is directly proportional to the amount of cash available to either prosecute alleged infringements or defend against said prosecutions, and
- Google has lots of cash (yes, yes, I realize that this is quite a revelation).
One more thing, and the importance of this is non-trivial: Google loves to portray themselves as “more open than thou”, with a very fine-sounding “don’t be evil” ethos that is repeated as a litany.
That is all well and good so far as it goes, but what does it mean for absolutely everyone else who ever has or ever will use a promising set of basic techniques for a serious cloud app, or perhaps wants to innovate far beyond them, but who happens to be outside of Fort Google?
One More Thing
I have been really surprised at the relatively low-key response to this patent award from the community.
When Dell famously received a trademark from the very same USPTO for “cloud computing” in April 2008 the hue and cry of the bereaved was astonishing … and effective. Within a week Dell had politely withdrawn the application, and returned the name to the community.
That was only a name.
This is far more significant, impacting the bounds and capabilities of real applications, and what do we hear so far … not much.
It’d be great to hear from others who care about the future of cloud computing in some way. There are so many aspects to this – legal, technical, operational, even psychological – all wrapped up into the idea of competitive positions, markets, and the future of cloud computing. So yes it’s a complex topic, but there may be some relatively simple answers.
Maybe Google needs look no further than Dell for some guidance on this one.
These are quick thoughts about a complex topic. Hopefully next week I’ll have a chance to reflect more deeply on this situation, see what develops a bit more, and then post further. This should be interesting.