Earlier this week saw what seems to be the first large-scale collision between satellites: a commercial Iridium (satellite phone) satellite and an older Cosmos-series satellite.
One of our partners, AGI, specializes in software for understanding these sorts of events. From their press release …
On February 10 at approximately 1656 GMT, the Iridium 33 and Cosmos 2251 communications satellites collided over northern Siberia. The impact between the Iridium Satellite LLC-owned satellite and the 16-year-old satellite launched by the Russian government occurred at a closing speed of well over 15,000 mph at approximately 490 miles above the face of the Earth. The low-earth orbit (LEO) location of the collision contains many other active satellites that could be at risk from the resulting orbital debris.
Let that picture roll around your head a bit … this was quite a collision, to say the least.
Over the past year or so we’ve worked with AGI to cloud-enable some of their “all on all” collision software. While they did not use our stuff in this particular simulation, it should still give you a general sense of the type of application. Continuing with AGI’s press release:
To support the space community in better understanding this unprecedented satellite-to-satellite collision, AGI and CSSI have used their software to reconstruct the event.
In other words, as a demonstration of a broader capability AGI has created a cool video using a small portion of their “all on all” collision software. Without further ado, here it is:
A 720p HD version of this video can be downloaded directly from AGI.
A Few Thoughts on Cloud Suitability
First of all, while it is easy to see the computational requirements of keeping track of all those orbiting objects and trying to understand what might be in danger of running into what, and when … there is also a significant data scaling problem here as well.
Second, keeping track of “all on all” collision possibilities is not a problem that’s getting any smaller, to say the least. All you have to do is imagine the debris-field portion of this simulation to gain a certain subjective sense of the problem … sort of a space-borne version of Wall•E.
Third, while a certain amount of this processing needs to be done all the time, there will be times when collisions and other events occur – foreseen or not – and there will definitely be a need for a spike in resources as a result.
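To make the scaling concrete: screening n tracked objects against each other means n(n−1)/2 pairs per pass, so the work grows quadratically with the catalog. Here’s a toy Python sketch of that all-on-all shape – the positions and threshold are made up for illustration, and a real conjunction screen (like AGI’s) propagates orbits over time rather than comparing static points:

```python
from itertools import combinations
import math

def closest_approach_km(a, b):
    """Toy distance check: treats each object as a static 3D point (km).

    Real screening propagates each orbit forward and finds the time of
    closest approach; this stand-in only shows the all-on-all structure.
    """
    return math.dist(a["pos"], b["pos"])

def screen_all_on_all(objects, threshold_km=5.0):
    """Check every pair of tracked objects: n objects -> n*(n-1)/2 pairs."""
    return [
        (a["id"], b["id"])
        for a, b in combinations(objects, 2)
        if closest_approach_km(a, b) < threshold_km
    ]

# Illustrative catalog entries (positions are invented, not real ephemerides)
catalog = [
    {"id": "IRIDIUM-33",  "pos": (0.0, 0.0, 0.0)},
    {"id": "COSMOS-2251", "pos": (1.0, 2.0, 2.0)},    # 3 km away
    {"id": "ISS",         "pos": (500.0, 0.0, 0.0)},  # far away
]
print(screen_all_on_all(catalog))  # → [('IRIDIUM-33', 'COSMOS-2251')]
```

Even this static toy makes the growth obvious: 100 objects is 4,950 pairs, 10,000 objects is roughly 50 million – and every breakup event adds trackable debris to n.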
So an application that needs scale (both data and computational) and flexibility – sounds like a perfect cloud application … good thing that it is!
Video courtesy of Analytical Graphics, Inc. (www.agi.com). Debris image courtesy of Analytical Graphics, Inc. (www.agi.com). Wall•E and Eve image courtesy of Pixar and Disney. All copyrights belong to their respective holders.
Two applications areas for which cloud computing holds the most promise are in the related areas of intelligence and military applications.
Even if you are not already intimately familiar with the types of computing problems that dominate these application areas, it’s easy enough to see how cloud computing – and of course I mean all sorts of clouds, with a particular emphasis on private clouds – can help.
After all, the very attributes of clouds that are so attractive to startups and enterprise alike – easy sense of scale, flexibility, low cost, and more – have tremendous appeal for intelligence and military applications as well.
A Military Perspective
I was recently interviewed for a story that’s appearing in the current issue of Military Information Technology, entitled “COMPUTING IN THE CLOUDS”. The story covers a number of cloud initiatives, with a focus on some things that are working and challenges that are looming.
Here is a cool quote from the story:
Appistry offers a linchpin technology for cloud computing, called the Enterprise Application Fabric, a cloud application platform for developing and managing large-scale, self-healing cloud applications rapidly on commodity hardware.
Why Is Appistry a “Linchpin Technology”?
In this quote the story captures precisely one of the concerns of both those pioneering and those contemplating cloud applications in military and intelligence – sure the inherent scale and flexibility are great, but what about the complexity?
Speaking from the IC side of the house, streaming full-motion video from a Predator UAV or a satellite image are huge files to deal with in terms of storage, processing and transport to a soldier in motion…
However, a disadvantage is the added complexity of virtualization, which is inherent in cloud architecture (em. added). “When we virtualize in a cloud, it is more difficult to unwind the problem should it arise. As virtualization increases, logical complexity grows,” Pierce pointed out.
- Ken Pierce, DIA-DS/C4ISR
He went on to say that his organization is already well-positioned to handle the added complexity – but what else can he really say?
The Real Value of a Cloud Application Platform
It is precisely in aggressively taking out complexity – both operational and development – while maintaining all of the goodness of clouds that this emerging thing the industry has begun calling a cloud application platform delivers the goods.
As you might expect, Appistry EAF as it exists today makes an excellent cloud application platform, and stuff that we’re hard at work on – even as we speak – will expand that lead.
And that is why Appistry is becoming a “linchpin technology”.
Over the past few years it has been a lot of fun to work with the great folks at GeoEye , so on what is certainly a historic day I want to help them show off a bit of the capabilities of their newest satellite, GeoEye-1.
Despite a bit of wispy cloud cover, GeoEye announced today that they were able to capture a cool image of the crowd at today’s inauguration of the 44th president of the United States, Barack Obama.
Both of these are crops from the whole image, which can be downloaded directly from GeoEye.
This half-meter resolution image of the United States Capitol, Washington, D.C. was collected by the GeoEye-1 satellite on Jan. 20, 2009 to commemorate the Inauguration of President Barack Obama. The image, taken through high, wispy white clouds, shows the masses of people attending the Inaugural Celebration.
As mentioned, the original image can be found here on the GeoEye site.
But Wait, There’s More
If you haven’t yet had a chance to play around with Photosynth, by all means take a bit of time and prepare to be fascinated.
In any case, GeoEye also contributed this image to a Photosynth 3D mosaic of the inauguration that you can find (and play around with) here.
One final thought: keep in mind that as cool as these images are, GeoEye-1 is capable of far better!
(with apologies to the good Dr. Seuss on the title – sorry, I just couldn’t help it)
The participants included
- Robert X. Cringely, computer guy & moderator
- Anwar Ghuloum, Intel
- Charles E. Leiserson, Cilk Arts & MIT
- Dan Reed, Microsoft
- Marc Snir, University of Illinois at Urbana–Champaign
and me (go ahead and give me a hard time, I can take it).
The range of discussion was interesting, since the panel included perspectives more rooted in multi-core (Ghuloum and Leiserson), mass ‘o machines (Reed, Snir), and more of a uniform view of both broad classes (me, though I think that may be shared by some of the other folks as well, at least a bit).
In addition, the panel was a mix of research and practical applications, which probably tended to color much of the discussion. All in all it made for cool (and hopefully not too boring for the audience!) conversation.
This is one panel that I probably would have much rather had in private, definitely accompanied by some really good adult beverages, but unfortunately we were constrained to an hour on a stage … and (at least for me) that hour passed by pretty fast.
An hour was probably enough time to begin to name a couple of the larger issues, but definitely not time enough for too much more.
Still, it did get me to thinking a bit …
A Few of the Issues
There were (and are) many more of course, but here are a few of the more dominant themes and issues that we discussed …
Market Pull. Whether it’s the inability of the processor manufacturers to build individually faster cores at any price we could stomach (hence the advent of multi-core), or the advent of practical clouds (both public and private) opening up the prospects of deploying REALLY BIG apps on LOTS of VMs, the market is clearly demanding new solutions to creating parallelized apps. No question about it.
Complexity is Bad. There was a general agreement that complexity is, well … complex and generally toxic to effective development of parallel apps. Some folks had more of a stomach for complexity than others, but all in all many of the efforts are trying to fundamentally simplify the developer’s task.
Need for New Abstractions. The Complexity Problem is not going to be solved by wishful thinking alone, no matter what Oprah says (sound bite alert). Hence everything from new functional languages like F#, Erlang, Scala, to frameworks like map-reduce, to data-driven reliable service abstractions like our own application fabric are in play as ways to simplify.
Uncovering Inherent Data Orthogonality. I’ve gradually come to the opinion that some very high percentage of the apparent data dependencies that are anathema to effective parallel processing are not truly in the original problem. Rather, they are false dependencies, ones that we have inflicted on ourselves for no particularly good reason other than the tools, methodologies, or just bad habits that we bring to bear on our work.
(btw, don’t press me on a precise percentage or I’ll be forced to make something up here)
We’ve seen this with customers, and the more I look at new problems and how they are solved in most enterprises today, the more I see a big, massive goo of false dependencies.
Fix those, and we have a crack at effective parallelization in many cases.
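A minimal sketch of what a “false dependency” can look like in code (the per-record work and names here are hypothetical, not from any customer example). The sequential version threads an accumulator through the loop, which reads like a chain of dependencies; restructured as a map plus an associative reduce, the per-record stage is free to run in parallel:

```python
from functools import reduce

def score(record):
    """Hypothetical per-record work; the real computation doesn't matter,
    only that it never reads the running total."""
    return record * record

def sequential_total(records):
    # The running 'total' *looks* like iteration N depends on iteration N-1...
    total = 0
    for r in records:
        total += score(r)
    return total

def restructured_total(records):
    # ...but the apparent dependency was an artifact of how the loop was
    # written. As map + associative reduce, every score() call is independent;
    # the mapped stage could be farmed out across cores or machines.
    partials = map(score, records)  # embarrassingly parallel stage
    return reduce(lambda a, b: a + b, partials, 0)

data = list(range(1000))
assert sequential_total(data) == restructured_total(data)
print(restructured_total(data))  # → 332833500
```

Not every loop unwinds this cleanly, of course – but when the combine step is associative, the “dependency” lived in the sequential formulation, not in the original problem.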
Where This is Going
I am very optimistic about progress in helping developers create actual parallel applications that can be used in the enterprise, in production solving problems about which people actually care.
The population of these well-done apps is going to be increasing dramatically in the months and years to come, which is a good thing … a very good thing.
The timing couldn’t be better … in truth, I don’t think we really have much of a choice.
We are very active in this space, and I have a particular interest in the “false dependency” problem. I’m sure I’ll be posting more on this in the future.
The desktop of the future is going to be a hosted web service
The Browser is Going to Swallow Up the Desktop
The focus of the desktop will shift from information to attention
Users are going to shift from acting as librarians to acting as daytraders.
The Webtop will be more social and will leverage and integrate collective intelligence
The desktop of the future is going to have powerful semantic search and social search capabilities built-in
Interactive shared spaces will replace folders
Sorting through a couple of Capt. Obvious points (“The Webtop will be more social” … you’re kidding me?!?!?), he does make a few more interesting points. For example,
The focus of the desktop will shift from information to attention
is a really good point. Sure it’s just the latest way to say “drinking from a firehose”, yet it at least cleanly articulates what we all deal with daily, at levels that when we step back and think about it are nearly incomprehensible … with much more to come.
A Bit of Wishful Thinking
Yet from my perspective at least a couple of the points just fall into true blue believer wishful thinking … as in “But it JUST HAS to happen this way … doesn’t it?”
Ummmm … no.
Let’s pick one to illustrate:
The Browser is Going to Swallow Up the Desktop
That meme has been going around for quite a while. Probably the most famous, recent, and all-around hard-to-escape incarnation of that philosophy is clearly the iPhone. So let’s take a look there and see what we can learn.
Great browser? Check.
Uber-outstanding display? Check.
Tons of mindshare with a maximum mind-control field targeted at making everyone believe that browser apps constituted everything anyone would ever need? Check.
Fast network? Check.
Ubiquitous? Check. Check. Check.
On the Road to Web-App Total Domination
Well we all know what happened within six months of this strategy … Google unveiled Android, and Apple had no choice but to open up their platform (ok, not really very open … but at least non-Apple employees can sort of write apps!).
The marketplace is voting at a furious pace, with more than 60,000,000 apps downloaded in the first month. Yes, some folks extrapolate from their own first-month experience and say that all of this will die down soon, to be replaced by the browser alone.
Yet I just don’t see it.
The reality is that simple physics (bandwidth is NOT the same thing as latency) still dictates local responses for highly interactive tasks. No doubt much of that will be (and already is, of course) done in browser apps.
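To put rough numbers on the bandwidth-vs-latency point (these figures are illustrative, not measurements): an interactive task that needs several sequential round-trips is gated by the round-trip time, and throwing bandwidth at it barely helps. A quick sketch:

```python
def interaction_time_ms(round_trips, rtt_ms, payload_kb, bandwidth_mbps):
    """Total time for a chatty interaction: a latency term that scales with
    sequential round-trips, plus a transfer term that scales with bandwidth."""
    transfer_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transfer_ms

# A UI gesture needing 5 sequential server round-trips at an 80 ms RTT,
# moving 50 KB total, on a modest pipe vs. a pipe 100x faster:
slow_pipe = interaction_time_ms(5, 80, payload_kb=50, bandwidth_mbps=5)
fat_pipe  = interaction_time_ms(5, 80, payload_kb=50, bandwidth_mbps=500)
print(round(slow_pipe), round(fat_pipe))  # → 480 401
```

A 100x fatter pipe shaves the total from about 480 ms to about 401 ms – the five round-trips at 80 ms apiece never get cheaper, which is exactly why highly interactive work wants a local client.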
But to contend that everything will move within the browser is just as unsupportable as saying something like “all development will be done in language <insert your favorite language / framework here>”.
How many times has that prediction been made, in one form or another?
It’s just silly, really.
All the Same
I understand that true blue believers can take exception to everything I’ve said, except for one thing … 60,000,000 apps in one month, on the best mobile web browsing platform ever … with some great tailored web apps (the newest Google Reader really is awesome).
Rather than arguing for what is effectively both the repeal of the laws of physics and universal world peace at precisely the same moment, perhaps it would be more productive to create more effective clients to use all that cloud-based services have to offer …
… and build these inside or outside the browser, as best fits the circumstances.
A quick shout-out to Reuven Cohen for noticing this particular post, which I’d overlooked in a category in my RSS reader that had over 755 unread items in it … today! Relatively ironic, wouldn’t you say?
Yesterday we talked about whether Twitter really ever needs to be reliable or not … some said yes, others contended that it’s not necessary.
It’s been bugging me for a while that something this popular … and Twitter is so … just keels over as often as it does.
Anyhow, the whole argument turned into a bona-fide debacle this morning when GroupTweet (a relatively new feature that seems to have been confusing) was at the heart of disclosing private messages (DMs in tweet-speak) to tons of folks.
So now it looks like Blaine Cook is out as chief architect, and Michael Arrington is calling it the end of amateur hour. That’s probably a bit harsh, because my (limited) interactions with Cook have been pretty decent.
Btw the comment thread on that last post is going crazy. My favorite so far is a short video comment from Loren Feldman (warning … his language is a bit over the top, but you do know where he stands!) Btw, check here if the first link to the video doesn’t work.
Having said that, we just have to build apps that act like real, grown up (and you can call that boring if you want) apps … taking care of the data entrusted to them, working as expected, and working when we need them to work.
So I’m thinking that the answer to yesterday’s question is … YES. Twitter does need to figure out how to be reliable … and secure, scalable, and all the rest.
This is exactly the point that I’ve been making for a while … why build to POC quality when it’s now possible to ensure reliability, scalability, and so forth from the beginning?
People are Still People
I don’t care if this is Web 2.0, Enterprise 2.0, or Web 10,000,000,000.0 … consumer or enterprise … people are still people. They still care about their privacy, the reliability of stuff that they come to rely on, basic stuff like that. No free pass.
Even consumer oriented web 2.0 apps need to ensure this, from the beginning.
Pretending that innovation in communication, biz, or technology somehow exempts us from the basics of social interaction is just … well, it’s just wrong.
This lesson is for our whole industry. Those who learn it will prosper, those who don’t …
A few outages ago I wondered aloud whether Twitter was taking the whole business of failure somewhat casually (triggered by some comments Blaine Cook made at SXSW).
Blaine replied with some great points, including
For the record, saying that the press surrounding the downtimes was a plus was a joke. Downtime is never good, and you should do everything you can to avoid it. However, it’s a misrepresentation to say that you can build something successful without any downtime.
Our foremost concern has been and will continue to be ensuring a stable platform; we’ve been working hard on numerous fronts, and that work is paying off. Bad press is horrible, and I’ll be the first to take pleasure in never again seeing a “can Twitter scale?” story.
I believe him, and have quite a bit of empathy for the position he’s in. (I have a friend who always used to talk about “high class problems” and “low class problems” … Blaine and the other Tweetsters have a high class problem, but that’s a post for another day)
There was one point that he made that I fundamentally disagree with, however:
Scaling is a commitment, and one you should only make once you’re sure about an idea.
Yes it is most definitely a commitment, but my contention is that we’re fast entering the time when it can be built in from the beginning with little to no additional effort.
This past weekend there was a bunch more instability. Looks like they were putting in some more caching to take the heat off of the data tier, and things went wacky.
Now Robert Scoble is making the case that Twitter is leaving the door wide-open for Friendfeed … (on Twitter of course, though my friend read it on Friendfeed!). Check out this short burst:
Michael Arrington thinks that the mass of the Twitter community makes this concern moot … basically, he contends that Twitter no longer needs to be reliable.
The Days of the Free Pass on Reliability are Over
Basically, I think that 1) the days of web2 services getting a free pass on reliability are rapidly passing, and are probably already over, and 2) it’s a shame to see stuff go whump when it’s sooo unnecessary.
As for the days of the free pass being over, check out Dennis Howlett’s (zdnet) comments on the most recent outage … he’s generally making the case that Twitter itself is really a POC for some better service yet to come, something more suitable to much larger markets.
Could that V1 service be Friendfeed? Maybe. Of course, it’s too early to write off Twitter entirely … they’ve also hired their own scaling cavalry (including the ever-helpful Google expat!), so maybe they’ll catch a second wind before the whole sector passes them by.
Build to Scale … From the Beginning
Back to Blaine’s comments. I can completely understand the notion of building a proof of concept … besides, in the web 2.0 world it’s long been accepted practice to throw something out there, and only build to scale when you figure out whether anyone cares.
That makes a lot of sense when building apps to scale is so freakin’ hard. BUT … easing that pain is precisely the point of stuff like our app fabric.
That is why it is my core contention that the ability to scale and be reliable, even for the most trivial services, is going to become the price of entry very soon (if it has not already become so).
… In the software-as-a-service world … source code becomes irrelevant. If someone offered us the schematics to a telephone, we wouldn’t care. We don’t want to know how to make a phone. We want a dial tone. When it comes to IT, we want app tone.
As another way of saying people want apps to work, this could make sense. But irrelevant? That’s just silly.
… If 37 Signals gave me the Basecamp source code for free, I’d still use their service. If Freshbooks burned me a copy of their app, I’d still subscribe to them. Even if Salesforce.com handed me their software, I’d use their hosted portal.
Ok that makes sense to most folks using SaaS offerings … after all, who wants to go through all the trouble to install and run something when a perfectly acceptable alternative is already available?
Of course, if a competing service came out (made much easier to do with that “irrelevant source code”!), perhaps with some improvements in quality of service, or more generous free capabilities or some other such advantage, wouldn’t folks simply switch? Of course they would.
Anyhow, Croll continues on this theme
In the license world, it’s all about the ability to make copies of the software. By contrast, in the world of app tone, it’s about the ability to run instances of the code. It’s about operating an application reliably … and the ecosystem the SaaS provider can build around it through APIs, partners and extensions
Sure that’s all true, but how do you think all that reliability gets there to begin with? Is it purely a consequence of skilled operations? Of course not … the application source code itself can do a lot to improve its reliability.
On a Roll …
So in the interest of making an interesting point Alistair gets carried away, leading him to miss the mark entirely here:
Even the open-source movement is feeling the change: Recent modifications to the third revision of the GNU Public License recognize that it’s the service, not the source code, that has value — and that any user of the service has the rights to its source code.
Not exactly. What these changes recognize is that most SaaS offerings are not posting their key source code much at all … even when they incorporate open source libraries that would trigger that posting for software that is delivered conventionally. They don’t have to by the terms of most open-source licenses, so why should they?
Of course, methinks that this might be because there’s significant value in said source code!
The Service DOES Have Value
Of course, the service itself has beaucoup value … for all of the reasons that Croll cites in his post. It’s just that the question of which has value, the service or the source, is not a simplistic case of either/or, but more of a both-and. That is, both the service and the source code itself have significant value, and will continue to do so for some time.
After all, SaaS is not an alternative to meaningful source … it’s another way to deliver that meaningful source.
Robin Harris must have woken up grouchy today – he’s dumping all over cloud hysteria on this fine Monday. After throwing the obligatory it’s-all-marketing punch (the truth is that there IS a bunch of marketing, but there’s also a bunch of real substance … more on that in a minute), he gets down to business.
I am paraphrasing a bit, but here are his main points:
The only real key to Google’s low cost structure is active cluster storage – if it’s productized, anyone can be as cheap as Google (including your own datacenter).
Networks are still the thinnest resource in the computing landscape.
Consequently only low-data-rate applications are suitable for the cloud – all others will (or at least should) stay local.
Robin makes some good, albeit incomplete, points, though I’m not too sure about his conclusion. Go read his post, then let’s look at his reasoning a bit at a time.
The Main Points
The only real key to Google’s low cost structure is active cluster storage - if it’s productized, anyone can approach Google’s economics (including your own datacenter).
This is probably the biggest miss – perhaps more critical than the reliable commodity storage (which is important!), are all of the applications which natively run on commodity infrastructure. Each app generally runs as well as that particular app needs, and runs in a way that allows for some sort of operational sanity.
Google (& Amazon & others) have built a number of frameworks to make this true for their own applications, of course. Sometimes they build these sort of capabilities directly into the applications themselves. For everyone else, there is a clear need for platforms that reliably scale applications on commodity infrastructure – that is precisely why we built the application fabric.
Simple, coherent operational capabilities are also crucial. When a commodity infrastructure can basically run itself, it becomes a lot more attractive as a deployment option for the serious enterprise.
Networks are still the most limited resource in the computing landscape.
True beyond a shadow of a doubt! Robin makes a good point that the rate of improvement for networks lags behind other parts of computing (like his native storage land). My only caveat is that, while clearly limited, network bandwidth is just as clearly sufficient for many, many mainstream applications (particularly when structured as described below).
Consequently only low-data-rate applications are suitable for the cloud – all others will (or at least should) stay local.
I think many applications will clearly stay local – some for technical reasons, some for security, control and / or cultural reasons, some just because.
Having said that, some data-intense applications will still move to a cloud, provided that the data is stored near the corresponding computing elements. This alternative is even now beginning to play out, such as in the Amazon EC2 / S3 combo (among others). With this approach all high-bandwidth data operations are effectively local.
In the rush to argue for or against cloud computing, many infrastructure-centric folks are missing a couple of key considerations – namely the critical nature of the applications and the need for simple operations.
Good grid-enabled applications (and this includes the storage layer) can run on commodity infrastructure wherever it’s located – in a cloud or close to home – scale as needed, be both reliable and secure, operate themselves, and be far cheaper than apps today.
In reality the argument between clouds and grids / application fabrics can become simply a deployment decision – and that may be the best news of all.
For the past year+ there have been many indicators that fundamental changes in the enterprise software development market are well underway. In particular, it sure seemed like the monolithic predominance of traditional JEE app servers was starting to break up.
A few weeks ago I posted about the rise of Tomcat, and talked about why it is now the leader for deployment of Spring apps. (the why is easy – it’s simple, cheap, easy to use, and works well).
Now Rod Johnson (springsource) has another interesting observation – job postings requiring Spring skills have surpassed those requiring EJB on at least one site.
Indeed.com shows that in November, 2007, Spring overtook EJB as a skills requirement for Java job listings. As of yesterday, the respective job numbers were 5710 for Spring against 5030 for EJB.
… While it’s not an apples-to-apples comparison, it is reasonable to consider Spring and EJB as alternatives for the core component model in enterprise Java applications. And it’s clear which is now in the ascendancy.
… Frankly, the EJB era was an aberration.
What This Means
EJB is often inextricably intertwined with the decision to use monolithic, heavy, traditional app servers on heavy, costly infrastructure.
The trend towards breaking this monolith up does start with the core object model, and Spring is proving fairly prominent in this role. Once this decision is made, then separate decisions can now be made about how to achieve scale, reliability, and operational integrity.
If the applications are not so demanding, then not much more than Tomcat and a few operational tools are required. Many early cloud deployments probably fit this category.
Need for Scale and More
Once the app has more demanding scale, reliability, performance, and other needs, the developer is faced with a few choices. These include:
- Deploy the new lightweight app on a traditional app server and infrastructure
- Write their own state management, coordination, and operational tools to deploy on lighter infrastructure
- Pick a new approach for development and deployment facilities
It is in this third option that most of the interesting innovation is occurring. That is where our excellent execution model (simple abstraction, multi language, multi OS, highly scalable, reliable and fast), elegant state facilities (lightweight, reliable process flows & spaces), and very simple operational model (the biggest fabric is the same thing to operate as a single server) make a lot of sense.
That all of this can occur on a truly commodity infrastructure (from Tomcat to Linux to grids of uber-cheap commodity processors) is a real bonus.
Best part? You can bring your Spring app over as-is, and gain much of the Appistry goodness from day one.