The Cloud As Platform

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Nearly a decade ago, a well-funded startup called Storage Networks promised to revolutionize the data center by moving enterprise storage into the cloud. Customers would keep their production data off-site in a highly secure facility and access it over the Internet. Unfortunately, the concept of cloud computing was unknown at the time, and the Internet itself was neither fast nor robust enough to permit large corporations to get comfortable with the idea. Storage Networks flamed out.

Now EMC is taking a run at a similar idea using the concept of cloud storage. Its technology, called Atmos, offers a glimpse of how far the cloud concept has come in a few short years and how its emergence as a new platform could drive a new wave of innovation.

As described by EMC, Atmos is a lot more than just a new breed of network storage.  The distributed technology uses an object model and inference engine to make intelligent decisions about where to store, copy and serve data.  With the world as its data center, Atmos is said to be able to flexibly move information to the point where it can be most efficiently served to the people who need it.  For example, if a cliffhanger election in Florida causes a surge of interest from local voters, election results data could be automatically routed to nearby servers.

Intelligent routing is just one of the intriguing ideas that the cloud supports, and it doesn't have to apply just to storage. In the future, virtual data centers will consist of computing resources spread around the globe. Server power could be flexibly deployed to regions that need it, and backups could be administered at a high level. For example, an organization could specify dual redundant backups for some critical data but only a single backup for less important information. When the entire fabric is virtualized, this kind of flexibility becomes part of the landscape.
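To make the policy idea concrete, here is a minimal sketch of how placement and protection rules of this kind might be expressed. The field names and the tiny decision function below are hypothetical illustrations written in Python, not EMC's actual Atmos interface or data model.

    # Hypothetical policy definitions for a cloud storage fabric (illustrative only).
    POLICIES = {
        "financial-records": {
            "replicas": 2,                # dual redundant copies for critical data
            "home_regions": ["us-east", "eu-west"],
            "follow_demand": False,
        },
        "election-results": {
            "replicas": 1,                # single copy for less critical data
            "home_regions": ["us-east"],
            "follow_demand": True,        # move copies toward a surge in local reads
        },
    }

    def serving_region(object_class, demand_region):
        """Crude stand-in for an inference engine: decide where to serve an object."""
        policy = POLICIES[object_class]
        if policy["follow_demand"]:
            return demand_region          # e.g. route Florida election results to nearby servers
        return policy["home_regions"][0]  # otherwise serve from the designated home region

The point is not the syntax but the level of abstraction: the administrator states intent, and the fabric works out the mechanics.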

At this point, Atmos is still brochureware, and EMC isn’t sharing any customer experiences.  But I think the concept is more important than the product. Very large and distributed cloud networks can theoretically provide users with almost unlimited flexibility and economies of scale. Systems management, which is an expensive and technical discipline that very few companies do well, could be centralized and provided to all users on the network.  Customers should be able to define policies using a simple dashboard and let the inference engine do the rest.

We are only in the early stages of realizing these possibilities, but the emergence of real-world cloud computing platforms will usher in a new era of innovation.  Platform shifts invariably do that. Coincidentally, NComputing this week will announce an appliance that turns a single desktop PC into as many as 11 virtual workstations.  The company claims that the technology lowers the cost per workstation to about $70.

When applied to a cloud of servers, technology like this could scale much higher. Instead of having to run around supporting hundreds of physical workstations, IT organizations would only have to worry about a few powerful servers providing virtual PC experiences to users. Move those servers into the cloud, and you can begin to apply best-of-breed security, resource and systems management to each user. The economies of scale become very compelling very fast.

The biggest leaps in technology innovation take place whenever platforms shift.  The cloud is now beginning to come into its own as a legitimate platform. Things should get pretty exciting from here.

A Regulatory Boost for the Cloud

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

In a recent podcast interview on Tech Nation, Tim Sanders, author of Saving the World at Work, quotes a remarkable statistic: it's been estimated that Google could save 750 megawatt-hours of electricity every year by changing the color of its ubiquitous homepage from white to gray. That's because monitors require more electricity to energize the brighter white phosphors.

The total cost savings of roughly $75,000 a year may not convince Google to overhaul its site design, but the statistic drives home the effect that economies of scale can have in computing.
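As a back-of-the-envelope check, the dollar figure is consistent with the energy estimate if you assume an average commercial electricity rate of roughly ten cents per kilowatt-hour; the rate is my assumption, not a number from the interview.

    # Rough arithmetic behind the savings estimate; the $0.10/kWh rate is assumed.
    energy_saved_mwh = 750                     # megawatt-hours per year
    rate_per_kwh = 0.10                        # assumed average rate in dollars per kWh
    annual_savings = energy_saved_mwh * 1000 * rate_per_kwh
    print(f"${annual_savings:,.0f} per year")  # prints $75,000 per year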

There’s a lot of attention being paid to economies of scale these days as IT consumes a growing proportion of natural resources in the US. IT organizations are increasingly going to find themselves on the hot seat to go green not just because it’s the nice thing to do, but because it’s good sense for the bottom line.

Consider these facts:

  • The Environmental Protection Agency has estimated that data centers consume about 1.5% of all electricity in the US and that about a quarter of that energy is wasted;
  • Gartner estimates that more than 2% of global atmospheric carbon emissions can be traced to the IT industry;
  • Gartner expects that more than 925 million computers and one billion mobile phones will be discarded over the next two years;
  • The International Association of Electronics Recyclers estimates that about 400 million units of electronic refuse are generated annually. The actual amount of “e-trash” may be higher because nervous businesses have stockpiled old equipment rather than paying for disposal.

Spiraling energy costs and environmental hazards are creating a double whammy: underutilized computers are consuming increasingly expensive energy while taking a greater toll on the environment. In Europe, businesses are working under a new standard called the Restriction of Hazardous Substances directive, which limits the use of dangerous chemicals in computers. US businesses are keeping a close eye on the directive, not only because it affects their European operations but because they see it as a model for similar legislation here.

Some very large businesses are beginning to make green computing part of their core corporate values. An article from the University of Pennsylvania’s Wharton School describes the energy-efficiency mandates at Bangalore-based IT services firm Wipro Technologies, which tracks metrics like carbon dioxide emissions and paper consumption per employee and is outfitting workers with a new kind of energy-efficient workstation.

In the US, the most promising option is hosted or “cloud” computing. This takes advantage of the excess capacity that exists in giant data centers run by companies like Amazon and IBM. Why let that spare power go up in smoke if there are small businesses that can tap into it?

Increasingly, they are. Along Sand Hill Road, Silicon Valley’s venture capital corridor, technology entrepreneurs who propose to run their businesses on internal servers are referred to as having a “Hummer” strategy. In other words, they’re consuming far more power than they need to drive their computing environment. New software and services firms are bypassing captive data centers and opting to farm out everything to third parties. Technology entrepreneur John Landry recently told me that the cost of hosted computing is coming down so fast that it no longer makes sense for a new business to even consider investing in a captive data center. Put simply, it will never be cheaper to manage the infrastructure yourself.

In a consolidated hosting environment, every tenant benefits from the economies of scale provided by the host. What’s more, the arrangement shifts responsibility for technology recycling and disposal to a central entity that has a vested interest in best practices. As the cost of energy continues its inevitable rise and legislators stump for stronger regulation, the appeal of the cloud only grows.

With Cloud Computing, Look Beyond the Cost Savings

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Back in the early days of data center outsourcing, some pioneer adopters got a rude surprise.  These companies had outsourced all or large parts of their data centers to specialty providers who bought their equipment, hired their staff and offered attractive contract terms that shaved millions of dollars in expenses in the first year.

The surprise was that the contract terms weren’t so attractive once the outsourcer became embedded in the client’s expense line. Customers found that nearly everything carried hefty escalator fees, ranging from unbudgeted capacity increases to software patches to staff overtime. But there was little customers could do. They were locked into the contractor, and the cost of unlocking themselves was prohibitive.

This story came to mind recently during a chat with Bob McFarlane, a principal at facilities design firm Shen Milsom & Wilke. McFarlane is an expert in data center design, and his no-nonsense approach to customer advocacy has made him a hit with audiences around the country.

McFarlane thinks the current hype around hosted or “cloud” computing is getting out of touch with reality.  Cloud computing, which I’ve written about before, involves outsourcing data processing needs to a remote service, which theoretically can provide world-class security, availability and scalability.  Cloud computing is very popular with startups these days, and it’s beginning to creep onto the agenda of even very large firms as they reconsider their data processing architectures.

The economics of this approach are compelling.  For some small companies in particular, it may never make financial sense to build a captive data center because the costs of outsourcing the whole thing are so low.  McFarlane, however, cautions that value has many dimensions.

What is the value, for example, of being able to triple your processing capacity because of a holiday promotion? Not all hosting services offer that kind of flexibility in their contracts, and those that do may charge handsomely for it.

What is the value of knowing that your data center has adequate power provisioning, environmentals and backups in case of a disaster? Last year, a power failure in San Francisco knocked several prominent websites offline for several hours when backup generators failed to kick in. Hosting services in earthquake or flood-prone regions, for example, need extra layers of protection.

McFarlane’s point is to not buy a hosting service based on undocumented claims or marketing materials. You can walk into your own data center and kick a power cord out of the wall to see what happens.  Chances are you can’t do that in a remote facility.  There are no government regulations for data center quality, so you pretty much have to rely on hosting providers to tell the truth.

Most of them do, of course, but even the truth can be subject to interpretation. The Uptime Institute has created a tiered system for classifying infrastructure performance. However, McFarlane recalls one hosting provider that advertised top-level Uptime Institute compliance but didn’t employ redundant power sources, which is a basic requirement for that designation.

This doesn’t mean you should ignore the appealing benefits of cloud computing, but you should look beyond the simple per-transaction cost savings. Scrutinize contracts for escalator clauses and availability guarantees.  Penalties should give you appropriate compensation.  While you won’t convince a hosting service to refund you the value of lost business, you should look for something more than a simple credit toward your monthly fee.

If you can, plan a visit to a prospective hosting provider and tour its facilities.  Reputable organizations should have no problem letting you inside the data centers and allowing you to bring along an expert to verify their claims. They should also be more than willing to provide you with contact information for reference customers. Familiarity, in this case, can breed peace of mind.

Utility Computing Train is Coming, But It May Be Late to Your Station

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

The move to utility or “cloud” computing shows every sign of reaching critical mass over the next couple of years.  But it won’t be driven by corporate data centers.  The momentum, instead, is coming from two factors that increasingly dictate the pace of innovation: startups and fear.

In 1991, noted technology columnist Stewart Alsop wrote, “I predict that the last mainframe will be unplugged on 15 March 1996.”  Yet as of last year, there were still 10,000 mainframes running worldwide, according to IBM.  Was Alsop wrong? Technically, yes, but the shift that he foresaw is happening.  It’s just being driven by different factors than he expected.

Technology innovation today follows a strikingly consistent pattern. New companies with no legacy base make the switch first, while the people with the most to lose are the last to change; they jump on board only when they discover that the new technology addresses a significant pain point.

Both forces are evident today in utility computing. Robert Scoble wrote persuasively last November about the “serverless” Internet company. His comments were prompted by a meeting with the CEO of Mogulus, a streaming video firm that claims not to own a single server. What interested me most about Scoble’s remarks was the 65 comments that followed. Many are from other small companies that are building IT infrastructure from the ground up without servers. Some of these companies are offering high-bandwidth services on a very large scale, demonstrating that scalability and reliability aren’t a problem. In fact, any startup business today should look first at outsourcing its IT infrastructure before investing in a single square foot of computer room space.

Meanwhile, utility services are actually achieving critical mass in a corner of the mainstream corporate IT market: storage. Services like Amazon’s S3 now have well over 300,000 customers.  EMC just joined the fray by launching an online backup service and hiring a top former Microsoft executive to lead its cloud computing initiative.

The storage industry has been a technology innovator recently because storage is a major pain point for many companies.  With capacity requirements expanding at 30% to 50% annually, people are desperate to do something to manage that growth.
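To put those growth rates in perspective, a quick compounding calculation (my arithmetic, not a figure from any analyst) shows how fast the pressure builds:

    import math

    # Doubling time for storage capacity growing 30-50% per year.
    for growth in (0.30, 0.40, 0.50):
        doubling_years = math.log(2) / math.log(1 + growth)
        print(f"{growth:.0%} annual growth -> capacity doubles every {doubling_years:.1f} years")
    # Prints roughly 2.6, 2.1 and 1.7 years respectively.

At those rates, a data center's storage footprint doubles every two to three years, which is exactly the kind of pain point that drives adoption of outside services.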

The rapid adoption of utility computing seems likely to continue, but with a curve that looks like a landscape of the Himalayan mountains.  In some segments of the market — like startups — utility infrastructures will become the status quo.  In others — like corporate data centers — adoption will come only as the technology addresses specific pain points.

This jagged adoption curve is why there’s so much debate today over the future of the cloud.  Contrast Scoble’s observations, for example, with a recent CIO Insight article in which a CTO outlines his reservations about cloud computing or a CIO Insight reader forum where IT managers take issue with Nicholas Carr’s forecast that IT will increasingly become a central utility.

This debate is happening because the need for utility computing is not perceived to be compelling in all cases. Perhaps this is why Gartner predicts that early technology adopters will purchase 40% of their IT infrastructure as a service by 2011, which means the other 60% will still be acquired conventionally.

The utility computing train is coming, but it won’t arrive at the same time for every organization. Check your local schedule.

Reshape IT Via New Models Like Software-as-a-Service

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

IT should embrace SaaS enthusiastically, since it can save a whole lot of the headaches that come from building prototypes users reject.

Anyone who’s been in IT for more than a few years knows the dirty little secret of the profession: many IT projects (in fact, most of them, in my experience) fail. That’s been the story as long as I can remember. Why, after so many years, are we still so frustrated by failure?

There are three main reasons I’ve observed:

  • In too many companies, IT is an island that is organizationally and even physically removed from the business it serves.
  • Too many users suffer from throw-it-over-the-wall syndrome, which leads to projects that fail to match the needs that exist at delivery.
  • Turnover and organizational change undermine too many projects, making them irrelevant by the time they’re delivered.

Let’s look at how you can approach each problem.

IT is an island – IT people themselves are often too willing to accept a balkanized structure that isolates them from the business. This is a bad idea for so many reasons, but the insular, often introverted nature of technical professionals lets them rationalize the situation. They don’t communicate well with the business side, so they settle for separation.

You can’t change people’s personalities, and you can’t force people to work in situations that make them uncomfortable. But you can make sure that IT project leaders have the capacity to work productively with business end-users. That means not talking down or clamming up, but rather showing tolerance, acceptance, and humor. Your project managers are ambassadors. You need to select people with strong diplomatic skills.

With the right ambassadors in place, you can afford to set the rest of your IT organization apart to some degree. The project leaders should serve as both diplomat and translator, buffering the relationship with the business side while speaking both languages fluently.

Customer accountability – The throw-it-over-the-wall problem begins with the user sponsor, and is perpetuated by gullible IT organizations. Often, the perpetrator is a senior business-side executive, a “big idea” type who conceives of a grand vision and then hands off half-baked requirements to an IT group that often doesn’t fully understand what it’s supposed to deliver. Six months later, IT comes back with a prototype, by which time either the requirements have changed, the user has moved on, or he or she has forgotten about the whole thing.

Let’s face it: no one likes creating spec documents or sitting through progress report meetings. They’re tedious and boring. But they are absolutely essential if a project is to remain on track. The CIO needs to be the bad guy here. He or she must insist upon project management discipline and review meetings at least once a quarter to make sure the project is still relevant. The CIO needs the backing of a top company executive in taking this approach. Otherwise, IT will be buffeted by constant changes in the business environment. Which leads to the final problem.

Organizational change – How many managers can you name in your organization who have been in the same job for more than two years? In many companies today, half the leadership has taken on a new assignment in that time. So why do we still start IT projects that have deliverables scheduled a year or more down the road?

The business environment is too changeable these days to permit that kind of scheduling. Projects must be componentized, with deliverables scheduled every few months. If you can’t decompose a project like that these days, it probably isn’t a very good idea in the first place.

Technology may be riding to the rescue. The rise of the so-called “software as a service” (SaaS) business, epitomized by Salesforce.com, is enabling users to try applications before they commit to them. SaaS delivers applications over the Internet, and users can often achieve results in a matter of days. In some cases, users may find that a SaaS solution is all they need. But even if they don’t, SaaS is a heckuva way to prototype different approaches and solutions. A lot of IT organizations are approaching SaaS warily, worried that they will lose control. Instead, they should be embracing the model enthusiastically. It can save them a whole lot of headaches building prototypes that users reject.