The Coming Utility Computing Revolution

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Nicholas Carr is at it again, questioning the strategic value of IT.  Only this time I find myself in nearly total agreement with him.

Carr became famous, or infamous, for his 2003 Harvard Business Review article “IT Doesn’t Matter,” in which he argued that IT is an undifferentiated resource that has little strategic business value.  His thinking has evolved since then, and in his new book, The Big Switch, he proposes that utility computing will increasingly become the corporate information infrastructure of the future.

Utility computing means different things to different people.  Some people draw an analogy to the electrical grid, but Carr argues that the information utility is far richer and more strategic.  He outlined some of his perspective in this Q&A interview in CIO Insight magazine.

The utility computing model that Carr foresees encapsulates many of the hottest concepts in IT today: virtualization, modular computing, software as a service, Web 2.0 and service-oriented architecture.  Computing utilities of the future will be anchored in enormous data centers that deliver vast menus of applications and software components over the Internet. Those programs will be combined and tailored to suit the individual needs of business subscribers.

Management of the computing resource, which for many years has been distributed to individual organizations, will be centralized in a small number of entities that specialize in that discipline.  Users will increasingly take care of their own application development needs and will share their best practices through a rich set of social media tools.

In this scenario, the IT department is transformed and marginalized.  Businesses will no longer need armies of computing specialists because the IT asset will be outsourced. Even software development will migrate to business units as the tools become easier to use.

This perspective is in tune with many of the trends that are emerging today.  Software as a service is the fastest growing segment of the software market and is rapidly moving out of its roots in small and medium businesses to become an accepted framework for corporate applications.  Data centers are becoming virtualized and commoditized. Applications are being segmented into components defined as individual services, which can be combined flexibly at runtime.

There are sound economic justifications for all of these trends, and there’s no reason to believe they won’t continue.  So what does this mean for IT organizations and the people who work in them?

Carr sums up his opinion at the end of the CIO Insight interview: “[I]nformation has always been a critical strategic element of business and probably will be even more so tomorrow. It’s important to underline that the ability to think strategically…will be critically important to companies, probably increasingly important, in the years ahead.”

Taking this idea one step further, you can envision a future in which a pure IT discipline will become unnecessary outside of the small number of vendors that operate computer utilities.  University computer science programs, which have long specialized in teaching purely technical skills, will see those specialties merged into other programs.  Teenagers entering higher education today are already skilled at building personal application spaces on Facebook using software modules.  It’s a small step to apply those principles to business applications.

Sometimes, the past is a good predictor of the future. In my next entry, I’ll give an example of how technology change revolutionized the world a century ago and draw some analogies to the coming model of utility computing.

Utility Computing Finds its Sea Legs

Nearly a decade ago, I worked at an Internet startup in the crazy days of the early Web.  Success demanded speed, rapid growth and scalability.  We were frequently frustrated in those days by the demands of building out our computing infrastructure.

The company burned more than $1 million in that pursuit. Racks of Unix servers had to be acquired and configured.  The equipment was set up at a co-location facility that required on-site care and feeding by technical staff.  Installation and testing took weeks.  Maintenance was a time-consuming burden requiring a staff of technicians who had to be available at all hours.  At least twice during the first year, server crashes took the company off-line for more than a day.  Stress and burnout were a constant issue.

Today, I suspect the company would do things quite differently.  Instead of acquiring computers, it would buy processing power and storage from an online service.  Startup times would be days or weeks instead of months.  Scaling the infrastructure would require simply buying more computer cycles. There would be no cost for support personnel. Costs would be expensed instead of capitalized.  More importantly, the company would be up and running in a fraction of the time that was once required.

The current poster child of utility computing is, of all companies, Amazon.  An article in last week’s Wired magazine describes the phenomenal success of the initiatives called S3 and EC2 that Amazon originally undertook to make a few dollars off of excess computing capacity. Today, the two services count 400,000 customers and have become a model that could revolutionize business innovation.

That’s right, business innovation. That’s because the main beneficiaries of utility computing are turning out to be startups. They’re using the services to cut the time and expense of bringing their ideas to market and, in the process, propelling innovation.

The utility computing concept has been around for years, but questions have persisted about who would use it. Big companies are reluctant to move their data offsite and lose control of their hardware assets.  They may have liked utility computing in concept, but the execution wasn’t worth the effort.

It turns out the sweet spot is startup firms. Many business ideas never get off the ground because entrepreneurs can’t raise the $100,000 or more needed for capital investment in computers. In contrast, Amazon says it will transfer five terabytes of data from a 400G-byte data store at a monthly fee of less than $1,400. If you use less, you pay less. It’s no wonder cash-strapped companies find this concept so appealing. Wired notes that one startup that uses Amazon services dubbed one of its presentations “Using S3 to Avoid VC [venture capital].”
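The pay-as-you-go math is simple enough to sketch. The $1,400-for-five-terabytes figure is the one cited above; the flat per-gigabyte rate derived from it is a simplifying assumption on my part (real utility pricing is typically tiered):

```python
# Illustrative pay-as-you-go billing: you pay only for what you use.
# The "$1,400 for 5 TB transferred per month" figure comes from the column;
# the linear per-gigabyte rate back-solved from it is an assumption.

TB = 1024  # gigabytes per terabyte

def monthly_bill(gb_transferred, rate_per_gb=1400 / (5 * TB)):
    """Estimate a usage-based monthly fee at a flat per-GB rate."""
    return gb_transferred * rate_per_gb

full = monthly_bill(5 * TB)   # the article's scenario: 5 TB/month
light = monthly_bill(1 * TB)  # use less, pay less
print(f"5 TB/month: ${full:,.2f}")
print(f"1 TB/month: ${light:,.2f}")
```

The appeal to a startup is the shape of that function: cost scales down to near zero with usage, where a rack of owned servers is a fixed cost whether or not anyone shows up.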

Now that companies are getting hip to this idea, expect prices to come down even further. Sun already leases space on its grid network. IBM has an on-location variant. Hewlett-Packard has an array of offerings. There are even rumors that Google will get into the market with a free offering supported by advertising. And, of course, there will be startups.

The availability of cheap, reliable and easy-to-deploy computing services could enable a whole new class of entrepreneurs to get their ideas off the ground.  It’s just one more example of IT’s potential for dramatic business change.

Utility Computing Train is Coming, But It May Be Late to Your Station

The move to utility or “cloud” computing shows every sign of reaching critical mass over the next couple of years.  But it won’t be driven by corporate data centers.  The momentum, instead, is coming from two factors that increasingly dictate the pace of innovation: startups and fear.

In 1991, noted technology columnist Stewart Alsop wrote, “I predict that the last mainframe will be unplugged on 15 March 1996.”  Yet as of last year, there were still 10,000 mainframes running worldwide, according to IBM.  Was Alsop wrong? Technically, yes, but the shift that he foresaw is happening.  It’s just being driven by different factors than he expected.

Technology innovation today follows a strikingly consistent pattern. New companies with no legacy base make the switch first while the people with the most to lose are the last ones to change. Instead, they jump on board when they discover that new technology addresses a significant pain point.

Both forces are evident today in utility computing. Robert Scoble wrote persuasively last November about the “serverless” Internet company. His comments were prompted by a meeting with the CEO of Mogulus, a streaming video firm that claims not to own a single server.  What interested me most about Scoble’s remarks were the 65 comments that follow.  Many are from other small companies that are building IT infrastructure from the ground up without servers.  Some of these companies are offering high-bandwidth services on a very large scale, demonstrating that scalability and reliability aren’t a problem. In fact, any startup business today should look first at outsourcing its IT infrastructure before investing in a single square foot of computer room space.

Meanwhile, utility services are actually achieving critical mass in a corner of the mainstream corporate IT market: storage. Services like Amazon’s S3 now have well over 300,000 customers.  EMC just joined the fray by launching an online backup service and hiring a top former Microsoft executive to lead its cloud computing initiative.

The storage industry has been a technology innovator recently because storage is a major pain point for many companies.  With capacity requirements expanding at 30% to 50% annually, people are desperate to do something to manage that growth.
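As a back-of-the-envelope check on why that growth rate hurts, compound growth at 30% to 50% a year means capacity needs double roughly every two years. A minimal sketch, using only the growth figures cited above:

```python
import math

# How fast does storage demand double at the growth rates the column cites?
# Pure compound-growth arithmetic on the 30%-50% annual figures; no vendor
# data is assumed.

def doubling_time_years(annual_growth):
    """Years for capacity needs to double at a compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

for rate in (0.30, 0.40, 0.50):
    print(f"{rate:.0%} growth -> doubles in {doubling_time_years(rate):.1f} years")
```

At 30% growth, demand doubles in about 2.6 years; at 50%, in about 1.7 years. That treadmill is exactly the kind of pain point that makes an outsourced storage utility attractive.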

The rapid adoption of utility computing seems likely to continue, but with a curve that looks like a landscape of the Himalayan mountains.  In some segments of the market — like startups — utility infrastructures will become the status quo.  In others — like corporate data centers — adoption will come only as the technology addresses specific pain points.

This jagged adoption curve is why there’s so much debate today over the future of the cloud.  Contrast Scoble’s observations, for example, with a recent CIO Insight article in which a CTO outlines his reservations about cloud computing or a CIO Insight reader forum where IT managers take issue with Nicholas Carr’s forecast that IT will increasingly become a central utility.

This debate is happening because the need for utility computing is not perceived to be compelling in all cases.  Perhaps this is why Gartner predicts that early technology adopters will purchase 40% of their IT infrastructure as a service by 2011. Which means that the other 60% will still be acquired conventionally.

The utility computing train is coming, but it won’t arrive at the same time for all organizations. Check your local schedule.

Reshape IT Via New Models Like Software-as-a-Service

IT should embrace SaaS enthusiastically, as it can save a whole lot of headaches building prototypes that users reject.

Anyone who’s been in IT for more than a few years knows the dirty little secret of the profession: many IT projects (in fact, most of them, in my experience) fail. That’s been the story as long as I can remember. Why, after so many years, are we still so frustrated by failure?

There are three main reasons I’ve observed:

  • In too many companies, IT is an island that is organizationally and even physically removed from the business it serves.
  • Too many users suffer from throw-it-over-the-wall syndrome, which leads to projects that fail to match the needs that exist at delivery.
  • Turnover and organizational change undermine too many projects, making them irrelevant by the time they’re delivered.

Let’s look at how you can approach each problem.

IT is an island – IT people themselves are often too willing to accept a balkanized structure that isolates them from the business. This is a bad idea for so many reasons, but the insular, often introverted nature of technical professionals lets them rationalize this situation. They don’t communicate well with the business side, so they settle for separation.

You can’t change people’s personalities, and you can’t force people to work in situations that make them uncomfortable. But you can make sure that IT project leaders have the capacity to work productively with business end-users. That means not talking down or clamming up, but rather showing tolerance, acceptance, and humor. Your project managers are ambassadors. You need to select people with strong diplomatic skills.

With the right ambassadors in place, you can afford to set the rest of your IT organization apart to some degree. The project leaders should serve as both diplomat and translator, buffering the relationship with the business side while speaking both languages fluently.

Customer accountability – The throw-it-over-the-wall problem begins with the user sponsor, and is perpetuated by gullible IT organizations. Often, the perpetrator is a senior business-side executive, a “big idea” type who conceives of a grand vision and then hands off half-baked requirements to an IT group that often doesn’t fully understand what it’s supposed to deliver. Six months later, IT comes back with a prototype, by which time either the requirements have changed, the user has moved on, or he or she has forgotten about the whole thing.

Let’s face it: no one likes creating spec documents or sitting through progress report meetings. They’re tedious and boring. But they are absolutely essential if a project is to remain on track. The CIO needs to be the bad guy here. He or she must insist upon project management discipline and review meetings at least once a quarter to make sure the project is still relevant. The CIO needs the backing of a top company executive in taking this approach. Otherwise, IT will be buffeted by constant changes in the business environment. Which leads to the final problem.

Organizational change – How many managers can you name in your organization who have been in the same job for more than two years? In many companies today, half the leadership has taken on a new assignment in that time. So why do we still start IT projects that have deliverables scheduled a year or more down the road?

The business environment is too changeable these days to permit that kind of scheduling. Projects must be componentized, with deliverables scheduled every few months. If you can’t decompose a project like that these days, it probably isn’t a very good idea in the first place.

Technology may be riding to the rescue. The rise of the so-called “software as a service” (SaaS) business is enabling users to try applications before they commit to them. SaaS delivers applications over the Internet, and users can often achieve results in a matter of days. In some cases, users may find that a SaaS solution is all they need. But even if they don’t, SaaS is a heckuva way to prototype different approaches and solutions. A lot of IT organizations are approaching SaaS warily, worried that they will lose control. Instead, they should be embracing the model enthusiastically. It can save them a whole lot of headaches building prototypes that users reject.