Data Quality Problems are Corporate IT’s Dirty Little Secret

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

In the early days of home broadband, I was a customer of a very large cable company whose name I'm sure you know. When making one of my frequent calls to the technical support organization, I was typically required to give all my account information to a representative, who then transferred me to a support tech, who asked me to repeat all of the account information I had just given the first person. If my problem was escalated, I got transferred to a third rep who would ask me for, you guessed it, my account information.

This company’s reputation for lousy customer service was so legendary that if you type its name and “customer service” into a search engine today, you get mostly hate sites. One of its biggest problems was poor data integration. For example, its sales database was so fragmented that I routinely received offers to sign up for the service, even though I was already a customer. I’m a customer no longer.

Does this sound familiar? Most of us have had frustrating experiences of this kind. Thanks to acquisitions, internal ownership squabbles and poor project management, most large companies have accumulated islands of disintegrated, loosely synchronized customer data. PricewaterhouseCoopers' 2001 Global Data Management survey found that 75 percent of large companies had significant problems as a result of defective data. It's unlikely the situation has improved much since then. Data quality is corporate America's dirty little secret.

The Path to Dis-Integration
There are several reasons for this, according to Paul Barth, an MIT computer science Ph.D. who’s also co-founder of NewVantage Partners. One is acquisitions. As industries have consolidated, the survivors have accumulated scores of dissimilar record-keeping systems. Even basic definitions don’t harmonize. Barth tells of one financial services company that had six different definitions for the term “active customer.”

A second problem is internal project management. The decentralization trend is pushing responsibility deeper into organizations. There are many benefits to this, but data quality isn’t one of them. When departmental managers launch new projects, they often don’t have the time or patience to wait for permission to access production records. Instead, they create copies of that data to populate their applications. These then take on a life of their own. Synchronization is an afterthought, and the occasional extract, transform and load procedure doesn’t begin to repair the inconsistencies that develop over time.

Data entry is another problem. With more customers entering their own data in online forms and fewer validity checks being performed in the background, the potential for error has grown. Data validation is a tedious task to begin with, and really effective quality checks tend to produce a lot of frustrating error messages. E-commerce site owners sometimes decide it's easier just to allow telephone numbers to be entered in the ZIP code field, for example, as long as it moves the customer through the transaction.
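Below is a minimal sketch of the kind of background check that often gets skipped. The form field name, the regular expressions and the US five-digit ZIP assumption are all illustrative, not drawn from any particular site.

    import re

    ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")          # US ZIP or ZIP+4
    PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,15}$")   # loose phone-number pattern

    def validate_order_form(form):
        """Return a list of error messages for the hypothetical 'zip' field."""
        errors = []
        zip_code = form.get("zip", "").strip()
        if not ZIP_RE.match(zip_code):
            message = "ZIP code must look like 12345 or 12345-6789."
            if PHONE_RE.match(zip_code):
                message += " (This looks like a phone number.)"
            errors.append(message)
        return errors

    print(validate_order_form({"zip": "555-867-5309"}))  # -> one error message

A check like this takes minutes to write; the hard part is the organizational will to show the customer an error message instead of letting the bad value through.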

Finally, data ownership is a tricky internal issue. Many business owners would rather focus on great features than on clean data. If no one has responsibility for data quality, the task becomes an orphan. The more bad data is captured, the harder it is to synchronize with the good stuff. The problem gets worse, and no one wants the job of cleaning up the mess.

Barth has a prescription for addressing these problems. It isn’t fast or simple, but it also isn’t as difficult as you might think. Next week we’ll look at an example of the benefits of good data quality and offer some of his advice for getting your data quality act together.

IT Can Innovate in Cutting Energy Costs

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

As the price of gasoline has raced past the once-unthinkable level of $4 a gallon in the US, everyone is trying to come to grips with the implications of this historic event. Boomers like me remember the last time gas prices tripled, in the 1970s. It plunged the US into a protracted recession with 18% annual inflation. Such an outcome is unlikely this time – the economy is much more globalized today than it was in those days – but it's fair to say that the ripple effects of this shock will continue for years.

But times of crisis are also times of opportunity. The energy scare of the 1970s led to a near tripling of automotive fuel efficiency and much broader awareness of tactics for avoiding waste. It also led to shifts in the balance of power in many markets. Small, efficient players seized the opportunity to chop away at the entrenched rivals and make amazing gains. Toyota, which was a bit player in the US in 1970, was a major force a decade later.

The next couple of years are going to be traumatic. The price of everything is going to go up. The market leaders will sigh and say it's out of their hands, but you don't have to be so resigned.

In many markets, new leaders will emerge among companies that can hold the line on prices by making quantum gains in efficiency. IT will be a competitive edge.

Quantum gains don’t come from adjusting the power options on PCs or turning off monitors at night. They come from rethinking entire processes. The gainers will be the companies that can innovate in reducing energy costs in areas like these:

Logistics – Moving goods from one place to another as quickly and cheaply as possible will be a competitive differentiator in many industries. The latest modeling and linear programming tools can identify the cheapest and most direct logistics options (see the sketch after this list for a toy example). Yield management can optimize resources and help companies choose which under-performers to discard.

Workforce management – Airlines have raised fares 21 times this year and that trend will continue as long as fuel prices rise.  Business travel is on its way to becoming an expensive luxury. Back in the office, it’s becoming increasingly pointless for businesses to force their employees to commute to work each day just to sit in meetings. A big part of reducing costs in the future will be reducing real estate footprints, commuting costs and dollars burned on air travel. Technology has a huge role to play here and innovative firms that can create truly mobile and virtual workforces will gain a cost edge over the bigger companies, most of whom still barely even offer telecommuting.

Power management – When it comes to reducing power consumption, IT brings a lot to the table. Data centers consume an estimated 3% of electrical power. Moving processing tasks to off-peak hours, virtualizing under-utilized servers, redesigning data centers to lower cooling costs and switching users from desktops to more power-efficient laptops all have benefits.

The bigger opportunity, though, may be to outsource large parts of the IT infrastructure. I recently wrote about the rise of utility computing services that consolidate many customers into one giant data center. These services are so inexpensive that, for many companies, it simply won’t make sense to buy new servers any more. Even the fully amortized costs of in-house IT won’t match those of a cloud computing service.

Outsourcing – Speaking of outsourcing, the energy crisis lends new momentum to this decade-old trend. Faster networks and better software tools will make it possible for businesses to site operations in low-cost locations, including those with lower energy costs. Many corporations already outsource customer service and software development, but any function that can be performed by distributed teams is a candidate: accounting, marketing operations, data analysis and even the help desk. There is a human factor to be considered in moving work cross-country or overseas, of course, but the vitality of the company may be at stake.
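Returning to the logistics item above, here is a minimal sketch of the kind of model a linear programming tool solves: a toy transportation problem that picks the cheapest shipping plan. The warehouses, stores, costs and quantities are invented for the example.

    from scipy.optimize import linprog

    # Per-unit shipping cost from two warehouses (W1, W2) to two stores (S1, S2).
    # Decision variables: [W1->S1, W1->S2, W2->S1, W2->S2]
    cost = [4, 6, 5, 3]

    # Each warehouse can ship at most its available supply.
    A_ub = [[1, 1, 0, 0],   # shipments leaving W1
            [0, 0, 1, 1]]   # shipments leaving W2
    b_ub = [80, 70]         # units on hand at W1, W2

    # Each store must receive exactly its demand.
    A_eq = [[1, 0, 1, 0],   # shipments arriving at S1
            [0, 1, 0, 1]]   # shipments arriving at S2
    b_eq = [60, 50]         # units demanded at S1, S2

    result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(result.x)    # optimal shipment quantities
    print(result.fun)  # total shipping cost at the optimum

Real logistics models add many more lanes, time windows and capacity constraints, but the structure is the same: a cost to minimize subject to supply and demand constraints.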

What is your IT organization doing to streamline operations and reduce energy costs? Tell us in the comments area below.

Giving Up Control Unleashes Wisdom of Crowds

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

At IBM, podcasts are now a popular form of internal communication. One IBM executive who used to hold an unwieldy weekly conference call with 500 people spread across the globe now podcasts the same information. Listenership has doubled. At a company in which 40% of the employees work primarily outside of an office, podcasting is revolutionizing internal communication.

It wasn’t always that way. Podcasting was introduced without fanfare at the computing giant, but it quickly achieved traction because employees were allowed to experiment and “play” with the new medium, according to George Faulkner, who is one of IBM’s most visible podcasters.  In fact, one of the first successful internal uses of podcasts arguably had no business value at all: it was an IBM “battle of the bands.”

But that play led to experimentation, which yielded practical business applications that have reduced internal communication costs and improved IBM’s outreach to investors and the public. This is one more example of how letting go of control can unleash the innovative energy within an organization.

Sabre Holdings learned the same lesson with an enterprise Web 2.0 platform that is changing the way its employees communicate and creating a company knowledge base. The software is known internally as SabreTown and is now being packaged for sale as Cubeless. It’s social networking software that any company can use behind its firewall.

SabreTown was derived from a website called Bambora that Sabre constructed for consumers. Members define their areas of expertise and agree to answer questions from other members in those areas. The more exchanges that take place, the richer the member’s profile becomes and the more useful the travel database.

SabreTown has helped unlock untapped expertise within Sabre Holdings, according to Al Comeaux, senior vice president, corporate communications for Sabre. With Sabre’s rapid evolution into a globally distributed company (only 45% of employees are U.S.-based today, compared to 85% eight years ago), there was a need to break down barriers of location and time.

Getting employees to buy in to the community meant giving up control over how they used the tool. Sabre rolled out the application internally without restricting the topics employees could discuss. “The more we can get people talking, the more we can capture,” says Erik Johnson, general manager of Cubeless.

More than 200 groups have formed within SabreTown, and personal blog spaces are available to everyone. Any information entered into SabreTown is processed by the relevance engine and built into employees’ personal profiles. Sabre is effectively creating a massive knowledge base that employees willingly populate with their own information. And the company is building out SabreTown’s capabilities to make it into a full-blown social network.

A key element of success was giving up control. Sabre Holdings’ executives say that SabreTown would never have taken off internally if Sabre had tried to dictate how it could be used. By letting people play, innovation took over and business applications emerged. SabreTown is a hit within Sabre and it will pay big dividends as employees share expertise.

As I noted last week, large organizations and their managers struggle with giving up control, but that’s often precisely what they need to do. Web 2.0 technologies have demonstrated that the wisdom of crowds is greater than the knowledge of any one manager. By giving up control, organizations can gain loyalty and respect, which, paradoxically, enhances control.

Enterprise Social Networks Are Key to Corporate Knowledge Bases

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

At the Central Intelligence Agency, Intellipedia is challenging long-held views about information propriety.  Intellipedia is an internal application of a wiki, which is one of the most popular enterprise social media tools.  The CIA is using the wiki to capture intelligence gathered from its global network of field agents and internal researchers.  It’s part of a broad effort by the notoriously secretive organization to break down silos of information and create an organizational knowledge base.

It’s also changing the culture of the agency.  In an address last week to the Enterprise 2.0 conference in Boston, the CIA’s Sean Dennehy noted that the success of shared knowledge bases requires giving up control.  “We need to fight against locked down spaces,” he said. The comment drew murmurs of surprise and some applause from the audience, who couldn’t quite believe it came from a CIA executive.

If the CIA can learn to give up control, imagine what your company can do. Social media is all about sharing. It’s based on the principle that participants give a little to get a lot. The more you contribute to the body of knowledge, the more everyone benefits. This principle underlies the success of a wide range of collaborative Internet sites, ranging from del.icio.us to Wikipedia. If Web 2.0 has demonstrated anything, it is that most people are motivated to do the right thing.

Pharmaceutical giant Pfizer has made the same discovery. Its Pfizerpedia wiki has more than 10,000 articles as well as numerous how-to videos. A nascent podcasting program is now spreading information by voice, and employees are trading Web discoveries through a giant internal social bookmarking platform. Pfizer is learning the power of sharing.

That’s a difficult concept for some managers to internalize. Traditional organizational structures are based on the idea that employees can’t be trusted to do the right thing.  They need to be constantly monitored and corrected to avoid going off the rails.

Social media is demonstrating that the opposite is true. It turns out that when you remove hierarchy, the community usually takes responsibility for policing its members and ensuring quality work. This is especially true within groups of professionals, and it couldn’t happen at a better time.

Large organizations need to start capturing their organizational knowledge. There are compelling reasons to do this. Some 64 million baby boomers will retire in the next three years. Over the next 15 years, the size of the workforce between the ages of 30 and 49 will shrink by 3.5 million, and by 2015 there will be 16 million more workers over the age of 50 than there are today.

These workers will each take with them years of accumulated knowledge. While some of that knowledge will be obviated by business evolutions, the skills needed to design an assembly line or calculate a cash flow statement won’t change. This makes it more critical than ever for organizations to capture institutional knowledge before it fades away.

Social media tools are an ideal way to do this because people populate the database themselves. In the same way that Facebook or MySpace members continually add personal information to their profiles, business professionals can develop rich descriptions of their skills and experiences by reaching out and helping each other. In the past, enterprises had only rudimentary ways to capture this information. With the arrival of internal social networks, they can now store everything in an enterprise knowledge base.

Over the next couple of entries, I’ll look at how this is playing out in US organizations today and how you can get your users – and managers – on board.

The Collaboration Paradox

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

I’m a big believer in the value of social media, enough to have written two books on the subject.  In the spirit of practicing what I preach, I posted the draft of the first book on my blog 18 months ago with good results.  Several thousand visitors read the chapters and several dozen contributed meaningful feedback.

So when I was writing the second book this spring, I thought I would go one better. I posted the entire draft on a wiki and used my newsletter, blog and personal contacts to invite people to contribute to the finished product.

Few did.  In fact, over the course of six weeks only nine people joined the wiki and only three or four made meaningful changes. It turned out that a blog, with its limited capacity for collaboration, was far more effective in achieving my collaborative goal.

This got me thinking about the paradox of group collaboration. There’s no question that wikis can make teams more productive. Yet they are probably the greatest disappointment of the suite of Web 2.0 tools.

I’m involved in three or four organizations that use wikis to coordinate people’s activities. Not once have I seen them used to their potential. Of the few people who actually contribute to the wikis, most send a duplicate copy of the content by e-mail to make sure everyone is in the loop. Many public wikis survive only because a small number of members maintain them. Few have many active contributors.

Yet there are some phenomenally successful examples of wiki technology.  The most famous is Wikipedia, with its 10 million articles in 253 languages. Wikipedia founder Jimmy Wales recently started Wikia, a library of 2,500 special-interest wikis in 66 languages that allows people to create reference materials from the perspectives that are important to them.

There’s also evidence that wikis are enjoying good success behind the firewall. IBM has said that wikis are its number one social media tool, making it possible for a widely dispersed workforce to collaborate.

Why are wikis such a disappointment when they have so much potential? I think the reason is that productivity has nothing to do with it.

Wikis succeed when the interest of every member is served by participation. Projects that mainly benefit an individual or a single organization offer few compelling reasons for others to get involved. It turns out that people are more than happy to comment on another’s work, but getting them to actively contribute requires an extra measure of self-interest. People were happy to comment on my book, but the incentives for them to actively contribute to someone else’s work were insufficient. On the other hand, people who are passionate about coin collecting have an incentive to make the numismatics section of Wikipedia an accurate public record.

Productivity is often held out as an incentive for people to use new technology, but I believe that’s only a minor factor.  People continue to use spreadsheets when databases would do a better job.  They fumble along with e-mail, despite its many limitations, because that’s what they know.  The most successful new technologies have been those that enable people to transform their work or their way of life. Incremental improvements are never enough to sustain meaningful behavior change.

Old PCs Pose Environmental, Regulatory Threat

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

We all know how great it feels to have a new PC plunked down on our desktop or in our briefcase.  But for IT organizations, that exhilaration is increasingly compounded by anxiety.  What should they do about disposing of the computer that’s being replaced?

This issue is gathering importance as the number of old computers grows.  Gartner has forecast that consumers and businesses will replace more than 925 million PCs worldwide by 2010.  And that’s just one category of computer.  Gartner expects another 46 million servers to ship during the next five years, and about one billion mobile phones to be discarded yearly beginning in 2010.

There are obvious ecological concerns that attend this problem, of course. Most personal computers contain chemicals that can poison water supplies and old CRT monitors have lead linings that should never make their way into a landfill.

But the risks to businesses these days can hit even closer to home. Discarded computers can contain proprietary data that, if disclosed, can open a company to a host of legal and compliance problems. Among the regulations that provide for severe financial penalties and even imprisonment for improper data protection are the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act and the Sarbanes-Oxley Act. There are also a host of local regulations to consider, the result of Congress’s decision many years ago to make environmental rules the domain of individual states.

Companies have gotten by for years on ad hoc approaches to computer disposal. Often, they sell old machines to employees, give them to charities or palm them off on trash-hauling businesses that dispose of the equipment in places unknown. But regulators don’t buy the “out of sight, out of mind” philosophy. Most place the onus of ensuring data protection on the original owner. That means that if a PC or cell phone containing protected information turns up in a landfill overseas somewhere, the firm that captured the data is on the hook for any legal obligations.

A particular concern is the trash haulers, who often piggyback computer disposal services on top of their basic business of hauling away Dumpsters full of refuse. While many of these companies are no doubt legitimate, some try to cut costs by piling IT equipment into containers and shipping them overseas.

In some cases, this equipment is simply thrown into open holes in the ground, causing unknown public health problems. Many Third World countries also have subcultures of entrepreneurs who disassemble equipment and sell the piece parts on the open market. In 2006, the BBC bought 17 second-hand hard drives in Nigeria for $25 each and recovered bank account numbers, passwords and other sensitive data from them. Under many regulations, the original buyers of that equipment could be liable for any security or privacy breaches that resulted.

Nearly every business should have a plan for disposing of end-of-life computers. If storage equipment is to be repurposed, it needs to be thoroughly erased. The Department of Defense’s 5220.22-M erasure standard ensures that media is completely cleansed of recoverable data. A simpler approach is to take a hammer and smash the storage media into smithereens. Whatever tactic you use, you need to document the data destruction using the appropriate compliance forms.
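For a sense of what overwrite-based erasure involves, here is a minimal sketch of a multi-pass file wipe: zeros, ones, then random bytes. It is an illustration only, not a certified implementation of the DoD standard; the file name is hypothetical, and whole drives, SSDs and modern filesystems need dedicated, vendor-verified tools.

    import os

    def overwrite_file(path, passes=(b"\x00", b"\xff", None)):
        """Overwrite a file in place with each pattern; None means random bytes."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for pattern in passes:
                f.seek(0)
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1024 * 1024)
                    data = os.urandom(chunk) if pattern is None else pattern * chunk
                    f.write(data)
                    remaining -= chunk
                f.flush()
                os.fsync(f.fileno())

    overwrite_file("customer_export.csv")  # hypothetical file slated for destruction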

A new practice has also emerged called IT Asset Disposition (ITAD). ITAD vendors essentially take the disposal process off their customers’ hands, providing tracking, verification and even insurance against liability. Some firms can also remanufacture components and sell them, thereby reducing costs for their customers. Research firm International Data Corp. has published a good study on the market. The site Greener Computing also has helpful advice.

Firefox Solidifies Mind Share Lead

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Market share gains by the Firefox Web browser continued into early 2008, with Firefox now commanding 27 percent of all website visits. In total installed base, it still trails far behind Microsoft’s Internet Explorer, but the open-source browser has already established itself as the mind share leader. That’s a remarkable feat in less than four years.

Firefox’s success is a tribute to the power of community development and the stickiness of open-source applications. It’s an example of how giving up control can enhance market leadership. Today, I would argue, Firefox is the dominant browser on the Internet.

How can that be when Firefox has only about a third of IE’s market share? Here’s a case where share is deceptive. For one thing, Firefox has momentum, having grown from less than 1% in 2004 to its present base of an estimated 140 million users. Secondly, Firefox has the allegiance of the most influential Internet users: enthusiasts, developers and people who contribute actively to social media sites. If there’s anything the history of the software industry has shown us, it’s that platforms that generate developer enthusiasm invariably edge out their competitors.

One can even argue that Firefox is already the top browser among these thought leaders. For example, look at the results of this poll from LifeHacker, a site devoted to computer and personal productivity advice. It’s unscientific, but still interesting. Asked why they use Internet Explorer, only about 15% said it was because they actually preferred the software. Half of the respondents don’t use IE at all. So while Firefox may have relatively low market share among all computer owners, it has achieved parity among the audience of serious enthusiasts.

This is where the beauty of an open architecture comes into play. Firefox was designed and licensed from the beginning to accommodate user-developed extensions. More than 2,000 of them are listed on the Firefox add-ons site, ranging in weekly download activity from hundreds of thousands to fewer than a dozen. Some of the extensions aren’t very good, but that doesn’t really matter. Users make their own choices, and sites like LifeHacker take care of publicizing the best work.

Microsoft also permits developers to write extensions to Internet Explorer, but its approach is quite different. In the early days, the IE software development kit was tightly controlled, and add-ons had to pass Microsoft scrutiny in order to even be listed in the official directory. Microsoft has made an earnest effort to loosen up this process, but the company is culturally resistant to this kind of unfettered development. In contrast to Firefox’s 2,000 extensions, Microsoft’s official directory of IE add-ons lists fewer than 100 entries. Perhaps that’s because, as the company states on its homepage, “The add-ons available here have been carefully screened by Microsoft and rated by users to help you select the ones that suit your needs and preferences.” Perhaps users don’t want their choices screened.

Listen to this short podcast from last summer’s O’Reilly Emerging Technology Conference. In it, developers contrast the chaotic mess of the Firefox developer forums with the muted restraint of the IE third-party community. As StumbleUpon’s Garrett Camp notes, new Firefox extensions and updates spark endless analysis and debate, while IE developers rarely talk at all. Intensity makes markets dynamic and innovative, and that’s what Firefox has.

This market share war is meaningless from a revenue standpoint because browsers are free. But Firefox’s continuing success is a powerful case for the superiority of the open-source model. Conceding power may paradoxically be the best way to gain power.

Utility Computing Finds its Sea Legs

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Nearly a decade ago, I worked at an Internet startup in the crazy days of the early Web.  Success demanded speed, rapid growth and scalability.  We were frequently frustrated in those days by the demands of building out our computing infrastructure.

The company burned more than $1 million in that pursuit. Racks of Unix servers had to be acquired and configured.  The equipment was set up at a co-location facility that required on-site care and feeding by technical staff.  Installation and testing took weeks.  Maintenance was a time-consuming burden requiring a staff of technicians who had to be available at all hours.  At least twice during the first year, server crashes took the company off-line for more than a day.  Stress and burnout were a constant issue.

Today, I suspect the company would do things quite differently.  Instead of acquiring computers, it would buy processing power and storage from an online service.  Startup times would be days or weeks instead of months.  Scaling the infrastructure would require simply buying more computer cycles. There would be no cost for support personnel. Costs would be expensed instead of capitalized.  More importantly, the company would be up and running in a fraction of the time that was once required.

The current poster child of utility computing is, of all companies, Amazon.com.  An article in last week’s Wired magazine describes the phenomenal success of the initiatives called S3 and EC2 that Amazon originally undertook to make a few dollars off of excess computing capacity. Today, the two services count 400,000 customers and have become a model that could revolutionize business innovation.

That’s right, business innovation. That’s because the main beneficiaries of utility computing are turning out to be startups. They’re using the services to cut the time and expense of bringing their ideas to market and, in the process, propelling innovation.

The utility computing concept has been around for years, but questions have persisted about who would use it. Big companies are reluctant to move their data offsite and lose control of their hardware assets. They may have liked utility computing in concept, but the execution wasn’t worth the effort.

It turns out the sweet spot is startup firms. Many business ideas never get off the ground because entrepreneurs can’t raise the $100,000 or more needed for capital investment in computers. In contrast, Amazon says it will transfer five terabytes of data from a 400G-byte data store for a monthly fee of less than $1,400. If you use less, you pay less. It’s no wonder cash-strapped companies find this concept so appealing. Wired notes that one startup that uses Amazon services dubbed one of its presentations “Using S3 to Avoid VC [venture capital].”

Now that companies are getting hip to this idea, expect prices to come down even further. Sun already leases space on its grid network. IBM has an on-location variant. Hewlett-Packard has an array of offerings. There are even rumors that Google will get into the market with a free offering supported by advertising. And, of course, there will be startups.

The availability of cheap, reliable and easy-to-deploy computing services could enable a whole new class of entrepreneurs to get their ideas off the ground.  It’s just one more example of IT’s potential for dramatic business change.

Build a Culture of Sharing

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Last week I spoke to a group of technology executives about the value of social networks. My presentation on this topic is full of optimism about the value that organizations can achieve from sharing the expertise of individual employees with internal and external constituents. The online knowledge maps that companies are now creating truly advance the cause of the corporate intranet.

Afterwards, a manager from a major technology company buttonholed me in the parking lot.  His company is often held up as a shining example of progressive thinking in social media and it has made Web 2.0 a foundation of its marketing plans.  In light of that, what this manager told me was a surprise.

His company was trying to encourage employees to blog and use an internal social network to share information and expertise fluidly amongst its huge global workforce.  But it was being frustrated by the intransigence of many people, he said, particularly older employees. They simply refused to participate.

After overcoming my initial befuddlement, I realized that this situation is all too common in big companies. The reason senior employees resist sharing is because they believe it’s a threat to their job security.  These people have spent many years developing their skills, and they fear that giving up that information may make them irrelevant at a time when companies are all too willing to replace expensive senior people with low-paid twentysomethings.

The politics of sharing

I was reminded of a recent conversation I had with a friend who works in the IT department at a venerable insurance company.  She told me that the manager of that group instructed employees not to share information about the best practices the group was employing in managing projects.  The manager feared that disclosing that information would threaten his value to the organization.

As troubling as these stories seem, the motivations behind people’s behavior are understandable. Few companies give much credence to the value of institutional memory any more. In fact, in today’s rapidly changing business climate, too much knowledge of the way things used to be done is often seen as a negative. But it’s really a negative only if people use memories as a way of avoiding change. Knowledge of a company’s culture and institutions is actually vital to the change process. Any CEO who’s been brought in from the outside to shake up a troubled company can tell you that his most valuable allies are the people who help navigate the rocky channels of organizational change.

My suggestion to this manager was to learn from social networks. In business communities like LinkedIn, people gain prestige and influence by demonstrating their knowledge. The more questions they answer and the more contacts they facilitate, the greater their prestige. In social networks, this process is organic and motivated by personal ambition. Inside the four walls of a company, it needs some help.

As companies put internal knowledge networks into place, they need to take some cues from successful social media ventures.

  • Voting mechanisms should be applied to let people rate the value of advice they receive from their colleagues.
  • Those who actively share should be rewarded with financial incentives or just publicity in the company’s internal communications.
  • Top executives should regularly celebrate the generosity of employees who share expertise.
  • Top management should commit to insulating these people from layoffs and cutbacks.

Sharing expertise should enhance a person’s reputation and job security. Management needs to make the process a reward and not a risk.

Stop Thinking Solutions; Start Thinking Needs

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Last week, I introduced you to Tony Ulwick and Lance Bettencourt of Strategyn, a company that helps businesses optimize customer inputs to improve innovation. Their methodology is all about doing away with subjective terms and focusing on the real barriers that customers and internal stakeholders encounter in getting their jobs done. In part two of my interview with them, Ulwick and Bettencourt discuss the details of getting customers to avoid generalities and assumptions in order to create a context in which innovation can flourish.

Q: You suggest that solutions shouldn’t be referenced in customer requirements statements. Why not?

Ulwick: It isn’t the responsibility of the customer to come up with innovative solutions, but rather to help the company understand the needs they have for the job they’re trying to get done. When solutions are included in a need statement, it focuses the customer on the “here and now” rather than on what they are trying to accomplish. Getting at what they’re trying to get done is the true basis for innovation. In fact, a need statement that includes a solution has a built-in expiration date, which is problematic. The ideal need statement should be just as relevant ten years from now as it was ten years ago. It should guide short-, medium- and long-term innovation. A person can’t imagine today what a solution will look like in ten years.

Q: When you ask people to define their unmet needs, they often simply ask for a better version of what they already have. How do you create questions that get at their true unmet needs?

Ulwick: If you know that a needs statement must not include any reference to a solution, and that needs must relate to the job the customer is trying to get done, then you don’t have that problem.

We ask customers questions like: What takes them a lot of time? What introduces variability into the process? What issues do they have with getting the right output from each step? This straightforward line of questioning focuses on the job rather than on solutions and ensures that metrics relate to time, variability and output. Those are the three types of metrics we see, regardless of customer type or industry.

Q: You recommend against using adjectives and adverbs in need definition. Can you give an example of how this rule might apply to an internal customer defining a need to the IT organization to improve workplace productivity?

Bettencourt: Let’s take one that has great relevance to the IT group’s role – the task of collaborating with others. If you were to ask employees about what introduces variability into this process, they might say something like “other employees are not dependable.”

The metric for this is problematic because “dependable” can mean many different things. If you ask someone to describe undependable behavior, she might say things like “forgets about meetings” and “fails to pass along important information.” These statements begin to get to the level of specificity we need for innovation. They become viable need statements when phrased as “minimize the likelihood that a team member forgets to attend a scheduled meeting” and “minimize the likelihood that a team member fails to pass on to other team members information that is needed for decision-making.”

Q: Can you offer any guidance on how to deal with terms that are inherently difficult to define, such as “simple” or “easy to use?”

Ulwick: Perhaps the best way is to ask the customer to describe something that is not simple or easy to use. Customer needs often have to do with reducing time, variability, waste and inefficiency. Asking him to provide examples of what getting the job done looks like when it isn’t simple or easy can be very productive. This also holds for other difficult-to-define adjectives, such as “reliable,” “durable” and “scalable.”

Q: How effective are customers at defining their own unmet needs, rather than simply asking for a way to do what they’re already doing a little better?

Bettencourt: If you ask them about the struggles they encounter in the job they’re already trying to get done, they can be quite forthcoming. One way to approach that is to walk them through the steps and ask them about time, variability and output concerns at each step. However, it’s also possible to ask them about what they like and dislike about the current solutions they’re using.

The key is to understand that a customer’s likes and dislikes with today’s solutions have to do with their needs in getting the job done. Again, what’s critical is to understand the requirements of a good need statement; you don’t need to be restricted to asking just one specific type of question.

Q: Many engineering-driven organizations have a culture that doesn’t invite customer input. How do you challenge this culture and effectively turn the focus back on customer needs?

Ulwick: We find that engineers are actually among the most receptive to outcome-driven innovation thinking. They know how hard it is to innovate without a clear understanding of the customer’s unmet needs, and they appreciate systematic thinking. In mature markets, where problems can’t be easily addressed by engineering-based innovation, engineers appreciate the outcome-driven approach. It gives them specifics to work with instead of taking stabs in the dark.