Build a Culture of Sharing

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Last week I spoke to a group of technology executives about the value of social networks. My presentation on this topic is full of optimism about the value that organizations can achieve from sharing the expertise of individual employees with internal and external constituents. The online knowledge maps that companies are now creating truly advance the cause of the corporate intranet.

Afterwards, a manager from a major technology company buttonholed me in the parking lot.  His company is often held up as a shining example of progressive thinking in social media and it has made Web 2.0 a foundation of its marketing plans.  In light of that, what this manager told me was a surprise.

His company was trying to encourage employees to blog and use an internal social network to share information and expertise fluidly amongst its huge global workforce.  But it was being frustrated by the intransigence of many people, he said, particularly older employees. They simply refused to participate.

After overcoming my initial befuddlement, I realized that this situation is all too common in big companies. The reason senior employees resist sharing is that they believe it’s a threat to their job security. These people have spent many years developing their skills, and they fear that giving up that knowledge may make them irrelevant at a time when companies are all too willing to replace expensive senior people with low-paid twentysomethings.

The politics of sharing

I was reminded of a recent conversation I had with a friend who works in the IT department at a venerable insurance company.  She told me that the manager of that group instructed employees not to share information about the best practices the group was employing in managing projects.  The manager feared that disclosing that information would threaten his value to the organization.

As troubling as these stories seem, the motivations behind people’s behavior are understandable. Few companies give much credence to the value of institutional memory any more. In fact, in today’s rapidly changing business climate, too much knowledge of the way things used to be done is often seen as a negative. But it’s really a negative only if people use memories as a way of avoiding change. Knowledge of a company’s culture and institutions is actually vital to the change process. Any CEO who’s been brought in from the outside to shake up a troubled company can tell you that his most valuable allies are the people who help navigate the rocky channels of organizational change.

My suggestion to this manager was to learn from social networks. In business communities like LinkedIn, people gain prestige and influence by demonstrating their knowledge. The more questions they answer and the more contacts they facilitate, the greater their prestige. In social networks, this process is organic and motivated by personal ambition. Inside the four walls of a company, it needs some help.

As companies put internal knowledge networks into place, they need to take some cues from successful social media ventures.

  • Voting mechanisms should be applied to let people rate the value of advice they receive from their colleagues (a minimal sketch of such a mechanism appears after this list).
  • Those who actively share should be rewarded with financial incentives or just publicity in the company’s internal communications.
  • Top executives should regularly celebrate the generosity of employees who share expertise.
  • Top management should commit to insulating these people from layoffs and cutbacks.
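
To make the first bullet concrete, here is a minimal sketch of what such a rating mechanism might look like. It is an illustration under assumptions, not a reference to any particular product: the function name, the 1-to-5 scale, and the in-memory tally are all hypothetical.

```python
from collections import defaultdict

# Hypothetical in-memory reputation tally: colleagues rate an answer from
# 1 to 5, and the author's score is the running sum of ratings received.
reputation = defaultdict(int)

def rate_answer(author: str, rating: int) -> None:
    """Record a colleague's 1-5 rating of an answer and credit the author."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    reputation[author] += rating

rate_answer("jsmith", 5)
rate_answer("jsmith", 4)
print(reputation["jsmith"])  # 9 -- could feed a leaderboard in internal comms
```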

Sharing expertise should enhance a person’s reputation and job security. Management needs to make the process a reward and not a risk.

Stop Thinking Solutions; Start Thinking Needs

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Last week, I introduced you to Tony Ulwick and Lance Bettencourt of Strategyn, a company that helps businesses optimize customer inputs to improve innovation. Their methodology is all about doing away with subjective terms and focusing on the real barriers that customers and internal stakeholders encounter in getting their jobs done. In part two of my interview with them, Ulwick and Bettencourt discuss the details of getting customers to avoid generalities and assumptions in order to create a context in which innovation can flourish.

Q: You suggest that solutions shouldn’t be referenced in customer requirements statements. Why not?

Ulwick: It isn’t the responsibility of the customer to come up with innovative solutions, but rather to help the company understand the needs they have for the job they’re trying to get done. When solutions are included in a need statement, it focuses the customer on the “here and now” rather than on what they are trying to accomplish. Getting at what they’re trying to get done is the true basis for innovation. In fact, a need statement that includes a solution has a built-in expiration date, which is problematic. The ideal need statement should be just as relevant ten years from now as it was ten years ago. It should guide short-, medium- and long-term innovation. A person can’t imagine today what a solution will look like in ten years.

Q: When you ask people to define their unmet needs, they often simply ask for a better version of what they already have. How do you create questions that get at their true unmet needs?

Ulwick: If you know that a needs statement must not include any reference to a solution, and that needs must relate to the job the customer is trying to get done, then you don’t have that problem.

We ask customers questions like: What takes them a lot of time? What introduces variability into the process? What issues do they have with getting the right output from each step? This straightforward line of questioning focuses on the job rather than on solutions and ensures that metrics relate to time, variability, and output. Those are the three types of metrics we see, regardless of customer type or industry.

Q: You recommend against using adjectives and adverbs in need definition. Can you give an example of how this rule might apply to an internal customer defining a need to the IT organization to improve workplace productivity?

Bettencourt: Let’s take one that has great relevance to the IT group’s role: the task of collaborating with others. If you were to ask employees what introduces variability into this process, they might say something like “other employees are not dependable.”

The metric for this is problematic because “dependable” can mean many different things. If you ask someone to describe undependable behavior, she might say things like “forgets about meetings” and “fails to pass along important information.” These statements begin to get to the level of specificity we need for innovation. They become viable need statements when phrased as “minimize the likelihood that a team member forgets to attend a scheduled meeting” and “minimize the likelihood that a team member fails to pass on to other team members information that is needed for decision-making.”

Q: Can you offer any guidance on how to deal with terms that are inherently difficult to define, such as “simple” or “easy to use?”

Ulwick: Perhaps the best way is to ask the customer to describe something that is not simple or easy to use. Customer needs often have to do with reducing time, variability, waste and inefficiency. Asking him to provide examples of what getting the job done looks like when it isn’t simple or easy can be very productive. This also holds for other difficult-to-define adjectives, such as “reliable,” “durable” and “scalable.”

Q: How effective are customers at defining their own unmet needs rather than simply asking for a way to do what they’re doing a little better?

Bettencourt: If you ask them about the struggles they encounter in the job they’re already trying to get done, they can be quite forthcoming. One way to approach that is to walk them through the steps, asking them about time, variability, and output concerns at each step. However, it’s also possible to ask them about what they like and dislike about current solutions they’re using.

The key is to understand that a customer’s likes and dislikes with today’s solutions have to do with their needs in getting the job done. Again, what’s critical is to understand the requirements of a good need statement; you don’t need to be restricted to asking just one specific type of question.

Q: Many engineering-driven organizations have a culture that doesn’t invite customer input. How do you challenge this culture and effectively turn the focus back on customer needs?

Ulwick: We find that engineers are actually among the most receptive to outcome-driven innovation thinking. They know how hard it is to innovate without a clear understanding of the customer’s unmet needs, and they appreciate systematic thinking. In mature markets, where problems can’t be easily addressed by engineering-based innovation, engineers appreciate the outcome-driven approach. It gives them specifics to work with instead of taking stabs in the dark.

Innovation Through Precision

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Does the following scenario sound familiar to you? An internal customer has come to you with a problem.  Her group is consistently missing deadlines because of poor communication.  E-mails frequently go unread for days, group members don’t respond to questions in a timely fashion and too many meetings are required to get everybody back on track.  The manager read in an airline magazine about wikis, and thinks they are the perfect solution.  She wants you to get one up and running as quickly as possible.

What do you do? Experienced project managers will tell you that the last thing you should do is install a wiki. Better solutions may be available, and your job as an IT professional is to analyze the needs the manager has defined and identify the most appropriate solution.

Tony Ulwick and his team at Strategyn would tell you to take one more step back. They’d see all kinds of problems in the needs statement that was just presented. For example, words like “consistently,” “frequently,” “timely” and “too many” are vague and subjective. Furthermore, the outcome the manager seeks — better performance against deadlines — may fall far short of the bigger goal of improving group performance. You need to define the problem better before tackling a solution.

Optimizing inputs

Strategyn specializes in helping companies optimize customer inputs to improve innovation. Ulwick, who has published widely on this topic, believes that most projects fail not because customers don’t understand the problem but because the people trying to solve the problem don’t ask the right questions. Strategyn’s methodology starts with helping stakeholders define needs very specifically so that vendors and internal service organizations can innovate from them. That means discarding adjectives and subjective statements, talking about jobs instead of solutions and using very specific terms.

Over the next two blog entries, I’ll present an interview with Tony Ulwick and Lance Bettencourt, a senior adviser at Strategyn.  You can also find some helpful free white papers on this subject at the Strategyn website (you need to register to view them).

Q: You say businesses often respond to perceptions of customer need rather than actual defined needs. What are some governance principles you believe internal services organizations can embrace to address these needs? Is a structured approach to needs definition necessary?

Bettencourt: Businesses try to respond to too many customer needs because they don’t know which needs are most unmet, so they hedge their bets. An organization must have a clear understanding of what a need is. Without a clear understanding and a structured approach to needs definition, anything that the customer says can pass for a need.

Many organizations are trying to hit phantom needs targets because they include solutions and specifications in their needs statements, use vague quality descriptors and look for high-level benefits that provide little specific direction for innovation.

Customer needs should relate to the job the customer is trying to get done; they should not include a solution or features. They should not use ambiguous terms. They should be as specific and consistent as possible to what the customer is trying to achieve. It’s important that a diverse body of customers be included in this research because different needs are salient to different customers. The ultimate goal is to capture all possible needs.

Q: Your advice focuses a lot on terminology. At times, your recommendations are as much an English lesson as a prescription for innovation! Why the emphasis on terms?

Ulwick: What distinguishes a good need statement from a bad need statement is not proper English, but precision. If a so-called need statement includes a solution, for example, then it narrows the scope of innovation to something the customer is currently using rather than what the customer is trying to get done.

For example, if a need statement includes ambiguous words such as “reliable,” then it undermines innovation in multiple ways. Different customers won’t agree on what that means when, say, printing a document. Or they won’t agree on what “reliable” means in general. This leads to internal confusion and ultimately to solutions that may not address the actual unmet need. Customer need statements have to be precise and focused if you want to arrive at an innovative solution.

Q: You suggest focusing on the job. What’s the definition of “job” for these purposes?

Ulwick: We mean the goal the customer is trying to accomplish or the problem the customer is trying to solve. A job is an activity or a process, so we always start with an action verb when we’re creating a job statement. A position is not a job; in fact, there may be multiple distinct jobs associated with a given position. For innovation purposes, “job” is also not restricted to employees. We’re not just talking about people trying to get their work done, but about all their daily activities.

The best way to get at a clear definition of the job is to begin with the innovation objectives of the organization or, in your example, the IT department. If the goal is to create new innovations in an area where there are already solutions in place, then the organization should understand what jobs the customer is trying to get done with current solutions. If the goal is to extend the product line into new areas, then the definition should begin by understanding what jobs the customer is trying to do in a different but adjacent space. Defining the job helps the organization to achieve its innovation objectives.

Next week, Tony Ulwick and Lance Bettencourt tell how development organizations can ask the right questions to assess customer needs.

Utility Computing Train is Coming, But It May Be Late to Your Station

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

The move to utility or “cloud” computing shows every sign of reaching critical mass over the next couple of years.  But it won’t be driven by corporate data centers.  The momentum, instead, is coming from two factors that increasingly dictate the pace of innovation: startups and fear.

In 1991, noted technology columnist Stewart Alsop wrote, “I predict that the last mainframe will be unplugged on 15 March 1996.”  Yet as of last year, there were still 10,000 mainframes running worldwide, according to IBM.  Was Alsop wrong? Technically, yes, but the shift that he foresaw is happening.  It’s just being driven by different factors than he expected.

Technology innovation today follows a strikingly consistent pattern. New companies with no legacy base make the switch first, while the people with the most to lose are the last to change; they jump on board only when they discover that the new technology addresses a significant pain point.

Both forces are evident today in utility computing. Robert Scoble wrote persuasively last November about the “serverless” Internet company. His comments were prompted by a meeting with the CEO of Mogulus, a streaming video firm that claims not to own a single server. What interested me most about Scoble’s remarks was the 65 comments that followed. Many are from other small companies that are building IT infrastructure from the ground up without servers. Some of these companies offer high-bandwidth services on a very large scale, demonstrating that scalability and reliability aren’t a problem. In fact, any startup business today should look first at outsourcing its IT infrastructure before investing in a single square foot of computer room space.

Meanwhile, utility services are actually achieving critical mass in a corner of the mainstream corporate IT market: storage. Services like Amazon’s S3 now have well over 300,000 customers.  EMC just joined the fray by launching an online backup service and hiring a top former Microsoft executive to lead its cloud computing initiative.

The storage industry has been a technology innovator recently because storage is a major pain point for many companies.  With capacity requirements expanding at 30% to 50% annually, people are desperate to do something to manage that growth.

The rapid adoption of utility computing seems likely to continue, but with a curve that looks like a landscape of the Himalayan mountains.  In some segments of the market — like startups — utility infrastructures will become the status quo.  In others — like corporate data centers — adoption will come only as the technology addresses specific pain points.

This jagged adoption curve is why there’s so much debate today over the future of the cloud.  Contrast Scoble’s observations, for example, with a recent CIO Insight article in which a CTO outlines his reservations about cloud computing or a CIO Insight reader forum where IT managers take issue with Nicholas Carr’s forecast that IT will increasingly become a central utility.

This debate is happening because the need for utility computing is not perceived to be compelling in all cases. Perhaps this is why Gartner predicts that early technology adopters will purchase 40% of their IT infrastructure as a service by 2011, which means that the other 60% will still be acquired conventionally.

The utility computing train is coming, but it won’t arrive at every station at the same time. Check your local schedule.

The Connected Generation

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

The concept of “presence” is altering business communications. Will you be ready to speak to the next generation of professionals on their own terms?

The chart below should tell you a lot about the future of the workplace. For the past three years, the Associated Press and America Online have measured the use of e-mail and instant messaging (IM) by teens and adults.  In all three surveys, the results have been similar: usage patterns are nearly reversed between the two groups, with teens overwhelmingly preferring IM.

[Chart: teen vs. adult use of e-mail and IM. Source: Associated Press/America Online, November 2007]

Why?  In part, teens admit, it’s to avoid confrontation and embarrassment by taking the face-to-face element out of awkward situations.  But equally important is that IM reflects teenagers’ always-connected lifestyles.  IM is instantaneous, requires little forethought and lends itself well to use with cell phones, the ubiquitous teen accessory.

Today’s young people expect that their friends will always be available to them, regardless of where they are. Teens are no less communicative than their parents; 50% of them use IM more than one hour a day, compared to 24% of adults, according to the poll. It’s just that the nature of their communications is different. They don’t have time to get to a computer or to carefully compose their thoughts before stating them. When they have something to say, they want to just say it.

Experts call this “presence”: our availability and our preferred communications media are a matter of record to the people who need to reach us, whether they’re family, colleagues or customers. Presence reflects the fact that people are no longer anchored to their desks. They work at home or on the road much of the time. Location is a critical element of presence. Increasingly, our online profiles will include up-to-date information on where we are and how available we want to be. Sophisticated cell phone tracking technology and global positioning systems may even make this transparent to us.
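
As a purely illustrative sketch of what such a presence record might contain (the field names are hypothetical, not drawn from any particular product):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Presence:
    """One employee's presence record, as a corporate directory might publish it."""
    user: str
    status: str             # e.g. "available", "in a meeting", "offline"
    preferred_medium: str   # e.g. "im", "email", "phone"
    location: str           # e.g. "home office", "SFO airport"
    updated: datetime       # staleness matters: presence is only useful if current

alice = Presence("asmith", "available", "im", "home office", datetime.now())
print(f"{alice.user}: {alice.status} via {alice.preferred_medium} ({alice.location})")
```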

Presence will redefine workplace communications. The New York Times recently reported on the evolution of social networking to include cell phones, which are the primary Internet access points for much of the developing world. Many of these services factor location into member profiles. At work, we will need to broadcast our location constantly, since the hyperactive business world no longer tolerates delay. Corporate directories are evolving to include rich information about people’s backgrounds and expertise, along with the means to tap into their knowledge whenever someone in the organization needs it.

The trick will be to balance our need for concentration with the requirement of availability.  I imagine many people blanch at the idea that they would be expected to turn on a dime anytime someone else in their organization needed a question answered. That’s a problem that will be addressed through standards of conduct that will emerge as the technology takes hold. However, the new rules are becoming clear. The next generation of business professionals won’t tolerate the delays that are inherent in business communications.  They are always online, and they expect their colleagues to be there as well.

Tips For a Greener – And Cheaper – Data Center

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

The EPA estimates that data centers eat up about 1.5% of all electricity in the United States and that nearly a quarter of that power is wasted.

As I noted last week, energy waste is one of the dirty little secrets of corporate data centers.  Add on to that the money lost due to PCs sitting idle overnight and the waste inherent in abandoned and underutilized servers and you have a lot of money just sitting out there waiting to be found.

Energy-saving is mostly a matter of common sense. The simplest approach may be to start, literally, at the ground floor. If you peer under the raised flooring in your data center, you’ll probably find pockets of cables clustered together. These could be inhibiting air flow. By re-cabling and deploying vented floor tiles in strategic locations, you can cut energy waste with almost zero capital expense. Many consultants now specialize in this area, and bringing in an expert can save time and money in the long run. This article tells the story of one company that saved $1 million annually in power costs through this simple measure.

The next step is to analyze your server use to determine what can be shut down or consolidated. One speaker at last fall’s Data Center World conference proposed a radical idea: if you don’t know who’s using a server, shut it down. Mark Monroe, Sun Microsystems’ director of sustainable computing, said that his group tried this approach and discovered that nearly 12% of its servers weren’t being used for anything. The application owners had moved on, and no one had bothered to shut the machines down.

Consolidate the servers you’re using into one location and direct your cooling resources to that hotspot. Use virtualization to pack more virtual servers onto fewer physical ones. By some estimates, about 70% of servers in the data center support only one application, and utilization rates of less than 15% on single servers are not uncommon. These are ideal candidates for virtualization.
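
As a back-of-the-envelope sketch of the consolidation math: the 60% target utilization and the assumption that consolidation is purely CPU-bound are mine, not figures from the article.

```python
import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """Rough sizing: hosts required to absorb n_servers running at avg_util,
    assuming comparable hardware and purely CPU-bound consolidation."""
    return math.ceil(n_servers * avg_util / target_util)

# 100 single-application servers at 15% utilization, packed onto
# virtualization hosts driven to a (hypothetical) 60% target:
print(hosts_needed(100, 0.15, 0.60))  # 25 hosts, a 4:1 consolidation
```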

Air conditioning is responsible for the greatest energy waste. The problem is that most data center managers don’t know where all their hotspots are, so they take a brute-force approach and cool the entire data center to a uniform level. The reality is that the hottest servers probably occupy a minority of the floor space.

While high-density servers can consume less power overall than the individual machines they replace, there’s no reason to structure your cooling plan around the needs of maybe 10% of your hardware. Several vendors now sell server racks that are optimized for cooling.  Also, the water cooling technique that was common in the mainframe days two decades ago is staging a revival as server consolidation comes back into vogue.

Crank up the heat

Once you’ve isolated your most power-hungry servers, turn up the thermostat on the rest. Most servers can operate perfectly well at temperatures as high as 100°F (be sure to check with your supplier before trying that, though), and each 1°F increase in temperature can save about 4% in energy costs, according to Sun’s Monroe.
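
Taking Monroe’s figure at face value, here is a minimal sketch of the arithmetic. Whether the 4% compounds per degree or adds linearly is my assumption; I compound it here, which is the more conservative reading.

```python
def cooling_savings(degrees_raised: float, per_degree: float = 0.04) -> float:
    """Cumulative energy savings from raising the setpoint, compounding
    roughly 4% per degree Fahrenheit (Sun's Mark Monroe's estimate)."""
    return 1 - (1 - per_degree) ** degrees_raised

# Raising the setpoint from 68F to 78F:
print(f"{cooling_savings(10):.0%}")  # ~34% lower cooling-energy cost
```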

You should also become familiar with the EPA’s Energy Star initiative. This program sets standards for efficient energy use and publishes lists of products that meet them. Needless to say, computers are a major Energy Star focus. Did you know, for example, that the EPA estimates that enabling the basic power management features that come with the Windows operating system can save up to $75 per computer per year? While there are legitimate reasons to leave PCs on all night at times, simple open source network tools let systems managers shut down unused computers and still retain the flexibility to power them on when needed. The Energy Star website has a list of software tools for remote power management as well as a power management savings calculator.
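
One widely used building block for this kind of tool is Wake-on-LAN: machines are shut down overnight and woken with a “magic packet” when needed. A minimal sketch in Python follows; the MAC address is a placeholder, and it assumes Wake-on-LAN is enabled in the target machine’s BIOS and network card.

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times, broadcast over UDP."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")  # placeholder MAC of the PC to wake
```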

A somewhat more radical option is to outsource all or part of the data center. While many factors play into this decision, the potential energy savings of such a move shouldn’t be underestimated. In the case of total data center outsourcing, contractors should be able to provide you with power savings estimates to factor into your calculations. Amazon’s S3 storage service, which sells cheap off-site data storage, is one of many specialized offerings now emerging; part of its appeal is that users don’t need to pay for — and cool — on-site storage area networks.

Most technology vendors now have green initiatives, and you should become familiar with what your key vendors are doing. For example, IBM has made its own IT operations a showcase of energy efficiency. In the course of consolidating 155 data centers worldwide down to just seven, it’s cut operational costs by $1.5 billion. This podcast tells more.

What are you doing to save energy in your data center? Write your suggestions in the comments area below.

Computer Industry Finally Going Green

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

[Graphic: simulation of heat distribution in a typical data center]

The graphic may look kind of cool, but it’s anything but. It’s actually a simulation of the heat distribution of a typical data center, prepared by Innovative Research, a computational fluid dynamics company. It demonstrates graphically what all data center managers already know: the data center is nearly impossible to keep cool.

Unfortunately, this fact is costing us a fortune.  As the price of oil breaches $100 a barrel, new attention is being focused on the possibilities of wringing big savings out of data centers by attacking their notoriously lousy energy efficiency.  Some stats:

  • The amount of electricity consumed by US data centers doubled between 2000 and 2006 and is expected to double again by 2011, according to the U.S. Environmental Protection Agency (EPA).
  • A typical 50,000-square-foot data center consumes about 57 barrels of oil per day.
  • Data centers consume 1.5% of all electricity in the U.S., the EPA says.
  • About 40% of the power used by data centers goes to cooling, according to several estimates. About 60% of that expense is wasted, however, because of the problem the graphic illustrates: data center heat distribution is extremely erratic, and spot cooling is complicated. Instead, companies use brute force and over-cool most of their equipment just to be sure the hottest machines don’t melt.
  • Over half the power that companies use to run their desktop computers is wasted because the machines aren’t shut off overnight or don’t power down when not in use, according to ClimateSaversComputing.org. Most companies could save between $10 and $50 per PC per year by using basic power management software, according to Greener Computing. That adds up: for a company with 10,000 desktops, it’s somewhere between $100,000 and $500,000 a year.

These numbers are deplorable, but Network World research identified an interesting explanation. Its survey found that 68% of IT manager respondents weren’t responsible for their energy bills. In most cases, those costs were paid by the facilities department. If IT never even sees the electric bill, it has no incentive to reduce it.

There is good news. Data centers are getting unprecedented attention right now as sources of significant cost savings, even if it’s only because there’s so much room for improvement. A recent PricewaterhouseCoopers study found that 60% of 150 senior executive respondents rated energy costs as a top priority, which means their IT managers will be getting an e-mail. IBM has made green data centers a key part of its marketing strategy. Dell recently launched an international competition to design technology products with a greener focus. Then there’s ClimateSaversComputing.org, an initiative sponsored by Google and Intel in which technology providers agree to hit certain energy consumption targets.

Members of the Technology CEO Council were in Washington just a few weeks ago to pitch the idea that investments in IT can save energy. While their agenda was self-serving, there’s no question that the industry as a whole is turning its attention to fixing this mess.

And it’s such an obvious mess to fix. Whether your motivation is the rapid payback, the positive environmental impact or the simple satisfaction of knowing that you’re not flushing money down the drain, why wouldn’t you want to make your IT operation more power-efficient? Next week, we’ll look at a few ideas for just how to do it.

Reinventing U.S. Innovation

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

John Kao believes the United States has an innovation crisis, and he’s calling on today’s corps of young technology professionals to sound the alarm.

Citing technology pioneer Vannevar Bush’s assertion more than 60 years ago that “A nation that loses its science and technology will lose control of its destiny,” Kao said the U.S. is in peril of becoming a technology laggard.

“The US public education system is veering further away from preparing kids for the world,” the author of Innovation Nation: How America Is Losing Its Innovation Edge, Why It Matters, and What We Can Do to Get It Back told the MIT Enterprise Forum early this month. “We spend more on education than any country in the world, yet we’re between 24th and 29th in math performance.”

By contrast, Finland, a country that suffered a near economic collapse after the Soviet Union fell apart, today produces twice as many Ph.D.s per capita as the U.S. The Finns turned around their economy, in part, by creating a national design focused on science and technology education. As a result, “Two years ago, Finland was the number one competitive economy in the world, according to the World Economic Forum,” Kao said. “Its education system is rated the best in the world. People want to be teachers in Finland.”

We’ve heard this before, of course.  In the late 1980s, Japan famously challenged the US for leadership in technology innovation with initiatives like the Fifth Generation Computer Project and a nationwide commitment to developing artificial intelligence.  Those ambitious plans foundered, but Kao argues that this time is different.

Today, countries like Singapore and China are making technology innovation the centerpiece of a national strategy that’s backed by incentives, equipment and money. Singapore, for example, has set up the Biopolis, a $300 million biomedical research lab spread across a 10-acre campus. Singapore is allocating money to train expatriate scientists in other countries on the condition that they repay the government with six years of service. The country is also promising to remove the layers of bureaucracy and legal approvals that frustrate scientists in the U.S.

Singapore has convinced top researchers from MIT and the U.S. Centers for Disease Control to pull up stakes and move to the tiny nation-state with financial incentives and promises of academic freedom.

This last point is a key difference between the national technology policies of today and the failed models of two decades ago.  Thanks to the Internet and business globalization, countries now have the capacity to build very large local industries serving overseas customers. Kao told of a friend who’s building a global travel business for 1/10th of what it would have cost a decade ago. He farms out much of the development work overseas. “Countries want to ally with American intellectual capital,” he said.

Therein lies a challenge for US competitiveness. The United States has long been able to rely upon the global brain drain from other countries to fuel its innovation economy. Over half of the engineering Ph.D.s awarded in the U.S. now go to foreign-born students. Many of those people have traditionally settled in Silicon Valley or other technology-rich environments. But the lifestyle trade-offs aren’t as dramatic as they used to be. “Now there’s an Apple store and a Starbucks in Bangalore [India],” Kao said.

With overseas economies offering tax havens, comfortable salaries, research grants and other perks to technology achievers, some countries that used to lose talent to the US have actually reversed the migration.

What can US technology professionals do? Well, on a purely selfish level there may be some attractive opportunities to pull up stakes and move overseas. Singapore, for example, has earmarked a half billion dollars to fund digital imaging research. But if you’re more interested in improving the domestic situation, then start by voting for candidates who have a vision and a plan for US technology competitiveness.

You can also go out into the classroom and share your own experiences with tomorrow’s innovators. Many teachers would be glad for the help. In John Kao’s words, “Many people who teach math and science in U.S. public schools are forced to do it.” In contrast, “In China, people with master’s degrees in math willingly come in to teach in the schools.”

Wisdom of Crowds, Yes; Democratic Innovation, No

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Technology makes it possible to involve customers intimately in product development, but experts must still make the decisions.

In 2003, two Australian entrepreneurs accomplished something no one thought was possible. Knowing nothing about the business of brewing and distributing beer, they successfully penetrated the duopolistic Australian beer market and, in four years, built a presence — and a base of 50,000 customers — in 46 nations. Brewtopia, which is now a publicly traded company, has since expanded into bottled water and soft drinks.

The secret to their success was their customers. The two founders set up a Web site and invited beer enthusiasts to vote on everything from the style and alcoholic content of the beer to the design of the labels.

Their inspiration was the story of PK-35, a Finnish soccer team. PK-35’s coach tried an experiment, asking fans to vote on nearly every aspect of the team’s operations, even its on-field strategy. What Brewtopia’s founders didn’t know was that the results of the soccer experiment were so bad that the coach was fired and the idea was scrapped after just one season.

Both of these stories are related in an inspiring and entertaining new book, We are Smarter than Me, by Barry Libert, Jon Spector “and thousands of contributors.” Using anecdotes and homespun logic, the authors make a compelling case for involving customers directly in a business’ product design and strategic direction. This idea is all the rage today, thanks to visible initiatives like Procter & Gamble’s pledge to derive half of its new product ideas from its customers by the end of the decade.

IT’s Central Role

IT organizations will increasingly find themselves at the center of these customer campaigns. That’s because only robust technology can effectively harness the contributions of thousands — or millions — of voices.

This is an exciting place for IT folks to be: at the center of corporate strategy. But it’s also an arena that demands discipline. As the soccer experiment demonstrated, community governance is not always the best strategy.

Many business executives will be enchanted by the concepts described in this book and will quickly ask their technology groups to set up forums and voting sites to accept customer contributions. The technology side of this challenge is fairly straightforward, but the business implications aren’t so simple.

Customer input is always desirable, but management by consensus or election isn’t. We know, for example, that democracy is a superior form of government, yet voters sometimes elect terrible leaders.

Businesses aren’t governments. They don’t exist to serve the public good, and they need to make difficult decisions in order to remain viable. Customer input needs to be tested and weighed against business objectives.

Companies that are successfully experimenting with this “crowd-sourcing” concept are finding ways to achieve that balance. Dell Computer, for example, has set up a Web site called IdeaStorm, which invites comments from customers and applies community voting tactics and discussion forums to developing innovations.

P&G has tapped into idea networks like InnoCentive to get advice and solve problems. It has had spectacular results, but ideas from the community are still vetted by experts within the company for viability and relevance.

Customer innovation is an exciting opportunity for IT organizations to contribute to the business, but it’s important to resist the temptation to simply throw technology at the problem. Innovative organizations will seek to stretch the limits of what technology can do in order to sift through a mountain of suggestions. But at some point, human beings must still step in and make decisions.

Don’t Become an SEO Junkie

This article originally appeared in BtoB magazine.

Back in my days as an editor at an Internet company, I learned to live and die by traffic. Page views, unique visitors and traffic growth were our gods, and we became very good at driving the numbers.

Too good, in fact. Our editors discovered that there was an assortment of cheats and shortcuts they could use to create traffic spikes. Top 10 lists, stupid customer tricks, contests and “was my face red” anecdotes were guaranteed hits that could ease the constant pressure to achieve number goals.

The problem was that the traffic was junk. Visitors came, but they didn’t stay or register. There was no way to monetize their visits, other than with a few low-cost banner ads. Worse, overindulgence in these tactics made a site look shallow and juvenile, which actually drove away the serious businesspeople we were trying to attract. Most of these practices were eventually discarded.

That experience came to mind recently as my inbox and RSS reader filled up with an assortment of “Best of 2007” collections. Hundreds of these articles dealt with traffic optimization strategies, everything from headline writing to tricks for driving visits from Digg.com and StumbleUpon.com. One blogger posted an impressive 8,500-word list of the top marketing blog posts of the year, most of which dealt with traffic strategies.

A lot of this advice was very good, but I shudder to think that marketers may take it too much to heart. Today, they are at an intersection of opportunity and challenge. Their opportunity is to become content producers on par with the mainstream media that have long been the gatekeepers. The challenge is that the online marketing world still lacks consensus on how to measure online success.

With no agreement on metrics, many marketers will fall back on Web site traffic as the gold standard. But this puts them at risk of resorting to gimmickry and sensationalism in order to get attention. When everyone else is shouting, the urge is just to shout louder.

The ever-increasing influence of search and recommendation engines only raises the stakes. As Google has become the universal home page, an army of consultants has sprung up to figure out how to beat the system. Marketers that hew too closely to their recommendations risk delivering boatloads of traffic to content that is, well, junk.

It will be years before the industry hammers out a consensus on metrics, so don’t wait. Have a discussion with your leadership about the need to measure success with factors that really count: registrations, repeat visits, sales orders and whatever else affects the bottom line. Make sure your goals and compensation are tied to meaningful results. Learn from the search optimization experts, but don’t let them run your life.
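
For what it’s worth, the arithmetic behind “factors that really count” is simple; here is a sketch with numbers invented purely for illustration.

```python
# Hypothetical month of site data: judge content by conversion, not raw traffic.
visits = 120_000
registrations = 1_800
orders = 240
avg_order_value = 450.00  # dollars

print(f"registration rate: {registrations / visits:.2%}")              # 1.50%
print(f"order rate:        {orders / visits:.2%}")                     # 0.20%
print(f"revenue per visit: ${orders * avg_order_value / visits:.2f}")  # $0.90
```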
