It’s true what you’ve heard. I’m leaving TechTarget effective Nov. 18. And no, I wasn’t fired. And the company’s doing just great, thank you. I’ll post more details about my reasons for leaving and what I plan to do next on Nov. 19. I’m sure you can’t stand the suspense! 🙂
Why good companies fail
IT Conversations has a fascinating podcast by Clayton Christensen, the Harvard Business School professor and author of the best-selling book The Innovator’s Dilemma. Speaking at the Open Source Business Conference 2004, Christensen explains why well-managed companies with superior products are sometimes beaten by startups with inferior products. The basic reason: if the product is “good enough” for a large group of customers who didn’t have access to that kind of functionality before, the customers will adopt it and stay with it while the product improves rather than go with the expensive, over-engineered alternatives from the industry leaders.
Listen to the podcast. It’s great.
More marketing myopia
It’s been a long time between posts because of a process known as 2006 strategic planning, which at my company is a grueling analysis of the year’s results and likely progress in the next year. It’s an important and worthwhile exercise but it saps a lot of time.
There’s no question that CIOs are critical influencers in IT buying, but you have to put their role in perspective. Most large organizations have IT budgets in the millions of dollars. At the very biggest, that number can be over a billion dollars. The CIO’s role in these companies is to align IT with business strategy: know where the organization is going and how technology can support those goals. This business focus is becoming more and more critical to the CIO role, as it should be. That leaves CIOs with less and less time to concern themselves with specific vendors and products.
Product selection is increasingly being delegated to the lower levels of the organization. This only makes sense in markets that are competitive and in which the core feature sets of most products are similar. Technology selection has become an increasingly complex process because choices are made based on nuances such as vertical market features, support, price and vendor viability.
In a typical IT organization, the CIO is responsible for setting strategic direction, managing a budget, identifying approved vendors and signing off on purchase decisions. However, the process of researching and identifying the vendors who will provide new products and services is largely delegated to the people who will work with those products and vendors. IT is becoming more specialized, which means that the specialists are the ones who make the most critical decisions. They decide which products and vendors to recommend to the CIO and it is their reasoning and research that most influences a selection. CIOs don’t have the time or expertise to dig into these questions. In fact, smart CIOs know that if they did try to micro-manage every decision, they would make worse choices because they don’t know as much about the market or technology as the people below them.
That’s why nearly every CIO I’ve spoken to has said that hiring good people is one of his/her biggest challenges. The CIO’s job is too big and complex not to require good delegating skills.
CIOs do play a critical role in signing off on the purchase, which is where visibility and relationships come in. It’s important that these executives be familiar and comfortable with the vendors they align themselves with. That’s where brand advertising works.
But vendors who just target the CIO are missing critical influencers. The people lower in the food chain are the ones most likely to decide who gets on the short list. Few marketers get this. Savvy IT organizations go through a rigorous process of identifying needs, researching suppliers and products, developing a “short list” and choosing strategic partners. The CIO is usually involved at the beginning and end of this cycle, but rarely in the middle.
And where the technology is “disruptive” the CIO’s role is even smaller. In fact, nearly every truly game-changing technology that has emerged in the enterprise landscape going back to minicomputers was brought in the back door of the organization. Think of it: the CIO’s role is to maintain stability and reliability. He/she is rarely going to stir the waters with disruptive change. Technologies like PCs, cell phones, PDAs, file servers, the Internet and open source software have been successful because risk-takers at the low levels of the organization adopted them and proved their viability. Microsoft and Dell were successful in the early days because they targeted PC managers, not CIOs.
I’ll post more on this as we wrap up a research study about the IT buying process.
Sun's bold strokes
I have been a pointed critic of Sun Microsystems for some time, at one point comparing it to Digital Equipment Corp., which rode its proprietary strategy into the ground in the early 90s in the face of overwhelming evidence that it was a wrong-headed approach.
While Sun has taken steps to make its UltraSparc technology more competitive, I was more intrigued by its intentions to put Solaris into the open source domain. This was a huge cultural hairball for Sun to swallow. Sun has maintained for years that Solaris was so superior to Linux that it justified the huge premium it commanded in the market. But users have increasingly had trouble buying that story. For the mass market, Linux worked just fine.
Sun has finally accepted the reality that Solaris was not going to win the battle against Linux in any but the uppermost reaches of the Unix market. This ensures that Linux will have a potent high-end competitor for a long time to come. For Sun, the challenge is to ensure that there’s a reason to buy Sun boxes to run Solaris instead of commodity hardware. That’s an easy argument to make right now, while Solaris is still mainly Sun code. It could be a tougher case a couple of years from now.
But that’s a battle for the future. Sun’s current bet is that an open-sourced Solaris will gain enough adherents that the revenue Sun can make from selling hardware to those people will exceed the revenue it would have made selling to a smaller and smaller captive Solaris base. I think it’s a good bet.
Linux needs a spoiler and open source Solaris can fill that role. No one seriously argues that Solaris isn’t a superior Unix. Sun has specialized in high-availability, industrial-grade applications for years. The question was whether Solaris deserved the price premium it commanded. Increasingly, it didn’t. By open-sourcing Solaris, Sun is putting a potent Linux competitor into the market. That’s good for Linux and for users. It’s probably bad for Red Hat, Novell and anyone else who has cast their lot with Linux.
I don’t see Solaris becoming a mainstream Linux alternative any time soon, but for enterprises and others who demand enterprise-class reliability, open source Solaris will be an attractive option. If Sun follows through on its commitment to keep Solaris open source, it will have introduced an exciting new alternative to the market.
The question now is what Red Hat and Novell should do. Both have cast their lots with Linux. But now they have a robust, industrial-grade alternative Unix that could create a profitable revenue stream. Do they stay loyal to Linux or become Solaris adherents, too? It’s an interesting problem…
Service or subversion?
TechTarget editors had an interesting debate last week over whether to publish information that could potentially cause harm in the hands of a malicious or reckless user but which could also do good for people who know how to use it.
It started with a tip submitted by Don Burleson, a respected and oft-published Oracle technical expert and a member of SearchOracle.com’s Ask the Experts team. Don wrote about undocumented features in Oracle that permit a user to manipulate memory to achieve significant performance gains. This technique could save time and money for users who can’t afford new servers or who don’t have the time to optimize their databases in other ways.
But there’s a catch. If applied inappropriately, this technique can corrupt a database and cause data to be damaged or lost. Don was very up front about that and the editors on our SearchOracle.com site posted a prominent disclaimer at the front of the tip.
Some people thought that wasn’t enough. Tom Kyte, another respected Oracle expert, took issue with Don’s suggestions on his blog. He argued that, disclaimer or not, it was reckless and dangerous for Don and for SearchOracle.com to post advice that could potentially corrupt data. Responses to the postings on Tom’s blog largely agreed with his position.
Other experts we polled were split down the middle, some thinking the tip was a valuable service to the Oracle community, others saying we were tossing a time bomb into a crowd. What’s the right thing to do?
In the end, the editors decided to keep the tip on the site while somewhat strengthening the language of the disclaimer. I agreed with this decision. Although there are no cut-and-dried answers on what is right in a situation like this, these are the factors I would consider:
- Is the information correct? No question of that in this case. No one disputed the accuracy of the tip.
- Is the information useful? If it isn’t useful, don’t publish it. I don’t think anyone argued that this advice wasn’t useful to some people. The debate was whether the potential harm outweighed the potential value.
- Is the source credible? There’s no question that both Don and Tom know what they’re talking about.
- Does the potential for misuse outweigh the value of appropriate use? The decision largely hinges on this question. In my opinion, disclaimers should significantly mitigate any potential damage.
On most points, then, the decision to publish the information was obvious. The language of the disclaimer was the only major issue in my mind, and I believe the wording that the editors used conveyed the risk appropriately. Basically, anyone who was motivated and interested enough to employ this advice would read the disclaimer and be aware of the risks.
This is not the same as, for example, publishing an Oracle security exploit. In that case, there is little value to the user and great potential for damage. Nor do I believe media organizations should ever post advice from anonymous sources unless the content is vetted thoroughly for accuracy. But when respected experts put forth advice that is useful to even a minority of the community they serve – even if there’s risk – it’s the responsibility of independent media to seriously consider publishing it. What happened last week was a debate but not a disservice.
Understanding open source
If you want to get a great understanding of why open source software is such a powerful phenomenon, read Jan Stafford’s interview with Julie Hanna Farris of Scalix Corp. and follow it up by downloading Tim O’Reilly’s podcast presentation, “The Software Paradigm Shift,” on ITConversations.com.
The development world has talked about making software modular going back to the days of 4GLs and, later, object-oriented programming. It’s a noble objective but the development processes of commercial software companies discouraged the practice because software was always delivered in one big clump – or release – that lived in the market until an update was due. There was basically no incentive to develop in a modular fashion.
Open source software is under constant development by thousands or even millions of programmers around the world. If the software isn’t designed to incorporate constantly slipstreamed improvements and fixes, the whole model breaks down. That’s the beauty of open source. It is designed for continuous improvement.
O’Reilly refers to recent developments at Google, Amazon and others to support his point. Google’s news, maps, local and Froogle services are in seemingly constant beta test, undergoing refinements as they serve users. With Google Maps, Google published interfaces that allow developers to extend the platform for new applications. For example, GasBuddy.com extends Google Maps to let users search for cheap gas in their vicinity. Housingmaps.com combines CraigsList.com home and apartment listings with Google Maps to help you pinpoint attractive properties in your area. Amazon’s Yellow Pages beta pinpoints nearby businesses and provides rich information about them.
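To make the mashup idea concrete, here is a minimal TypeScript sketch, assuming the circa-2005 Google Maps JavaScript API (GMap2, GLatLng, GMarker) is already loaded on the page. The gas-station list, prices and map container ID are invented placeholders for illustration, not anything taken from GasBuddy or Google.

```typescript
// A minimal mashup sketch, assuming the classic Google Maps JavaScript API
// has been loaded on the page via a <script> tag such as:
//   <script src="http://maps.google.com/maps?file=api&v=2&key=YOUR_KEY"></script>
// The station list below is invented sample data, not a real feed.

// Ambient declarations for the classic Maps API objects provided by that script.
declare class GLatLng { constructor(lat: number, lng: number); }
declare class GMarker { constructor(point: GLatLng, opts?: { title?: string }); }
declare class GMap2 {
  constructor(container: HTMLElement);
  setCenter(center: GLatLng, zoom: number): void;
  addOverlay(overlay: GMarker): void;
}

// Hypothetical third-party data: nearby gas stations and their prices.
interface Station { name: string; price: number; lat: number; lng: number; }
const stations: Station[] = [
  { name: "Main St Fuel", price: 2.19, lat: 42.36, lng: -71.06 },
  { name: "Riverside Gas", price: 2.27, lat: 42.37, lng: -71.10 },
];

// Create the map in a <div id="map"> element, center it, and drop one
// labeled marker per station -- the essence of the mashup pattern.
const map = new GMap2(document.getElementById("map") as HTMLElement);
map.setCenter(new GLatLng(42.36, -71.06), 13);
for (const s of stations) {
  const label = `${s.name}: $${s.price.toFixed(2)}`;
  map.addOverlay(new GMarker(new GLatLng(s.lat, s.lng), { title: label }));
}
```

The appeal of the pattern is that Google carries the mapping and rendering work; the third party contributes only its own data and a little glue code.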
It’s certainly a new approach to software development and one that promises exciting innovations. However, I’m not sure corporate IT organizations will be as enchanted with perpetually modified software as developers are. IT groups value consistency and management. Many would rather have a single version of a package deployed across the company – even if it’s an older version – than have different iterations springing up everywhere depending on which fixes and enhancements users and administrators had downloaded.
It’ll be an interesting push/pull. There’s no doubt that modularity and open development increase the speed at which new ideas reach the market. But corporate IT usually isn’t as interested in innovation as in dependability. The willingness of enterprises to embrace this new approach to development will have a lot to do with how effectively open source is assimilated into the enterprise.
Cisco in denial
You have to wonder why companies don’t learn from the mistakes of their predecessors. Cisco has been in hot water with its users and the media this last month over security problems in its software. The vendor released a boatload of fixes for various OS and application problems last month and then recently issued a cease-and-desist order against a former employee who revealed a serious flaw in the IOS operating system at the Black Hat conference this month. User reaction was predictable. People wonder why Cisco is in denial over these problems instead of moving proactively to fix them. In the case of the IOS flaw, the patch had actually been available for months. Why not use the opportunity to tell users to upgrade their software?
Shades of Microsoft and Intel. When Microsoft became the target of security sleuths who pointed out vulnerabilities in Windows, the vendor first reacted by attacking its accusers. It was only after multiple reports of flaws emerged that Microsoft turned the problem into a PR advantage by announcing it would dedicate the company to making its products secure.
Similarly, when Intel was the subject of embarrassing revelations about flaws in Pentium chips in 1994, it waited six months to acknowledge the weaknesses. Much to Intel’s surprise, users and media who had pilloried Intel for months flocked to support the company once it fessed up. The Pentium problems are only a distant memory now.
Cisco should learn from Microsoft’s and Intel’s mistakes. Software is imperfect and prone to bugs. Good companies learn from their mistakes and are direct with their users. No one will criticize Cisco for admitting its problems and rededicating itself to doing better. Why wait?
Roboform is great software
It’s not often that I stumble across a piece of shareware that I use every single day, but Roboform from Siber Systems now occupies just such a vaunted position on my desktop. If you’re anything like me, you have dozens of logons and passwords for different Web sites. Most people maintain this information informally in everything from text files to Post-It notes, and passwords are forgotten all the time.
Roboform is a spiffy little browser utility that enables you to store all your logon, password, credit card and personal information in one place. It does a great job of populating Web forms with the information you need, saving you time and preventing errors. It also appears to be free of adware, spyware and other privacy invasions that infest a lot of free software utilities. I’ve been using it for two months with no problems, and Roboform has genuinely made my life easier. I highly recommend it.
Interex's death is bad for HP users
Coincidentally, the same day that I posted the article on HP’s sense of purpose, the Interex user group suddenly closed its doors and said it was cancelling its HP World conference. HP World had long been the one place that HP users could gather and share/support/network/commiserate about common issues. Now their only option is the HP Technology Forum, an event produced and, no doubt, tightly controlled by HP itself.
Independent user groups are a dying breed, and that’s bad for the tech industry. The Common and Share groups for IBM users are dramatically smaller than they used to be. The International Oracle Users Group has feuded with Oracle, which would just as soon stamp it out. Encompass (formerly DECUS) is now working hand-in-glove with HP, presumably to ensure its own continued existence. ASUG, which is the independent SAP user group, is financially healthy, but uncomfortably chummy with SAP, in my opinion.
In their heyday, independent user groups provided a needed forum for users to band together and share their common concerns with a vendor. Open-forum meetings with vendor executives could be boisterous and even rowdy affairs with users shouting down vendor reps over areas of disagreement. But much of this activism has now moved online and the industry shift to commodity platforms has lessened the “lock-in” factor that once made users so passionate about their platforms. It’s a shame that there isn’t a more active independent movement in the Windows world to represent the users who are locked in to that platform. Give Microsoft credit for never allowing an independent user group to get off the ground.
There is a need for these kinds of gatherings, even in the commoditized technology world. There are still a lot of legacy platforms out there, and newsgroup postings are no replacement for the chance to get in a vendor’s face and make one’s opinions known. Common still does a pretty good job of maintaining its independence, in my view. But most groups have aligned themselves closely with vendors in the interests of survival. Sadly, HP users now have one less place to turn when they want to make their feelings known. And it’s sad for HP, too, in a way. A strong, independent user group is a sign of a vendor’s continued relevance in the industry, and one can argue that Interex’s failure is a blow to HP’s profile as a significant industry force.
HP's sense of purpose
Give credit to Mark Hurd, the new CEO of HP, for acting quickly and decisively in cutting 14,500 jobs, or 10% of HP’s workforce, and scrapping an account-driven sales organization that had added complexity and bureaucracy to the company while increasing the disconnect between salespeople and the products they were selling.
Hurd’s actions in many ways are a step toward returning HP to the decentralized, close-to-the-market culture that made the company so great in the first place. They’re also reminiscent of the moves Lou Gerstner made when he first joined IBM during its financial crisis in the early ’90s. Gerstner shook up the complacent IBM culture by cutting tens of thousands of jobs and making every project accountable to a business benefit. Painful as they were, Gerstner’s moves saved IBM.
I’m not so sure what’s going to save HP. HP is a great company with a great culture, but it’s a perennial #2 or #3 in most of the markets it serves. In fact, outside of printers I can’t think of a single product category in which HP enjoys a clear leadership position. The company has done some innovative things in the consumer market with its multimedia PCs, and its OpenView systems management suite is a top-tier product, but it’s been a long time since HP has had the kind of big bang hit that gets people buzzing. This is still basically a company that makes most of its profit selling ink.
I trace HP’s decline as an innovator to two events. The company partnered with Intel in the mid-’90s to build a family of chips that became the Itanium family. It was an unprecedented deal and HP trumpeted it mightily at the time. But there’s no evidence that the partnership ever amounted to much, and it seemed to take some of the urgency out of HP’s efforts to build a competitive processor. HP was basically the first major computer maker to concede the chip war to Intel.
The other was HP’s decision a few years ago to play Switzerland in the Unix/Windows debate. By not lining up on one side or the other, the company essentially positioned itself as the safe choice on servers. It’s okay to be the safe choice if you’re the market leader, but HP did not enjoy such a position at the time. I think that strategy was a recipe for ambivalence. If you don’t have a side to root for in a conflict, it’s hard to get motivated about participating. HP, which had been a leader in Unix workstations and a strong challenger in Intel-based servers through the late ’90s, seemed to fall into a funk after it took itself out of the operating system debate.
I don’t know how Mark Hurd is going to shake up that culture. It’s a huge challenge. I do know that playing it safe would be the wrong thing to do. Hurd has the endorsement of HP shareholders, who have seen shares drop 50% in the last five years, to take bold action. The layoffs are a start. Let’s hope he can stay true to Carly Fiorina’s advertising slogan and find a way to make HP “invent” again.