On to new things

I’ve tried to live by the principle that if you don’t try something, you’ll never know if you would have liked it. And I have always wanted to try my hand at the business side of publishing.

Thus it was that I became a publisher a little more than a year ago. A company reorganization presented the perfect opportunity because my job as chief editor was being scaled down by decentralization. My boss agreed to let me give it a shot.

I worked very hard over the last year. I struggled to learn stuff that was way outside my comfort zone: sales management, pricing, marketing, campaign management, quotas, forecasting and on and on. I had some very gratifying successes but I also made my share of mistakes. Learning on the job is hard.

And ultimately what I learned is that publisher isn’t the job for me. I’m an editor at heart. I’m inspired by the constant change in this technology and I’m fascinated by the dynamics of this industry. I like a good story and I like talking to the people who are creating change. I might have been a competent publisher with time, but I think I’ll always be a better editor. Life’s too short to do work that doesn’t inspire you.

So I decided to leave TechTarget and go out on my own, which is a step I’ve always planned to take. My idea is to launch a business creating online custom content for technology marketers. There’s an explosion of activity in this area – white papers, webcasts, podcasts and live events – and a need for people who can speak the language of IT professionals. Paul Gillin Communications will provide that service when I launch the company the week after Thanksgiving.

I’m leaving TechTarget on the best of terms. The company’s management was terrific and even agreed to become my first customer. I’m hoping to do a lot of business with TechTarget customers on behalf of the company.

I learned a lot in the last year. I gained tremendous respect for good salespeople. That’s a hard, hard job and it takes persistence and a tough skin to do it well. I learned that advertising decisions involving hundreds of thousands of dollars are often made on intuition and faith. And I learned a ton about the finer points of online marketing. I’ll bring all those lessons to my next venture.

Soon this blog will be replaced by the website for my new business. In the meantime, please contact me at paul@gillin.com if you want to learn more about my services. Or just to chat.

Television in turmoil

Give credit to TV producers for not making the same mistake as the music industry and actually making TV programs available free over the Internet. But lest we compliment them too much for their great vision, remember that TV shows don’t have nearly the shelf life of music and that the risk of video piracy is much less than the risk of copyright violation in the music world. Giving away old “Welcome Back Kotter” episodes is kind of a no-brainer.

I also wonder why AOL and Time Warner are only making programs available in streaming format. Does anyone think people want to watch TV on their laptop? The only time I’ve seen anyone do that is on a plane or in a car, and that’s when you don’t have a broadband connection.

More interesting is Apple’s deal with ABC to distribute TV programs as downloads. That’s a better model, but I’m not convinced the video iPod is the right device. Someone’s going to figure this out, though. Give the edge to Apple.

Tracking podcasts

There’s an interesting story in Media Post about Audible.com’s new technology for tracking podcasts (you may have to register to read it). Audible was distributing podcasts long before there was an iPod and if anyone will figure out a way to monetize this new craze, Audible will. I expect technology and money to move very quickly into making podcasting a commercial medium. The rap on podcasting has been that you couldn’t sell advertising on it because you couldn’t track what people did with any given MP3 file. Well, companies like Audible are solving that. I expect there’ll be an explosion of well-produced commercial podcast products once these tracking issues are resolved.

Why good companies fail

IT Conversations has a fascinating podcast by Clayton Christensen, the Harvard Business School professor and author of the best-selling book The Innovator’s Dilemma. Speaking at the Open Source Business Conference 2004, Christensen explains why well-managed companies with superior products are sometimes beaten by startups with inferior products. The basic reason: if the product is “good enough” for a large group of customers who didn’t have access to that kind of functionality before, the customers will adopt it and stay with it while the product improves rather than go with the expensive, over-engineered alternatives from the industry leaders.

The PC, of course, is the best example of this. But it applies to so many tech innovations: LANs, PDAs, cell phones, open source….When you think of it, none of those products were particularly good at the outset but people loved them because they could do something that they couldn’t have done before. And they stuck with them while they matured.

Listen to the podcast. It’s great.

More marketing myopia

It’s been a long time between posts because of a process known as 2006 strategic planning, which at my company is a grueling analysis of the year’s results and likely progress in the next year. It’s an important and worthwhile exercise but it saps a lot of time.

I’ll rant a bit today about C-level myopia, or computer marketing professionals’ hyperfocus on the top level of the IT organization. It’s one of the most frustrating aspects of IT marketing and, I believe, a significant impediment to many companies’ success. I hear this time and again in talks with vendors: they want to reach the CIO. But few of the companies who say that have a chance of getting on the CIO’s radar. And even if they did, it’s doubtful that would do much good for their businesses.

There’s no question that CIOs are critical influencers in IT buying but you have to put their role in perspective. Most large organizations have IT budgets in the millions of dollars. At the very biggest, that number can be over a billion dollars. The CIO’s role in these companies is to align IT with business strategy: know where the organization is going and how technology can support those goals. This business focus is becoming more and more critical to the CIO role. As it should. CIOs have less and less time to concern themselves with the specific vendors and products.

Product selection is increasingly being delegated to the lower levels of the organization. This only makes sense in markets that are competitive and in which the core feature sets of most products are similar. Technology selection has become an increasingly complex process because choices are made based on nuances such as vertical market features, support, price and vendor viability.

In a typical IT organization, the CIO is responsible for setting strategic direction, managing a budget, identifying approved vendors and signing off on purchase decisions. However, the process of researching and identifying the vendors who will provide new products and services is largely delegated to the people who will work with those products and vendors. IT is becoming more specialized, which means that the specialists are the ones who make the most critical decisions. They decide which products and vendors to recommend to the CIO and it is their reasoning and research that most influences a selection. CIOs don’t have the time or expertise to dig into these questions. In fact, smart CIOs know that if they did try to micro-manage every decision, they would make worse choices because they don’t know as much about the market or technology as the people below them.

That’s why nearly every CIO I’ve spoken to has said that hiring good people is one of his/her biggest challenges. The CIO’s job is too big and complex not to require good delegating skills.

CIOs do play a critical role in signing off on the purchase, which is where visibility and relationships come in. It’s important that these executives be familiar and comfortable with the vendors they align themselves with. That’s where brand advertising works.

But vendors who just target the CIO are missing critical influencers. The people lower in the food chain are the ones most likely to decide who gets on the short list. Few marketers get this. Savvy IT organizations go through a rigorous process of identifying needs, researching suppliers and products, developing a “short list” and choosing strategic partners. The CIO is usually involved at the beginning and end of this cycle, but rarely in the middle.

And where the technology is “disruptive,” the CIO’s role is even smaller. In fact, nearly every truly game-changing technology that has emerged in the enterprise landscape going back to minicomputers was brought in the back door of the organization. Think of it: the CIO’s role is to maintain stability and reliability. He/she is rarely going to stir the waters with disruptive change. Technologies like PCs, cell phones, PDAs, file servers, the Internet and open source software have been successful because risk-takers at the lower levels of the organization adopted them and proved their viability. Microsoft and Dell were successful in the early days because they targeted PC managers, not CIOs.

I’ll post more on this as we wrap up a research study about the IT buying process.

Sun's bold strokes

I have been a pointed critic of Sun Microsystems for some time, at one point comparing it to Digital Equipment Corp., which rode its proprietary strategy into the ground in the early 90s in the face of overwhelming evidence that it was a wrong-headed approach.

But I have to admit some admiration for Sun’s recent moves to reinvent itself in the data center. I was in San Francisco this week for Oracle Open World and had a chance to hear Sun’s Scott McNealy outline the company’s comeback strategy. I was impressed.

While Sun has taken steps to make its UltraSparc technology more competitive, I was more intrigued by its intentions to put Solaris into the open source domain. This was a huge cultural hairball for Sun to swallow. Sun has maintained for years that Solaris was so superior to Linux that it justified the huge premium it commanded in the market. But users have increasingly had trouble buying that story. For the mass market, Linux worked just fine.

Sun has finally accepted the reality that Solaris was not going to win the battle against Linux in any but the uppermost reaches of the Unix market. This ensures that Linux will have a potent high-end competitor for a long time to come. For Sun, the challenge is to ensure that there’s a reason to buy Sun boxes to run Solaris instead of commodity hardware. That’s an easy argument to make right now, while Solaris is still mainly Sun code. It could be a tougher case a couple of years from now.

But that’s a battle for the future. Sun’s current bet is that an open-sourced Solaris will gain enough adherents that the revenue Sun can make from selling hardware to those people will exceed the revenue it would have made selling to a smaller and smaller captive Solaris base. I think it’s a good bet.

Linux needs a spoiler and open source Solaris can fill that role. No one seriously argues that Solaris isn’t a superior Unix. Sun has specialized in high availability, industrial-grade applications for years. The question was whether Solaris deserved the price premium it commanded. Increasingly, it didn’t. By open-sourcing Solaris, Sun is putting a potent Linux competitor into the market. That’s good for Linux and for users. It’s probably bad for Red Hat, Novell and anyone else who has cast their lot with Linux.

I don’t see Solaris becoming a mainstream Linux alternative any time soon, but for organizations that demand enterprise-class reliability, open source Solaris will be an exciting option. If Sun follows through on its commitment to keep Solaris open source, it will have introduced a compelling new alternative to the market.

The question now is what Red Hat and Novell should do. Both have cast their lots with Linux. But now they have a robust, industrial-grade alternative Unix that could create a profitable revenue stream. Do they stay loyal to Linux or become Solaris adherents, too? It’s an interesting problem…

Service or subversion?

TechTarget editors had an interesting debate last week over whether to publish information that could potentially cause harm in the hands of a malicious or reckless user but which could also do good for people who know how to use it.

It started with a tip submitted by Don Burleson, a respected and oft-published Oracle technical expert and a member of SearchOracle.com’s Ask the Experts team. Don wrote about undocumented features in Oracle that permit a user to manipulate memory to achieve significant performance gains. This technique could save time and money for users who can’t afford new servers or who don’t have the time to optimize their databases in other ways.

But there’s a catch. If applied inappropriately, this technique can corrupt a database and cause data to be damaged or lost. Don was very up front about that and the editors on our SearchOracle.com site posted a prominent disclaimer at the front of the tip.

Some people thought that wasn’t enough. Tom Kyte, another respected Oracle expert, took issue with Don’s suggestions on his blog. He further suggested that, disclaimer or not, it was reckless and dangerous for Don and for SearchOracle.com to post advice that could potentially corrupt data. Responses to the postings on Tom’s blog largely agreed with his position.

Other experts we polled were split down the middle, some thinking the tip was a valuable service to the Oracle community, others saying we were tossing a time bomb into a crowd. What’s the right thing to do?

In the end, the editors decided to keep the tip on the site while somewhat strengthening the language of the disclaimer. I agreed with this decision. Although there are no cut-and-dried answers on what is right in a situation like this, these are the factors I would consider:

  • Is the information correct? No question of that in this case. No one disputed the accuracy of the tip.
  • Is the information useful? If it isn’t useful, don’t publish it. I don’t think anyone argued that this advice wasn’t useful to some people. The debate was whether the potential harm outweighed the potential value.
  • Is the source credible? There’s no question that both Don and Tom know what they’re talking about.
  • Does the potential for misuse outweigh the value of appropriate use? The decision largely hinges on this question. In my opinion, disclaimers should significantly mitigate any potential damage.

On most points, then, the decision to publish the information was obvious. The language of the disclaimer was the only major issue in my mind and I believe the wording that the editors used conveyed the risk appropriately. Basically, anyone who was motivated and interested enough to employ this advice would read the disclaimer and be aware of the risks.

This is not the same as, for example, publishing an Oracle security exploit. In that case, there is little value to the user and great potential for damage. Nor do I believe media organizations should ever post advice from anonymous sources unless the content is vetted thoroughly for accuracy. But when respected experts put forth advice that is useful to even a minority of the community they serve – even if there’s risk – it’s the responsibility of independent media to seriously consider publishing it. What happened last week was a debate but not a disservice.

Understanding open source

If you want to get a great understanding of why open source software is such a powerful phenomenon, read Jan Stafford’s interview with Julie Hanna Farris of Scalix Corp. and follow it up by downloading Tim O’Reilly’s podcast presentation, “The Software Paradigm Shift,” on ITConversations.com.

Both speakers make the point that open source’s strength lies not so much in its licensing model or lower cost as in the fundamentally different approach to development. Open source software must be modular to be developed by a far-flung community, and that modularity is what enables open source programs to be created and modified so quickly. In fact, Linus Torvalds has said that if he had to develop the Linux kernel in the closed, stratified environment typical of commercial software companies, he never would have delivered Linux quickly enough to be meaningful to the user community.

The development world has talked about making software modular going back to the days of 4GLs and, later, object-oriented programming. It’s a noble objective but the development processes of commercial software companies discouraged the practice because software was always delivered in one big clump – or release – that lived in the market until an update was due. There was basically no incentive to develop in a modular fashion.

Open source software is under constant development by thousands or even millions of programmers around the world. If the software isn’t designed to incorporate constantly slipstreamed improvements and fixes, the whole model breaks down. That’s the beauty of open source. It is designed for continuous improvement.

O’Reilly refers to recent developments at Google, Amazon and others to support his point. Google’s news, maps, local and Froogle services are in seemingly constant beta test, undergoing refinements as they serve users. With Google Maps, Google published interfaces that allowed developers to extend the platform for new applications. For example, GasBuddy.com extends Google Maps to allow users to search for cheap gas in their vicinity. Housingmaps.com combined CraigsList.com home and apartment listings with Google Maps to help you pinpoint attractive properties in your area. Amazon’s Yellow Pages beta pinpoints nearby businesses and provides rich information about them.
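For readers curious what these mashups look like under the hood, here’s a minimal sketch of the pattern: pull listings from an outside source and plot them on a Google map through the published mapping interfaces. It uses today’s Maps JavaScript API rather than the 2005-era version these sites were built on, and the listing data, map element and page setup are invented for illustration.

```typescript
// Minimal mashup sketch. Assumes the Maps JavaScript API is loaded on the page
// with your own key, e.g.:
//   <script src="https://maps.googleapis.com/maps/api/js?key=YOUR_KEY"></script>
// That script supplies the "google" global at runtime.
declare const google: any;

// Hypothetical listings pulled from a third-party source (apartments, gas prices, etc.).
interface Listing {
  title: string;
  lat: number;
  lng: number;
}

const listings: Listing[] = [
  { title: "2BR apartment, $1,400/mo", lat: 42.361, lng: -71.057 },
  { title: "Regular unleaded, $2.19/gal", lat: 42.355, lng: -71.08 },
];

function renderMashup(): void {
  // Draw the base map inside a <div id="map"> element on the page.
  const map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 42.36, lng: -71.06 },
    zoom: 13,
  });

  // Overlay each third-party listing as a marker on Google's map.
  for (const item of listings) {
    new google.maps.Marker({
      position: { lat: item.lat, lng: item.lng },
      map,
      title: item.title,
    });
  }
}

renderMashup();
```

The interesting part isn’t the code, which is trivial; it’s that Google exposes the map as a building block at all, so the third party brings nothing but its own data.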

It’s certainly a new approach to software development and one that promises exciting innovations. However, I’m not sure corporate IT organizations will be as enchanted with perpetually modified software as developers are. IT groups value consistency and management. Many would rather have a single version of a package deployed across the company – even if it’s an older version – than have different iterations springing up everywhere depending on which fixes and enhancements users and administrators had downloaded.

It’ll be an interesting push/pull. There’s no doubt that modularity and open development increase the speed at which new ideas reach the market. But corporate IT usually isn’t as interested in innovation as it is in dependability. The willingness of enterprises to embrace this new approach to development will have a lot to do with how effectively open source is assimilated into the enterprise.

Cisco in denial

You have to wonder why companies don’t learn from the mistakes of their predecessors. Cisco has been in hot water with its users and the media this last month over security problems in its software. The vendor released a boatload of fixes for various OS and application problems last month, and more recently issued a cease-and-desist order against a former employee who revealed a serious flaw in the IOS operating system at the Black Hat conference this month. User reaction was predictable. People wonder why Cisco is in denial over these problems instead of moving proactively to fix them. In the case of the IOS flaw, the patch had actually been available for months. Why not use the opportunity to tell users to upgrade their software?

Shades of Microsoft and Intel. When Microsoft became the target of security sleuths who pointed out vulnerabilities in Windows, the vendor first reacted by attacking its accusers. It was only after multiple reports of flaws emerged that Microsoft turned the problem into a PR advantage by announcing it would dedicate the company to making its products secure.

Similarly, when Intel was the subject of embarrassing revelations about flaws in Pentium chips in 1994, it waited six months to acknowledge the weaknesses. Much to Intel’s surprise, users and media who had pilloried Intel for months flocked to support the company once it fessed up. The Pentium problems are only a distant memory now.

Cisco should learn from Microsoft’s and Intel’s mistakes. Software is imperfect and prone to bugs. Good companies learn from their mistakes and are direct with their users. No one will criticize Cisco for admitting its problems and rededicating itself to doing better. Why wait?