The secret to a successful online guerrilla marketing campaign?

So what’s the secret to building huge traffic for your news and information website, without having to pay for a huge promotion staff and advertising budget?

Obviously, you need a guerrilla marketing campaign, one that encourages people to spread the word about your site, making it a viral sensation. But how can you motivate people to do that promotional work for you?

I’ll share the secret to successful guerrilla marketing online in a moment. But first, I want to assure you that journalists can make money online by running their own websites. Reporters such as Rafat Ali and Josh Marshall have gotten plenty of notice for their successes, but I’ve also found many other publishers, through forums such as WebmasterWorld, who are making a more modest, but still comfortable, living from their own websites.

Journalists looking to the Web as an option for extending their careers following a newsroom layoff won’t get by on their reporting skills alone. Quality of content, unfortunately, does not determine who makes an adequate income online. Traffic does. And you need a lot of traffic to build a commercially successful website.

How much? That’s going to depend on the topic(s) your website covers. Cover a beat that attracts big-money advertisers and you will need relatively fewer readers, maybe even just a few hundred a day. Cover a topic that appeals to businesses selling two-buck ringtones and, well, you’ll need many, many more: hundreds of thousands, most likely.

As Geneva Overholser alluded to in her post yesterday, most journalists aren’t used to worrying about building readership. They work for a newspaper, magazine or station that employs a promotion staff and maybe a circulation department. Folks on the “other side of the Wall” handle that stuff. But when you are publishing on your own (or when your site’s promotion staff has been laid off), you need to take the initiative in building your readership.

Here’s the bad news: No one is going to promote your website for free. If you are expecting people to spread the word about your site just because it has, in your opinion, “THE MOST AMAZING WEB CONTENT EVAH,” you’ll soon join the crowd of frustrated would-be online journalists, waving their $8.37 monthly AdSense checks, moaning about how “nobody can make money online.”

Here, then, is the secret to successful guerrilla marketing online: You have to give people something in return for their effort in promoting your site.

Fortunately for your checkbook, it doesn’t have to be cash. But it does have to be real.

Paul Bradshaw this week pointed to an excellent analysis of the social bookmarking website Digg, which mentioned that successful Diggers built traffic to their websites by friending other Diggers, Digging their stories and getting reciprocal Diggs in return.

The same concept applies to Twitter, and other such services. Sure, you can post your links and updates (as OJR does), but if you really want to build traffic through these communities, you need to participate in both directions, by posting and following those who follow you. (Which, I’ll admit, OJR’s Twitter account has done a lousy job of doing. Our bad.)

Put your favorite websites in your website’s blogroll, then access them by clicking from that blogroll, so your site will show up in their referrer logs. Don’t simply use YouTube as a free video hosting service. Take advantage of its community architecture, and subscribe to others’ channels.

I’ve lamented that a generation of newspaper monopolies has robbed the industry of its competitive spirit. (Heck, that’s half my archive, it seems.) But competition online requires a strong element of cooperation, as well. You need to link to others’ sites, both to encourage them to link to you and to reward them when they do.

Don’t go overboard. Search engines will punish your website if they suspect that you’re engaged in massive, random link trading. Start with your offline friends and colleagues, then extend your online network to include other writers and publishers whom you respect. Then include fans and other readers of your work.

Even then, social networking only gets you part of the way there. Don’t overlook the power of the traditional media that you might have left behind. Newspaper and cable TV stories can provide one-time bursts of traffic that, if you provide enough “sticky” content and functionality on your site, can help build long-term traffic growth.

Now that you are growing into a publisher, however, don’t forget your reporting skills. Few journalists are going to bother writing about your opinion or rehash of their reporting. You need original content to elicit a news report about your work. Put yourself back in those reporters’ or producers’ place. What angle or information can you offer them that would make them want to write or broadcast a story about your site?

I’ve gotten hundreds of stories over the years about my theme park website through an annual “best of” awards list that I release each Fourth of July. This angle works for other news organizations because it gives them fresh content about a popular topic during a slow news period. Lots of papers and TV stations send reporters to the local amusement park on U.S. Independence Day. My awards provide them a fresh news angle for that story, so many run with it. That’s led to an annual spike of tens of thousands of additional readers for my site over the holiday period each year. And many of those readers stick around, returning to my site multiple times over the remainder of the season.

Again, in each of these cases, you need to provide some value to a reader, a writer or a publisher for taking the time to spread the word about your work. Take the initiative. Send the press release. Link to a fellow blogger. Friend a reader on Digg or Facebook. Follow them on Twitter and YouTube.

Abandon the idea that you are talking, one-way, to an obedient readership and embrace a more reciprocal relationship. Forget about your readership and start thinking about your community.

Then, you will find your guerrilla marketing campaign already underway.

OJR launches individual reader blogs

OJR now allows its registered members to maintain individual blogs on OJR.

Just click the “Post Blog Entry” link near the top of the right navigation rail to get started. You may keep an entry in working mode until you are ready for it to go live. Once it does, it will appear on your public personal profile page on OJR. (The “Your Blog” link over there on the right, if you are logged in.)

OJR’s editors and I will read all the submissions, then select ones to go on the OJR front page feed. You can find links to all the most recent reader-submitted blog entries under the “Recent Blogs” header on the right rail.

Blogs on Online Journalism Review should be used for any of the following:

  • To highlight good journalism on the Internet,
  • To criticize journalism that fails to inform the public truthfully,
  • To introduce readers to useful publishing technology,
  • To examine the economics and sociology of publishing online,
  • To cover and promote events of interest to online publishers,
  • To help journalists and other readers learn how to create sustainable publications online.

    Eric Ulken of the LA Times has used the new blog feature to highlight the Times’ new custom embeddable electoral college map and war casualty database. Chris Jennewein used the blog to write about his new Flip Video. And yesterday, Tom Grubisich posted a critique of the Washington Post’s LoudonExtra “hyperlocal” website.

    Why blog on OJR?

    You can start a free blog just about anywhere on the Web, from Blogger.com and beyond. And many of you likely already have a blog. So why would you post anything on OJR?

    It’s simple: for the readers. A front-page post on OJR will reach several thousand readers via the website, our e-mail newsletter and RSS feeds. (Each individual blog has its own RSS feed, too.) Also, OJR front-page stories are indexed by Google News and Yahoo News, and are available for their popular e-mail news alerts. OJR readers aren’t your average Web surfers, either. They include editors, entrepreneurs and bloggers at many top newspaper and independent news websites.

    So, if you want to draw the industry’s attention to some really neat new work from your shop, you want to comment on something you’ve seen in the industry that’s bugging you, or you want to rant or rave about a new tool or widget you’ve tried, we think OJR provides a pretty good platform for you to do that. Just write it up, and post it with us.

    Thank you for reading OJR, and, soon, I hope to thank you for posting here, too!

    P.S. Our USC Annenberg writers and I will continue to write for the site as well, so it won’t be all reader content on the home page, for anyone wondering about that.

'What is Robots.txt?'

    Every Web publisher ought to be thinking about how to improve the traffic that they get from search engines. Even the most strident “I’m trying to appeal only to people in my local community” publishers should recognize that some people within their community, as is the case in any community, are using search engines to find local content.

    Which brings us to this week’s reader question. Actually, it isn’t from a reader, but from a fellow participant in last week’s NewsTools 2008 conference. He asked the question during the session with Google News’ Daniel Meredith, and I thought it worth discussing on OJR, because I saw a lot of heads nodding in the room as he asked it.

    Meredith had mentioned robots.txt as a solution to help publishers control what content on their websites that Google’s indexing spiders would see. A hand shot up.

    “What is robots-dot-text?”

    Meredith gave a quick and accurate answer, but I’m going to go a little more in depth, for the benefit of not-so-tech-savvy online journalists who want their hard work to earn the best possible position in search engine results.

    Note that I wrote “the best possible position,” and not “the top position.” There’s a difference, and I will get to that in a moment.

    First, robots.txt is simply a plain-text file that a Web publisher should put in the root directory of their website. (E.g., http://www.ojr.org/robots.txt. It’s there; feel free to take a look.) The text file includes instructions that tell indexing spiders, or “robots,” what content and directories on that website they may, or may not, look at.

    Here’s an example of a robots.txt file:

    User-agent: Mediapartners-Google
    Disallow:

    User-agent: *
    Disallow: /*.doc$
    Disallow: /*.gif$
    Disallow: /*.jpg$
    Disallow: /ads

    This file tells the “Mediapartners-Google” spider that it can look at anything on the website. (That’s the spider that Google uses to assist in the serving of AdSense ads.) Then, it tells all other spiders that they should not look at any Microsoft Word documents, GIF or JPG images, or anything in the “ads” directory on the website. The asterisk, or *, is a “wild card” that means “any value.”

    Let’s say a search engine spider finds an image file in a story it is looking at on your website. The image file is located on your server at /news/local/images/mugshot.jpg; that is, it is a file called mugshot.jpg, located within the images directory, within the local directory, within the news directory on your Web server.

    Your robots.txt file told the spider not to look at any files that match the pattern /*.jpg. This file is /news/local/images/mugshot.jpg, so it matches that pattern (the asterisk * taking the place of news/local/images/mugshot). So the spider will ignore this, and any other .jpg file it finds on your website.
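To make that matching concrete, here is a minimal sketch, in Python, of how a spider might turn a robots.txt wildcard pattern into a test against a path. (The function name and examples are my own illustration; they are not part of any spider's actual code.)

```python
import re

def rule_matches(pattern, path):
    """Check a URL path against a robots.txt Disallow pattern.

    '*' matches any run of characters; a trailing '$' anchors the
    match to the end of the path. Plain patterns match as prefixes.
    """
    regex = re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(regex, path) is not None

# The mugshot example from above: /*.jpg$ matches the whole path.
print(rule_matches("/*.jpg$", "/news/local/images/mugshot.jpg"))  # True
# A plain pattern like /ads matches as a prefix.
print(rule_matches("/ads", "/ads/banner728.gif"))  # True
```

Note that wildcard and `$` patterns are extensions supported by the major search engines; the original robots.txt standard only defined plain prefix matching.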

    So why is this important to an online journalist? Remember that Meredith said Google penalizes websites for duplicate content. If you want to protect your position in Google’s search engine results and in Google News, you want search engine spiders to focus on content that is unique to your website, and ignore stuff that isn’t.

    So, for example, you might want to configure your robots.txt so it ignores all AP and other wire stories on your website. The easiest way to do this is to configure your content management system to route all wire stories into a Web directory called “wire.” Then put the following lines into your robots.txt file:

    User-agent: *
    Disallow: /wire

    Boom. Duplicate content problem for wire stories solved. Now this does mean that Web searchers will no longer be able to find wire stories on your website through search engines. But many local publishers would see that result as a feature, not a bug. I’ve heard many newspaper publishers argue that readers who come to their sites from search engine links to wire content do not convert for site advertisers and simply hog site bandwidth.
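If you'd like to verify how a well-behaved spider will read your file before you deploy it, Python's standard library happens to include a robots.txt parser. This sketch (my own illustration; the two lines mirror the /wire example above) confirms that wire stories are blocked while original reporting stays visible:

```python
from urllib import robotparser

# Parse the two-line robots.txt from the wire-story example.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wire",
])

# Wire stories are off-limits to any spider...
print(rp.can_fetch("Googlebot", "/wire/ap-20080501.html"))  # False
# ...but your original reporting is still fair game.
print(rp.can_fetch("Googlebot", "/news/local/scoop.html"))  # True
```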

    If you are using a spider to index your website for an internal search engine, though, you will need to allow that spider to see the wire content, if you want it included in your site search. If that’s the case, add these lines above the previous ones in your robots.txt:

    User-agent: name-of-your-spider
    Allow: /wire

    Or, use

    User-agent: name-of-your-spider
    Allow: /

    … if you wish it to see and index all of the content on your site.

    Sometimes, you do not want to be in the top position in the search engine results, or even in those results at all. On OJR, we use robots.txt to keep robots from indexing images, as well as a few directories where we store duplicate content on the site.

    Other publishers might effectively use robots.txt to exclude premium content directories, files stored on Web servers that aren’t meant for public use, or files that you want visitors to reach only by following links from within another page on your website.

    Unfortunately, many rogue spiders roam the Internet, ignoring robots.txt and scraping content from sites without pause. Robots.txt won’t stop those rogues, but most Web servers can be configured to ignore requests from selected IP addresses. Find the IPs of those spiders, and you can block them from your site. But that’s a topic for another day.
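As a taste of that topic: on an Apache server, a few lines in an .htaccess file can turn away a rogue spider by address. (The directives below are Apache 2.2 syntax; the IP address is a placeholder from the documentation range, not a real spider's.)

```apache
# Deny requests from one misbehaving spider (placeholder address)
Order Allow,Deny
Allow from all
Deny from 203.0.113.45
```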

    You don’t have to let search engines find and index content that you’d rather show only to your existing site visitors or other selected individuals. Nor do you have to suffer duplicate content penalties because you run a wire feed on your site. A thoughtful robots.txt strategy can help Web publishers focus their search engine optimization efforts.

    Want more information on creating or fine-tuning a robots.txt file? There’s a good FAQ [answers to frequently asked questions] on robots.txt at http://www.robotstxt.org/faq.html.

    Got a question for the online journalism experts at OJR? E-mail it to OJR’s editor, Robert Niles, via ojr(at)www.ojr.org