Your site's fate, in the blink of an eye

You’ve got one-twentieth of a second to grab a first-time visitor to your website, according to a new study published in the journal Behaviour and Information Technology.

BBC News reports that conclusions drawn about the aesthetic appeal of websites by users who looked at those sites for just 50 milliseconds closely matched those drawn by other users who looked at the sites for longer periods.

“Unless the first impression is favourable, visitors will be out of your site before they even know that you might be offering more than your competitors,” lead researcher Gitte Lindgaard of Canada’s Carleton University told the BBC.

No pressure there….

It's not that you got it wrong; it's how often you blew it

Online encyclopedia Wikipedia’s taken well-deserved hits recently for its bogus entry on a friend of the Kennedy family. But readers need proper context for such criticism. If a publication makes a mistake (which, eventually, we all do), how does its error rate compare with those of others?

The journal Nature this week provides a partial answer. In its investigation, Nature asked leading scientists to examine articles on Wikipedia and in Encyclopaedia Britannica on a variety of science topics. In the 42 articles examined, researchers found 162 errors, omissions or misleading statements in the Wikipedia entries, with 123 in Britannica. Yet the researchers categorized just eight errors as serious – and those were evenly split, with four in Wikipedia and four in Britannica.

The investigation demonstrates, once again, that Wikipedia is not a perfect source of 100-percent accurate information. But neither is Encyclopaedia Britannica. That Wikipedia was able to perform as well as Britannica at avoiding serious errors on difficult scientific content provides a strong endorsement for the concept of getting good information by letting readers collectively write and edit it.

Ban all robots to stop the rogues?

Almost all Web publishers successful enough to have to pay bandwidth charges have struggled with how to deal with traffic from robots. These are the automated programs, sent by search engines, crackers, spammers, sloppy developers and even overeager handheld owners, that scan, index and even download thousands of pages from your website.

When I arrived at OJR, I was surprised to find that more than half, almost two-thirds, of the site’s traffic was not from human readers, but from robots. Some of that traffic was welcome, such as robots from major search engines like Google and Yahoo News. But much of it was from rogue spiders — spammers trolling for e-mail addresses, attempts to download the entire site for duplication on various scraper sites, and such. I spent a fair amount of time tweaking OJR’s robots.txt file to ban identified rogue spiders, and OJR’s stats software to filter hits from the rest.
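For readers who haven’t had to fight this fight yet: robots.txt is just a plain text file at the root of your site that well-behaved crawlers check before fetching pages. A minimal sketch of the kind of rules I’m describing looks like this (“BadBot” is a placeholder for whatever rogue user-agent string shows up in your logs, not a real crawler name):

    # Turn away a specific rogue crawler, identified by its user-agent string
    User-agent: BadBot
    Disallow: /

    # Everyone else is free to crawl the whole site
    User-agent: *
    Disallow:

The catch is that robots.txt is purely advisory. The major search engines honor it, but the worst offenders simply ignore the file, which is why the stats filtering (or outright server-side blocking) is still needed for the rest.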

Well, this week WebmasterWorld.com has taken the radical step of banning all spiders from its site. In a post on the site, administrator Brett Tabke reported that despite spending five to eight hours a week fending off rogue spiders, the site was still hit with 12 million unwanted spider page views last week.
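Banning everything is, at least on paper, the simplest robots.txt of all — two lines telling every crawler to stay out:

    # Disallow all compliant crawlers from the entire site
    User-agent: *
    Disallow: /

Again, only spiders that bother to read the file will respect it; the truly rogue ones still have to be blocked at the server level.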

The move, presumably, will result in WebmasterWorld disappearing from major search engine results and from archives such as Archive.org.

WebmasterWorld has established a large and loyal audience. One could argue that the site doesn’t need search engine traffic. But how loyal will its readership turn out to be if members can’t search for the site, or its archives, through Google, et al?

As Brett titled his post announcing the change, “lets try this for a month or three…”

Then we will see.