I’m linking to this article because I think it is interesting from a “how stuff works” perspective: how much data would one need to judge which documents are trustworthy?

What’s funny is that the data consists merely of recording where people go. Bringing people diverse, quality content is not really on the agenda; the difficulty is in following the wisdom of crowds.

Ironically enough, in following that “wisdom,” one has to wonder 1) how Google is so good at what it does, and 2) whether tracking something so thoroughly is making Google a bit, I dunno, too snoopy.

Regarding #1: if one tracks everyone, one finds all the useless sites people go to, but also the sites visited by a significant minority who might know what they’re talking about. #2 is obviously the problem, both intellectually and morally: the question is whether everything needs to be tracked, or whether trustworthy links could be found just by tapping into that “significant minority.”
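To make that concrete, here is a minimal toy sketch of the idea (my own illustration, not anything Google actually does): rank sites by visits from a hand-picked subset of knowledgeable users instead of from everyone. All of the users, sites, and click data below are hypothetical.

```python
from collections import Counter

# Hypothetical click logs: user -> sites they visited (made-up data).
click_log = {
    "alice": ["python.org", "lolcats.example", "arxiv.org"],
    "bob":   ["lolcats.example", "celebgossip.example"],
    "carol": ["arxiv.org", "python.org", "acm.org"],
    "dave":  ["celebgossip.example", "lolcats.example"],
}

# A hand-picked "significant minority" judged to know what they're talking about.
trusted_users = {"alice", "carol"}

def rank_sites(log, users=None):
    """Count visits, optionally restricted to a subset of users."""
    counts = Counter()
    for user, sites in log.items():
        if users is None or user in users:
            counts.update(sites)
    return counts.most_common()

print("Everyone:        ", rank_sites(click_log))
print("Trusted minority:", rank_sites(click_log, trusted_users))
```

With everyone counted, the lolcats and celebrity-gossip sites float to the top; restricted to the trusted pair, python.org and arxiv.org win, and nobody had to log what the other users were doing at all.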

I say it can happen, and a service that aims to find people who are knowledgeable and have developed tastes (not one that, like Squidoo, proclaims everyone an expert and then discovers that a lot of people think they’re hot stuff when they’re not) could probably be a lot more cost-efficient for search, and most certainly less invasive.

All that having been said, Google is very, very good at what they do, that’s for sure, and I rely on them heavily. I’m using their blogsearch way more than Technorati when I need to find comments on a topic.