
Sunday, July 29, 2007

But how do you know if it's accurate?



Alright, this is the most awesome thing I have seen in a very long time.

People have been talking for years (at least, I have) about color-coding words in Wikipedia articles based on how long they've survived unchanged. You could see at a glance what had lasted hundreds of revisions, and what had just been added.

Other people have been talking for years about some sort of reputation system -- you could vote on whether an article was reliable, or whether a user was. There are all sorts of problems with this kind of thinking, but I won't get into them because they've just been made moot: *

We present a content-driven reputation system for Wikipedia authors. In our system, authors gain reputation when the edits they perform to Wikipedia articles are preserved by subsequent authors, and they lose reputation when their edits are rolled back or undone in short order. Thus, author reputation is computed solely on the basis of content evolution; user-to-user comments or ratings are not used. The author reputation we compute could be used to flag new contributions from low-reputation authors, or it could be used to allow only authors with high reputation to contribute to controversial or critical pages. A reputation system for the Wikipedia could also provide an incentive for high-quality contributions.

We have implemented the proposed system, and we have used it to analyze the entire Italian and French Wikipedias, consisting of a total of 691,551 pages and 5,587,523 revisions. Our results show that our notion of reputation has good predictive value: changes performed by low-reputation authors have a significantly larger than average probability of having poor quality, as judged by human observers, and of being later undone, as measured by our algorithms.
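To make the mechanism a little more concrete, here's a rough sketch of what a survival-based reputation update could look like. This is my own simplification for illustration -- the interface, function, and constants are made up, not the paper's actual algorithm:

```typescript
// Sketch of a content-driven reputation update (simplified, not the paper's algorithm).
// Edits that survive later revisions add to an author's reputation;
// edits that get rolled back in short order subtract from it.

interface Edit {
  author: string;
  survivedRevisions: number; // how many later revisions kept this edit's text
  reverted: boolean;         // was the edit undone in short order?
}

function updateReputation(
  reputation: Map<string, number>,
  edits: Edit[],
  gainPerSurvival = 0.1, // made-up constants, for illustration only
  revertPenalty = 1.0
): Map<string, number> {
  for (const edit of edits) {
    const current = reputation.get(edit.author) ?? 0.1;
    const delta = edit.reverted
      ? -revertPenalty
      : gainPerSurvival * edit.survivedRevisions;
    // Purely content-driven: no user-to-user votes or ratings involved.
    reputation.set(edit.author, Math.max(0, current + delta));
  }
  return reputation;
}
```

The real system measures how an edit's text fares under subsequent revisions rather than using a flat count like this, but the shape is the same: reputation is a side effect of what happens to your contributions, not of anyone's votes.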


And I haven't even gotten to the good part.

The same people developed a color-coding system based on their new trust metric. Text contributed by authors with a high content-driven reputation looks normal (black on white); text contributed by authors with a low reputation has an orange background (of varying shades).
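To give a sense of what "varying shades" might mean in practice, here's one way a trust score could be mapped to a background color. The interpolation is my own guess, not the demo's actual formula:

```typescript
// Map a trust score in [0, 1] to a CSS background color:
// 0 = fully untrusted (solid orange), 1 = fully trusted (plain white).
// The blend is a guess at the idea, not the UCSC demo's actual formula.
function trustToBackground(trust: number): string {
  const t = Math.min(1, Math.max(0, trust));
  // Blend from orange (255, 165, 0) toward white (255, 255, 255) as trust rises.
  const g = Math.round(165 + (255 - 165) * t);
  const b = Math.round(255 * t);
  return `rgb(255, ${g}, ${b})`;
}

// trustToBackground(0)   -> "rgb(255, 165, 0)"   (bright orange)
// trustToBackground(0.5) -> "rgb(255, 210, 128)" (pale orange)
// trustToBackground(1)   -> "rgb(255, 255, 255)" (white)
```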

Here's the demo.



Be sure to click "random page" a few times and page through the article histories.

(Damn. Now I want to go to Wikimania. Anyone want to buy me a ticket to Taipei?)

A note to the UCSC Wiki Team, who created this: even if your proposal doesn't get implemented on the Wikipedia servers (because of community opposition, or lack of resources), you can still implement it yourself (on Wikipedia content, yes) via Greasemonkey, or a Firefox extension, or a web-based mashup, or whatever.
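For what it's worth, the display half of that is the easy part. Assuming something else (a proxy, mashup, or the extension itself) had already computed a trust score for each text fragment -- that's the hard part, and the data-trust tagging below is purely hypothetical -- a Greasemonkey-style script could shade the rendered article like this:

```typescript
// Hypothetical userscript sketch: shade article text by per-fragment trust.
// Assumes the fragments have already been tagged with a data-trust attribute
// holding a score in [0, 1]; computing those scores is not shown here.
function shadeArticleByTrust(): void {
  const fragments = document.querySelectorAll<HTMLElement>("#bodyContent [data-trust]");
  fragments.forEach((el) => {
    const trust = Math.min(1, Math.max(0, parseFloat(el.dataset.trust ?? "1")));
    // Blend from orange toward white as trust rises, like the demo's shading.
    const g = Math.round(165 + 90 * trust);
    const b = Math.round(255 * trust);
    el.style.backgroundColor = `rgb(255, ${g}, ${b})`;
  });
}

shadeArticleByTrust();
```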




* Update: Well, not completely moot. Any automated reputation system can be gamed, and here are a few cantankerous side effects this one might have. That's one reason it may be a better idea to roll it out as an add-on than on the Wikipedia servers themselves.

3 comments:

Rodneylives said...

Of course, if this is implemented, it will eventually be gamed. People intending to game it will probably start by making basic infrastructural changes that no one will dispute (fixing grammar and spelling, for example), using those to build up reputation, and then make changes that would be controversial but, since they won't show much of an orange background, won't be looked at so hard and may remain for some time.

That's not to say this isn't a damn awesome idea, mind you, but that it is still not a replacement for teaching new users how to be good Wikipedia citizens, and pushing forth that kind of editorial morality.

Ben Yates said...

True. (On the other hand, it would provide an additional incentive for the basic infrastructural changes -- paying your dues, or what have you. I'm pretty sure the reputation system only counts edits made in article space, so fewer people would be philosophizing on talk pages to try to gain influence.)

Anonymous said...

Well I've already surely spoiled my reputation for life, having undone my own edits several times.