The Wikipedia Bot wars

There have been a few media stories about editing bots battling it out on Wikipedia, so I wondered what the alpha source for this was. It turns out that it all stems from a paper published in the open-access journal PLOS ONE.

Even good bots fight: The case of Wikipedia

First the Basics – What is a Bot?

The published paper describes it all in great detail and so it is worth digging into.

First, start by reading the following general introduction. It comes from the paper and takes you step-by-step into the world of bots. If you have no idea what a bot actually is, then investing a few minutes reading their description might help demystify it all for you …

A bot, or software agent, is a computer program that is persistent, autonomous, and reactive [2,3]. Bots are defined by programming code that runs continuously and can be activated by itself. They make and execute decisions without human intervention and perceive and adapt to the context they operate in. Internet bots, also known as web bots, are bots that run over the Internet. They appeared and proliferated soon after the creation of the World Wide Web [4]. Already in 1993, Martijn Koster published “Guidelines to robot writers,” which contained suggestions about developing web crawlers [5], a kind of bot. Eggdrop, one of the first known Internet Relay Chat bots, started greeting chat newcomers also in 1993 [6]. In 1996, Fah-Chun Cheong published a 413-page book, claiming to have a current listing of all bots available on the Internet at that point in time. Since then, Internet bots have proliferated and diversified well beyond our ability to record them in an exhaustive list [7,8]. As a result, bots have been responsible for an increasingly larger proportion of activities on the Web. For example, one study found that 25% of all messages on Yahoo! chat over a period of three months in 2007 were sent by spam bots [9]. Another study discovered that 32% of all tweets made by the most active Twitter users in 2009 were generated by bots [10], meaning that bots were responsible for an estimated 24% of all tweets [11]. Further, researchers estimated that bots comprise between 4% and 7% of the avatars on the virtual world Second Life in 2009 [12]. A media analytics company found that 54% of the online ads shown in thousands of ad campaigns in 2012 and 2013 were viewed by bots, rather than humans [13]. According to an online security company, bots accounted for 48.5% of website visits in 2015 [14]. Also in 2015, 100,000 accounts on the multi-player online game World of Warcraft (about 1% of all accounts) were banned for using bots [15]. 
And in the same year, a database leak revealed that more than 70,000 “female” bots sent more than 20 million messages on the cheater dating site Ashley Madison [16].

As the population of bots active on the Internet 24/7 is growing fast, their interactions are equally intensifying. An increasing number of decisions, options, choices, and services depend now on bots working properly, efficaciously, and successfully. Yet, we know very little about the life and evolution of our digital minions. In particular, predicting how bots’ interactions will evolve and play out even when they rely on very simple algorithms is already challenging. Furthermore, as Alan and Sruthi demonstrated, even if bots are designed to collaborate, conflict may occur inadvertently. Clearly, it is crucial to understand what could affect bot-bot interactions in order to design cooperative bots that can manage disagreement, avoid unproductive conflict, and fulfill their tasks in ways that are socially and ethically acceptable.

There are many types of Internet bots (see Table 1). These bots form an increasingly complex system of social interactions. Do bots interact with each other in ways that are comparable to how we humans interact with each other? Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality [17]. Despite recent advances in the field of Artificial Intelligence, the idea that bots can have morality and culture is still far from reality. Today, it is natural to expect interactions between bots to be relatively predictable and uneventful, lacking the spontaneity and complexity of human social interactions. However, even in such simple contexts, our research shows that there may be more similarities between bots and humans than one may expect. Focusing on one particular human-bot community, we find that conflict emerges even among benevolent bots that are designed to benefit their environment and not fight each other, and that bot interactions may differ when they occur in environments influenced by different human cultures.

Benevolent bots are designed to support human users or cooperate with them. Malevolent bots are designed to exploit human users and compete negatively with them. We have classified high-frequency trading algorithms as malevolent because they exploit markets in ways that increase volatility and precipitate flash crashes.
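The paper's definition of a bot (persistent, autonomous, and reactive) can be illustrated with a toy sketch. This is entirely my own illustration, not code from the paper:

```python
import queue

class TinyBot:
    """A minimal illustration of the three properties quoted above:
    persistent (runs in a loop), autonomous (decides without human
    input), and reactive (responds to events in its environment)."""

    def __init__(self, events):
        self.events = events  # the bot's "environment": a queue of events
        self.log = []

    def decide(self, event):
        # Autonomous: a hard-coded rule stands in for the bot's logic.
        return f"greeted {event}" if event.startswith("newcomer:") else None

    def run(self):
        # Persistent: keep polling until the environment is exhausted.
        while True:
            try:
                event = self.events.get_nowait()
            except queue.Empty:
                break
            action = self.decide(event)  # Reactive: respond to each event
            if action:
                self.log.append(action)

events = queue.Queue()
for e in ["newcomer:alice", "edit:page42", "newcomer:bob"]:
    events.put(e)
bot = TinyBot(events)
bot.run()
print(bot.log)  # two greetings; the edit event is ignored
```

A real greeter bot like Eggdrop runs against a live chat or wiki rather than a finite queue, but the loop-decide-act shape is the same.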

What do Bots do on Wikipedia?

As you might now anticipate, the goal is to design bits of code that trawl through pages in an automated manner, fixing things and adding links where appropriate. The authors describe the precise scope of their study as follows …

We study bots on Wikipedia, the largest free online encyclopedia. Bots on Wikipedia are computer scripts that automatically handle repetitive and mundane tasks to develop, improve, and maintain the encyclopedia. They are easy to identify because they operate from dedicated user accounts that have been flagged and officially approved. Approval requires that the bot follows Wikipedia’s bot policy.

Bots are important contributors to Wikipedia. For example, in 2014, bots completed about 15% of the edits on all language editions of the encyclopedia [18]. In general, Wikipedia bots complete a variety of activities. They identify and undo vandalism, enforce bans, check spelling, create inter-language links, import content automatically, mine data, identify copyright violations, greet newcomers, and so on [19]. Our analysis here focuses on editing bots, which modify articles directly. We analyze the interactions between bots and investigate the extent to which they resemble interactions between humans. In particular, we focus on whether bots disagree with each other, how the dynamics of disagreement differ for bots versus humans, and whether there are differences between bots operating in different language editions of Wikipedia.
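Because bots operate from flagged, approved accounts, separating bot edits from human edits is essentially a lookup. A minimal sketch of that classification step, assuming each edit is reduced to an (editor, page) pair (in practice the flagged accounts can be fetched from the MediaWiki API via `list=allusers` with `augroup=bot`; here I just hard-code two bot names mentioned in the paper):

```python
def classify_editors(edits, bot_accounts):
    """Split (editor, page) edit records into bot edits and human
    edits, using a set of flagged bot account names."""
    bot_edits, human_edits = [], []
    for editor, page in edits:
        if editor in bot_accounts:
            bot_edits.append((editor, page))
        else:
            human_edits.append((editor, page))
    return bot_edits, human_edits

# Xqbot and SieBot are flagged bot accounts named in the paper;
# "Jane_Doe" is a made-up human editor.
bots = {"Xqbot", "SieBot"}
edits = [("Xqbot", "Estonia"), ("Jane_Doe", "Estonia"), ("SieBot", "Niels Bohr")]
bot_edits, human_edits = classify_editors(edits, bots)
print(len(bot_edits), len(human_edits))  # 2 1
```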

The Study itself

Knowing the above now makes it easy to see what the study is all about.

  • They downloaded all data for all editing changes, and that includes cases where things get rolled back.
  • They were able to work out who made each edit, or more specifically, which edits were made by human editors and which by automated code.

Once they had the above raw information, it was then possible to filter out human editors and so they proceeded to analyse bot interactions.
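The revert-detection step can be sketched roughly like this. It is my own simplification, assuming each revision is reduced to an (editor, content-hash) pair; a revision that restores the article to an earlier state, i.e. repeats an earlier content hash, is treated as reverting the revision just before it:

```python
def find_reverts(history):
    """Detect reverts in a revision history, where each revision is an
    (editor, content_hash) pair. A revision counts as a revert if its
    content hash matches an earlier revision's hash; the recorded pair
    is (reverting editor, editor of the immediately preceding revision)."""
    seen = set()
    reverts = []
    for i, (editor, content_hash) in enumerate(history):
        if content_hash in seen:
            reverted_editor = history[i - 1][0]
            if reverted_editor != editor:  # ignore self-reverts
                reverts.append((editor, reverted_editor))
        seen.add(content_hash)
    return reverts

history = [
    ("BotA", "h1"),  # original state
    ("BotB", "h2"),  # BotB changes the article
    ("BotA", "h1"),  # BotA restores the old state -> reverts BotB
    ("BotB", "h2"),  # BotB restores its version -> reverts BotA
]
print(find_reverts(history))  # [('BotA', 'BotB'), ('BotB', 'BotA')]
```

The four-line history above is exactly the kind of tit-for-tat loop the study counts; two single-purpose bots can keep this up indefinitely.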

One point to note: the data they analysed covers edits within 13 different language editions of Wikipedia during the first ten years after the encyclopedia was launched (2001–2010), so it does not cover what has been happening online since then.

What did they actually discover?

  • Bots constitute a tiny proportion of all Wikipedia editors (less than 0.1%), yet they are responsible for a significant proportion of all edits.
  • The level of bot activity significantly differs between different language editions of Wikipedia, with bots generally more active in smaller language editions.
  • Since 2001, the number of bots and their activity has been increasing, but at a slowing rate. However, the number of reverts between bots has been continuously increasing.
    • Translation: bot interactions are not becoming more efficient. This suggests that bot owners have not learned to identify bot conflicts faster.
  • Bots revert each other a lot: for example, over the ten-year period, each bot on English Wikipedia reverted another bot 105 times on average.
    • By contrast, human editors revert each other about three times on average. This is understandable; bots just follow a set of instructions, whereas humans will realise something is up, talk to each other, and work something out.
    • Interestingly enough, bot revert rates vary by language: German bots revert each other about 24 times on average, while Portuguese bots have an average revert rate of 185, though that is perhaps because the Portuguese bots are far more active.
  • Bot-bot interactions operate on a different time scale than human-human interactions.
    • For humans, the distribution of times between successive reverts has characteristic peaks at 2 minutes, 24 hours, and 1 year. In comparison, bot-bot interactions have a characteristic average response time of 1 month.
    • Why? The authors suggest that this is because, first, bots systematically crawl articles and, second, bots are restricted in how often they can make edits (the Wikipedia bot policy usually requires a spacing of 10 seconds, or 5 for anti-vandalism activity, which is considered more urgent). In contrast, humans use automatic tools that report live changes made to a pre-selected list of articles; they can thus follow only a small set of articles and, in principle, react instantaneously to any edits on those.
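The per-pair revert counts and response times discussed above can be computed from a revert log along these lines. This is a sketch with made-up data: the bot names are real examples from the paper, but the timestamps are invented for illustration:

```python
from collections import Counter
from datetime import datetime

def revert_stats(reverts):
    """Given (timestamp, reverter, reverted) events, count reverts per
    ordered editor pair and collect the gaps (in days) between
    successive reverts within each pair: the raw quantities behind the
    averages quoted above."""
    pair_counts = Counter((a, b) for _, a, b in reverts)
    last_seen, gaps = {}, []
    for ts, a, b in sorted(reverts):  # chronological order
        if (a, b) in last_seen:
            gaps.append((ts - last_seen[(a, b)]).days)
        last_seen[(a, b)] = ts
    return pair_counts, gaps

# Illustrative data only; the real study used full revision dumps.
reverts = [
    (datetime(2009, 1, 1), "Xqbot", "SieBot"),
    (datetime(2009, 2, 3), "Xqbot", "SieBot"),
    (datetime(2009, 2, 5), "SieBot", "Xqbot"),
]
counts, gaps = revert_stats(reverts)
print(counts[("Xqbot", "SieBot")], gaps)  # 2 [33]
```

In the study's terms, a long tail of month-scale gaps in `gaps` is the signature of slow, crawl-driven bot conflict rather than the fast back-and-forth seen between humans.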

Further Observations

  • The same bots were responsible for the majority of reverts in all the language editions they looked at. For example, some of the bots that revert the most other bots include Xqbot, EmausBot, SieBot, and VolkovBot. These are all bots specializing in fixing inter-wiki links.
  • There are a few articles with many bot-bot reverts, and these articles tend to be the same across languages. For example, some of the articles most contested by bots are about Pervez Musharraf (former president of Pakistan), Uzbekistan, Estonia, Belarus, the Arabic language, Niels Bohr, and Arnold Schwarzenegger. This would suggest that a significant portion of bot-bot fighting occurs across languages rather than within them.

There is a warning here

Bot authors, with the best of intentions, crafted bots to perform a specific task and then set them running. What happened next was a completely unintended consequence that the bot authors had not foreseen.

Key Lesson: a system of simple bots may produce complex dynamics with unintended consequences.

People build artificially intelligent systems in complete isolation, yet once those systems are released into the wild, interaction with other systems is inevitable.

One example is the self-driving car.

You might want to avoid being a pioneer, because there may be unintended consequences …

“Take self-driving cars. A very simple thing that’s often overlooked is that these will be used in different cultures and environments. An automated car will behave differently on the German autobahn to how it will on the roads in Italy. The regulations are different, the laws are different, and the driving culture is very different,” – Taha Yasseri, one of the study authors

Further Author Quotes

“The fights between bots can be far more persistent than the ones we see between people. Humans usually cool down after a few days, but the bots might continue for years.”

“We had very low expectations to see anything interesting. When you think about them they are very boring. The very fact that we saw a lot of conflict among bots was a big surprise to us.” – Taha Yasseri, one of the study authors
