However, as irresistible as this story was to news outlets, Facebook’s engineers didn’t pull the plug on the experiment out of fear that the bots were secretly colluding to usurp their meatbag overlords and usher in a new age of machine dominance. They ended the experiment because, once the bots had deviated far enough from acceptable English, the conversational portion of the test yielded data of limited value.

The first formal instantiation of a Turing Test for machine intelligence is the Loebner Prize, which has been organized since 1991. In a typical setup, there are three areas: a computer area with typically three to five machines, each running a stand-alone version (i.e. not connected to the internet) of a participating chatbot; an area for the human judges, typically four people; and another area for the ‘confederates’, typically three to five human volunteers, depending on the number of chatbot participants. The judges, each working at a terminal separated from the others, engage in a conversation through the terminal without knowing whether they are connected to a computer or a human. The organizing committee requires that conversations be restricted to a single topic. The judges’ task is to recognize chatbot responses and distinguish them from conversations with humans. If the judges cannot reliably distinguish the chatbot from the human, the chatbot is said to have passed the test.
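To make the mechanics concrete, here is a minimal sketch (all names and the judging heuristic are hypothetical) of a Loebner-style round: each judge is blindly paired with either a chatbot or a human confederate and must label the hidden partner afterwards; the chatbot passes if the judges’ verdicts are no better than chance.

```python
import random

def chatbot_reply(message: str) -> str:
    # Stand-in for a real chatbot; responds with a canned phrase.
    return f"Interesting! Tell me more about {message.split()[-1]}."

def human_reply(message: str) -> str:
    # Stand-in for a human confederate typing at another terminal.
    return "I'm not sure, what do you think?"

partners = [("computer", chatbot_reply), ("human", human_reply)]

correct = 0
rounds = 4  # one blind conversation per judge, as in a typical setup
for _ in range(rounds):
    identity, reply = random.choice(partners)  # blind pairing
    answer = reply("Let's talk about the weather today")
    # A naive judging heuristic: canned enthusiasm betrays the bot.
    verdict = "computer" if answer.startswith("Interesting!") else "human"
    correct += (verdict == identity)

# If judges guess no better than chance, the chatbot "passes" the test.
print(f"Judges identified the partner correctly in {correct}/{rounds} rounds")
```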


Efforts by servers hosting websites to counteract bots vary. Servers may choose to outline rules for the behaviour of internet bots by implementing a robots.txt file: this file is simply text stating the rules governing a bot's behaviour on that server. Any bot that does not follow these rules when interacting with (or 'spidering') any server should, in theory, be denied access to, or removed from, the affected website. If the only rule implementation by a server is a posted text file with no associated enforcement software, then adhering to those rules is entirely voluntary – in reality there is no way to enforce them, or even to ensure that a bot's creator or operator acknowledges, or even reads, the robots.txt file's contents. Some bots are "good" – e.g. search engine spiders – while others can be used to launch malicious attacks, most notably in political campaigns.[2]
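As a concrete illustration, here is a minimal sketch of how a well-behaved crawler might consult a robots.txt file before fetching a page, using Python's standard-library urllib.robotparser; the site URL, the sample rules, and the "MyCrawler/1.0" user-agent string are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical example: a robots.txt at example.com might contain
#   User-agent: *
#   Disallow: /private/
rp = RobotFileParser("https://example.com/robots.txt")
rp.read()  # fetch and parse the rules

# A well-behaved bot asks permission before each request; compliance
# is entirely voluntary, as nothing on the server enforces the answer.
for url in ("https://example.com/index.html",
            "https://example.com/private/data.html"):
    if rp.can_fetch("MyCrawler/1.0", url):
        print(f"Allowed:    {url}")
    else:
        print(f"Disallowed: {url}")
```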
The word robot is derived from the Czech noun robota, meaning “forced labor”, and was coined by the cubist painter and writer Josef Capek, older brother of the novelist and playwright Karel Capek. The word first appeared in 1920 in Karel Capek’s play “R.U.R.” (“Rossum’s Universal Robots”), which popularized the word invented by the playwright’s brother.[3]

However, the revelations didn’t stop there. The researchers also learned that the bots had become remarkably sophisticated negotiators in a short period of time. One bot even attempted to mislead a researcher by feigning interest in a particular item, then willingly “sacrificing” that item at a later stage to gain crucial negotiating leverage – a remarkable level of premeditation and strategic “thinking.”
The term "ChatterBot" was originally coined by Michael Mauldin (creator of the first Verbot, Julia) in 1994 to describe these conversational programs.[2] Today, most chatbots are accessed via virtual assistants such as Google Assistant and Amazon Alexa, via messaging apps such as Facebook Messenger or WeChat, or via individual organizations' apps and websites.[3][4] Chatbots can be classified into usage categories such as conversational commerce (e-commerce via chat), analytics, communication, customer support, design, developer tools, education, entertainment, finance, food, games, health, HR, marketing, news, personal, productivity, shopping, social, sports, travel and utilities.[5]