In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably, on the basis of the conversational content alone, between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent; the introduction to his paper presented it more as a debunking exercise.
The chatbot, in its role as a virtual representative of an enterprise, is widely used by businesses outside the US, primarily in the UK, the Netherlands, Germany, and Australia. The term is also popular among amateur AI enthusiasts who are willing to spend vast amounts of time on their own intelligent creations, with varied results.
The term chatbot is closely related to chat bot and chatterbot. Chatterbot more often describes a chatbot that talks a lot but is not necessarily very intelligent in processing the user's answers. Chat bot is favoured by technical people, who regard 'bot' as a general term for robotised actions and consider a chat bot a particular kind of bot. Of the three, chatbot is the most popular term and has the broadest meaning.

The most widely used anti-bot technique is the CAPTCHA, a form of Turing test used to distinguish between a human user and a less sophisticated AI-powered bot through graphically encoded, human-readable text. Examples of providers include reCAPTCHA and commercial companies such as Minteye, Solve Media, and NuCaptcha. CAPTCHAs, however, are not foolproof in preventing bots, as they can often be circumvented by optical character recognition, security holes, or even by outsourcing CAPTCHA solving to cheap laborers.
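To illustrate how a site typically applies such a test, the sketch below shows server-side verification of a reCAPTCHA v2 token. The siteverify endpoint and its `secret`/`response` parameters follow Google's published reCAPTCHA API; the function name and the key values passed to it are hypothetical placeholders, so treat this as a minimal sketch rather than a complete integration.

```python
import requests

# Google's published verification endpoint for reCAPTCHA v2.
RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_human(captcha_token: str, secret_key: str) -> bool:
    """Return True if the CAPTCHA provider confirms the token.

    `captcha_token` is the value the browser widget submits (the
    g-recaptcha-response form field); `secret_key` is the site's
    private key issued by the provider.
    """
    result = requests.post(
        RECAPTCHA_VERIFY_URL,
        data={"secret": secret_key, "response": captcha_token},
        timeout=5,
    )
    # The API answers with JSON; "success" is true only when the
    # token corresponds to a challenge a user actually solved.
    return result.json().get("success", False)
```

A server would call this once per form submission and reject the request when it returns False; as the surrounding text notes, this only filters out less sophisticated bots.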
If a text-sending algorithm can pass itself off as a human rather than a chatbot, its messages become more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance by making false claims during a presidential election. With enough chatbots, it might even be possible to manufacture artificial social proof.[58][59]
Chatbots could be used as weapons on social networks such as Twitter or Facebook. An entity or individuals could create countless chatbots to harass people. They could even try to track how successful their harassment is by using machine-learning-based methods to sharpen their strategies and counteract harassment-detection tools.
“There is hope that consumers will be keen on experimenting with bots to make things happen for them. It used to be like that in the mobile app world 4+ years ago. When somebody told you back then… ‘I have built an app for X’… You most likely would give it a try. Now, nobody does this. It is probably too late to build an app company as an indie developer. But with bots… consumers’ attention spans are hopefully going to be wide open/receptive again!” — Niko Bonatsos, Managing Director at General Catalyst