The word bot, in the Internet sense, is a short form of robot and dates to the 20th century. The modern use of the word bot has curious affinities with earlier senses, e.g. “parasitical worm or maggot” (1520s), of unknown origin, and the Australian-New Zealand slang “worthless, troublesome person” (World War I era). The method of minting new slang by clipping the heads off respectable words does not seem to be old or widespread in English. Examples such as za from pizza, zels from pretzels, and rents from parents are American English student or teen slang and seem to date back no further than the late 1960s.[4]

Evie's capacities go beyond mere verbal or textual interaction; the AI utilised in Evie also controls the timing and degree of her facial expressions and movement. Her visually displayed reactions and emotions blend and vary in surprisingly complex ways, and a range of voices is delivered to your browser, along with lip-syncing information, to bring the avatar to life! Evie uses Flash if your browser supports it, but still works without it, thanks to our own Existor Avatar Player technology, allowing you to enjoy her to the full on iOS and Android.
“It’s hard to balance that urge to just dogpile the latest thing when you’re feeling like there’s a land grab or gold rush about to happen all around you and that you might get left behind. But in the end quality wins out. Everyone will be better off if there’s laser focus on building great bot products that are meaningfully differentiated.” — Ryan Block, Cofounder of Begin.com
The first formal instantiation of a Turing Test for machine intelligence is the Loebner Prize, which has been organized since 1991. In a typical setup there are three areas: the computer area, with typically 3-5 computers, each running a stand-alone version (i.e. not connected to the internet) of a participating chatbot; an area for the human judges, typically four people; and another area for the ‘confederates’, typically 3-5 volunteer humans, depending on the number of chatbot participants. The human judges, each working at their own terminal separated from the others, engage in a conversation with either a human or a computer through the terminal, not knowing which they are connected to, and then simply start to interact. The organizing committee requires that conversations be restricted to a single topic. The task for the human judges is to recognize chatbot responses and distinguish them from conversations with humans. If the judges cannot reliably distinguish the chatbot from the human, the chatbot is said to have passed the test.
However, as irresistible as this story was to news outlets, Facebook’s engineers didn’t pull the plug on the experiment out of fear that the bots were somehow secretly colluding to usurp their meatbag overlords and usher in a new age of machine dominance. They ended the experiment because, once the bots had deviated far enough from acceptable English, the data gleaned from the conversational aspects of the test was of limited value.
The term chatbot is closely related to chat bot and chatterbot. Chatterbot is more popular for a chatbot that talks a lot but is not necessarily very intelligent in processing the user’s answers. Chat bot is used by technical people who consider the word ‘bot’ a normal term for ‘robotised actions’, and for them a ‘chat bot’ is a special kind of bot. Chatbot is actually the most popular of these three terms and has the broadest meaning.
The word robot is derived from the Czech noun robota, meaning “labor”, and was coined by the cubist painter and writer Josef Capek, older brother of the novelist and playwright Karel Capek. The word robot first appeared in 1920 in Karel Capek’s play “R.U.R.” (“Rossum’s Universal Robots”), which popularized the word invented by the playwright’s brother.[3]
The idea was to permit Tay to “learn” about the nuances of human conversation by monitoring and interacting with real people online. Unfortunately, it didn’t take long for Tay to figure out that Twitter is a towering garbage-fire of awfulness, which resulted in the Twitter bot claiming that “Hitler did nothing wrong,” using a wide range of colorful expletives, and encouraging casual drug use. While some of Tay’s tweets were “original,” in that Tay composed them itself, many were actually the result of the bot’s “repeat back to me” function, meaning users could literally make the poor bot say whatever disgusting remarks they wanted. 
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: