According to Richard Wallace, the development of chatbots has passed through three phases over the past 60 years. In the first phase, chatbots merely simulated human-to-human conversation, using canned responses triggered by keywords, and had almost no intelligence. The second phase was closely tied to the expansion of the Internet, which made chatbots widely accessible and allowed them to chat with thousands of users; the first commercial chatbot developers also appeared during this period. The third wave of chatbot development combines advanced technologies such as natural language processing, speech synthesis and real-time video rendering, and comprises chatbots appearing within web pages, instant messaging and virtual worlds.

Evie's capabilities go beyond mere verbal or textual interaction; the AI used in Evie also controls the timing and degree of her facial expressions and movement. Her visually displayed reactions and emotions blend and vary in surprisingly complex ways, and a range of voices is delivered to your browser, along with lip-synching information, to bring the avatar to life. Evie uses Flash if your browser supports it, but still works without it, thanks to our own Existor Avatar Player technology, allowing you to enjoy her to the full on iOS and Android.
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise:
Reports of political interference in recent elections, including the 2016 US and 2017 UK general elections,[3] have raised the notion that botting is becoming more prevalent, because of the ethical tension between a bot's design and its designer. According to Emilio Ferrara, a computer scientist from the University of Southern California writing in Communications of the ACM,[4] the lack of resources available for fact-checking and information verification results in large volumes of false reports and claims being spread by these bots on social media platforms. In the case of Twitter, most of these bots are programmed with search-filter capabilities that target keywords and phrases favoring or opposing particular political agendas, and then retweet the matching posts. While bots are programmed to spread unverified information throughout the social media platform,[5] this is a challenge that programmers face in the wake of a hostile political climate. The programs are assigned binary functions, which are executed through an application programming interface embedded in the social media website. Ferrara describes the "Bot Effect" as what happens when the socialization of bots and human users creates a vulnerability to the leaking of personal information and to polarizing influences outside the ethics of the bot's code. In his study, Guillory Kramer observes the behavior of emotionally volatile users and the impact bots have on them, altering their perception of reality.
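The keyword-filtering step described above can be sketched as a simple matching function. This is only an illustration of the technique, not any particular bot's code; the phrase list and function names are hypothetical, and a real bot would pass the selected posts to a platform API for retweeting.

```python
# Hypothetical list of tracked phrases reflecting a political agenda.
TRACKED_PHRASES = {"election fraud", "voter turnout", "#debate"}

def matches_agenda(post_text):
    """Return True if the post contains any tracked keyword or phrase."""
    text = post_text.lower()
    return any(phrase in text for phrase in TRACKED_PHRASES)

def select_for_retweet(posts):
    """Filter a stream of posts down to those a bot would amplify."""
    return [p for p in posts if matches_agenda(p)]
```

In an actual deployment, `select_for_retweet` would run continuously over a live stream of posts, which is what makes unverified claims spread so quickly once they match a tracked phrase.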
A malicious use of bots is the coordination and operation of an automated attack on networked computers, such as a denial-of-service attack by a botnet. Internet bots can also be used to commit click fraud and, more recently, have seen use in MMORPGs as computer game bots.[citation needed] A spambot is an internet bot that attempts to post large amounts of spam content on the Internet, usually adding advertising links. More than 94.2% of websites have experienced a bot attack.[2]
A chat bot (also chatbot or chatterbot) can be found on screens and in virtual worlds, but also in the real world, for example holographically projected or as a physical talking and responding puppet, toy or robot. Often, a chat bot appears online and in instant-messenger programs such as Windows Live Messenger, AOL Instant Messenger or Google Talk, where it is part of the buddy, contact or follow list of a human user. Chat bots appear on many other platforms as well, such as social networks (e.g. Facebook), virtual worlds (e.g. Second Life) or mobile devices (e.g. iPhone).

Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversation, or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.[55]
Some bots communicate with other users of Internet-based services, via instant messaging (IM), Internet Relay Chat (IRC), or another web interface such as Facebook Bots and Twitterbots. These chatterbots may allow people to ask questions in plain English and then formulate a proper response. These bots can often handle many tasks, including reporting weather, zip-code information, sports scores, converting currency or other units, etc.[citation needed] Others are used for entertainment, such as SmarterChild on AOL Instant Messenger and MSN Messenger.
Other companies explore ways they can use chatbots internally, for example for customer support, human resources, or even in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain simple yet time-consuming processes, such as requesting sick leave.[31] Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën are now using automated online assistants instead of call centres with humans to provide a first point of contact. A SaaS chatbot business ecosystem has been steadily growing since the F8 Conference, when Facebook's Mark Zuckerberg announced that Messenger would allow chatbots into the app.[32] In large companies, such as hospitals and aviation organizations, IT architects are designing reference architectures for intelligent chatbots that are used to unlock and share knowledge and experience in the organization more efficiently, and to significantly reduce errors in answers from expert service desks.[33] These intelligent chatbots draw on many kinds of artificial intelligence, such as image moderation, natural language understanding (NLU), natural language generation (NLG), machine learning and deep learning.
Despite all efforts over almost half a century, most chatbots are still easily unmasked, but over the coming decades they will get smarter, until eventually it is human beings who will be distinguishable by their silly answers, in contrast to the much smarter chatbots. All of this will accelerate as soon as a single chatbot becomes smarter than a single human being. Chatbots will then be able to learn from each other instead of from human beings, their knowledge will explode, and they will be able to design even better learning mechanisms. In the long run, we will learn language from chatbots instead of the other way around.
The “web-based” solution runs on a remote server and is generally reachable by the general public through a web page. The page embeds the chatbot, and a text form is the sole interface between the user (you) and the chatbot. Any “upgrades” or improvements to the interface are solely at the option, and the responsibility, of the botmaster.
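A minimal sketch of this arrangement, using only Python's standard library: a page containing a text form, and a server-side handler that returns the bot's reply. The endpoint, form field name and `bot_reply` placeholder are all hypothetical; a real deployment would plug the chatbot engine in behind `bot_reply`.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

# The sole interface: a web page with a single text form.
PAGE = b"""<form method="post" action="/chat">
<input name="message"><button>Send</button></form>"""

def bot_reply(message):
    # Placeholder for the chatbot engine running on the remote server.
    return "You said: " + message

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the page with the embedded chat form.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def do_POST(self):
        # Read the submitted form field and return the bot's reply.
        length = int(self.headers.get("Content-Length", 0))
        form = parse_qs(self.rfile.read(length).decode())
        reply = bot_reply(form.get("message", [""])[0])
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(reply.encode())

# To serve: HTTPServer(("localhost", 8000), ChatHandler).serve_forever()
```

Because the form is the only interface, any change the botmaster makes to `PAGE` or to the reply logic reaches every user immediately, which is the trade-off of the web-based approach.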
Although NBC Politics Bot was a little rudimentary in terms of its interactions, this particular application of chatbot technology could well become a lot more popular in the coming years, particularly as audiences struggle to keep up with the enormous volume of news content being published every day. The bot also helped NBC determine what content most resonated with users, which the network will use to further tailor and refine its content in the future.
A representative example of a chat bot is A.L.I.C.E., brought to artificial life in 1995 by Richard Wallace. The A.L.I.C.E. bot has participated in numerous competitions related to natural language processing evaluation and obtained many honors and awards: it won the Loebner Prize contest at least three times, placed in the top 10 of the Chatterbox competition, and won a best character/personality chat bot contest.
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during a presidential election. With enough chatbots, it might even be possible to achieve artificial social proof.[58][59]
However, as irresistible as this story was to news outlets, Facebook’s engineers didn’t pull the plug on the experiment out of fear the bots were somehow secretly colluding to usurp their meatbag overlords and usher in a new age of machine dominance. They ended the experiment because, once the bots had deviated far enough from acceptable English-language parameters, the data gleaned from the conversational aspects of the test was of limited value.
The main challenge is in teaching a chatbot to understand the language of your customers. In every business, customers express themselves differently, and each segment of a target audience speaks in its own way. The language is influenced by advertising campaigns on the market, the political situation in the country, and releases of new services and products from Google, Apple and Pepsi, among others. The way people speak depends on their city, mood, weather and moon phase. Even the release of a film such as Star Wars may play an important role in how a business communicates with its customers. That’s why training a chatbot to correctly understand everything the user types requires a lot of effort.

ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".