^ "From Russia With Love" (PDF). Retrieved 2007-12-09. Psychologist and Scientific American: Mind contributing editor Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named. Scientific American: Mind, October–November 2007, page 16–17, "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer". Also available online.

Other companies explore ways they can use chatbots internally, for example for customer support, human resources, or even in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to automate certain simple yet time-consuming processes involved in requesting sick leave.[31] Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën now use automated online assistants instead of call centres with humans to provide a first point of contact. A SaaS chatbot business ecosystem has been growing steadily since the F8 Conference, at which Facebook's Mark Zuckerberg announced that Messenger would allow chatbots into the app.[32] In large organizations, such as hospitals and aviation companies, IT architects are designing reference architectures for intelligent chatbots that are used to unlock and share knowledge and experience within the organization more efficiently, and to significantly reduce errors in answers from expert service desks.[33] These intelligent chatbots make use of many kinds of artificial intelligence, such as image moderation, natural language understanding (NLU), natural language generation (NLG), machine learning and deep learning.

Reports of political interference in recent elections, including the 2016 US presidential election and the 2017 UK general election,[3] have made the notion of botting more prominent and raised ethical questions about the relationship between a bot's design and its designer. According to Emilio Ferrara, a computer scientist at the University of Southern California writing in Communications of the ACM,[4] the lack of resources available for fact-checking and information verification results in large volumes of false reports and claims being spread by these bots on social media platforms. In the case of Twitter, most of these bots are programmed with search-filter capabilities that target key words and phrases favoring or opposing particular political agendas, and then retweet matching posts. Because such bots are programmed to spread unverified information throughout the social media platform,[5] they present a challenge that programmers face in the wake of a hostile political climate. Binary functions are assigned to the programs, and an Application Programming Interface embedded in the social media website executes the assigned tasks. Ferrara describes the "Bot Effect" as what happens when the socialization of bots and human users creates a vulnerability to the leaking of personal information and to polarizing influences outside the ethics of the bot's code. In his study, Guillory Kramer observes the behavior of emotionally volatile users and the impact the bots have on them, altering their perception of reality.
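
As a rough illustration of the keyword-filtering mechanism described above, the sketch below shows how a hypothetical amplification bot might select posts whose text matches a list of targeted phrases. The phrases, sample posts and the amplify() stub are invented for illustration only; they are not taken from any real bot, study or platform API.

```python
# Hypothetical sketch of the keyword-filtering step used by amplification bots.
# Target phrases, sample posts and amplify() are illustrative placeholders.

TARGET_PHRASES = ["candidate x", "#electionfraud", "rigged vote"]  # assumed examples

def matches_agenda(text: str, phrases=TARGET_PHRASES) -> bool:
    """Return True if the post text contains any targeted key phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in phrases)

def amplify(post_id: str) -> None:
    # A real bot would call the platform's API here (e.g. a retweet endpoint);
    # this stub only prints, to keep the sketch self-contained.
    print(f"amplifying post {post_id}")

sample_posts = [  # invented data for demonstration
    {"id": "1", "text": "Candidate X promises lower taxes"},
    {"id": "2", "text": "Lovely weather in Boston today"},
]

for post in sample_posts:
    if matches_agenda(post["text"]):
        amplify(post["id"])
```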


Using chatbot builder platforms. You can create a chatbot with the help of services that provide all the necessary features and integrations. This can be a good choice for an in-house chatbot serving your team. The option comes with some disadvantages, including limited configuration options and dependence on the service. Several popular platforms are available for building chatbots.

One pertinent field of AI research is natural language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent, and has since been adopted by various other developers of so-called Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern-matching techniques without any reasoning capabilities, the same technique that ELIZA used back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.
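
To give a sense of what AIML-style pattern matching amounts to, the minimal sketch below pairs a pattern containing a "*" wildcard with a response template and substitutes the captured text into the reply. It is a simplified illustration of the general technique, not the actual A.L.I.C.E. engine or the AIML specification, and the patterns and replies are invented examples.

```python
import re

# Minimal AIML-style matcher: each pattern may contain one "*" wildcard,
# and the text captured by the wildcard is substituted into the reply template.
CATEGORIES = [
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("WHAT IS YOUR NAME", "My name is Alice."),   # illustrative canned reply
    ("*", "I'm not sure I understand."),          # catch-all fallback
]

def respond(user_input: str) -> str:
    text = user_input.upper().strip()
    for pattern, template in CATEGORIES:
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*(g.strip().title() for g in match.groups()))
    return ""

print(respond("my name is Ada"))       # -> Nice to meet you, Ada.
print(respond("what is your name"))    # -> My name is Alice.
```
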
The process of building, testing and deploying chatbots can be done on cloud-based chatbot development platforms[51] offered by cloud Platform as a Service (PaaS) providers such as Oracle Cloud Platform, Yekaliva[47][28] and IBM Watson.[52][53][54] These cloud platforms provide natural language processing, artificial intelligence and Mobile Backend as a Service for chatbot development.

According to Richard Wallace, chatbot development has gone through three phases over the past 60 years. In the first, chatbots only simulated human-to-human conversations, using canned responses based on keywords, and had almost no intelligence. The second phase was closely tied to the expansion of the Internet, thanks to which a chatbot could be widely accessed and chat with thousands of users; the first commercial chatbot developers also appeared during this period. The third wave of chatbot development combines advanced technologies such as natural language processing, speech synthesis and real-time video rendering, and comprises chatbots appearing within web pages, instant messaging and virtual worlds.


Smart chatbots rely on artificial intelligence when they communicate with users. Instead of giving pre-prepared answers, the robot responds with suggestions relevant to the topic. In addition, everything the customers say is recorded for later processing. However, the Forrester report "The State of Chatbots" points out that artificial intelligence is not magic and is not yet ready to produce marvelous experiences for users on its own. On the contrary, it requires a great deal of work.
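
As a rough sketch of the two behaviors described above (replying with a topically relevant suggestion rather than a fixed answer, and recording user utterances for later processing), the hypothetical example below scores a message against a few example phrases per intent and logs every utterance. The intents, example phrases and suggestions are invented for illustration and do not come from any particular product.

```python
# Hypothetical sketch: pick the closest-matching intent by word overlap,
# reply with a suggestion for that topic, and log every utterance for later analysis.
# Intents, example phrases and suggestions are invented placeholders.

INTENTS = {
    "order_status": (["where is my order", "track my package"],
                     "I can check that; could you share your order number?"),
    "returns": (["return an item", "refund request"],
                "You can start a return from the Orders page; shall I open it?"),
}

utterance_log: list[str] = []   # in a real system this would go to persistent storage

def overlap(a: str, b: str) -> int:
    """Count the words the two texts have in common."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def suggest(message: str) -> str:
    utterance_log.append(message)   # record what the customer said
    best_intent, best_score = None, 0
    for intent, (examples, _) in INTENTS.items():
        score = max(overlap(message, ex) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_intent is None:
        return "Could you tell me a bit more about what you need?"
    return INTENTS[best_intent][1]

print(suggest("Hi, where is my order right now?"))
print(utterance_log)
```
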
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
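
The following minimal Python sketch illustrates this keyword-trigger technique. Only the MOTHER rule is taken from the example above; the rest of the keyword table and the fallback replies are invented for illustration and are not ELIZA's actual script.

```python
import random

# Toy ELIZA-style responder: scan the input for clue keywords and return
# a pre-programmed reply; fall back to a neutral prompt when nothing matches.
RULES = {
    "MOTHER": ["TELL ME MORE ABOUT YOUR FAMILY."],        # from the example above
    "ALWAYS": ["CAN YOU THINK OF A SPECIFIC EXAMPLE?"],    # invented rule
    "I FEEL": ["WHY DO YOU FEEL THAT WAY?"],               # invented rule
}
DEFAULTS = ["PLEASE GO ON.", "I SEE. TELL ME MORE."]

def reply(user_input: str) -> str:
    text = user_input.upper()
    for keyword, responses in RULES.items():
        if keyword in text:
            return random.choice(responses)
    return random.choice(DEFAULTS)   # no keyword matched: keep the conversation going

print(reply("Well, my mother made me come here."))  # -> TELL ME MORE ABOUT YOUR FAMILY.
```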