Online chatbots save time and effort by automating customer support. Gartner forecasts that by 2020, over 85% of customer interactions will be handled without a human. However, the opportunities provided by chatbot systems go far beyond giving responses to customers’ inquiries. They are also used for other business tasks, such as collecting information about users, helping to organize meetings and reducing overhead costs. It is no wonder that the chatbot market is growing exponentially.
The Evie chatbot has had a huge impact on social media over the last few years. She is probably the most popular artificial personality on YouTube. She has appeared in several videos by PewdiePie, the most subscribed YouTuber in the world. This includes a flirting video with over 12 million views! Evie has been filmed speaking many different languages. She chats with Squeezie in French, El Rubius and El Rincón De Giorgio in Spanish, GermanLetsPlay and ConCrafter in German, NDNG - Enes Batur in Turkish, Stuu Games in Polish and jacksepticeye, ComedyShortsGamer and KSIOlajidebtHD in English. And that is a very small selection. Evie shares her database with Cleverbot, which is an internet star in its own right. Cleverbot conversations have long been shared on Twitter, Facebook, websites, forums and bulletin boards. We are currently working to give Evie some more artificial companions, such as the male avatar Boibot.
Both companies and customers can benefit from internet bots, which allow customers to interact with a company without having to speak to a person. KLM Royal Dutch Airlines, for example, has produced a chatbot that provides customers with boarding passes, check-in reminders, and other information needed for a flight. Customer engagement has grown since such chatbots were developed.
In a particularly alarming example of unexpected consequences, the bots soon began to devise their own language – in a sense. After the bots had been online for a short time, researchers discovered that they had begun to deviate significantly from pre-programmed conversational pathways and were responding to users (and each other) in an increasingly strange way, ultimately creating their own language without any human input.
The term chatbot is closely related to ‘chat bot’ and ‘chatterbot’. ‘Chatterbot’ tends to describe a bot that talks a lot but is not necessarily very intelligent in processing the user's answers. ‘Chat bot’ is used by technical people who consider the word ‘bot’ a general term for robotised actions, and for them a ‘chat bot’ is a special kind of bot. ‘Chatbot’ is the most popular of these three terms and has the broadest meaning.
The most widely used anti-bot technique is the CAPTCHA, a form of Turing test used to distinguish between a human user and a less-sophisticated AI-powered bot by means of graphically-encoded human-readable text. Examples of providers include reCAPTCHA, and commercial companies such as Minteye, Solve Media, and NuCaptcha. CAPTCHAs, however, are not foolproof in preventing bots, as they can often be circumvented by computer character recognition, security holes, and even by outsourcing CAPTCHA solving to cheap laborers.
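The challenge-response bookkeeping behind a CAPTCHA can be sketched as follows. This is a minimal illustration in Python, not any provider's actual implementation: it generates a random challenge string and verifies a response, while a real CAPTCHA service would additionally render the string as a distorted image so that character-recognition software struggles to read it.

```python
import random
import string

def generate_captcha(length=6):
    """Generate a random challenge string.

    A real CAPTCHA would render this text as a distorted image;
    here we only model the challenge/answer bookkeeping.
    """
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def verify_captcha(expected, response):
    """Case-insensitive comparison, as many CAPTCHA services allow."""
    return expected.strip().lower() == response.strip().lower()

challenge = generate_captcha()
print(verify_captcha(challenge, challenge.lower()))  # a correct answer passes
print(verify_captcha(challenge, "wrong!!"))          # an incorrect answer fails
```

The security of the scheme rests entirely on the rendering step, which is why advances in optical character recognition keep eroding it.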
Along with the continued development of our avatars, we are also investigating machine learning and deep learning techniques, and working on the creation of a short term memory for our bots. This will allow humans interacting with our AI to develop genuine human-like relationships with their bot; any personal information that is exchanged will be remembered by the bot and recalled in the correct context at the appropriate time. The bots will get to know their human companion, and utilise this knowledge to form warmer and more personal interactions.
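The idea of remembering personal information and recalling it in context can be sketched with a toy Python class. The fact-extraction pattern and class name below are illustrative assumptions, not Existor's actual design: real systems would use far more sophisticated language understanding.

```python
class MemoryBot:
    """Toy sketch of a bot with a short-term memory: personal facts
    mentioned by the user are stored and recalled in context later."""

    def __init__(self):
        self.memory = {}  # fact key -> value, e.g. "name" -> "alice"

    def listen(self, text):
        # Naive fact extraction: match the pattern "my <key> is <value>"
        words = text.lower().rstrip("?.!").split()
        if len(words) >= 4 and words[0] == "my" and words[2] == "is":
            self.memory[words[1]] = " ".join(words[3:])
            return "Nice, I'll remember that."
        return self.respond(text)

    def respond(self, text):
        # Recall a stored fact when its topic comes up again
        for key, value in self.memory.items():
            if key in text.lower():
                return f"You told me your {key} is {value}."
        return "Tell me more."

bot = MemoryBot()
bot.listen("My name is Alice")
print(bot.listen("Do you know my name?"))  # recalls the stored fact
```

Even this crude version shows the appeal: a fact offered in one exchange resurfaces naturally in a later one, which is what makes the interaction feel warmer and more personal.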
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message will be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during a presidential election. With enough chatbots, it might even be possible to achieve artificial social proof.
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
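ELIZA's keyword-matching scheme can be sketched in a few lines of Python. The rule table below is illustrative (only the MOTHER/family rule comes from the passage above); the real ELIZA also ranked keywords and transformed parts of the input into its reply, which this sketch omits.

```python
import random

# Clue words mapped to pre-programmed responses, ELIZA-style.
RULES = {
    "mother": "TELL ME MORE ABOUT YOUR FAMILY",
    "father": "TELL ME MORE ABOUT YOUR FAMILY",
    "always": "CAN YOU THINK OF A SPECIFIC EXAMPLE",
    "sad": "I AM SORRY TO HEAR YOU ARE SAD",
}

# Fallbacks used when no clue word matches.
DEFAULTS = ["PLEASE GO ON", "TELL ME MORE"]

def eliza_respond(user_input):
    """Scan the input for a clue word and emit its canned response."""
    lowered = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    return random.choice(DEFAULTS)

print(eliza_respond("My mother annoys me"))  # TELL ME MORE ABOUT YOUR FAMILY
```

The sketch makes the "illusion of understanding" concrete: the program never parses the sentence at all, yet a sympathetic reader of its output can easily interpret the canned reply as intelligent.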