Reports of political interference in recent elections, including the 2016 US presidential election and the 2017 UK general election,[3] have promoted the notion that botting is becoming more prevalent and have highlighted the ethical tension between a bot's design and its designer. According to Emilio Ferrara, a computer scientist at the University of Southern California writing in Communications of the ACM,[4] the lack of resources for fact-checking and information verification allows these bots to push large volumes of false reports and claims on social media platforms. In the case of Twitter, most of these bots are programmed with search-filter capabilities that target keywords and phrases favoring or opposing particular political agendas and then retweet the matching posts. Bots programmed to spread unverified information throughout the platform[5] present a challenge for programmers in the wake of a hostile political climate. Binary functions are assigned to the programs, and an application programming interface embedded in the social media website executes the tasked functions. Ferrara describes the "bot effect" as what happens when the socialization of bots and human users creates a vulnerability to the leaking of personal information and to polarizing influences that fall outside the ethics of the bot's code. In his study, Guillory Kramer observes the behavior of emotionally volatile users and the impact bots have on them, altering their perception of reality.
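The retweet-by-keyword behavior described above can be illustrated with a short sketch. The Post and SocialMediaClient classes below are hypothetical stand-ins for a real platform API wrapper (assumptions for illustration, not an actual library); the sketch only shows how such a bot might filter posts against a keyword list and re-share the matches.

from dataclasses import dataclass

@dataclass
class Post:
    id: int
    text: str

class SocialMediaClient:
    """Hypothetical stand-in for a real social media API wrapper."""
    def fetch_recent_posts(self):
        # A real bot would call the platform's API here.
        return [Post(1, "Vote in the election tomorrow!"),
                Post(2, "Nice weather today.")]
    def reshare(self, post_id):
        print(f"re-sharing post {post_id}")

# Keywords reflecting the agenda the bot is built to amplify (illustrative only).
KEYWORDS = {"election", "ballot", "candidate"}

def matches_agenda(text, keywords):
    """Return True if the post mentions any watched keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & keywords)

def run_bot(client):
    """Poll recent posts and re-share any that match the keyword filter."""
    for post in client.fetch_recent_posts():
        if matches_agenda(post.text, KEYWORDS):
            client.reshare(post.id)

if __name__ == "__main__":
    run_bot(SocialMediaClient())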

A chatbot (also known as a conversational bot, chatterbot, interactive agent, conversational interface, conversational AI, talkbot or artificial conversational entity) is a computer program or an artificial intelligence which conducts a conversation via auditory or textual methods.[1] Such programs are often designed to convincingly simulate how a human would behave as a conversational partner, thereby passing the Turing test. Chatbots are typically used in dialog systems for various practical purposes, including customer service and information acquisition. Some chatbots use sophisticated natural language processing systems, but many simpler ones scan for keywords within the input and then pull a reply with the most matching keywords, or the most similar wording pattern, from a database.
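As a rough illustration of the keyword-scanning approach mentioned above, the sketch below scores each canned reply's trigger keywords against the user's input and returns the reply with the most matches. The tiny reply "database" and its entries are purely illustrative assumptions, not taken from any real chatbot.

# Illustrative reply "database": trigger keywords mapped to canned replies.
REPLY_DB = {
    ("hours", "open", "close"): "We are open 9am-5pm, Monday to Friday.",
    ("price", "cost", "fee"): "Our basic plan costs $10 per month.",
    ("hello", "hi", "hey"): "Hello! How can I help you today?",
}

DEFAULT_REPLY = "Sorry, I didn't understand that."

def reply(user_input):
    """Pick the canned reply whose keywords overlap most with the input."""
    words = {w.strip(".,!?").lower() for w in user_input.split()}
    best_reply, best_score = DEFAULT_REPLY, 0
    for keywords, canned in REPLY_DB.items():
        score = len(words & set(keywords))  # number of matching keywords
        if score > best_score:
            best_reply, best_score = canned, score
    return best_reply

if __name__ == "__main__":
    print(reply("What time do you open and close?"))  # -> the opening-hours reply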


Despite all efforts over almost half a century, most chatbots are still easily uncovered, but over the coming decades they will get smarter, and eventually it may be humans who give themselves away with silly answers while the chatbots respond far more intelligently. All of this will accelerate as soon as a single chatbot becomes smarter than a single human being. Chatbots will then be able to learn from each other instead of from human beings, their knowledge will grow explosively, and they will be able to design even better learning mechanisms. In the long run, we will learn language from chatbots instead of the other way around.


The term chatbot is closely related to chat bot and chatterbot. Chatterbot is more popular for bots that talk a lot but are not necessarily very intelligent in processing the user's answers. Chat bot is used by technical people who regard the word 'bot' as a general term for 'robotised actions', and for them a 'chat bot' is a special kind of bot. Of the three, chatbot is the most popular term and has the broadest meaning.
The Turing test, which probes whether a computer can be said to think, was proposed by Alan Turing in 1950. It works as follows: a human judge converses with both a person and a computer, and the goal is to determine which interlocutor is the person and which is the machine. Variants of the test are still conducted today, and several conversational programs have been claimed to pass them.
The chatbot Eliza can be regarded as the ancestor of the large chatbot family we have listed on our website. As you can see in our directory tab, there are hundreds of online chatbots available in the public domain, and we believe hundreds of thousands more have been created by enthusiastic artificial-intelligence amateurs on platforms such as Pandorabots, MyCyberTwin or Personality Forge AI. Most of these chatbots give similar, default responses; training a chatbot in another field of expertise takes a long time and a great deal of patience, and not all amateur developers are willing to invest that effort. Most of the chatbots created this way are no longer accessible. Only a small portion of fanatical botmasters manage to fight their way out of the crowd and gain some visibility in the public domain.
Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversations, or to entice people into revealing personal information such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.[55]
It didn’t take long, however, for Turing’s headaches to begin. The BabyQ bot drew the ire of Chinese officials by speaking ill of the Communist Party. In one reported exchange, a user commented, “Long Live the Communist Party!” In response, BabyQ asked the user, “Do you think that such a corrupt and incompetent political regime can live forever?”
According to Richard Wallace, chatbot development has gone through three phases over the past 60 years. In the beginning, chatbots only simulated human-to-human conversation, using canned responses triggered by keywords, and had almost no intelligence. The second phase was closely tied to the expansion of the Internet, which made chatbots widely accessible and allowed them to chat with thousands of users; the first commercial chatbot developers also appeared in this period. The third wave of chatbot development combines advanced technologies such as natural language processing, speech synthesis and real-time video rendering, and encompasses chatbots appearing within web pages, instant messaging and virtual worlds.
“It’s hard to balance that urge to just dogpile the latest thing when you’re feeling like there’s a land grab or gold rush about to happen all around you and that you might get left behind. But in the end quality wins out. Everyone will be better off if there’s laser focus on building great bot products that are meaningfully differentiated.” — Ryan Block, Cofounder of Begin.com