However, as irresistible as this story was to news outlets, Facebook’s engineers didn’t pull the plug on the experiment out of fear the bots were somehow secretly colluding to usurp their meatbag overlords and usher in a new age of machine dominance. They ended the experiment because, once the bots had deviated far enough from acceptable English, the conversational data gleaned from the test was of limited value.
One malicious use of bots is the coordination and operation of automated attacks on networked computers, such as a denial-of-service attack launched by a botnet. Internet bots can also be used to commit click fraud and, more recently, have appeared in massively multiplayer online role-playing games as computer game bots. A spambot is an internet bot that attempts to post large amounts of content on the Internet, usually adding advertising links. More than 94.2% of websites have experienced a bot attack.[2]

Some bots communicate with users of Internet-based services via instant messaging (IM), Internet Relay Chat (IRC), or another web interface, such as Facebook bots and Twitterbots. These chatterbots let people ask questions in plain English and then formulate a proper response. They can often handle many tasks, such as reporting the weather, looking up zip-code information, fetching sports scores, and converting currency or other units. Others are used for entertainment, such as SmarterChild on AOL Instant Messenger and MSN Messenger.
What began as a televised ad campaign eventually became a fully interactive chatbot developed for PG Tips’ parent company, Unilever (which also happens to own an alarming number of the most commonly known household brands) by London-based agency Ubisend, which specializes in developing bespoke chatbot applications for brands. The aim of the bot was to not only raise brand awareness for PG Tips tea, but also to raise funds for Red Nose Day through the 1 Million Laughs campaign.

Sometimes it is hard to tell whether the conversational partner on the other end is a real person or a chatbot, and it is only getting harder as technology progresses. A well-known way to measure a chatbot's intelligence in a more or less objective manner is the so-called Turing Test, which determines how well a chatbot can pass for a real person by giving responses indistinguishable from a human's.
In a particularly alarming example of unexpected consequences, the bots soon began to devise their own language – in a sense. After being online for a short time, researchers discovered that their bots had begun to deviate significantly from pre-programmed conversational pathways and were responding to users (and each other) in an increasingly strange way, ultimately creating their own language without any human input.

ALICE – which stands for Artificial Linguistic Internet Computer Entity, an acronym that could have been lifted straight out of an episode of The X-Files – was developed and launched by its creator, Dr. Richard Wallace, way back in the dark days of the early Internet in 1995. (The website’s aesthetic remains virtually unchanged since that time, a powerful reminder of how far web design has come.)
The Evie chatbot has had a huge impact on social media over the last few years. She is probably the most popular artificial personality on YouTube. She has appeared in several videos by PewdiePie, the most subscribed YouTuber in the world. This includes a flirting video with over 12 million views! Evie has been filmed speaking many different languages. She chats with Squeezie in French, El Rubius and El Rincón De Giorgio in Spanish, GermanLetsPlay and ConCrafter in German, NDNG - Enes Batur in Turkish, Stuu Games in Polish and jacksepticeye, ComedyShortsGamer and KSIOlajidebtHD in English. And that is a very small selection. Evie shares her database with Cleverbot, which is an internet star in its own right. Cleverbot conversations have long been shared on Twitter, Facebook, websites, forums and bulletin boards. We are currently working to give Evie some more artificial companions, such as the male avatar Boibot.
Efforts by servers hosting websites to counteract bots vary. Servers may choose to outline rules for the behaviour of internet bots by implementing a robots.txt file: a plain-text file stating the rules governing a bot's behaviour on that server. Any bot that does not follow these rules when interacting with (or 'spidering') any server should, in theory, be denied access to, or removed from, the affected website. If the only rule implementation is a posted text file with no associated enforcement software, then adhering to those rules is entirely voluntary – in reality there is no way to enforce them, or even to ensure that a bot's creator or operator acknowledges, or even reads, the robots.txt file's contents. Some bots are "good" – e.g. search engine spiders – while others can be used to launch malicious attacks, most notably in political campaigns.[2]
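To make the voluntary nature of robots.txt concrete, here is a minimal sketch of how a well-behaved crawler might consult those rules using Python's standard-library `urllib.robotparser`. The rules and bot names below are invented for illustration; nothing in the protocol stops a bot from skipping this check entirely.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for an example server: one group that
# bans "BadBot" outright, and a default group keeping everyone out of /private/.
rules = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A polite crawler calls can_fetch() before every request; compliance is
# entirely up to the bot's author.
print(parser.can_fetch("GoodBot", "https://example.com/index.html"))  # True
print(parser.can_fetch("GoodBot", "https://example.com/private/a"))   # False
print(parser.can_fetch("BadBot", "https://example.com/index.html"))   # False
```

Note that `can_fetch` only reports what the file permits; enforcement, as the paragraph above explains, would require server-side measures.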
Social networking bots are sets of algorithms that take on repetitive sets of instructions in order to establish a service or connection among social networking users. Designs of networking bots range from chat bots, algorithms designed to converse with a human user, to social bots, algorithms designed to mimic human behaviors and converse with behavioral patterns similar to those of a human user. The history of social botting can be traced back to Alan Turing in the 1950s and his vision of designing sets of instructional code that could pass the Turing test. ELIZA, a natural language processing program created by Joseph Weizenbaum between 1964 and 1966, is an early example of artificial intelligence algorithms; it inspired programmers to design task-oriented programs that match behavior patterns to their sets of instructions. As a result, natural language processing has become an influential factor in the development of artificial intelligence and social bots, as technological advances are made alongside the mass spreading of information and opinion on social media websites.
The “web-based” solution, which runs on a remote server, can generally be reached by the public through a web page. It consists of a web page with a chatbot embedded in it, where a text form is the sole interface between the user (you) and the chatbot. Any “upgrades” or improvements to the interface are solely the option and responsibility of the botmaster.
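The shape of such a web-based bot can be sketched with nothing but Python's standard library: one page, one text form, and a reply function behind it. Everything here – the `reply` logic, the `ChatHandler` class, the port – is an invented illustration of the architecture, not any particular vendor's implementation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

def reply(message: str) -> str:
    """Trivial stand-in for the bot's response logic."""
    if "weather" in message.lower():
        return "I can't see outside, but it's always sunny in here."
    return f"You said: {message}"

# The text form is the sole interface between user and bot.
PAGE = """<html><body>
<form method="post"><input name="msg"><input type="submit" value="Send"></form>
<p>{answer}</p>
</body></html>"""

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._send(PAGE.format(answer=""))

    def do_POST(self):
        # Read the submitted form and pass the message to the bot.
        length = int(self.headers.get("Content-Length", 0))
        form = parse_qs(self.rfile.read(length).decode())
        msg = form.get("msg", [""])[0]
        self._send(PAGE.format(answer=reply(msg)))

    def _send(self, body: str):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

# To serve the page locally, uncomment:
# HTTPServer(("localhost", 8000), ChatHandler).serve_forever()
```

Because the bot lives entirely on the server, swapping in a smarter `reply` function (or restyling the page) is, as noted above, purely the botmaster's prerogative; visitors only ever see the form.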

These are just basic versions of intelligent chatbots. There are many more intelligent chatbots out there that take a far smarter approach to responding to queries. Since building an intelligent chatbot is not a huge task, most of us can manage it with only basic technical knowledge. Many of these bots will be extremely helpful in the service industry and will help provide a better customer experience.

ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
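The keyword-and-canned-response technique described above is simple enough to show in a few lines. This toy rule table is invented for the sketch (only the 'MOTHER' rule echoes the example in the text); it is not Weizenbaum's actual script, which also performed pronoun transformations on the input.

```python
# Clue words paired with pre-prepared responses, scanned in order.
RULES = [
    ("mother", "TELL ME MORE ABOUT YOUR FAMILY"),
    ("dream", "WHAT DOES THAT DREAM SUGGEST TO YOU"),
    ("always", "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
]

def eliza_reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    # No clue word matched: fall back to a content-free prompt, which
    # keeps up the illusion of a listening conversational partner.
    return "PLEASE GO ON"

print(eliza_reply("My mother is upset with me"))  # TELL ME MORE ABOUT YOUR FAMILY
print(eliza_reply("Nice weather today"))          # PLEASE GO ON
```

The processing really is "merely superficial" – a substring scan and a lookup – yet, as the paragraph above notes, cooperative human judges readily read intelligence into the output.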