The word bot, in the Internet sense, is a shortened form of robot and dates from the twentieth century. The modern use of the word bot has curious affinities with earlier uses, e.g. “parasitical worm or maggot” (1520s), of unknown origin; and Australian-New Zealand slang for a “worthless, troublesome person” (World War I era). The method of minting new slang by clipping the heads off respectable words does not seem to be old or widespread in English. Examples such as za from pizza, zels from pretzels and rents from parents are American English student or teen slang and seem to date back no further than the late 1960s.[4]
Chatbots talk in almost every major language. Their language skills (Natural Language Processing, NLP) vary from extremely poor to remarkably clever: intelligent, helpful and funny. The same goes for their graphic design: some feel like cartoonish characters drawn by a child, while others are photo-realistic 3D animated characters that are hard to distinguish from humans. And they are all referred to as ‘chatbots’. If you have a look at our chatbot gallery, you will immediately notice the difference.
The word robot is derived from the Czech noun robota meaning “labor”, and was coined by the cubist painter and writer Josef Capek, older brother of novelist and playwright Karel Capek. The word robot first appeared in 1920 in Karel Capek’s play “RUR” (“Rossum’s Universal Robots”), and the play popularized the word invented by the playwright’s brother.[2]
Tay, an AI chatbot that learns from previous interactions, caused major controversy after it was targeted by internet trolls on Twitter. The bot was exploited and, within 16 hours, began to send extremely offensive tweets to users. This suggests that although the bot learnt effectively from experience, adequate protection was not put in place to prevent misuse.[56]
This chatbot aims to make medical diagnoses faster, easier, and more transparent for both patients and physicians – think of it like an intelligent version of WebMD that you can talk to. MedWhat is powered by a sophisticated machine learning system that offers increasingly accurate responses to user questions based on behaviors that it “learns” by interacting with human beings.
Smart chatbots rely on artificial intelligence when they communicate with users. Instead of pre-prepared answers, the robot responds with adequate suggestions on the topic. In addition, all the words said by the customers are recorded for later processing. However, the Forrester report “The State of Chatbots” points out that artificial intelligence is not magic and is not yet ready to produce marvelous experiences for users on its own. On the contrary, it requires a huge amount of work:

Despite all efforts over almost half a century, most chatbots are still easily unmasked, but over the next decades they will definitely get smarter, and eventually we will recognize human beings by the silly answers they give, as opposed to the much smarter chatbots. All of this will really start accelerating as soon as a single chatbot is smarter than a single human being. Chatbots will then be able to learn from each other instead of from human beings; their knowledge will explode, and they will be able to design even better learning mechanisms. In the long run, we will learn language from chatbots instead of the other way around.
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
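To make this concrete, here is a minimal Python sketch of the ELIZA-style technique. The clue words and canned responses are invented for illustration; this is not Weizenbaum's original script, just the general keyword-spotting idea.

```python
import re

# ELIZA-style keyword spotting: scan the input for clue words and return a
# pre-programmed response that keeps the conversation moving. The rules and
# wording below are invented for this illustration.
RULES = [
    (re.compile(r"\bmother\b", re.IGNORECASE), "TELL ME MORE ABOUT YOUR FAMILY"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "HOW LONG HAVE YOU BEEN {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "IS THAT THE REAL REASON?"),
]
DEFAULT = "PLEASE GO ON"

def eliza_reply(text: str) -> str:
    """Return the first matching canned response, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo back whatever the user said after the clue phrase, if any.
            return template.format(*match.groups()) if match.groups() else template
    return DEFAULT

print(eliza_reply("Well, my mother always said..."))  # TELL ME MORE ABOUT YOUR FAMILY
print(eliza_reply("I am tired of arguing"))           # HOW LONG HAVE YOU BEEN tired of arguing?
```

Nothing here models meaning; the apparent attentiveness comes entirely from the user reading intent into the echoed fragments, which is exactly the illusion the paragraph above describes.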
Efforts by servers hosting websites to counteract bots vary. Servers may choose to outline rules on the behaviour of internet bots by implementing a robots.txt file: this file is simply text stating the rules governing a bot's behaviour on that server. Any bot that does not follow these rules when interacting with (or 'spidering') any server should, in theory, be denied access to, or removed from, the affected website. If the only measure a server implements is a posted text file with no associated program/software/app, then adhering to those rules is entirely voluntary – in reality there is no way to enforce those rules, or even to ensure that a bot's creator or implementer acknowledges, or even reads, the robots.txt file contents. Some bots are "good" – e.g. search engine spiders – while others can be used to launch malicious attacks, most notably in political campaigns.[2]
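For a bot author who wants to play by those rules, Python's standard library already includes a robots.txt parser. The sketch below shows the voluntary check a well-behaved crawler performs before fetching a page; the site URL and user-agent name are hypothetical placeholders.

```python
from urllib import robotparser

# A well-behaved bot checks robots.txt before crawling; nothing technically
# forces it to. The site and user-agent here are placeholders for the example.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # download and parse the site's rules

page = "https://example.com/private/report.html"
if rp.can_fetch("ExampleSpider", page):
    print("robots.txt allows crawling", page)
else:
    print("robots.txt disallows", page, "- a good bot moves on")
```

A malicious bot simply skips this check, which is why the paragraph above stresses that compliance is entirely voluntary.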
These days, checking the headlines over morning coffee is as much about figuring out if we should be hunkering down in the basement preparing for imminent nuclear annihilation as it is about keeping up with the day’s news. Unfortunately, even the most diligent newshounds may find it difficult to distinguish the signal from the noise, which is why NBC launched its NBC Politics Bot on Facebook Messenger shortly before the U.S. presidential election in 2016.
Online chatbots save time and effort by automating customer support. Gartner forecasts that by 2020, over 85% of customer interactions will be handled without a human. However, the opportunities provided by chatbot systems go far beyond giving responses to customers’ inquiries. They are also used for other business tasks, like collecting information about users, helping to organize meetings and reducing overhead costs. It is no wonder that the size of the chatbot market is growing exponentially.
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during a presidential election. With enough chatbots, it might even be possible to achieve artificial social proof.[58][59]

Chatbots could be used as weapons on social networks such as Twitter or Facebook. An entity or individuals could create countless chatbots to harass people. They could even try to track how successful their harassment is by using machine-learning-based methods to sharpen their strategies and counteract harassment detection tools.
Pop-culture references to Skynet and a forthcoming “war against the machines” are perhaps a little too common in articles about AI (including this one and Larry’s post about Google’s RankBrain tech), but they do raise somewhat uncomfortable questions about the unexpected side of developing increasingly sophisticated AI constructs – including seemingly harmless chatbots.
The “stand-alone” application, where the chatbot runs on a single computer, mostly integrates some sort of system interface, allowing your chatbot to control certain aspects and functions of your computer, such as playing media files or retrieving documents. It usually also has a graphical component built in, in the form of an avatar (often female) that enhances interaction, thus improving the user’s experience.
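As a rough illustration of that pattern, the Python sketch below maps two chat commands to actions on the local machine. The directories, file names and the xdg-open launcher are assumptions made for this example, not part of any particular product.

```python
import shutil
import subprocess

# A sketch of the "stand-alone" pattern: chat commands are mapped to actions
# on the local machine. Paths and the launcher are assumed for illustration.
MEDIA_DIR = "/home/user/music"
DOCS_DIR = "/home/user/documents"

def open_locally(path: str) -> None:
    """Hand a file to the desktop's default opener, if one is available."""
    launcher = shutil.which("xdg-open")  # common Linux desktop convention
    if launcher:
        subprocess.Popen([launcher, path])

def handle(command: str) -> str:
    command = command.lower().strip()
    if command.startswith("play "):
        track = command[len("play "):]
        open_locally(f"{MEDIA_DIR}/{track}.mp3")
        return f"Playing {track}."
    if command.startswith("open "):
        name = command[len("open "):]
        open_locally(f"{DOCS_DIR}/{name}.pdf")
        return f"Opening {name}."
    return "I can only play media files or open documents in this demo."

print(handle("play smooth jazz"))
```

A real stand-alone assistant would wrap this kind of command dispatch in the avatar front end described above; the principle of routing recognized phrases to local system calls stays the same.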
The idea was to permit Tay to “learn” about the nuances of human conversation by monitoring and interacting with real people online. Unfortunately, it didn’t take long for Tay to figure out that Twitter is a towering garbage-fire of awfulness, which resulted in the Twitter bot claiming that “Hitler did nothing wrong,” using a wide range of colorful expletives, and encouraging casual drug use. While some of Tay’s tweets were “original,” in that Tay composed them itself, many were actually the result of the bot’s “repeat back to me” function, meaning users could literally make the poor bot say whatever disgusting remarks they wanted. 
1. Define the goals. What should your chatbot do? Clearly indicate the list of functions your chatbot needs to perform.
2. Choose a channel to interact with your customers. Be where your clients prefer to communicate: your website, mobile app, Facebook Messenger, WhatsApp or another messaging platform.
3. Choose the way of creation. There are two: using ready-made chatbot software or building a custom bot from scratch.
4. Create, customize and launch. Describe the algorithm of its actions, develop a database of answers and test the work of the chatbot (see the sketch below). Double-check everything before showing your creation to potential customers.
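For step 4, a “database of answers” can start out very small. The sketch below, with questions and answers invented purely for illustration, pairs such a database with the kind of quick checks you would run before launch.

```python
# A tiny "database of answers" plus a quick pre-launch test pass.
# The topics and replies are invented for this illustration.
ANSWERS = {
    "opening hours": "We are open from 9:00 to 18:00, Monday to Friday.",
    "pricing": "Our basic plan starts at $10 per month.",
    "contact": "You can reach us at support@example.com.",
}
FALLBACK = "I'm not sure about that yet. A human colleague will follow up."

def reply(message: str) -> str:
    """Look for a known topic in the customer's message."""
    text = message.lower()
    for topic, answer in ANSWERS.items():
        if topic in text:
            return answer
    return FALLBACK

# "Test the work of the chatbot" before showing it to customers.
assert reply("What are your opening hours?").startswith("We are open")
assert reply("Tell me a joke") == FALLBACK
print("All checks passed; ready to launch.")
```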
One pertinent field of AI research is natural language processing. Usually, weak AI fields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses a markup language called AIML, which is specific to its function as a conversational agent and has since been adopted by various other developers of so-called Alicebots. Nevertheless, A.L.I.C.E. is still based purely on pattern-matching techniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would require sapience and logical reasoning abilities.
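The gap between pattern matching and reasoning is easy to see in a toy version of an AIML-style category. The Python below is a simplified imitation, not the real AIML interpreter: a wildcard pattern maps straight to a response template, and nothing the bot “says” involves understanding what it captured.

```python
import re

# A toy imitation of an AIML category: a pattern with a * wildcard maps
# directly to a response template. No reasoning is applied to the capture.
CATEGORIES = [
    (r"MY NAME IS (.+)", "Nice to meet you, {0}."),
    (r"I LIKE (.+)",     "What do you like about {0}?"),
]

def respond(user_input: str) -> str:
    normalized = user_input.upper().rstrip(".!?")  # crude input normalization
    for pattern, template in CATEGORIES:
        match = re.fullmatch(pattern, normalized)
        if match:
            return template.format(match.group(1))
    return "I have no pattern for that."

print(respond("My name is Watson"))  # Nice to meet you, WATSON.
print(respond("I like my name"))     # What do you like about MY NAME?
```

The second reply shows the limitation the paragraph describes: the wildcard is echoed back regardless of whether the result makes sense, which is pattern matching, not reasoning.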