ALICE – which stands for Artificial Linguistic Internet Computer Entity, an acronym that could have been lifted straight out of an episode of The X-Files – was developed and launched by Dr. Richard Wallace way back in the dark days of the early Internet in 1995. (As you can see in the image above, the website’s aesthetic remains virtually unchanged since that time, a powerful reminder of how far web design has come.)
A chatbot (also known as a smartbot, conversational bot, chatterbot, interactive agent, conversational interface, Conversational AI, talkbot or artificial conversational entity) is a computer program or an artificial intelligence which conducts a conversation via auditory or textual methods.[1] Such programs are often designed to convincingly simulate how a human would behave as a conversational partner, thereby passing the Turing test. Chatbots are typically used in dialog systems for various practical purposes including customer service or information acquisition. Some chatbots use sophisticated natural language processing systems, but many simpler ones scan for keywords within the input and then pull the reply with the most matching keywords, or the most similar wording pattern, from a database.
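To make that keyword-scanning approach concrete, here is a minimal sketch of a reply-selection routine; the tiny reply "database" and its keywords are invented for illustration, and a real bot would use a much larger store and better text normalization.

```python
import re

# Minimal sketch of keyword-based reply selection, as described above.
# The reply "database" and its keyword sets are invented for illustration.
REPLY_DATABASE = [
    ({"hours", "open", "closed"}, "We are open from 9am to 5pm, Monday to Friday."),
    ({"price", "cost", "plan"}, "Our basic plan starts at $10 per month."),
    ({"hello", "hi", "hey"}, "Hello! How can I help you today?"),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def choose_reply(user_input):
    """Return the canned reply whose keyword set overlaps the input the most."""
    words = set(re.findall(r"[a-z]+", user_input.lower()))
    best_score, best_reply = 0, FALLBACK
    for keywords, reply in REPLY_DATABASE:
        score = len(keywords & words)
        if score > best_score:
            best_score, best_reply = score, reply
    return best_reply

print(choose_reply("What time do you open?"))  # -> opening-hours reply
print(choose_reply("Tell me a joke"))          # -> fallback reply
```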
Love them or hate them, chatbots are here to stay. Chatbots have become extraordinarily popular in recent years largely due to dramatic advancements in machine learning and other underlying technologies such as natural language processing. Today’s chatbots are smarter, more responsive, and more useful – and we’re likely to see even more of them in the coming years.
The process of building, testing and deploying chatbots can be done on cloud-based chatbot development platforms[51] offered by cloud Platform as a Service (PaaS) providers such as Oracle Cloud Platform, Yekaliva[47][28] and IBM Watson.[52][53][54] These cloud platforms provide Natural Language Processing, Artificial Intelligence and Mobile Backend as a Service for chatbot development.
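In practice, a bot deployed on such a platform is usually reached over an HTTP endpoint that exchanges JSON messages. The sketch below is a hypothetical illustration of that request/response shape; the URL, header and payload fields are assumptions rather than the API of any particular provider, each of which ships its own SDK and message schema.

```python
import requests

# Hypothetical endpoint and credentials for a cloud-hosted chatbot.
# Real platforms (IBM Watson Assistant, Oracle, Yekaliva, etc.) define
# their own URLs and message schemas; this only shows the general shape.
BOT_URL = "https://chatbot-platform.example.com/v1/bots/demo-bot/message"
API_KEY = "your-api-key-here"

def send_message(text, session_id="demo-session"):
    """Send one user utterance to the hosted bot and return its reply text."""
    response = requests.post(
        BOT_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"session_id": session_id, "input": {"text": text}},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("output", {}).get("text", "")

if __name__ == "__main__":
    # Requires a real endpoint and key to run end to end.
    print(send_message("What are your opening hours?"))
```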

Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
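As a rough sketch of that help-desk case, identifying the area of help a user requires can be as simple as matching the input against a few keyword sets and falling back to the more formal menu when nothing matches. The topic names and keywords below are illustrative assumptions.

```python
import re

# Sketch of routing a free-text help request to a predefined help area.
# Topics and keyword sets are invented for illustration.
HELP_TOPICS = {
    "billing": {"invoice", "payment", "refund", "charge"},
    "account": {"password", "login", "username", "email"},
    "shipping": {"delivery", "tracking", "courier", "parcel"},
}

def identify_help_area(user_input):
    """Return the best-matching help topic, or None to fall back to a menu."""
    words = set(re.findall(r"[a-z]+", user_input.lower()))
    best_topic = max(HELP_TOPICS, key=lambda topic: len(HELP_TOPICS[topic] & words))
    return best_topic if HELP_TOPICS[best_topic] & words else None

print(identify_help_area("I forgot my password"))      # -> "account"
print(identify_help_area("something else entirely"))   # -> None (show the menu)
```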

Sometimes it is hard to tell whether the conversational partner on the other end is a real person or a chatbot, and it is only getting harder as the technology progresses. A well-known way to measure a chatbot's intelligence in a more or less objective manner is the so-called Turing test, which gauges how well the chatbot can pass for a real person by giving responses indistinguishable from a human's.
Pop-culture references to Skynet and a forthcoming “war against the machines” are perhaps a little too common in articles about AI (including this one and Larry’s post about Google’s RankBrain tech), but they do raise somewhat uncomfortable questions about the unexpected side effects of developing increasingly sophisticated AI constructs – including seemingly harmless chatbots.
Insomno Bot is for night owls. As the name suggests, it is for all the people out there who have trouble sleeping. This bot talks to you when you have no one around and gives you amusing replies so that you won’t get bored. It won’t help you count stars when you can’t sleep or suggest what to read, but it will chat with you about pretty much anything.
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
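By way of a rough sketch (these rules paraphrase the behaviour described above and are not Weizenbaum's original DOCTOR script), an ELIZA-style rule pairs a pattern that recognizes a clue word with a canned response template, sometimes reusing a fragment of the user's own input:

```python
import re

# Simplified ELIZA-style rules: (pattern, response template) pairs.
# These are illustrative paraphrases of the kind of rules described above.
RULES = [
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]

DEFAULT = "Please go on."

def respond(user_input):
    """Return the response of the first rule whose pattern matches the input."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("My mother is always criticizing me"))  # Tell me more about your family.
print(respond("I am tired of arguing"))               # How long have you been tired of arguing?
```

Firing the first rule that matches, rather than analysing the sentence, is precisely the superficial processing that nonetheless creates the illusion of understanding.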