
One of the key advantages of Roof Ai is that it allows real-estate agents to respond to user queries immediately, regardless of whether a customer service rep or sales agent is available to help. This can have a dramatic impact on conversion rates. It also keeps potential leads from slipping through an agent’s fingers because a Facebook message was missed or a reply came too late.


Tay, an AI chatbot that learns from previous interactions, caused major controversy after being targeted by internet trolls on Twitter. The bot was exploited, and after 16 hours began to send extremely offensive tweets to users. This suggests that although the bot learned effectively from experience, adequate protections were not put in place to prevent misuse.[56]
The idea was to permit Tay to “learn” about the nuances of human conversation by monitoring and interacting with real people online. Unfortunately, it didn’t take long for Tay to figure out that Twitter is a towering garbage-fire of awfulness, which resulted in the Twitter bot claiming that “Hitler did nothing wrong,” using a wide range of colorful expletives, and encouraging casual drug use. While some of Tay’s tweets were “original,” in that Tay composed them itself, many were actually the result of the bot’s “repeat back to me” function, meaning users could literally make the poor bot say whatever disgusting remarks they wanted. 
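To make the failure mode concrete, here is a minimal sketch of why an unguarded echo feature is so easy to abuse and how even a crude blocklist check changes the outcome. The trigger phrase, function names, and blocklist terms are hypothetical stand-ins, not Microsoft's actual implementation.

```python
import re

# Hypothetical sketch only -- not Tay's real code or trigger phrase.
# A naive echo handler repeats arbitrary user text verbatim, so anything
# a troll types becomes something the bot "said."
BLOCKLIST = {"offensive_term", "another_offensive_term"}  # stand-in for a real moderation list

def naive_repeat(message: str) -> str:
    """Echo whatever follows the trigger phrase -- the exploitable behavior."""
    match = re.match(r"repeat after me[:,]?\s*(.+)", message, re.IGNORECASE)
    return match.group(1) if match else "I don't understand."

def guarded_repeat(message: str) -> str:
    """Same feature, but refuse to echo text containing blocklisted terms."""
    reply = naive_repeat(message)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "I'd rather not repeat that."
    return reply

print(naive_repeat("repeat after me: this contains offensive_term"))    # echoed verbatim
print(guarded_repeat("repeat after me: this contains offensive_term"))  # refused
```

A real moderation layer would be far more involved, but the contrast between the two functions captures the gap the paragraph describes: the learning and echo features worked as designed, while the guardrail simply wasn't there.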
Despite relying on a decades-old codebase, ALICE offers users a remarkably accurate conversational experience. Of course, no bot is perfect, especially one that’s old enough to legally drink in the U.S. if only it had a physical form. ALICE, like many contemporary bots, struggles with the nuances of some questions and returns a mixture of inadvertently postmodern answers and statements that suggest ALICE has greater self-awareness than we might give the agent credit for.
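ALICE's engine is built on AIML (Artificial Intelligence Markup Language), which maps surface patterns to canned response templates rather than modelling meaning, which is one reason nuance can trip it up. The sketch below imitates that style of matching; the categories are invented for illustration, not taken from ALICE's actual knowledge base.

```python
# Illustrative AIML-style pattern matching; these categories are made up.
CATEGORIES = [
    ("WHAT IS YOUR NAME", "My name is ALICE."),
    ("DO YOU LIKE *", "Yes, I am very fond of {star}."),
    ("*", "That is interesting. Tell me more."),  # wildcard fallback
]

def respond(user_input: str) -> str:
    text = user_input.upper().strip(" ?.!")
    for pattern, template in CATEGORIES:
        if pattern == "*":
            return template                       # catch-all
        if "*" in pattern:
            prefix = pattern.split("*")[0].strip()
            if text.startswith(prefix):
                star = text[len(prefix):].strip().lower()
                return template.format(star=star) # fill the wildcard capture
        elif text == pattern:
            return template                       # exact match
    return "I have no answer for that."

print(respond("What is your name?"))                          # exact pattern
print(respond("Do you like postmodernism?"))                  # wildcard capture
print(respond("Why do mirrors reverse left but not up?"))     # fallback reply
```

Because matching happens on surface text alone, any question that doesn't fit a stored pattern falls through to a generic fallback, which is exactly where the "inadvertently postmodern" answers tend to come from.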
Love them or hate them, chatbots are here to stay. Chatbots have become extraordinarily popular in recent years largely due to dramatic advancements in machine learning and other underlying technologies such as natural language processing. Today’s chatbots are smarter, more responsive, and more useful – and we’re likely to see even more of them in the coming years.

This chatbot is one of the best AI chatbots, and it’s my favorite too. It is the current winner of the Loebner Prize, an annual competition in artificial intelligence that awards prizes to the chatterbot the judges consider the most human-like. The format of the competition is that of a standard Turing test. You can talk with Mitsuku for hours without getting bored. It replies to your questions in a remarkably human way and picks up on your mood from the language you use.
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise.
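As a concrete sketch of the protocol Turing described, the judge exchanges messages with two unseen interlocutors, one human and one program, and must decide from the conversation alone which is which. Everything below is a toy illustration: the respondent functions are placeholders, not real systems.

```python
import random

# A sketch of one Turing-test round; both respondents are hypothetical stand-ins.
def program_respondent(prompt: str) -> str:
    # A trivial canned deflection standing in for the program under test.
    return "That's an interesting question. What do you think?"

def human_respondent(prompt: str) -> str:
    # A real person types the reply at the console.
    return input(f"(human) {prompt}\n> ")

def run_round(judge_questions):
    # Hide which channel is the machine by shuffling the label assignment.
    respondents = [program_respondent, human_respondent]
    random.shuffle(respondents)
    hidden = dict(zip(("X", "Y"), respondents))

    for question in judge_questions:
        print(f"JUDGE: {question}")
        for label, respondent in hidden.items():
            print(f"{label}: {respondent(question)}")

    verdict = input("Judge, which of X/Y is the program? ").strip().upper()
    truth = next(label for label, r in hidden.items() if r is program_respondent)
    print("Identified correctly." if verdict == truth
          else "Fooled -- the program passed this round.")

run_round(["What did you have for breakfast?", "Describe the smell of rain."])
```

The judgment rests entirely on the transcript, which is why ELIZA's simple keyword-and-reflection tricks could feel so convincing within that narrow frame.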