Reports of political interference in recent elections, including the 2016 US presidential election and the 2017 UK general election,[3] have raised awareness of how prevalent botting has become and of the ethical questions it poses for both a bot's design and its designer. According to Emilio Ferrara, a computer scientist at the University of Southern California writing in Communications of the ACM,[4] the lack of resources for fact-checking and information verification allows these bots to spread large volumes of false reports and claims across social media platforms. On Twitter, most of these bots are programmed with search-filter capabilities that target keywords and phrases favoring or opposing particular political agendas, and then retweet the matching posts. Because such bots are programmed to spread unverified information throughout a platform,[5] they pose a challenge for programmers in a hostile political climate. The bots' functions are implemented as programs that carry out their tasks through an application programming interface (API) exposed by the social media website. Ferrara also describes the "Bot Effect": when bots and human users socialize, the interaction creates a vulnerability to the leaking of personal information and to polarizing influences outside the ethics of the bot's code. Guillory Kramer, in his study, observes the behavior of emotionally volatile users and the impact bots have on them, altering their perception of reality.
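The keyword-targeting behavior described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the `matches_agenda` helper, the sample phrases, and the sample posts are all invented, and no real Twitter API is called.

```python
# Minimal sketch of the keyword/phrase filtering a political bot might apply
# before deciding to retweet a post. Phrases and posts are invented examples.

TARGET_PHRASES = ["election fraud", "rigged vote", "ballot recount"]

def matches_agenda(post_text: str, phrases=TARGET_PHRASES) -> bool:
    """Return True if the post mentions any targeted phrase (case-insensitive)."""
    text = post_text.lower()
    return any(phrase in text for phrase in phrases)

posts = [
    "Officials confirm the ballot recount will begin Monday.",
    "Lovely weather at the rally today!",
]
# In a real bot, each matching post would be passed to the platform's API
# (e.g. a retweet endpoint); here we only collect the matches.
to_retweet = [p for p in posts if matches_agenda(p)]
```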

Several studies by analytics firms such as Juniper Research and Gartner[34] report significant reductions in the cost of customer service, amounting to billions of dollars in savings over the next ten years. Gartner predicts that by 2020 chatbots will be integrated into at least 85% of all customer-service applications. Juniper's study projects $8 billion in annual savings by 2022 from the use of chatbots.
When a chatbot acts as a virtual representative of an enterprise, it is widely used by businesses outside the US, primarily in the UK, the Netherlands, Germany, and Australia. The term is also popular among amateur AI enthusiasts willing to spend vast amounts of time on their own intelligent creations, with diverse outcomes.
AIML, the Artificial Intelligence Markup Language developed by Richard Wallace, is an open standard for creating your own chatbot. An AIML file combines row-type, database-style data with hierarchical XML data in each response. Spreadsheet-style editors for AIML, such as the Simple AIML Editor (SAE) developed by Adeena Mignogna, allow botmasters to manage large AIML sets and then zoom in on the templates to edit the responses.
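A minimal AIML file illustrates the pattern/template structure described above (the greetings here are invented examples, not part of any standard bot):

```xml
<aiml version="1.0.1">
  <!-- Each category pairs an input pattern with a response template. -->
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! How can I help you?</template>
  </category>
  <category>
    <!-- The * wildcard captures user input; <star/> echoes it back. -->
    <pattern>MY NAME IS *</pattern>
    <template>Nice to meet you, <star/>.</template>
  </category>
</aiml>
```

Each `<category>` is one row of the "database-style" data; the XML inside `<template>` provides the hierarchical part of the response.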

Previous generations of chatbots were present on company websites, e.g., Ask Jenn from Alaska Airlines, which debuted in 2008,[27] or Expedia's virtual customer-service agent, which launched in 2011.[27][28] The newer generation of chatbots includes the IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.[29][30]

Are the travel bots or weather bots that present buttons you click to issue a query artificially intelligent? Arguably yes, but they are simply not far along the conversation axis. A bot can have a wonderfully designed conversational interface that is smooth and easy to use, or natural-language processing and understanding that lets it handle sentences structured in unexpected ways. It is now easier than ever to build a bot from scratch, and chatbot development platforms such as WotNot, Chatfuel, and Gupshup make it fairly simple to build one without a technical background, putting chatbots within easy reach of anyone who wants one for their business.
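The button-driven bots mentioned above are essentially fixed lookup tables rather than conversational systems. A minimal sketch, with invented menu labels and canned replies:

```python
# A button-style bot maps a fixed menu choice to a canned reply,
# with no natural-language understanding involved.

MENU = {
    "1": ("Check flight status", "Flight AB123 is on time."),
    "2": ("Today's weather", "Sunny, 22 degrees."),
}

def handle_button(choice: str) -> str:
    """Return the canned reply for a menu choice, or re-show the menu."""
    if choice in MENU:
        return MENU[choice][1]
    options = ", ".join(f"{key}: {label}" for key, (label, _) in MENU.items())
    return f"Please pick one of: {options}"
```

This is why such bots sit low on the conversation axis: any input outside the menu simply re-prompts the user.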
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise.
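ELIZA's apparent intelligence came largely from simple keyword spotting plus pronoun "reflection" of the user's own words. A minimal sketch in that spirit follows; the rules below are invented for illustration and are not Weizenbaum's actual DOCTOR script.

```python
import re

# Reflect first-person words to second-person so the user's phrase
# can be echoed back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def eliza_reply(user_input: str) -> str:
    """Tiny ELIZA-style responder: keyword match, then echo the reflected text."""
    match = re.search(r"\bi feel (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.search(r"\bmy (.+)", user_input, re.IGNORECASE)
    if match:
        return f"Tell me more about your {reflect(match.group(1))}."
    return "Please go on."
```

Even this toy version shows why users could be fooled: the bot understands nothing, yet turning "I feel sad about my job" into "Why do you feel sad about your job?" reads as attentive listening.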