The word robot derives from the Czech noun robota, meaning “labor”, and was coined by the cubist painter and writer Josef Čapek, older brother of the novelist and playwright Karel Čapek. The word first appeared in 1920 in Karel Čapek’s play “R.U.R.” (“Rossum’s Universal Robots”), which went on to popularize the term his brother had invented.[2]
Reports of political interference in recent elections, including the 2016 US and 2017 UK general elections,[3] have drawn attention to how prevalent botting has become and to the ethical gap between a bot’s design and its designer’s intent. According to Emilio Ferrara, a computer scientist at the University of Southern California writing in Communications of the ACM,[4] the lack of resources for fact-checking and information verification allows these bots to spread large volumes of false reports and claims across social media platforms. On Twitter, most such bots are programmed with keyword-filtering capabilities: they search for words and phrases that favor or oppose particular political agendas and retweet the matching posts. Because the bots are built to spread unverified information throughout the platform,[5] they pose a challenge for programmers working in a hostile political climate. The programs are assigned simple binary functions, which are executed through an application programming interface (API) embedded in the social media website. Ferrara also describes the “Bot Effect”: when bots and human users socialize, the interaction creates a vulnerability to the leaking of personal information and to polarizing influences that lie outside the ethics of the bot’s code. Guillory Kramer, in his study, observes the behavior of emotionally volatile users and the impact bots have on them, altering their perception of reality.
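To make the keyword-filtering pattern described above concrete, the sketch below scans a stream of posts for target phrases and flags the ones a retweet-style bot would amplify. It is a minimal illustration under stated assumptions: the sample posts, the TARGET_PHRASES list, and the fetch_recent_posts helper are hypothetical and do not correspond to any real platform API.

```python
# Minimal sketch of the keyword-filtering behavior attributed to political
# bots: scan incoming posts for target phrases and select matches for
# amplification. All data and helpers here are hypothetical placeholders.

from typing import Iterable, List

TARGET_PHRASES = ["election fraud", "rigged vote"]  # assumed example phrases


def fetch_recent_posts() -> Iterable[dict]:
    """Hypothetical stand-in for a platform API call returning recent posts."""
    return [
        {"id": 1, "text": "Turnout numbers released by the electoral commission."},
        {"id": 2, "text": "New claims of election fraud are circulating online."},
    ]


def select_posts_to_amplify(posts: Iterable[dict], phrases: List[str]) -> List[dict]:
    """Return posts whose text contains any of the target phrases."""
    matches = []
    for post in posts:
        text = post["text"].lower()
        if any(phrase in text for phrase in phrases):
            matches.append(post)
    return matches


if __name__ == "__main__":
    for post in select_posts_to_amplify(fetch_recent_posts(), TARGET_PHRASES):
        # A real bot would call the platform's retweet/share endpoint here.
        print(f"Would amplify post {post['id']}: {post['text']}")
```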
The process of building, testing and deploying chatbots can be done on cloud-based chatbot development platforms[51] offered by cloud Platform as a Service (PaaS) providers such as Oracle Cloud Platform, Yekaliva,[47][28] and IBM Watson.[52][53][54] These cloud platforms provide Natural Language Processing, Artificial Intelligence and Mobile Backend as a Service for chatbot development.
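As a rough illustration of how such a hosted platform is typically consumed, the sketch below sends a user utterance to a cloud chatbot service over HTTP and prints the reply. The endpoint URL, the authorization header, and the JSON request and response shapes are assumptions made for the example; each provider mentioned above defines its own SDK and request format, which should be consulted for real use.

```python
# Illustrative sketch of calling a cloud-hosted chatbot service over HTTP.
# The endpoint, header, and payload/response fields are assumptions, not the
# actual interface of any particular provider.

import requests

CHATBOT_ENDPOINT = "https://example-chatbot-platform.com/api/v1/message"  # hypothetical
API_KEY = "YOUR_API_KEY"  # placeholder credential


def send_message(session_id: str, text: str) -> str:
    """Send one user utterance to the hosted bot and return its text reply."""
    response = requests.post(
        CHATBOT_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"session_id": session_id, "text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape: {"reply": "..."}
    return response.json()["reply"]


if __name__ == "__main__":
    print(send_message("demo-session", "What are your opening hours?"))
```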
Malicious chatbots are frequently used to fill chat rooms with spam and advertisements by mimicking human behavior and conversations, or to entice people into revealing personal information, such as bank account numbers. They are commonly found on Yahoo! Messenger, Windows Live Messenger, AOL Instant Messenger and other instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.[55]
“It’s hard to balance that urge to just dogpile the latest thing when you’re feeling like there’s a land grab or gold rush about to happen all around you and that you might get left behind. But in the end quality wins out. Everyone will be better off if there’s laser focus on building great bot products that are meaningfully differentiated.” — Ryan Block, Cofounder of Begin.com