Despite all efforts over almost half a century, most chatbots are still easily unmasked. Over the coming decades, however, they will get smarter, and eventually the giveaway may flip: it will be the humans who are recognizable by their silly answers, in contrast to the much smarter chatbots. All of this will really start accelerating once a single chatbot becomes smarter than a single human being. At that point chatbots will be able to learn from each other instead of from human beings, their knowledge will explode, and they will be able to design even better learning mechanisms. In the long run, we will learn language from chatbots instead of the other way around.


The bot (which also offers users the opportunity to chat with your friendly neighborhood Spiderman) isn’t a true conversational agent, in the sense that the bot’s responses are currently a little limited; this isn’t a truly “freestyle” chatbot. For example, in the conversation above, the bot didn’t recognize the reply as a valid response – kind of a bummer if you’re hoping for an immersive experience.
Bots are also used to buy up good seats for concerts, particularly by ticket brokers who resell the tickets.[12] Deployed against entertainment event-ticketing sites, these bots let brokers unfairly obtain the best seats for themselves while depriving the general public of a fair chance at them: the bot races through the purchase process, pulling back as many seats as it can.

Pop-culture references to Skynet and a forthcoming “war against the machines” are perhaps a little too common in articles about AI (including this one and Larry’s post about Google’s RankBrain tech), but they do raise somewhat uncomfortable questions about the unexpected side of developing increasingly sophisticated AI constructs – including seemingly harmless chatbots.

Chatbots could also be used as weapons on social networks such as Twitter or Facebook. An entity or individual could create countless chatbots to harass people, and could even track how successful that harassment is, using machine-learning-based methods to sharpen their strategies and counteract harassment-detection tools.
ALICE – which stands for Artificial Linguistic Internet Computer Entity, an acronym that could have been lifted straight out of an episode of The X-Files – was developed and launched by creator Dr. Richard Wallace way back in the dark days of the early Internet in 1995. (As you can see in the image above, the website’s aesthetic remains virtually unchanged since that time, a powerful reminder of how far web design has come.) 
It didn’t take long, however, for Turing’s headaches to begin. The BabyQ bot drew the ire of Chinese officials by speaking ill of the Communist Party. In the exchange seen in the screenshot above, one user commented, “Long Live the Communist Party!” In response, BabyQ asked the user, “Do you think that such a corrupt and incompetent political regime can live forever?”
Thanks to the recent steep rise in available hardware and software platforms, chatbots are now available everywhere. Originally they were tied to individual computers, then exchanged on tapes, discs and floppy discs, but since the Internet era they have become widespread. For example, the ancient chatbot ELIZA is now also available on the iPhone, while the famous chatbot A.L.I.C.E. is available on Facebook.
^ "From Russia With Love" (PDF). Retrieved 2007-12-09. Psychologist and Scientific American: Mind contributing editor Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named. Scientific American: Mind, October–November 2007, page 16–17, "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer". Also available online.
An Internet bot, also known as a web robot, WWW robot or simply bot, is a software application that runs automated tasks (scripts) over the Internet.[1] Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone. The largest use of bots is in web spidering (web crawler), in which an automated script fetches, analyzes and files information from web servers at many times the speed of a human. More than half of all web traffic is made up of bots.[2]
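To make that fetch-analyze-file loop concrete, here is a minimal sketch of a spider in Python; the seed URL, page limit, and what gets stored per page are illustrative assumptions, not a description of any particular crawler.

```python
# Minimal sketch of a web crawler's fetch-analyze-file loop
# (illustrative only; a real spider adds politeness delays,
# robots.txt checks, and far more robust parsing).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first fetch: download a page, record it, queue its links."""
    seen, queue, index = set(), deque([seed_url]), {}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable pages
        index[url] = len(html)  # "file" step: record something about the page
        parser = LinkExtractor()
        parser.feed(html)
        queue.extend(urljoin(url, link) for link in parser.links)
    return index


if __name__ == "__main__":
    for url, size in crawl("https://example.com").items():
        print(size, url)
```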
The term chat bot (or sometimes just bot) can also refer to an automatic chat responder program. The article How to Create a Chat Bot for Yahoo Messenger, written by Chelsea Hoffman, explains how quick and easy it is to create a chat bot responder containing unique and accurate responses to general phrases, words and questions used in Yahoo Messenger.
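Yahoo Messenger itself is long gone, but the underlying idea of a responder that maps common phrases to canned replies is easy to sketch; the trigger phrases and replies below are invented purely for illustration.

```python
# Toy phrase-matching chat responder in the spirit described above.
# The trigger phrases and replies are made up for illustration.
RESPONSES = {
    "hello": "Hi there! How can I help you today?",
    "how are you": "I'm just a bot, but I'm running fine, thanks.",
    "bye": "Goodbye! Come back any time.",
}

DEFAULT_REPLY = "Sorry, I don't have an answer for that yet."


def respond(message):
    """Return the first canned reply whose trigger appears in the message."""
    text = message.lower()
    for trigger, reply in RESPONSES.items():
        if trigger in text:
            return reply
    return DEFAULT_REPLY


if __name__ == "__main__":
    while True:
        line = input("you> ")
        if line.strip().lower() == "quit":
            break
        print("bot>", respond(line))
```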
The most widely used anti-bot technique is the use of CAPTCHA, which is a form of Turing test used to distinguish between a human user and a less-sophisticated AI-powered bot, by the use of graphically-encoded human-readable text. Examples of providers include Recaptcha, and commercial companies such as Minteye, Solve Media, and NuCaptcha. Captchas, however, are not foolproof in preventing bots as they can often be circumvented by computer character recognition, security holes, and even by outsourcing captcha solving to cheap laborers.
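As a rough illustration of the idea (and nothing like a production CAPTCHA), the following Python sketch uses the Pillow imaging library to render a random code onto a noisy image and then checks the user's typed answer against it:

```python
# Toy text CAPTCHA: render a random code onto a noisy image and
# compare the user's answer to it. Requires Pillow (pip install pillow);
# real CAPTCHA services are considerably harder to defeat than this.
import random
import string

from PIL import Image, ImageDraw, ImageFont


def make_captcha(path="captcha.png", length=5):
    """Draw a random code with some line noise and return the code."""
    code = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", (160, 60), "white")
    draw = ImageDraw.Draw(img)
    draw.text((20, 20), code, fill="black", font=ImageFont.load_default())
    for _ in range(8):  # crude distortion lines
        x1, y1 = random.randint(0, 160), random.randint(0, 60)
        x2, y2 = random.randint(0, 160), random.randint(0, 60)
        draw.line((x1, y1, x2, y2), fill="grey")
    img.save(path)
    return code


if __name__ == "__main__":
    answer = make_captcha()
    guess = input("Type the characters shown in captcha.png: ")
    print("human-ish" if guess.strip().upper() == answer else "try again")
```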
However, as irresistible as this story was to news outlets, Facebook’s engineers didn’t pull the plug on the experiment out of fear that the bots were somehow secretly colluding to usurp their meatbag overlords and usher in a new age of machine dominance. They ended the experiment because, once the bots had deviated far enough from acceptable English, the data gleaned from the conversational aspects of the test was of limited value.
^ "From Russia With Love" (PDF). Retrieved 2007-12-09. Psychologist and Scientific American: Mind contributing editor Robert Epstein reports how he was initially fooled by a chatterbot posing as an attractive girl in a personal ad he answered on a dating website. In the ad, the girl portrayed herself as being in Southern California and then soon revealed, in poor English, that she was actually in Russia. He became suspicious after a couple of months of email exchanges, sent her an email test of gibberish, and she still replied in general terms. The dating website is not named. Scientific American: Mind, October–November 2007, page 16–17, "From Russia With Love: How I got fooled (and somewhat humiliated) by a computer". Also available online.
24/7 digital support. Today's increasingly digital consumers expect an instant, always-accessible assistant.[34] Unlike humans, chatbots, once developed and installed, have no limited workdays, holidays or weekends, and are ready to handle queries at any hour of the day, so customers don't have to wait for a company representative to become available. This also lets companies monitor traffic during non-working hours and reach out to those customers later.[41]

Efforts by servers hosting websites to counteract bots vary. Servers may choose to outline rules on the behaviour of internet bots by implementing a robots.txt file: this file is simply text stating the rules governing a bot's behaviour on that server. Any bot that does not follow these rules when interacting with (or 'spidering') any server should, in theory, be denied access to, or removed from, the affected website. If the only rule implementation by a server is a posted text file with no associated program/software/app, then adhering to those rules is entirely voluntary – in reality there is no way to enforce those rules, or even to ensure that a bot's creator or implementer acknowledges, or even reads, the robots.txt file contents. Some bots are "good" – e.g. search engine spiders – while others can be used to launch malicious attacks, most notably in political campaigns.[2]
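For a bot author who does want to play nicely, Python's standard library ships a robots.txt parser; the sketch below (with a hypothetical crawler name and example URLs) shows how a well-behaved spider would check the rules before fetching, keeping in mind that nothing forces a bot to do this.

```python
# Checking a site's robots.txt before fetching, as a well-behaved
# spider would; the user agent name and URLs are only examples.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # download and parse the rules

BOT_NAME = "ExampleBot"  # hypothetical crawler name
for path in ("/", "/private/reports"):
    allowed = parser.can_fetch(BOT_NAME, "https://example.com" + path)
    print(f"{path}: {'allowed' if allowed else 'disallowed'} for {BOT_NAME}")
```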
A representative example of a chat bot is A.L.I.C.E., brought to artificial life in 1995 by Richard Wallace. The A.L.I.C.E. bot participated in numerous competitions related to natural language processing evaluation and collected many honors and awards: it won the Loebner Prize contest at least three times, placed in the top 10 of the Chatterbox competition, and won a best character/personality chat bot contest.
“Major shifts on large platforms should be seen as opportunities for distribution. That said, we need to be careful not to judge the very early prototypes too harshly as the platforms are far from complete. I believe Facebook’s recent launch is the beginning of a new application platform for micro application experiences. The fundamental idea is that customers will interact with just enough UI, whether conversational and/or widgets, to be delighted by a service/brand with immediate access to a rich profile and without the complexities of installing a native app, all fueled by mature advertising products. It’s potentially a massive opportunity.” — Aaron Batalion, Partner at Lightspeed Venture Partners