What began as a televised ad campaign eventually became a fully interactive chatbot developed for PG Tips’ parent company, Unilever (which also owns many widely known household brands), by the London-based agency Ubisend, which specializes in developing bespoke chatbot applications for brands. The aim of the bot was not only to raise brand awareness for PG Tips tea, but also to raise funds for Red Nose Day through the 1 Million Laughs campaign.
According to the Journal of Medical Internet Research, "Chatbots are [...] increasingly used in particular for mental health applications, prevention and behavior change applications (such as smoking cessation or physical activity interventions)."[48] They have been shown to serve as cost-effective and accessible therapeutic agents for indications such as depression and anxiety.[49] A conversational agent called Woebot has been shown to significantly reduce depression in young adults.[50]
There has been a great deal of controversy about the use of bots in automated trading. The auction website eBay went to court in an attempt to stop a third-party company from using bots to traverse its site looking for bargains; this approach backfired on eBay and attracted the attention of further bots. The United Kingdom-based bet exchange Betfair saw such a large amount of traffic coming from bots that it launched a WebService API aimed at bot programmers, through which it can actively manage bot interactions.
The “stand-alone” application, where the chatbot runs on a single computer, usually integrates some form of system interface, allowing the chatbot to control certain aspects and functions of the computer, such as playing media files or retrieving documents. It usually also has a built-in graphical component, typically an avatar (often female), that enhances interaction and thus improves the user experience.
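To make the system-interface idea concrete, the following Python sketch shows a minimal stand-alone bot loop that maps two commands to local actions: handing a media file to the desktop's default player and searching the home directory for a document. It assumes a Unix-like desktop where the "open"/"xdg-open" and "find" utilities are available; the command vocabulary and program structure are illustrative only and not taken from any particular chatbot product.

```python
import subprocess
import sys
from pathlib import Path

# Minimal sketch of a "stand-alone" chatbot: a text loop that maps a few
# recognised commands to local system actions (playing a media file,
# locating a document). Assumes a Unix-like desktop; names are illustrative.

def handle(message: str) -> str:
    text = message.strip()
    lowered = text.lower()
    if lowered.startswith("play "):
        media = text[5:].strip()
        # Hand the file to the operating system's default media player.
        opener = "open" if sys.platform == "darwin" else "xdg-open"
        subprocess.Popen([opener, media])
        return f"Playing {media}"
    if lowered.startswith("find "):
        name = text[5:].strip()
        # Very naive document retrieval: search the home directory by name.
        result = subprocess.run(
            ["find", str(Path.home()), "-iname", f"*{name}*"],
            capture_output=True, text=True,
        )
        hits = result.stdout.splitlines()[:5]  # show at most five matches
        return "\n".join(hits) if hits else "No matching documents found."
    return "I understand 'play <file>' and 'find <document>'."

if __name__ == "__main__":
    while True:
        try:
            user_input = input("you> ")
        except EOFError:
            break
        print("bot>", handle(user_input))
```

A real stand-alone bot would replace the simple prefix matching with proper natural-language parsing and add an avatar front end, but the overall structure of dispatching parsed requests to local system calls is the same.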
In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise.

Chatbots talk in almost every major language. Their language (Natural Language Processing, NLP) skills vary from extremely poor to very clever: intelligent, helpful and funny. The same goes for their graphic design: some feel like a cartoonish character drawn by a child, while others are photo-realistic 3D animated characters that are hard to distinguish from humans. And they are all referred to as ‘chatbots’. If you have a look at our chatbot gallery, you will immediately notice the difference.
If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during a presidential election. With enough chatbots, it might even be possible to achieve artificial social proof.[58][59]
