Enter Roof Ai, a chatbot that helps real-estate marketers automate lead engagement and lead assignment via social media. The bot identifies potential leads via Facebook, then responds almost instantaneously in a friendly, helpful, and conversational tone that closely resembles that of a real person. Based on user input, Roof Ai prompts potential leads to provide a little more information before automatically assigning the lead to a sales agent.

AIML (Artificial Intelligence Markup Language), developed by Richard Wallace, is an open standard for creating your own chatbot. An AIML file consists of row-type, database-style data combined with hierarchical XML data in each response. One spreadsheet-style editor for AIML is the Simple AIML Editor (SAE), developed by Adeena Mignogna. The SAE allows botmasters to manage large AIML sets and then zoom in on the templates to edit the responses.
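For illustration, a minimal AIML file might look like the following sketch. The greeting patterns and reply text are invented examples, but the aiml, category, pattern, and template elements (and the srai redirect) are the core constructs of the standard.

    <aiml version="1.0.1">
      <!-- One category = one stimulus-response pair -->
      <category>
        <pattern>HELLO</pattern>
        <template>Hi there! How can I help you?</template>
      </category>
      <!-- srai redirects this pattern to the HELLO category's template -->
      <category>
        <pattern>HI</pattern>
        <template><srai>HELLO</srai></template>
      </category>
    </aiml>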


Bots are also used to buy up good seats for concerts, particularly by ticket brokers who resell the tickets.[12] Deployed against entertainment event-ticketing sites, these bots let brokers unfairly obtain the best seats for themselves while depriving the general public of a chance at them: the bot runs through the purchase process and pulls back as many desirable seats as it can.
Social networking bots are sets of algorithms that carry out repetitive sets of instructions in order to establish a service or connection among social networking users. Their designs range from chatbots, algorithms built to converse with a human user, to social bots, algorithms built to mimic human behavior and converse in patterns similar to those of a human user. The history of social botting can be traced back to Alan Turing in the 1950s and his vision of designing sets of instructional code that could pass the Turing test. ELIZA, a natural language processing program created by Joseph Weizenbaum between 1964 and 1966, was an early example of artificial intelligence algorithms that inspired computer programmers to design programs capable of matching behavior patterns to their sets of instructions. As a result, natural language processing has become an influencing factor in the development of artificial intelligence and social bots, as technological advances are made alongside the mass spread of information and opinion on social media websites.
Several studies conducted by analytics firms such as Juniper and Gartner [34] report significant reductions in the cost of customer service, amounting to billions of dollars in savings over the next ten years. Gartner predicts that by 2020 chatbots will be integrated into at least 85% of all customer service applications. Juniper's study forecasts roughly $8 billion in annual savings by 2022 thanks to the use of chatbots.


However, the revelations didn’t stop there. The researchers also learned that the bots had become remarkably sophisticated negotiators in a short period of time. One bot even attempted to mislead a researcher by feigning interest in a particular item so that it could willingly “sacrifice” that item later and gain crucial negotiating leverage, indicating a remarkable level of premeditation and strategic “thinking.”
Pop-culture references to Skynet and a forthcoming “war against the machines” are perhaps a little too common in articles about AI (including this one and Larry’s post about Google’s RankBrain tech), but they do raise somewhat uncomfortable questions about the unexpected side of developing increasingly sophisticated AI constructs – including seemingly harmless chatbots.
These days, checking the headlines over morning coffee is as much about figuring out whether we should be hunkering down in the basement preparing for imminent nuclear annihilation as it is about keeping up with the day’s news. Unfortunately, even the most diligent newshounds may find it difficult to distinguish the signal from the noise, which is why NBC launched its NBC Politics Bot on Facebook Messenger shortly before the U.S. presidential election in 2016.
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
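As a rough illustration of this keyword-matching approach, the following minimal Python sketch (not Weizenbaum's original program; the keyword table and canned replies here are invented examples) scans the input for clue words and returns a pre-prepared response, falling back to a generic prompt when nothing matches.

    import random

    # Keyword -> canned responses, in the spirit of ELIZA's 'MOTHER' rule.
    RULES = {
        "mother": ["TELL ME MORE ABOUT YOUR FAMILY."],
        "always": ["CAN YOU THINK OF A SPECIFIC EXAMPLE?"],
        "because": ["IS THAT THE REAL REASON?"],
    }
    # Generic prompts used when no clue word is recognized.
    FALLBACK = ["PLEASE GO ON.", "I SEE.", "WHAT DOES THAT SUGGEST TO YOU?"]

    def respond(user_input: str) -> str:
        words = user_input.lower().split()
        for keyword, replies in RULES.items():
            if keyword in words:
                return random.choice(replies)
        return random.choice(FALLBACK)

    print(respond("My mother takes care of me."))  # TELL ME MORE ABOUT YOUR FAMILY.

The processing is purely superficial pattern matching, yet the output can still read as an apparently meaningful continuation of the conversation, which is exactly the illusion the paragraph above describes.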