Thought leadership from our experts

Beware of the legal risks surrounding the rise of chatbots

What is a bot?

A bot, short for "robot", is a software program that performs automated, repetitive tasks on the Internet. There are many different types of bots. For instance, a common type is the web crawler, or spider, which search engines use to systematically browse the World Wide Web in order to index websites.

What is a chatbot?

As its name suggests, a chatbot is a software program that "chats". It simulates human conversation through voice commands, text, or both. It is one of the earliest forms of automated program envisaged, in the fifties, by Alan Turing, creator of the first theoretical model of the computer and a father of artificial intelligence. His idea was to develop an artificial computing intelligence able to impersonate a human in a real-time conversation sufficiently well that a human would not be able to reliably distinguish between the program and a real person.

Who uses chatbots and why?

Chatbots became popular in the nineties with the rise of online chatrooms. At the time, bots were mainly used to detect certain text patterns sent by chatroom participants and reply with automated responses. In particular, chatbots would identify inappropriate language and warn the user to stop or, eventually, block him or her from the discussion.
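The pattern-matching behaviour described above can be sketched in a few lines of Python. This is a purely hypothetical illustration: the patterns, replies, blocked words and warning limit are all invented for the example.

```python
import re

# Hypothetical rules: map regex patterns to canned replies.
RULES = [
    (re.compile(r"\b(hi|hello)\b", re.IGNORECASE), "Hello! How can I help?"),
    (re.compile(r"\bopening hours\b", re.IGNORECASE), "We are open 9am-5pm, Monday to Friday."),
]

BLOCKED_WORDS = {"badword1", "badword2"}  # placeholder list of inappropriate terms
WARN_LIMIT = 2                            # warnings issued before a user is blocked

warnings = {}  # user -> number of warnings issued so far

def respond(user, message):
    """Return the bot's reply, warning or blocking users who use inappropriate language."""
    words = set(re.findall(r"\w+", message.lower()))
    if words & BLOCKED_WORDS:
        warnings[user] = warnings.get(user, 0) + 1
        if warnings[user] > WARN_LIMIT:
            return f"{user} has been removed from the discussion."
        return f"Warning {warnings[user]}/{WARN_LIMIT}: please mind your language."
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return "Sorry, I did not understand that."
```

A real moderation bot would use far richer pattern lists and persistent state, but the structure (match, warn, escalate to a block) is the same as that of the early chatroom bots.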

Nowadays, chatbots are widely used by corporations for many purposes:

  • Customer support services: a great number of companies use chatbots as virtual customer service agents. The bots are capable of handling numerous repetitive questions, such as product return and exchange issues. Using chatbots is highly beneficial to the company. On the one hand, it eliminates call waiting time for the customer and provides a 24-hour service. On the other hand, it allows the company to cut labor costs. Indeed, according to BI Intelligence, chatbots can save up to 30% of annual salary costs in customer support services [1]. Many entities have already implemented chatbots as part of their customer support service. For example, Alaska Airlines launched its virtual assistant "Ask Jenn" as early as 2008.
  • Advertising: more and more brands are using chatbots for promotional purposes. Brands use messenger applications, such as Facebook Messenger, to send advertising links to potential consumers. WeChat, a Chinese messenger application, already has more than 10 million official accounts, including banks, hotels and even celebrities, registered to interact with users through chatbots. This provides brands with a new exposure window and a new advertising opportunity that is more personal than a spam email. In fact, according to Octane AI, a company specializing in chatbots and messenger marketing, 15 to 60% of people who receive a message from a Messenger chatbot containing a link to an external URL click the link [2].
  • Legal services: some start-ups are creating chatbots that provide low-cost or even free legal advice and services. A famous example, dubbed "the first lawyer robot", is DoNotPay, which started by appealing parking tickets. In 26 months, DoNotPay took on 250,000 cases and won 160,000, a success rate of 64%, successfully appealing over 4 million USD of parking tickets. After this success, the AI lawyer went on to help people with flight delay compensation and to act as a guide for refugees navigating foreign legal systems. While this type of chatbot appears particularly advantageous for its users, and shows that chatbots can be used for more than just promoting products, it cannot easily be implemented in all countries. Indeed, such professions require specific qualifications and are therefore highly regulated in many countries, including France, which restricts who may offer legal advice.

  • Malicious use: while most chatbots are used for productive purposes, some are considered malware. Malicious chatbots are frequently used to fill chat rooms with spam and advertising, or to entice people into revealing personal information, such as bank account numbers.

In light of all these uses, recourse to chatbots is constantly increasing. Indeed, according to the multinational computer technology corporation Oracle, 80% of businesses will implement chatbots by 2020 [3].

But what are the legal issues and risks implied by this phenomenon?

Terms & Conditions

Users should be made aware when an activity is being carried out by a chatbot. This is especially important if chatbots are being used to facilitate online transactions or to provide any type of advice. Acceptance of the applicable terms and conditions by users must be unequivocal.

Disclaimers and compliance to regulated activities

The introduction of chatbots in highly regulated industries such as financial, medical or legal services raises a range of potential liability issues. For example, if a chatbot is assisting a user with booking a flight, a disclaimer stating that the service is computer generated and that users are responsible for checking the information provided before booking travel may be appropriate. If the chatbot is used in a regulated industry, its activities must be programmed to comply with industry regulations and standards.

Developers and owners of chatbots should particularly watch out for the rules around product recommendations and advertising. Companies working in collaboration with sponsors who want to exploit chatbots will have to make it clear when a chatbot is "sponsored", "paid for" or "brought to you by" a partner, or when a chatbot is programmed to put forward sponsored products. If a chatbot is programmed to always suggest a particular brand whenever anyone asks for clothes shop recommendations, for example, an explicit disclaimer must be put in place.

Any company giving a chatbot authority to advise users will need to ensure that the chatbot has access to a large volume of up-to-date information so that it can understand instructions and questions and provide helpful, relevant responses. Where the chatbot is unable to provide correct advice, a clear disclaimer and a trigger for human intervention should be considered.
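Such a fallback can be sketched in a few lines. This is a minimal illustration only: the confidence score, threshold and disclaimer text are hypothetical, and a real system would route the escalation to an actual support queue.

```python
CONFIDENCE_THRESHOLD = 0.7  # illustrative cut-off; below this, the bot defers to a human

DISCLAIMER = ("This answer is computer generated; please verify the "
              "information before relying on it.")

def reply_or_escalate(answer, confidence):
    """Attach a disclaimer to confident answers; hand low-confidence queries to a human."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "Let me connect you with a human agent who can help."
    return f"{answer}\n\n{DISCLAIMER}"
```

The point is that both legal safeguards discussed above, the disclaimer and the human-intervention trigger, can be enforced at a single choke point through which every outgoing answer passes.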

Data protection

Chatbots have the potential to collect a large volume of personal data and other commercial information in the course of interacting with internet users. Data protection policy is therefore a key issue for companies using chatbots, especially in the European Union. In particular, companies must clarify the identity of the data controller, the nature of the data collected, the duration and purposes of the processing and, where relevant, the identity, location and degree of protection ensured by any recipient of this data outside the European Union.

Infringement of third party rights

Chatbots are liable to infringe copyright-protected works or to use third-party trademarks. Appropriate safeguards must be put in place to prevent such infringements.

Developers of chatbots must therefore ensure that any use of third-party intellectual property rights falls within the scope of the exceptions that do not constitute infringement. For example, fair use of a third party's trademark can be made in the context of comparative advertising. Using a trademark is also not considered an infringement of its owner's rights when the use is not in the course of trade, although this is quite rarely the case. The main criterion is the avoidance of any likelihood of confusion. As for copyright, use of a protected work may be defended by invoking freedom of speech or the parody exception. In all cases, it is essential to make sure that the chatbot does not infringe others' rights, since such infringement would engage the liability of its owners or developers and damage the image of the brand using it.

Prevention of rogue chatbots

Companies should be cautious about potentially detrimental, abusive or incorrect responses that a chatbot may give, and bear in mind the effect a chatbot can have on a company's image and profile. A number of recent chatbot errors have embarrassed companies and brands through inappropriate answers. For instance, in 2016, Microsoft's chatbot Tay, designed to interact with 18-24 year olds through artificial intelligence and machine learning, went rogue on Twitter, swearing and making racist remarks and inflammatory political statements. Such behavior may expose the corporation responsible for the chatbot to liability for defamation, abuse or harassment.

To prevent such an outcome, companies should anticipate this risk and implement some form of intelligent censoring mechanism. Furthermore, the chatbot should be tested and reviewed, before launch, through random conversations designed to expose it to this scenario. Companies should also incorporate this risk into their risk and crisis management planning, so that they can react quickly to any complaints made by the public concerning their dealings with the chatbot.
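One simple form of such a censoring mechanism is an output filter that screens the bot's own draft replies before they are sent. The sketch below is illustrative only: the blocklist patterns and fallback message are invented, and a production system would rely on a maintained moderation service rather than a hand-written list.

```python
import re

# Illustrative blocklist of terms the bot must never emit.
BANNED = [re.compile(p, re.IGNORECASE) for p in (r"\bracist\b", r"\bswearword\b")]

FALLBACK = "I'm not able to discuss that. Let's talk about something else."

def censor(draft_reply):
    """Suppress any draft reply that matches the blocklist before it is sent."""
    if any(p.search(draft_reply) for p in BANNED):
        return FALLBACK
    return draft_reply
```

Because the filter sits between the language model and the user, it works regardless of how the reply was generated, which is exactly the property a company needs when the underlying model learns from untrusted public input, as Tay did.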

It is clear that, while chatbots offer several benefits to their developers as well as their users, which explains their rise, they also entail a variety of risks. Anticipating these risks and making sure that the chatbot respects the relevant legal framework is a must. Ensuring this compliance may require a professional legal opinion, which developers should not hesitate to seek.

  1. BI Intelligence, The Chatbots Explainer, 2016
  2. Octane AI Guidebook, Does your small business need a chatbot?, May 2017
  3. Oracle, Virtual Reality, Chatbots to dominate brand interactions by 2020, December 2016