Artificial intelligence and data privacy

Almost all use of artificial intelligence (AI) requires the collection and use of large amounts of data, in many cases personal data, to learn and make intelligent decisions. In recent years there has been an increased focus on ethical dilemmas, privacy and regulations for AI. In this article, we look at AI and the regulation of automated decisions under GDPR.

AI, ethics and GDPR

The development of AI is largely driven by economic and societal needs, and it is taking place in virtually all areas of business and society. Data systems and machines can carry out advanced tasks more quickly and at a lower cost. In some areas, AI will challenge privacy principles only modestly, while in others the challenge may be perceived as far more wide-ranging and problematic, for example if police and judicial authorities use AI as a tool to make decisions, pass judgments or predict criminal behavior, or if the health care system uses AI to determine utility and eligibility for treatment.

The potential for significant financial or societal benefit must be balanced against privacy principles. Among other things, challenges have been identified in relation to the requirements of lawful, fair and transparent processing, purpose limitation and data minimization. If we do not trust public authorities and private companies to process the information in a fair and equitable way, this may lead to a reluctance to share information and thus reduce the usefulness of AI.

A lot of work is already being done. The same year that the EU's new General Data Protection Regulation1 (GDPR) came into force, the European Commission presented an AI strategy2, and in 2019 it laid out ethics guidelines for trustworthy AI3. These guidelines highlight seven basic considerations that AI systems must take into account. Several of these are closely linked to the protection of natural persons' personal data.

In this article, we look at two sets of GDPR provisions that specifically regulate automated decisions: the right to information and an explanation (Articles 12 to 15), and the right not to be subject to automated decisions (Article 22).

Transparency

Data protection is largely about protecting the natural person's right to make decisions about their own personal data, cf. GDPR Article 1 and Recitals 1 to 4. This requires transparency regarding the processing of personal data. GDPR Article 5 no. 1 requires personal data to be processed "lawfully, fairly and in a transparent manner".

The requirement of transparency is reflected in various provisions, in particular those governing the right of access to personal data, GDPR Articles 12 to 15. When personal data are collected for automated decisions4, additional information requirements are also triggered. The use of AI is often a form of automated processing, and in some instances the system makes the decision independently, without human involvement.

The controller is obliged to inform the data subject about the existence of automated decision-making, cf. Articles 13 no. 2 (f), 14 no. 2 (g) and 15 no. 1 (h). The wording of these articles suggests that the additional requirements only apply to decisions based solely on automated processing, not when there is human intervention. It is also a condition that the processing "produces legal effects concerning him or her or similarly significantly affects him or her", cf. the reference to Article 22 no. 1. Examples are the denial of unemployment benefits or not being selected for a job interview. The obligation to provide additional information also applies when the automated decision-making is based on special categories of personal data.

A commonly expressed concern with advanced AI is that one does not always know how the result is produced, often referred to as "the black box problem". We can distinguish between two main types of black box problems:

1) Access to the algorithms and the logic of the system is deliberately limited, for example for reasons of commercial confidentiality or national security.

2) The system's structure and algorithm are complicated and difficult to explain. This may be the case, for example, in so-called "neural networks". Unsupervised learning also allows systems to identify new patterns and relationships in data that may be difficult to explain.

To comply with the GDPR, the controller must provide relevant information about the underlying logic as well as the significance and the expected consequences of such processing. The term "logic" is not defined in the GDPR. The Norwegian Data Protection Authority states that the controller must provide information about, for example, the decision trees used, how the different data points are weighted and how they are linked. This can be challenging for advanced AI. The Authority also notes that it is not always necessary to provide a comprehensive explanation of the algorithm, or to present the algorithm itself5.
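
For a simple model such as a decision tree, this kind of information can be extracted directly from the model itself. Below is a minimal sketch, assuming a scikit-learn classifier trained on hypothetical loan data; it illustrates the type of information involved (the decision rules and the relative weight of each data point), not a legally sufficient explanation.

    # Minimal sketch: extracting the "logic" of a simple automated decision model.
    # Assumes scikit-learn is installed; the feature names and data are hypothetical.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical loan applications: [income, debt, years_employed]
    X = [
        [55000, 12000, 4],
        [32000, 20000, 1],
        [78000, 5000, 9],
        [41000, 30000, 2],
        [90000, 8000, 12],
        [28000, 25000, 0],
    ]
    y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = rejected

    feature_names = ["income", "debt", "years_employed"]
    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Human-readable decision rules: which data points are used and how they are linked.
    print(export_text(model, feature_names=feature_names))

    # How the different data points are weighted in the trained model.
    for name, weight in zip(feature_names, model.feature_importances_):
        print(f"{name}: {weight:.2f}")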

The Guidelines on Automated individual decision-making (WP251) from the Article 29 Working Party6, later endorsed by the European Data Protection Board, discuss the requirement to explain the "logic" as follows:

"The controller should in simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision. The GDPR requires the controller to provide meaningful information about the logic involved, not necessarily a complex explanation of the algorithms used or disclosure of the full algorithm. The information provided should, however, be sufficiently comprehensive for the data subject to understand the reasons for the decision."

Examples of information are:

  • the categories of data that have been or will be used in the decision-making process;
  • why these categories are considered pertinent;
  • how any profile used in the automated decision-making process is built, including any statistics used in the analysis; and
  • how it is used for a decision concerning the data subject.

Under the GDPR, there is probably no requirement that the algorithm itself be disclosed, cf. Recital 63, which presumes that the individual's rights should not adversely affect trade secrets or intellectual property rights, and in particular the copyright protecting the software.
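
The examples listed above can be thought of as a structured notice to the data subject. The sketch below shows one possible way to represent such a notice in a system without disclosing the model or the algorithm itself; the structure and field names are our own illustration and are not prescribed by the GDPR or WP251.

    # Minimal sketch of a data-subject-facing explanation record, assembled from the
    # categories of information WP251 mentions. Field names are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DecisionExplanation:
        decision: str                # the outcome communicated to the data subject
        data_categories: List[str]   # categories of data used in the decision-making
        why_pertinent: str           # why these categories are considered relevant
        profile_description: str     # how any profile used in the process is built
        how_used: str                # how the profile is used for this decision
        main_factors: List[str] = field(default_factory=list)  # factors that weighed most

    explanation = DecisionExplanation(
        decision="Loan application declined",
        data_categories=["income", "existing debt", "payment history"],
        why_pertinent="These categories are used to assess the ability to repay the loan.",
        profile_description="A credit score derived from historical repayment data.",
        how_used="Applications with a score below the approval threshold are declined.",
        main_factors=["existing debt relative to income"],
    )
    print(explanation)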

The right to be informed under Article 13 etc. is important, and will probably become even more important as AI is used in more areas of business and society. There is currently little guidance on how the right to information is to be safeguarded, and there is hardly a "one-size-fits-all" solution. Suggestions include, for example, the establishment of ethics boards, and "auditing techniques" built into the system to give a third party the opportunity to monitor and audit it.
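
As an illustration of what such a built-in auditing technique could look like, the sketch below appends every automated decision to a hash-chained log that a third party could later inspect for tampering; the design and field choices are assumptions made for the purpose of illustration, not an established standard.

    # Minimal sketch of a built-in audit trail for automated decisions.
    # Entries are hash-chained so that later tampering is detectable by an auditor.
    import hashlib
    import json
    import time

    audit_log = []

    def record_decision(subject_id, model_version, inputs, outcome):
        previous_hash = audit_log[-1]["entry_hash"] if audit_log else "genesis"
        entry = {
            "timestamp": time.time(),
            "subject_id": subject_id,
            "model_version": model_version,
            "inputs": inputs,
            "outcome": outcome,
            "previous_hash": previous_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        audit_log.append(entry)
        return entry

    record_decision("subject-001", "scoring-model-v3", {"income": 41000, "debt": 30000}, "rejected")
    print(json.dumps(audit_log, indent=2))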

Right to human intervention

Another provision that is likely to become even more important as AI develops is GDPR Article 22. That provision gives a natural person the right not to be subject to a decision based solely on automated processing. If a natural person is involved in the process leading up to the decision, or has an opportunity to review and change the decision, the provision does not apply.

In WP251 it is stated that Article 22 no. 1 shall be interpreted as a prohibition on automated individual decision-making:

"The term" right "in the provision does not mean that Article 22 (1) applies only when actively invoked by the data subject. Article 22 (1) establishes a general prohibition for decision-making based solely on automated processing. This prohibition applies whether or not the data subject takes an action regarding the processing of their personal data."

Some legal scholars disagree with the WP251 interpretation. It is debated whether Article 22 no. 1 is a prohibition, or whether it is a right that must be exercised by the individual in the same way as other rights under the GDPR, such as data portability, access, etc.

However, Article 22 no. 1 does not apply unconditionally. Solely automated decisions are allowed when the decision:

a) is necessary to enter into or fulfil an agreement between the data subject and a controller;

b) is authorized under EU or national law of the Member State to which the controller is subject, and where appropriate measures are also adopted to protect the rights, freedoms and legitimate interests of the data subject; or

c) is based on the explicit consent of the data subject.

If the exceptions in (a) or (c) apply, the controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests. This includes measures that give the data subject the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. In other words, the data subject must be able to demand that a natural person makes the final decision, and must be able to contest the automated decision, cf. Article 22 no. 3. WP251 states that "Any review must be carried out by someone who has the appropriate authority and ability to change the decision. The reviewer should undertake a thorough assessment of all the relevant data, including any additional information provided by the data subject."
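
As a practical illustration, the safeguards in Article 22 no. 3 can be read as a workflow: the data subject expresses their point of view and contests the decision, and a reviewer with the authority to change it makes the final call. The sketch below is one possible, simplified implementation; the class and field names are our own assumptions, not terms used in the GDPR.

    # Minimal sketch of a human-intervention workflow for a solely automated decision.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AutomatedDecision:
        subject_id: str
        outcome: str                                    # produced solely by the system
        subject_comments: List[str] = field(default_factory=list)
        contested: bool = False
        human_reviewer: Optional[str] = None
        final_outcome: Optional[str] = None

        def contest(self, comment):
            # The data subject expresses their point of view and contests the decision.
            self.subject_comments.append(comment)
            self.contested = True

        def human_review(self, reviewer, new_outcome):
            # A reviewer with authority to change the decision makes the final call,
            # taking the data subject's comments into account.
            self.human_reviewer = reviewer
            self.final_outcome = new_outcome

    decision = AutomatedDecision(subject_id="subject-001", outcome="rejected")
    decision.contest("My income has increased since the data was collected.")
    decision.human_review(reviewer="case-officer-42", new_outcome="approved")
    print(decision)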

Solely automated decisions involving special categories of personal data are only allowed with explicit consent or where there is a legal basis of substantial public interest, and suitable safeguards are in place, cf. Article 22 no. 4.

The right to human intervention is a fundamental right and highly relevant to AI. The interpretation of Article 22 is likely to be brought before member state courts and the CJEU7 in the future.

Summary

The GDPR has several provisions that may impact the development of AI, including the requirements for transparency and data protection by design and default. However, the implementation of these provisions may face several challenges, both practical and legal. There are some factors that stand out as central to further development and clarification:

  • The need for methods and guidelines for ensuring transparency.
  • The scope of the individual's rights or the prohibition under Article 22.
  • Effective enforcement of the individual's rights under privacy laws.

1 Regulation (EU) 2016/679

2 https://ec.europa.eu/digital-single-market/en/artificial-intelligence

3 Ethics Guidelines for trustworthy AI, High-Level Expert Group on Artificial Intelligence, 8 April 2019

4 Cf. GDPR Article 22 no. 1 and 4

5 Datatilsynet "Artifical intelligence and privacy" report, January 2018

6 Guidelines on Automated individual decision-making and profiling for the purposes of Regulation 2016/679 (WP251)

7 Court of Justice of the European Union