Entertainment and Media Guide to AI

Legal issues in AI part 2

Read time: 7 minutes

Introduction

Businesses across all industries increasingly use AI technologies for profiling and automated decision-making. AI-powered age verification tools are also emerging, offering a new approach to meeting age verification requirements. The article below examines these two developments in relation to data protection and data privacy in the age of AI.

Decision time: The rise of AI use in automated decisions

Profiling and automated decision-making are distinct concepts whose definitions vary by jurisdiction, but they are not mutually exclusive.

Profiling generally refers to any form of automated processing of personal information to evaluate, analyze or predict certain aspects of an individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements. For instance, a streaming company may use data about the content a subscriber views to recommend other content they may want to watch or listen to in the future.

Automated decision-making generally refers to the process of making a decision by automated means without any human involvement, which may have a legal or otherwise similarly significant effect. These decisions can be based on factual data as well as on digitally created profiles or inferred data. Automated decision-making often involves profiling, but it does not have to. For instance, companies may use automated decision-making tools to run credit checks on their customers or to identify a user’s age, gender or other identifiable characteristics for other legally permissible business purposes (e.g., to moderate objectionable content or to advertise products based on a consumer’s demographics).

Profiling and automated decision-making can occur in a variety of situations, ranging from financial loans and social health care to employment. In the entertainment industry, for example, AI systems are regularly used in a variety of contexts, such as profiling for advertising, as discussed in the AI and advertising section, and facial recognition technology for event access. Major sporting events around the world, such as the FIFA World Cup and the Olympics, use facial recognition technology to monitor fans for safety purposes. During the 2022 FIFA World Cup in Qatar, 15,000 CCTV cameras were connected to facial recognition systems to monitor threats ranging from reckless football fans to terrorism. Other companies are also exploring and implementing facial recognition technologies to make payments and check in to events.

Given that AI depends on huge data sets that include personal information, and sometimes even sensitive personal information, lawmakers and regulators around the world have addressed the use of AI for profiling and automated decision-making under their jurisdictions’ privacy laws. These laws generally seek to ensure that businesses use AI in a responsible manner, especially when it comes to an individual’s personal information, and that they honor an individual’s right not to be subject to profiling or automated decision-making. Such laws include the EU General Data Protection Regulation (GDPR), California Privacy Rights Act (CPRA), Colorado Privacy Act (CPA), Virginia Consumer Data Protection Act (VCDPA) and Connecticut Data Privacy Act (CTDPA).

Other jurisdictions have taken a slightly different approach. For instance, the Office of the Privacy Commissioner for Personal Data (the Privacy Commissioner) in Hong Kong does not regulate automated decision-making, but in August 2021 it issued guidance on the ethical development and use of AI. The guidance aims to facilitate the healthy development and use of AI in Hong Kong and to assist organizations in complying with the provisions of the Personal Data (Privacy) Ordinance when developing and using AI, including for automated decision-making.

Regardless of the regulatory or legal approach, (a) maintaining transparency, (b) honoring individual rights, (c) determining the level of human intervention and (d) conducting risk/impact assessments are important considerations when using AI for purposes of automated decision-making and profiling. Companies will also need to consider the challenges around the explainability of the decision-making process for AI systems, especially as it relates to the expected impact and potential biases.

Key takeaways
  • AI technology is being adopted across industries for profiling and automated decision-making
  • Companies must use AI responsibly: maintaining transparency, respecting individual rights, determining the appropriate level of human involvement and carrying out risk/impact assessments
  • We examine challenges and privacy risks associated with automated age verification, such as inaccurate outputs