Generative artificial intelligence (“AI”) is transforming the way people learn and create. Used correctly, this technology has the potential to create content, products, and experiences that were once unimaginable. However, its rapid advancement has raised legal concerns, including issues of copyright infringement, data privacy, and liability – challenges that are not limited to any one type of business, as generative AI tools can be used across many settings and industries. Making use of these new tools can come at a steep price if AI use runs afoul of legal requirements. With this in mind, it is worth considering what steps companies can take to leverage the power of generative AI while actively mitigating the associated legal risks.
AI and Copyright Infringement
The ability of generative AI to produce original content, such as music, images, and text, has created new challenges in intellectual property (“IP”) law. Companies must ensure that their use of AI-generated content does not infringe on the rights of copyright holders, and it is currently unclear to what extent the output of such models is protected by copyright. Lawsuits have already been filed in which plaintiffs allege that images generated by AI models infringe the copyrights in images contained in the training data. To mitigate these risks, companies should carefully evaluate their use cases for generative AI and consider using dedicated AI models trained on legally obtained data with appropriate licenses in place.
Companies using content created by AI tools should consider establishing guidelines for the use of such AI-generated content, especially since the output itself may not be protected by copyright everywhere. This can present a particular problem if the output is central to the company’s product, since it will be harder to take legal action against copycats and counterfeiters. The law is still developing on this point, and the outcome may differ between jurisdictions. In the European Union, for instance, a copyrightable work generally needs to be the (human) author’s own intellectual creation, a condition that is not met by AI. Similarly, the U.S. Copyright Office has issued guidelines stating that the output of generative AI tools is generally not protected. Copyright law in the United Kingdom, by contrast, potentially does protect computer-generated works created without human involvement, but this area is under review.
Data Privacy and Security
Data privacy is a critical issue when training, developing, and using AI tools. Generative AI models carry high risks because of the vast amount of data used to train them. There is a risk that personal data used to train these models was not used lawfully or could be reverse engineered by asking AI the right questions, creating both privacy and security risks. As such, any business developing or using generative AI will need to ensure that they are doing so in compliance with local laws, such as the General Data Protection Regulation (“GDPR”) in the EU and the UK GDPR in the UK.
The first step on this front is to identify whether personal data (which is defined widely to include information relating to an identified or identifiable natural person) is being used at all. In the event that personal data is used for development, this should be for a specific purpose and under a specific legal basis. The personal data will need to be used in line with legal principles, and special consideration will need to be given to how individuals could exercise their data rights. For example, would it be possible to provide any individual with access to information about them?
When using AI to create outputs, these should be monitored for any potential data leakages that could amount to a data breach. For instance, the fact that an individual has published information about themselves on social media does not necessarily mean it is lawful to use that information for other purposes, such as creating a report on potential customers to target in an advertising campaign.
Contracts and Confidentiality
Before implementing or permitting the use of any generative AI tool, companies should also check the terms under which the tool is provided. These terms may restrict how the output can be used or give the provider of the tool broad rights in anything used as a prompt or other input. This is particularly important if tools are used to translate, summarize, or modify internal documents, which, aside from containing personal data, may also include information that the company would rather keep proprietary or confidential. Uploading such information to a third-party service could breach non-disclosure agreements and trigger serious liability risks.
AI and Sector-Specific Regulation
In addition to generally applicable laws, international businesses should be aware that AI-specific legislation is being developed in the EU. The current draft legislation creates obligations for companies based on the level of risk that an AI application creates. Where AI is used in a high-risk scenario, the providers and users of these systems will need to do more to meet compliance requirements (and some applications are deemed to pose an unacceptable risk and are prohibited outright). In contrast, the UK has recently published a white paper stating that AI will not be subject to specific regulation; instead, oversight will fall to sector-specific regulators. How generative AI fits within either of these frameworks will depend on the context in which it is used. Therefore, any business planning to use generative AI to offer international products or services should consider the EU and UK legal positions early in development to mitigate the risks of potential fines or the need to redevelop that product or service.
THE BOTTOM LINE: Generative AI offers tremendous potential for companies to innovate, streamline, and increase their efficiency. However, businesses must be diligent in addressing the legal risks associated with the technology. By implementing, monitoring, and enforcing policies based on the guidelines outlined above, companies can harness the power of generative AI while mitigating potential legal pitfalls.
Felix Hilgert is a Partner at Osborne Clarke, where he focuses on technology and video games, and helping North American companies expand and succeed abroad.
Emily Barwell is an associate on Osborne Clarke’s U.S. team who specializes in data protection and technology contracts.