By Heidi Bruce, Principal Partner
29 November 2023
The rapid advancement and uptake of AI has given rise to new opportunities across many industry sectors, but has also brought with it a host of concerns, from the impact on our jobs and work being taken away from artists, through to a whole new set of legal risks and challenges. In the advertising and communication industries, agencies and brands are exploring ways to harness the potential of AI in creative and innovative ways. In scenes reminiscent of the digital media revolution 15-20 years ago, the industry is starting to realise the potential of AI, and the cost-savings on offer are appealing for brands and agencies alike. There is a growing number of AI programs, such as Midjourney, DALL-E, Craiyon, Dream, and Adobe’s Firefly, that can generate artworks from just a few prompts. There are other AI tools like Springboards that help with campaign concepts and creative briefs, and of course chatbots like ChatGPT. Many are asking us about the legal risks of using AI-generated outputs in the advertising process – for ideation, and for the creation of artwork, campaign concepts and content.
More specifically, what are some of the compelling opportunities for using AI-generated images in an advertising campaign – as a key image in a poster, a social media ad, or packaging? Who owns the intellectual property in AI-generated art, and what are the risks of copyright challenges from other artists? What sort of protections do the AI platforms offer, and who is liable if there is a legal issue? What safeguards can be implemented in this area?
Below, we break down the key legal issues and risks.
The introduction of AI into the creative process is still in its early stages, although it is accelerating fast, and creative minds will be at the forefront of this growth. Marketing teams are looking for ways to embrace these technologies effectively and drive innovation, and brands are looking for greater efficiencies and increased marketing performance. As a result, agency groups and management are asking similar questions – how can we harness this technology without creating a massive legal headache for ourselves and our clients?
The use of AI is attractive: you can generate a piece of content very quickly, and home in on very precise creative requirements, without the need for an expensive shoot. There is no need to engage talent, photographers, lighting or production crews, no wet weather cancellation risk, and a nicely tailored piece of content at the end that perfectly meets the brief. But is all this as intelligent as it sounds?
Key issues – intellectual property
One of the key considerations to be aware of in the use of AI in the creative industries is intellectual property. Firstly, who owns the AI output? Then, what are the risks of the output infringing the intellectual property rights of another artist? And who bears the associated risk?
The terms of service for these AI programs each differ slightly, and are important to consider before use. That said, we have seen a common thread across the terms of service for these AI content generation tools. What we typically see is that:
- You (as the user) are the owner of the output and the intellectual property rights in it.
That all sounds good at first glance, but looking further through the fine print, you will also see that:
- You use the output at your own risk;
- With most platforms, no promises are given as to whether the output infringes any third party rights – after all, such a promise is effectively impossible to give when AI engines learn by copying massive amounts of third party work;
- You are responsible for defending any legal challenges from third parties (such as from artists claiming that the output infringes their copyright);
- You are also liable for damages to the AI program owner, if you have knowingly used the program in a way that infringes third party rights.
Midjourney is an example of this. Their Terms of Service are in line with the above and make it clear that all responsibility for content created falls to the user. An excerpt from their current Terms of Service (November 2023) is set out below:
“We provide the service as is, and we make no promises or guarantees about it. …
You are responsible for Your use of the service. If You harm someone else or get into a dispute with someone else, we will not be involved.
If You knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find You and collect that money from You. We might also do other stuff, like try to get a court to make You pay our attorney’s fees. Don’t do it.”
The Midjourney terms also contain a facility for artists to complain about any IP they believe is theirs and to issue takedown notices.
There has recently been a lawsuit brought by some artists in the US against DeviantArt, Stability AI and Midjourney, which claims that these companies infringe the copyright of artists by using their images as a basis for the program’s learning capability. These claims are now on the path to trial. Again, given that all AI engines learn by scouring the internet and available third party sources, on their face the claims have merit. What remains to be seen is how the law will bend or stretch and adapt, in the face of these challenges.
How close is too close?
There is an inherent risk that the images these programs generate may inadvertently infringe copyright, if an image is too close to an existing image, or draft copy is too close to an existing work. Copyright infringement can be made out where a work substantially reproduces a third party work, and it is more about the quality of what is taken than the quantity.
In a typical scenario, when you engage an artist to create an artwork, the artist can give assurances that their work is original and that they have not substantially copied any other works. You can ask the artist to provide reference or source materials, and you can get legal advice if the work is based on very similar works. You can more easily assess infringement risks and the changes that need to be made to alleviate that risk. An AI program, on the other hand, is not capable of conducting its own due diligence, and the sources for the generated images are not disclosed. On this basis, any image generated by an AI program carries an inherent risk of attracting a copyright claim, if an artist notices and believes that one of their works has been reproduced by the program. It is very difficult to assess that risk when there is no way to know for sure what sources the program used to create the artwork.
If an agency uses an AI program to create artwork or other content (such as copy), then the agency, as the AI program user, carries the risk of infringement claims. In terms of recourse, if there were a claim by an artist, the agency or brand using the output would have little to no recourse against the AI program owners themselves. The output comes with few assurances as to its origins or originality. It is up to the user to conduct their own due diligence, assess the risks, and decide when and in what commercial contexts to use the output. This may not sit well with the contract between the agency and its client, which may place responsibility on the agency for IP infringement.
On this basis, with the AI programs currently on offer, AI-generated images are not necessarily “safe for use” in a commercial environment. There are definitely opportunities to embrace this technology further in creative settings; the risks are more manageable in scenarios such as pitches, early concepts, exploratory stages, storyboards, creative treatments and mood boards. However, advertisers and agencies need to be aware that using this technology to develop creative materials, particularly materials to be published broadly, poses potential copyright risks, due to the unknown origins of the work, the lack of visibility into the programming itself, and the lack of recourse against the program owners.
If AI is used, the worst case scenario is a valid IP infringement claim, in which case the agency may be found to have breached warranties given under its agreement with its client (i.e. that its work does not infringe any third party rights). The client may then trigger an indemnity claim for the IP breach. Indemnities and warranties as to IP infringement are currently fairly standard in agency/client services agreements, and so the agency will be on the hook.
We expect developments in this area, with AI programs becoming available that provide a higher level of assurance for commercial use. Adobe has announced that its Firefly is “designed to generate content safe for commercial use”, although the practical reality of this remains to be seen. Of course, it would be an over-reaction to simply ban the use of AI (especially when the cost-savings on offer are so attractive), but it does need to be used with caution, and with appropriate parameters and safeguards in place, which we look at below.
Key issues – confidential information
Another important consideration in the use of AI is the protection of confidential information. When a brand briefs an agency to work on a campaign, there will no doubt be sharing of the client’s confidential information, including sensitive details about its products, services, pricing, upcoming product launches, or marketing initiatives. Work on the campaign can also involve sensitive information of the agency, including branding concepts, methodologies and research insights, and of course personal information relating to the team involved.
Many AI programs are quite open about the fact that your inputs can be used to further train and enhance the program. If sensitive information is fed into the program by way of prompts or other inputs, you are putting that confidential information at risk, and are exposed to claims for breach of confidence or breach of NDA obligations. In other words, inputting information into an AI platform can mean that the information is no longer confidential, and you may be found to have breached your obligations. Some AI programs are more controlled than others in terms of what they do with your inputs, and close attention to the terms is needed. Some companies are now exploring their own internal AI tools that operate in more of a ‘walled garden’ framework, and we are likely to see much more of this in the future.
Key issues – accuracy
The other important consideration is that the AI platforms typically give no guarantees that the content generated will be accurate or free from errors. This leaves open the risk of false or misleading claims arising from the use of AI-generated copy in advertising and campaign materials.
This means that material created with the use of AI will still need to be carefully reviewed and fact checked, and independently scrutinised like any other piece of content.
It has been well publicised that AI programs can have ‘hallucinations’, a phenomenon where an AI program generates content that is nonsensical or inaccurate. If asked a question they do not know the answer to, they can make one up, but present it as fact. For instance, if you ask ChatGPT what Barack Obama’s favourite breakfast is, you will get an answer, but there are no disclosures or warnings as to whether it is accurate or what facts it is based on. There have been instances in the US of lawyers getting into hot water for using AI, including a New York lawyer who used AI for legal research and submitted a brief that included six “bogus” cases that the chatbot appeared to have simply made up. News outlet CNET was forced to issue corrections after an article generated by an AI tool gave wildly inaccurate personal finance advice when asked to explain how compound interest works. Relying on AI can be problematic when you need factual or reliable information, so this sort of content needs to be verified and reviewed.
Other PR-related considerations
In addition, the likely public response to the use of AI should also be factored in. If it came out that a major brand was using AI-generated images rather than real images with real talent, this may attract criticism and complaint from the public, and particularly from artists who fear these programs may replace or devalue their services.
In November 2023, YouTube released a set of new requirements for the disclosure and labelling of AI content. These include a requirement for creators to disclose when they have used AI tools to create realistic altered or synthetic content. When creators upload content, YouTube will offer various new options to indicate that it contains realistic altered or synthetic content. For example, this could capture an AI-generated video that realistically depicts an event that never happened, or shows someone doing something they didn’t do (eg a very realistic AI-generated Tom Cruise talking about a product, or giving a political view). The disclosure will take the form of a new label added to the description panel, or, for certain sensitive content, a more prominent label applied to the video player. Material that violates the YouTube Community Guidelines may be removed, regardless of whether it is labelled.
There are also processes being introduced that allow artists to request removal of content if it uses their face or image in a digitally-generated way without their permission.
What safeguards can be put in place for agencies and brands?
Brands need to be very mindful of the risks when directing an agency to use AI during the creative process, and should have open and transparent conversations about expectations and the responsibilities of the parties. This goes both ways, of course. Where an advertising agency is looking to use AI for creative materials, it would be prudent, for the agency’s protection, to disclose the use of AI to the client, and for the parties to be clear on who bears what responsibility once work is published. If a brand is pushing for AI to be used and the agency has reservations, the agency may wish to seek an indemnity from the client, under which the client holds the agency harmless from any issues arising from use of the program, and this would be reasonable. Likewise, if the agency is using AI and the brand client does not know this, the agency is most likely doing so at its own risk.
It is also important to have clear policies and procedures in place with the personnel who are using this technology, to ensure that they are using this appropriately. For instance:
- Staff should take great care that inputs and prompts do not create intellectual property infringement risks, such as using prompts to mimic a particular artist’s style, to replicate a well-known image, to create a tribute piece that references a known work, or to outright copy a particular work;
- Staff should take great care not to input any data that is commercially sensitive or confidential, and particularly anything that identifies a company, brand or individual;
- Staff should check AI outputs for accuracy, especially for ad copy or claims;
- The agency or brand may have their own policies that set out pre-vetted AI programs or vendors that may be used, and others that are not to be used, or may only be used in certain contexts;
- Procedures may also set out the circumstances in which AI may and may not be used, and when staff should disclose its use to managers or key stakeholders such as clients;
- Staff should perform due diligence checks to satisfy themselves in relation to IP issues, making changes to the work where appropriate to ensure it is original; and
- Legal review and approval processes should be followed, both internally and externally, where appropriate.
If you would like further information on the above and how it impacts on you, please contact one of our experts below. We can provide tailored legal and practical advice to assist you.