Six Best Practices for Companies Using Generative AI (like DALL-E or ChatGPT) for Marketing Purposes
A new, fun, and fast way to generate words and images has exploded in popularity. The hero (or villain, depending on whom you ask) is a high-powered, complex form of computer programming called generative artificial intelligence (AI). OpenAI, a company riding on a multi-billion-dollar investment from Microsoft, has popularized generative AI with ChatGPT, a now-viral platform allowing users to generate seemingly anything the mind can imagine in text form. Other companies have created platforms like Midjourney or Adobe Firefly, allowing people to do the same but with images.
Copyright issues surrounding generative AI are unsettled. Courts and the United States Copyright Office are still grappling with issues such as:
- Is the text or image created by a generative AI platform copyrightable in the first place, or is the text a user types into the prompt nothing more than an uncopyrightable underlying idea?
- If the text or image is copyrightable, who owns the exclusive rights to it under copyright law?
- Can a company using a generative AI platform, like Midjourney, use an image created by the platform in external marketing materials, or does such use expose the company to copyright infringement claims?
Artists, creators, and other stakeholders have put these questions squarely before the Copyright Office and the courts. For example, a group of artists filed a class action against Midjourney and other image-generating AI platforms, arguing that those companies’ use of copyright-protected works of art constitutes copyright infringement. The companies have claimed “fair use,” a defense with a complicated meaning under copyright law, and one whose contours the United States Supreme Court may dramatically change this year.
Below are some best practices for using these platforms.
- Have a written policy. Given the uncertainties, it is wise to implement an internal policy governing how employees use these platforms to increase productivity.
- Read the terms of use or terms of service. Those terms may disclose whether the platform provider assigns the rights to all generated content to the user, as OpenAI’s ChatGPT terms do. They may also require users to place a notice on generated content telling viewers that it was created using AI.
- Avoid putting confidential or proprietary information into the prompt because confidentiality is not guaranteed. It is worth repeating: read the terms of use or terms of service before allowing employees to use generative AI platforms. OpenAI, for example, discloses that it cannot delete prompts entered by users. It also expressly advises ChatGPT users not to reveal sensitive information and warns that anything entered into the prompt may later be used to train the program. Although these programs have great potential to solve complex problems and fix bugs in code quickly, asking them to debug proprietary code is risky. Three Samsung employees, for example, reportedly put secret company code into ChatGPT to help them fix a bug.
- Avoid putting trademarks, celebrities, or well-known images and characters in the prompt. Popular and well-known images and characters may be subject to copyright protection, and celebrities tend to protect the rights to their image and likeness. To avoid generating content that could expose your company to copyright infringement, right of publicity, or trademark infringement claims, keep popular characters, celebrities, and trademarks out of the prompt.
- Choose platforms wisely, but understand the tradeoffs. Not all companies trained their programs on the same content. Adobe Firefly, for instance, claims to have trained its image-generating program only on licensed content and images in the public domain. Firefly may be a good option for companies trying to minimize infringement allegations, but some users of the Firefly beta have commented that limiting the training data to licensed and otherwise free-to-use content comes at a creative cost: the quality of Firefly’s images may be lower than that of images generated by platforms that scraped copyright-protected content. Using platforms that scraped all content is not necessarily unwise, especially if companies follow the practices above (particularly keeping well-known characters, celebrities, and trademarks out of prompts) and use “reverse image” searching to check whether the platform produced something substantially similar to a copyright-protected work; a minimal sketch of one way to automate such a check appears after this list. Courts may also determine that “fair use” applies to AI-generated images, allowing near-unfettered use, so staying on top of recent developments is a wise move.
- Double check statements of fact. It is prudent to double-check the output of programs like ChatGPT because the information they generate is not always accurate.
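The “reverse image” check mentioned above can be partially automated. The sketch below is a hypothetical illustration, not a tool referenced in this article: it uses the open-source Pillow and imagehash Python libraries to compute perceptual hashes and flag AI-generated images that are near-duplicates of a company-maintained folder of reference images (the reference_images/ folder name and the similarity threshold are assumptions for illustration only).

```python
# Hypothetical pre-screen: flag AI-generated images that closely resemble known reference works.
# Assumes `pip install pillow imagehash`, a local folder of reference images, and an
# illustrative similarity threshold.
from pathlib import Path

import imagehash
from PIL import Image

REFERENCE_DIR = Path("reference_images")  # assumed folder of known protected or third-party images
THRESHOLD = 8  # maximum hash distance treated as "too similar" (illustrative value)


def load_reference_hashes(reference_dir: Path) -> dict[str, imagehash.ImageHash]:
    """Compute a perceptual hash for each image in the reference folder."""
    hashes = {}
    for path in reference_dir.iterdir():
        if path.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
            hashes[path.name] = imagehash.phash(Image.open(path))
    return hashes


def flag_similar(generated_path: str, reference_hashes: dict[str, imagehash.ImageHash]) -> list[str]:
    """Return names of reference images the generated image closely resembles."""
    generated_hash = imagehash.phash(Image.open(generated_path))
    return [
        name
        for name, ref_hash in reference_hashes.items()
        if generated_hash - ref_hash <= THRESHOLD  # subtraction gives the Hamming distance
    ]


if __name__ == "__main__":
    refs = load_reference_hashes(REFERENCE_DIR)
    matches = flag_similar("generated/campaign_image.png", refs)
    if matches:
        print("Hold for review; image resembles:", matches)
    else:
        print("No near-duplicates found; a web reverse-image search is still advisable.")
```

Perceptual hashing only catches near-duplicates; it will not detect stylistic copying or “substantial similarity” in the legal sense, so it supplements rather than replaces a true reverse-image search and human review.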
If you have questions or need help crafting an internal policy, contact our Copyright team.
UPDATE: OpenAI has now added a feature that allows users to turn off Chat History. When Chat History is turned off, new chats are not used to train the model.