CENTER FOR DATA INNOVATION
Critics of Generative AI Are Worrying
About the Wrong IP Issues
By Daniel Castro | March 20, 2023
Critics argue that developers of generative AI systems such as ChatGPT and DALL-E have unfairly trained their models on copyrighted works. Those concerns are misguided.
Moreover, restricting AI systems from training on legally
accessed data would significantly curtail the development
and adoption of generative AI across many sectors.
Policymakers should focus on strengthening other IP
rights to protect creators.
One of the most visible advancements in artificial intelligence (AI) is the
development of generative AI—AI systems that can produce novel images,
music, or text in response to user prompts. Users are still exploring
potential applications of this technology in many fields, but early results are
promising. Already people have used generative AI tools to draft news articles, press releases, and social media posts; create high-quality images, video, and music; and even write code. And many more applications in
fields such as medicine, entertainment, and education are on the horizon.
However, some critics argue that generative AI poses a serious threat to
content creators. For example, some visual artists have launched online
protests denouncing AI and calling for online platforms to block AI-generated art.1 One of their chief complaints is that when developers train generative AI systems on publicly accessible copyrighted content, they are unfairly exploiting the works of creators.2 But these critics are wrong. Generative AI systems should not be exempt from complying with intellectual property (IP) laws, but neither should they be held to a higher standard than human creators.3
This report refutes five of the most common arguments made about how
generative AI is unfair to creators:
1. Training generative AI systems on copyrighted content is theft.