Saying that the road to the full launch and mass adoption of artificial intelligence (AI) is not without challenges is an understatement. Beyond the familiar technological hurdles of hardware and interfaces, an ethical dilemma now sits at the forefront of a new type of AI: generative AI.

Generative AI programs are those that can, as the term implies, generate new content such as images and text from datasets derived from previously released content. They work by applying deep learning techniques to huge training datasets; the text-oriented programs among them are typically built on large language models (LLMs).

Popular examples of generative AI programs are OpenAI’s ChatGPT and DALL-E. The former is currently being used to create articles based on user prompts, which often draw on old articles and texts. DALL-E, on the other hand, can generate new images based on both image and text prompts.

With these two examples, it is clear where the ethical dilemma arises. Generative AI programs certainly help create new content faster and more efficiently, but if that content is based on previously released or published texts and images, are the authors and artists of the original works being compensated appropriately? 

If not—which is almost always the case—can it be considered an infringement of copyright and intellectual property (IP)? If so, are the developers of these generative AI programs liable for their actions in a court of law? 

For instance, a class action lawsuit was filed against OpenAI by the Authors Guild in the United States on September 19. The lawsuit, whose plaintiffs include big names such as George R.R. Martin, John Grisham, Jodi Picoult and David Baldacci, decries the “flagrant and harmful infringement” of copyrights and the “unpermitted use of the authors’ copyrighted works.” 

This class action lawsuit is but one of many being filed against generative AI developers. Authors and artists demanding appropriate compensation for the use of their works is certainly warranted, and damages should indeed be paid if it is proven that copyrighted works were used unlawfully. 

However, the question that remains is how enforcing these copyright and IP laws will affect the development of generative AI, and of AI in general. Will it cancel out the progress that has been made, and will it hamper future innovation? Should the rights of the smaller group be sacrificed in order to attain a greater good? 

This is certainly an ethical dilemma that demands careful consideration. While lawmakers are still wrangling over how best to deal with the copyright implications of this new technology, AI developers should work out a resolution that benefits both parties.

For instance, because blockchain technology is a decentralized ledger that records transactions immutably, it can provide a reliable way to track content and ownership. Smart contracts and tokenization can also be used to streamline how content is monetized and how copyrights are enforced. 

And if generative AI platforms store all of their datasets on a scalable blockchain, this can provide transparency into how copyrighted content is actually used. The system can be programmed so that authors and artists automatically receive payment whenever their content is used by an AI model. In this way, the burden of paying for copyrighted content falls on users and not solely on developers.
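As an illustration only, the automatic-payment idea described above can be sketched as a toy, in-memory model of a smart-contract royalty ledger. Everything here — the registry class, the flat per-use fee, and the names of works and owners — is a hypothetical construction, not a real blockchain contract or any existing platform's API; a production system would run on-chain with real payment rails.

```python
from dataclasses import dataclass

# Hypothetical flat per-use royalty fee; a real smart contract
# would read this from its on-chain terms, not a constant.
ROYALTY_FEE = 10

@dataclass
class UsageRecord:
    work_id: str
    user: str
    fee: int

class RoyaltyRegistry:
    """Toy model of a smart-contract-style royalty ledger.

    Works are registered to an owner; every recorded use appends an
    entry to an append-only log and credits the owner's balance,
    mimicking how a blockchain contract could automate payouts
    whenever an AI model draws on copyrighted content.
    """

    def __init__(self):
        self._owners = {}    # work_id -> owner identifier
        self._balances = {}  # owner -> accumulated royalties
        self._log = []       # append-only usage log

    def register_work(self, work_id, owner):
        if work_id in self._owners:
            raise ValueError(f"{work_id} is already registered")
        self._owners[work_id] = owner
        self._balances.setdefault(owner, 0)

    def record_use(self, work_id, user):
        """Charge `user` the royalty fee and credit the work's owner."""
        owner = self._owners[work_id]  # KeyError if unregistered
        self._balances[owner] += ROYALTY_FEE
        self._log.append(UsageRecord(work_id, user, ROYALTY_FEE))

    def balance_of(self, owner):
        return self._balances.get(owner, 0)

    def usage_log(self):
        # Return a copy so callers cannot rewrite history.
        return list(self._log)
```

The key design choice mirrored from the article is that payment is triggered by the *use* of a work, so each training or generation event that touches registered content charges the user and credits the rights holder, leaving a transparent audit trail.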