By Audrey Kelly
The rise of ChatGPT and generative artificial intelligence programs raises important questions for creators. Whether it is a short story, a business memo, a thumbnail photo, or other visuals, there seem to be endless possibilities for what AI can generate. While exciting, the apparent limitlessness of AI is also concerning because its inner workings are invisible to us as users. Though these programs may seem magical, images and text don’t simply appear out of thin air. Artificial intelligence isn’t ‘smart’; rather, it relies on machine learning to collect existing data and improve based on the patterns it finds. Thus, concepts such as authorship and copyright are directly called into question by AI and must be examined before you decide to submit any AI-generated work. Below are a few considerations to help you with this.
Does AI produce original work?
You might be familiar with the traditional definition of plagiarism: passing off someone else’s work or words as your own. In an academic essay, a references page with appropriate in-text citations ensures that the writer acknowledges any outside sources and avoids plagiarism. Used appropriately, outside sources support the writer’s words, and the piece remains uniquely the writer’s, even when informed by information or words from outside sources. When it comes to works generated by AI, the boundary between plagiarism and originality is blurrier, since AI-generated content draws on data from other creators who may not even know that their content was used to train an AI model, let alone to what extent.
So what does this mean for AI-produced pieces? Are they original? Does AI count as an author? The answer depends on how we define authorship. If we consider that anything generated by an AI system is based on existing data, it is difficult not to see AI work as a conglomerate of previous works from creators who (probably) did not consent to their work being used that way. Perhaps AI only repeats back and does not really create.
The question of authorship also depends on how ChatGPT is being used. For example, if a user prompts ChatGPT to write an entire essay, the result can hardly be considered the user’s own thoughts, experience, or voice. And if someone tries to pass off an AI-generated text as their own, labeling the act as plagiarism seems like an easy answer. Does this mean we can never use AI to create ethically? No. Harnessed as a tool, AI systems can benefit writers. For example, Robert A. Gonsalves describes how ChatGPT can be used as an “interactive writing partner” to generate ideas and edit a text, as in the sketch below. In this way, these new AI systems can not only assist but also feed creativity. However, this raises yet another question: at what point do I have to disclose that I used ChatGPT to create my work? This question has become ubiquitous in educational settings in the past few months.
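For readers curious what an “interactive writing partner” might look like in practice, here is a minimal sketch using the official openai Python library. The model name, prompts, and draft text are illustrative assumptions, not details from Gonsalves’s piece:

```python
# A minimal sketch of ChatGPT as a brainstorming partner, not a ghostwriter.
# Requires: pip install openai (and an OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Generative AI raises new questions about authorship and consent."

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": "You are a brainstorming partner. Suggest angles and "
                       "edits, but do not write the piece for the user.",
        },
        {"role": "user", "content": f"Suggest three ways to expand this idea: {draft}"},
    ],
)

print(response.choices[0].message.content)
```

Used this way, the model’s output is raw material rather than finished prose: the writer still decides what to keep, and whether their institution’s policies require disclosing the assistance.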
The fast development of generative AI systems has prompted debates across universities and workplaces about what policies are needed to govern whether and how creators must disclose their use of ChatGPT. What if AI is only used to brainstorm? Or to spark inspiration? At this point, given the many unanswered questions, it is probably safer and more ethical to communicate upfront if AI was used in any way in the content-creation process. Check with your university, ask your professor or employer, and keep up with their latest policies.
Whose data is used to train AI models? Did anyone give consent?
Not necessarily. We do not always know exactly where generative AI systems pull their data from. What we do know is that they are probably not (only) pulling from the public domain, nor are they giving credit to the creators they are learning from. Creators often lack the power to opt out of AI training or to receive credit for their work. Is there any way to protect creators? At this time, there are more questions than answers, which has raised concerns about the legality of how AI-generated content is produced and used.
The U.S. Copyright Office has initiated an examination of copyright and its application to AI. The goal is also to hear creators’ concerns and to develop protections and ways to register AI-generated content. As creators, we must think about the legal implications of using AI-generated content and keep up to date on copyright law. To protect your own work, consider taking small measures such as watermarking or meta-tagging your pieces.
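If you work with images, for instance, both measures can be applied with a few lines of Python. The following is a minimal sketch using the Pillow library; the filenames, author strings, and pixel coordinates are placeholders, and neither measure is foolproof (metadata can be stripped and watermarks cropped):

```python
# A minimal sketch of watermarking and meta-tagging a PNG with Pillow.
# Requires: pip install Pillow. Filenames and author strings are placeholders.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.open("my_artwork.png").convert("RGBA")

# Visible watermark: semi-transparent text near the lower-right corner.
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
draw.text(
    (img.width - 220, img.height - 30),  # rough placement; adjust to taste
    "© 2023 Your Name",
    fill=(255, 255, 255, 128),           # white at roughly 50% opacity
)
watermarked = Image.alpha_composite(img, overlay)

# Meta-tagging: embed authorship information in the PNG's text chunks.
meta = PngInfo()
meta.add_text("Author", "Your Name")
meta.add_text("Copyright", "All rights reserved.")
watermarked.save("my_artwork_tagged.png", pnginfo=meta)
```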
Is AI worth it?
One thing is clear: understanding these fast-evolving AI systems is a challenge. With the open AI movement, more AI models are becoming accessible to the average user, which makes the need to examine issues of authorship and ownership in programs such as ChatGPT all the more essential. It can be tempting to use ChatGPT to save time or generate ideas, and AI can be a great tool; however, the changing AI landscape requires constant reevaluation of its impact. Even though we have yet to understand ChatGPT completely, newer versions such as GPT-4 and GPT-5 are already being introduced. It often feels like we are racing to understand systems that are released faster than we can keep up with them. Legality. Ethics. Fairness. These are some of the concepts that need to be considered. For now, the line between ethical and unethical uses of generative AI systems is blurry. So if you are going to use ChatGPT to write that report, take time to consider the ethical and legal implications based on the most up-to-date information about generative AI, the impact on creators, and your own moral compass.