Excerpted from the Art Practitioner’s Guide to Generative AI series by Natalie Diggins, Technologist and Founder of TheArts.ai.
When considering the use of generative AI, whether in the arts or any other field, there are a variety of ethical issues to consider, and they require careful examination and resolution. For example, there may be concerns over bias and fairness, which question the impartiality of AI. There may also be concerns regarding originality and plagiarism, raising doubts about the authenticity of AI-created works. There’s the issue of manipulation and misinformation, where the possibility of AI being used to distort reality or mislead people is a significant concern. Finally, transparency and disclosure are pivotal, necessitating clarity about the use of AI. [Update: see footnote.¹]

¹ These suggestions were drafted when generative AI first became available to the general public. Since then, citation practice has evolved: MLA Style, APA Style, Chicago Style, and many other academic standards have been updated to address textual generative AI contributions.

Outside of academia, a working consensus has emerged that generative AI is a tool, similar to Microsoft Word’s Editor feature, Adobe Photoshop, or spellchecking software. Just as those tools generally aren’t cited, neither is generative AI. That said, while generative AI may be used as a tool, the nature of its contributions to content creation might warrant different considerations than more straightforward productivity software.

The bottom line is that there is no single right answer. Citation practice is a work in progress, and you should use your own judgment based on content and context.

The critical and complex issues of creator permission, attribution, and compensation remain unsolved and appropriately controversial.
While ChatGPT’s response to the issue is neither comprehensive nor necessarily right for you individually, it can serve as a jumping-off point for further reflection and discussion.
Proofread by ChatGPT-4.