August 10, 2023, in technology

When is it OK to Use ChatGPT?

As the use of ChatGPT and other AI-driven text generators becomes increasingly widespread, is a consensus developing on what constitutes legitimate use?



The rapid adoption of large language models (LLMs) like ChatGPT has been accompanied by proportional concern over the accuracy and ethics of AI-generated content. The consensus seems to be that most people are fine with using ChatGPT to author routine text, but for more demanding or creative pieces things get murkier. Do readers have a right to know if an article was written by an LLM? Should authors be required to give credit to ChatGPT? Where is the line between a time-saving tool and authorial deceit? We explore the case for and against using LLMs to author text.

To use or not to use, to credit or not?

The New York Times column The Ethicist recently responded to a newly appointed English Department Chair at a small college who wrote in to ask about the legitimacy of using ChatGPT to write administrative documents like reports and proposals. “Is it OK to use ChatGPT to generate drafts of documents like these, which don’t make a claim to creative ‘genius’? Do I need to cite ChatGPT in these documents if I use it?” the ChatGPT-curious Chair asked.

The answer to the first question was yes, it is ethical to use LLMs to produce administrative documents because they usually “aren’t thought of as original compositions” — just so long as “you exercise proper vigilance and can stand by what you submit.” The answer to the second question was also yes but with an important caveat: “I don’t think you are obliged to cite ChatGPT any more than you are obliged to say you started with last year’s annual report as a model. Academic writing is different; there are many reasons to acknowledge sources in work that is meant to be original.”

What about academic papers?

In fact, academic publishing is an area in which authorial originality and the citing of sources is particularly critical. If an author uses an LLM to modify or correct the content of a paper, should the AI be credited?

Wageningen University and Research (WUR) notes that a few scientific journals have published papers co-authored by ChatGPT, but that policies regarding ChatGPT authorship vary from publication to publication. “If you're planning on using ChatGPT, we recommend checking the editorial policy of the journal in which you would like to publish,” WUR advises. “Most journals have decided that ChatGPT or other artificial intelligence language models do not meet the criteria to be cited as a co-author.”

Just a tool?

Academic Stephen B Heard, in a blog post entitled ChatGPT: author, acknowledgement, method, or tool?, defends the use of ChatGPT "... to correct grammatical errors, improve logical flow, ... or simplify complex and turgid sentences..." Heard argues that an LLM is none of the first three items in his title; it is, in fact, just a tool and requires no acknowledgement.

"It’s a tool just like a dictionary, a thesaurus, the grammar checker in Word. ... You’d never list the dictionary as a coauthor, acknowledge Word’s grammar checker ... – right? So why on earth would ChatGPT be treated differently?"

Creative support

Few would argue against this position as long as the AI model is being used to correct or improve existing content - but what about using its services as part of the author's creative process itself?

Tech news site Make Use Of looks at the role ChatGPT can play in creative writing projects, identifying areas where its assistance can be beneficial:

  • Brainstorming ideas
  • Plotting a story
  • Generating unique character names
  • Researching story elements
  • Finding comparable works

Not just a tool

While these uses fall short of getting ChatGPT to create the story text, we've clearly moved beyond "just a tool" usage into areas where important authorial choices have been delegated to the model. While the MUO article seems to approve these practices, many authors would probably consider them part of the creative process and see their delegation to an AI model as a dubious ceding of authorial control.

They also expose the user to the dangers that the MUO article goes on to identify as the downside of LLM usage: the risk of plagiarism, factual inaccuracy, and the dominance of over-familiar tropes and cliches. In short, all of the familiar limitations that dog over-ambitious use of LLMs. The article points out that existing literature forms part of ChatGPT's training data, and "As a result, if you ask the chatbot to write a scene for your story, it's likely to mimic passages from books that trained it."

In fact the article's recommendations draw the line at using ChatGPT to create original text: "As a creative writer, you must pay attention to what ChatGPT generates and never use its text in your stories without adapting it first."

What about in business?

In business contexts, where the bottom line takes priority, we might expect a more pragmatic approach to harnessing AI tools. And an article published by software developer Deskview is indeed liberal in its suggestions. In How to use ChatGPT at work (your boss won't mind) we find a list of 10 tips, ranging from Summarizing Reports to Planning your Day. And at number six the article unashamedly proposes Write and polish content.

But here, too, is a caveat: "...without the proper context and without being familiar with your tone of voice, it will struggle to write it exactly as you need it."

In fact, the article warns against using purely AI generated copy, recommending: "...either ask it to write a basic article and then polish it yourself, or write it yourself and then ask the AI to polish it."

"Everybody's cheating"

Perhaps the most laissez-faire approach to AI use, at first glance, is that of Ethan Mollick, an associate professor at the University of Pennsylvania's Wharton School of Business. According to an NPR interview, Mollick not only allows his students to use ChatGPT - he requires them to.

"The truth is, I probably couldn't have stopped them even if I didn't require it," Mollick said. But its use is subject to a clear policy: students will be responsible for any errors and omissions resulting from its use and they must always acknowledge when and how they have used it.

Mollick is cautiously optimistic about the possibilities of incorporating AI tools into the educational context:

"We taught people how to do math in a world with calculators," he said. "I don't think human nature changes as a result of ChatGPT. I think capability did."

The final verdict on AI authorship - for now

In spite of the short time that has elapsed since ChatGPT made its disruptive entrance into our offices and classrooms, perhaps we can already see some agreement about what's OK and not OK to use it for.

Few people seem to have problems with text polishing services of the kind a spell-checker or grammar assistant might provide. Routine and repetitive document generation is similarly OK. In addition, all seem to agree that such uses don't need to be acknowledged.

A grey area surrounds some of the creative writing 'crutches' mentioned in the MUO article above, like plot generation or character creation, where the writer seems to entrust the LLM with tasks that many would see as part of an author's unique imaginative effort. Almost all practitioners, however, are agreed that raw AI text should not be presented as original work or creative writing.

Ethical - or pragmatic?

Part of the thinking behind this is ethical - most people feel it is wrong to take credit for creative work done by someone (or something) else.

But part of it is also pragmatic. Current LLMs have their limitations, and people who pass their output off as their own work risk being branded as plagiarists, fantasists or bigots. Hence the warnings, on purely practical grounds, never to present unalloyed AI output as creative or original work.

But the models are getting better all the time. The dangers of disinformation, mimicry and mediocrity may well be reduced or eliminated entirely.

It will be interesting to see how this plays out. In the absence of negative consequences, will purely ethical considerations be sufficient to deter authors from placing their signatures below skillfully generated content from a future generation of LLMs?

Time will tell.


Find out more about Eidosmedia products and technology.