Jack Grove, Academics despair as ChatGPT-written essays swamp marking season, The Times Higher Education Supplement

June 17, 2024

Another reason to employ or contract me, a human being.

Academics despair as ChatGPT-written essays swamp marking season

‘It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by rise of bland scripts

Image: James Hinchcliffe holds a poop emoji while filming on a screen. Source: Richard Rodriguez/Getty Images for Texas Motor Speedway

The increased prevalence of students using ChatGPT to write essays should prompt a rethink about whether current policies encouraging “ethical” use of artificial intelligence are working, scholars have argued.

With marking season in full flow, lecturers have taken to social media in large numbers to complain about AI-generated content found in submitted work.

Tell-tale signs of ChatGPT use, according to academics, include otherwise little-used words such as “delve” and “multifaceted”, key themes summarised in bullet points, and a jarringly conversational style featuring phrases such as “let’s explore this theme”.

In a more obvious giveaway, one professor said an advert for an AI essay company was buried in a paper’s introduction; another academic noted how a student had forgotten to remove a chatbot statement that the content was AI-generated.

“I had no idea how many would resort to it,” admitted one UK law professor.

Des Fitzgerald, professor of medical humanities and social sciences at University College Cork, told Times Higher Education that student use of AI had “gone totally mainstream” this year.

“Across a batch of essays, you do start to notice the tics of ChatGPT essays, which is partly about repetition of certain words or phrases, but is also just a kind of aura of machinic blandness that’s hard to describe to someone who hasn’t encountered it – an essay with no edges, that does nothing technically wrong or bad, but not much right or good either,” said Professor Fitzgerald.

Since ChatGPT’s emergence in late 2022, some universities have adopted policies to allow the use of AI as long as it is acknowledged, while others have begun using AI content detectors, though opinion is divided on their effectiveness.

According to the latest Student Academic Experience Survey, for which Advance HE and the Higher Education Policy Institute polled around 10,000 UK undergraduates, 61 per cent use AI at least a little each month, “in a way allowed by their institution”, while 31 per cent do so every week.

Professor Fitzgerald said that although some colleagues “think we just need to live with this, even that we have a duty to teach students to use it well”, he was “totally against” the use of AI tools for essays.

“ChatGPT is completely antithetical to everything I think I’m doing as a teacher – working with students to engage with texts, thinking through ideas, learning to clarify and express complex thoughts, taking some risks with those thoughts, locating some kind of distinctive inner voice. ChatGPT is total poison for all of this and we need to simply ban it,” he said.

Steve Fuller, professor of sociology at the University of Warwick, agreed that AI use had “become more noticeable” this year despite his students signing contracts saying they would not use it to write essays.

He said he was not opposed to students using it “as long as what they produce sounds smart and on point, and the marker can’t recognise it as simply having been lifted from another source wholesale”.

Those who leaned heavily on the technology should expect a relatively low mark, even though they might pass, said Professor Fuller.

“Students routinely commit errors of fact, reasoning and grammar [without ChatGPT], yet if their text touches enough bases with the assignment they’re likely to get somewhere in the low- to mid-60s. ChatGPT does a credible job at simulating such mediocrity, and that’s good enough for many of its student users,” he said.

Having to mark such mediocre essays partly generated by AI is, however, a growing complaint among academics. On X, Lancaster University economist Renaud Foucart said marking AI-generated essays “takes much more time to assess [because] I need to concentrate much more to cut through the amount of seemingly logical statements that are actually full of emptiness”.

“My biggest issue [with AI] is less the moral issue about cheating but more what ChatGPT offers students,” Professor Fitzgerald added. “All it is capable of is [writing] bad essays made up of non-ideas and empty sentences. It’s not a machine for cheating; it’s a machine for producing crap.”

Neville Buch (pronounced “Book”), Ph.D., is a certified member of the Professional Historians Association (Queensland). Since 2010 he has operated a sole-trade business in history consultancy. He was a Q ANZAC 100 Fellow 2014-2015 at the State Library of Queensland. Dr Buch was the editor of the PHA (Qld) e-Bulletin, the state association’s monthly electronic publication, and was a member of its Management Committee. He is the Managing Director of the Brisbane Southside History Network.