AI by omission
Researchers seem to agree that the people who helped write a paper should have their names attached to it. The same can’t be said about artificial intelligence, as our Stephanie M. Lee reports.
Researchers are often flouting requirements to note their use of AI, suggests a preprint from Alex Glynn, a research literacy and communications instructor at the University of Louisville. Here’s what to know about his work, which has yet to be peer reviewed:
- The analysis examined 500 articles containing language deemed likely to have been generated by chatbots, but that don’t disclose any AI use.
- A fifth of those articles appeared in journals or at conferences where authors are required to acknowledge use of AI.
The questionable language can stand out like a sore thumb. Glynn searched for papers containing hallmarks of AI writing, such as first-person singular phrasing and disclaimers about being unable to review more recent sources. “New findings may have emerged since my last update in January 2022,” read one passage he flagged.
Featured Image: A graphic design of a robotic apocalypse with machines taking over the world. ID 316712551 © Vovaanty | Dreamstime.com
Neville Buch