The science of simulation
A chatbot powered by artificial intelligence (AI) has reportedly written research-paper abstracts convincing enough to fool some scientists. ChatGPT, released in November 2022, generates authentic-sounding text by drawing on massive amounts of pre-existing data. Its ability to produce writing realistic enough to pass as human has raised ethical questions.
An article on Nature's news website describes how ChatGPT was used to create 50 medical research abstracts, which were then shown to scientists who were asked to spot the fakes. The text was also run through a plagiarism detector and an AI-output checker.
All the abstracts passed the plagiarism check, and the AI-output checker flagged two thirds of them as machine-generated. The human researchers, however, correctly identified only 68% of the chatbot abstracts as fakes, and correctly recognised 86% of the genuine abstracts they were shown.
Sandra Wachter, who studies technology and regulation at the University of Oxford and was not involved in the study, told Nature: “I am very worried. If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics.”