A New Study Claims That AI Does Not Pose An Existential Threat To Humanity
ChatGPT and other large language models (LLMs) cannot learn independently or acquire new skills on their own, and therefore pose no existential threat to humanity, according to new research from the University of Bath and the Technical University of Darmstadt, Germany.
The study, published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), found that while LLMs can follow instructions and display linguistic proficiency, they cannot master new skills without explicit training. As a result, these models remain inherently controllable, predictable, and safe.
The press release accompanying the study was published on the EurekAlert! website.
The research team found that, despite being trained on ever-larger datasets, LLMs can be deployed safely. The technology can still be misused, however.
“The prevailing narrative that this type of AI is a threat to humanity prevents widespread adoption and development of these technologies, as well as diverting attention away from the genuine issues that require our attention,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study.
Led by Professor Iryna Gurevych of the Technical University of Darmstadt, the team ran experiments testing LLMs’ ability to complete tasks the models had never encountered before, so-called “emergent abilities.”
While LLMs can answer questions about social situations without ever having been explicitly trained to do so, the researchers found that this is because the models rely on “in-context learning” (ICL): completing a task based on a few worked examples supplied in the prompt.
Dr. Tayyar Madabushi stated, “The fear has been that as models grow in size, they may solve new problems in unpredictable ways, potentially acquiring hazardous abilities such as reasoning and planning. Our research shows that this concern is unfounded.”
The study’s findings challenge concerns raised by leading AI experts worldwide about the potential existential threat posed by LLMs. However, the research team stresses the need to tackle existing hazards, such as the spread of fake news and an increased risk of fraud.
Professor Gurevych said, “Our findings do not imply that AI is not a threat at all. Rather, we demonstrate that the alleged emergence of complex cognitive skills associated with specific dangers is not supported by evidence. Future studies should therefore focus on other risks posed by these models.”