This is a news/discussion piece in Nature about the influence of ChatGPT on academia. Some quotes below:
In the two years since ChatGPT was released to the public, researchers have been using it to polish their academic writing, review the scientific literature and write code to analyse data. Although some think that the chatbot, which debuted widely on 30 November 2022, is making scientists more productive, others worry that it is facilitating plagiarism, introducing inaccuracies into research articles and gobbling up large amounts of energy.
60,000: the minimum number of scholarly papers published in 2023 that are estimated to have been written with the assistance of a large language model (LLM). This is slightly more than 1% of all articles in the Dimensions database of academic publications surveyed by the research team.
10%: the minimum percentage of research papers published by members of the biomedical science community in the first half of 2024 estimated to have had their abstracts written with the help of an LLM. Another study estimated the percentage to be higher — 17.5% — for the computer science community in February.
6.5–16.9%: the percentage of peer reviews submitted to a selection of top AI conferences in 2023 and 2024 that are estimated to have been substantially generated by LLMs. These reviews assess research papers or presentations proposed for the meetings.
One big question that researchers have been pursuing in the past year is whether ChatGPT can go beyond the role of a virtual assistant and become an AI scientist.
Also see FutureHouse's PaperQA2, this work from Lu et al., and this work from Boiko et al.; all are attempts at creating an "AI scientist".