For the last few decades, we have defined science by the publication of peer-reviewed research. That is, a scientist is someone who can write a paper and have it reviewed by two to four other scientists who conclude that the work is credible.
Notice how this is a circular definition: it defines a scientist as, effectively, a member of a social club of people who agree on writing conventions.
Contrary to what many people believe, this system of peer review is recent. When Watson and Crick submitted their ‘double helix’ paper to Nature in 1953, it did not go to reviewers.
Peer review became the dominant model in the 1970s. It emerged around the same time the West adopted aspects of the Soviet model of science, and it coincided with a decade-long stagnation in science. I believe this shift was a mistake.
Sure, science is still making amazing progress. I celebrate the incredible scientific productivity of enterprises like Google DeepMind and OpenAI—enterprises that do not rely on conventional peer review.
This model of ‘peer-review science’ now faces an existential threat. In fact, I do not believe that the current dominant model of science, established in the early 1970s, can survive. Large Language Models can already produce papers that pass the peer-review test. In a sense, it is an even easier test to pass than the Turing test, because the conventions of peer-review science are so artificial.
Why would anyone be paid to write papers that are indistinguishable from what ChatGPT can produce in mere seconds?
So where do we go from here? There is only one option. We have to stop considering the research paper as the end product of research. It never was, in any case. That was always a convenient bureaucratic convention, inspired by the need to ‘manage science.’
If ChatGPT can produce research papers that are indistinguishable from what most scientists can write, then maybe scientists can focus on actually advancing science—something that ChatGPT has thus far proven unable to do.
« AI’s ability to generate vast amounts of text raises concerns about a potential flood of irrelevant theoretical papers, further straining the evaluation system. Stonebraker’s (2018) call for rewarding problem-solving over publication needs revisiting. Perhaps the emphasis should be on the impact and significance of research, not just its passage through peer review—a skill replicable by AI. »

Daniel Lemire. 2024. Will AI Flood Us with Irrelevant Papers? Commun. ACM 67, 9 (September 2024), 9. https://doi.org/10.1145/3673649
