Academia’s Case of Stockholm Syndrome
by Harry Crane and Ryan Martin
https://quillette.com/2018/11/29/academias-case-of-stockholm
Earlier this year, we launched Researchers.One, a scholarly publication platform open to all researchers in all fields of study. Founded on the principles of academic freedom, researcher autonomy, and scholarly quality, Researchers.One features an innovative author-driven peer review model, which ensures the quality of published work through a self-organized process of public and non-anonymous pre- and post-publication peer review. Believing firmly that researchers can and do uphold the principles of good scholarship on their own, Researchers.One has no editorial boards, gatekeepers, or other barriers to interfere with scholarly discourse.
In its first few months, Researchers.One has garnered an overwhelmingly positive reception, both for its emphasis on core principles and for its ability to attract high-quality publications from a wide range of disciplines, including mathematics, physics, philosophy, probability, and statistics. Despite this promise, many academics worry that leaving peer review up to authors will grind the academic juggernaut to a halt. With nothing to stop authors from recruiting their friends as peer reviewers or from publishing a bunch of nonsense just to pad their CVs, how should academic researchers be judged for hiring, tenure, or promotion? Without the signal of impact factor or journal prestige, how should readers assess the quality of published research? On their own, such questions are quite revealing of the predominant attitude toward academic publishing, which treats peer review as a means to an administrative end rather than an integral part of truth-seeking.
When done right, peer review is a rigorous process that fosters honest critique, lively discussion, and continual refinement of ideas for the mutual benefit of researchers and society. When done wrong, peer review plays to the worst instincts of human nature, devolving the pursuit of knowledge into a spectator sport in which the credibility of individual researchers, prestige of institutions, and legitimacy of scholarship as a whole are staked on the appearance of quality, objectivity, and novelty that the “peer review” label brings. As the above questions indicate, the prevailing mindset focuses on all that is wrong, and very little of what is right, with peer review.
A far cry from its ostensible scholarly mission, academia today resembles “a priesthood or a guild” or even a “cult,” with peer review serving an essential administrative need in a system of promotion, tenure, funding, and accolades designed to maintain the established order. Under this model, political posturing and bogus theorizing have become indistinguishable from scholarship in some fields. Identity politics and demagoguery have even infected mathematics, as seen in the recent saga of Ted Hill’s twice accepted, once published, and twice rescinded article on the Greater Male Variability Hypothesis. In health-related disciplines, with big money at stake, conflicts of interest dictate which results are reported, and what stays in the file drawer. In our own field, small cliques monopolize the flagship journals, as nearly half of all articles published in The Annals of Statistics are authored by a member of its own editorial board.
Even ignoring the subversiveness and pettiness that current norms enable, this system fails in its major promise to ensure the quality of published literature. It is widely believed that most published scientific findings are false: a recent analysis successfully replicated only 61 percent of social science findings published in Science and Nature, and an earlier study replicated just 39 percent of findings from top psychology journals. Scientists who were once celebrated have become maligned amidst allegations of fraud or shoddy scholarship. Meanwhile, thoughtful, potentially transformative ideas struggle to see the light of day, as chronicled in Francis Perey’s four-decade quest to publish foundational work on probability theory. On top of these well-publicized incidents, individual researchers privately trade stories about their own experiences with discrimination, incompetence, and obfuscation in their respective disciplines.
Paralyzed by a severe case of Stockholm syndrome, career academics persistently complain about these problems while simultaneously insisting that substantive reform to current practices “would do more harm than good.” Fittingly, recognizing the ills of this system and vowing to do something about them has itself become a signal of belonging, as leading new initiatives within the boundaries of the current paradigm is now a surefire way to gain recognition and advance one’s career. Such initiatives are also guaranteed to leave the status quo in place, modulo cosmetic changes. It is common to appoint committees, assemble task forces, propose best practices, host panel discussions, and raise awareness, all on the false premise that—with just a few tweaks—the very same administrative process founded on filtering, suppression, and signaling will reform itself into a bastion of freedom, autonomy, and truth. Notably absent are solutions that will elevate peer review above bureaucracy and administrivia, restore science to its scholarly, truth-seeking purpose, and entrust scholars with control over their own scholarship.
Standard bureaucratic remedies, which enact more draconian measures and impose greater oversight, only reinforce academia’s toxic “publish or perish” culture, embolden editorial boards, drive a wedge between researchers, and worsen the replication crisis. By “raising the bar” for publication, these new policies lend even more credibility to, and thus increase the impact of, the (inevitable) fraction of unreliable results that makes it through peer review’s filter. Even supporters of the current model, such as Aaron Carroll, recognize the folly of trusting in peer review: “Too often, we think that once a paper gets through peer review, it’s ‘truth.’ We’d do better to accept that everything, even published research, needs to be reconsidered as new evidence comes to light, and subjected to more thorough post-publication review.”
Science thrives by fostering what Feynman called a culture of doubt, not of consensus or signaling. In line with Feynman’s view, Researchers.One eliminates restrictions and removes barriers to publication, and with them also the credibility and prestige conferred by the “peer-reviewed” label. In turn, the platform offers a number of innovative features, with several more in development, to facilitate pre- and post-publication peer review aimed at improving quality and fostering discussion. Though most scientists and journal editors agree that replication studies and negative results are important to maintain checks and balances, most journals, concerned with their impact factor and prestige, are unwilling to publish them. Researchers.One, by contrast, welcomes and encourages positive or negative findings, original studies or replications, expository or research articles, and whatever else an author wishes to disseminate.
Those initiated into the conventional mindset may struggle to appreciate the virtues of autonomous peer review, wondering how making it easier to publish could possibly improve the state of affairs. Understand this: the point of Researchers.One isn’t to make publishing easy; it is to make publishing trivial, so that the worth of an idea is judged on its merits alone. Because publishing on Researchers.One comes with no stamp of approval, conferral of credibility, or associated prestige, readers should approach ideas published on the platform with a healthy dose of skepticism. Aware of the reader’s skepticism, authors have every reason to solicit rigorous and critical peer review feedback to ensure that their work stands up to scrutiny. The ease with which low-quality work could be published on Researchers.One thus fuels the natural skepticism necessary to keep readers alert and researchers in check, to mitigate the impact of errant findings, and to keep science on track, all without the interference of academic oligarchs. We call this process, by which quality improves through the elimination of existing quality-control mechanisms, scholarly mithridatism, after Mithridates VI, who is said to have purposely ingested small doses of poison to build immunity against attempts to poison him.
We don’t propose to cure academia’s Stockholm syndrome by changing its captor: the kind of reform we have in mind with Researchers.One is not one in which a small group of elites instantiates its own version of the current top-down system. We insist on a more organic and dynamic shift. Researchers.One is not intended as a replacement or an improvement to the current system. It is instead an alternative that offers all the scholarly benefits of peer review without the administrative overhead, and which empowers individual researchers to take control of their own destiny. As such, the success of this unconventional platform can’t be measured by conventional metrics such as impact factor, rank, or prestige. Rather, the success of Researchers.One lies in its very existence, as a forum open to anyone who yearns for true academic freedom, open access, viewpoint diversity, and something more fulfilling than the next rung on the academic ladder.
Harry Crane is Associate Professor of Statistics at Rutgers University. You can follow him on Twitter @HarryDCrane or visit his website for further information.
Ryan Martin is Associate Professor of Statistics at North Carolina State University. You can follow him on Twitter @statsmartin or visit his website for further information.