The Guardian recently ran an article about fraud in the sciences, noting the institutional pressures placed on researchers that play a part in motivating misconduct:

A recent paper in the journal Proceedings of the National Academy of Sciences shows that since 1973, nearly a thousand biomedical papers have been retracted because someone cheated the system. That’s a massive 67% of all biomedical retractions. And the situation is getting worse – last year, Nature reported that the rise in retraction rates has overtaken the rise in the number of papers being published.

This is happening because the entire way that we go about funding, researching and publishing science is flawed. As Chris Chambers and Petroc Sumner point out, the reasons are numerous and interconnecting:

• Pressure to publish in “high impact” journals, at all research career levels;
• Universities treat successful grant applications as outputs, upon which continued careers depend;
• Statistical analyses are hard, and sometimes researchers get it wrong;
• Journals favour positive results over null findings, even though null findings from a well conducted study are just as informative;
• The way journal articles are assessed is inconsistent and secretive, and allows statistical errors to creep through.

Perverse incentives in publishing are particularly sharp in the biomedical sciences: billions of dollars are at stake and the relationship between academy and industry is uneasy at best. But the basic problem is more widespread and more subtle, for many of these pressures apply across every academic discipline and shape the research output accordingly, even in philosophy. As one commenter on the widely-read Philosophy Smoker blog wrote, once out of grad school,

…one must research. Not that one should research, but one must research. Unlike grad school, research on the job is most certainly tied to your funding: publish or perish. This is true for all levels. “Teaching schools” may have lower publication requirements for tenure and promotion, but those requirements are still higher than what one found in grad school. In grad school, presenting at conferences and publishing articles was a job well done; in a TT position, they are how you survive reappointment. And given that many people want to publish their way into a better job, they are taking on R1 publishing expectations in addition to all the advising, service, and additional work on teaching expected of new hires.

Counting publications and ‘impact scores’ is an easy, ‘objective’ way to quantify decisions about hiring, tenure, and funding. Quite naturally, an industry has developed to cater to this necessity. Research is produced, sent for publication, citations are duly counted, and careers go on. Criticisms of the ‘industry’ itself abound: the unpaid labour of referees supports a for-profit publishing model that appropriates work made with public funds, places it behind paywalls, and claims the copyright. ‘Work-for-hire’ language is creeping into contracts. This goes not just for journal articles (which, apart from the obnoxious mercantilism, also play into informal capture by established networks of scholars), but for the writing of scholarly books and monographs too, a “morally dubious enterprise” that is too often taken to be a necessary part of evaluating scholarly output. The push for open-access journals (without paywalls), and the popularity of venues like philpapers, is a step in the right direction, but the collective-action dilemma remains: so long as the reward structure stays as it is, it is not reasonable to ask anyone to compromise their career on idealistic grounds.

All this is not to suggest that research is unimportant. Of course it is. Yet, to borrow a Marxist phrase, it is not too much to suggest that the ‘relations of production’ surrounding research have undue impact on the content itself. Philip Kitcher wrote a provocative piece in 2010, Philosophy Inside Out, arguing that

Any defense of the idea that philosophy, like particle physics and molecular biology, proceeds by the accumulation of reliable answers to technical questions would have to provide examples of consensus on which larger agreements are built. Yet, as the philosophical questions diminish in size, disagreement and controversy persist, new distinctions are drawn, and yet tinier issues are generated. Decomposition continues downwards, until the interested community becomes too exhausted, too small, or too tired to play the game any further. (Kitcher, 251).

Whatever one thinks of the issue of philosophy as ‘normal science’ (and there are clear problems with Kitcher’s argument; see this thread on Leiter, with Kitcher contributing, and my own critical note), one might fairly wonder whether this state of affairs has more to do with the conditions of production than with orthodox metaphysical assumptions: “tinier issues are generated” and “new distinctions are drawn” not because these are not legitimate questions (as Peter Ludlow suggested in the Leiter thread linked above, the question of who gets to determine legitimacy is, if anything, more vexing than legitimacy itself) but rather because these are the kinds of questions that lend themselves to publication.

I want to side-step the question of ‘legitimacy’ and ‘well-orderedness’ entirely and think, instead, in terms of selection pressure: to what extent have the forces of publishing (structured the way it is) and evaluation (structured the way it is) shaped the contemporary enterprise, choosing which projects and undertakings are worth doing, and making others more fraught and less likely to turn into a career? Regardless of one’s own metaphilosophical views, it is clear that a graduate student interested in Kitcher’s own pragmatic, Dewey-influenced view of philosophy as a species of education is less likely to have work published in top journals, and will certainly seem less appealing in hiring and tenure evaluations on the usual metrics. This is despite the evident historical fact that some of the most influential philosophers of the 20th century have shaped the discipline indirectly, by pedagogy and mentorship rather than ground-breaking publication (as interviews with today’s senior philosophers often reveal). Some schools, in some faculties, are open to weighing pedagogical effectiveness over research effectiveness (e.g., UVic in this document, p.7-8). But, of course, what to change, and how to do it without suffering the law of unintended consequences, is a vexing and difficult question. I do not pretend to have definite answers; I just wonder what we might be missing out on.

Nicholas McGinnis