Shortly
after being awarded the Nobel Prize in Physics in 2013, Peter Higgs, of
Higgs boson fame, said he doubted he would have gotten a job, not to
mention tenure, in today’s academic system. The professor emeritus at
the University of Edinburgh said he simply wouldn’t have been
“productive” enough, with academe’s premium on publication metrics.
What’s more, said Higgs, working in today’s academic system probably
wouldn’t have afforded him the opportunity to identify how subatomic
particles acquire mass.
“It's difficult to imagine how I would ever have enough peace and
quiet in the present sort of climate to do what I did in 1964,” he told
The Guardian.
The statement resonated with many academic scientists running the
funding-collaboration-publication treadmill. But while the negative
consequences of the “publish or perish” paradigm, such as costs to
innovation and diminished attention to teaching and mentoring, are widely
acknowledged, there’s been scant data to back them up. So a new study
suggesting that publication pressures push scientists toward more
traditional, more easily published papers, at the expense of scientific
breakthroughs, stands out.
“Pursuing innovation is a gamble, without enough payoff, on average,
to justify the risk,” the study says. “Nevertheless, science benefits
when individuals overcome the dispositions that orient them toward
established islands of knowledge … in the expanding ocean of possible
topics.”
The study, called “Tradition and Innovation in Scientists’ Research Strategies,” appears in the current
issue of the American Sociological Review.
To begin, Jacob G. Foster, the study’s lead author and a professor of
sociology at the University of California at Los Angeles, and his co-authors created a
database of more than 6.4 million biomedical and chemistry publications
from 1934 to 2008.
They used chemical annotations from the National Library of Medicine
to build a computer-modeled network of knowledge, looking for
chemicals that were linked -- that is, that showed up in the same paper.
They then sorted those links into two broad categories: those that built
on past knowledge and those that were truly innovative, adding new
connections to the network.
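As a rough illustration -- and not the authors’ actual pipeline -- a minimal Python sketch of this kind of link classification might build a cumulative co-occurrence network and label each chemical pair as traditional or innovative. The toy paper records, the networkx graph and the simple two-way labeling below are assumptions made for clarity.

    # Illustrative sketch only: label each chemical pair in a paper as
    # "traditional" (the connection already exists in the knowledge network)
    # or "innovative" (the pair adds a new connection to the network).
    from itertools import combinations

    import networkx as nx

    papers = [  # hypothetical, simplified records of annotated publications
        {"year": 1990, "chemicals": {"A", "B"}},
        {"year": 1991, "chemicals": {"A", "B", "C"}},
    ]

    knowledge = nx.Graph()  # cumulative network of known chemical relationships
    link_labels = []        # (year, chemical pair, label)

    for paper in sorted(papers, key=lambda p: p["year"]):
        for u, v in combinations(sorted(paper["chemicals"]), 2):
            label = "traditional" if knowledge.has_edge(u, v) else "innovative"
            link_labels.append((paper["year"], (u, v), label))
            knowledge.add_edge(u, v)  # the pairing now counts as established

A real analysis of 6.4 million papers would demand far more careful handling of chemical identifiers and time ordering, but the basic bookkeeping is the same: a link is innovative only if it adds an edge the network has not seen before.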
The researchers looked at how many of each type of link appeared in a
given year and drew inferences about scientists’ disposition to pursue
tradition over innovation. Classifying the links this way also allowed
Foster and his team to categorize papers and to determine, via various
regression analyses, whether papers built on more innovative strategies
were more frequently cited.
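Purely for illustration, the citation comparison could be framed as a regression of each paper’s citation count on the share of its links that were innovative; the toy data, column names and choice of a Poisson model below are assumptions, not the authors’ actual specification.

    # Illustrative sketch only: relate citation counts to the fraction of a
    # paper's links that were innovative.
    import pandas as pd
    import statsmodels.formula.api as smf

    papers = pd.DataFrame({  # hypothetical per-paper summary data
        "citations": [2, 15, 0, 30, 5, 8, 1, 22],                      # times cited
        "innovative_share": [0.0, 0.5, 0.1, 0.7, 0.2, 0.3, 0.0, 0.6],  # new links / all links
    })

    # A Poisson model is a common choice for count outcomes such as citations;
    # a positive coefficient on innovative_share would indicate that papers
    # using more innovative links tend to attract more citations.
    model = smf.poisson("citations ~ innovative_share", data=papers).fit()
    print(model.summary())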
Finally, they built a database linking the winners of some 137 major
scholarly awards to their publications, in order to compare the mix of
links used by highly recognized scientists with the publication pool more
generally.
Essentially, Foster and his co-authors created a map of which
individual publications built on existing discoveries or created new
connections. Then they correlated each of the research strategies to two
different kinds of recognition -- citations and major awards.
Perhaps unsurprisingly, the work of prize-winning scientists involved
significantly more innovation than the overall pool. And more than 60
percent of publications generally had no new connections, building on
traditional research alone.
Foster and his co-authors, James Evans, an associate professor of
sociology at the University of Chicago, and Andrey Rzhetsky, a professor
of medicine and human genetics at Chicago, argue that researchers who
focus on answering established questions are more likely to see their
work published. But while researchers who pursue riskier academic work
may not be published as frequently, their work, when it is published,
receives more citations.
Foster said in an email interview that what makes his study “distinctive is the scale.”
“We were able to study this tension at scale because of several
intersecting trends: increasing availability of computer-readable
information about science and scientific publications, increasing
computer power, and the development of network-driven techniques for
representing and analyzing knowledge,” he said. “It is this last
development that allowed us to operationalize tradition and innovation
in a reasonable way for large-scale analysis.”
The authors recommend various ways that colleges and universities can
promote more innovation, such as not linking job security to
productivity as measured by easy metrics. Such a strategy, they say,
once proved successful at Bell Labs, where scientists could work on a
project for a year without being evaluated.
Other ideas include awarding research grants to researchers rather than
to specific research proposals, or tying funding to a proposal’s inherent
innovation.
Some universities have begun supporting riskier research goals in the
form of grand challenge-oriented research,
and the National Institutes of Health and various private organizations
have experimented with ways to support innovative research. But
publication pressures persist. Foster said he was nonetheless
“optimistic” about change.
Academe should resist “the temptation to outsource judgment of
quality to easily countable quantities,” he said. “Top universities
emphasize that they are not interested in counting publications or
citations -- that colleagues ‘read the work’ when evaluating a case.”
Scholars approaching any milestone, from searching for a first job to
going up for tenure, can feel “pulled toward something safe and
decipherable,” Foster said. “At least they'll have the publications,
right? And that's more what I hope we can keep in mind: the importance
of creating and protecting space (or rather, time) to take real risks.
That's what tenure is supposed to do, which is one of the reasons that
attacks on the tenure system are so worrisome. It's a shortsighted and
ultimately counterproductive trade-off.”
Of course, sticking with more traditional research sometimes has its value -- as
suggested by a recent, massive study indicating that most psychology study results cannot be successfully replicated.
The lead author of that study, Brian Nosek, a professor of psychology
at the University of Virginia, said he wasn’t familiar enough with
Foster’s paper to critique its methodology, but said it sounded
“intriguing.” In any case, he said, “innovation and accumulation are not
mutually exclusive.”
“Innovation occurs when expectations are violated,” Nosek said.
“Replication is actually a great way to spur innovation because, when
replications are successful, they increase confidence and
generalizability of existing claims, and when they are not successful,
they spur innovation to try to understand why different results were
observed.”
Foster said he agreed that building on existing knowledge was
essential to science, but that he was interested in how much
innovation should be mixed in -- what he called a "division of labor."
"Too much innovation, and science would be incoherent," he said. "Too much tradition, and it would slow to a crawl."