Biocurious is a weblog about biology, quantified.

Is a fraudulent paper worse than a worthless paper?

by PhilipJ on 16 February 2006

Lindomar B. de Carvalho, a condensed matter physicist at the Universidade de Brasília, writes in this week's Nature that while fraud (as in the Schön and, more recently, the Hwang scandals) is certainly bad, the entire publish-or-perish mentality leads to junk science.

He says,

Researchers are increasingly put under pressure to publish papers to further their career and access resources. But the fact that there are millions of pages published every month, only a few percent of which are worth reading, seems as much a fraud as the Hwang case. Are you wasting your time any more reading something fraudulent than reading something worthless? Neither helps the student or researcher wanting to do something concrete. It seems we have to read ten papers to get the one that really gives us something. The information is fragmented — distributed across hundreds of publications, around the world, many of them inaccessible.

How many papers do you actually read per month? How many journals do you even regularly leaf (or click) through? Hundreds of journals with thousands of articles are published every week, but the signal-to-noise ratio is horrible. A quick search for “physics” in the journal subject field at SFU’s library reveals almost 200 journals—and those are only the ones to which we have electronic subscriptions. As a scientist just starting out, I don’t even want to publish something unless it’s going to end up in a respectable journal where the chance of others reading it isn’t near zero. The sheer magnitude of unread papers is surely a bigger crime than the few high-profile fraudulent papers that plague our profession every now and again.

So how do we fix things? It has to start from within. Lindomar B. de Carvalho says,

I suggest slowing down the paper-publishing machine by limiting the number of journals that publish original research, asking more peer-reviewers to read preprints and opening up preprint manuscripts for public discussion.

I definitely agree with the first point—there are simply far too many journals. While the impact factor of a journal isn’t a perfect measure of the quality of the research published within it, editors of journals with absolutely deplorable IFs may want to reconsider their journal’s existence. I have to admit, however, that I don’t see how his latter two suggestions will slow the process down.

  1. Uncle Al    4567 days ago    #

    The original fractional string theories were vigorously rejected for publication. That is a systematic failure. One need only look up “Jack Sarfatti” in Google Groups to appreciate how well peer review works overall. Whacko Jacko doesn’t get published.

    Grant funding demands a PERT chart and a business plan. Schools demand grad students complete within seven years. Faculty are then strongly compelled to propose and publish the smallest, least controversial bits. They only succeed in the small by failing in the large.

    Creative young faculty is selectively starved while boring senior faculty are stuffed to repletion. It’s good business (zero risk for guaranteed results) and crappy science. Discovery cannot appear on a spreadsheet. Birth of the future is not amenable to caesarean section.

    The most exciting territory is where theory fails. Who would knowingly allocate resources for that trip?
