Peer Review and Biomedical Publication Conference Vancouver 2009 Short Report
Type: General Knowledge
Intention: For Publication
Read/Write Permissions: Open Access
31.05.2011 - Christopher Dyer - Sci-Mate
The 6th International Congress on Peer Review and Biomedical Publication continued a 20-year tradition of research into issues of authorship, citation, peer review, ethics, bias, trial registration, guidelines, and standards.
Steven Goodman opened the conference by suggesting that regardless of how persuasive data may be, results that are unexpected or cannot be sensibly explained should be thoroughly questioned.
The first results presented in this context safely confirmed what most already expected: the most prestigious authorship position is first author marked as corresponding author; last authors are valued more highly if corresponding; second authors were valued only if the first author was marked as corresponding; and middle authors were rarely assigned prestige or a clear role in the publication by other researchers. Unfortunately, no data were presented on the more contentious issue of shared first authorship.
Research also showed, not surprisingly, that reviewers can recognise scientific importance, i.e. the papers most likely to be cited, and that initially rejected papers went on to be cited less. Ratings for “originality”, “adequacy of methods”, and “brevity and clarity”, however, did not correlate with citation rates. Finally, citation rate was found to depend not on the journal but on the quality of the article.
Very few publications were found to contain sufficient information to repeat the experiments or analysis. This was despite a dramatic increase in the use of online supplements, which rose from 32% of articles to 64% over six years. A large majority of researchers surveyed reported an overwhelming willingness to share protocols, statistical code, and data, but only under conditions they could specify.
At a more basic level, papers were often found to be missing citations, eligibility criteria, limitations, and conflict of interest declarations. Only one third of potentially relevant papers were cited in clinical trial reports; and across a broader sample of papers, just over half of prior information was referenced adequately. Eligibility criteria failed to match the study protocol in 42% of cases studied. Limitations such as measurement errors and the selectivity of study populations were mentioned in 73% of papers studied. Sponsorship was acknowledged in 33 of 55 investigator sponsored studies (ISS); 13 of 55 randomized controlled trials (RCT) studied openly discussed the source of their bias, while in a different set of articles 40% at least mentioned a source of possible bias. Sources of funding were mentioned in only 1% of abstracts studied, and at a similarly low rate in editorials.
On research quality, surveys showed that only 6% of scientists experienced best practices in all of their trials, and (in separate research) that best practice was more likely in government funded than in industry funded research. The case study of gabapentin was presented twice during the conference, revealing how Parke-Davis/Pfizer successfully produced academically published content in line with its marketing strategy. Separate research found that industry sponsored research was more likely to be published if the results were favorable to the sponsor's product. Perhaps not surprisingly in such an environment, another report showed that 86% of researchers surveyed believed that a sponsored physician's conference presentation would be consistent with the marketing message of the study sponsor. In spite of this, around three quarters of researchers believed that this sort of commercial influence would not affect their judgement of content, leaving the remaining quarter of researchers 2.3% more likely to publish research acknowledging government rather than industry funding.
Research focused on high ranking journals showed that articles rejected by these journals, at least, were eventually published elsewhere. Papers rejected by the New England Journal of Medicine were published in 86%-90% of cases, taking on average 2 years (1995) and 1.5 years (2003). Randomized controlled trial reports rejected by the BMJ were published in 91% of cases after a median of 1.36 years; and 76% of all papers rejected by the BMJ, Lancet, and Annals of Internal Medicine during 2003-4 were published elsewhere. Factors increasing the likelihood of publication were larger sample sizes, statistical comparisons, and disclosure of funding source.
Guidelines such as the ICMJE recommendations and CONSORT have been introduced to raise the overall quality of research reporting. These instructions are passed on to authors by editors in around 55% and 70% of cases respectively (more often in higher impact journals), while 78% of journals insisted that authors disclose conflicts of interest. Such guidelines were criticized, however, for often lacking a clear explanation of why and how they were developed, and for lacking evaluation of their effectiveness. It was also reported that clinical trial reporting actually decreased following the introduction of the revised CONSORT guidelines in 2001, although by some measures the quality of reporting had increased. Honorary authorship, for example, has held stable at around 20% over the past 13 years, while ghost authorship has fallen from 11.5% to 7.8%.
To encourage better reporting of all results, clinical trial registration became an industry requirement for publication in September 2005. Four years later, research suggests an increasing level of compliance with this requirement, although independent research showed that only 23% of journals studied insisted upon trial registration. In practice, primary outcomes, eligibility criteria, intervention protocols, and sample size calculations were not always included in registers and were not always consistent with published results. Looking specifically at a group of 35 articles whose published results differed from the New Drug Application (NDA) trial reports, 7 were found to contain no sponsorship acknowledgement; 49% of statements made were not supported by statistical evidence; and the word “significant” was used without accompanying statistical analysis in 28% of cases.
The increasing importance of statistics was discussed in several reports of relevance to both publishers and authors. Multivariable risk-based analysis was explained as a way to deal better with large variations in baseline risk, as occur in many trial populations. To confront more wilful data distortions, such as the compiling of outcomes, Harold Sox explained how a statistical editor was introduced into the publication process at the Annals of Internal Medicine. JAMA, meanwhile, recently introduced a requirement for independent statistical analysis of all clinical trial reports submitted to the journal. This significantly decreased submissions to JAMA, while submissions to other journals increased. The effect on quality has yet to be assessed.
In quality control, it appears that peer-reviewers experience a measurable “decay of quality” over time, supporting previous research that showed a decline amongst older reviewers. Mentoring reviewers was shown to improve the quality of peer-review only transiently. Despite the problems with peer-review, however, it appears positively healthy when compared to grant-reviewing, where problems with recruitment, motivation and quality were recognised.
Publications based on 'negative results' were shown to score lower with reviewers, to be checked more thoroughly for errors, and to be more likely to be rejected.
In the final session of the last day, research was presented showing that academic authors responded to online criticism of their work in 45% of cases, and the expanding range of online features and their uptake by researchers was noted, at least in the US.
The 6th International Congress on Peer Review and Biomedical Publication concluded on a high note, with an invitation to return in four years' time to present further research into a system that seems to work well, but which itself lacks critical research (current results excepted, of course).
More detailed numbers, statistics, and other information are summarized in a longer review of the Peer Review and Biomedical Publication Congress Vancouver 2009, and abstracts may still be available at the conference website.