Peer Review and Biomedical Publication Conference Vancouver 2009



Editor: Patricia Kritek
Branch: Sci-Wiki
Type: General Knowledge
Intention: For Publication
Read/Write Permissions: Open Access
Authors:
13.04.2010 - Christopher Dyer - Sci-Mate
21.09.2009 -  - BCBSA
21.09.2009 - Cynthia Lokker
20.09.2009 - Patricia Kritek
13.09.2009 - Isabelle Boutron


The 6th International Congress on Peer Review and Biomedical Publication was held from 10 to 12 September 2009 in Vancouver. Below is a draft report on the major topics presented and discussed over the three days. Authors, presenters, editors and writers are invited to help develop this Article as an open access resource for the research community. More details of the research are available in the abstracts on the conference web site.

For a quick summary, see Peer Review and Biomedical Publication Conference Vancouver 2009 Short Report.

Discussion was invited on any of the interesting topics raised during the meeting, such as ghost writing, but none of the presenters chose to comment.


First Day

Steven Goodman (replacing John P. A. Ioannidis) started the conference with an enjoyable and thought-provoking presentation on the limitations of statistics. Prior knowledge, Prof Goodman argued, should not be ignored; unexpected or new results should be neither dismissed nor trusted outright, but reported modestly and subjected to further experimental testing. He then presented several examples, including the claim that winning an Academy Award increases life expectancy, to demonstrate the importance of a sensible argument to support statistics. Numbers + Explanation = Knowledge.
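
A rough way to see Prof Goodman's point about prior knowledge is to apply Bayes' theorem to a "statistically significant" finding. The sketch below is a hypothetical illustration (the priors, power and alpha are assumed values, not figures from the talk): the same significant result gives strong support to a plausible hypothesis but only weak support to a highly unexpected one.

    # Hypothetical sketch of the Bayesian argument: how much a "significant"
    # result should be believed depends on the prior plausibility of the claim.
    # All numbers below are illustrative assumptions, not conference data.

    def posterior_probability(prior, power=0.8, alpha=0.05):
        """P(hypothesis is true | significant result), by Bayes' theorem."""
        true_positive = power * prior          # real effect, correctly detected
        false_positive = alpha * (1 - prior)   # no effect, but p < alpha by chance
        return true_positive / (true_positive + false_positive)

    for prior in (0.5, 0.1, 0.01):  # plausible, speculative, highly unexpected claim
        print(f"prior {prior:5.2f} -> posterior {posterior_probability(prior):.2f}")

    # prior  0.50 -> posterior 0.94
    # prior  0.10 -> posterior 0.64
    # prior  0.01 -> posterior 0.14

Under these assumptions, a nominally significant result for a highly unexpected claim is still more likely than not to be a false positive, which is consistent with the advice to report such findings modestly and test them further.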

Authorship and Contributorship

Jason Busse showed, perhaps not surprisingly, that a first author who is also marked as the corresponding author is considered by other researchers to hold the most prestigious position. Survey respondents indicated that they believed this author had contributed significantly to concept design and result analysis. In such papers, the second author position was also regarded highly. However, if the first author was not marked as the corresponding author, then perceived prestige was more evenly split between the first and last positions, and the second author position was perceived far less favourably. Finally, most respondents attributed no particular prestige to, and identified no clear contributions for, authors 3 and 4 (all examples used 5 authors).

Xiu-Yuan Hao first reported that rates of Chinese honorary (29%) and ghost (10%) authorship are similar to those reported in the US. Joe Wislar then reported honorary (19.3% to 20.6%) and ghost (11.5% to 7.8%) authorship rates between 1996 and 2009, a decline in ghost authorship.

(see discussion on Ghost Writers)

Jenny White presented confidential documents behind the marketing of gabapentin outside its initial FDA approval. Documents obtained through a legal process showed how Parke-Davis paid Medical Education Systems (MES) $160,000 to publish 24 scientific articles on gabapentin in peer-reviewed journals. In the end, 6 of the proposed articles were published in their targeted journal and 7 in alternative publications; 2 articles disclosed an honorarium from MES. Dr White proposed clearer standards, sanctions (retractions and bans), verifiable disclosures (e.g., dida.library.ucsf.edu), and calling on the expertise of reviewers and editors to maintain quality.

Peer Review

Michael Callaham spoke after the first break about how 93% of peer reviewers underwent a "decay of quality" over time, and that this decay would on average become noticeable after 12.5 years. Other research discussed in this context showed that quality was lower amongst older reviewers.

Debra Houry then explained that a mentoring system improved the output and quality of reviews, but that this effect diminished over time. During question time it was pointed out that a similar process of mentoring occurs informally between researchers.

Trish Groves briefly took the conference away from biomedical publication to consider difficulties in grant application peer review. Of the grant organisations interviewed, 54% reported frequent or very frequent refusals to carry out a review request, and said that it is becoming more difficult to recruit reviewers. Other common problems included late reports, administrative burden, difficulty finding new reviewers, and reviewers not following guidelines (36%, 21%, 14% and 14% respectively). Of the 28 organisations that responded, 12 revealed they had paid for grant review. Reviewers themselves felt poorly motivated, lacking preparation, instruction and feedback. She recommended support through the release of guidelines and instructions, and adherence to SPIRIT principles.

Data Sharing and Conflicts of Interest

Christine Laine reported that researchers appear to be willing to share sufficient knowledge to allow reproduction of their work, but only under certain conditions. 83% of survey respondents said that under the right conditions they would share the protocol, 70% would share statistical code and 61% would share data. However, without conditions, only 13% of authors would share a protocol, 3% would share statistical code, and 4% would share data. Consistent with this understanding, Annals of Internal Medicine now requires the listing of conditions for sharing, while Dr Laine suggested that data sharing build upon the Creative Commons Project and Open Source initiative.

An-Wen Chan revealed that only 6% of investigators surveyed had experienced best practices in all of their trials. Moreover, best practice was more likely in government-funded research than in industry-funded research. 37% of researchers reported having personally experienced or witnessed a financial conflict of interest, of which 70% related to industry-funded research.

John Ellison then spoke about investigator-sponsored studies (ISS), in which industry provides funding or material support with minimal technical input. In these trials, 31 of 55 publications acknowledged sponsorship, mostly where funding rather than materials was provided.

Isabel Rodriguez-Barraquer reported that 53% of abstracts submitted to the Association for Research in Vision and Ophthalmology (ARVO) between 2001 and 2003 were subsequently published. Research acknowledging government support had a slightly (2.3%) higher chance of publication than research acknowledging industry support. Amongst industry-supported research, the rate of publication was higher if the results were favorable to the sponsor.

Suzanne Lippert told the conference that 86% of survey respondents believed that the content of a physician's slides at a conference would be consistent with the marketing message of the study sponsor. Moreover, respondents believed that 79% of presenters would only be invited back to speak if their recommendations were consistent with the marketing message, and 87% believed that the content of slides and text of presentations would be provided by the sponsor. Finally, however, 64% of reviewers believed that their final recommendation would not be affected by these considerations.

Editorial Training, Decisions, Policies and Ethics

Victoria Wong presented results suggesting that editors have an extremely poor understanding of common ethical considerations. A questionnaire testing editors' knowledge of these issues returned the following average scores: plagiarism 17%, authorship 30%, conflicts of interest 15%, and peer review 16%.

Joerg Meerpohl then reported that ICMJE and CONSORT guideline documents are passed on to authors by editors in around 20% and 70% of cases (other guidelines less often). Of the journals examined, 78% insisted that authors disclose conflicts of interest, and 23% insisted upon trial registration. The higher the impact factor, the more likely the journal was to use guidelines and trial registration.

Ben Djulbegovic and Elizabeth Wager then showed how JAMA's requirement for independent statistical analysis had reduced the number of randomized clinical trials published by the journal (while numbers increased in other journals). Unfortunately, there was no data on any increase in quality that would have justified the action and costs of this initiative.

Liz Wager rounded up the first day by talking about the rapid growth in membership and activities of the Committee on Publication Ethics (COPE).

Second Day

Harold Sox opened the second day of the conference by stressing the importance of robust statistical analysis, in addition to novelty, validity and potential to change patient care, when considering publications. During his tenure at Annals of Internal Medicine, the role of a statistical editor was introduced into the publication process.

Publication Pathways

Michael Bretthauer then reported that 86%-90% of papers rejected by the New England Journal of Medicine between 1995 and 2003 were subsequently published in other journals. Moreover, in 1995 it took on average 2 years for most (over 50%) of them to be published, while in 2003 it took on average 1.5 years.

Douglas Altman reported that 91% of RCT papers submitted to BMJ between 1998 and 2002 were eventually published (83% in other journals). 23% of papers were published in BMJ, while those rejected were published in lower impact factor journals (except for 6 papers). The median time from submission to publication was 1.36 years, excluding unpublished trials.

Kirby Lee reported that 76% of papers rejected by BMJ, Lancet and Annals of Internal Medicine during 2003-04 were published elsewhere, mostly in specialty journals with lower impact factors. Factors increasing the likelihood of publication were sample size, statistical comparison, and disclosure of funding source.

Francine Kauffmann showed higher citation rates in papers that reviewers recognised as having high levels of “Scientific Importance”. Papers with poorly rated “adequacy of interpretation” had lower rates of citation. Other measures, “originality”, “adequacy of methods”, and “brevity and clarity” were found not to be indicative of citation. Interestingly, rejection was also shown to correlate negatively with citation.

Publication Bias

Seth Leopold examined positive-outcome bias in peer review by submitting 2 versions of a manuscript to peer review at the 2 highest-ranking orthopaedic journals. The manuscripts were identical except that the primary study endpoint was statistically significant in one version and showed no difference in the other; the methods sections were identical between versions; and 5 identical hidden errors were placed in each. At the Journal of Bone and Joint Surgery (JBJS), the no-difference manuscript was rejected over 20 times more frequently than the positive one (98% vs 71%, p<0.001), while at Clinical Orthopaedics and Related Research (CORR) no significant difference was found. At JBJS, but not CORR, the methods section of the no-difference manuscript was graded significantly lower and more of its hidden errors were recognised. This finding has implications for meta-analysis: if peer review is biased towards the acceptance of positive studies, then synthesis of the literature risks inflating the apparent benefits of new treatments. (Updated by Ch. Dyer on behalf of S. Leopold.) Supporting the basic premise of this work, research posted by Ziad Kanaan reported that negative and inconclusive articles were more often found in lower-impact-factor surgical journals (R2 = 0.986).

Peter Gotzsche showed examples of how combining outcomes into a composite can strengthen statistical results while obscuring the effect of individual components. In one case, a primary outcome with a p value of 0.425 became 0.043 when a composite outcome was measured instead. Composite evaluations were reported in 11 of 36 trials studied, primary measures in only 2 trials, and both in just 1 trial. Discussion indicated an awareness that composite outcomes are used to portray research results positively, and that guidelines in this area are currently lacking.
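
To illustrate the mechanism being described (not the actual trial data), the sketch below simulates a hypothetical two-arm trial in which none of three component outcomes reaches significance on its own, yet a composite of them does. The event counts are assumed for illustration, and the components are treated as non-overlapping so that composite counts are simple sums.

    # Minimal sketch, with assumed illustrative numbers, of how a composite
    # outcome can reach statistical significance while each component alone
    # does not. Assumes non-overlapping events so composite counts are sums.
    from scipy.stats import fisher_exact

    n = 1000  # patients per arm
    components = {               # events per arm: (treatment, control)
        "death":       (20, 30),
        "nonfatal MI": (15, 24),
        "stroke":      (10, 17),
    }

    def p_value(events_treatment, events_control):
        """Two-sided Fisher exact test on a 2x2 table of events vs non-events."""
        table = [[events_treatment, n - events_treatment],
                 [events_control, n - events_control]]
        return fisher_exact(table)[1]

    for name, (t, c) in components.items():
        print(f"{name:<12} p = {p_value(t, c):.3f}")   # each component: p > 0.05

    composite_t = sum(t for t, _ in components.values())
    composite_c = sum(c for _, c in components.values())
    print(f"{'composite':<12} p = {p_value(composite_t, composite_c):.3f}")  # p < 0.05

In this hypothetical example each component comparison falls short of significance, but pooling them yields a significant composite, which is why readers were urged to look for the component results as well.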

S. Swaroop Vedula showed how the reporting of research into gabapentin was managed by the marketing department of Parke-Davis/Pfizer as part of their off-label marketing strategy (60% of all medications are prescribed off-label). Marketing documents revealed the intentional suppression and manipulation of data to be consistent with, and support, the marketing objective. Of 12 published trials, efficacy data were not published in full in 11 cases; selective publication of primary outcomes occurred in 7 trials and of secondary outcomes in 11; selected populations were used in 5 trials; ghost authors were possibly used in 3; citation bias occurred in 5; publication was delayed in 5; and positive spin was found in 8 of the 12 trials. Interestingly, it was pointed out in discussion that gabapentin is actually a very useful drug with demonstrable off-label applications.

Rhetoric

Isabelle Boutron reported that, in a panel of 72 negative RCT reports indexed in PubMed in December 2006, "spin" could be identified in 18% of titles, 29% of abstracts, 43% of results sections and 50% of conclusions. She suggested further classification of spin and a need for more research in this field.

Lisa Bero looked at 35 articles whose published results differed from the corresponding New Drug Application (NDA) trial reports and found that 7 contained no sponsorship acknowledgement, 49% of statements made were not supported by statistical evidence, and the use of the word "significant" was not associated with statistical analysis in 28% of cases. Recommendations included the need for greater discussion and for linguistic analysis to be included in peer-review guidelines.

Eileen Gambrill proposed to the delegates a "Propaganda Index" for screening manuscripts and articles. Using such an index, she found "extensive" use of propaganda (according to Ellul's definition) in 78 of 110 opportunities across 5 papers.

Trial Registration

The following reports refer to clinical trial registration, which became a mandatory requirement in September 2005 following a decision by the International Committee of Medical Journal Editors (ICMJE) to only publish registered trials.

Deborah Zarin reported that primary outcomes recorded in clinical trial registries matched the primary outcomes reported in publications in 62 of 75 cases. It was, however, difficult to match some registered primary outcomes due to their sloppy or vague initial definition. Examples of changes included the changing of an endpoint (ENHANCE) and the publication of a registered secondary outcome as a primary outcome. She suggested that more details of the statistical analysis be part of trial registration, and that journals check registered outcomes as part of their editorial processes.

Ludovic Reveiz revealed that clinical trial registries collect eligibility criteria in 81% of cases; primary outcomes in 66%; secondary outcomes in 46%; follow-up duration in 62%; intervention descriptions in 53%; and sample size calculations in 1% of cases. Registries with specific fields (e.g., inclusion criteria) were more likely to collect specific and relevant information.

Mirjana Huic showed that of 102 registered trials that were published, only 24 were registered prior to the trial start date, and a number of trials had missing or erroneous data in the 20-item minimal dataset. The completeness of data increased over time, and 87 studies had at least 1 change to their registered information. Published data were found to differ from registered data in 23 to 78 cases, depending on the degree of difference considered.

Roberta Scherer reported that only 62% of RCT abstracts submitted to the Association for Research in Vision and Ophthalmology (ARVO) conference were registered in 2007. This rate increased to 67% in 2008 and 73% in 2009. She also found few major disagreements between registration information and submitted abstracts (e.g., inclusion and exclusion criteria, details of intervention, and primary outcome).

Final Day

The final day of the conference was opened by Drummond Rennie, who explained the origin and history of these meetings. He referred to a paper by Franz Ingelfinger calling for prospective research studies into a system of publication that seemed to work, but had in fact never been subjected to scientific analysis itself. This led to the first meeting in 1986, for which abstracts slowly appeared recognising many of the same key issues that remain the focus of research more than two decades later, including bias, ethics, and authorship. Three years after the congress, JAMA published a theme issue, "Guarding the Guardians", on research into the system of academic publication. This issue included Iain Chalmers' seminal paper on editorial freedom.

The next conference was not until 1993, followed by a second theme issue in 1994, including Kassirer and Campion's paper describing the existing system as crude and understudied, but indispensable. As well as pointing out the critical lack of studies, the paper recognised that the following key topics require rigorous analysis: scientific misconduct; financial conflicts of interest; blinding of trials; and authorship. With regard to bias, it was recognised that all people have unconscious biases, and that the goal of editors is to understand and identify them. Dr Rennie then referred to what he felt was solid research done by Swedish researchers into sexism and nepotism in the Swedish publication system (ref. needed).

In 1997, following the dissolution of the USSR, the next meeting was held in Prague, but unfortunately this event did not attract research from eastern European publishers. A Harvard study, nevertheless, first revealed what was to be the start of an exponential growth in the number of authorship disputes. On 13th September 2001, the congress was rocked by the events of two days earlier, although a final theme issue of research into publication was released following the meeting. By 2005, the research was considered strong enough to no longer require the affirmative action of a theme issue, but could compete with other research topics for publication. This was itself demonstrated by figures showing a consistent rise in research papers on publication and in citations of this research.

Quality of Reporting I

Sara Khangura's work revealed that only 11% of articles in high-impact journals studied described how representative the sample was; 40% discussed the extent to which the results can be generalized; 82% identified the mode of administration; and 35% made the questionnaire available with the publication.

Karen Robinson reported that only a third of relevant papers were cited in clinical trial reports, and that on average 56% of prior information was not referenced. Some discussion followed on considerations such as limitations on space within an article; the use of reviews and mini-reviews to avoid the need for comprehensive citation; and repetitive or defunct research papers requiring only the most recent or targeted citation.

Diane Civic then examined the nature of editorials and commentaries and found that about a third of the 55 studied differed from the original paper regarding implications for practice. 33 RCTs (80%) met at least one Cochrane criterion for potential risk of bias. Interestingly, 13 of the original manuscripts revealed the source of bias in their discussion, while only 5 editorials did the same. Issues of funding were ignored in all but 1 of the 22 editorials or commentaries studied, despite being mentioned in 40% of the original articles. Only 9% of editorials mentioned additional strengths of the study beyond those in the original article, while 22% brought up additional limitations.

Milo Puhan reported that 73% of papers mentioned limitations in the discussion section (a median of 3 per paper), while only 5% of abstracts contained a limitation. In almost all (95%) cases, the limitation was stated before the conclusion. Common limitations included measurement errors and selected study populations affecting internal or external validity.

Erik von Elm then explained that in 58% of the cases he studied, eligibility criteria (as defined in the study protocol) had been omitted or modified in subsequent journal publications. Most changes or omissions were considered major, and most suggested larger sample sizes.

David Schriger presented research suggesting that only a small proportion of the available experimental data is made available in most publications (mean for the best outcome 22%; median 6%; range 0.2-100%). Even within the same journal, articles vary widely (range 2-72%). Presenting data in figures was associated with a higher level of data presentation than tables, which in turn exceeded text.

Quality of Reporting II

Sally Hopewell then showed that the reporting of randomized controlled drug trials decreased between 2000 and 2006, the period following the release of the revised CONSORT statement in 2001. Average sample sizes during this period increased from 52 to 62, and more articles detailed the primary outcome, power calculation, random sequence generation, and allocation concealment. There was, however, no improvement in the provision of details of blinding, trial registration, or access to the trial protocol.

Sally Hopewell also reported that of 5 high-impact journals reviewed, 3 gave authors submission instructions referring to the CONSORT guidelines. Of the specific guidelines for abstracts (released January 2008), those met in most cases were: 'randomized' included in the title (76%); eligibility criteria (90%); description of interventions (77%); study objectives (97%); primary outcome (71%); results for each group with effect size (74%); and precision (79%). Those guidelines not met in most cases were: allocation concealment (5%); sequence generation (2%); specific details on who was blinded (4%); trial design (23%); funding source (1%); harms (42%); and the number of people randomized (48%) and analysed (32%) in each group. In the short time since the release of CONSORT for Abstracts, the reporting of blinding, participant numbers, and funding sources has been improving.

David Kent then reviewed the use of multivariable risk-based analysis, which he argued could substantially improve the reporting of clinical trials by assessing the heterogeneity of treatment effect (HTE). His group found tremendous variation in baseline risk within many trial populations, and showed how statistics based on average values can obscure both benefits and harms to many individuals. Risk-based sub-groups, in these sorts of cases, would be more informative, he argued. Although sub-group analysis was performed in 65% of studies (median 4 sub-groups), risk-based sub-group analysis was performed in only 8% of these trials (despite being eligible in 85% of cases).
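
As a hypothetical illustration of this argument (the risk groups, relative benefit and harm rate below are assumed, not figures from the talk), the sketch shows how a treatment with a constant relative benefit and a fixed absolute harm can be net harmful for low-risk patients and clearly beneficial for high-risk patients, even though the trial-wide average effect looks modestly positive.

    # Minimal sketch (assumed numbers) of heterogeneity of treatment effect:
    # a constant relative benefit plus a fixed absolute harm yields very
    # different net effects across baseline-risk sub-groups.
    relative_risk_reduction = 0.25   # treatment cuts the primary event risk by 25%
    treatment_harm = 0.02            # 2% absolute risk of a treatment complication

    # (label, share of the trial population, baseline risk of the primary event)
    risk_groups = [("low risk",    0.50, 0.04),
                   ("medium risk", 0.35, 0.12),
                   ("high risk",   0.15, 0.30)]

    average_benefit = 0.0
    for label, share, baseline in risk_groups:
        net = baseline * relative_risk_reduction - treatment_harm  # absolute net benefit
        average_benefit += share * net
        print(f"{label:<13} baseline {baseline:.0%}  net benefit {net:+.1%}")

    print(f"{'trial average':<13} net benefit {average_benefit:+.1%}")

    # Half of this hypothetical trial population is net harmed (low risk: -1.0%)
    # even though the trial-wide average net benefit is positive (+0.7%).

Under these assumed numbers, reporting only the average effect would hide the fact that the lowest-risk half of the population is harmed, which is the case for risk-based sub-group reporting that the talk made.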

David Moher reported that there is an increasing number of reporting guidelines covering the various experimental designs and data types. Few contain an explanation of how or why they were developed, and few describe whether they are being, or have been, evaluated.

Postpublication Citations, Indexing, Responses and Online Publishing

Cynthia Lokker found that articles from major journals selected for abstraction in secondary journals targeted at practicing clinicians (e.g., Evidence-Based Nursing) had citation rates almost double those of the originating journals overall, averaging 11.3 citations compared with 6.2. The impact factors of the secondary journals were calculated as ACPJC (39.5), EBM (30.2), and EBN (9.3), very respectable impact factors for their clinical category. The high values validate the careful selection of articles for inclusion in secondary journals; the effect of stimulating citations by re-publishing articles is unknown.

Prof. Martin Tramèr showed that US-based anesthesia journals are more likely to be indexed, and are indexed more rapidly, by MEDLINE than non-US journals. This was not found to be true for EMBASE indexation. The number of anesthesia journals grew from around 50 in 1960 to 260 in 2005, of which 22% were from the US and 78% from non-US countries. In 2009, MEDLINE comprised 42% US journals and 58% non-US journals (total 4,900), and EMBASE 29% US and 71% non-US (total 4,400).

Andreas Lundh reported that substantive online criticism was raised against 30% of the papers studied (in BMJ, over a two-year period), and that authors responded in 45% of cases. Interestingly, the degree of criticism (minor, moderate or major) did not influence the likelihood of a response. The contributors of the criticism judged the adequacy of authors' responses, on average, to be somewhere between partly addressed and not addressed, although editors judged it to be between partly addressed and fully addressed. Dr Lundh went on to recommend that journals encourage authors to respond; many editors in the audience identified notifications and the threat of publishing criticisms as motivating.

Trish Kritek described Clinical Decisions, a novel case-based clinical decision-making online feature in the New England Journal of Medicine (nejm.org). Clinical Decisions presents a clinical scenario for which there is no clear "best" management. Three different expert opinions are presented and viewers can vote on which they feel is best. Viewer response increased from 18.5% to 37.5% over the course of two years and seven cases. Of those who voted, 63% were from the US, compared with 18% from the EU (the next highest). There were participants from 136 countries in total, most of whom (85%) declared themselves to be physicians. Of those who voted, on average only 6% left a comment, resulting in an average of 373 comments per case.

David Schriger completed the conference by reporting that the use of online-only supplements increased from 32% to 64%, and other supplementary material from 5% to 30%, between 2003 and 2009. Some features, such as online post-publication review, actually decreased, from 12% to 9% between 2005 and 2007. Figures were the most common supplement, followed by tables, methods, video, audio, surveys, and data sets.




