Reducing misinformation by fostering honest, useful, and credible information regarding manual therapies


Pros and Cons of Paying Peer Reviewers

By: Juliana Ancalmo, Chad E Cook PT, PhD, FAPTA, Ciara Roche

Background

Critical appraisal is a hallmark of peer-reviewed publishing. It provides an analytical evaluation of whether a study’s results can be believed and whether they can be transferred appropriately to other environments for use in policy, education, or clinical practice [1]. Historically, critical appraisal has been performed by peer reviewers who are content experts, research experts, or both. Peer reviewers have viewed this work as an obligation to science, especially those who benefit from peer review as authors, and they are not currently paid for this service.

Recent limitations raised by qualified peer reviewers have ignited discussion around paying for reviewing services. Although the topic has been highly debated before, a new wave of conversation began when researcher and Chief Scientific Officer James Heathers argued for a $450 peer-review fee in an editorial published on Medium [2]. This, coupled with the challenges many researchers have faced since COVID, has spurred people on both sides of the argument to speak out. In this blog we outline the pros and cons of this debate and discuss the complexity of the issue at hand.

Pros of Paying Peer Reviewers

We propose several benefits of paying peer reviewers for their critical appraisals. Since the COVID-19 pandemic, academic journals have seen a notable decline in the rate at which invited reviewers accept assignments, combined with an increase in submissions, creating a large imbalance within the peer review process [3]. Compensation could improve reviewer buy-in and reduce this imbalance [2]. Interestingly, it may also increase the diversity of peer reviewers. Peer reviewers often reflect those who populate their field of study, which in many fields is dominated by men. Theoretically, paying for peer review may improve the representation of women and of reviewers from lower-income countries, especially if recruitment targets these groups [4].

Beyond the lack of reviewer diversity, publishing companies are generating record profits, and compensating reviewers may reduce the associated negative optics. For example, arguably the biggest academic publishing company in the world, Elsevier, generates $3.35 billion in revenue with a profit margin of around 40% [5]. Among the five major publishing companies (Elsevier, John Wiley & Sons, Taylor & Francis, Springer Nature, and SAGE), which together control 50% of the global revenue of the academic publishing industry, a solution could surely be drawn up to financially compensate underpaid and overworked reviewers [5]. Quite frankly, asking someone to do a lot of work for free is a tough sell during times of record profits. Finally, we believe reviewers simply deserve to be paid. Good reviewers spend a lot of time peer reviewing papers, a process that improves the final manuscript and strengthens the science. Experts deserve to be compensated, and asking people to work for free is an archaic and offensive stance.

Cons of Paying Peer Reviewers

There are also several arguments against paying peer reviewers. One often cited is that compensation may lead to unethical reviews. It is not a stretch to imagine reviewers taking advantage of a monetary system for their own financial benefit; this could degrade the quality of submitted reviews as reviewers churn out as many as possible for an easy cash grab. This leads to another concern: there is currently no threshold for what constitutes a “good review.” It can now take several months to receive feedback on a paper, only to get a couple of lines from a reviewer and a rejection from the editor. Does that two-line review deserve the same compensation as a review from someone who spent hours reading the manuscript and giving critical feedback?

It is clear there would need to be notable training and standardization before a submitted review could qualify for compensation; however, this process would further limit who could submit a review and may cause further delays. Additionally, processing the payments would likely be a disaster at first. Considering it can sometimes take journals a year to return reviews, it is not unreasonable to expect that a payment system for peer reviewers would result in lost, incorrect, or delayed payments.

Finally, it is uncertain whether journals or the industry could even afford to pay reviewers in the first place. Publishing consultant Tim Vines calculated that, at an average of 2.2 reviews per article and the $450 fee proposed by Heathers, each reviewed article would cost $990 [6]. For a journal with a 25% acceptance rate, the reviewing cost per accepted paper would then be $3,960 [6]. This additional cost would almost double research journals’ expenditures, which may lead journals to increase article-processing charges and subscription fees to cover the difference.
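To make Vines’s arithmetic explicit, here is a minimal sketch in Python; the $450 fee, 2.2 reviews per article, and 25% acceptance rate are the figures cited above, and everything else is illustrative:

# Back-of-envelope cost of paid peer review, using the figures cited above.
FEE_PER_REVIEW = 450        # USD; the fee proposed by Heathers
REVIEWS_PER_ARTICLE = 2.2   # average reviews per submission, per Vines
ACCEPTANCE_RATE = 0.25      # hypothetical journal acceptance rate

cost_per_submission = FEE_PER_REVIEW * REVIEWS_PER_ARTICLE    # $990
# Rejected papers are reviewed too, so their review costs are
# ultimately carried by the papers that are accepted.
cost_per_accepted = cost_per_submission / ACCEPTANCE_RATE     # $3,960

print(f"Per reviewed submission: ${cost_per_submission:,.0f}")
print(f"Per accepted paper:      ${cost_per_accepted:,.0f}")

The division by the acceptance rate is the step that makes paid review look affordable per submission but expensive per published paper.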

Our Thoughts

In theory, we support payment for peer review. However, the traditional practice of peer review may be resistant to change due to system inertia, the resistance of an organization to change despite its necessity [7]. We support several additional steps before a model flip can occur, all aimed at reducing unnecessary burden on reviewers: 1) triaging papers that have fatal flaws and no chance of acceptance; 2) avoiding requests that are outside the reviewer’s scope; 3) avoiding multiple requests at one time; and 4) setting realistic review turnarounds. Simply put, there are too many submissions: a focus on quantity over quality. Predatory journals, and journals that support weak science, are interested only in publishing (and article processing fees) and little in science. We acknowledge the complexity of institutional reform in the presence of system inertia. Once these elements are sorted, we can return to the discussion of paying for peer review.

References

  1. Katrak P, Bialocerkowski AE, Massy-Westropp N, Kumar VS, Grimmer KA. A systematic review of the content of critical appraisal tools. BMC Med Res Methodol. 2004;4:22.
  2. Heathers J. The 450 Movement. Medium. Available at: https://jamesheathers.medium.com/the-450-movement-1f86132a29bd
  3. Künzli N, Berger A, Czabanowska K, Lucas R, Madarasova Geckova A, Mantwill S, von dem Knesebeck O. “I Do Not Have Time” - Is This the End of Peer Review in Public Health Sciences? Public Health Reviews. 2022;43. https://doi.org/10.3389/phrs.2022.1605407
  4. Cheah PY, Piasecki J. Should peer reviewers be paid to review academic papers? Lancet. 2022;399(10335):1601.
  5. Curcic D. Academic Publishers Statistics. WordsRated. Available at: https://wordsrated.com/academic-publishers-statistics/
  6. Brainard J. The $450 question: Should journals pay peer reviewers? Science. Available at: https://www.science.org/content/article/450-question-should-journals-pay-peer-reviewers
  7. Coiera E. Why system inertia makes health reform so difficult. BMJ. 2011;342:d3693.

Yes, Peer Review is Broken, but It’s Probably Worse than You Think

By: Chad E. Cook PT, PhD, FAPTA

We have problems: There are countless publications, editorials, and blogs indicating that we have a notable problem with the peer review system used in scientific publishing [1-4]. Concerns include its inconsistency, its slow process, and the biases of reviewers (especially reviewer two) who have an axe to grind. These limitations, and the knowledge that publishing companies are making record profit margins [5] off the free labor of reviewers while authors are required to pay to publish, are especially difficult to stomach. This problem has been ongoing for some time, but in my opinion it has worsened recently. Having been immersed in publishing for over 25 years as an author, and over 20 years as an editor-in-chief or associate editor for four journals, I’d like to outline the concerns behind the claim in my title that it’s “probably worse than you think”.

Journals are overwhelmed and, subsequently, unresponsive: The last three papers I submitted to peer-reviewed journals took 11 months, 10 months, and 6 months to receive the first set of reviewers’ comments. For those who are not familiar with peer-reviewed publishing, this is a very long time to wait for a first set of reviews. More than 6 months ago, we withdrew the paper that took 11 months (because we were tired of the journal’s lack of responsiveness) and informed the editor-in-chief that we had removed it from the review process; they kept it within their system anyway and eventually provided the reviews (11 months later), by which time it had already been accepted in a different journal. The editor-in-chief handling the paper that took 6 months informed us that they had reached out to 60 reviewers in order to receive two reviewers’ comments; they eventually used the names of reviewers that we recommended. Two of the three examples were review articles, and the editors had the audacity to recommend an updated search!

Quality has been sacrificed for quantity: It is estimated that there are 30,000 medical journals published around the world [6]. In 2016, about 1.92 million papers were indexed by the Scopus and Web of Science publication databases; in 2022, that number jumped to 2.82 million [7]. PubMed alone indexes approximately two new papers every minute [8]. Subsequently, it is no secret that quantity has replaced quality. This is especially prevalent in open access journals, in which revenue depends on article processing charges (APCs) and volume. The average APC has been reported at $1,626 USD [9]. Whereas this may not seem unreasonable, some journals charge over $11,000 USD (Nature Neuroscience [10]), and others (PLOS One [11]) have published over 30,000 papers in a given year. I think we would be hard-pressed to argue that enough useful science is being created to demand 2.82 million unique papers.
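As a rough sanity check on these volumes, here is a short sketch converting annual totals into per-minute rates; the 2.82 million figure is from [7], and the roughly one million PubMed additions per year underlying the two-papers-per-minute figure is an assumption consistent with [8]:

# Papers per minute implied by annual indexing totals.
MINUTES_PER_YEAR = 365 * 24 * 60       # 525,600 minutes

scopus_wos_2022 = 2_820_000            # papers indexed in 2022 [7]
pubmed_annual = 1_000_000              # approximate PubMed additions/year [8]

print(scopus_wos_2022 / MINUTES_PER_YEAR)  # ~5.4 papers/minute (Scopus/WoS)
print(pubmed_annual / MINUTES_PER_YEAR)    # ~1.9 papers/minute (PubMed alone)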

Reviewers are overwhelmed and abused: I feel it is my responsibility to review for journals, since I’m a user of the peer review system, and I do so without compensation. It generally takes me an hour to do a meaningful and respectful review; it takes longer if I need to check the trial registration, review attached appendices, or read some of the more important references. Although I serve as an associate editor for a journal, I try to limit my reviews to two manuscripts a week. Apparently, this isn’t enough. From March 1st through March 31st of 2024, I was asked to review 67 papers for scientific journals. That’s an average of nearly 2.2 requests per day, including non-business days. Interestingly, one journal in particular, in which I had just published a paper (after 10 months of waiting for the first review), requested my review services 13 times. I averaged more than four requests a week from this journal until I finally stopped responding. It is important to recognize that reviewers are overwhelmed and should be compensated for their work. Those who agree to review understand the sarcastic phrase “no good deed goes unpunished”.

Editors are often underpaid, overworked, and pressured to publish: A 2020 survey found that more than one third of editors surveyed from core clinical journals did not receive compensation for their editorial roles [12]. As an editor-in-chief from 2006 through 2012, I contributed over 20 hours a week to the journal and did receive a small stipend for my efforts; I calculated an average hourly wage of a little over three dollars. Further, previous work has exposed the pressure editors face to publish [13], especially those who run open access journals, in which payment is required to publish within the journal. This leads to the acceptance of inferior work and a flood of review requests for papers that should likely have been triaged by the editor.

Fake journals are numerous and increasingly difficult to discriminate: Predatory journals are open-access publishers that actively solicit and publish articles for a fee, with little or no real peer review [14]. I’ve written about these before and even wrote a fake paper (with Josh Cleland and Paul Mintken) about a dead person being brought back to life with spinal manipulation, to show how these journals will accept anything [15]. Some estimates suggest there are 15,000 predatory journals in existence [16]. A popular publishing company, MDPI, has recently been placed on Predatory-Reports.com’s predatory publishing list because of concerning behaviors in its peer-review process [17]. It is worth noting that borderline predatory behaviors have made distinguishing predatory journals more difficult, as the competition to secure submissions has ramped up alongside the number of newly created journals. Publishing low-quality or questionable work has also undermined the promotion and tenure process in academic settings, as appointment, promotion and tenure (APT) committee members are often asked to review portfolios of individuals outside their professional field.

Retraction rates are on the rise: A retraction occurs when a previously published paper in an academic journal is flagged as so seriously flawed that its results and/or conclusions are no longer valid. Retractions occur because of plagiarism, data manipulation, and conflicts of interest [18], and overall they are not very common: roughly 2.5 of every 10,000 papers are retracted. Journals self-govern (with external assistance) and often identify flawed work and retract the papers; as such, most retractions occur in higher-level journals. To date, data simply do not exist to estimate how many flawed papers reside in journals with no real peer review (predatory journals) and in journals that are not predatory but have questionable behaviors.

This sounds awful, so what should we do: I do realize this blog is negative, but it’s important to understand the context around peer review, especially if you have not had the opportunity to publish, review, or edit within the peer review system. There are strategies that may help you navigate these challenges. First, I’d recommend reading work from reputable journals that are affiliated with reputable societies (e.g., JOSPT, Physical Therapy, Journal of Physiotherapy). Second, I think it is healthy and reasonable to question results that are notably different from known information, results obtained by a group with a vested interest in the outcome of the study, and results that are substantially better than the comparison group, because that’s just not very common or likely. Third, it is appropriate to support the current momentum toward paying reviewers for their efforts, as long as their work is of high quality. Fourth, it is good when editors triage papers that are unlikely to be published (or that shouldn’t be published), as this reduces the burden on peer review. Lastly, it’s important to recognize that someone has to pay for open access journals; typically, it is the author.

References

  1. Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99(4):178-82.
  2. Flaherty C. The Peer-Review Crisis. Inside Higher Ed. Available at: https://www.insidehighered.com/news/2022/06/13/peer-review-crisis-creates-problems-journals-and-scholars
  3. Malcom D. It’s Time We Fix the Peer Review System. Am J Pharm Educ. 2018;82(5):7144.
  4. Subbaraman N. What’s wrong with peer review. Wall Street Journal. Available at: https://www.wsj.com/science/whats-wrong-with-peer-review-e5d2d428
  5. Ansede M. Scientists paid large publishers over $1 billion in four years to have their studies published with open access. El País. Available at: https://english.elpais.com/science-tech/2023-11-21/scientists-paid-large-publishers-over-1-billion-in-four-years-to-have-their-studies-published-with-open-access.html
  6. Gower T. What Are Medical Journals? WebMD. Available at: https://www.webmd.com/a-to-z-guides/medical-journals
  7. (no author) Scientists are publishing too many papers—and that’s bad for science. ScienceAdviser. Available at: https://www.science.org/content/article/scienceadviser-scientists-are-publishing-too-many-papers-and-s-bad-science#:~:text=In%20recent%20years%2C%20the%20number,had%20jumped%20to%202.82%20million.
  8. Landhuis E. Scientific literature: Information overload. Nature. 2016;535:457-458.
  9. Morrison H. Open access article processing charges 2011-2021. Sustaining the Knowledge Commons / Soutenir les savoirs communs. 2021.
  10. Du JS. Opinion: Is Open Access Worth the Cost? The Scientist. Available at: https://www.the-scientist.com/opinion-is-open-access-worth-the-cost-70049
  11. Graham K. Thanking Our Peer Reviewers. EveryONE, Blogs.plos.org. 2014.
  12. Lee JCL, Watt J, Kelsall D, Straus S. Journal editors: How do their editing incomes compare? F1000Res. 2020;9:1027.
  13. De Vrieze J. Open-access journal editors resign after alleged pressure to publish mediocre papers. Science. Available at: https://www.science.org/content/article/open-access-editors-resign-after-alleged-pressure-publish-mediocre-papers
  14. Cook CE, Cleland JA, Mintken PE. Manual Therapy Cures Death: I Think I Read That Somewhere. J Orthop Sports Phys Ther. 2018;48(11):830-832.
  15. Cook CE, Cleland J, Mintken P. Temporal Effect of Repeated Spinal Manipulation on Mortality Ratio: A Case Report. Arch Women Health Care. 2018;1(1):1-4.
  16. Freeman E, Kurambayev B. Rising number of ‘predatory’ academic journals undermines research and public trust in scholarship. The Conversation. Available at: https://theconversation.com/rising-number-of-predatory-academic-journals-undermines-research-and-public-trust-in-scholarship-213107#:~:text=That%20is%20roughly%20the%20same,there%20were%2015%2C000%20predatory%20journals
  17. (no author) Is MDPI a predatory publisher? Publishing with Integrity. Available at: https://predatory-publishing.com/is-mdpi-a-predatory-publisher/
  18. Conroy G. The biggest reason for biomedical research retractions. Detection software is not enough. Nature Index. Available at: https://www.nature.com/nature-index/news/the-biggest-reason-for-biomedical-retractions

On Mastery

By Seth Peterson, PT, DPT, OCS, FAAOMPT

“I don’t know how they can sleep at night.” I was getting chewed out in a hallway during my first year of residency training. My mentor was speaking in general terms, but it was painfully clear that “they” meant me. I had just seen an 11-year-old girl with an ankle sprain. I had given her a painful standing balance exercise (because the evidence showed it was more effective), and we had talked about pain neurophysiology, which was cutting-edge at the time. My mentor’s problem with what she’d just witnessed was that, despite my applying “evidence-based care,” she hadn’t really seen me apply that care to the individual. She hadn’t seen me think.

Looking back, my lack of thinking about the interventions was made worse by the fact that I was doing so much thinking about the simple things. While my mentor was thinking about the words used to greet someone and deciding what mattered to that person on that day, I was focused on how to sequence an ankle examination. I was focused on the basics—and the basics were something they did unfailingly well. Using the conscious competence learning model, you could say I was at a stage of “conscious incompetence” while they were well into the “unconscious competence” stage. Another way to say it is they had “mastered” the basics, while I was just beginning to grasp them.

An Exercise in Interpreting Clinical Results

by Chad E Cook PT, PhD, FAPTA

Randomized Controlled Trials

In clinical research, treatment efficacy (the extent to which a specific intervention, such as a drug or therapy, produces a beneficial result under ideal conditions) and effectiveness (the degree to which an intervention achieves its intended outcomes in real-world settings) are studied using randomized controlled trials. These trials compare the average treatment effects (ATEs) of outcomes between two or more interventions [1]. By definition, an ATE represents the average difference in outcomes between treatment groups (those who receive the treatment or treatments) and/or a control group (those who do not receive the treatment) across the entire population. Less commonly, researchers will include a secondary “responder analysis” that looks at the proportion of individuals who meet a clinically meaningful threshold.
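To make the distinction concrete, here is a minimal sketch in Python with entirely hypothetical data; the 2-point threshold stands in for a minimal clinically important difference (MCID) and is an assumption, not a value from any particular trial:

import statistics

# Hypothetical pain-reduction scores (0-10 scale) from a two-arm trial.
treatment = [4.8, 0.3, 5.1, 0.2, 4.5, 0.4, 3.5, 0.4]
control   = [1.9, 1.8, 1.6, 1.7, 1.9, 1.8, 1.6, 1.7]

# Average treatment effect: difference in mean outcomes between arms.
ate = statistics.mean(treatment) - statistics.mean(control)  # 2.40 - 1.75 = 0.65

# Responder analysis: proportion of each arm meeting the assumed MCID.
MCID = 2.0
responders_tx = sum(x >= MCID for x in treatment) / len(treatment)  # 0.50
responders_ct = sum(x >= MCID for x in control) / len(control)      # 0.00

print(f"ATE: {ate:.2f} points")
print(f"Responders: treatment {responders_tx:.0%} vs control {responders_ct:.0%}")

Here the ATE looks modest, yet half of the treated patients cleared the meaningful threshold while none of the controls did, which is exactly the kind of pattern an average can hide and a responder analysis can reveal.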

Disentangling the Truth about Manual Therapy

by Chad E Cook PT, PhD, FAPTA

The “Facts” Please

Perhaps you’ve heard the following “facts”? The Great Wall of China is visible from space. If you touch a baby bird in its nest, the mother will abandon it. If you flush a toilet in the Southern Hemisphere, the water rotates in the opposite direction through a process known as the Coriolis Effect. I’m uncertain when and where I first heard these, but I was surprised to learn recently that each of these “facts” is actually false [1]. The Great Wall is not visible from low earth orbit without magnification, and baby birds are not abandoned once touched; in fact, most birds have a poor sense of smell and won’t even detect that a human has been there. Lastly, toilet construction dictates how water rotates once flushed, not the toilet’s position on the earth [1]. Each of these statements, which I’m certain you and I have heard numerous times, is an example of the “illusory truth effect” [2].

The illusory truth effect is a cognitive bias in which people tend to believe that a statement or claim is true if they have encountered it repeatedly, even if it is false or lacks evidence to support it [2]. This effect demonstrates the power of repetition and familiarity in shaping beliefs and perceptions. This form of cognitive bias is commonly employed by politicians, marketers, and left- and right-wing journalists to manipulate the truth. Unfortunately, in situations where the “truth” is complicated, the illusory truth effect is a very effective strategy that leads to unwarranted changes in thoughts and beliefs [3].

Manual Therapy: Manipulation of the Brain?

by Tara Winters PT, DPT

When a person walks into the clinic with low back pain driven primarily by nociplastic pain mechanisms, I’m armed and ready with a number of treatment ideas. This is thanks to the leaps and bounds made in the last 20 to 30 years in the world of pain science. “Let’s see if you can distinguish this photo of a right hand versus a left hand.” “I’m going to create a quadrant on your lower back, and I want you to tell me which quadrant you feel pressure in.” “Let me tell you about the science behind your pain!” We then find ourselves down this (evidence-based, of course) rabbit hole of treatments, termed graded motor imagery (GMI), with manual therapy falling lower on our list of treatment needs. Can you relate?

The relevance of contextual factors for hands-on treatment in musculoskeletal pain and manual therapy

by Giacomo Rossettini PT, PhD


‘I definitely feel less pain in my back after the manipulation.’ ‘My shoulder has better mobility after the massage.’ Phrases such as these, uttered daily by patients in rehabilitative settings, lead clinicians to think that their hands-on treatments are so powerful that they are sometimes miraculous. Although the literature supports a short- to medium-term benefit of hands-on techniques in managing musculoskeletal pain [1], if we ask why they work, we are often surprised by the justifications proposed by the clinical and scientific community. Indeed, in addition to biomechanical and neurophysiological explanations [2], the international literature has recently suggested contextual factors (CFs) as mechanisms for understanding the clinical functioning of hands-on techniques, regardless of what those techniques are (e.g., joint mobilizations, joint manipulations, soft tissue or neurodynamic techniques) [3].

Why do our Interventions Result in Similar Outcomes?

by Chad Cook PT, PhD, FAPTA; Derek Clewley PT, PhD, FAAOMPT

If you’ve seen the movie Oppenheimer, you may remember the discussion of the paradoxical wave-particle duality: the finding that light exhibits both wave-like and particle-like properties. In certain experiments, light behaves more like a wave, whereas in others it behaves more like a particle. Oppenheimer was perplexed because light shouldn’t have both properties, properties that seem to “depend” on how they are tested.

When you read comparative analyses in which two markedly different treatments yield similar outcomes, you are likely just as perplexed as Oppenheimer. As we’ve stated before in papers and blogs on this website and others, most musculoskeletal treatments result in similar overall outcomes [1]. In truth, this has become the norm rather than the exception. We could manage this with the current “circular firing squad” method of badmouthing the interventions we don’t like and supporting those we do, OR we can try to better understand why we are experiencing this. We chose the latter. The purpose of this blog is to provide possible reasons why we see similar outcomes across studies involving different interventions.

The Placebo Effect

Definitions Matter

In healthcare, the use of appropriate definitions is imperative. I was recently part of an international nominal group technique (a qualitative consensus-building method) that harmonized a definition for contextual factors [1]. Within the literature, contextual factors have been variably described as sociodemographic variables, person-related factors (race, age, patient beliefs and characteristics), physical and social environments, therapeutic alliance, treatment characteristics, healthcare processes, placebo or nocebo, government agencies, and/or cultural beliefs. Our job was to determine which of these characteristics most accurately reflected a contextual factor. Our harmonized definition (the paper is currently under review) should improve the ability of two clinicians, researchers, or laypersons to communicate what they mean by this critical concept.

Shared Decision Making for Musculoskeletal Disorders: Help or Hype?

By Chad E Cook PT, PhD, FAPTA; Yannick Tousignant-Laflamme PT, PhD

Background

In 2010, the Affordable Care Act (ACA) was passed with goals to expand access to insurance, increase consumer protections, emphasize prevention and wellness, improve quality and system performance, expand the health workforce, and curb rising health care costs [1]. Central to the ACA was the process of shared decision making (SDM) [2]. By definition, SDM is “an approach where clinicians and patients share the best available evidence when faced with the task of making decisions, and where patients are supported to consider options, to achieve informed preferences” [3]. Whereas other definitions of SDM exist, all converge on a similar notion: as a central part of patient-centered care, SDM is a dynamic process by which the healthcare professional (not limited to the physician) and the patient influence each other in making health-related choices or decisions [4] upon which both parties agree.

Purpose

Whereas it’s difficult to argue against the principles of SDM (i.e., sharing the best available evidence and considering all options), it is worth evaluating whether SDM has made a difference in the care provided to patients with musculoskeletal disorders, particularly in clinical outcomes. The purpose of this blog is to evaluate the current evidence on SDM for individuals with musculoskeletal disorders.

