Reducing misinformation by fostering honest, useful, and credible information about manual therapies

Category: Blog

On Mastery

By Seth Peterson, PT, DPT, OCS, FAAOMPT

“I don’t know how they can sleep at night.” I was getting chewed out in a hallway in my first year of residency training. My mentor was speaking in general terms, but it was painfully clear that “they” meant me. I had just seen an 11-year-old girl with an ankle sprain. I had given her a painful balance exercise in standing (because the evidence showed it was more effective) and we had talked about pain neurophysiology, which was cutting-edge at the time. Her problem with what she’d just witnessed was that, despite me applying “evidence-based care,” she hadn’t really seen me apply that care to the individual. She hadn’t seen me think.

Looking back, my lack of thinking about the interventions was made worse by the fact that I was doing so much thinking about the simple things. While my mentor was thinking about the words used to greet someone and deciding what mattered to that person on that day, I was focused on how to sequence an ankle examination. I was focused on the basics—and the basics were something they did unfailingly well. Using the conscious competence learning model, you could say I was at a stage of “conscious incompetence” while they were well into the “unconscious competence” stage. Another way to say it is they had “mastered” the basics, while I was just beginning to grasp them.

An Exercise in Interpreting Clinical Results

by Chad E Cook PT, PhD, FAPTA

Randomized Controlled Trials

In clinical research, treatment efficacy (the extent to which a specific intervention, such as a drug or therapy, produces a beneficial result under ideal conditions) and effectiveness (the degree to which an intervention achieves its intended outcomes in real-world settings) are studied using randomized controlled trials. Randomized controlled trials compare the average treatment effects (ATEs) of outcomes between two or more interventions [1]. By definition, an ATE represents the average difference in outcomes between treatment groups (those who receive the treatment or treatments) and/or a control group (those who do not receive the treatment) across the entire population. Less commonly, researchers will include a secondary “responder analysis,” which looks at the proportion of individuals who meet a clinically meaningful threshold.
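
To make the distinction concrete, here is a minimal sketch of the two summaries described above, an ATE and a responder analysis, using made-up data. The group sizes, the simulated pain-reduction scores, and the 2-point responder threshold are all hypothetical and chosen purely for illustration; they are not drawn from any trial cited here.

```python
# Minimal sketch: ATE vs. responder analysis on hypothetical trial data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pain-reduction scores (0-10 scale) for two trial arms
treatment = rng.normal(loc=3.0, scale=2.0, size=100)  # arm receiving the intervention
control = rng.normal(loc=2.0, scale=2.0, size=100)    # comparison / control arm

# Average treatment effect: average difference in outcomes between arms
ate = treatment.mean() - control.mean()

# Responder analysis: proportion in each arm meeting a meaningful threshold
mcid = 2.0  # hypothetical clinically meaningful threshold
responders_treatment = np.mean(treatment >= mcid)
responders_control = np.mean(control >= mcid)

print(f"ATE (mean difference): {ate:.2f} points")
print(f"Responders, treatment arm: {responders_treatment:.0%}")
print(f"Responders, control arm:   {responders_control:.0%}")
```

Note how the two summaries answer different questions: the ATE describes the average difference between groups across everyone enrolled, while the responder proportions describe how many individuals crossed a clinically meaningful threshold.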

Disentangling the Truth about Manual Therapy

by Chad E Cook PT, PhD, FAPTA

The “Facts” Please

Perhaps you’ve heard the following “facts”? The Great Wall of China is visible from space. If you touch a baby bird that is in its nest, the mother will abandon it. If you flush a toilet in the Southern Hemisphere, water rotates in the opposite direction through a process known as the Coriolis Effect. I’m uncertain when and where I’ve heard these, but I was surprised to learn recently that each of these “facts” is actually false [1]. The Great Wall is not visible from low Earth orbit without magnification, and baby birds are not abandoned once touched. In fact, most birds have a poor sense of smell and won’t even detect that a human has been there. Lastly, toilet construction dictates how water rotates once flushed, not its position on the Earth [1]. Each of these statements, which I’m certain you and I have heard numerous times, is an example of the “illusory truth effect” [2].

The illusory truth effect is a cognitive bias in which people tend to believe that a statement or claim is true if they have encountered it repeatedly, even if it is false or lacks evidence to support it [2]. This effect demonstrates the power of repetition and familiarity in shaping beliefs and perceptions. This form of cognitive bias is commonly employed by politicians, marketers, and left- and right-wing journalists to manipulate the truth. Unfortunately, in situations where the “truth” is complicated, the illusory truth effect is a very effective strategy that leads to unwarranted changes in thoughts and beliefs [3].

Manual Therapy: Manipulation of the Brain?

by Tara Winters PT, DPT

When a person walks into the clinic with low back pain driven primarily by nociplastic pain mechanisms, I’m armed and ready with a number of treatment ideas. This is thanks to the leaps and bounds made in the last 20 to 30 years in the world of pain science. “Let’s see if you can distinguish this photo of a right hand versus a left hand.” “I’m going to create a quadrant on your lower back, and I want you to tell me which quadrant you feel pressure in.” “Let me tell you about the science behind your pain!” We then find ourselves down this (evidence-based, of course) rabbit hole of treatments, termed graded motor imagery (GMI), with manual therapy falling lower on our list of treatment needs. Can you relate?

The relevance of contextual factors for hands-on treatment in musculoskeletal pain and manual therapy

by Giacomo Rossettini – PhD, PT


‘I definitely feel less pain in my back after the manipulation.’ ‘My shoulder has better mobility after the massage.’ Phrases such as these, uttered daily by patients in rehabilitative settings, lead clinicians to think that their hands-on treatments are so powerful that they are sometimes miraculous. Although the literature supports a short- to medium-term benefit of hands-on techniques in managing musculoskeletal pain [1], if we ask why they work, we are often surprised by the justifications proposed by the clinical and scientific community. Indeed, in addition to biomechanical and neurophysiological explanations [2], the international literature has recently suggested contextual factors (CFs) as mechanisms for understanding the clinical functioning of hands-on techniques, regardless of what they are (e.g., joint mobilizations, joint manipulations, soft tissue or neurodynamic techniques) [3].

Why do our Interventions Result in Similar Outcomes?

by Chad Cook PT, PhD, FAPTA; Derek Clewley PT, PhD, FAAOMPT

If you’ve seen the movie Oppenheimer, you may remember him discussing the paradoxical wave-particle duality. This revolved around the finding that light exhibits both wave-like and particle-like properties. In fact, in certain experiments, light behaves more like a wave, whereas in others, it behaves more like a particle. Oppenheimer was perplexed because light shouldn’t have both properties, properties that seem to “depend” on how they are tested.

When you read comparative analyses involving two markedly different treatments that yield similar outcomes, it is likely that you are just as perplexed as Oppenheimer. As we’ve stated before in papers and blogs on this website and others, most musculoskeletal treatments result in similar overall outcomes [1]. In truth, this has become the norm rather than the exception. We could manage this using the current “circular firing squad” method of badmouthing the interventions we don’t like and supporting those we do, OR we can try to better understand why we are experiencing this. We chose the latter. The purpose of this blog is to provide possible reasons we see similar outcomes across studies involving different interventions.

The Placebo Effect

Definitions Matter

In healthcare, the use of appropriate definitions is imperative. I was recently part of an international nominal group technique (a qualitative method used to build consensus) that harmonized a definition for contextual factors [1]. Within the literature, contextual factors have been variably described as sociodemographic variables, person-related factors (race, age, patient beliefs and characteristics), physical and social environments, therapeutic alliance, treatment characteristics, healthcare processes, placebo or nocebo, government agencies, and/or cultural beliefs. Our job was to determine which of these characteristics most accurately reflected a contextual factor. Our harmonization (the paper is currently in review) should improve the ability of two clinicians, researchers, or laypersons to communicate what they mean by this critical concept.

Shared Decision Making for Musculoskeletal Disorders: Help or Hype?

By Chad E Cook PT, PhD, FAPTA; Yannick Tousignant-Laflamme PT, PhD

Background

In 2010, the Affordable Care Act (ACA) was passed with a goal to expand access to insurance, increase consumer protections, emphasize prevention and wellness, improve quality and system performance, expand the health workforce, and curb rising health care costs [1]. Central to the ACA was the process of shared decision making (SDM) [2]. By definition, SDM is “an approach where clinicians and patients share the best available evidence when faced with the task of making decisions, and where patients are supported to consider options, to achieve informed preferences” [3]. Whereas other definitions of SDM also exist, all converge on a similar notion: as a central part of patient-centered care, SDM is a dynamic process by which the healthcare professional (not limited to the physician) and the patient influence each other in making health-related choices or decisions [4] upon which both parties agree.

Purpose

Whereas it’s difficult to argue against the principles of SDM (i.e., sharing best available evidence and considering all options), it is worth evaluating whether SDM has made a difference in the care provided to patients with musculoskeletal disorders, particularly a difference in clinical outcomes. The purpose of this blog is to evaluate the current evidence on SDM for individuals with musculoskeletal disorders.

It’s the Dose, Stupid

Author:

Seth Peterson, PT, DPT, OCS, FAAOMPT

The Motive Physical Therapy Specialists

Oro Valley, AZ


We learn more from our failures than from our successes. At other times, we learn from our “almost failures.” These close calls are the best events to learn from, really, because they can carry almost the same weight as a failure without the tragic consequences. Police officers hint at their knowledge of this fact every time they let you go without a ticket. There is a hill on my way to work where, eight years later, I still brake on the way down; it is the spot where I got off with a ‘warning’.

Compared to What?

Author: Chad Cook PT, PhD, FAPTA

Physical therapists commonly compare two or more things to one another. For example, I’ve frequently heard the diagnostic accuracy of one test compared to another when defending or rejecting the use of a special test. I’ve also heard reports that one intervention is more effective than another; in most cases, incorrectly. Sometimes these judgments are not apples-to-apples comparisons and depend markedly on the context and the type of comparison group. If you’ll indulge me, I’ll give a non-physical-therapy example to better reinforce my point.
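
As one hypothetical illustration of how such comparisons depend on the comparison group (this is my own sketch, not the author’s forthcoming example): even when a special test’s sensitivity and specificity are held fixed, its positive predictive value shifts dramatically with the prevalence of the condition in the group being tested. The sensitivity, specificity, and prevalence values below are invented purely for illustration.

```python
# Hypothetical sketch: the same test accuracy yields very different predictive
# values depending on the population in which the test is applied.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV via Bayes' rule: P(condition present | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Invented accuracy values for a hypothetical special test
sens, spec = 0.80, 0.90

for prevalence in (0.05, 0.30, 0.60):
    ppv = positive_predictive_value(sens, spec, prevalence)
    print(f"Prevalence {prevalence:.0%}: PPV = {ppv:.0%}")
```

With these made-up numbers, the positive predictive value climbs from roughly 30% at 5% prevalence to over 90% at 60% prevalence, even though the test itself has not changed, which is one reason accuracy comparisons are rarely apples-to-apples across different contexts.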

