Tag Archives: Evidence-based practice

Countdown to Christmas Quiz: Question 9 – Monday 9th December

What is the three-letter acronym we use to describe the integration of:

(a) clinical expertise/expert opinion,
(b) external scientific evidence, and
(c) client/patient/caregiver perspectives to provide high-quality services reflecting the interests, values, needs, and choices of the individuals we serve?

ANSWER: EBP – Evidence-Based Practice

Sometimes, evidence-based practice seems – and sounds – complicated, especially when you’re trying to make sense of statistical data presented in peer-reviewed articles. However, the basic principles are simple and all you need to ask yourself is one question: “Is what I am doing with my client based on the best information available to me?” If the answer is “yes,” then you’re on the right track; if the answer is “no,” then your next question is “Where do I get the best information about what I am doing?”


Position paper on EBP in Communication Disorders from ASHA’s Joint Committee on Evidence-based Practice.

The Handbook for Evidence-Based Practice in Communication Disorders by Christine Dollaghan published by Brookes Publishing.

“I don’t care what the research says…”

A colleague of mine was asking for some references to support the notion that kids with severe learning difficulties can learn to use high frequency core words (such as want, stop, and get) because they were being told that what these kiddos really use (or need) are words like toy, cookie, and banana. I duly provided a quick sample of peer-reviewed articles and shared the information with other colleagues. And what the hell, I’ll share them with you, dear reader, in the References section at the end of this piece.

Reading the research

But another of my friends also commented that there are still those folks who respond with comments such as, “I don’t care what the research says, I don’t care who these kids are. These are not the kids I’m working with. The kids I’m working with just aren’t going to use these words.”

So what do you do about this? At what point does being “critical of the research” become “ignoring the research because I don’t believe it”? In the world of Physics, it’s hard to say, “I don’t care what the research says, I’m still going to fly using my arms as wings.” Mathematicians don’t say, “I don’t care what the research says, 1 + 1 does equal 7.” And it’s a brave doctor who would say, “I don’t care what the research says, you go right ahead and smoke 40 cigarettes a day and you’ll be just fine.”

No-one would argue that Speech and Language Pathology as a profession will ever achieve the rigid, statistical certainties of physics and mathematics, but what does it say about our profession if we openly admit to ignoring “the research” because it doesn’t fit with our individual experience? There are certainly enough practices in Speech Pathology that are hotly debated (non-speech oral motor exercises, facilitated communication, sensory integration therapy) and yet still being used. But all of these are open to criticism and lend themselves to experimental testing, whereas an opinion based on personal experience is not. I could tell you that I have used facilitated communication successfully, but that is still personal testimony until I can provide you with some measurable, testable, and replicable evidence. This is one of the underlying notions of evidence-based practice in action.

However, it’s one thing to talk about using evidence-based practice but another to actually walk the walk. If the evidence suggests that something you are doing is, at best, ineffective (at worst, damaging), how willing are you to change your mind? If 50% of research articles say what you’re doing is wrong, how convinced are you? What about 60%? Or 90%? At what level of evidence do you decide to say, “OK, I was wrong” and make a change?

If there’s anything certain about “certainty” it’s that it’s uncertain! Am I certain that teaching the word get to a child with severe cognitive impairments is, in some sense, more “correct” or “right” than teaching teddy? No, I am not. But what I can do is look at as many published studies of what words kids typically use, at what ages, and with what frequency, and then feel more confident that get is used statistically more often across studies. This doesn’t mean teddy is “wrong,” nor does it preclude someone publishing an article tomorrow that shows the word teddy being learned 10x faster than the word get among 300 3-year-olds with severe learning problems.

But until then, the current evidence based on the research already done is, in fact, all we have. Anything else is speculation and guesswork, and no more accurate than tossing a couple of dice or throwing a dart at a word board.

Being wrong isn’t the problem. Unwillingness to change in the face of evidence is.

Banajee, M., DiCarlo, C., & Buras Stricklin, S. (2003). Core Vocabulary Determination for Toddlers. Augmentative and Alternative Communication, 19(2), 67-73.

Dada, S., & Alant, E. (2009). The effect of aided language stimulation on vocabulary acquisition in children with little or no functional speech. American Journal of Speech-Language Pathology, 18(1), 50-64.

Fried-Oken, M., & More, L. (1992). An initial vocabulary for nonspeaking preschool children based on developmental and environmental language sources. Augmentative and Alternative Communication, 8(1), 41-56.

Marvin, C. A., Beukelman, D. R., & Bilyeu, D. (1994). Vocabulary use patterns in preschool children: Effects of context and time sampling. Augmentative and Alternative Communication, 10, 224-236.

Raban, B. (1987). The spoken vocabulary of five-year old children. Reading, England: The Reading and Language Information Centre.

First Baby Step to Thinking of Evidence-Based Practice: Be Skeptical

At the recent 2012 conference of the International Society for AAC (ISAAC) there was some robust discussion about the technique known as facilitated communication. It’s a controversial technique and surprisingly one on which ISAAC does not have a position paper – though an endeavor is currently underway with a view to publishing something soon. I say “surprisingly” because many other professional organizations have had position papers for many years, from the American Academy of Child and Adolescent Psychiatry (1993) through to the Victorian Advocacy League for Individuals with Disability [1]. ASHA has had a statement since 1994, so it does seem a little tardy for the group whose raison d’être is AAC to be publishing a statement on an AAC technique. But never mind, at least there is action being taken, which is better than continuing to say nothing.

But this isn’t about the pros and cons of FC. It’s about the development of a mindset that allows people to think about FC – and Non-Speech Oral Motor Exercises, Equine Therapy, Canine Therapy, Sensory Integration, and other such debatable practices. The reason I started with the reference to FC was simply because during the discussion, one person actually said, “But there’s more to this than Science.”

Is there? Is there really? I can appreciate that things in the world can be difficult to measure, and that there are times when measurement seems unfeasible and even intractable, but that doesn’t mean we stop trying.

Evidence-based practice can be tough. When you get into the nitty-gritty of the scientific method – which is a big chunk of what EBP is about – it’s easy to get overwhelmed by talk of variables, pre-tests, post-tests, levels of confidence, skewed distributions, ANOVA, one- versus two-tailed hypotheses, Bayesian, Cartesian, and the whole catastrophe that is experimental design. Even the most readable of books, such as the excellent The Handbook for Evidence-Based Practice in Communication Disorders by Christine Dollaghan [2], can be hard to read and even more challenging to digest. The potential complexity of designing ways to measure clinical practice is, to put it bluntly, off-putting. When you have a caseload of 200 clients and only 24 hours in a day, the idea of setting up formal measurement procedures is about as welcome as a bacon sandwich at a Bar Mitzvah.

Nil desperandum! Like any other skill in life, becoming a more effective practitioner of EBP doesn’t require you to be an expert all at once. You can improve your practice simply by sharpening your mindset to be more in tune with the concepts of EBP. And the first thing you can learn to do is become a Skeptic.

First, let me shovel out of the way that huge mound of steaming objection that being a skeptic is just an excuse for rejecting everything and believing in nothing. That’s a cynic, or a nihilist. In a 2010 interview with Skeptically Thinking, philosopher and author Massimo Pigliucci said:

I think that a crucial aspect of being skeptical, of engaging in critical thinking, is not the idea that you reject claims because they seem absurd. That’s not being a skeptic, that’s just being a cynic. It’s just denying things for the sake of denying it. The idea of skepticism is that you inquire — that you do the work.

“Doing the work” is obviously a tough one because in our world of Wikipedia and endless cable shows about ghost hunters, psychics, celebrity hauntings, and quick-fix psychology, it’s easy to let someone else do the work for you – and that work may be of stunningly poor quality and accuracy. However, a little “critical thinking” is not that hard.

So here are my Top Three Critical Questions to help you become a baby Skeptic. And feel free to be skeptical about whether my three are a good three!

1. If someone claims X causes Y because they did Z, can the claim be tested independently? If I tell you that I can stop an interdental lisp by pushing the tip of a client’s tongue with a wooden spoon, while simultaneously saying “go back, tongue, go back,” you’d be right to ask if anyone else can do it, and you may even try it yourself. But if I claim that the reason no-one else can do it is because they don’t have the same spoon, or that my intonation pattern is very specific, you’d also be right to call bullshit on me.

2. If someone claims X causes Y because they did Z, are there any other simpler explanations as to why Y may have happened? When TV ghost hunters use a drop in temperature to “prove” the presence of a ghost, could something simpler have caused it? When a child appears to speak more after an hour with a dolphin, was it actually the dolphin’s presence causing it or just that the kid was happy?

3. If someone claims X causes Y because they did Z, what change was actually measured and how? “My kid talks more to my therapy dog, so therapy dogs work.” More than what? More than if there was a cat? More than 6 months ago? More than when he walked in the door? I had a client many years ago who swore blind that his stammer was much better after a few pints of beer and he wondered if he could get a prescription! Although I never took the opportunity to spend a night out at the bar with him, his measure of “better” was that he felt he was more fluent. But after a few pints of ale, I’m not sure my client was particularly accurate in his measurement techniques.

Everything is Obvious book

Oddly enough, I’m not going to suggest you use your common sense because this can be less “common” and “sensible” than you might believe. A recent book by Duncan Watts takes the notion of common sense to task. In Everything is Obvious: How Common Sense Fails Us, he argues that:

Common sense is “common” only to the extent that two people share sufficiently similar social and cultural experiences. Common sense, in other words, depends on what the sociologist Harry Collins calls collective tacit knowledge, meaning that it is encoded in the social norms, customs, and practices of the world.

Anyone who feels that common sense is in some sense the truth may want to spend at least 30 minutes listening to the discussions that go on in your country’s government, with folks in the US now facing 2 months of pre-election “common sense” being thrust down their throats. If sense were really that common, all parties in the political divides would cease to exist because there would only be one truth.

So common sense is less helpful in making evidence-based judgements than the basic science of testing and measuring. Even minimal measurement is better than no measurement because it gets you ever closer to an improved metric. You don’t have to subscribe to the “all or nothing” fallacy that some folks promote. Remember that there are different levels of measurement you can use, and each one has its pros and cons.

So let’s invent an example based on Dolphin Therapy. I can ask my client to tell me as much as possible about a picture of a busy street and record what is said, then repeat the task 5 minutes after spending a half-hour with a dolphin. If I simply count the number of words before and after the swim, then find the post-dolphin condition has twice as many words, is that a “good” measure? Well, the safest answer is “it’s a measure,” but the notion of “goodness” is more complex. But here’s the valuable thing: you’ve at least created for yourself a methodology that you can use with the rest of your swimming clients. You can also do it again next time your client has another dolphin session. And the next.
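That before-and-after word count is trivial to formalize, which is rather the point: even a crude measure gives you something repeatable. Here’s a minimal sketch; the transcripts are invented for illustration, and a real session would of course use full recorded transcripts rather than these toy strings:

```python
def word_count(transcript: str) -> int:
    """Count the words in a raw transcript (whitespace-separated tokens)."""
    return len(transcript.split())

def compare_sessions(pre: str, post: str) -> float:
    """Return the ratio of post-session words to pre-session words.

    A ratio of 2.0 means the client produced twice as many words after
    the session. That's *a* measure, not necessarily a *good* one, but
    it can be repeated with every client, every session.
    """
    before, after = word_count(pre), word_count(post)
    return after / before if before else float("inf")

# Hypothetical picture-description samples, before and after a session.
pre = "car road man dog"
post = "the car is on the road and the man walks the dog"
print(f"{compare_sessions(pre, post):.1f}x as many words post-session")
```

The same function can be run for every swimming client and every session, which is exactly what turns an anecdote into a dataset.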

Of course, don’t be surprised if someone else comes along and pokes holes in your methodology and results. The good news is you actually have some results to talk about, rather than a blanket statement about how “good for the kids” this dolphin fun is. Nor should you be surprised if someone uses the second question in my list to suggest an alternative explanation such as “the kid was just relaxed and would have done just as well if you’d given him a massage, or a bowl of ice-cream, or a flight in a helicopter.” This will help you go back and think of a better way to measure and test (or try to get a grant for “Helicopter Therapy” sponsored by folks who like flying in helicopters!) [3]

Enough for now. Once an article passes the 1500-word mark, it ceases to qualify as “baby steps.” So take those three critical questions and start trying them out. If you want some homework, try them out while watching a TV show about UFOs or Bigfoot – it’s kinda fun.

[1] No, the “Victorian League” is not a group of steam-punk enthusiasts who yearn for a return to the values of the 19th century but an organization (VALID) based in the Australian state of Victoria, the capital of which is Melbourne.

[2] Dollaghan, C. A. (2007). The Handbook for Evidence-Based Practice in Communication Disorders. Baltimore: Paul H. Brookes Publishing. This is a great book and if you wanted to buy just one reference for EBP, I’d go for this. But be warned; it is so full of excellent one-liners and summaries that if you use a yellow highlighter, there’s a fair chance you’ll end up with a banana-colored book. I use sticky tags and I think I went through three packs of them! And if you don’t want to spend the money – and time – on the book, you can read Christine’s 2004 ASHA Leader article entitled Evidence-Based Practice: Myths and Realities.

[3] Often the people promoting the benefits of animal therapy are animal lovers who appear to want to somehow “prove” that there’s something special about their dog/cat/dolphin/horse/lizard/three-toed sloth/whippet etc. I have no doubt that research shows how stroking a cat can reduce your blood pressure temporarily, but I can get the same effect from drinking beer, riding my motorcycle, or having sex. However, unlike the animal therapy folks, I am not promoting Drunken Biker Orgy therapy, or DBO as it would be referred to in the academic literature. Which may turn out to be a spectacular loss of revenue for me as a future project…

Quackery, Hokum, Baloney: Separating Science from Stupidity

Suppose I told you that somewhere between Earth and Mars there is an invisible teapot that orbits the sun once every 666 days. The teapot is invisible because it is cloaked using technology developed by space aliens, who left it there to monitor our progress. They believe that once we make contact with the teapot, an alarm will sound and they will return to see if we are truly worthy of being galactic citizens.

Teapot in space

The Orbiting Teapot

The question you need to ask is: “Is that true?” and, if so, “How do I know it’s true?” This is, of course, the fundamental question for Science: what do we know, and how do we know it?

The invisible teapot was created by the philosopher Bertrand Russell back in 1952 and went like this:

If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.

But this is the 21st century, after all, and we are all sophisticated, intelligent people, and we have a wealth of scientific knowledge and instrumentation to help us test for the presence of the teapot. The “powerful telescopes” of 1952 have been replaced by much more sophisticated technology and we can now see much more of our solar system.

Bertrand Russell

Bertrand Russell

In principle, therefore, we could focus all the world’s telescopes along the ecliptic plane and search for the pot. We already know what it looks like, we know it is between Earth and Mars, and we know it is cloaked. The cloaking business may make it trickier, but we also know from current research that “cloaking” is little more than deflecting light around a mass. We could spot the teapot by looking to see if there is a teapot-sized region of space that makes stars behind it appear to change position; this is because the mass of the pot will cause light to bend ever so slightly (it’s why in a solar eclipse we can see stars on the edge of the sun that are actually behind it).

The key thing to note here is that we TEST for the presence of the teapot and refuse to accept it on faith. I may be able to spin the most wonderful story about the pot, about how beautiful and splendiferous it can be, and how much it has changed my life, but if all I have is my personal perceptions and ideas, you would be right to treat what I say as bullshit of the highest order.

The only way for me to prove that I am right is to provide evidence of the pot. If the telescopes suddenly reveal a sea-green piece of revolving pottery, orbited by teacups (hey, there may be more to the teapot than I knew!) then you should start taking me more seriously. When several independent observatories have pictures, and all independently identify its location by numerical coordinates, and spectrograph analyses all show its chemical structure, then I’m pretty much vindicated. And if after a few years NASA’s latest “Pot Probe” reaches that location and scoops up the teapot into its gaping maw, then that’s likely to be as much proof as any reasonable person would require to be able to say, “Yes, there IS a teapot in outer space.”

Testability is a cornerstone of Science. And the thing that has to be tested is a HYPOTHESIS[1], which is defined as:

A proposition or principle put forth or stated (without any reference to its correspondence with fact) merely as a basis for reasoning or argument, or as a premiss from which to draw a conclusion.

The aim of Science is to test a hypothesis, that is, to see if it is true or false. Now in reality, you don’t actually prove something to be “true,” you “support” it. Truth and support are two very different things. If my hypothesis is “All swans are white,” I can test it by sitting by a river bank photographing every swan that lands on the water in front of me. If I have several friends across the world do the same thing, we might find that all the pictures we have turn out to be white swans. Does this mean that “All swans are white” is true? Nope, it just means that there is overwhelming support, based on many observations and measurements by many people, that swans are white. However – and here’s the kicker – if we find just ONE black swan, the hypothesis is dead in the water. Gone. No amount of evidence can make a hypothesis true, but just one observation can make it false.
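This asymmetry (no number of confirmations proves the hypothesis, while a single counterexample refutes it) is easy enough to sketch in code; the swan observations here are invented for illustration:

```python
def hypothesis_status(observed_colors: list[str]) -> str:
    """Evaluate 'all swans are white' against a list of observations.

    Confirmations only ever *support* the hypothesis; a single
    counterexample falsifies it outright.
    """
    for color in observed_colors:
        if color != "white":
            return "falsified"  # one black swan kills the hypothesis
    return "supported (not proven)"

print(hypothesis_status(["white"] * 10_000))               # supported (not proven)
print(hypothesis_status(["white"] * 10_000 + ["black"]))   # falsified
```

Ten thousand white swans leave the hypothesis exactly where it started (supported, unproven), while one black swan settles the matter permanently.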

This is the principle of falsification, promoted and discussed at great length by the great philosopher of Science, Karl Popper, whose Logic of Scientific Discovery[2] is a classic in the field. For a more relaxed read (and by “relaxed” I mean “requires a little concentration” as opposed to “Oh, my frickin’ head’s about to explode!”) you might prefer Objective Knowledge: An Evolutionary Approach[3], published in 1972.

Karl Popper

Karl Popper

It’s also explained eloquently by another Carl, Carl Sagan, in his 1996 book The Demon-Haunted World: Science as a Candle in the Dark. I can’t recommend this book enough to students who are eager to learn about the scientific method in an enjoyable and entertaining fashion. It is, perhaps, his best and most lucid book, and it beats me why this isn’t recommended as a high-school text or at least an undergraduate offering to all students. Many people have a woeful understanding of what science and the scientific method are all about and this one book explains it so well.

One particularly practical offering is Chapter 12: The Fine Art of Baloney Detection, where Sagan offers a number of ways to check whether a proposition or hypothesis is valid. Here’s the list for Baloney Detection:

  • Wherever possible you need to find independent confirmation of the facts. One person or test does not a hypothesis prove!
  • Encourage and engage in debate on the evidence by knowledgeable proponents of all points of view.
  • Don’t fall for arguments from authority alone; I may have a PhD in Astrophysics but that doesn’t mean there IS a teapot.
  • Be prepared to try multiple hypotheses.
  • Try not to get overly attached to a hypothesis just because it’s yours. This is harder than you might think.
  • Measure, measure, measure. Objective numbers always trump personal beliefs, no matter how many folks share that belief.
  • If there is a chain of argument every link in the chain must work.
  • Sharpen up Occam’s Razor[4] – if there are two hypotheses that could explain the data equally well, choose the simpler.
  • Check to see if the hypothesis can, at least in principle, be falsified: Is it testable? If it isn’t testable, it isn’t science![5]
  • Can other people replicate the experiment and get the same result?
  • Conduct control experiments, especially “double-blind” experiments where the person taking measurements is not aware of the test and control subjects.
  • Check for confounding factors; make sure you separate as many of the variables as you can.

This is why evidence-based practice is so important. It separates the speculative from the scientific. The current rush to buy iDevices as a blanket solution for those individuals who need an AAC device is a good example of where hypotheses precede evidence. When someone turns up at the clinic doors with a kid, an iPad, and a recommendation from a video on YouTube that “this is the answer,” what do you say? There are many purported “evidential” video clips on the Internet that are well-meaning attempts by parents to show how their kids have “improved” by using technology, but with no pre-testing and no measure of what “improvement” is, it’s impossible to call this evidence.

In their desire to help people with communication problems, it’s sometimes easier to believe in orbiting teapots than measure performance.

[1] The word hypothesis comes directly from the Greek ὑπόθεσις and means “placing under.” ὑπό is “under” and you see this in words such as hypodermic (under the skin), hypothalamus (under the thalamus), and hypochondria (under the breast-bone). The θέσις part originally referred to the action of placing a foot or hand down to beat time in poetry or music, and it became, by extension, the putting down of a proposition or idea.

[2] Popper, K. R. (1935). Logik der Forschung (The Logic of Research), Vienna: Springer; trans. The Logic of Scientific Discovery, London: Hutchinson, 1959.

[3] Popper, K.R. (1972) Objective Knowledge, Oxford: Clarendon Press. If you just want to focus on just one chapter, try Chapter 6: Of Clouds and Clocks, which can be read somewhat independently of the book as a whole, and is less dense than some of the earlier chapters. Popper isn’t the easiest of folks to read and in truth, I still have a hard time with much of his stuff on probability because of the math and logic involved, but he’s well worth the effort.

[4] “Pluralitas non est ponenda sine necessitate” or “plurality should not be posited without necessity.” This is attributed to William of Ockham (1285-1349), an English Franciscan monk and philosopher, who used this premise in much of his work and thinking, although the notion was actually a common principle in medieval thought. The actual phrase, Occam’s Razor, appeared first in 1852 and was used by the astronomer and physicist, William Rowan Hamilton. No mention of his looking for a teapot…

[5] The difference between Science and Pseudoscience often comes down to this rule of Testability. An idea that is inherently untestable is called metaphysical or speculation. You may well believe passionately that there are fairies at the bottom of your garden but unless you can subject them to testing, they are no more real than my orbiting teapot.

And talking of being doggedly skeptical…

It’s called Animal-Assisted Therapy (AAT) and is defined by the American Humane Society as:

… a goal-directed intervention in which an animal is incorporated as an integral part of the clinical health-care treatment process. AAT is delivered or directed by a professional health or human service provider who demonstrates skill and expertise regarding the clinical applications of human-animal interactions.

So what are we to make of Scout the Labrador, who is apparently a skilled phonetician? In a recent article from the Missoulian newspaper, SLP Nancy Jo Connell takes her dog with her when treating children with language problems. Now, there’s some evidence (and not a lot) that pets such as dogs and cats can make someone feel better, and that kids with autism appear to relate to animals.

Therapy dog

"I think that was a linguolabial trill!"

But where do we draw the line on the claims made about AAT? How about here?

He has a big vocabulary. When children with speech problems use the right word the right way, he responds. When autistic children who have problems with self-expression speak, he responds. When deaf children sign to him, he responds.

Really? A big vocabulary? The dog? Has he been assessed on the Peabody Picture Vocabulary Test? Did he bark the answer or just paw the pictures? It’s always a good idea to be open-minded, but not so open that your brains fall out.

Here’s an explanation of how it works from Connell:

With Scout, the children don’t have to be “corrected” if they use the wrong word.

If Scout doesn’t respond, Connell and others simply encourage them to find the right sound or word.

“It’s not a corrective model,” she said. “We don’t tell them what’s wrong. If they say a word and Scout doesn’t respond, we say, ‘Oh, he doesn’t understand you.’ “

Ah, so the truth is not necessarily that the dog understands the kids but that this is more an example of a therapist using facilitated communication with an animal. If the assertion is that Scout can process human speech and “know” the answer to something, we’re going to need a little more evidence than “when the children use the right words, he responds.”

Badge for certified therapy dogs

Official CTD Badge

The problem here is whether SLPs as a profession are interested in evidence-based practice or not. We can choose to “go with our gut” because, in the case of having dogs around the clinic, unless the pooch actually bites a client there’s no law necessarily being broken. But if you are sincerely advocating for using your pet as a therapy tool, the least you can do is provide some objective measures of intervention with and without “Fluffy,” “Spot,” or “Lassie.” Hopefully the claims for Scout’s vocabulary size and ability to discriminate correct and incorrect responses are more journalistic hyperbole than alleged practicum fact.

For a reasonably sober summary of AAT, you could stop by the website of the Interactive Autism Network and read their article on findings. They point out that, “there is little research-based evidence that AATs lead to specific gains by children with ASD, but interest in the topic is growing in parallel with the field of AAT itself.”

By all means enjoy your pets, but avoid outrageous claims about their clinical skills. And after all, if animals really are that smart, how long do you think it will be before employers replace you with Champion the Wonder Horse?