“I don’t care what the research says…”

A colleague of mine was asking for some references to support the notion that kids with severe learning difficulties can learn to use high frequency core words (such as want, stop, and get) because they were being told that what these kiddos really use (or need) are words like toy, cookie, and banana. I duly provided a quick sample of peer-reviewed articles and shared the information with other colleagues. And what the hell, I’ll share them with you, dear reader, in the References section at the end of this piece.

Reading the research

But another of my friends also commented that there are still those folks who respond with comments such as, “I don’t care what the research says, I don’t care who these kids are. These are not the kids I’m working with. The kids I’m working with just aren’t going to use these words.”

So what do you do about this? At what point does being “critical of the research” become “ignoring the research because I don’t believe it”? In the world of physics, it’s hard to say, “I don’t care what the research says, I’m still going to fly using my arms as wings.” Mathematicians don’t say, “I don’t care what the research says, 1 + 1 does equal 7.” And it’s a brave doctor who would say, “I don’t care what the research says, you go right ahead and smoke 40 cigarettes a day and you’ll be just fine.”

No one would argue that Speech and Language Pathology as a profession will ever achieve the rigid, statistical certainties of physics and mathematics, but what does it say about our profession if we openly admit to ignoring “the research” because it doesn’t fit with our individual experience? There are certainly practices in Speech Pathology that are hotly debated (non-speech oral motor exercises, facilitated communication, sensory integration therapy) and yet still being used. But all of these are open to criticism and lend themselves to experimental testing, whereas an opinion based on personal experience is not. I could tell you that I have used facilitated communication successfully, but that is still personal testimony until I can provide you with some measurable, testable, and replicable evidence. This is one of the underlying notions of evidence-based practice in action.

However, it’s one thing to talk about using evidence-based practice but another to actually walk the walk. If the evidence suggests that something you are doing is, at best, ineffective (at worst, damaging), how willing are you to change your mind? If 50% of research articles say what you’re doing is wrong, how convinced are you? What about 60%? Or 90%? At what level of evidence do you decide to say, “OK, I was wrong” and make a change?

If there’s anything certain about “certainty” it’s that it’s uncertain! Am I certain that teaching the word get to a child with severe cognitive impairments is, in some sense, more “correct” or “right” than teaching teddy? No, I am not. But what I can do is look at as many published studies of what words kids typically use, at what ages, and with what frequency, and then feel more confident that get is used statistically more often across studies. This doesn’t mean teddy is “wrong,” nor does it preclude someone publishing an article tomorrow that shows the word teddy being learned 10x faster than the word get among 300 3-year-olds with severe learning problems.

But until then, the current evidence based on the research already done is, in fact, all we have. Anything else is speculation and guesswork, and no more accurate than tossing a couple of dice or throwing a dart at a word board.

Being wrong isn’t the problem. Unwillingness to change in the face of evidence is.

References
Banajee, M., DiCarlo, C., & Buras Stricklin, S. (2003). Core vocabulary determination for toddlers. Augmentative and Alternative Communication, 19(2), 67-73.

Dada, S., & Alant, E. (2009). The effect of aided language stimulation on vocabulary acquisition in children with little or no functional speech. American Journal of Speech-Language Pathology, 18(1), 50-64.

Fried-Oken, M., & More, L. (1992). An initial vocabulary for nonspeaking preschool children based on developmental and environmental language sources. Augmentative and Alternative Communication, 8(1), 41-56.

Marvin, C. A., Beukelman, D. R., & Bilyeu, D. (1994). Vocabulary use patterns in preschool children: Effects of context and time sampling. Augmentative and Alternative Communication, 10, 224-236.

Raban, B. (1987). The spoken vocabulary of five-year-old children. Reading, England: The Reading and Language Information Centre.

9 responses to ““I don’t care what the research says…””

  1. I guess the reason we operate that way is because there are always exceptions, i.e., people who do not respond to intervention as the literature suggests. For example, even though studies show a certain medication worked in 80% of cases, there are still 20% of people who don’t respond. As you say, nothing is exact. The other thing is that the research out there is largely pretty weak. Even when the results are statistically significant, they’re not very significant in real-world, functional terms. In fact, it is not uncommon to find that the interventions suggested by the literature simply don’t work (for myriad reasons) outside of “laboratory conditions” and in the real world. So, we fall back on goal-attainment scaling and trialing interventions that are not supported by the literature.

  2. Pingback: Clarification Order: Blogging about research | "Talks Just Fine"

  3. Excellent post. See this example of a horrible outcome when clinical experience was put before evidence in medicine, prior to evidence-based medicine – personal experience is definitely not enough! http://doctorskeptic.blogspot.com/2013/05/lessons-from-history-7-medically.html?spref=tw

  4. Celeste Helling

    This is a great article, but my main reason for posting is to hear someone say they have had success with using facilitated communication techniques. I have supported a school team and a parent adamant about using FC. To make a long story short, five years later the child with AU is no longer non-speaking, is an honors high school student, and is in all likelihood college bound. I have always wondered what the outcome would have been if I had taken a fierce stance against FC.

    • Actually, in the spirit of full disclosure – and in apology for writing ambiguously – when I said, “I could tell you that I have used facilitated communication successfully…” I was using the word “could” in its hypothetical sense because I haven’t actually used FC. What I meant was that if I had used FC, just saying “It worked” wouldn’t be sufficient to claim it really DID work. I’d need to provide more data, which could be verified by others and replicated. And I may well have been convinced of the “truth” of the intervention, but I could still be wrong.

  5. A key piece of EBP is missing here, though. There is the research, there is the SLP perspective, but what about the children and their families? Is the SLP’s perspective of “not my kids” supported by interviews and research-based vocabulary selection tools with the family? Also, there is intervention now and intervention for the future. Kids cannot grow in their language development without access to core, so if core words aren’t included now, what is the plan for including them after progress has been made with the currently espoused system?

    • That’s a good point, John, and one I missed mentioning in the post. Certainly there is planning for “now” and planning for “later” to be taken into account, and adding new vocabulary to handle the “later” is clearly an expansion strategy worth pursuing.

  6. The question always comes up about EBP and the lack of research. But we do have other things we can use…namely, very good theories. We know a lot about motor learning, motivation, learning theory, etc. So if a procedure is used, but we don’t have good “evidence,” we probably have some good theories. We must also ask ourselves: does this procedure follow theoretical underpinnings that would even make sense?
    I strongly encourage everyone to listen to Michael Shermer’s presentation “Skepticism 101: How to Think Like a Scientist.” It is available as an audiobook and is really excellent.

    • I’d second the Shermer recommendation and also add Carl Sagan’s “The Demon-Haunted World: Science as a Candle in the Dark” for those who still like to read books ;) It’s extremely well written and an easy read, yet not trivial. Shermer and Sagan are both good at making complex things seem simple.

      I’ve always thought of clinical practice as being driven by testing our “very good theories” as opposed to just following a check list. And if we want to justify using our “very good theories,” we have to be able to measure and test them at some level. So we construct mini-hypotheses (having Fido in my clinic will make Johnny talk more), set up some test conditions (I’ll see Johnny every week but only bring Fido every other week), identify some objective measures (I’ll have Johnny describe a picture to me at the end of every session and record, word for word, what he says, and count the number of words used), and then review the results of the measurements (when Fido is there, the average number of words is 40; when he’s not, it’s 35).
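      The mini-hypothesis workflow above can be sketched in a few lines of code. This is purely illustrative: the per-session word counts below are invented, chosen only so the averages come out to the 40-versus-35 figures used in the example.

      ```python
      # A minimal sketch of the "little scientist" workflow: compare Johnny's
      # average word count in sessions with and without Fido present.
      # All numbers are hypothetical, invented for illustration.

      from statistics import mean

      # Words Johnny produced in each end-of-session picture-description probe
      with_fido = [38, 42, 41, 39]     # weeks when Fido was in the clinic
      without_fido = [33, 36, 35, 36]  # weeks when Fido stayed home

      avg_with = mean(with_fido)
      avg_without = mean(without_fido)

      print(f"With Fido: {avg_with:.1f} words on average")      # 40.0
      print(f"Without Fido: {avg_without:.1f} words on average")  # 35.0
      print(f"Difference: {avg_with - avg_without:+.1f} words")   # +5.0
      ```

      Even a toy comparison like this forces the discipline the comment describes: a stated hypothesis, defined conditions, an objective measure, and a result you can show to someone else.
      
      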

      That’s being a “little scientist,” and although it may not have the rigor of a full-throttle, two-year, matched-pairs study with a 3,000-person sample, $50,000 in funding, and a host of interns helping with data analysis, it’s good practice.

      Mmh, maybe we need to do a “How To Be A Little Scientist” post…
