At the recent 2012 conference of the International Society for AAC (ISAAC) there was some robust discussion about the technique known as facilitated communication (FC). It’s a controversial technique and, surprisingly, one on which ISAAC does not have a position paper – an endeavor currently underway, with a view to publishing something soon. I say “surprisingly” because many other professional organizations have had position papers for many years, from the American Academy of Child and Adolescent Psychiatry (1993) through to the Victorian Advocacy League for Individuals with Disability [1]. ASHA has had a statement since 1994, so it does seem a little tardy for the group whose raison d’être is AAC to be publishing a statement on an AAC technique. But never mind: at least there is action being taken, which is better than continuing to say nothing.
But this isn’t about the pros and cons of FC. It’s about developing a mindset that allows people to think critically about FC – and Non-Speech Oral Motor Exercises, Equine Therapy, Canine Therapy, Sensory Integration, and other such debatable practices. The reason I started with the reference to FC is simply that, during the discussion, one person actually said, “But there’s more to this than Science.”
Is there? Is there really? I can appreciate that things in the world can be difficult to measure, and that there are times when measurement seems unfeasible and even intractable, but that doesn’t mean we stop trying.
Evidence-based practice can be tough. When you get into the nitty-gritty of the scientific method – which is a big chunk of what EBP is about – it’s easy to get overwhelmed by talk of variables, pre-tests, post-tests, levels of confidence, skewed distributions, ANOVA, one- versus two-tailed hypotheses, Bayesian, Cartesian, and the whole catastrophe that is experimental design. Even the most readable of books, such as the excellent The Handbook for Evidence-Based Practice in Communication Disorders by Christine Dollaghan [2], can be hard to read and even more challenging to digest. The potential complexity of designing ways to measure clinical practice is, to put it bluntly, off-putting. When you have a caseload of 200 clients and only 24 hours in a day, the idea of setting up formal measurement procedures is about as welcome as a bacon sandwich at a Bar Mitzvah.
Nil desperandum! Like any other skill in life, becoming a more effective practitioner of EBP doesn’t require you to be an expert all at once. You can improve your practice simply by sharpening your mindset to be more in tune with the concepts of EBP. And the first thing you can learn to do is become a Skeptic.
First, let me shovel out of the way that huge mound of steaming objection that being a skeptic is just an excuse for rejecting everything and believing in nothing. That’s a cynic, or a nihilist. In a 2010 interview with Skeptically Thinking, philosopher and author Massimo Pigliucci said:
I think that a crucial aspect of being skeptical, of engaging in critical thinking, is not the idea that you reject claims because they seem absurd. That’s not being a skeptic, that’s just being a cynic. It’s just denying things for the sake of denying it. The idea of skepticism is that you inquire — that you do the work.
“Doing the work” is obviously a tough one because in our world of Wikipedia and endless cable shows about ghost hunters, psychics, celebrity hauntings, and quick-fix psychology, it’s easy to let someone else do the work for you – and that work may be of stunningly poor quality and accuracy. However, a little “critical thinking” is not that hard.
So here are my Top Three Critical Questions to help you become a baby Skeptic. And feel free to be skeptical about whether my three are a good three!
1. If someone claims X causes Y because they did Z, can the claim be tested independently? If I tell you that I can stop an interdental lisp by pushing the tip of a client’s tongue with a wooden spoon, while simultaneously saying “go back, tongue, go back,” you’d be right to ask if anyone else can do it, and you may even try it yourself. But if I claim that the reason no-one else can do it is because they don’t have the same spoon, or that my intonation pattern is very specific, you’d also be right to call bullshit on me.
2. If someone claims X causes Y because they did Z, are there any other simpler explanations as to why Y may have happened? When TV ghost hunters use a drop in temperature to “prove” the presence of a ghost, could something simpler have caused it? When a child appears to speak more after an hour with a dolphin, was it actually the dolphin’s presence causing it, or just that the kid was happy?
3. If someone claims X causes Y because they did Z, what change was actually measured and how? “My kid talks more to my therapy dog, so therapy dogs work.” More than what? More than if there was a cat? More than 6 months ago? More than when he walked in the door? I had a client many years ago who swore blind that his stammer was much better after a few pints of beer and he wondered if he could get a prescription! Although I never took the opportunity to spend a night out at the bar with him, his measure of “better” was that he felt he was more fluent. But after a few pints of ale, I’m not sure my client was particularly accurate in his measurement techniques.

Oddly enough, I’m not going to suggest you use your common sense because this can be less “common” and “sensible” than you might believe. A recent book by Duncan Watts takes the notion of common sense to task. In Everything is Obvious: How Common Sense Fails Us, he argues that:
Common sense is “common” only to the extent that two people share sufficiently similar social and cultural experiences. Common sense, in other words, depends on what the sociologist Harry Collins calls collective tacit knowledge, meaning that it is encoded in the social norms, customs, and practices of the world.
Anyone who feels that common sense is in some sense the truth may want to spend at least 30 minutes listening to the discussions that go on in their country’s government, with folks in the US now facing 2 months of pre-election “common sense” being thrust down their throats. If sense were really that common, the parties on either side of the political divide would cease to exist because there would be only one truth.
So common sense is less helpful in making evidence-based judgements than the basic science of testing and measuring. Even minimal measurement is better than no measurement because it gets you ever closer to an improved metric. You don’t have to subscribe to the “all or nothing” fallacy that some folks promote. Remember that there are different levels of measurement you can use, and each one has its pros and cons.
So let’s invent an example based on Dolphin Therapy. I can ask my client to tell me as much as possible about a picture of a busy street and record what is said, then repeat the task 5 minutes after a half-hour spent with a dolphin. If I simply count the number of words before and after the swim, then find the post-dolphin condition has twice as many words, is that a “good” measure? Well, the safest answer is “it’s a measure,” but the notion of “goodness” is more complex. Here’s the valuable thing, though: you’ve at least created for yourself a methodology that you can use with the rest of your swimming clients. You can also do it again next time your client has another dolphin session. And the next.
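For anyone who likes to see the arithmetic written down, here’s a minimal sketch of that before-and-after count in Python. It’s purely illustrative: the transcripts are invented, and the word_count helper just splits on whitespace, which is no more sophisticated than tallying by hand. The point is simply that it’s the same repeatable procedure every time you run it.

```python
# Hypothetical pre/post word-count comparison for the dolphin-session example.
# The transcripts below are invented; in practice they would be transcribed
# picture-description samples taken before and after the session.

def word_count(transcript: str) -> int:
    """Count words in a transcribed language sample (crude whitespace split)."""
    return len(transcript.split())

pre_session = "there is a bus and a dog"
post_session = "there is a big red bus and a dog is running after the bus"

pre = word_count(pre_session)
post = word_count(post_session)
ratio = post / pre if pre else float("inf")

print(f"Pre-session words:  {pre}")
print(f"Post-session words: {post}")
print(f"Post/pre ratio:     {ratio:.2f}")

# A ratio above 1.0 only tells us the post-session sample was longer; it says
# nothing about *why* (the dolphin, task repetition, the child's mood), which
# is exactly what question 2 above asks you to consider.
```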
Of course, don’t be surprised if someone else comes along and pokes holes in your methodology and results. The good news is you actually have some results to talk about, rather than a blanket statement about how “good for the kids” this dolphin fun is. Nor should you be surprised if someone uses the second question in my list to suggest an alternative explanation such as “the kid was just relaxed and would have done just as well if you’d given him a massage, or a bowl of ice-cream, or a flight in a helicopter.” This will help you go back and think of a better way to measure and test (or try to get a grant for “Helicopter Therapy” sponsored by folks who like flying in helicopters!) [3]
Enough for now. Once an article passes the 1500-word mark, it ceases to qualify as “baby steps.” So take those three critical questions and start trying them out. If you want some homework, apply them while watching a TV show about UFOs or Bigfoot – it’s kinda fun.
Notes
[1] No, the “Victorian League” is not a group of steam-punk enthusiasts who yearn for a return to the values of the 19th century but an organization (VALID) based in the Australian state of Victoria, the capital of which is Melbourne.
[2] Dollaghan, C. A. (2007). The Handbook for Evidence-Based Practice in Communication Disorders. Baltimore: Paul H. Brookes Publishing. This is a great book, and if you wanted to buy just one reference for EBP, I’d go for this. But be warned: it is so full of excellent one-liners and summaries that if you use a yellow highlighter, there’s a fair chance you’ll end up with a banana-colored book. I use sticky tags and I think I went through three packs of them! And if you don’t want to spend the money – and time – on the book, you can read Christine’s 2004 ASHA Leader article entitled Evidence-Based Practice: Myths and Realities.
[3] Often the people promoting the benefits of animal therapy are animal lovers who appear to want to somehow “prove” that there’s something special about their dog/cat/dolphin/horse/lizard/three-toed sloth/whippet etc. I have no doubt that research shows how stroking a cat can reduce your blood pressure temporarily, but I can get the same effect from drinking beer, riding my motorcycle, or having sex. However, unlike the animal therapy folks, I am not promoting Drunken Biker Orgy therapy, or DBO as it would be referred to in the academic literature. Which may turn out to be a spectacular loss of revenue for me as a future project…