It was the last day of ASHA and I had the special honor of closing the AAC strand for the convention. In short, I was last on the list of AAC presenters. In a curious twist of fate, my colleague from Germany opened the AAC strand at the first session of Thursday so between us we’d bracketed the field!
A less charitable viewpoint might be that I had to present after lunch on the last day, when many folks were leaving to catch planes home or taking the opportunity to spend one last day in the wonderful city of Atlanta. So the fact that folks turned up, including one of my #slpeeps from the Twitterverse, was quite a relief.
The topic was how to use the data generated by an AAC device to plan therapy sessions. A number of AAC technologies have the facility to track data, but few people seem to use it. The purpose of the presentation was to show folks that there is immense value in using such logging to help clients improve their communication skills.
Basically, automated data logging tracks events over time; you can see what someone is saying and when they are saying it. And with just these two pieces of information, you can provide a much better service to your clients. You can gather information about:
- Vocabulary – the words your client uses
- Morphology – the way your client uses morphemes to indicate tense, number, intensity, etc.
- Syntax – how your client uses words in a systematic way along with other words
- Function – how your client’s language is used (questions, imperatives, requests, etc.)
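To make that concrete, here is a minimal sketch in Python of how tallying a log gives you a first look at vocabulary. The log entries are invented – AAC devices export their logs in different formats – so treat the structure as an assumption, not a spec:

```python
from collections import Counter

# Hypothetical log entries exported from an AAC device:
# each entry is a (timestamp, utterance) pair.
log = [
    ("2012-11-17 09:02", "i want juice"),
    ("2012-11-17 09:05", "look at dog"),
    ("2012-11-17 10:15", "i want more juice"),
]

# Flatten the utterances into a word list and tally the vocabulary.
words = [w for _, utterance in log for w in utterance.split()]
vocab = Counter(words)

print(vocab.most_common(3))  # the client's most frequent words
```

Because each entry is timestamped, the same tally can be run per session or per day to show change over time.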
To facilitate this, you can use the QUAD Profile, a paper-based checklist that provides guidelines on what to look for. Developed in 2005 as a quick-and-dirty evaluation tool, the QUAD is simple enough that you don’t have to be a specialist in AAC to use it. You can click on the graphic below to download a copy.
You can also take user-generated text data and analyze it using either Concordance or WordSmith, two pieces of software into which you can input large amounts of text and then measure word frequencies, type/token ratios, or find keywords – those words in a sample that occur more frequently than you would expect by chance. I’ve covered both of these – and discussed core versus fringe versus keywords – in The Dudes Do ISAAC 2012: Day 4 – Of Corpora and Concordances, so take a look there for more details.
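For a feel for what those tools compute, here is a short sketch of a type/token ratio in Python. The sample utterances are made up; Concordance and WordSmith do this, and much more, over real samples:

```python
from collections import Counter

# Invented sample standing in for logged client utterances.
sample = "i want juice i want more juice look at the dog".split()

counts = Counter(sample)  # word frequencies
types, tokens = len(counts), len(sample)

# Type/token ratio: a rough measure of vocabulary diversity.
# Closer to 1.0 means more varied vocabulary; closer to 0 means
# the same few words repeated.
ttr = types / tokens
print(f"{types} types / {tokens} tokens = TTR {ttr:.2f}")
```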
What I failed to spend any time talking about was the excellent BYU Corpora created by Mark Davies at Brigham Young University. If you want to find out how a particular word is used in contemporary American English – or slightly less contemporary British English – you can do no better than these corpora, particularly the Corpus of Contemporary American English, or COCA. As an example, I previously talked about the difference between “taking a bath/shower” and “having a bath/shower,” arguing that in British English you’d teach “having” whereas in American English you’d focus on “taking.” The key point is that you can use the COCA to quantify this difference. And quantifying is a step towards evidence-based practice.
Here’s another example of where using the COCA can help you decide on which words to teach: which should you teach first – look or see? Well, if you want to focus on bigger semantic bang-for-buck, you should go for see, which is used in speech twice as often as look. Or how about need and want? It turns out that want is three times more likely to be used than need, so want is much more useful.
Another thing the COCA does is show how words are used in context. This turns out to be very valuable knowledge to have when teaching language because you can’t just teach a word in isolation. For example, if we go back to the word look, the COCA shows that it very frequently appears immediately before a preposition. Here are the specifics:
So if you are going to teach look, think about look at followed by look for as contextual phrases because that’s how the word is used in real life! Here’s a link to download my slides and notes as a PDF handout.
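The collocation idea can be sketched in a few lines of Python. The sample sentence below is invented, but the counting is essentially what a concordancer does when it ranks what follows look:

```python
from collections import Counter

# A toy sample standing in for logged utterances (hypothetical data).
text = "look at me look at the dog look for the ball look at that"
tokens = text.split()

# Count which word immediately follows each occurrence of "look".
following = Counter(
    tokens[i + 1] for i, w in enumerate(tokens[:-1]) if w == "look"
)
print(following.most_common())  # in this toy sample, "at" outranks "for"
```

Run over a real log, a ranking like this tells you which contextual phrases – look at, look for – to prioritize in therapy.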
By 2:30, I was done. My target was to be in my room at 3:00 with my shoes off, feet up, and a coffee in my hand. And this turned out to be a success!
At 5:00, I left for an early dinner with friends at Sweet Georgia’s Juke Joint at 200 Peachtree Street. Being in the South, I plumped for fried chicken with collard greens, a peach cobbler for dessert, and a delicious Millionaires Mojito. To make the night breeze along, we were entertained by Nat George and the Nat George Players, a band so smooth you could spread ‘em on toast.
The video doesn’t really do the band justice, but that’s all the more reason to put a trip to see them on your list of “Things to do in Atlanta” for your next trip out.
Another memorable night, and yet another example of why I guess I could spend much more time exploring the city. But tomorrow it’s back home. Ah well. C’est la vie.
 At this point, you might wonder why I don’t leap into the discussion about privacy, security, and ethics. Well, that’s because if I need to do that, I’d rather spend an entire post on it. But the short answer is that in all the years I’ve worked with clients who have data logging capabilities I have yet to have ONE tell me that I can’t see their data. After I have a short conversation about why I want to track their data and what I intend to do with it, they’ve been happy to allow me to have access. It’s important to have this discussion prior to turning on monitoring, and critical to explain the value, but once you do that, there’s no problem. Informed consent is a wonderful thing.
 OK, so it was @MeganPanatier – Thanks for stopping by and for tweeting some of my comments during the presentation!
 Cross, R.T. (2010). Developing Evidence-Based Clinical Resources, in Embedding Evidence-Based Practice in Speech and Language Therapy: International Examples (eds H. Roddam and J. Skeat), John Wiley & Sons, Ltd., Chichester, UK.
 The site includes the Corpus of Contemporary American English (450 million words), the British National Corpus (100 million words), the Corpus of Historical American English (400 million words), the Time Magazine corpus (100 million words) and the new Corpus of American Soap Operas (100 million words), which I have yet to test run!