Category Archives: AAC

Augmentative communication

Articles and Abstracts: Free Stuff from the Dudes

Articles and Abstracts

It’s not unusual for me to get an email from someone asking things like, “Do you have any references that support the idea that using AAC will stop a child from talking?” or “Can you point me to some articles that provide information on Core vocabulary?” As a member of the “Not Dead Yet” club of AAC practitioners [1], over the years I’ve collected a few useful papers that I can refer to, and continue to collect new ones whenever I can force myself to do some journal reading.

So to make life easier, I’ve created a suite of PDF files is a series I call “Articles and Abstracts,” with each file providing a selection of articles along with the abstracts. I can’t provide the actual articles without having to get lots and lots of permissions, and frankly I don’t have the time for that, but given the citations and the abstracts, folks can at least decide if they want to go track them down – and sometimes a starting point is really useful.

I’ve broken the series down into the following topic areas:

There’s no magic formula to explain why I chose this grouping, just that they are areas of research that impinge on the field of AAC and language. And I don’t claim to have anything close to a comprehensive listing of articles, just some key ones that are, in my opinion, useful and relevant. If anyone has any suggestions for additional papers, just let me know – I can’t read every journal that’s out there!

I update on an irregular basis, by which I mean that if a new article that I find interesting comes my way, I’ll update the particular file there and then. So I already some 2017 papers cited – and you can have the excitement of finding out which they are when you download the series 🙂

From our blog home page, select the FREEBIES menu and then down to Article and Abstracts for the list. Or just use the bulleted list above. Feel free to share the information – it’s all publicly available in peer-reviewed journals – but we’d be grateful if you’d mention the Speech Dudes as your source now and again.

Notes
[1] In a field where the turnover of practitioners is relatively high, one of the easiest ways to become known is simply to avoid dying. If you can also add “getting around a bit,” then your stock can rise without you having to do much more than that! Of course, if you want to reach the level of AAC Superstar or AAC Luminary, you do, in truth, have to put a little more work into it than I have, and the Superstars and Luminaries deserve their status. All I’m sharing is that even if you don’t aspire to professional sainthood, staying alive is a really, really good idea 😉 And as Woody Allen once said, “I don’t want to be immortal through my work; I want to be immortal through not dying.”

Advertisements

Peppa Pig: Go Ahead and Let Your Kids Watch!

One of the special things about having grandchildren is that when you’ve had enough of them, you can give ’em back to their parents. There’s a certain amount of schadenfreude to be reveled in with this, particularly if you had some challenges bringing up your kids in the first place. Although I don’t actually gloat, I can’t but help feel a frisson of pleasure when my darling daughter tells me she’s had a sleepless night because her 3-year-old got up a 3:00 AM and began running round the house, and her 7-year-old had a tantrum before going to school. I simply nod sagely and say, “Yes, it’s rough, isn’t it.” Bad Daddy!

So while she and her husband get all the pain and anguish of living and working with two young kids (and we all know it doesn’t get any easier as they age!) I get to have fun time with them because (a) they only get me in small doses and (b) I can spoil them rotten [1].

Of course, this doesn’t give you free rein to allow total anarchy and hedonistic behavior so you have to at least rationalize your choices when it comes to letting your offspring decide what they want to do. Which brings me to Peppa Pig.

For those unfamiliar with this delightful British cartoon character, Peppa lives with her mummy and daddy and little brother George, who apparently has an expressive language disorder that no-one is in the least bit worried about. His only two utterances appear to be “dinosaur” and “Rrraarrrgggghhhh!” neither of which is core vocabulary and represent only two grammatical classes; noun and interjection. Sure he’s only a toddler pig but come on, his motor skills suggest he’s at least 24 to 30 months, so I’d expect him to have a much larger lexicon!

Language disorder aside, Peppa has an extended family in the form of Grandpa and Granny pig, who appear to be pretty well off considering they have a boat, which is not as common in the UK as in the US [2]. Then she has an extensive network of imaginatively named friends such as Suzy Sheep, Rebecca Rabbit, Zoe Zebra, Emily Elephant, and Delphine Donkey. It seems that initial consonant alliteration is a critical feature of animal nomenclature! But it’s actually a very good way to develop phonological awareness skills. According to Reese, Robertson, Divers, and Schaughency (2015):

…parents who play rhyming or alliteration games with their children, who sing rhyming songs more often with their children, or who engage in other types of wordplay (e.g., tongue twisters), may be fostering their children’s phonological awareness. (p.57).

Wittingly or unwittingly, the writers for Peppa Pig have built in so cute, subtle ways of providing viewers with phonemic cues that can help in speech sound development. And as Reese et al. also point out, “Children’s phonological awareness develops rapidly in the preschool years and is an important contributor to later reading skill. (p.54)” Clinicians and educators are usually much more aware of this. Thatcher (2010) points out that:

Children gain important information about rhyme and alliteration from learning poems and rhymes in which the prosodic features of the poem stress the shared sounds in the word. The profession of speech pathology must take possession of this area of early intervention… (p.476).

But wait, wait – there’s more! The didactic properties of Peppa Pig don’t just end with phonology. For the purpose of analyzing the vocabulary content of the show, I obtained a written set of transcripts from the complete first season [4] and ran the data through WordSmith 7, my trusty corpus linguistics software tool of choice. With this, I’m able to compare the frequency of use of words from the Peppa Pig sample with any other list that I choose. What I wanted to do was get an idea of how “core” the vocabulary in Peppa Pig is, and by “core” I mean how much of the entire vocabulary used is made up of high frequency words used by many people of many ages across different situations [5].

Being the author of Unity 84, a language program available in Prentke Romich devices, I choose the vocabulary associated with that as my core comparison. This is simply because it’s a set based on data from a number of core vocabulary studies and includes hundreds of low frequency nouns, which offer a little balance to a pure core list that would be weak in such words. But so long as I use the same core to make comparisons against other samples, the resulting “Core Scores” will be comparable [6].

So here’s how Peppa Pig fares in the “Core Score” arena.

corescorepeppapig

Core Score for Peppa Pig

What this means is that I counted ALL the instances of where core words were used in Season One, then counted all the instances of fringe words, and generated a simple percentage. So if someone is watching Peppa Pig, almost 83% of all the words they hear will be core words. I therefore give Peppa Pig a “Core Score” rating of 83.

It’s great to be able to toss out a number and say “Hey, this TV show is an 83” but that’s not tremendously useful unless there are comparisons. So I found a transcript for an episode of another of my favorite cartoons shows; SpongeBob SquarePants. And here’s how he did:

corescorespongebob

SpongeBob SquarePants Core Score

As you can see, SpongeBob gets a “Core Score” of 75, which tells me that my clients would be better off watching Peppa than SpongeBob if I want them to hear more core words. And in general, I would. After all, if I want to encourage clients to use more core words, putting them in situations where they hear lots of models of how those words are used is a solid goal.

Just out of curiosity, I applied the same analysis to three common, popular children’s books; Where the Wild Things Are, Goodnight Moon, and The Very Hungry Caterpillar. Here’s what I found:

corescorebooks

Books Core Scores

All of the preceding is not peer-reviewed research. It’s not even close. In fact, I’d even be hesitant to call it a “pilot study.” In the world of Business, it’s what we call a “Proof of Concept” – where you test out a few ideas so as to demonstrate that what you’re thinking about is something on which someone would be prepared to spend money [7]. But if you were to use it to argue the merits of suggesting that watching Peppa Pig is not a bad thing, then I think the data supports your decision!

References

Reese, E., Robertson, S.-J., Divers, S., & Schaughency, E. (2015). Does the brown banana have a beak? Preschool children’s phonological awareness as a function of parents’ talk about speech sounds. First Language, 35(1), 54-67.

Thatcher, K. L. (2010). The development of phonological awareness with specific language-impaired and typical children. Psychology in the Schools, 47(5), 467-480.

Notes
[1] It’s right there as number one in the Grandparent Commandments; “Thou shalt bestow upon thy grand offspring anything and everything they desire, and in the event that this is not possible, thou shalt feel perfectly OK with saying, ‘Oh sweetheart, that’s something to ask mommy and daddy.'”

[2] My older daughter and her husband have a boat on which my wife and I have spent some happy hours letting them do all the work of dragging it to a lake, dropping it in the water, steering it to the nearest lakeside bar, and paying the cost of repairs, maintenance, and storage required so that we can enjoy those 5 days in summer when the nautical life is the thing to embrace. Like having grandkids, having another family member own a boat means you can have all the pleasure but none of the responsibility.

[3] As further evidence that Peppa’s younger brother has a problem, note that he is one of the only character who does NOT have an alliterative name – he is “George Pig” as opposed to, say, Peter Pig or Paul Pig, or even Patrick Pig. So not only has he a more complex name structure to deal with than all the other animals, but he also has that initial “djuh” sound /d͡ʒ/ to struggle against. Poor George!

[4] My source is at “Glamour and Discourse”: Peppa Pig transcripts Season One. In the spirit of transparency, you’re free to use the same data and run your own analyses to see if they match with mine. I think they will but in a world driven by President Donald Trump’s “alternative facts” who’s to know?

[5] New visitors to this blog who are unfamiliar with the notion of what we refer to as a “core” vocabulary set in the field of augmentative and alternative communication (AAC) might like to check out the following posts:
Of Puck and Patois
Of Corpora and Concordances
The Monteverde Invincia Stylus Fountain Pen – and Keyword Vocabulary

[6] At a more technical level, the Unity core list is an unlemmatized list that consists “words” that are defined as “a string of letter terminating in a space or punctuation mark.” So the words eat, eats, and eating are counted as three distinct words, even though they are really just variations of the one lemma, <EAT>. A critical question in deciding on what constitutes a “core” list is whether it should include only root words such as eat and drink but not eating and drinking, or whether it should have all forms of a word in there. If you use a core that has eat but not eats, then any TV show or book that uses the word eats would not have that token counted towards a “core score” – but shouldn’t it? I’m open to suggestions, folks!

[7] I intend to test out a few more core lists in order play with the Core Score idea a little more.

28 Words to Boost Your Client’s Vocabulary – Maximum Bang for Buck

When developing a vocabulary set for an augmented and alternative communication (AAC) system – or indeed when deciding on what vocabulary to teach anyone – one of the most fundamental of measures you can use is frequency count; how often is a word used in a language? No-one can predict with 100% accuracy which words will be “best” for an individual, but if you’re going to take bets, you’re pretty safe to assume that words such as that, want, stop, and what are going to be used by everyone from ages 2 to 200. By the same token, you’d not be missing much if you didn’t spend too much time on words like ambidextrous, decalogue, and postilion [1].

In the field of AAC, this type of high frequency vocabulary that is used (a) across populations and (b) across situations is referred to as core vocabulary and it’s often contrasted with the phrase fringe vocabulary, which refers to words that are typically (a) low in frequency and (b) specific to isolated activities or situations. For a refresher on core and fringe – and an introduction to keyword vocabulary – check out my article entitled Small Object of Desire: The Monteverde Invincia Stylus fountain pen – and Keyword Vocabulary from two years ago.

The core/fringe distinction is now so embedded in the world of augmentative communication that it is rare to see any new app appear on the market that doesn’t use the phrase “core vocabulary” somewhere in its marketing blurb – even if it isn’t actually making good use of the core! And as core vocabulary is, by definition, common across ages, activities, situations, and pathologies, it’s not surprising that many AAC software offerings look the same, particularly with regard to the words being encoded [2].

But it’s worth taking a look at another level of frequency measurement, and that’s at the phrase level. Specifically, one area of research that seems to me to offer some value to Speech and Language pathologists and Educators working in vocabulary development is in the study of how phrasal verbs (PVs) are distributed.

PV 3

So what’s a phrasal verb? Well, simply put, it’s a phrase of two to three words that are yoked together, which include a verb and a preposition and/or adverb. Examples include, “I ran into Gretchen at the ATIA conference,” “I backed up my hard drive,” and “I came across an interesting article on phrasal verbs.” The English language is stuffed to the gills with these type of verbs, and a feature of them is that they tend to have multiple meanings.

To find out how polysemous a phrase can be, you can use the excellent WordNet online tool, a huge database of words and phrases that let you check out noun, verb, adjective, and adverb meanings. For example, would you believe that the simple phrase “give up” has 12 different meanings? Or that “put down” has 8 variations? It’s not surprising that learners of English find phrasal verbs quite challenging.

The other fascinating feature of phrasal verbs is summarized in a 2007 paper by Gardner and Davies, who point out that of you look at the 100 million word British National Corpus you find that;

…a small subset of 20 lexical verbs combines with eight adverbial particles (160 combinations) to account for more than one half of the 518,923 phrasal verb occurrences identified in the megacorpus. A more specific analysis indicates that only 25 phrasal verbs account for nearly one-third of all phrasal-verb occurrences in the British National Corpus, and 100 phrasal verbs account for more than one half of all such items. Subsequent semantic analyses show that these 100 high-frequency phrasal verb forms have potentially 559 variant meaning senses.

Read that again and see if you get the same tingle I did seeing those numbers. Over half the entire phrasal verbs found in the corpus can be accounted for by combining 20 verbs with 8 particles. In short, if you learn just 28 words, you’ve learned 50% of all the phrasal verbs you’ll need to use.

Let’s take a look at those Top 2o verbs first:

20 most frequent verbs in phrasal verbs

Table 1: Top 20 Verbs in PVs

And now the Top 8 particles:

Eight most frequently used particles in phrasal verbs

Table 2: Top 8 particles in PVs

All the verbs and prepositions as individual items are already high frequency, with the exception of perhaps the verbs point and set, which wouldn’t be on my list of “first words to teach.” However, the real bonus here is that not only do you get the benefit of teaching your client 28 high frequency words in isolation but if you then use them as phrasal verbs, your “bang for buck” is significant!

Here’s a link to a PDF of those 28 words: https://app.box.com/s/vng5hr2tctp87ufdjoyjvyv2ln8300yb

This frequency analysis of phrasal verbs by Gardner and Davies has recently been supported by and extended upon by Dilin Liu (2011) and by Mélodie Garnier and Norbert Schmitt [3] (2014). In their paper, The PHaVE List: A pedagogical list of phrasal verbs and their most frequent meaning senses, they point out that a limitation in Gardner and Davies’ analysis is that they failed to take into account the polysemy inherent in the phrases – like the 12 meanings of “give up.” In fairness to Gardner and Davies, they did, in fact, talk about the polysemous nature of PVs but didn’t offer any measure of the different frequencies with which the various meanings are used. They wrote that:

For instance, the list-high 19 senses of the PV break up … could be arranged from highest to lowest semantic frequency, thus prioritizing them for language learning. We acknowledge, however, that corpora of this nature are much easier talked about than constructed. (p.353).

Garnier and Schmitt are interested not just in identifying the frequency with which a phrasal verb occurs but also the most common senses of those PVs. They say that;

…our main purpose for creating the PHaVE List, which is to reduce the total number of meaning senses to be acquired to a manageable number based on frequency criteria.

On a pragmatic level, they want a learner not to have to learn every meaning of each PV but just focus on the most frequent, and therefore most useful meanings. Using the original list from Gardner and Davies, along with additions by Liu (2011), and including data from the Corpus of Contemporary American English (Davies, 2008), the duo created the PHaVE List; a list of the 150 most frequently used phrasal verbs, and 280 of the most frequently used meanings. So on the 12 potential meanings for “give up,” they use the following:

16. GIVE UP
Stop doing or having something; abandon (activity, belief, possession) (80.5%)
Example: She had to give up smoking when she got pregnant.

The general entry starts with a rank (in this case, 16th out of 150); the basic phrasal verb; a definition; a percentage frequency; and a specific example use. The complete list is made available as a download from the Sage journals website [4]. If you can get access to it, it is well worth the read and the download. And all the articles referenced in this article are good examples of how we can use corpus linguistics to help guide our practice of developing the vocabulary of our clients with language challenges.

References
Davies, M. (2008-). The Corpus of Contemporary American English: 425 million words, 1990-present. Available from Brigham Young University The Corpus of Contemporary America English, from Brigham Young University http://corpus.byu.edu/coca

Gardner, D., & Davies, M. (2007). Pointing Out Frequent Phrasal Verbs: A Corpus-Based Analysis. TESOL Quarterly, 41(2), 339-359.

Garnier, M., & Schmitt, N. (2014). The PHaVE List: A pedagogical list of phrasal verbs and their most frequent meaning senses. Language Teaching Research, 1-22.Published online before print http://ltr.sagepub.com/content/early/2014/12/08/1362168814559798.abstract

Liu, D. (2011). The Most Frequently Used English Phrasal Verbs in American and British English: A Multicorpus Examination. TESOL Quarterly, 45(4), 661-688.

Notes
[1] A postilion is the driver of a horse-drawn carriage, who sits posterior to the horses. The sentence “The postilion has been struck by lightning” is the basis of a wonderful little paper by the linguist David Crystal, published in 1995 in the journal Child Language Teaching & Therapy. Simply titled “Postilion Sentences,” Crystal defines a postilion sentence as “one which has little or no chance of ever being useful in real life. It could be used, obviously, because it is grammatically well-formed; but the contexts in which it would be natural to use it are either so restricted or so adult that the chances of a child encountering it, or finding it necessary to use it, are remote.” In the design of AAC systems, using pre-stored sentences may have some limited value but many “pragmatic utterances” turn out to be nothing more than postilions; unlikely to be used. This is why teaching sentences is neither language nor therapy.

Download Postilion sentences article

Enter a caption

[2] The now-common practice of using core vocabulary also makes it much harder to prove plagiarism – or as we Lancastrians would say, “nicking someone else’s ideas.” People, of course, don’t “steal” ideas – they are “inspired” by the work of others. But such inspiration inevitably leads to systems appearing almost clone-like in their structure. It’s only when you get to the fine details of how words are organized and encoded that you can separate the wheat from the chaff. And there’s a lot of chaff out there.

[3] If I haven’t mentioned it before, Norbert is the author of an excellent book on vocabulary research methods. Here’s the full reference: Schmitt, N. (2010). Researching vocabulary : a vocabulary research manual. Houndmills, Basingstoke, Hampshire ; New York, NY: Palgrave Macmillan. It’s full of useful information and lots of web links worth exploring, and worth the $30 you’ll spend on Amazon US – or the £20.99 in the UK.

[4] Just a reminder to all members of the Royal College of Speech and Language Therapists that you membership benefits includes access to a number of Sage journals online, and Language Teaching Research is one of those. In fact, you have access to over 700 (yes, count ’em!) titles, including my personal favorites Child Language Teaching and Therapy, Clinical Linguistics & Phonetics, English Today, and the riveting Scandinavian Journal of Occupational Therapy. OK, so I lied about the last one being a “favorite” 🙂

The State of the Union Address 2015: “We Are Family…”

Within seconds of a President turning off the autocue, political pundits stop trembling, wipe the drool from their lips, and spend the next 2 years talking incessantly about what was said. A single speech that clocks in at just under 6,500 words can single-handedly generate more web pages than the callipygian [1] Kim Kardashian can generate page clicks. Being a dude, you might think that this post is now about to become an excuse to share a picture of the ample Ms. Kardashian’s gluteus  maximus in all it’s shiny glory – but you’d be wrong! What I’m actually more interested in doing is taking a more detailed look at the vocabulary that Barack Obama used from the basis of corpus linguistics and concordance software. At this point, 90% of the guys who found this post by googling “Kim Kardashian’s ass” will leave. Sorry, dudes.

The data came from a transcript available from Time.com, which I then used as input for WordSmith 6.0 software, a corpus analysis tool. Of the many things this software will let analyze, the ones we’ll look at here are word frequencies, keywords, and concordances.

Keywords are those words that appear in a sample as being used significantly more or less than they are typically used in the general population. In the case of WordSmith, the “general population” is a list know as the British National Corpus, a sample of some 100 million words used in British English (BrE).

The “teachable moment” here is to think about why I chose this sample. Now I know – because I have a ear for these things – that Barack Obama does not use British English; his accent is also a bit of a giveaway. However, for the purpose of this analysis, I don’t think the frequency differences between BrE and American English (AmE) are significant enough to warrant worrying about it. I could have used a different sample called the American National Corpus but that’s only good for 14 million words, which is much smaller than the BNC. Therefore, I chose to go for the larger corpus, knowing that there may be some variations between the two but not, in my opinion, enough to skew the analysis.

Top 25 words by frequency

Fig 1: Top 25 words by frequency

If we take a look at the most frequently used words in the speech, you’ll see that they are pretty much what you might expect on the basis of typical distributions. The word the is the most frequent in the English language and seeing it atop the President’s list is uninteresting. What is interesting is that the pronouns we and our are right up there above I and you. Pronouns regularly score high on frequency lists, and it’s one of the reasons practitioners in the field of Augmentative and Alternative Communication (AAC) should make sure these words are targeted. But the fact that we and our appear so high up the list (at #4 and #8 respectively) made me wonder; is this what we might expect to see in general? And that, my friends, is why we turn to a keyness list.

Top 25 words by keyness

Fig 2: Top 25 words by keyness

Take a look at that keyness column and notice how both we and our are way up there at #2 and #3. Ignoring for now the intricacies of how those keyness figures are calculated [2], what is significant is that the Pres is using those two pronouns significantly more than how anyone else would use them in general, and that reflects a conscious effort to come across as one of “us” and not an “I” or “me” who is doing things. He’s appealing to a “Spirit of Unity.”

You can see more evidence for this appeal if you simply look at the keyness of # 4 and #8 – America and Americans. He’s certainly using the words with more frequency than you’d find in a regular sample but we can perform one more kind of analysis in order to see just how he’s using them; and that’s to create a concordance.

A concordance is a list that shows instances of a word in context, along with the words that go before and after it. Below is a concordance for the word Americans as used alongside our:

Concordance of instances of the words americans and we

Fig 3: Concordance showing WE and AMERICANS

Given that there were 19 instances of the word Americans being used in total, this pairing accounts for over 30% of the use of Americans and we. So as well as using the pronouns themselves to paint a picture of unity, he’s yoking one of them with Americans to further that underlying message.

Casting your eyes just a few more lines down the keyword list you’ll see the words jobs and the economy coming in at #11 and #12, not too far above families (#14) and childcare (#16). Here we see Obama invoking notions of family and economics, both of which are important to voters because we are all involved at some level with both! But take a look at the concordance for how the word family is being used and see if you can spot some familiar words:

Concordance of the word FAMILIES

Fig 4: Concordance of the word FAMILIES

Notice how our and American are also used along with families, further reinforcing that Spirit of Unity. In fact, Obama even makes that relationship between families and the United States in the following few sentences:

“It is amazing,” Rebekah wrote, “what you can bounce back from when you have to…we are a strong, tight-knit family who has made it through some very, very hard times.” We are a strong, tight-knit family who has made it through some very, very hard times. America, Rebekah and Ben’s story is our story.

So not only do we hear this explicit appeal to family but by analyzing the words he uses throughout the speech using keywords and concordances, we can tease out those subliminal nods and pointers toward an underlying message: We are family [3].

Notes
[1] Callipygian is one of my favorite words and, like many of them, deserves to be used much more than it is. The Oxford English Dictionary defines the word as, “of, pertaining to, or having well-shaped or finely developed buttocks,” which in turn comes from the Greek words kalli meaning “beauty” and pygi meaning “buttocks or rump.” Incidentally, an old word for someone who engages in anal intercourse is a pygist, and the adjective dasypygal means “having hairy buttocks.” Try using the last one next time you want to insult folks – especially if they’re making asses of themselves!

[2] So for that one person out there who has less of a life than I have, you basically count the number of times your target word occurs out of a sample of X words in total, then match that against the number of times the same word occurs in your reference corpus of Y words in total. Here’s the word we in a little 2 x 2 box:
Measure of usage of the word WEBecause I always prefer an easy life when it comes to all things numerical, I used an online calculator to take these figures to calculate a “log-likelihood” figure – the “keyness” number. You can find that site here: http://sigil.collocations.de/wizard.html

When the site works its magic, you see the score expressed as G-Squared below:

SOTUA2015 LogLiklihood
Take a look at that G-Squared figure and then look back at the Fig 4 and you’ll see the keyness figure is (almost) the same. You can try this with any of the value in Fig 4 and you’ll see that the online calculator scores match those of the WordSmith software.

[3] It was the end of the 70s and tight spandex leggings were all the rage – for the ladies – and Sister Sledge had a monster hit with “We Are Family” from the album of the same name. Apparently the Sisters are still touring to this very day – although I’m not sure if they’re still wearing spandex.

ColorBrewer: Utilizing cartography software for color coding

It seems that I am getting a reputation for being a teensy-weensy bit doryphoric [1] and that may have some truth in it insofar as I hate – with a passion – the tendency for people to use the word “utilize” rather than “use” simply because the former sounds more erudite. It’s not, in fact, erudite; it’s just plain wrong. As I’ve said in previous posts, “utilize” means “to use something in a manner for which it was not intended.” So I can “use” a paper clip to hold a set of pages together; but I can “utilize” it to scoop wax out of my ears or stab a cocktail olive in my vodka martini (shaken, not stirred).

Colorado beetle

Doryphoric

So when I titled this post with “utilizing cartography software” I really do mean that and I’m not trying to sound clever by using a four-syllable word (utilizing) over the simpler two-syllable using. No siree, I say what I mean and I mean what I say: utilize. The software in question is online at ColorBrewer: Color Advice for Cartography and its original purpose was to help map makers choose colors that provide maximum contrast. Let’s create an example. Suppose you have a map of the US and you want to use colors to show the average temperatures as three data sets; below 50F, 51F-65F, and above 65F. You can use three colors in one of three different ways:

  • (a) Sequential: Three shades of a chosen color, from light to dark, indicating low to high values.
  • (b) Diverging: Three colors that are equally spaced chromatically, with the mid-range color marking the transition between the two extremes.
  • (c) Qualitative: Three colors that split the data into three distinct groups, such as apples/oranges/bananas or trains/boats/planes – or, for the statisticians out there, any nominal-level data.

For a map of temperature averages, you’d choose the sequential coding so as to show the degree of change. Here’s what such a map might look like:

Three data point colors

Compare this with a version whereby we choose to have six data points rather than three, i.e. below 45F; 46F-50F; 51F-55F; 56F-60F; 61F-65F; above 65F.

Six data points colors

What the software does that is interesting is that it automatically generates the colors such that they are split into “chromatic chunks” that are equally different. The lowest and highest color values for each map are the same but the shades of the intermediate colors are changed. If you were to choose a set of 10 data points, the software would split those up equally.
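I don't know ColorBrewer's exact algorithm (its palettes are perceptually tuned), but the general idea of equally spaced "chromatic chunks" can be approximated with a naive linear interpolation in RGB space. The anchor hex values below are purely illustrative:

```python
# Sketch of a "sequential" scheme: interpolate linearly between a light
# and a dark anchor color, producing n equally spaced shades.

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#{:02X}{:02X}{:02X}".format(*rgb)

def sequential_scheme(light, dark, n):
    """Return n equally spaced shades running from light to dark."""
    lo, hi = hex_to_rgb(light), hex_to_rgb(dark)
    shades = []
    for i in range(n):
        t = i / (n - 1)  # 0.0 for the lightest shade, 1.0 for the darkest
        shades.append(rgb_to_hex(tuple(round(lo[c] + t * (hi[c] - lo[c]))
                                       for c in range(3))))
    return shades

print(sequential_scheme("#EDF8E9", "#005A32", 5))
```

Changing the 5 to a 10 splits the same light-to-dark range into ten steps, which is exactly the behavior described above.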

Of course, as the number of data points increases, the perceptual difference between them decreases, i.e. it becomes harder to see a difference. This is one of the limitations of any color-coding system: the more data differentials you want to show, the less useful colors become. You then have to introduce another way of differentiating – such as shapes. Twenty shades of gray are hard to tell apart, but with squares, triangles, rectangles, and circles available, you need only five shades per shape to cover the same twenty categories.
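The combinatorics behind that color-plus-shape trick can be sketched in a couple of lines (the category names here are just placeholders):

```python
# Crossing a handful of colors with a handful of shapes multiplies the
# number of distinguishable categories without pushing the colors
# perceptually closer together.
from itertools import product

colors = ["gray1", "gray2", "gray3", "gray4", "gray5"]
shapes = ["square", "triangle", "rectangle", "circle"]

codes = list(product(colors, shapes))
print(len(codes))  # 5 colors x 4 shapes = 20 distinct codes
```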

One of the areas where color coding is used in Speech and Language Pathology is AAC and symbols. In the system of which I am an author [2] color coding is used to mark parts of speech. But suppose you were going to invent a new AAC system and wanted to work out a color coding scheme, how might you utilize the ColorBrewer website?

If you’re going to design your system using a syntactic approach (and I highly recommend you do that because that’s how language works!) you could first identify a color set for the traditional parts of speech; VERB, NOUN, ADJECTIVE, ADVERB, PRONOUN, CONJUNCTION, PREPOSITION, and INTERJECTION [3]. This looks suspiciously like a nominal data set, which corresponds to the Qualitative coding method mentioned at (c) above. So you go to the ColorBrewer site and take a look at the panel in the top left:

ColorBrewer Panel

You can set the Number of data classes to 8, the Nature of your data to qualitative, and then pick yourself a color scheme. If you choose the one in the graphic above, you'll see the following set recommended:

Eight color data set

For the sake of completeness, here are all the other options:

ColorDataQualSet2

You can now choose one of these sets knowing that the individual colors have been generated to optimize chromatic differences.

So let’s assume we go for that very first one that starts with the green with the HTML color code #7FC97F [4]. I’m going to suggest that we then use this for the VERB group and that any graphics related to verbs will be green. Now I can move to step 2 in the process.

Verbs can actually be graded in relation to morphological inflection. There are a limited number of endings: -s, -ing, -ed, and -en. Knowing this, I can go back to the ColorBrewer site and use the sequential setting to get a selection of possible greens. This time I changed the Number of data classes to 5 and the Nature of your data to Sequential. Here's the suggested set of equally chromatically spaced greens:


ColorDataQualPanel2

This now gives me the option to code not just verbs but verb inflections, while chromatically signaling “verbiness” by green. Here’s a symbol set for walk and write that uses the sequential – or graded – color coding:

Color-coded symbols

If you want an exercise in AAC system design, knowing that ADJECTIVES also inflect like verbs using two inflectional suffixes, -er and -est, you can try using the ColorBrewer to create color codes [5].

There are probably many other ways to utilize the site for generating color codes. For example, you might want to create colors for Place of Articulation when using pictures for artic/phonology work, and seeing as there is a discrete number of places, it should be easy enough. Why not grab yourself a coffee and hop on over to ColorBrewer now and play? But only use it if you're creating a map. Please!

Notes
[1] A doryphore is defined by the OED as “A person who draws attention to the minor errors made by others, esp. in a pestering manner; a pedantic gadfly.” It comes from the Greek δορυϕόρος, which means “spear carrier,” and it was originally used in the US as a name for the Colorado beetle – a notable pest. This beetle was known as “the ten-striped spearman,” hence the allusion to a spear carrier.  To then take the noun and turn it into an adjective by adding the -ic suffix meaning “to have the nature of” was a piece of cake – and a great example of using affixation to change a word’s part of speech. As always, you leave a Speech Dudes’ post far smarter than you entered it!

[2] Way back in 1993 I was invited to join the Prentke Romich Company’s R&D department as one of a team of six who were tasked with developing what became the Unity language program. The same basic program is still used in PRC devices and the language structure has been maintained such that anyone who used it in 1996 could still use it in 2014 on the latest, greatest hardware. The vocabulary also uses color coding to mark out Parts-of-Speech but not exactly like I have suggested in this article. Maybe next time…?

[3] The notion of 8 Parts-of-Speech (POS) is common in language teaching but as with many aspects of English, it’s not 100% perfect. For example, words like the, a, and an can be categorized under Adjectives or added to a class of their own called Articles or – by adding a few more – Determiners. So you might see some sources talking about 9 Parts-of-Speech, and I like to treat these as separate from adjectives if only because they seem to behave significantly differently from a “typical” adjective. Another confounding factor is that some words can skip happily between the POS and create minor havoc; light is a great example of this. The take-away from this is that sometimes, words don’t always fit into neat little slots and you need to think about where best to put them and how best to teach them.

[4] In the world of web sites, colors are handled in code by giving them a value in hexadecimal numbers – that’s numbers using base 16 rather than the familiar base 10 of regular numbers. Black is #000000 and white is #FFFFFF. When you’re working on designing web pages, it’s sometimes useful to be able to tell a programmer that you want a specific color, and if you can give them the precise hex code – such as #FF0000 for red and #0000FF for blue – then it makes their job easier and you get exactly what you need. You can also something called RGB codes to described colors, based on the way in which the colors (R)ed, (G)reen and (B)lue are mixed on a screen. Purple, for example, is (128,0,128) and yellow is (255, 255, 0). Take a look at this Color Codes page for more details and the chance to play with a color picker.
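The hex/RGB relationship described in that note is easy to demonstrate in code, since hex codes are just base-16 RGB triples:

```python
# Convert between hex color codes and RGB triples.
def hex_to_rgb(code):
    code = code.lstrip("#")
    # Each pair of hex digits is one channel: RR, GG, BB.
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r, g, b):
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(hex_to_rgb("#FF0000"))    # red    -> (255, 0, 0)
print(rgb_to_hex(128, 0, 128))  # purple -> #800080
```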

[5] I suppose I should toss in a disclaimer here that I’m not suggesting that creating an AAC system is “simply” a matter of collecting a lot of pictures with colored outlines and then dropping them into a piece of technology. There is much more to it than that (ask me about navigation next time you see me at a conference) so consider this article just one slice of a huge pie.

The Dudes Dissect “Closing the Gap” 2013: Day 2 – Of Speech and Sessions

Having looked at the vocabulary used in the Closing the Gap 2013 preconference sessions, it's time to cast a lexical eye on the over 200 regular presentations that took place over two-and-a-half days. For most attendees, these are the "bread and butter" of the conference, and choosing which to attend is a skill in and of itself. It's not uncommon [1] to have over ten sessions running concurrently, which means you're only getting to attend a tenth of the conference!

So let’s take a look at the vocabulary used in the titles of all these presentations to get a flavor of the topics on offer.

Conference Presentations: Titles

The total number of different words used in the session titles was 629 after adjusting for the top 50 words used in English [2]. As a minor deviation, kudos to all who used the word use correctly instead of the irritatingly misused utilize. Only one title included utilizes – and it was used incorrectly; the rest got it right! For those who are unsure about use versus utilize, the simple rule is to use use and forget about utilize. The less simple rule is to remember that utilize means “to use something in a way in which it was never intended.” So, you use a pencil for drawing while you utilize it for removing wax from your ear; you use an iPad to run an application while you utilize it as a chopping board for vegetables; and you use a hammer to pound nails but utilize it to remove teeth. Diversion over.

Top 20 Most Frequent Words in Titles

No prizes for guessing that the hot topic is using iPad technology in AAC. Your best bet for a 10-word title for next year’s conference is:

How your students use/access iPad AAC apps as assistive technology

This includes the top 10 of those top 20 words, so your chances of getting accepted are high.

Conference Presentations: Content Words

The total word count for the session descriptions text is 2,532 different words (excluding the Stop List), which is a sizable number to play with. And when I say “different words,” I mean that I am basically counting any text string that is different from another as a “word.” So I count use, uses, used, and using as four words, and iPad and iPads as two. A more structured analysis would take such groups and count them as one “item” – or what we call a LEMMA. We’d then have a lemma of <USE> to represent all the different forms of use, which lets us treat use/used/uses/using as one “word” that changes its form depending on the environment in which it is sitting. [3]
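A minimal sketch of that lemma-based counting might look like this. Real lemmatizers are far more involved; the tiny hand-built lookup table here is purely illustrative:

```python
# Map each surface form to its lemma before tallying, so that
# use/used/uses/using all count as one "item".
from collections import Counter

LEMMAS = {"use": "use", "uses": "use", "used": "use", "using": "use",
          "ipad": "ipad", "ipads": "ipad"}

def lemma_counts(words):
    # Fall back to the word itself when it has no entry in the table.
    return Counter(LEMMAS.get(w.lower(), w.lower()) for w in words)

print(lemma_counts(["use", "using", "iPad", "iPads", "used"]))
# -> Counter({'use': 3, 'ipad': 2})
```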

Top 50 Words By Frequency in Session Content

Top 50 Words By Frequency

A 2,300-word graphic would be rather large, so I opted to illustrate the top 50 most frequently used words. As you can see, the top words seem to be the same as those in the titles, which suggests that on balance, presenters have done a good job overall in summarizing their presentation contents when creating their titles – something that is actually the strategy you should use.

Keywords in Content

Finally, let’s take a look at the keywords in the session content descriptions. Remember, the keywords are those that appear in a piece of text with a frequency much higher than you would expect in relation to the norm.

Top 20 words by Keyness score

Top of our list here is apps, with the iPad coming in at number three. Fortunately this fetish for technology is tempered by the inclusion in our top 20 of words like strategies, learn, how, and skills, all critical parts of developing success in AAC that are extra to the machinery. It’s good to think that folks are remembering that how we teach the use of tools is far, far more important than obsessing over the tools themselves.

Coming next… The Dudes Dissect Closing the Gap: Day 3 – Of Content and Commerce. In which the Dudes look at the marketing blurbs of the Closing the Gap exhibitors to discover the “hot button” words intended to make you want to buy!

Notes
[1] WordPress’s spell and grammar checker flagged the phrase “it’s not uncommon” as a double negative and told me that I should change it because, “Two negatives in a sentence cancel each other out. Sadly, this fact is not always obvious to your reader. Try rewriting your sentence to emphasize the positive.” Well, although I generally agree that you shouldn’t use no double negatives, the phrase “not uncommon” felt to me to be perfectly OK and not at all unusual. I therefore took a look at the Corpus of Contemporary American English and found that “it’s not uncommon” occurs 313 times while “it’s common” scores 392. This is as near to 50/50 as you get so I suggest to the nice people at WordPress that “it’s not uncommon” is actually quite common and thus quite acceptable – despite it being a technical double negative.

[2] For the curious among you, here are the contents of the Stop List I have been using, which is based on the top 50 most frequently used words in the British National Corpus (BNC): THE, OF, AND, TO, A, IN, THAT, IS, IT, FOR, WAS, ON, I, WITH, AS, BE, HE, YOU, AT, BY, ARE, THIS, HAVE, BUT, NOT, FROM, HAD, HIS, THEY, OR, WHICH, AN, SHE, WERE, HER, ONE, WE, THERE, ALL, BEEN, THEIR, IF, HAS, WILL, SO, NO, WOULD, WHAT, UP, CAN. This is pretty much the same as the top 50 for the Corpus of Contemporary American English, except that the latter includes the words about, do, and said instead of the BNC’s one, so, and their. Statistically, this isn’t significant so I suggest you don’t go losing any sleep over it.

[3] When you create and use lemmas, you also have to take into account that words can have multiple meanings and cross boundaries. In the example of use/used/uses/using, clearly we’re talking about a verb. But when we talk about a user and several users, we are now talking about nouns. So, we don’t have one lemma <USE> for use/used/user/users/uses/using but two lemmas <use(v)> and <use(n)> to mark this difference. It gets even more complicated when you have strings such as lights, which can be a verb in “He lights candles at Christmas” but a noun in “He turns on the lights when it’s dark.” When you do a corpus analysis of text strings, these sort of things are a bugger!

The Dudes Dissect “Closing the Gap” 2013: Day 1 – Of Words and Workshops

Regular readers of the Speech Dudes will know that when the “Dudes Do…” a conference, Day 1 is typically all about the travel experience, usually including some unfavorable comments about taxi cabs and hotel coffee, but this time I’m feeling charitable and, although not yet ready to “Hug a Cabbie,” I’ve decided to provide an overview of the preconference sessions, which I didn’t attend.

Now, you may think that not having attended a workshop might put me at a bit of a disadvantage with regard to reporting on content and offering a critique – and you would be right. On the other hand, what I can comment on is the contents of the preconference brochure that everyone can have access to prior to the actual event and which they use to decide the workshops and sessions they want to attend.

So what you’re going to see is an example of corpus linguistics in action, dissecting the very words used to influence YOUR choices. In short, you’re about to learn about what words presenters and marketers use to make up your mind for you. Grab your coffee, hold on to your hats, and prepare to be amazed at what you didn’t know!

Methodology

The Dudes are big believers in the scientific method and the application of evidence-based practice. We strive for some objectivity where possible, although we acknowledge that our occasional rants may be just a tad subjective. We don’t expect our readers to take everything we say as gospel, so sharing the methodology of how we analyzed our data seems fair.

The raw data came straight from the official conference brochure, available for anyone to check at http://www.closingthegap.com/media/pdfs/conference_brochure.pdf. From that I extracted all the text in the following categories:

  • Preconference Workshop Titles
  • Preconference Workshop Course Descriptions
  • Conference Session Titles
  • Conference Session Descriptions
  • Exhibitor Descriptions

Technically, I simply did a cut-and-paste from the PDF and then converted everything to TXT format because that’s the format preferred by the analysis software I use.

WordSmith 6 is a wonderful piece of software that lets you chop up large collections of text and make comparisons against other pieces of text. These comparisons can then show you interesting and fascinating details about how those words are being used. I’ve talked in more detail about WordSmith in our post, The Dudes Do ISAAC 2012 – Of Corpora and Concordances, so take a look at that if you want more details.

Once I have the TXT files, I can create a Word List that gives me frequency data, but I also use a Stop List to filter out common words. If you simply take any large sample of text and count how often words are used, you’ll find that the top 200 end up being the same – that’s what we call Core Vocabulary. And when you’re looking for “interesting” words, you really want to get rid of core because it’s… well… uninteresting! Hence a Stop List to “stop” those words appearing. [1]
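A bare-bones version of that Word List + Stop List step looks something like this (the stop list is truncated for brevity, and the sample sentence is invented):

```python
# Count every word in a text, then drop anything on the stop list so
# only the "interesting" words remain.
from collections import Counter
import re

STOP = {"the", "of", "and", "to", "a", "in", "that", "is", "it", "for"}

def interesting_words(text):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP)

sample = "The iPad is a tool and the iPad is used in AAC for communication."
print(interesting_words(sample))
```

Everything left in the Counter is non-core vocabulary, which is exactly what the frequency tables below are built from.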

Preconference Workshop Titles

The first opportunity you have to encourage folks to come to your session is to have a title that makes a reader want to find out more about what you have to offer. The title is, in fact, the door to your following content description. Of course, you have to find some balance between “catchy” and “accurate.” For example, a paper I presented at a RESNA (Rehabilitation Engineering and Assistive Technology Society of North America) conference entitled Semantic Compaction in the Dynamic Environment: Iconic Algebra as an Explanatory Model for the Underlying Process was, in all fairness, technically accurate, but from a marketing perspective it had all the appeal of a dog turd on crepe. [2]

Let’s therefore take a look at what seem to be the best words to use if you want to attract a crowd.

Preconference Sessions: Keywords in Titles

High frequency words in preconference titles

The Word Cloud here counts only words that appeared twice or more, and the size of the words is directly proportional to frequency, so it’s clear that students is a critical word to use, followed closely by iPad, technology, learning, and communication. On that basis, if you’re planning to submit a paper for 2014, here’s your best “10-word-title” bet for getting (a) accepted and (b) a crowd:

The implementation of iPad technology for learning and communication

In the event that the CTG review committee find themselves looking at multiple courses submitted with the same title, you’re going to have to consider how you describe your actual course contents – and luckily, we can help there, too!

Preconference Sessions: Keywords in Course Content

The actual highest frequency words were workshop and participants, which is something of an artificial construct because most people include phrases such as “in this workshop, participants will…” and so I removed these from my keyword analysis.

Frequent words in preconference sessions content

So to further enhance the pulling power of your course, you need to be talking a lot about students, how they use iPads and communication, along with using apps to learn, enhance learning, and any strategies that help meet needs. In fact, you need to include any of these Top Ten words:

Top Ten keywords in preconference session content

But wait, wait… there’s more

I’ve been using the word keywords to refer to those words that appear within a piece of text more frequently than you would expect based on comparing them to a large normative sample. If you perform a keyword analysis on the preconference contents sample, you find that the top five keywords that appear are iPad, iPads, AAC, apps, and students. This suggests that we do an awful lot of talking about one very specific brand name device – which is good news for the marketing department at Apple!

Top 15 words by Keyness score

The relevant score is the keyness value. The higher the keyness, the more “key” the word is i.e. its frequency in the sample is significantly higher than you would expect to see in the normal population. So when you look at the table above, you’re not just seeing frequency scores but how significantly important words are. [3] As an example, the word iPads is used less frequently than the word communication (10 occurrences as against 16) but iPads is almost twice as “key” as communication i.e. it is significantly more important.

Now, as a final thought for folks who are working in the field of AAC (augmentative and alternative communication), I suggest that if you are developing vocabulary sets for client groups, using frequency studies is certainly a good start (and more scientific than the tragically common practice of picking the words “someone” thinks are needed) but if you then introduce a keyness analysis, you can improve the effectiveness of your vocabulary selection.

Coming next… The Dudes Dissect Closing The Gap 2013: Day 2 – Of Speech and Sessions. In which the Dudes present an analysis of the words used to describe conference session titles and contents. Find out how to improve your chances of getting your paper presented!

Notes
[1] In truth, there is more I could say about the methodology, and were this intended to be a peer-reviewed article for a prestigious journal, rest assured I’d go into much more detail about some of the finer points. However, this is simply a blog post designed to educate and entertain, so I ask you to allow me some leeway with regard to precision. I’m happy to share the raw data with folks who want to see it but all I ask is you don’t toss it around willy-nilly.

[2] Not only did it have a title that included the word “algebra” but it was scheduled for 8:00 am on the final day (a Saturday, no less) of the conference. Surprisingly, people showed up – which says more about the sort of folks who attend RESNA conferences rather than anything about my “pulling power” as a presenter.

[3] There is a mathematical formula for the calculation of keyness values. One way is to use the Chi-Square statistic; the other is to use a Log-likelihood score, which is something like a Chi-Square on steroids. As I’ve often said, I didn’t become an SLP because of my ability to handle math and statistics, so I admit to finding these things a strain on my brain. However, for the non-statistically inclined among us, the point is that both these measures simply compare the frequency value of a word from an experimental sample against the frequency value it has in a very large comparative sample (such as the British National Corpus or the Corpus of Contemporary American English), and then show you how similar or dissimilar they are. If their frequencies are very, very dissimilar, the word from the experimental sample is a keyword – like iPad and AAC in the examples above. Now feel free to pour yourself a drink and let your brain relax.
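For the curious, one common formulation of that log-likelihood comparison can be sketched in a few lines. The counts in the example are invented purely to show the calculation, not taken from the conference data:

```python
# Log-likelihood keyness: compare a word's frequency in a small sample
# against its frequency in a large reference corpus. High scores mean
# the word is far more frequent in the sample than chance would predict.
import math

def log_likelihood(freq_sample, size_sample, freq_ref, size_ref):
    # Expected frequencies if the word were evenly spread across both corpora.
    e1 = size_sample * (freq_sample + freq_ref) / (size_sample + size_ref)
    e2 = size_ref * (freq_sample + freq_ref) / (size_sample + size_ref)
    ll = 0.0
    if freq_sample:
        ll += freq_sample * math.log(freq_sample / e1)
    if freq_ref:
        ll += freq_ref * math.log(freq_ref / e2)
    return 2 * ll

# A word appearing 40 times in a 5,000-word brochure but only 100 times
# in a 100,000,000-word reference corpus scores as strongly "key":
print(log_likelihood(40, 5_000, 100, 100_000_000))
```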

Little Things DO Matter – Even Little Words

Sometimes the linguistic stars align and a lexical event of supernova proportions takes place. More specifically, unless you’ve been taking a vacation on an island without an internet connection or phone service [1] you’ve doubtless learned about the word twerk and, if you’re really unlucky, seen it demonstrated by pop princess Hannah Montana Miley Cyrus. Once the idol of millions of teen girls across the world, Miley is now the idol of millions of aged perverts who can’t wait for her to make a real porn movie instead of the “R”-rated performance she provided for the VMA Awards ceremony on August 25th, 2013.

Public domain image

Let’s twerk!

Tempting as it is to pander to the prurient and show you videos and pictures, there’s little need to do that because, at this moment in time, I suspect 50% of the world’s internet content is already full of such material, and if you start typing “Miley Cyrus” into your search engine, you’ll probably get millions of links even before you get past the third letter in her name!

As an SLP working in AAC, my interest is strictly professional and concerns the revelation from August 28th that twerk has officially entered the Oxford Dictionaries Online (ODO) site – that’s just three days after Miley’s graphic demonstration. So, just in case you are unsure, here’s the actual definition of twerk as used by Oxford:

Pronunciation: /twəːk/
Verb [no object]: informal.
Dance to popular music in a sexually provocative manner involving thrusting hip movements and a low, squatting stance.

Surprisingly, it was first noted in the 1990s, and it is thought to be an alteration of the word work in the sense of “work it, baby, work it.” Normally when new words are added to the ODO, it’s fairly low-key and only word nerds really care. However, in this instance, it’s as if the Oxford marketing department had contracted with Miley to do her bump-and-grind act purely to promote the “release” of the new word – and a spectacular release it was! As I write, typing twerk into Google search returns 20,300,000 hits. Hell, “The Speech Dudes” only gets a paltry 4,990,000 hits!

So let’s think a little about what we can learn from this little episode because we, the Dudes, would like to think of our little piece of cyberspace here as being educational – in the most laid-back of ways, of course.

When the inclusion of twerk was announced to the world, thousands of commentators leaped forward to say that it was now a “real word” because it was “in the dictionary.” I want you all to take another look at that second phrase, “in the dictionary.” The significant element is the use of the word the as a determiner that precedes a noun. Typically, we use the – often referred to as the definite article – to refer to a single, specific thing. But we use the word a (or an) – the indefinite article – to refer to one of many things. There is a world of difference between “Pass me the pencil” and “Pass me a pencil.” There’s an even bigger difference between, “Hey, you’re just the man!” and “Hey, you’re just a man!” And although some folks treat the and a/an as merely “fillers” that can be ignored, there are some occasions where they are absolutely crucial to the meaning of a sentence. Tell me “You’re the shit!” and I’m happy; tell me “You’re a shit!” and I’m a wee bit upset.

In this instance, the reality is that twerk has been added to a dictionary and not the dictionary. If it had been added to the dictionary, we’d have had to be clear which one that was, and then agree that it was the only one that mattered. For me, “the dictionary” is the 20-volume complete Oxford English Dictionary (OED), 2nd edition, and anything else is “a dictionary.” But for twerk, as I mentioned earlier, the dictionary in question is the Oxford Dictionaries Online dictionary, which is a very different beast than the OED. A number of commentators failed to mention this, and indeed some suggested it was the OED.

Picture of a dictionary

Is it in the dictionary?

The ODO is what you would call a “living dictionary,” which is aimed at capturing the global lexicon as it exists now. It’s a less profane and more researched version of the Urban Dictionary, which is also a living dictionary but without vetting or investigation. Words can, in fact, be taken out of the ODO if they cease to be used, whereas once a word gets into the OED, it never leaves. This is because the OED is a “historical dictionary” that aims to trace the meanings of a word from its earliest known use through to either its demise (anyone used shrepe [2] lately?) or its latest meaning. For example awful didn’t originally mean “terrible” but “wonderful” – it referred to something that left you full of awe.

Something else we can learn is the speed at which a new word can be used in its constituent morphological forms i.e. twerk, twerking, twerks, and twerked. Using ghits [3], we see the following hit figures, which gives us some idea of the distribution of the word as a whole [4]:

     twerk: 20,300,000
     twerking: 17,700,000
     twerks: 2,850,000
     twerked: 439,000

Not surprisingly, we find that an adjective form also exists, twerky (71,200 ghits) but there’s a dearth of adverbial examples with twerkily only scoring 84 ghits, which is close to nothing. I should, however, now total all these up because they are all forms of the base form twerk, pushing the total ghit score up from 20,300,000 to just over double at 41,360,200.
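The arithmetic behind that total can be checked in a couple of lines, using the ghit figures quoted above:

```python
# Totting up the ghit scores for all forms of the lemma TWERK
# (twerkily's 84 hits are close enough to nothing to leave out).
ghits = {
    "twerk": 20_300_000,
    "twerking": 17_700_000,
    "twerks": 2_850_000,
    "twerked": 439_000,
    "twerky": 71_200,
}
print(sum(ghits.values()))  # 41360200 -- just over double the base form alone
```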

For folks working on teaching vocabulary, the “teachable moment” from the whole Miley Cyrus debacle would be to use the word twerk as a springboard for reinforcing regular morphology. Thus, any worksheet along the following lines would be splendid:

“Miley Cyrus says she likes to TWERK. In fact, she TWERK___ a lot! We saw a video yesterday and she was TWERK____. Some people think she shouldn’t have TWERK___ at all.” [5]

So there you have it. Vocabulary, morphology, frequency studies, and the critical importance of the definite and indefinite articles. And who says the Speech Dudes site isn’t educational?

Notes
[1] If you’ve never done this, it’s highly recommended. It’s what used to be called a vacation, when you went away to somewhere very different from your home and spent one or two weeks doing fun and relaxing things that were not work related. Sadly, many people are now permanently connected to their jobs via smart technology and actually start their vacation mornings by checking work emails or making a couple of calls. This is not called a vacation; it’s called working from home – for which you don’t get paid. Cutting yourself off from the world is surprisingly difficult and something you really have to plan for and work at. Try it – and see if you have the will to do it!

[2] Shrepe means “to scratch” and comes from the Old English screpan=to scrape, which in turn came from Old Norse skrapa=to scrape or erase, and ultimately from an unattested but re-constructed Germanic word *skrap-=scrape. Shrepe sadly went out of fashion in the 13th century but it’s good to pull such words out of the closet once in a while and wear them for just a day.

[3] Ghit is short for “Google hit,” which is the number of hits an entry in the Google search box gets. It appears just below the search term in a phrase such as About 39,300 results (0.13 seconds). It’s not an official measure in the world of corpus linguistics but it’s a pretty useful “quick and dirty” way of estimating web frequencies. If you find a word or phrase that only has ONE ghit, it’s called a googlewhack. Try slipping that one into your next conversation at the bar.

[4] Trying to define a “word” is not as easy as you might think. For example, are eat, eats, eating, ate, and eaten 5 words or just one? After all, the difference between eat and eats is simply based on whether you are talking about the 1st, 2nd, or 3rd person i.e. I/you/we/they eat but he/she/it eats. One way to get around this is to talk about something called a LEMMA, which is basically the dictionary form of a verb – such as twerk. A dictionary would, for instance, have the word eat as an entry, but not necessarily eats, eating, or eaten. It would, however, include ate because it’s a very irregular form of the lemma, eat.

[5] I admit that shamelessly using Miley Cyrus’s despicable behavior to teach language worries me no more than when I used beer bottle tops as poker chips to teach my daughters to count. Some may question my use of alcohol and poker for my “teachable moment” but hey, what can I say – I’m a dude!

“I don’t care what the research says…”

A colleague of mine was asking for some references to support the notion that kids with severe learning difficulties can learn to use high frequency core words (such as want, stop, and get) because they were being told that what these kiddos really use (or need) are words like toy, cookie, and banana. I duly provided a quick sample of peer-reviewed articles and shared the information with other colleagues. And what the hell, I’ll share them with you, dear reader, in the References section at the end of this piece.

Reading the research

But another of my friends commented that there are still those folks who respond with comments such as, “I don’t care what the research says, I don’t care who these kids are. These are not the kids I’m working with. The kids I’m working with just aren’t going to use these words.”

So what do you do about this? At what point does being “critical of the research” become “ignoring the research because I don’t believe it”? In the world of Physics, it’s hard to say, “I don’t care what the research says, I’m still going to fly using my arms as wings.” Mathematicians don’t say, “I don’t care what the research says, 1 + 1 does equal 7.” And it’s a brave doctor who would say, “I don’t care what the research says, you go right ahead and smoke 40 cigarettes a day and you’ll be just fine.”

No-one would argue that Speech and Language Pathology as a profession will ever achieve the rigid, statistical certainties of physics and mathematics, but what does it say about our profession if we openly admit to ignoring “the research” because it doesn’t fit with our individual experience? There are certainly enough practices in Speech Pathology that are hotly debated (non-speech oral motor exercises, facilitated communication, sensory integration therapy) and yet still being used. But all of these are open to criticism and lend themselves to experimental testing, whereas an opinion based on personal experience is not. I could tell you that I have used facilitated communication successfully, but that remains personal testimony until I can provide you with some measurable, testable, and replicable evidence. This is one of the underlying notions of evidence-based practice in action.

However, it’s one thing to talk about using evidence-based practice but another to actually walk the walk. If the evidence suggests that something you are doing is, at best, ineffective (at worst, damaging), how willing are you to change your mind? If 50% of research articles say what you’re doing is wrong, how convinced are you? What about 60%? Or 90%? At what level of evidence do you decide to say, “OK, I was wrong,” and make a change?

If there’s anything certain about “certainty,” it’s that it’s uncertain! Am I certain that teaching the word get to a child with severe cognitive impairments is, in some sense, more “correct” or “right” than teaching teddy? No, I am not. But what I can do is look at as many published studies as I can find of what words kids typically use, at what ages, and with what frequency, and then feel more confident that get is used statistically more often across studies. This doesn’t mean teddy is “wrong,” nor does it preclude someone publishing an article tomorrow that shows the word teddy being learned 10 times faster than the word get among 300 3-year-olds with severe learning problems.

But until then, the current evidence based on the research already done is, in fact, all we have. Anything else is speculation and guesswork, and no more accurate than tossing a couple of dice or throwing a dart at a word board.

Being wrong isn’t the problem. Unwillingness to change in the face of evidence is.

References
Banajee, M., DiCarlo, C., & Buras Stricklin, S. (2003). Core vocabulary determination for toddlers. Augmentative and Alternative Communication, 19(2), 67-73.

Dada, S., & Alant, E. (2009). The effect of aided language stimulation on vocabulary acquisition in children with little or no functional speech. American Journal of Speech-Language Pathology, 18(1), 50-64.

Fried-Oken, M., & More, L. (1992). An initial vocabulary for nonspeaking preschool children based on developmental and environmental language sources. Augmentative and Alternative Communication, 8(1), 41-56.

Marvin, C. A., Beukelman, D. R., & Bilyeu, D. (1994). Vocabulary use patterns in preschool children: Effects of context and time sampling. Augmentative and Alternative Communication, 10, 224-236.

Raban, B. (1987). The spoken vocabulary of five-year-old children. Reading, England: The Reading and Language Information Centre.

There’s no such thing as a “free” app, so get over it and pony up!

In this article, I’m not just on a proverbial hobby-horse but whipping it frantically as I gallop wildly into the Valley of Death. I may even end up offending some readers, but hopefully I’ll make some new friends along the way. So saddle up and join the posse!

One boy and his horse

Imagine getting the following e-mail from someone who wants you to provide therapy services for their child.

Dear Therapist

I am the parent of a child who cannot speak and really needs help. I saw that you offer therapy services to people like my child and I’d love to have access to them. However, I am really surprised that your therapy services are so expensive and think that you should provide them free of charge. Other people provide free therapy services and there are also many people who are not therapists who provide free therapy services in their spare time.

I’d be happy to provide a recommendation of your therapy services to other people if you were to provide them to me for free. Otherwise, I am afraid I will just have to blog about how expensive your therapy services are and go somewhere else. It seems so sad that children in desperate need of help are denied access to therapy services because of people wanting to make a profit from their disability.

Now I’m guessing that your response is likely to be along the lines of “no,” based on the notion that at the end of the day, you’d like to be able to eat, stay warm, and maybe feed your family. You might also want to pay off the huge loan you took out to train for years to become a therapist in the first place – because those greedy folks at the college expected you to pay for your education! And if you were to give free therapy to this client, the “recommendation” would result in everyone else wanting free help, and that’s not usually a sustainable business model.

OK, so why not copy the email into a document and do a search-and-replace that changes every instance of “therapy services” to “apps.” Hell, why don’t I make it even easier and do that for you below:

I am the parent of a child who cannot speak and really needs help. I saw that you offer apps to people like my child and I’d love to have access to them. However, I am really surprised that your apps are so expensive and think that you should provide them free of charge. Other people provide free apps and there are also many people who are not therapists who provide free apps in their spare time.

I’d be happy to provide a recommendation of your apps to other people if you were to provide them to me for free. Otherwise, I am afraid I will just have to blog about how expensive your apps are and go somewhere else. It seems so sad that children in desperate need of help are denied access to apps because of people wanting to make a profit from their disability.
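For what it’s worth, that search-and-replace really is a one-liner. Here’s a minimal Python sketch using one abridged line from the email:

```python
# One line from the email, abridged; the substitution works the same
# way on the full text.
original = ("I am really surprised that your therapy services are so "
            "expensive and think that you should provide them free of charge.")

# A single search-and-replace turns the complaint about services
# into the all-too-familiar complaint about apps.
rewritten = original.replace("therapy services", "apps")
print(rewritten)
# → I am really surprised that your apps are so expensive and think
#   that you should provide them free of charge.
```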

Sound familiar?

So how about another stark contrast just to hammer home a little more how ridiculous we all are – and we’re all guilty – when it comes to value and pricing with apps.

Hands up anyone who buys at least one coffee per week from a local coffee store.

Hands down.

The nice people at Statistic Brain estimate that the average price of an espresso-based coffee in 2012 was $2.45, and a brewed one was $1.34. So if you drink one espresso-based coffee each week for a year, you are out-of-pocket by $127.40. More sobering is that there’s a good chance you drink more than one a week, and just having two takes you up to nearly $255, plus tax.
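For anyone who wants to check my coffee math, here’s a quick sketch using those 2012 figures:

```python
# Back-of-the-envelope coffee math using the 2012 Statistic Brain estimate.
ESPRESSO_PRICE = 2.45  # average espresso-based coffee in 2012
WEEKS_PER_YEAR = 52

one_a_week = ESPRESSO_PRICE * WEEKS_PER_YEAR
two_a_week = one_a_week * 2
apps_instead = int(one_a_week / 0.99)  # 99-cent apps for the same money

print(f"One coffee a week:  ${one_a_week:.2f}/year")   # $127.40
print(f"Two coffees a week: ${two_a_week:.2f}/year")   # $254.80
print(f"Or roughly {apps_instead} apps at 99 cents")   # 128 apps
```

One coffee a week buys well over a hundred 99-cent apps a year, which rather makes the point for me.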

So remind me again; why do we whine about paying 99 cents for an app? Why do we jump through hoops to badger, harass, cajole, and even blackmail app developers into giving us freebies? If I’m happy to spend $250 per year on something as trivial as a cup of non-essential coffee, why will I not spend $1 on an app that is apparently “essential” for a child’s education? Is coffee more valuable than education?

Our sense of “value” and “worth” has gone totally to pot when it comes to apps. It defies belief that consumers somehow think either that it costs nothing to create an app or that app creators are making money out of the wazoo from their yachts just off Miami Beach. We are, in fact, victims of Crapponomics, the naive misunderstanding of how apps work from an economic standpoint.

Why did this happen? Why is it that when an app developer asks $1.99 for something that took weeks of work, we are shocked at the effrontery of asking such an outrageous price and invent some form of “special case” as to why we deserve a freebie? Let’s take a look at Crapponomics 101.

Crapponomics graphic

1. The app “anchor” was originally free.
In Economics, there’s a concept known as the “Anchor Point.” As the name suggests, it’s the selling price at which you drop your anchor when you bring a new product or service to market. Once an anchor is set, new folks tend to cluster around your safe harbor and drop similar anchors. And when people start purchasing products, this anchor becomes the standard against which all other similar products are measured. The average price of an app in 2012 was $1.58, which is 87 cents cheaper than a cup of espresso-based coffee.

The best anchor point for a consumer is usually free. If I want stuff, and the stuff costs me nothing, how bad can that be? Well, the obvious thing is that there’s a little thing called quality that gets factored into the equation, but you’d be surprised (or not) how much quality will be sacrificed on the altar of Free. And in the early days of iPhones and iPads, the majority of apps were free – which became, and remains, the anchor.

2. Most apps are for marketing, not profit.
In his seminal work, The Selfish Gene, Richard Dawkins argued that human beings are basically the gene’s way of making new genes. We are, in fact, merely hosts for DNA, and the name of the game of Life is for DNA to exist. In a similar fashion, apps are just the tablet device’s way of creating more tablet devices. Apple, Samsung, and even Microsoft don’t need to write apps because what they want is to sell tablets [1]. The purpose of the free Angry Birds games is to sell angrier and more expensive birds; the purpose of the free United Airlines app is to get you to buy tickets on United Airlines; the purpose of the free Pandora app is to get you to subscribe to the full Pandora service. Folks will pay for more birds, better airline seats, and more music but still begrudge the 99 cents for an app designed by a professional clinician and software engineer to help a client improve. And that’s because we mix up the free app with the for-profit app, and forget the value of expertise and quality.

3. We see apps as “things” and not as pieces of intellectual property.
Ask yourself why you want an app in the first place. Usually it’s because it offers you something that would take you a long time to develop yourself (if at all) and will in some way help your clients succeed. So why are you reluctant to pay someone for taking the time to do all that work for you? Is 99 cents really too much to ask for the hours and hours a developer has put into it?

The problem is that most folks look at apps the same way they look at cans of beans on a shelf; you find the ones you like, stick ’em in your basket, and pay at the counter. But many apps – particularly those for therapy and education – have taken someone a long time to create. What you are buying is their years of experience in their field of expertise, their time designing the content, their cost to employ a programmer, and any royalties they in turn might be paying “behind the scenes” for things within the app [2].

4. Apple are not a charity and take their cut.
Folks who create apps know that despite the image that St. Stephen of Jobs carefully crafted to portray Apple as a caring, sharing, warm-and-fuzzy group of lovable kooks sticking it to “the Man,” they want 30% of everything that goes out via iTunes. Everything. If you write an app for an Apple device, you cannot sell it other than via iTunes – and that’ll cost you 30% of your selling price. Every 99 cent app that we begrudgingly pay for nets the developer about 69 cents. Does anyone think to harass Apple because they get the other 30 cents for distributing? Nope, the developer gets the blame.

And how about that “freebie” thing? Well, Apple are kind enough to offer developers some free codes that they can use for promotion purposes, but after that, if the developer wants to give one away, they have to pay for their own app – and Apple still gets its 30 cents! That “free” app you are so adamantly demanding costs the developer (a) about 69 cents in lost income and (b) about 30 cents in real money to Apple [3].

5. You have to sell millions of 99 cent apps to buy a boat.
Another basic rule of Economics is that to make a profit you can either sell millions of very cheap things at a small margin, or a few very expensive things at a large one. The Crapponomics assumption by consumers is that app developers make money by following the former route; millions of apps at 99 cents = sun-kissed beaches and mojitos in Hawaii.

But there are two underlying assumptions here that are inaccurate. The first is that the sort of apps being developed for therapy and education sell in millions. Not even close. The second is that there are significant profits to be made from a single app; simple math soon whittles down the margins. For a 99 cent app, Apple takes about 30 cents, leaving 69 cents. Of that 69 cents, there are usually at least two people to pay – author and developer – so each gets around 35 cents. Take out something for the IRS (like Apple, the tax folks want their pound of flesh and you stand no chance of getting a “freebie” from them!), maybe a little for marketing, and that “dollar an app” profit has shrunk down smaller than a guy’s nuts on an Alaskan winter’s morning.
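To put numbers on it, here’s a sketch of the split under the assumptions above – Apple’s 30% cut and an even author/developer split – plus a purely illustrative 20% tax rate, which is my invention for the example and not a figure from any real tax table:

```python
# Where a 99-cent app sale goes, under the stated assumptions.
PRICE = 0.99
STORE_CUT = 0.30   # Apple's share of every iTunes sale
TAX_RATE = 0.20    # illustrative only, not a real tax figure

apple_share = PRICE * STORE_CUT          # ~$0.30 to Apple
net = PRICE - apple_share                # ~$0.69 left over
per_person = net / 2                     # author and developer split it
after_tax = per_person * (1 - TAX_RATE)  # ~$0.28 each, before marketing

print(f"Apple: ${apple_share:.2f}  each party: ${per_person:.2f}  "
      f"after tax: ${after_tax:.2f}")
```

However you tweak the assumed tax rate, the “99 cents” the developer is supposedly pocketing ends up closer to a quarter.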

Think of the value, not the anchor
So do you still think paying 99 cents is too expensive? Or $1.99? Maybe even $4.99? Remember, that 99 cent app is supposed to make it easier for you to provide a service – for which you WILL be charging substantially more than 99 cents.

If a “life-changing” app costs $4.99, who in their right mind would quibble with that? Has the value of education and therapy reached the point where folks will pay more for a couple of pints at the bar than they will for their child’s future? I suppose that 60-inch LCD TV from Best Buy is a “good investment” but the $99.99 for an AAC app isn’t? Where has our sense of value gone? I suppose paying Verizon Wireless $40.00 every month for a data plan is normal for our wired life, but hounding the developer of a $2.99 app for a free copy balances that out.

We need to realize that when we buy an app we are not paying for the virtual equivalent of a can of beans but for the skills, knowledge, and time of an experienced educator or clinician. Only then will we begin to reverse the undervaluing of therapy and education as a whole.

Sometimes, there just isn’t an app for that.

Notes
[1] From Apple’s perspective, even the sale of the tablets isn’t where the big money resides; that comes from their greatest invention: iTunes. Although most folks would suggest that the iPod, iPhone, and iPad are Apple’s best inventions, it’s their delivery system that was their masterstroke. In order to get anything onto your iDevice you need to download it from iTunes, and Apple makes money on every download. Every app, book, song, movie, or video earns them cash, and that’s pure genius.

[2] Most app authors are in the true sense “authors” and not “writers.” They don’t actually write programming code for a device, and often have no idea how code works. In a similar fashion, when Snooki claims to have “authored a book” she is being truthful; someone else actually “wrote” it based on Snooki’s ideas (whatever those may have been). What this means is that the “99 cents” you pay is now starting to get split many ways, and the author isn’t getting anywhere near 99 cents.

The average cost to develop an app has been estimated to be anywhere between $8,000 and $200,000. Here’s a good article called The Cost of Building an iPad App. Ideas for apps are cheap – we all have them – but software engineers are not, and neither is your time. You might think that if you are designing an app in your “spare time” then it’s free, but the only reason you have “spare time” is that you’re already being paid for a job! The real test of the cost is to quit your real job and then go to the bank to see how much they will lend you to design an app.

[3] My standard disclaimer here is that I have no problem with any company making a profit. Apple developed the iTunes distribution system and have every right to recoup their development efforts by charging people to use it. Although I may not want to say, “Greed is Good,” I’m OK with saying, “Making a profit is just fine.” My beef is more that, for some reason, people seem to see Apple as the good guy and app developers as trying to gouge customers by charging for their apps. Folks seem happy to demand free 99 cent apps but don’t expect Apple to give them a $700 iPad. Why is that? And Apple are the ones who force up app prices by taking 30% of the selling price and only providing a limited number of free codes. So why don’t people rail on Apple about this? It seems that the richest company on the planet gets a “pass” but struggling app developers get the hassle. If Apple doesn’t give away free $700 iPads, Verizon doesn’t offer free $40 monthly data plans, and Best Buy doesn’t let you walk off with a free 60-inch TV, why should an app seller give away a free 99 cent piece of software? Stop picking on the little guys!