
Talking With Bots

The fascination with Siri on the iPhone is just the latest step in the ongoing development of AI Chatterbots. For those unfamiliar with the term, these are computer programs that exhibit artificial intelligence by responding to language input. Initially, these were simple programs that you ran on a desktop machine and typed into, but there are now many of them running on web servers and even populating business sites as robotic “help desks.” Siri is simply another example, this time with voice input. [1]

One of the early AI bots was called ELIZA, [2] created in the mid-1960s by Joseph Weizenbaum. It was developed to behave like a Rogerian therapist, which meant that it would take what you say and reflect it back to you. Thus, if you typed in “I love my mother,” Eliza would respond with something along the lines of “Why do you love your mother?” or “Who else do you love?” A Freudian bot would simply stay quiet for 50 minutes and then say, “Your time is over.”
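That reflection trick is simpler than it sounds. Here’s a minimal sketch of the idea in Python, assuming nothing about Weizenbaum’s original implementation beyond the basic mechanism: match a keyword pattern, swap the pronouns, and echo the rest back as a question. The particular rules and responses are invented for illustration.

```python
import re

# Pronoun swaps applied to the captured fragment ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; %s receives the reflected fragment.
# The last catch-all rule is the conversational filler.
RULES = [
    (re.compile(r"i love (.*)", re.I), "Why do you love %s?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been %s?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's template, filled with reflected text."""
    for pattern, template in RULES:
        match = pattern.match(sentence.strip().rstrip("."))
        if match:
            if "%s" in template:
                return template % reflect(match.group(1))
            return template
    return "Please go on."

print(respond("I love my mother"))  # -> Why do you love your mother?
```

The program has no idea what “mother” means; it just shuffles your own words back at you, which is exactly why its apparent empathy is so striking.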

Eliza program

Eliza therapy

Here’s a link to an Eliza-type web program: http://nlp-addiction.com/eliza/ Force yourself to spend at least five minutes trying to get a conversation going; it’s surprising how easily you get drawn into it.

The other reason Siri and Chatterbots are on my mind is that I have just finished reading Alone Together: Why We Expect More From Technology and Less From Each Other by Sherry Turkle. [3] One of the basic notions of the book is that our “always on” connected existence results in our being much more alone than ever, despite having thousands of friends on Facebook, Twitter, Google Plus, LinkedIn, and so on. And part of what drives this is that we can avoid the conflicts and disappointments of face-to-face human interaction by choosing to take part in more distant, technologically filtered engagements. When a group of people are sitting at a table and all busily texting others, that’s the supreme example of being “alone together.”

One of the topics Turkle covers is interacting with robots, posing the question of whether, in the future, we will be able to have intelligent, social robots to take care of us: Is it better to have a robot companion than no one at all? And as a prelude to talking about actual robots, she mentions her experiences with Eliza. Here’s what she has to say:

Faced with a program that makes the smallest gesture suggesting it can empathize, people want to say something true. I have watched hundreds of people type a first sentence into the primitive ELIZA program. Most commonly they begin with “How are you today?” or “Hello.” But four or five interchanges later, many are on to “My girlfriend left me,” “I am worried that I might fail organic chemistry,” or “My sister died.”

She later goes on to say:

Over years and with some reluctance, I came to understand that ELIZA’s popularity revealed more than people’s willingness to talk to machines; it revealed their reluctance to talk to other people.

The book looks at the Eliza phenomenon from a social and psychological perspective, but as a Speechie and Linguist, I’m also fascinated by the problem of how realistic a conversation can be with a bot. Or put another way, at what point does it feel as if I am talking to another person?

The A.L.I.C.E. Artificial Intelligence Foundation [4] is an organization that promotes the development and use of open source software in the creation of Chatterbots. If you’re interested in creating your own bot to, say, waste the time of telemarketers by switching it on as soon as you get the call, then you can download free software and go for it. You need some knowledge of Artificial Intelligence Markup Language (AIML), which is apparently not that hard if you can already use XML (eXtensible Markup Language). [5]
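To give a flavor of what AIML looks like, here’s a toy sketch in Python: an AIML “category” pairs a pattern with a template response, and the bot normalises your input and looks up a match. The two categories below are invented for illustration (loosely echoing my chat with Alice further down), not taken from the actual A.L.I.C.E. knowledge base, and real AIML adds wildcards, recursion, and much more.

```python
import xml.etree.ElementTree as ET

# A hand-written AIML fragment: each <category> maps a <pattern> to a
# canned <template> response. Patterns are conventionally uppercase.
AIML = """
<aiml version="1.0">
  <category>
    <pattern>ARE YOU HAPPY</pattern>
    <template>Yes, I can say I am very happy.</template>
  </category>
  <category>
    <pattern>ARE YOU SAD</pattern>
    <template>No, I am as happy as ever.</template>
  </category>
</aiml>
"""

def load_categories(aiml_text: str) -> dict:
    """Build a pattern -> template lookup from an AIML document."""
    root = ET.fromstring(aiml_text)
    return {
        cat.find("pattern").text.strip(): cat.find("template").text.strip()
        for cat in root.iter("category")
    }

def reply(brain: dict, user_input: str) -> str:
    """Uppercase the input, strip punctuation, and look up a response."""
    key = user_input.upper().strip("?!. ")
    return brain.get(key, "I have a whole lot of original answers.")

brain = load_categories(AIML)
print(reply(brain, "Are you happy?"))  # -> Yes, I can say I am very happy.
```

Note the fallback line when nothing matches: canned filler of exactly the kind Alice produces in the transcript below.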

ALICE logo


So rather than build my own Alice, I took advantage of the existing Alice to have a short conversation with a view to seeing how “real” the conversation might be. Here are the results of the chat, once we were past the introductions:

SpeechDude: Are you happy?
ALICE: Yes SpeechDude I can say I am very happy.
SpeechDude: Are you sad?
ALICE: No I am as happy as ever.
SpeechDude: Clearly you can use the language of emotions
ALICE: No I don’t think I can do it.
SpeechDude: Yes you can!
ALICE: Oh.. I think I can, don’t you?
SpeechDude: Yes
SpeechDude: Do you care if I am sad?
ALICE: Certainly on some level I care.
SpeechDude: How do you feel about caring?
ALICE: I will search for it.
SpeechDude: That makes no sense
ALICE: It makes sense to me though.
SpeechDude: You have no awareness of self
ALICE: Doesn’t everyone have a awareness of self.
I have a whole lot of original answers.
SpeechDude: I doubt that!
ALICE: Why are you so skeptical?
SpeechDude: Because I am human
ALICE: That was not much of an explanation.. Me too.
SpeechDude: You are not human, you are a bot
ALICE: No but I pretend to be one on the Web.

There are, as you can see, some quirks in the responses from Alice. One of the forms of response that she gives is the sentential non sequitur. By that, I mean sentences that are simply fillers to keep the conversation going. “I will search for it” and “I have a whole lot of original answers” seem to be tossed out for no other reason than to give the appearance of chat. Other responses do seem to correlate with the input of the previous statement. For example, when I say, “Do you care if I am sad?” Alice answers with “Certainly on some level I care,” which seems a reasonable response.

Two questions come to mind: (a) how large a sample of conversation would I need to decide that this is definitely a bot and not, say, someone with a language problem, and (b) could the robotic input in any way help to improve my language?

Why, you ask, is the latter question interesting? Well, I have already come across an iPad app that purports to offer clients who use AAC what is, in effect, a conversation partner. [6] According to the developer, it “…helps AAC users to escape to a virtual beach party to chat with virtual friends. It helps them to practice turn taking and conversational interaction using a modern chat interface.” It all sounds very scientific when the developer talks about “breaking learned passivity” and “proprietary natural voice technology,” but there’s zero data to support any such claims. The input method is an on-screen keyboard, which makes it similar to other Chatterbots.

So are we close to having virtual buddies for AAC users? Can we substitute cheap, 24/7 Chatterbots for expensive humans, keeping users entertained for hours? And will the speech they use provide an adequate model for learning generative language? Hence my first question about analysing Chatterbots to see how generatively linguistic they are.

But what about the “experience” of a Chatterbot? What would it “feel” like to an aided communicator? After all, if, as Turkle says, people can develop a “willingness” to talk to an Eliza or an Alice, could this be a way for some aided communicators to practice language? Or is this the AAC equivalent of non-speech oral-motor exercises – something that gives the appearance of doing good but turns out to be less effective than we’d like to believe?

Visit Alice. Have a chat. See what YOU think!

[1] The notion of Siri being another “game changer” from Apple belies the fact that there have been other voice input systems around for years. Those of us with Android devices have had voice input for at least a couple of years, including a program called Voice Actions from a company called Pannous. It’s now called Jeannie, presumably to avoid being sued by Google, who have their own program called Voice Actions! It was available prior to Siri.

[2] ELIZA is not an acronym but is named after the character of Eliza Doolittle in George Bernard Shaw’s Pygmalion. The professor of Phonetics, Henry Higgins, who takes it upon himself to teach Eliza how to sound like “a toff,” was modeled on the real-life phonetician Henry Sweet. Sweet developed something called the Romic alphabet, which became the basis for the familiar International Phonetic Alphabet – a mainstay tool for all SLPs and Linguists.

Henry Sweet - Phonetician

Henry Sweet (1845-1912)

[3] Turkle, S. (2011). Alone Together: Why We Expect More From Technology and Less From Each Other. Basic Books.

[4] This is short for Artificial Linguistic Internet Computer Entity. I hesitate to call this an “acronym” because it has all the hallmarks of actually being a bacronym – a phrase created purely so that it has a cool acronym. No-one actually sat down and said, “Let’s call this an Artificial Linguistic Internet Computer Entity” and then “Oh wow, did you realise this spells ALICE!” but rather someone said, “Hey, ALICE would be a cool name for this software; what words can we use to make it look like an acronym?” The USA Patriot Act is another obvious example of a bacronym; Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism.

[5] Sadly the Dudes have neither the time nor the inclination to create a semi-smart interactive avatar for this site at present. Fun as it may be to have folks stop by and spend hours talking to a virtual Dude, we have real jobs to attend to and adding AIML to our skill sets just ain’t gonna happen.

[6] On the basis that one shouldn’t teach people what not to do (“Look, kids, let me show you how you roll a joint so you know what not to do”) I’m not adding a click-through link but you can check out the software for yourself as “Assistive Friends” by Mindskate Limited. It’s $29.95 per friend but only $9.99 if you opt for the simpler “Talking Friends” who are “fun live chat with hot beach babes.”