Thursday, May 16, 2013

Let's give kids who don't speak tools to succeed

How many words would the average, middle-class three-year-old child hear spoken to her in the course of a year?

"About 6 million," said Pat Mirenda, an expert on augmentative and alternative communication (AAC) who referenced a related study during a keynote talk at the Bloorview Research Institute's Pursuit Awards yesterday.

Pat said we could expect a deaf child of deaf parents to see about the same number of signs over a year.

But what about kids who can't speak or sign and use a voice device or pictures to communicate? How often would a young child see these symbols used around them in a year?

"Zero," Pat said. "Zilch. We're not in a symbol-using community and children rarely see others using the same symbols they use, yet we expect them to magically be able to know what to do."

We know that children need to absorb vast amounts of language before being able to express it, Pat said. We know that modelling of AAC is important. But this is an area where we aren't translating research into practice.

"We don't routinely do it," Pat said. "We don't teach teachers they should point to symbols when communicating, we don't teach parents to point to symbols, and we don't teach educational assistants and peers."

Another gap occurs when we assume children who use AAC can't learn to read. "If we can teach a child to manipulate 26 letters, they can say whatever they want," Pat said, noting that expression isn't dependent on the number of symbols programmed into a device.

"But literacy is not a routine practice, still, in Canada, the U.S. and elsewhere in the world." Too often professionals have low expectations for complicated kids and make a decision, at age two to five, that literacy isn't a goal. "We should just teach them to make sandwiches," she said.

In an earlier interview, Pat said this sets children who use AAC up for exclusion. "They're the ones in self-contained classrooms who often don't get literacy and language instruction, and people just say 'that's too bad.' It's kind of stunning in 2013 that people still think they can tell a book by its cover."

"We know how to teach children to become literate, but we're not doing it," she said at her keynote.

Historically, clinicians have decided what type of AAC system a person uses, and the individual had no input, Pat said.

She talked about the importance of self-determination for people using AAC, which she described as including the following:

Autonomy, which means being able to act according to your own preferences, interests and abilities. For example, if a child wants four-letter curse words programmed into his device, he should get them.

Self-regulation, which involves problem-solving, goal-setting, self-monitoring, decision-making and self-evaluation. Of course clients won't always make good choices, but that is true of all of us, Pat said.

Psychological empowerment, which is about helping children see they can make things happen in their own life. It's about "learned hopefulness," rather than "learned helplessness," she said.

And self-realization happens when you understand your strengths and challenges and can capitalize on what you're good at.

Pat said the focus must be on what a child needs to communicate in settings that are "important and motivating to the child."

She said that too often voice devices are programmed with words for "wants and needs" but not for everyday socializing, which is the foundation of friendship. Being able to tell jokes, make comments, and use phrases related to games are important to kids. "Relationships" are the ultimate prize, Pat said, and if an AAC system isn't helping a child maintain and enhance their social network, "what are we doing?"

Pat is a professor in the department of Educational and Counselling Psychology and Special Education at the University of British Columbia. She is also a board-certified behaviour analyst.

3 comments:

Thank you for posting this. My son has Down syndrome which results in severe language delay and possibly apraxia. He signs a little bit, but not as much as we encourage. He got a Dynavox V three years ago and pretty much hated it. It was a struggle to get him to use it and we really didn't know what else to do. This year the school district trialed him on an iPad with ProLoQuo app and he loves it! He still doesn't use it as much at home (because we know what he is saying, I think) but he does use it much more at school. So that is progress. But the fact that what is programmed into the device is based on what other people decide to put in still troubles me. One of the reasons we wanted the speech device was to give us some clue as to what is in his head, what he thinks about, etc. and I don't think we are getting that yet. Hopefully! But we are still a long way off. I am going to share this with the speech therapists, technology consultants, etc. at his school. They are wonderful but I don't know that they have thought from this perspective.

Pat Mirenda is making such incredibly important points; I hope what she's saying can be firmly incorporated into all AAC strategizing though that isn't easy, by any stretch of the imagination. But I think too often the most profound aspects of communication are somehow lost in the technology shuffle.

AAC unfortunately is a tool of psychologists who want to intervene on behalf of children for their own benefit. As a child, I volunteered to work with non-verbal peers, using I don't know what technologies to help them communicate, unaware of the heteronomous nature of the technology, while I would use my own received and constructed languages with my peers.

Being autistic, I've always made up my own words while amassing a ponderous personal vocabulary of words I just liked the sound of. I mashed them up. I invented new meanings. I forced old words into new contexts. People still find it strange that I like to use out-of-date words. I don't do it to be quaintly hipsteresque. I do it because they're the words that come to mind when I want to communicate.

The technologies we use now do as much to silence children's voices as they do to allow them a means of communication. No one asks the child if this is the voice they want speaking for them, or if they'd rather use some profanity, or even augment a profanity (fĂ¼tiferosity is a favourite of mine). No, children who are non-verbal are given a preset suite of tools to communicate with us, without us ever bothering to communicate to them in their own language. And any parent who has a child who does not speak their language should watch amanda bagg's video in my language (http://www.youtube.com/watch?v=JnylM1hI2jc), or melanie yergeau's I stim therefore I am http://www.youtube.com/watch?v=s2QSvPIDXwA as these videos will remind you that we are communicating, even when we're not using the tools we're forced to use to communicate.

I'm obviously good at expressing myself. Nothing like finally learning how to communicate effectively at age 35, but the first few decades were tough. I can tell you that there's no way I'd ever have used the tools I'd be given now. They're too reductionist and pragmatic and uninteresting. I got into language via iambic pentameter and sustained alliteration. And I still struggle with simple Anglo-Saxon words. If you want to engage children in language, why not start with some engaging words? That's my take.