I attended RESNA’s 2018 Annual Conference recently, as I do pretty much every year. This year’s was held in Arlington, VA in mid-July. A friend asked me why I invest the time and money to go to the conference. I’ll try to address that in this post, with a focus on the learning and new ideas that the conference inspires.
These highlights are based on my notes and some additional digging after the fact. I may not have all the details exactly correct — please let me know if you notice anything that needs fixing!
Presentations by the RERC on AAC
This year’s RESNA included the State of the Science conference for the RERC on AAC. (Every Rehabilitation Engineering Research Center, funded by NIDILRR, organizes a State of the Science conference during its 5-year funding cycle.) This meant a full day of terrific presentations on augmentative and alternative communication, as well as additional sessions sprinkled throughout the rest of the RESNA program.
There were some great presentations related to more efficient message generation in AAC. For example, one project is evaluating a technique where the augmented communicator and a communication partner can co-construct a message, unobtrusively and under the complete control of the augmented communicator. The concept makes a lot of sense, and evaluation of its effectiveness is ongoing. One subject has completed the evaluation protocol so far, showing an improvement in text generation rate from 2.0 wpm at baseline to 2.8 wpm with the SmartPredict prototype. The project team includes Susan Fager from Madonna Rehabilitation Hospital and Erik Jakobs from Invotek.

The Madonna/Invotek team is also collaborating on the concept of multi-modal access techniques. For example, one working prototype combines eyetracking with single-switch scanning. One use case is an individual whose targeting accuracy with eyetracking is not quite high enough to enable successful use of typical eyetracking systems. With the combined system, the user first chooses a region of the screen with eyetracking, then uses switch scanning to make a specific selection from the handful of items in that region. Where eyetracking alone would be too inaccurate, and scanning alone might be too slow and tedious, the combined system has the potential to be both more accurate and more efficient for this type of user. The project has progressed to the evaluation stage, and two participants thus far have shown better accuracy with the multi-modal system than with eyetracking alone.
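To make the two-stage idea concrete, here is a minimal sketch of region-then-item selection in Python. It is purely illustrative and not the Madonna/Invotek prototype; the region layout, function names, and the simulated switch are all my own assumptions.

```python
# Conceptual sketch of two-stage multi-modal selection:
#   stage 1: a coarse eye-gaze point picks a screen region
#   stage 2: single-switch scanning picks one item within that region
# Hypothetical structures only -- not the Madonna/Invotek code.
from typing import Callable, List, Sequence


def select_region(gaze_x: float, gaze_y: float, regions: Sequence[dict]) -> dict:
    """Return the region whose bounds contain the (coarse) gaze point."""
    for region in regions:
        x0, y0, x1, y1 = region["bounds"]
        if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
            return region
    # If the gaze lands between regions, fall back to the nearest one.
    return min(regions, key=lambda r: (r["bounds"][0] - gaze_x) ** 2
                                      + (r["bounds"][1] - gaze_y) ** 2)


def scan_items(items: List[str], switch_pressed: Callable[[str], bool]) -> str:
    """Cycle through the region's few items until the switch accepts one."""
    i = 0
    # A real UI would highlight items[i] and pause one scan interval per step.
    while not switch_pressed(items[i]):
        i = (i + 1) % len(items)
    return items[i]


# Example: gaze narrows a 12-key layout to one 6-item region, so scanning
# only has to step through 6 candidates instead of 12.
regions = [
    {"bounds": (0, 0, 400, 200), "items": ["a", "b", "c", "d", "e", "f"]},
    {"bounds": (400, 0, 800, 200), "items": ["g", "h", "i", "j", "k", "l"]},
]
region = select_region(gaze_x=120.0, gaze_y=90.0, regions=regions)
picked = scan_items(region["items"], switch_pressed=lambda item: item == "c")
print(picked)  # -> c
```

The key point is the one the presenters made: the gaze stage only needs to be accurate enough to pick a region, and the scanning stage stays quick because each region holds just a handful of items.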
Krista Wilkinson and colleagues continue their fascinating work on the design of AAC visual displays. By measuring (via eyetracking technology) where users look and for how long, they discern which arrangements of visual displays are most effective, both for visual search and for physical target selection. Does color matter? (Yes, searching is easier when groups of similar symbols are arranged by internal color.) Does a cluttered background decrease the effectiveness of a photo in a visual scene display? (No, somewhat surprisingly.) Do we have to arrange symbols in a grid, or would a pattern like a perimeter arrangement work better? (Current evidence points toward perimeter arrangements.) Another interesting take-home point is that children with autism spectrum disorder (ASD) attend to people in photographs to the same extent as typically developing (TD) children.

The day also included extensive input from individuals who use AAC and their family members, both in-person and via video, which I really appreciated. Presenters included Anthony Arnold, Greg Bieker, Chris Klein, Godfrey Nazareth, Tracy Rackensperger, Rob Rummel-Hudson, and Dana Nieder. The role of AAC as a key to supporting personal agency for augmented communicators was a common theme. And several individuals noted that effective AAC can occur in a variety of ways: with a “high-tech” speech-generating device, a “low-tech” physical letter board, co-construction using partner-assisted scanning, or a combination of approaches. Sharing and learning from these stories is central to what we do and why we do it.
I could go on and on about these presentations, but it may be best at this point to suggest that you visit the RERC on AAC website to learn more. They’ve posted links to many of the publications referenced in their presentations, and promise to post additional content about the State of the Science conference there as well.
Assistive Technology for Motor Impairments
This 90-minute workshop surveyed a wide range of approaches and technologies in the areas of communication, computer access, smartphone access, home control, and other instrumental activities of daily living. This landscape changes quickly with the constant evolution of smart speakers and new smartphone access techniques. Not to mention all the possible integrations across technologies: using your AAC device to talk to Alexa, using your computer to access your smartphone (Vysor), using your smartphone to access your computer (Rowmote), or using your powered wheelchair controls to access any and all of these! So this was a fantastic way to “level up” on the current state of the art in these areas.
Adina Bradshaw and Leah Barid of Shepherd Center and Matthew White of Courage Kenny Rehab really know their stuff and did a great job presenting a lot of information in a short time. They’ve posted their presentation for reference, and Matthew also maintains a vast Pinterest board on AT that you may find useful.
Tools for Knowledge Translation
As researchers or developers, we hope we have something useful to offer, such as knowledge to advance the field or a software application that solves an important problem. But building awareness and effectively communicating our work to “key stakeholders” isn’t always easy. Jen Flagg and Joann Starks presented some resources and approaches to help with that in their workshop. Jen is with the Center on Knowledge Translation (KT) for Technology Transfer (KT4TT) at the University at Buffalo, while Joann is with the Center on KT for Disability and Rehabilitation Research (KTDRR) in Austin, TX. These are both NIDILRR-funded centers with a mission to move new research and technology into practice.
I thought I was familiar with both centers, but I was surprised how many useful tools and resources they’ve provided on their websites. I immediately made a note to visit both sites (KTDRR and KT4TT) and really *read* what’s there. And, if you are a NIDILRR grantee, both centers can provide you with direct assistance in getting your message, your product, your idea out into the world where it can do some good.
Developers’ Showcase

This is an opportunity for developers to demonstrate new product ideas or prototypes in an informal “science fair” type of environment. At least 20 developers participated in this year’s event, which also featured good food and a lively social atmosphere. Projects ranged from BrailleBlox, which helps emerging braille readers learn the braille alphabet, to the Otto prototype from BlueSky Designs, which auto-positions a user’s eyegaze device in just the right location. Since I was there as a presenter (showing our free AT-node and Scanning Wizard tools), I didn’t get to tour the other project tables as much as I would have liked. But it was a vibrant event that featured a lot of energy and ideas with unlimited potential.
So why do I go to RESNA?
The above is just a sample of what I learned at the RESNA 2018 Conference. And a big advantage of an in-person conference is the opportunity to get to know the presenters, ask questions, start a conversation, and build connections. On the flip side, it’s also a chance for me to present what I’ve been working on, and get feedback on that from the leaders in the field. So the learning and inspiration are big reasons why I attend.
These connections have grown over the years, to the point where many of my RESNA colleagues are also close friends. So from a personal perspective, it’s just a fun event and a chance to get together with a lot of great people.
If you went to RESNA 2018, what were some of your highlights? What topics would you like to see at RESNA in the future?
See you at RESNA 2019 in Toronto!