Highlights from the ATIA 2020 conference

The ATIA 2020 conference: 2+ days packed with learning, sharing, and networking in assistive technology. Here are some highlights from my trip to ATIA this year.


I was fortunate to attend the ATIA 2020 Conference recently, held in Orlando, FL. With literally hundreds of presentations and exhibits, I could only sample a small subset. Here are just a few highlights.

Firsthand experience with 3 new eyegaze systems

ATIA includes an extensive exhibit hall, providing a chance to try a wide range of devices and discuss their features with the people who make them.

I was able to try out 3 brand-new eyegaze systems: one on an iPad (the Inclusive TLC Skyle) and two on Windows tablets (the latest TobiiDynavox I-Series and EyeTech’s EyeOn). These are just informal impressions, based on using each system for about 10-15 minutes, as a person with typical motor function, in a busy exhibit hall. In all cases, a company representative helped set me up. Calibration wasn’t difficult, but with each device I recalibrated at some point, sometimes multiple times, to try to get better performance.

Overall eyegaze control

My feeling of control over the eyegaze cursor was similar with all 3 systems. I was able to select some areas or items easily. Other items, I just could not get, or could only get after really working at it.

Controlling the desktop can be a challenge, particularly if you want to do something with an item other than simply click on it. I could usually accomplish the simple tasks I was attempting, like opening a browser window, scrolling down to read the web page, and closing the page. But I definitely made mistakes along the way or inadvertently selected things I didn’t want to select.

I felt more successful with typing, using an on-screen keyboard to gaze and dwell on my desired letters.
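For readers curious about what dwell selection involves under the hood, here’s a generic sketch of the core idea (my own illustration, not any vendor’s actual implementation): a selection fires when the gaze point stays inside one target’s bounding box for a continuous dwell period. The dwell time, target layout, and gaze-sample format here are all illustrative assumptions.

```python
# Generic dwell-selection sketch; dwell time and data formats are
# illustrative, not taken from any of the products above.
DWELL_TIME = 0.8  # seconds gaze must rest on a target (adjustable on real systems)

def point_in_rect(x, y, rect):
    """rect = (left, top, width, height) in screen pixels."""
    left, top, w, h = rect
    return left <= x <= left + w and top <= y <= top + h

def dwell_select(gaze_samples, targets):
    """gaze_samples: iterable of (timestamp, x, y) tuples.
    targets: dict mapping target name -> bounding rect.
    Yields a target name each time a dwell completes on it."""
    current, dwell_start = None, None
    for t, x, y in gaze_samples:
        hit = next((name for name, rect in targets.items()
                    if point_in_rect(x, y, rect)), None)
        if hit != current:            # gaze moved to a new target (or off-target)
            current, dwell_start = hit, t
        elif hit is not None and t - dwell_start >= DWELL_TIME:
            yield hit                 # dwell complete: treat as a selection
            dwell_start = t           # reset so the same key can repeat
```

Real systems also smooth the noisy gaze signal and show visual dwell-progress feedback, which is part of why calibration quality matters so much.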

How to get better with eyegaze?

A general challenge I have with eyegaze as a method of control is that it’s really hard to know what to do to get better at it. When *I* know I’m looking at the item, but *it* doesn’t know that, what exactly should I do differently to get it to work better? I’m already looking right at the item, so what else is there to try (aside from recalibration)? I feel like this indirect path to improved skill is a fundamental difference between eyegaze and other types of computer input skills. I’m not sure how much better I would get at eyegaze with practice, although clearly there are people who have gotten extremely proficient at it.

More details about the systems I tried

Inclusive TLC Skyle

This runs on an iPad Pro, using what I think is Inclusive’s own eyegaze camera and hardware to convert eyegaze into mouse signals, which it sends to the iPad to control it via the iPadOS 13 AssistiveTouch mouse feature. It does allow for an explicit switch click to make a selection, if you’d prefer that to dwell selection. It’s priced at $2995, not including the iPad Pro.

EyeOn by EyeTech

This is an integrated system running on a Windows 10 tablet computer. You can use it with the Windows 10 Eye Control interface for all-purpose desktop control, or with EyeTech’s own Quick Access eyegaze interface, which also performs desktop-type functions. In my trial, I was mostly controlling a Grid 3 communication layout.

TobiiDynavox new I-Series

The new I-series devices are Windows tablets, and include TobiiDynavox’s new Computer Control eyegaze interface. These use a new eyegaze camera, which is not currently available as a standalone eyegaze peripheral.

I was *not* able to try the built-in Microsoft Windows 10 option, called Windows Eye Control (not to be confused with TobiiDynavox’s Windows Control or Computer Control software!). For some reason, Microsoft didn’t have that set up at their booth, and I didn’t see it at any other booths.

Teaching speech recognition skills

Kelly Key and Dan Cochrane presented on ‘How to teach the speech recognition writing process.’ Students may use speech recognition as a writing tool for a variety of reasons, including challenges with the mechanics of writing or physical difficulty with handwriting or typing. For students who can verbally express their thoughts well, speech recognition can reduce the barriers involved in writing tasks, allowing them to show what they know and develop more completely as writers.

The key to this promise, however, is recognizing that using speech recognition is itself a skill that needs to be taught and practiced. Cochrane and Key have written a very readable and comprehensive guide for teaching this skill, and their workshop reviewed the major elements of their approach. They’ve refined the guide multiple times over the years, and it’s a fantastic resource for anyone who is using speech recognition as a writing tool in any setting.

The best part is that it’s freely available at bit.ly/srguide.

Four steps in the speech recognition writing process: Think it, say it, check it, fix it.

Multi-access and adaptive gaming

Adina Bradshaw from Shepherd Center presented with Ben Henry from TobiiDynavox on ‘Multi-access approaches for gaming, environmental control, and computer use.’ They began by describing the concept of physical Access Points: the body locations a user can employ to interact with a system. They encouraged us to think about how to combine access points to achieve more efficient access. Ideally, once you’ve identified the available access points, you can find an access interface that can be used for a variety of tasks, including gaming, computer use, and environmental control.

A major focus of Adina’s work is helping people play electronic games following a high-level spinal cord injury, acquired brain injury, or other neurological disorder. The Shepherd Center has a fully equipped adaptive gaming room, and many users are highly motivated to learn how to use the adaptive controllers and switches to be successful gamers. Apparently it’s also possible to use game controllers to emulate the keyboard and mouse for more general-purpose computer access, using an application such as JoyToKey (sketched below). And some game controllers, like the Quadstick, are also mouse emulators, no translation required.
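For a flavor of what that kind of translation layer does, here’s a minimal Python sketch (my own illustration, not how JoyToKey itself is implemented) that reads a gamepad’s left stick with pygame and moves the pointer with pynput. The speed, deadzone, and button-to-click mapping are arbitrary choices for the example.

```python
import pygame
from pynput.mouse import Button, Controller

SPEED = 12       # max pixels moved per update at full stick deflection (arbitrary)
DEADZONE = 0.15  # ignore small amounts of stick drift

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)   # first connected controller
mouse = Controller()

clock = pygame.time.Clock()
was_pressed = False
while True:
    pygame.event.pump()                      # refresh controller state
    x, y = pad.get_axis(0), pad.get_axis(1)  # left stick, range -1.0 .. 1.0
    if abs(x) > DEADZONE or abs(y) > DEADZONE:
        mouse.move(int(x * SPEED), int(y * SPEED))  # relative pointer move
    pressed = bool(pad.get_button(0))        # map one button to left click
    if pressed and not was_pressed:          # click on press, not while held
        mouse.click(Button.left)
    was_pressed = pressed
    clock.tick(60)                           # ~60 updates per second
```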

Photo of a person in a wheelchair using switches and an Xbox game controller to play games with a friend.

I have a lot to learn about adaptive gaming, but it was good to get some exposure to it. The main thing I learned: this gaming stuff is very important to lots of people, and Adina is the person to contact with any questions about making gaming accessible. She also passed along two major resources on adaptive gaming: Able Gamers and Warfighter Engaged.

Assistive technology feature-matching and tool selection

Heather Koren, the Director of Assistive Technology for Westminster Technologies, presented on ‘Assistive technology feature matching.’ This was a good reminder of the importance of identifying generic AT features that can meet the user’s needs *before* trying to select a specific product that includes those features. Having a clear, documented list of needed features helps the team select tools that provide those features and document clearly how the identified solutions meet those needs. It also provides the flexibility to deliver the needed features in more than one way, rather than *only* by procuring a specific named device.

The presentation provided some good examples of how to *do* feature-matching. For example, an area of student need might be Writing, with a specific need of “enjoys writing but difficulty with spelling.” A feature that matches that specific need could be “word prediction,” to support the spelling difficulties. Once all the features have been identified, the team can generate a list of specific products (such as Co:Writer, Read & Write, etc.), then check which features are met by each tool.
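That final check, which features are met by each tool, is essentially a small matrix comparison. Here’s a toy sketch in Python; the product names and capabilities below are hypothetical placeholders, not claims about real products.

```python
# Toy feature-matching matrix: which candidate tools provide which of
# the features the team identified? Names and capabilities are
# hypothetical placeholders, not a real product comparison.
needed = ["word prediction", "text-to-speech", "spell check"]

candidates = {
    "Product A": {"word prediction", "spell check"},
    "Product B": {"word prediction", "text-to-speech", "spell check"},
}

for name, provides in candidates.items():
    missing = [f for f in needed if f not in provides]
    met = len(needed) - len(missing)
    note = f", missing {missing}" if missing else ""
    print(f"{name}: meets {met}/{len(needed)} features{note}")
```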

SIFTS tool for identifying needed features

Heather also shared an interesting online resource that may help you with this process. The free SIFTS tool from OCALI can help teams match student needs and strengths to AT features. In other words, it helps you generate your list of needed features. You create a profile for a particular student, identify a general area of need (such as Physical Access: Computers), and then SIFTS asks a series of multiple choice questions about the student, environments, and tasks. Based on your responses, it suggests some features that could help the student (such as Voice Input for a student who has a clear and consistent speaking voice but has trouble using a standard keyboard due to motor limitations or fatigue). These are general feature suggestions, not specific products.
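Conceptually, that response-to-feature mapping might look something like the toy rules below. The questions and rules here are invented for illustration; they are not SIFTS’s actual question set or decision logic.

```python
# Invented illustration of mapping questionnaire responses to suggested
# AT features; not SIFTS's actual questions or logic.
responses = {
    "clear, consistent speech": True,
    "fatigue with standard keyboard": True,
    "reliable pointing with mouse": False,
}

rules = [
    # (set of responses that must all be true, suggested feature)
    ({"clear, consistent speech", "fatigue with standard keyboard"}, "voice input"),
    ({"reliable pointing with mouse"}, "on-screen keyboard"),
]

true_responses = {question for question, answer in responses.items() if answer}
suggestions = [feature for required, feature in rules
               if required <= true_responses]   # all conditions met
print(suggestions)   # -> ['voice input']
```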

My talk on choosing alternative access solutions

My own ATIA presentation also had a feature-matching component, as part of an evidence-based approach to choosing alternative access solutions. Here’s an example of a list of general features (the first 2 columns) to meet a user’s needs, and how each of 5 specific products meets each feature (or not).

In the feature-matching table, we see that 2 of the products (GlassOuse and Quha Zono) look like good candidates for this user, at least on paper. The next step is to conduct user trials with the candidates to collect specific performance data about each one and give the user a chance to experience what it’s like to use each one.

If you went to ATIA 2020, what were some of your highlights? What topics would you like to see at ATIA in the future?

Hope to see you at ATIA 2021!
