Which hands-free mouse is right for you?

Once you’ve identified a few hands-free mice that seem to meet your needs, how do you find the one that is the best fit?

Here’s the third post in our series on hands-free mice. We’ve looked at 13 considerations for choosing a hands-free mouse, then examined the detailed features of 25 available hands-free mice. All of that information is useful for narrowing the choices down to a few candidates that might meet your needs. In this post, we look at how to find the “right” solution from among those candidates. To support an evidence-based decision, we’ll show you some tools that help you collect and organize data while trialling different hands-free mice.

An example scenario

This series of posts was inspired by a question to the QIAT listserv, asking about mouse emulators that could be used by a high school student with quadriplegia. Let’s use this scenario as a simplified example of someone who might benefit from a hands-free mouse. We’ll call the student “Adam.” Adam has no functional movement of his hands/fingers other than to activate a light touch switch, and his vision, cognition, and speech are all unaffected by his disability. He does grade-appropriate schoolwork and plans to attend college in the future.

Computer-wise, his primary task is using a web browser to access web pages and apps, including Gmail, Google Docs, and other browser-based software. His main typing method is speech recognition (using Dragon NaturallySpeaking) on a Windows computer. While Dragon does provide speech control of mouse functions, Adam and his assistive technology (AT) team wonder whether it is the most efficient substitute for the mouse. They want to investigate some other options for performing mouse functions, and see if they can find something faster.

Before running trials

Just a quick reminder: before running any trials with candidate devices, there are some important steps to complete. Let’s briefly review those below. (Note that we’re simplifying this part of the process for the purposes of this post. For more details, there are many resources out there, including Education Tech Points, QIAT, the SETT framework, and this handout on feature-matching from an AbleNet webinar.)

Collaborative feature-matching

Let’s assume that Adam and his AT team have started by collaboratively discussing the situation: Adam’s needs, strengths, limitations, and preferences; the kinds of tasks he needs to do; the environments where he’ll use the hands-free mouse; and other factors. They’ve reviewed considerations specific to hands-free mice, and looked over the available options and their features. They’ve connected the dots to see which available options provide the features that Adam needs.

Identify some candidate hands-free mice

As a result of that process, Adam identified the following hands-free mouse approaches for trialling:

  1. MouseGrid: this is part of Dragon already, so Adam’s been using it and is familiar with it.
  2. Show Numbers: also part of his Dragon system, using the SpeechStart+ add-on. He uses this as a more direct way to access clickable targets.
  3. Camera Mouse: Adam wants to try a couple of the free webcam face trackers, to see if those could have a role in his setup. (One idea is that it would be relatively easy to use these at any computer on campus, since they are software-only solutions.)
  4. Enable Viacam: another free webcam face tracker.
  5. HeadMouse Nano: a good example of perhaps the most commonly used approach for someone with Adam’s needs.

Trials with the candidates

It’s a good idea to draft a written plan for how you will run trials with the candidate devices, including the what/how/who for training, the tasks to be performed, data collection, and data analysis. Part of the trialling process is gathering evidence and actually measuring performance on tasks with each candidate. That’s what we’ll focus on below.

Here at KPR, we’ve developed a couple of tools to help you gather and analyze relevant data. Let’s start with Compass software, then describe a simpler (but less fully developed) web page that essentially performs one of the main Compass tests.

KPR’s Compass software

Our Compass software is a desktop program for Windows or Mac. You can choose from 8 skill tests that measure text entry, mouse tasks, and switch access. Compass presents the test, collects speed and accuracy data during the test, and generates reports about the results.

For these hands-free mouse trials, we’ll focus on the mouse skill tests, since Adam already has a good way to enter text using Dragon. We’ll start with the Aim test. Aim presents a series of targets on the screen, one at a time, as shown below.

Screenshot of the Compass Aim test, showing a single blue square on the screen. The task is to click on the blue square.

When you click on a target (or double-click, depending on the test configuration), the next target appears. Target locations are randomized, so you can’t learn and anticipate them from one Aim test to the next, but the overall test difficulty is similar from one test to another. That’s important because it allows you to compare the results from two or more Aim tests to each other, as an apples-to-apples comparison.
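
If you’re curious how randomized-but-comparable targets can work, here’s a minimal sketch in Python. This is not Compass’s actual algorithm (the function name and details are hypothetical); it just illustrates one way to randomize target positions while holding target distance and size constant, so every test run has the same Fitts’-law index of difficulty.

```python
import math
import random

def generate_targets(n_targets, distance, size, screen_w=1920, screen_h=1080):
    """Place each target at a random angle, but at a fixed distance from
    the previous click point, so difficulty stays comparable across runs."""
    targets = []
    x, y = screen_w / 2, screen_h / 2        # first movement starts at screen center
    for _ in range(n_targets):
        while True:
            angle = random.uniform(0, 2 * math.pi)
            tx = x + distance * math.cos(angle)
            ty = y + distance * math.sin(angle)
            if size / 2 <= tx <= screen_w - size / 2 and size / 2 <= ty <= screen_h - size / 2:
                break                        # target is fully on screen
        targets.append((tx, ty, size))
        x, y = tx, ty                        # next trial starts where this one ended
    return targets

targets = generate_targets(n_targets=15, distance=400, size=40)
index_of_difficulty = math.log2(400 / 40 + 1)  # ≈ 3.46 bits, identical for every run
```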

During each target trial, Compass tracks the mouse cursor location and the occurrence of all clicks. It measures the time required to select the target, and the accuracy of all clicks (an accurate click occurs inside the target). At the end of the test, you can view a report showing the results. You can also generate a multi-test report to compare results across multiple tests; this is what we’ll do when comparing Adam’s Aim tests for each candidate device.
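
To make those measurements concrete, here’s a small sketch of the kind of per-trial record such a test could keep. The class and field names are hypothetical (this is not Compass’s internal format); the point is that trial time and click accuracy fall straight out of timestamped cursor samples and clicks.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrialRecord:
    """One target-selection trial: the target, cursor samples, and clicks."""
    target: Tuple[float, float, float]  # (x, y, size) of a square target
    cursor_path: List[Tuple[float, float, float]] = field(default_factory=list)  # (t, x, y)
    clicks: List[Tuple[float, float, float]] = field(default_factory=list)       # (t, x, y)

    def trial_time(self) -> float:
        """Seconds from the first cursor sample to the click that ended the trial."""
        return self.clicks[-1][0] - self.cursor_path[0][0]

    def click_accuracy(self) -> float:
        """Fraction of clicks that landed inside the target."""
        tx, ty, size = self.target
        hits = sum(1 for (_, x, y) in self.clicks
                   if abs(x - tx) <= size / 2 and abs(y - ty) <= size / 2)
        return hits / len(self.clicks)
```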

Using Compass Aim to compare the candidates

Adam ran the Aim test with each of the 5 hands-free mice that he’s trialling. (Note: these are just example data. The HeadMouse data are from one individual with a C4-5 spinal cord injury, and I ran Aim myself using the other 4 methods. These are real data, but they aren’t from “Adam”, or even from a single person. So the interpretation and conclusions below are purely an example!)

For each device, Compass generates a test report, as shown below for the MouseGrid test.

Example report from a Compass Aim test, showing the test setup and the speed and accuracy data collected during the test. Average trial time using MouseGrid was 12.61 seconds, with 100% click accuracy.

The summary of results shows that the average trial time when using MouseGrid was 12.61 seconds. So what does that mean? Well, 12 seconds sounds like a long time just to select one target, but we can get a clearer idea by considering typical values for the Aim test. The average for a mouse user with no disabilities is around 1.3 seconds per target, which is about 10x faster than Adam’s MouseGrid result (12.61 ÷ 1.3 ≈ 9.7).

After completing the 5 Aim tests, we created a multi-test report using Compass, to compare the results across all 5 hands-free mice. This report gives a pretty comprehensive picture of the results, along with the test setups and descriptions of what each graph and table means. A key piece of the comparison is the speed-accuracy graph shown below.

A speed-accuracy profile from the Compass multi-test report, summarizing the results for 5 different hands-free mice. Each hands-free mouse has one data point on the graph, with accuracy on the y-axis and trial time on the x-axis. The HeadMouse device is the best performer, at 100% accuracy and 2.2 seconds per trial.

Tests with the best performance (fastest speed and highest accuracy) appear in the upper left corner. In this case, that’s the HeadMouse Nano, with a 2.2-second trial time and 100% accuracy. In fact, all of the devices gave 100% accuracy except Camera Mouse. (In the Camera Mouse test, dwell selection led to about 1 off-target click per trial; for some reason, it seemed to ‘click’ after a shorter dwell time than the 1 second it was configured to use.)
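
If you’d like to build a quick speed-accuracy graph of your own from this kind of data, a few lines of Python with matplotlib will do it. In the sketch below, the HeadMouse, Enable Viacam, and MouseGrid numbers are the ones quoted in this post; the Show Numbers time is the approximate “about 5 seconds” figure mentioned later, and the Camera Mouse values are placeholders for illustration only.

```python
import matplotlib.pyplot as plt

# Average trial time (s) and click accuracy (%) per device. Values not quoted
# in the post are rough placeholders, marked below.
results = {
    "HeadMouse Nano": (2.2, 100),
    "Enable Viacam": (4.7, 100),
    "Show Numbers": (5.0, 100),     # "about 5 seconds" per target
    "MouseGrid": (12.61, 100),
    "Camera Mouse": (6.0, 50),      # placeholder time; ~1 off-target click per trial
}

fig, ax = plt.subplots()
for name, (time_s, accuracy) in results.items():
    ax.plot(time_s, accuracy, "o")
    ax.annotate(name, (time_s, accuracy), textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("Average trial time (seconds)")
ax.set_ylabel("Click accuracy (%)")
ax.set_title("Speed-accuracy profile (best performance: upper left)")
plt.show()
```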

So, which device is right for Adam?

Speed-wise, the HeadMouse is at least twice as fast as the other options tested; the next-fastest is the Enable Viacam face tracker, at 4.7 seconds per target. The results also showed the relative inefficiency of using Dragon for general target selection, which confirmed Adam’s hunch that it may be worth finding another solution for mouse tasks. The speed advantage is quite important to Adam, especially as he moves on to the demands of college. Because of this, he’s considering using a HeadMouse for times when he is working at his own computer.

When using public campus computers, or when out and about, there may not always be a HeadMouse available. A reasonable Plan B in that case could be Enable Viacam or the Show Numbers speech feature, since they are roughly tied for second place in efficiency (at about 5 seconds per target), based on our Aim data. Both could be readily available on campus Windows computers: Enable Viacam as a free software download, and Show Numbers as a built-in feature of Windows Speech Recognition.

So, what’s the best device? Many of you in the AT community will recognize this as a trick question! Even with these data, the answer is still, “It depends.” However, the Compass data helped Adam to realize that his productivity could benefit significantly by using a new hands-free mouse solution, and that there were some reasonable alternatives to use in situations where his most productive option wasn’t readily available. He’ll make a final decision after considering these data in context with his preferences and other factors identified in the feature matching process described above.

Other aspects of Compass

In addition to the Aim test, you can also run a Drag test, to drag a target to a destination, or a Menu test, to choose items from a typical menu. Results of these will generally correlate with the Aim results, but may reveal distinctions that only show up when the task is a bit more complex than the simple target task within Aim.

One other thing to mention about Compass is that each test is customizable to match specific needs. For example, in the Aim test, you can provide more or less feedback during the test, change the size and appearance of the targets, choose the number of targets, and more. Generally, configure the test in a way that mimics a realistic situation, but also allows for a fair test. And when comparing tests using different input devices, like Adam is doing, make sure you set up each test the same way; just change the input device and test name so you can easily keep track of what input device was used for each test.

Screenshot of the configuration screen for the Compass Aim test, with about 16 options available to customize. This one shows the default settings, except for labels recording that the test is to be used with MouseGrid.
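
As a sketch of that “same setup, different device” discipline, here’s how you might define the whole test series once and vary only the identifying labels. The option names here are hypothetical, not Compass’s actual settings; the idea is simply that everything except the device name stays fixed.

```python
# Shared Aim settings, defined once so every test is configured identically.
base_config = {
    "n_targets": 15,
    "target_size_px": 40,
    "selection": "single_click",
    "feedback": "default",
}

devices = ["MouseGrid", "Show Numbers", "Camera Mouse",
           "Enable Viacam", "HeadMouse Nano"]

# One config per candidate device; only the labels differ.
test_configs = [
    {**base_config, "test_name": f"Aim - {device}", "input_device": device}
    for device in devices
]
```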

Learn more about Compass and request a free trial at kpronline.com/compass.

Free web version of the Aim test

Another option for trialling hands-free mice is our browser-based version of the Aim test. Compass software is comprehensive and commercial-grade, but sometimes installing and running a desktop program creates a barrier to using it. So I’ve experimented with doing some Compass-like things within a web page and eventually implemented something like the Compass Aim test to run in a browser.

The video below shows you what running the Aim test in the browser looks like. The test itself is very similar to Compass Aim (although the option for very hip music during the test is unique to the web-based version). You select each target by moving the mouse cursor into it and clicking.

Once all targets have been selected, you can view the results for the test, as shown in the video below. The main results are time per target and click accuracy, and there is also a feature to show you the path of the mouse cursor during each trial. In effect, you can “replay” the test to see exactly what happened during each target selection.
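
If you capture or export the cursor samples behind that replay, simple path measures fall out of them. Here’s a minimal sketch, assuming each trial’s path is a list of (time, x, y) tuples: it compares the straight-line distance to the target against the distance the cursor actually travelled, where 1.0 would mean a perfectly direct movement.

```python
import math

def path_efficiency(cursor_path, target_xy):
    """Ratio of straight-line distance to actual path length (1.0 = direct)."""
    _, x0, y0 = cursor_path[0]
    straight = math.hypot(target_xy[0] - x0, target_xy[1] - y0)
    travelled = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(cursor_path, cursor_path[1:])
    )
    return straight / travelled if travelled else 0.0
```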

To compare multiple tests, you can also get a basic table and a speed-accuracy graph, as shown below.

Example of comparing results in the web-based Aim test’s multi-test report. Two tests are shown on the speed-accuracy graph, with accuracy on the y-axis and time on the x-axis.

You’re welcome to try it yourself. Note that the web-based Aim does *not* store data in a robust, permanent way, so be sure to take screenshots or print any important results. Check out the user guide for more info on what the web version of Aim does and does not do.

Other tools for trialling hands-free mice

Compass and the web-based Aim are the only tools I know of that will collect and analyze speed and accuracy data while you perform a mouse task. So I hope you’ll use them to help inform decisions about access methods (and please send me suggestions about how to make them better).

Having said that, there are a variety of paper-and-pencil tools that can also be useful during device trialling. A straightforward example is the Mouse Evaluation document found on the NATE Network’s forms and tools page. This form helps you track the setup of each device trialled, and has you rate performance on basic tasks on a scale of 0-3. As with the software tools above, the focus is on basic tasks like pointing, clicking, and dragging.

A more elaborate version of this same idea is the Assessment of Computer Task Performance manual. It provides a very thorough protocol for setting up tasks, collecting data, and interpreting the results. It is a recipe you can follow, though there is no software support for conducting it. (Actually, Compass is in many ways a computerized version of this protocol.) There is a significant research base behind the Assessment of Computer Task Performance, and you may get some good ideas by reading through it, even if you don’t intend to perform the full protocol yourself.

Next steps

I hope this post has given you a better understanding of how and why to use measurements as part of choosing a hands-free mouse. The same process can be used when making decisions about alternative keyboards, switches, or any other alternative access system.

Have you ever used any of these tools? Do you think you will? Why or why not?
