Our free AT-node site is a handy way to view text entry data on users with physical disabilities. (You can get a quick intro to AT-node in one of our recent posts.) Recently I’ve been working on different ways of visualizing the text entry data in AT-node’s charts, and I could really use some feedback about whether we’re on the right track.
AT-node’s histogram
I’m starting with the main histogram that shows the distribution of the text entry data retrieved in the search. (See this post for more on the histogram.) For example, if you retrieve all 360 cases in the dataset, the histogram shows the range of text entry rate (wpm) in the data and how many cases fall into each level of text entry rate. Here’s what that looks like for the original design:

At a glance we can see that the most common text entry rates fall between 0 and 2 wpm. This is a bit shocking, and we’d like to know more about what’s going on. With the original chart design, each of the bar segments is an individual case, so you can mouseover each segment to see the TER value and what study it comes from. And you can click on a segment to go to the abstract for the study. But that doesn’t really tell you much about the specific cases. You can view a table of the data further down the page, but it can be hard to connect the dots between the cases in the histogram and those in the data table. Finally, there are some sparse datasets where the original chart design ends up looking just plain weird.
I’ve been revising the design to try to make the histogram more informative in general and better-looking for sparse datasets. Each case is now marked by a circle, which is color-coded to show which interface type the person was using. This lets us see, for example, that many (but not all) of those 0-2 wpm cases involve scanning (lime-green circles) or brain-computer interface (magenta). The mouse interaction is still available, but the tooltips now give more information. Here’s what it looks like for the same 360 cases in the revised design (keep in mind that this is typically bigger in the AT-node site itself):

Your feedback needed!
This is very much a work in progress, so I would love to get some feedback at this point. You can view and explore both chart designs yourself. Just visit these links to perform a few different searches using AT-node:
1. All cases
2. Cases with a diagnosis of cervical spinal cord injury
3. Cases using speech recognition or physical keyboard
4. Cases with a diagnosis of neuromuscular disease using brain-computer interface
5. Cases with a diagnosis of cerebral palsy using a cursor on-screen keyboard
This gives a pretty good sample of how the different charts look for different types of datasets. Here are still images for a sparse dataset (the cerebral palsy/cursor OSK scenario above):


Feel free to explore AT-node on your own as well while you’re there.
Just leave your feedback in the comments section below, or email me at hhk@kpronline.com. Thanks for your help!
Some tech details if you’re interested
If you’re wondering, the original charts are drawn using Google Charts, specifically the Histogram method. This gives us some control over how the chart looks and behaves, but also has limitations. We can only give the Histogram method a stripped-down version of the data, so it can’t display additional dimensions like color-coding for the interface or show extra info in the tooltip. It also has a mind of its own about histogram bin size when the dataset is sparse.
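To give a rough sense of why color-coding isn’t possible there, here’s a sketch of the stripped-down rows the Histogram method takes: each row is just a label and one numeric value, so there’s nowhere to attach interface type or other case details. The study names and values below are made up for illustration.

```javascript
// Hypothetical rows for Google Charts' Histogram: one label, one number per
// case, nothing else -- no slot for interface type or tooltip details.
const rows = [
  ['Study', 'Text entry rate (wpm)'], // header row
  ['Study A (case 3)', 1.2],          // made-up example cases
  ['Study B (case 1)', 8.5],
];

// In the page, rows like these would be handed off roughly like this:
//   const table = google.visualization.arrayToDataTable(rows);
//   new google.visualization.Histogram(container).draw(table, options);
```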
The revised charts use the D3 visualization library. This is an incredible collection of visualization tools, the kind of stuff used to build interactive graphics for the New York Times. Learning D3 takes a much bigger investment, but it provides the ability to do pretty much anything you can imagine. I used this histogram chart as a way of learning the basics of D3. Fun, but challenging. In the future, it also offers the possibility of more user exploration, such as looking at the data by age range, gender, or other characteristics.
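The key difference in the revised design is that each case keeps its full record through the binning step, so every circle can carry its own color and tooltip. Here’s a minimal sketch of that idea in plain JavaScript (no D3 required; D3’s own binning utilities work similarly). The field names and sample cases are hypothetical, not AT-node’s actual data format.

```javascript
// Minimal sketch of the binning behind a circle-per-case histogram:
// group cases into fixed-width wpm bins, keeping each full record so the
// chart can color-code by interface and show details in a tooltip.
function binCases(cases, binWidth) {
  const bins = new Map();
  for (const c of cases) {
    const lo = Math.floor(c.wpm / binWidth) * binWidth; // bin's lower edge
    if (!bins.has(lo)) bins.set(lo, []);
    bins.get(lo).push(c); // keep the whole record, not just the number
  }
  return [...bins.entries()]
    .sort((a, b) => a[0] - b[0])
    .map(([lo, items]) => ({ x0: lo, x1: lo + binWidth, cases: items }));
}

// Hypothetical cases: text entry rate plus interface type for color-coding.
const sample = [
  { wpm: 0.8, iface: 'scanning' },
  { wpm: 1.5, iface: 'brain-computer interface' },
  { wpm: 9.2, iface: 'speech recognition' },
];
const bins = binCases(sample, 2);
// bins[0] covers 0-2 wpm and holds two cases; in the chart, each case would
// be drawn as one colored circle stacked inside its bin.
```

Because the bin width is chosen explicitly rather than left to the library, sparse datasets no longer produce the odd-looking bins the original design sometimes did.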
Happy to discuss further — just leave a comment below!