
There’s a Person in There

My wife works as a speech and language pathologist for a local school district and her caseload is a mix of kids from ages 3 to 21. One of her most difficult cases involves a student with severe cerebral palsy who transferred into the district when he was 12 years old. This student cannot speak or use his body to convey information and currently expresses himself primarily through eye contact and facial expressions. Because of this extremely limited range of abilities, his cognitive functioning is unknown.

The only real communication options involve interpretation of the student’s eye movements. Professionals can do this using a contraption called an eye gaze board, which is a simple frame to which you attach pictures or symbols. The professional sits face-to-face with the student, holds up the frame, and then prompts the student with a question. By observing where the student looks, the professional can make assumptions about their intended responses.

Needless to say, this approach has some limitations, particularly in this case. The student’s eye movements are hard for even professionals to interpret and the student himself tires very quickly. After careful consideration, my wife elected to see if the student could use an eye-tracking device in combination with a communication board or speech generating device (SGD) — a specialized computer that allows the user to build messages and relay them to others through a synthesized voice. (Dr. Stephen Hawking is a famous user of such a device.)

Users can access these devices directly using a keyboard or touch screen or they can manipulate them indirectly with a joystick, adapted mouse, optical pointer, eye tracking device, or other type of controller. The specific access method depends entirely on the abilities of the user and, in this case, there are not a lot of options. The student is quadriplegic and does not even have enough control over his head and neck movements to use switch access scanning, in which an indicator such as a cursor steps through selections automatically and the user hits a switch when the indicator lands on the desired choice. Blinking is also out for similar reasons.

A comparison of the two options shows some obvious advantages for the eye tracking option. Unfortunately, these devices are not cheap. While an eye gaze board can be assembled from five bucks’ worth of spare parts, a communication board and eye-tracking device cost about $8,000 apiece. No school district is going to spring for such a purchase these days, so it became necessary for my wife to apply for a loaner and see if she could build a case for Medicaid funding.


Comparative Evaluation (Eye Gaze Board & Eye Tracking Device)

Feature                                     | Eye Gaze Board | Eye Tracking Device
Ease of Set-Up                              | Easy           | Difficult
Ease of Listener Comprehension              | Difficult      | Easy
Y/N Response Accuracy                       | 20-30%         | 80%
Number of Communication Functions           | 4              | 14
Size of Picture Field                       | 4 pictures     | 12 pictures
Length of Time Before Fatigue               | 10 minutes     | 30-40 minutes
Maximum Length of Utterance                 | 1              | 4+
Able to Fine-Tune Dwell Times               | No             | Yes
Able to Independently Introduce a Topic     | No             | Yes
Able to Communicate with Multiple Listeners | No             | Yes
Able to Call for Attention                  | No             | Yes
Able to Communicate with Non-Professionals  | No             | Yes
Able to Repair Communication Breakdown      | No             | Yes


1st Trial

The loaner — a Dynavox Vmax/EyeMax system — arrived in the last few weeks of the 2011 school year and came with some standard navigation screens or “boards” that are based on vocabulary and language ability levels. The user categories include — in order of ability — emergent communicators, context-dependent communicators, and independent communicators.

The primary choice for this case was between context-dependent, meaning that the student’s ability to communicate depends on the environment, topic, or communication partner, and independent, meaning that the user is able to combine single words, spelling, and phrases to create novel messages about a variety of subjects.

[Image: Examples of the Context-Dependent “Child 12” Navigation Page (left) and Scene (right)]

[Image: Examples of the Independent Gateway “Child 12” Set-Up (left) and “Child 40” Set-Up (right)]

These navigation boards make extensive use of picture communication symbols (PCS) and the Fitzgerald color coding system for language development. PCS are simply standard graphics whose meanings are easily understood, while the Fitzgerald “key” system assigns colors to specific grammatical forms. The pseudo-3D appearance of the buttons looks a little dated to my eye but the perceived affordance may be necessary for some users. The program itself is highly customizable.

To create a message using the different boards, a user would navigate through the system and click on each component in turn until they were finished. For the purposes of measuring message complexity, each of these steps counted as one “navigational unit.”

A simple request for a sandwich might look like this in a context-driven environment (for an utterance of four navigational units):

[Image: context-driven board sequence for requesting a sandwich]

The same request in a word-based environment would look like this (for an utterance of three navigational units):

[Image: word-based board sequence for the same request]
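
For readers who think better in code, here is a rough sketch of how that counting works. The board layout and button names below are invented for illustration; they are not the actual Dynavox pages.

```python
from statistics import mean

# Each utterance is recorded as the ordered list of buttons the user selected.
# The button names below are invented; one selection = one "navigational unit".
context_driven = ["Food & Drink", "Lunch", "I want", "sandwich"]  # 4 units
word_based = ["I", "want", "sandwich"]                            # 3 units

def navigational_units(utterance):
    """One navigational unit per button selection in the utterance."""
    return len(utterance)

log = [context_driven, word_based, ["hello"]]
print([navigational_units(u) for u in log])      # [4, 3, 1]
print(mean(navigational_units(u) for u in log))  # average length of utterance (~2.7)
```

The same simple count is what drives the “average length of utterance” numbers reported for the second trial below.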

My wife’s student’s communication level is context-dependent. However, the navigation boards available for context-driven communication were too complex for him to use and many of the topics simply weren’t relevant. (He would never use either of the above examples because he doesn’t eat solid food — all nutrients are provided through a gastro-intestinal tube.) To get around some of these issues, she programmed a customized board based on his particular abilities and interests.

Some of these modifications were fairly extensive. Since her student had no understanding of grammatical structure at this time, she simplified the color scheme so it only used three colors: orange for the “back” button, blue for any folder that could be opened, and gray for any item at the bottom of a decision tree. She also tightened up the button groupings to reduce difficult eye movements and eliminated any buttons that would appear “underneath” the back button to reduce navigation errors. Finally, she set the dwell time between 0.85 and 0.9 seconds, the effective “window” for reading the student’s gaze accurately.
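
For the curious, dwell-based selection is conceptually simple. A minimal sketch (a generic illustration only, not the Dynavox implementation) looks something like this: the gaze has to rest on a button for the full dwell window before it fires, and the timer restarts whenever the gaze moves away.

```python
# Generic dwell-selection loop (illustration only, not the Dynavox code).
DWELL_TIME = 0.9  # seconds; the trial settled on a window of roughly 0.85-0.9 s

class DwellSelector:
    def __init__(self, dwell_time=DWELL_TIME):
        self.dwell_time = dwell_time
        self.target = None     # button currently under the gaze
        self.elapsed = 0.0     # how long the gaze has rested on it

    def update(self, gazed_button, dt):
        """Call once per frame with the button under the gaze (or None) and
        the seconds since the last frame. Returns a button when it fires."""
        if gazed_button != self.target:
            self.target = gazed_button  # gaze moved: restart the timer
            self.elapsed = 0.0
            return None
        if gazed_button is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_time:
            self.elapsed = 0.0          # avoid firing the same button twice in a row
            return gazed_button
        return None

# Example: a steady gaze on the "yes" button at 30 frames per second.
selector = DwellSelector()
for frame in range(60):
    fired = selector.update("yes", dt=1 / 30)
    if fired:
        print(f"Selected '{fired}' after about {frame / 30:.1f} seconds")
        break
```

A shorter dwell time makes selection faster but also makes accidental selections more likely, which is why being able to fine-tune it per user matters.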

[Image: Customized Communication Board]

The student was able to use the system for about 2 1/2 weeks in late Spring 2011 and for one week in Fall 2011. During the trial period, he used the twelve-button screen for several language functions, including basic greetings, requests, yes/no responses, exclamations, expressions of physical state, and even a few jokes (knock-knock jokes that my wife programmed into the computer). The range of communication partners included school faculty and several family members.

For casual observers, the student’s performance using the device was revelatory. One teacher who overheard the student working on a craft-related activity stated simply, “Wow, there’s a person in there.”

Although it might seem obvious that such a tool would be beneficial for this particular student, the services were not deemed “medically necessary” and the initial request for Medicaid was denied. The evaluator felt that there just wasn’t enough evidence showing independent use of the system to create novel utterances. (Attempts to include some peer-appropriate language may have backfired when the evaluator dinged the student for overly frequent use of the phrase “smell ya later.”)

Another, longer trial was suggested.

2nd Trial

The next loaner arrived in April 2012 and my wife was determined to gather more quantitative data and provide as much documentation of the second trial as she could. Each of the student’s statements during the trial period was marked down and evaluated for complexity (number of navigational units or levels), conversational turns (the alternations or volleys between two speakers), and functions. Functions include descriptions (items, past events), requests (actions, information, objects), responses to requests, social devices (spontaneous calls, exclamations, greetings), and statements (emotions, future events, personal information, opinions). After four weeks, there were 265 individual utterances available for analysis.
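
As a rough sketch of what that bookkeeping looks like (the field names and sample entries below are hypothetical, not her actual data sheet), each utterance boils down to a small record that can then be tallied:

```python
from collections import Counter
from statistics import mean

# Hypothetical records; the real log was kept by hand and analyzed in Excel.
# Each entry: (navigational units, conversational turns, function, listener)
log = [
    (1, 1, "greeting", "professional"),
    (2, 3, "request", "peer"),
    (4, 2, "statement", "peer"),
    (1, 1, "response", "professional"),
]

print("Utterances logged:", len(log))
print("Average navigational units:", mean(units for units, _, _, _ in log))
print("Function counts:", Counter(func for _, _, func, _ in log))
print("Listener counts:", Counter(listener for _, _, _, listener in log))
```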

A few initial findings:

  • The student’s accuracy of responses to yes/no questions increased to 80% using the eye tracking device in conjunction with the SGD (compared to 20-30% on the eye gaze board).
  • The student’s ability to look at an item on command improved to 85%.
  • The student was able to comprehend all of the noun and verb phrases programmed into the device.
  • The student demonstrated comprehension of the following: categories, colors, shapes, sizes, action words, possessives, time words, words denoting quantity, pronouns, and wh-questions.
  • The student spontaneously accessed the machine to call attention and participate in conversations with a variety of adults and peers.
  • The student combined multiple symbols to create a message and often used one symbol in novel ways. For example, he would use “bye” to indicate that he wanted to stop an activity.
  • The student demonstrated the ability to repair conversational breakdowns. After an unintended response, he would often “click” a word multiple times to emphasize the response he actually intended.

During the trial period, the student gradually shifted from single-level utterances to more complex navigational structures. By the second half of the trial, 61% of his utterances used a combination of symbols and the average length of utterance increased from about 1.6 navigational units during the first two weeks of the trial to over 1.8 navigational units in the second two weeks. A basic MS Excel t-test performed on this metric suggests that this change was significant.

[Chart: Distribution of Utterances by Navigational Units (1 vs > 1)]

[Chart: Distribution of Utterances by Navigational Units]

The mean score for Half 1 (M = 1.605, SD = 0.727, N = 119) was significantly smaller than the mean score for Half 2 (M = 1.836, SD = 0.822, N = 146) using the two-sample t-test for unequal variances, t(261) = -2.42, p = 0.016. This implies that the student has the attention, memory, and problem-solving skills to use an SGD to achieve his functional communication goals.

t-Test: Two-Sample Assuming Unequal Variances

                             | Half 1 | Half 2
Mean                         | 1.605  | 1.836
Variance                     | 0.529  | 0.676
Observations                 | 119    | 146
Hypothesized Mean Difference | 0      |
df                           | 261    |
t Stat                       | -2.42  |
P(T<=t) one-tail             | 0.008  |
t Critical one-tail          | 1.651  |
P(T<=t) two-tail             | 0.016  |
t Critical two-tail          | 1.969  |
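
As a sanity check, the same comparison can be reproduced outside of Excel from nothing but the summary statistics above. A quick sketch using SciPy’s Welch’s t-test (assuming SciPy is installed) lands on the same t statistic and two-tailed p-value:

```python
# Re-running the Excel analysis from summary statistics only,
# using Welch's two-sample t-test (unequal variances).
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=1.605, std1=0.727, nobs1=119,  # Half 1
    mean2=1.836, std2=0.822, nobs2=146,  # Half 2
    equal_var=False,                     # Welch's correction
)

print(round(result.statistic, 2))  # about -2.42
print(round(result.pvalue, 3))     # about 0.016 (two-tailed)
```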

Interestingly, many of the student’s more complex utterances came in conversations with peers: pre-teens with no training in speech and language communication. The student also increased the number of conversational turns per topic over time and, as with conversational complexity, his performance was better with his peers. He had longer conversational “volleys” and used longer strings of symbols than he did in his conversations with adults.

Navigational Units Comparison by Listener

Listener     | 1     | 2     | 3     | 4
Peer         | 45.6% | 40.5% | 8.9%  | 5.1%
Professional | 46.2% | 36.3% | 16.5% | 1.1%

Conversational Turns Comparison Over Time

Half | 1     | 2     | 3     | 4    | 5    | 6
1    | 59.5% | 25.0% | 9.5%  | 4.3% | 1.7% | 0.0%
2    | 52.8% | 27.1% | 10.4% | 5.6% | 3.5% | 0.7%

Conversational Turns Comparison by Listener

Listener     | 1     | 2     | 3     | 4    | 5    | 6
Peer         | 53.9% | 23.7% | 11.8% | 6.6% | 3.9% | 0.0%
Professional | 55.6% | 27.8% | 9.4%  | 4.4% | 2.2% | 0.6%

While there is no doubt that this technology would prove incredibly beneficial in this situation, the strict rules surrounding Medicaid requests make the outcome difficult to predict. By carefully documenting the results of this second trial (and including some awesome tables and charts), my wife hopes to tip the scales in her student’s favor. The report was mailed yesterday, so cross your fingers. As my wife’s student might say (through his technology-assisted communication device): “Let’s get this party started!”

Update:

  • June 22, 2012 – The request was approved. There is some hard work ahead but this is a big hurdle to clear. Congratulations and good luck to everyone involved!

Nudge, Nudge

A recent Wired article discussed the dangers of trying to influence users through nudging — the practice of structuring a person’s choices in such a way as to get a desired result. It highlighted one of the key dynamics facing today’s high-tech companies as they shift from relatively independent creators of “whiz-bang” software to full-fledged consumer-oriented businesses. This tension between following your bliss and taking into account the expectations of others can be a tough cultural change for some companies.

As corporate self-interest becomes more important than user satisfaction, the nudging company’s approach to consumers becomes fragmented and incoherent.

The target of the article was Facebook but it could just as easily be applied to anything from politics to parenting. I remember learning pretty quickly that if I wanted my five-year-old daughter to put on a sweater, I didn’t come right out and ask her if she wanted to put on a sweater … I asked her if she wanted to put on the red sweater or the blue sweater. Sheer genius. Of course, as she got older, she got wise to my evil machinations and the nudging approach started to fail.

The problem for businesses is that their customers are at least as savvy as young children, and they get frustrated when websites, surveys, or automated phone menus don’t offer up reasonable choices (or try to trick them into doing something they don’t want to do). This type of behavior can contribute to reduced customer satisfaction, lost revenues, and lower brand value.


Humanizing the Big Numbers

This recent article from Fast Company provides some great examples of how to make the statistics of big numbers more meaningful to the average person. This is a great skill to hone. Relating events or ideas to common human experiences makes these things easier to understand and leads to more productive discussions. The approach is similar to developing the “return on investment” for a business case. The more clearly you can show the benefits of a particular solution, the more likely you are to gain traction with the people you are trying to influence:

“A good statistic is one that aids a decision or shapes an opinion. For a stat to do either of those, it must be dragged within the everyday. That’s your job — to do the dragging. In our world of billions and trillions, that can be a lot of manual labor. But it’s worth it: A number people can grasp is a number that can make a difference.”
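
Here is a small sketch of the kind of “dragging” the article is talking about (the figures below are round placeholders, not real budget numbers):

```python
# Re-expressing big numbers on a human scale.
# The inputs are round, made-up figures used purely for illustration.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

def seconds_to_years(seconds):
    return seconds / SECONDS_PER_YEAR

def per_person(total, population):
    return total / population

print(f"1 million seconds is about {1e6 / 86400:.0f} days")             # ~12 days
print(f"1 billion seconds is about {seconds_to_years(1e9):.0f} years")  # ~32 years
print(f"$1 trillion spread across 300 million people is about "
      f"${per_person(1e12, 300e6):,.0f} each")                          # ~$3,333 each
```

Twelve days and thirty-two years are things people can picture; a raw count of seconds is not.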

It is also similar to the concept of human scale in architecture. The design of things like stairs, steps, seats, doors, windows, railings, hallways, ceilings, tables, and shelves is influenced by the physical and sensory capabilities of human beings. You can play with this scale to make things appear either monumental or intimate but the range of variability is limited to what people can actually use. People find places designed for automobiles (parking structures, arterial streets, big box retail stores) alien and uncomfortable. The same is true for numbers or statistics that fall outside the range of human comprehension.

So what is the “Goldilocks zone” for these measures? It depends on the metric, of course, but here are a few guidelines off the top of my head:

                         | Too Big                               | Too Small                          | Just Right
Time                     | Eon, Millennia                        | Nanosecond                         | Second, Minute, Hour, Month, Year
Distance                 | Parsec, Light Year, Astronomical Unit | Angstrom, Micron                   | Inch, Foot, Yard, Mile, Centimeter, Meter, Kilometer
Temperature              | Planck Temperature                    | Absolute Zero                      | Room Temperature
Mass/Weight*             | Solar Mass                            | Atomic Mass Unit                   | Ounce, Pound, Gram, Kilogram
Objects                  | Star, Galaxy                          | Molecule, Atom, Subatomic Particle | Building, Car, Book, Tool
Electromagnetic Spectrum | Radio Wave, Microwave                 | Gamma Ray, X-ray                   | Visible Light

* Yes, I know.

Updates:

  • Channel Surfing Ain’t What it Used to Be

    As I was flipping through the stations on the TV the other day, I became particularly aware of the slight delay between the time I pressed the button on the remote and the actual change of the channel. This is one of those minor annoyances that shouldn’t bother anyone but it just seems weird that all the amazing technological advances in television (high-definition picture, thousands of channels to choose from) should come at the cost of the “crispness” in performance that I remember from the old analog broadcasts.

    The lack of responsiveness really becomes noticeable during casual browsing. The two- or three-second pause between channel clicks appears to be much longer than the amount of time the average human needs to evaluate the onscreen content. This sets you up for a lot of waiting and really has a negative impact on the user’s experience. If you have cable, you can use the guide feature, of course, but it just doesn’t provide the same satisfaction as a good, old-fashioned, rapid-fire channel surf.

    In a recent article, Jakob Nielsen revisited the topic of website response times and noted that delays of even a few seconds can contribute to an unpleasant user experience. He highlights three basic response speeds and how they relate to the human attention span:

    • 0.1 seconds provides a user with the feeling of an instantaneous response — a level of responsiveness that is essential to supporting the feeling of direct manipulation.
    • 1 second keeps the user’s flow of thought seamless and still allows them to feel in control. This degree of responsiveness is needed for good navigation.
    • 10 seconds keeps the user’s attention but they are starting to feel that they are at the mercy of the computer and wish it was faster. After 10 seconds, their mind starts to wander.

    These limits would apply equally well to an established technology like television.
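
    As a toy illustration, you could bucket any measured delay against those same limits (the 2.5-second channel-change figure below is just my rough guess from the couch, not a measurement):

```python
# Bucketing a measured delay against Nielsen's response-time limits.
def responsiveness(delay_seconds):
    if delay_seconds <= 0.1:
        return "feels instantaneous (supports direct manipulation)"
    if delay_seconds <= 1.0:
        return "flow of thought stays intact"
    if delay_seconds <= 10.0:
        return "attention held, but the wait is noticeable"
    return "attention lost; the mind starts to wander"

print(responsiveness(0.05))  # a near-instant response
print(responsiveness(2.5))   # my rough guess at a digital channel change
```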

  • One Man’s Helpful Hint is Another Man’s Interruption

    A friend recently sent me this link to a discussion on the merits of obscure airport security notifications about snowglobes. Oddly enough, I experienced the snowglobe issue firsthand on a recent trip to New Mexico. The circumstances:

    • My daughter collects snowglobes
    • Snowglobes are the classic souvenir
    • Terrorists have attempted to smuggle incendiary fluids in small containers
    • The Feds only allow liquids in containers below a certain size onboard
    • Snowglobes contain an undetermined amount of liquid
    • Snowglobes are therefore banned from carry-on luggage
    • This information is provided to passengers only after the luggage check-in
    • There is no service that allows you to package and mail anything from the airport terminal
    • Nobody buying a snowglobe at a local tourist trap is going to piece all of this together beforehand
    • Terrorists and government bureaucracy now stand in the way of my daughter’s happiness

    This whole situation was extremely annoying and I have to admit that a sign or some sort of notification would have helped. The trick for delivering a message like this is how (and when) to target your audience. Obviously, a sign taking up precious real estate in the terminal can be distracting and dilutes the effectiveness of more important messages. On the other hand, there is a small subset of people who would really benefit from this information if it could be delivered at the right moment.

    Interestingly, this incident did answer a question that had been bugging me throughout the trip: why is it so hard to find a snowglobe in Albuquerque? All I could find were items that looked like snowglobes but were partially filled with sand. It wasn’t like the area didn’t get snow (people ski there), so what was the deal? My guess is that the local tourist shops developed the sandglobes in response to the airport security issue. They were everywhere. Maybe the snowglobe warning should have been delivered at that point.