SEP 2020-MAY 2021
RAI: Robot Anomaly Interface
Overview
Our client, GVSC, came to our team seeking a way to alert human operators when problems occur with robots sent on reconnaissance missions. My team set out to create an interface system that combined physical and visual notifications. We created a haptic smartwatch and tablet interface system to aid human-robot communication in anomalous situations.
I worked part-time as a Fullstack UX Designer for the Software Engineering Institute (SEI) at Carnegie Mellon University alongside a team of researchers, designers, and engineers. I did both UX research and UX design.
I designed the haptic smartwatch and contributed to the tablet interface design. As teammates left and joined, I ensured handoff documents were shared properly and prepared onboarding materials.
This project had some interesting constraints I had the opportunity to work through. Our team was very dynamic, with teammates leaving and joining throughout the project. I oversaw many of these personnel changes, ensuring proper documentation and leading onboarding sessions.
There were also sudden and unexpected funding cuts that impacted our project, right as I began work on the smartwatch. This left us with the unique challenge of wrapping up all of our work and presenting it within three weeks, before the project went on hold until further funding could be secured.
Problem
GVSC wanted a way to make it easier for human operators to deal with any problems that robot vehicles run into during operations.
In the current state, if something happened and the robot stopped functioning as expected, operators didn't receive any notification. Once they realized an error (or anomaly) had occurred, they had no way to easily diagnose what went wrong other than to physically examine the robot and guess. So when the robot runs into an anomaly, how do we communicate that error state to the operator so they can figure out how to resolve the issue? That was the problem we were tackling.
Solution
We created a multimodal robot anomaly interface (RAI) system that combines an 11-inch tablet with a haptic smartwatch to deliver alerts and updates that are both visual and tactile. The smartwatch squeezes the operator's arm to signal an alert while presenting high-level details. The tablet is an in-depth interface that helps the operator dig into the robot's condition and diagnose the problem.
4 smartwatch prototype iterations, 1 round of user testing
I was involved in both UX research and design. We went through 10 rounds of initial user research with storyboarding. The tablet interface went through 9 iterations, and I took the smartwatch through 4 iterations with 1 round of user testing.
Research into design
I bridged the transition from research to design by leading the creation of our design principles. These principles helped us decide which features to keep or cut.
Maximizing screen real estate
I made the initial mistake of putting as much information on the smartwatch as I would on a mobile screen, even though the watch display is much smaller. To remedy this, I spread the same information across additional screens.
Dealing with unique physical constraints
A unique constraint I dealt with was that operators often wear gloves, which meant buttons had to be large and easy to tap. I used the smartwatch's horizontal swipe interaction to balance offering multiple options with keeping the buttons big.
Next steps
The next step for this project would have been to test the interfaces we designed. We came up with 2 success metrics we would have evaluated if we had additional time.
I was a full-stack UX designer involved in both research and design. I got to work closely with the engineering team on the smartwatch as they built its haptic interactions. I enjoyed working through the unique constraints that came with this domain and designing for a multimodal solution.