Summary

Toyota Research Institute (TRI) has a goal of developing an autonomous vehicle that would be incapable of causing a crash. TRI is working on two autonomous products: Chauffeur, an SAE L4/L5 system, and Guardian, which is being developed to improve the driver's experience while they remain in control of the vehicle. One of the latest advancements with the Guardian system is blended envelope control, where Guardian combines and coordinates the skills and strengths of the human and the machine.

Deliverables

Wireframes
User flows
Interaction design
Interface design
Design specification documents

Project duration

February 2017 – September 2019

The overall goal of the user experience team is to develop experiences that will build and maintain trust between the driver and our systems.

My role

I am the primary interaction designer working on the end-to-end experience for the Guardian product. I work with different feature teams to better understand the goals and needs of the product, which are broken down into different use cases. I collaborate with researchers, engineers, and designers to work through my flows and designs, fill in any gaps I might have missed, and ensure we are all aligned with the UX goals of the feature(s) I'm working on. I keep the lead interaction designer for driving updated on my work to ensure alignment across other TRI products.

The photo above shows a steering wheel set-up for Guardian cars, which allows a trained safety operator to take over during testing if need be. This photo was taken in 2017, and the set-up has changed a bit since then.

Problem

For the purposes of this case study, I'm focusing on one particular problem I worked on that TRI has shared publicly.

Improving trust in an autonomous system that very few people have experienced is a big challenge, especially with Guardian, since we have to establish and maintain mental models around its behaviors. When Guardian shifted to blended envelope control, its interventions came to range from a small maneuver the driver wouldn't even notice to a large one that could startle them.
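To make the blended control idea concrete, below is a minimal, hypothetical sketch of how human and machine steering commands could be blended based on how close the car is to the edge of its safety envelope. The function name, the linear blending law, and the margin signal are all my illustrative assumptions, not TRI's actual control logic.

```python
def blend_steering(human_cmd: float, machine_cmd: float, envelope_margin: float) -> float:
    """Blend human and machine steering commands (illustrative only).

    envelope_margin: 1.0 means well inside the safe envelope,
    0.0 means at its boundary. The closer the car gets to the
    boundary, the more authority shifts from human to machine.
    """
    margin = max(0.0, min(1.0, envelope_margin))  # clamp to [0, 1]
    authority = 1.0 - margin  # machine authority grows near the boundary
    return (1.0 - authority) * human_cmd + authority * machine_cmd

# Well inside the envelope: the driver's input passes through nearly untouched.
print(blend_steering(human_cmd=0.10, machine_cmd=0.30, envelope_margin=0.95))  # ~0.11
# Near the boundary: the machine's corrective command dominates.
print(blend_steering(human_cmd=0.10, machine_cmd=0.30, envelope_margin=0.05))  # ~0.29
```

A continuous blend like this is exactly why interventions range from imperceptible nudges to large corrections, which is what makes communicating them to the driver so important.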

One of the biggest questions we are solving for is: how can we ensure the driver is aware of what the car is currently doing and what it will do in the near future?

Above are earlier iterations of communicating Guardian's current behavior using text and visuals.

Learning through research first

As a UX team, we decided which features needed research and which ones had enough backing that they didn't need as much initial investigation. To convey what the car is currently doing, the primary elements we use are a path line and a message.

The question we wanted to answer was: how much information do drivers need? We also had to keep in mind the other visual elements that might clutter the space and complicate the driving task.

I sketched out different flows and visual concepts and had conversations with the researchers to understand their needs for testing stimuli. After a few iterations with the broader UX group, I shared the flows and concepts with a visual designer, an audio designer, and a 3D designer to create higher-fidelity screens for the researchers to use as testing stimuli.

Improving the experience

Through research, we learned that the path line was more effective for longer Guardian interventions but unnecessary for smaller ones, which happen far more frequently. Based on that, I made the decision to remove the path line from the Guardian experience.
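The finding boils down to a simple gating rule, sketched below with an assumed duration threshold (the threshold value and signal names are hypothetical, not TRI's). In practice, because short interventions dominate, a gate like this would almost always be closed, which supported removing the path line outright.

```python
# Hypothetical threshold; short, frequent interventions end before a
# path line adds value, while sustained interventions benefit from it.
PATH_LINE_MIN_DURATION_S = 2.0

def should_show_path_line(predicted_intervention_duration_s: float) -> bool:
    """Show the planned path only for longer interventions (illustrative)."""
    return predicted_intervention_duration_s >= PATH_LINE_MIN_DURATION_S
```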

Sometimes another iteration of studies is needed, especially if the feature request was made by a stakeholder; other times I start incorporating the findings into the overall user flows of the product itself. I update the written documentation, visuals, and task flows per the findings and redistribute them to the team and engineers.

We discovered a new potential problem: how do drivers know if the system is intervening?

Being involved throughout implementation

I actively worked side-by-side with engineers to ensure the in-car experience was properly represented per the interaction specifications. I learned how to replay car data on my computer and view the interactions in real time; this was a very effective way to quickly build, test, and iterate before pushing updated experiences into the car, which takes much longer.
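As a rough sketch of that desktop replay workflow (the log format, field names, and render callback here are assumptions of mine, not TRI's tooling), the core loop reads timestamped vehicle states from a log and drives the interface at real-time speed:

```python
import json
import time

def replay_log(path: str, render) -> None:
    """Replay timestamped vehicle-state records at real-time speed.

    Assumes one JSON record per line with a 't' field in seconds, e.g.
    {"t": 0.0, "guardian_state": "monitoring", "speed_mps": 12.3}.
    `render` is whatever draws the interface from one state record.
    """
    with open(path) as f:
        records = [json.loads(line) for line in f]

    start = time.monotonic()
    t0 = records[0]["t"]
    for record in records:
        # Sleep until this record's timestamp relative to the log start.
        wait = (record["t"] - t0) - (time.monotonic() - start)
        if wait > 0:
            time.sleep(wait)
        render(record)

# Example: print each state instead of drawing the real UI.
replay_log("drive_log.jsonl", render=lambda r: print(r["t"], r["guardian_state"]))
```

Iterating against recorded data like this meant a design change could be checked in minutes instead of waiting for time in a car.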

Outcome

The Guardian user experience was put into our test vehicles in late 2018 and was shared at CES 2019. We are exploring additional modalities to quickly inform the driver about what the car is doing, since visual information can take longer to perceive.

I hope to further improve the experience by better understanding what type of information is highest priority to our drivers. The car knows a lot, and I know some of that information is needed by the driver, but it's still unclear whether drivers value information about obstacles on the road more than knowing how much control Guardian currently has over the car.

All photo credit to Toyota Research Institute.