Voice in Oculus

Redesigning the user education and onboarding experience

Date

January - May 2020

What I did

Conversation design, interaction design, prototyping, user research


Background

At Meta, I had the chance to work on the Assistant on Oculus team, which was building command-and-control voice functionality to navigate around VR. At the time of this project, Voice Commands was an experimental beta feature and had some light usage from early adopter power-users. As we got closer to our public launch, the team wanted to build out a more robust onboarding and user education strategy that encouraged users to continue using the feature.

Problem

Initially, Voice Commands setup had a concerning number of drop-offs, with only 21% of users making it to the last screen and completing setup. There just hadn’t been enough design love for the onboarding flow, in part because Voice Commands was still rolling out in beta.

01. Setup screen

02. Voice data consent screen

03. Short explainer on how to activate voice commands

04. Final screen

This previous onboarding experience was text-heavy and did not clearly sell the value proposition of voice commands, let alone why anyone should opt in to sharing their voice data. It offered only light education on which voice commands were available and how to invoke them, and no true onboarding experience for how to use your voice in Oculus.

Goals for the redesign

  • Familiarize the user with how to activate the assistant

  • Allow users to practice end-to-end interactions with the Attention System UI

  • Provide education on which voice commands are available

Interactive Tutorials

Inspired by the gamified tutorials Oculus has for other apps, I wanted to explore a more interactive experience to introduce voice commands to new users. Not only are these tutorials fun and engaging, they are also effective because they teach by doing.

The Oculus Hand Tracking tutorial has you practice key motions like pinch and scroll

“First Steps” teaches you to use your VR controllers through short, simple tasks

Design Process

Partnering with a product designer on our team, we mapped out a few rough interaction flows for a more conversational and interactive experience. We picked the three to five most-used utterances, like “Open Beat Saber”, “Take a photo”, and “What time is it?”, for the user to practice in the moment.

 
 

Design and technical constraints to consider:

  • Communicating complex interactions effectively within a limited Attention System UI

  • Memory constraints around handling user requests outside the practice examples, since we were building a canned dummy experience rather than invoking the real assistant

  • Making it obvious to users that they are in a tutorial: they are not actually invoking real commands, nor are we saving their real voice data (see the sketch after this list)
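
To make the constraints concrete, here is a minimal TypeScript sketch of what a canned tutorial handler could look like: instead of routing audio through the real assistant stack, it matches on-device transcripts against a small allowlist of practice utterances and plays back scripted responses. All names and shapes here (TutorialStep, handleTutorialUtterance, the matching logic) are hypothetical illustrations, not the actual Oculus implementation.

```typescript
// Hypothetical sketch of a canned tutorial handler. Illustrative only:
// the real commands are never invoked, and no voice data is saved.

interface TutorialStep {
  prompt: string;   // what we ask the user to say, e.g. "Take a photo"
  match: RegExp;    // loose matcher so near-misses still succeed
  response: string; // scripted reply shown in the Attention System UI
}

const TUTORIAL_STEPS: TutorialStep[] = [
  { prompt: 'Open Beat Saber', match: /open beat ?saber/i, response: 'Opening Beat Saber… (just practice!)' },
  { prompt: 'Take a photo', match: /take a (photo|picture)/i, response: 'Snap! (Just practice, no photo taken.)' },
  { prompt: 'What time is it?', match: /what time is it/i, response: 'It’s practice o’clock.' },
];

// Returns the scripted response for the current step, or a retry hint.
// Only the on-device transcript is consumed, and only for the duration
// of the tutorial; nothing is uploaded or stored.
function handleTutorialUtterance(stepIndex: number, transcript: string): string {
  const step = TUTORIAL_STEPS[stepIndex];
  if (!step) return '';
  return step.match.test(transcript) ? step.response : `Try saying “${step.prompt}”`;
}
```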

Attention System

While working on this project, our team was also in the process of revamping our Attention System UI on Oculus. The Attention System is the visual and audio representation showing that the Assistant is listening to, processing, and taking action on a user’s request. This is communicated via animations, sound design, and on-screen transcription.
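
Conceptually, the Attention System cycles through a small set of states, each with its own animation, earcon, and transcription behavior. A rough TypeScript sketch of that state model, where the state names and transition table are my own illustration rather than the internal implementation:

```typescript
// Hypothetical model of the Attention System states described above.
type AttentionState =
  | 'idle'       // assistant not invoked
  | 'listening'  // wake word heard; live transcription shown on screen
  | 'processing' // request captured; thinking animation and sound play
  | 'acting';    // executing the command and confirming the result

// Which states may follow which; e.g. listening falls back to idle
// if the user abandons the request.
const TRANSITIONS: Record<AttentionState, AttentionState[]> = {
  idle: ['listening'],
  listening: ['processing', 'idle'],
  processing: ['acting', 'idle'],
  acting: ['idle'],
};
```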

 

Attention System v1

 

Attention System v2

Since we wanted more space and flexibility for our interactive tutorial, we worked with our engineers to create an expanded Attention System state to allow for more UI elements like lists, buttons, and progress indicators.
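
In effect, the expanded state lets the assistant surface structured content instead of just a transcript. A hypothetical shape for that payload, with all field names being illustrative assumptions:

```typescript
// Hypothetical payload for the expanded Attention System state: beyond
// the transcript, it can carry structured UI such as lists, buttons,
// and a progress indicator for multi-step flows like the tutorial.
interface ExpandedAttentionContent {
  title?: string;                                // e.g. "Here's what you can say"
  items?: string[];                              // list rows, e.g. example commands
  buttons?: { label: string; action: string }[]; // e.g. "Next", "Skip tutorial"
  progress?: { current: number; total: number }; // e.g. step 2 of 3
}
```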

 

Expanded Attention System UI for FAQs in response to “Hey Assistant, what can I say?”

 

User Research

Partnering with Blink UX, we ran a remote research study where we showed 8 participants both the current experience and a video prototype of the redesigned experience. These were 90-minute one-on-one sessions with a Blink UX researcher. All participants owned VR headsets and had experience using Oculus Quest.

Every participant preferred the Redesigned NUX over the Existing NUX. All found the interactivity, audible dialog, and example voice commands engaging, and said these elements helped them better learn the system. When scoring each version of the NUX, all participants rated the redesigned version higher on likelihood to adopt, feeling prepared, understanding and clarity of text, and confidence.

The biggest takeaway was that the interactive approach made the how and why of using Voice Commands much clearer, and users also knew where to go to find more info about it later.

Some key recommendations:

  • Length: the tutorial in the prototype video felt too long; the recommendation was to shorten it to just 3 examples rather than 5

  • Navigation: Allow users to skip steps and close out of the tutorial

  • Progress indicators: Similar to navigation, users want to know how long the setup process is and how to skip around

Redesigned NUX

A sample of the redesigned NUX experience that we initially launched with.

 

Findings and Next Steps

At launch, our team ran an A/B/C test with the following experiences, measuring opt-ins and Day 0 engagement for each:

  • Control: old NUX with no interactive tutorial

  • Privacy-First: new NUX tutorial with Privacy terms at the beginning of flow

  • Privacy-Last: new NUX tutorial with Privacy terms at the end of flow

Both versions of the new NUX showed significant improvement in user engagement and task completion. We immediately moved to enable the Privacy-First flow for all users, since it led to the highest conversion rates in both completed setups and successful Day 0 tasks.
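
As a rough illustration of how the arms differed, the privacy consent step simply moved within an otherwise identical step sequence. A sketch with hypothetical arm names, step names, and bucketing helper (none of these reflect the actual experiment infrastructure):

```typescript
// Hypothetical sketch of the three NUX arms from the A/B/C test.
type NuxStep = 'privacyConsent' | 'interactiveTutorial' | 'finish';

const ARMS: Record<string, NuxStep[]> = {
  control: ['privacyConsent', 'finish'], // old NUX, no interactive tutorial
  privacyFirst: ['privacyConsent', 'interactiveTutorial', 'finish'],
  privacyLast: ['interactiveTutorial', 'privacyConsent', 'finish'],
};

// Deterministic bucketing so a given user always lands in the same arm.
function assignArm(userId: string): string {
  const names = Object.keys(ARMS);
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return names[hash % names.length];
}
```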

Some next steps to continually enhance our user education experience:

  • Supplemental user education for users who complete setup but don’t participate in the interactive tutorial

  • Consider testing similar interactivity in other places (help screen, on-demand, new commands, etc.)
