Our first task was to define the key terms of our prompt. As the user-focused team member, I conducted secondary research to help us understand what having low vision means. The first round of research revealed that low vision is a legal status encompassing a spectrum of conditions, from partially obstructed sight to full blindness. Age-related vision loss, in particular, affects 4.2 million Americans aged 40 and above, causing many to lose confidence and independence.
On top of confirming that people with low vision do travel alone, I also wanted to learn about their process. I found a wealth of videos and blog posts by people with low vision who wanted to share their experience and educate others.
Finally, I conducted an informal competitive analysis of apps that low vision users use to navigate and apps that generally deal with travel and navigation.
Desk Research videos
Our team decided to focus on people who are newly living with low vision because of their existing familiarity with current technologies such as touchscreen phones. Given the limited time we had for this project, we felt it would be best to design for a use case that stayed as close as possible to our existing understanding of mobile technology.
I followed up our decision to focus on people newly living with low vision by conducting additional research into age-related vision loss. As I had already identified an older age range for this user group, I also took a deep dive into tourism for elderly and retired people, especially solo travel. We wanted our solution to be something our target user could start taking advantage of during their transition into living with low vision.
To further refine our persona, I referred to TripAdvisor's 2016 TripBarometer study to match Ben, our persona, with an existing TripAdvisor traveler profile.
I also synthesized the data from my research into low vision travel and elderly travel into a user journey map illustrating Ben's trip process, from research to the trip's conclusion.
Because of Ben's emphasis on independence, we decided to take a more hands-off approach, especially during the trip-taking phase. This approach also treats his disability with respect.
As Ben is still learning to live with low vision, it's up to us to surface the information he doesn't yet know he needs.
Our product is not about teaching Ben to use new technology, but augmenting his existing knowledge base with the benefits that voice interfaces afford.
Having a voice interface was a prerequisite for this project, so we spent most of our brainstorming sessions deciding how our product would exist and how broadly or deeply it would intersect with Ben's journey.
I wanted to leverage TripAdvisor's existing database of attractions and reviews and create a product for TripAdvisor's platform. To do so, I analyzed the existing TripAdvisor website/app to compile a list of features.
After eliminating features that were not relevant to the project's criteria, I labeled the remaining features based on how they might help us accomplish our goal.
I used this analysis to finalize which features we wanted to implement in our voice assistant app, and added others, such as the bathroom finder, to help us meet the project's criteria.
Designing a voice interface is difficult because of the mismatch between spoken and written language. I approached this challenge by designing a high-level map of the touchpoints identified in the journey map.
VisiTour Happy Path
I matched the features from the TripAdvisor analysis to this map and used it to write a very rough back-and-forth conversation between the voice assistant and the user, from activation to the end of the research phase (creating a trip list). This helped me get a feel for VisiTour's tone and gave me a reference point for the rest of the situation flows.
Having this conversation guide also helped me quickly start testing the conversation with my teammates.
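To illustrate the idea (this wasn't part of our deliverable), a happy-path conversation guide like ours can be modeled as a tiny state machine. The states and prompts below are made-up placeholders, not VisiTour's actual dialogue:

```python
# Minimal sketch of a happy-path dialogue flow as a state machine.
# State names and prompts are illustrative placeholders, not the
# actual VisiTour conversation design.

FLOW = {
    "activation": ("Hi, I'm VisiTour. Want to plan a trip?", "quiz"),
    "quiz": ("Let's find attractions you'll like. Indoors or outdoors?", "suggest"),
    "suggest": ("Here are three attractions you might enjoy. Save them?", "save_trip"),
    "save_trip": ("Saved! Your DayTrip is ready.", None),  # end of research phase
}

def run_happy_path(start="activation"):
    """Walk the happy path from activation to a saved DayTrip."""
    state, transcript = start, []
    while state is not None:
        prompt, next_state = FLOW[state]
        transcript.append(prompt)
        state = next_state
    return transcript
```

A nice side effect of writing the flow this way is that the transcript doubles as a script for Wizard-of-Oz testing.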
During this process, we also coined the term "DayTrip" for the saved list of attractions, emphasizing that these attractions were meant to be visited in a single day (though the user is not required to visit all of them). This naming also helped us meet the project's requirements for the course.
This process also helped me get familiar with TripAdvisor's language. Here is what we chose to define our terms:
Attractions: locations of interest
Restaurants: places to purchase food, not necessarily sit-down
DayTrip: a saved trip, to be executed in one day
I decided to include visual feedback for some portions of the experience. I reasoned that, because visual feedback is a standard feature in most mobile voice assistants, our users might be familiar with the interface and could use the graphical interface to perform simple terminal tasks such as "Start Trip" and "Repeat Message". This would reduce cognitive load and might make the app feel less intimidating.
I designed a wireframe to estimate how large the visual components might need to be in a speculative update to Android's Google Assistant GUI. I tested my designs using a Figma plugin called "Low Vision" on its medium and severe settings. This helped confirm that the visual component has some value but should not be the primary interface.
As this was an additional feature we wanted to experiment with, I decided that this was enough validation for the scope of this project.
Low vision variations
During this time, I also assisted our team's developer in building the prototype in Voiceflow.
Although user testing was not a requirement for this project, I decided to go ahead with it because we made a lot of assumptions during the design process (like the selfie feature) and tested almost exclusively with each other.
I wanted to make sure that the conversation logic made sense, especially to those who are not familiar with voice assistant products. I also wanted to find out how our voice assistant made the user feel. Our goal is to foster a sense of independence and empowerment, not smother the user with handholding.
Due to the constraints of the class, we were unable to test our product with our core users (45+ with low vision). Instead, I recruited classmates with a range of experience with voice assistants as stand-ins. This way, we could at least test the basic logic of the conversation and get a feel for the experience.
We conducted our usability test Wizard-of-Oz (WOZ) style, but without the smoke and mirrors: participants knew I was roleplaying the voice assistant. I created a deck that let us integrate some screens into the test and distracted participants from the fact that they were speaking to me rather than to a voice assistant.
Me roleplaying VisiTour during the usability test
This was a fun but challenging task because it was a last-minute improvisation: our test coincided with a Voiceflow update that broke our prototype. Luckily, I was very familiar with the conversation flow (because I made it) and was able to use the flowchart as a script.
The hardest part was deciding when I should accept utterances/commands or throw errors, especially because I, a human being, was able to better understand their intention than a voice assistant could. It was very tempting to just give them an answer, but I stuck to the script as much as possible to gather accurate feedback.
Although I can't fully know what it's like living without sight, I tried my best to understand the experience of people with visual impairment through secondary research.
We found that our participants all enjoyed using the quiz to generate attraction recommendations. They thought it was a fun way to create a foundational DayTrip, versus having to pick every location one by one. We also found that our participants were pleasantly surprised when we provided information about a location's accessibility. As non-disabled people, our participants did not think to ask how a bathroom might be handicap accessible. As we assumed Ben would not yet be used to thinking about his low vision, we believe this feature would improve his experience.
Unsurprisingly, our participants were sometimes confused at conversation points where the next action was not singular or immediately clear. For example, open-ended tasks like "plan a trip to Seattle" scored lower on the ease-of-use scale than tasks with clear end goals like "post a review on TripAdvisor". This was due to the broad language that could apply to the planning task, versus the more limited vocabulary of the review task.
Our feedback also indicated a lack of clarity about VisiTour's capabilities, such as the ability to locate bathrooms. According to one participant, "I wouldn't have known to ask it about bathrooms if you didn't ask me to do it". This could reflect a mismatch between a participant's understanding of travel apps and the requirements set forth by this assignment.
A potential solution to this issue is to design a better onboarding experience that clarifies VisiTour's level of involvement in the trip or VisiTour's features.
We wanted our user to focus on the trip and not have to spend too much time on their device. To do so, we implemented features such as quizzes to suggest pre-planned itineraries and integration with different apps to help them accomplish tasks quickly.
Using GPS, VisiTour can provide situationally relevant suggestions that low vision users may miss, such as when the user is near a popular selfie spot or a fun tourist activity.
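Under the hood, this kind of suggestion can boil down to a simple radius check against a list of points of interest. Here's a minimal sketch using the haversine formula; the POI data and the 100-meter threshold are assumptions for illustration, not values from our design:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_suggestions(user_lat, user_lon, pois, radius_m=100):
    """Return POIs (e.g. selfie spots) within radius_m of the user."""
    return [
        poi for poi in pois
        if haversine_m(user_lat, user_lon, poi["lat"], poi["lon"]) <= radius_m
    ]
```

In a real product, the check would run on location updates and feed matches to the dialogue layer as optional prompts.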
A pain point of voice interfaces is the time it takes to listen and ask. To help prevent frustration, we designed VisiTour to increase or reduce behaviors according to the user's reactions: if the user rejects offers to take a selfie multiple times, VisiTour stops asking in the future.
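One way to sketch this "back off after repeated rejections" behavior is a per-prompt rejection counter with a threshold. The threshold of three rejections is my own illustrative assumption, not a parameter from our design:

```python
class PromptThrottle:
    """Mute a prompt type (e.g. selfie offers) after repeated rejections.

    Illustrative sketch only; the threshold is an assumed value.
    """

    def __init__(self, max_rejections=3):
        self.max_rejections = max_rejections
        self.rejections = {}  # prompt type -> consecutive rejection count

    def record(self, prompt_type, accepted):
        """Update the counter after the user accepts or rejects an offer."""
        if accepted:
            self.rejections[prompt_type] = 0  # interest shown, reset
        else:
            self.rejections[prompt_type] = self.rejections.get(prompt_type, 0) + 1

    def should_offer(self, prompt_type):
        """Keep offering only while rejections stay under the threshold."""
        return self.rejections.get(prompt_type, 0) < self.max_rejections
```

Resetting the counter on acceptance lets the assistant re-learn interest later instead of silencing a prompt forever.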
VisiTour can act as a voice interface between the user and other apps like Google Maps and Uber, allowing them to leverage those services without needing to use the screen or exit VisiTour.
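Conceptually, this handoff is just intent routing: mapping a recognized voice intent to a link another app can open. The sketch below is a rough illustration; the Google Maps URL follows Google's documented cross-platform Maps URLs scheme, while the ride-hailing deep link is a hypothetical placeholder:

```python
# Sketch of routing a recognized voice intent to an external app.
# The "ride" URL scheme is hypothetical; a real integration would use
# the provider's documented deep links or APIs.

def route_intent(intent, params):
    """Map a voice intent to a link the OS can hand off to another app."""
    if intent == "navigate":
        # Google Maps cross-platform URL (documented Maps URLs scheme)
        return f"https://www.google.com/maps/dir/?api=1&destination={params['place']}"
    if intent == "ride":
        # Hypothetical ride-hailing deep link, for illustration only
        return f"ride://request?dropoff={params['place']}"
    raise ValueError(f"unknown intent: {intent}")
```

The point of the design is that the user never sees these links; VisiTour speaks, and the handoff happens behind the scenes.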
1. Design an onboarding system
Because our chosen user is still learning about their new condition, we also need to provide guidance about how to use VisiTour. Creating a clear and linear onboarding system that outlines VisiTour's capabilities would help the user get the most out of the app, even if they have to ask again later.
2. Bring the design into a standalone app
We chose to design our service as a voice assistant skill to help give us enough time to build the prototype deliverable for this project. However, after testing and revisiting our initial findings, I believe that VisiTour could deliver more value by being a standalone app that can control its display and save local data. This way, the user can download information about attractions before traveling to save mobile data.
3. Improve connection between visual and audio feedback through prompts
To provide better utility from the visual output, we need a smarter system whose spoken language prompts the user about where to tap on the screen. This would also make the layout clearer to users who are unfamiliar with it.
4. Integrate wearable tech for haptic feedback
To further our goal of reducing active screen time and immersing the user in their trip, I'd also like to integrate wearables like smartwatches into this ecosystem to surface helpful information even when the user isn't actively asking.
As the primary focus of this class was to teach us about a new input method (voice), our team approached this project as an experiment in learning the medium rather than a demonstration of our design expertise. Therefore, we hit a few stumbling blocks (aka learning experiences) along the way. Here are the main things I've learned from this process:
1. Voice design should start from the highest possible level
Conversations live differently in writing than when spoken. As voice designers, we need to anticipate the different ways a user might go about doing something, and it helps to start designing at the intention level (high) rather than the specific written-language level (low). This way, you can get the intended function down without getting stuck on the nitty-gritty words.
2. When WOZing, leave your ego at the door
Our usability test was supposed to take place on Voiceflow, but because our project coincided with a major internal update, our prototype went completely kaput the night before our scheduled tests. So I quickly threw together a deck with screen mockups, put on my best Alexa voice, and became the prototype by reading from our flowchart. It was difficult to "throw errors" when I (a human being) understood their intention, but I stuck to the script and was able to identify several points of ambiguity in our design that needed fixing.
3. Voice systems should not be designed alone
Voice interfaces should be designed out loud. No matter how good I thought my inner conversation sounded, I always found errors when I spoke it out loud. While no voice system can account for every possible utterance, it's important to have multiple perspectives from the get go so that we can cover as much ground as possible when mapping out the conversations.
4. Start testing from day 1!
As I said before, conversations are hard to make up. I had to make major edits to significant portions of the conversation flow after testing it out with my teammates because of unanticipated answers or logical jumps. This is also an emphasis on reflection point 3: work with others so you can start testing right away!
As hectic as this class was, I thoroughly enjoyed it and feel like I came out a stronger designer than ever. I also got to hone my leadership skills (as the only graduate/design student in my group) to delegate work and teach my teammates about design thinking.