Google Now for iOS 

UX Redesign & Case Study

A student project in full collaboration with Jenessa Johnson and Halie Schwartz.

Time Frame

15 weeks

Role

Ideation, User Research, Strategy, Art Direction, Copywriting, Project Management

Tools

InDesign, Illustrator, Photoshop, After Effects

Our approach to this project centered on expanding our UX process skills: ideating in new ways, understanding our users, pitching and articulating our ideas to others, and documenting our workflow. We read up on current and emerging technologies while looking for unmet user needs. Guiding our work was the desire to keep the project forward-thinking, with flexible deliverables that continued to expand our thought process and workflow.

Why is Google Now for iOS the right platform?

We didn’t want to work on just another app; we wanted to help create and improve solutions that offered more than what a user could input. Working within Google’s framework gave us the opportunity to connect with actionable data that could change how people live.

Research showed us that most users rely on only a few apps: roughly 85% of app usage is concentrated in about 3-5 apps,* and some pundits are even going so far as to predict the death of apps.* Machine learning is being employed by nearly every major platform, with varying success.

Google Now for iOS has tremendous potential

Integrated, seamless tools that don't involve user input, but passively pull and gather data

An ecosystem of applications that many users already rely on brings familiarity and loyalty, with collaborative tools already in use

Google has the access and resources to deepen machine learning and to add emerging, cutting-edge user interface methods

Google Now for iOS has significant room for improvement.

We targeted this problem space because it allowed us to explore and refine aspects of an existing platform, leveraging its current functions to prototype a better experience for users. We asked a number of questions in the early phases to focus on the exact functions to prototype.

In the early brainstorming phases we threw any and every idea we had out there. We thought about goal tracking, sharing of resources, professional profiles, customization, security, and more. After all of our research we narrowed our scope with these proposed changes to Google Now for iOS: 

Tag On 

an input field for adding and customizing desired content

Onboarding 

in-flow peeks at core interactions and functions

Suggested tasks/actions

based on calendar events and reminders

VUI

option of a voice user interface for core tasks

User input

process for defining how prominent you want core interests to be in your feed

Predictive assistance

drawing on passive user data to offer actionable suggestions

Based on our research and our understanding of Google’s basic commercial needs, we came to some preliminary hypotheses to explore.

We propose that if Google implemented the following changes to Google Now for iOS, it could see significant returns for both the user and Google:

Users will be more likely to have a satisfying experience and access more of Google Now’s benefits when given forms of user input. This control and customization of content can enhance and supplement the predictive content already served. In other words, ‘teaching’ Google Now directly what you want, in addition to the passive learning already occurring, increases beneficial returns.

A more in-context onboarding process for a typical first time user flow will result in a more effective communication of the benefits of Google Now.

Targeting functionality that is only possible through applied artificial intelligence, and offering a natural voice user interface option, allows for customized virtual assistance.

So the Process Goes

Our work involved a multi-step approach to addressing some of the core user issues for Google Now on iOS. We took insights from repeated pain points discussed in user interviews and cross-referenced them with the goals and use-case scenarios of our personas. From there we brainstormed, developed, prototyped, and tested ideas around onboarding, customization, and alternatives to the traditional screen-only interface. Along the journey we relied on information and guidance from professionals in UX and digital design to help shape our process as well as our ideas.

“Test your idea, go with your gut, accumulate knowledge”

-Rebecca Shapiro, Lead UX Designer at Nordstrom

Initial Survey

To get some broad, preliminary guiding data we sent out a survey on patterns of customization and user interface. We were primarily interested in user motivations for customization and responses to hypothetical scenarios. We learned the following from the results.

Most people who changed the settings were interested in receiving fewer notifications or none at all.

When asked to rate their level of comfort with technology regarding privacy on a scale of 1-10, respondents averaged about 7, meaning those surveyed were fairly comfortable with technology and did not have major privacy concerns.

People had varied ways of prompting a search with voice input. When asked what they would say out loud to their phone to ask for help making a specific dinner decision, no two respondents said the same thing. Responses ranged from “delivery near me” and “What's an alternative to the usual?” to “What should I eat tonight?” and “Feed me”.

The majority of the respondents were more interested in visual prompts than audio prompts.

The main thing we learned from the survey is that many people are still most comfortable with a GUI, and we should keep that in mind in our design solutions. An obvious answer, but voice inputs are incredibly diverse, and designing around them is a complicated problem.

Main Challenges

We knew that in order to narrow our scope we needed to talk to users and gain insights. Our goal was to ask open-ended questions and walk away with new insights and ideas about who else we could talk to. To do this, we researched how to talk to people in ways that made them feel comfortable opening up and providing accurate information. We wanted our conversations to have structure, but to stay flexible enough to encourage dialogue. Interviews with users validated the idea of enhancing an existing platform that would help them organize their professional and personal goals in their day-to-day routine.

The challenge of having an open ended project is figuring out how to focus in and make decisions.

Interview Process: One interviewer leading the questions with an interviewee. One recorder taking notes on the interviewee’s verbal responses. One recorder taking notes on the follow-up questions and the interviewee’s clarification questions, all while recording the interview through video or audio.

Persona Development

Our research identified two key users.

Persona 1: this person is actively looking to get started in their career, but they are still working on making the right connections and getting more experience.

Persona 2: this person is a career-focused multitasker who is motivated to keep up with multiple responsibilities and changing industry standards.

 

These two personas use the Google Now for iOS app to push themselves further in their careers and stay on top of their goals. They're looking to network, stay ahead of trends, and continue striving to be efficient in their day-to-day routines.

A key insight based on Persona 1’s need for more focus is that they are really looking for the “right” info, not just more info, and that they would likely benefit from customization.

When we walked through a typical day we saw connections between different times of day and the persona's goals. Based on those goals, we created several task flows. After our first round of testing we learned that, in order to better serve the user's needs, we needed to jump back a few steps. Through a user's first-time experience, Google Now will begin to learn the user’s lifestyle and interests.

 

 

 

We created a new task flow for customizing content and worked through several renditions of onboarding. We sent out a pre-survey for individuals to fill out, and through that pre-screening we selected candidates who fit our demographic. We then had them walk through a paper prototype of the onboarding process.

Heuristic Analysis

We walked through the Google Now iOS onboarding process and the initial flow for first-time use. We took screenshots of what we encountered and rated the positive and negative functionality against Nielsen’s ten usability heuristics.

After evaluating these areas, we looked over the major pain points in Google Now for iOS and discussed the opportunities an update could address. We then compared these insights with our users’ goals to develop a better system for Google Now.

Competitive Analysis

To get a better sense of what competitor applications provide with their onboarding and customization flows, we analyzed a number of popular and relevant apps that offer various onboarding and user experiences, including Pocket, Wanelo, Songza, Google Play, Spotify, Wildcard, Feedly, Flipboard, and more. We used these experiences as inspiration as we began to build out our wireframes.

Below is an in-depth look at the areas we focused on for the preliminary round of prototyping.

Customization

How do we give users the content they want as fast as possible? We want to provide enough suggested categories to get them started, but not so many that we overwhelm them.

After brainstorming multiple ideas, we decided to focus on pushing users into a content feed as quickly as possible while still giving them control over what they see. Creating a feed that fits users' needs encourages repeat visits, which in turn give the system the feedback it needs to further refine the feed.

Solution: allow users to give the system an idea of how important different categories are to them, with a sliding bar that determines each category's prevalence in the feed.
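Purely as an illustration of the idea, here is a minimal SwiftUI sketch of what such a sliding-bar input could look like. The CategoryPreference model, the weight property, and the example categories are our own hypothetical names, not part of Google Now.

import SwiftUI

// Hypothetical model for a customizable content category.
struct CategoryPreference: Identifiable {
    let id = UUID()
    let name: String
    var weight: Double   // 0 = hide entirely, 1 = show as often as possible
}

struct CategoryWeightView: View {
    @State private var categories = [
        CategoryPreference(name: "Local Events", weight: 0.5),
        CategoryPreference(name: "Sports", weight: 0.5),
        CategoryPreference(name: "Commute", weight: 0.5)
    ]

    var body: some View {
        List($categories) { $category in
            VStack(alignment: .leading) {
                Text(category.name)
                // The slider value would feed into how often this category appears in the feed.
                Slider(value: $category.weight, in: 0...1)
            }
        }
    }
}

The point of the sketch is the single control per category: one slider answers "how much of this do I want?" without forcing users through a long settings flow.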

Refinement

In refining our category selections, we conducted paper card sorting and reached out to users for feedback about what made sense to include. We learned in the initial phase of user testing that participants needed to be prescreened to fit the demographic of this project, and that our card-sorting prompts needed refinement so that we were asking accurately worded questions and gathering useful information.

Users currently don't get the option to customize their feed as quickly as they would like. They open Google Now on iOS, it takes a while for the custom feed to take effect, and the options are limited (unless a particular type of content happens to appear, such as information on local events, there is no way to give positive or negative feedback on that type of content).

 

While thinking about customization, we observed that users want specific information within a topic: a user may not be interested in a big-picture category such as sports, but instead wants to follow specific teams or players.

Providing a list of every interest a user could want to see would be overwhelming.

So how do we make sure that we are providing content that fits our unique users? Adding an icon that is always present allows users to become comfortable with inputting specific fields of interest. Having the option of setting particular interests within the context of the content allows users a natural flow of learning and information.

Icons

We wanted to create more space by turning the Weather and Travel cards, along with our new Tag On function, into icons. These icons sit just below the Google search bar, giving users a consistent resource spot while economizing on space. Tapping the weather icon displays local weather information, while tapping the travel icon displays the predictive travel assistance Google Now already provides.


Swiping, the More Options Menu Icon, and System Feedback

The current More Options Menu icon is distracting, and its function is not obvious enough for users. There is no easy way to give the system feedback about what content a user may like or dislike (it requires maneuvering your finger to the tiny More Options Menu icon, reading a somewhat lengthy dropdown list, and making selections).

 

We looked at the options in the current More Options menu on content cards and discussed the importance of each one to the user and to the system.

At what point in a user's task flow are people most likely to give meaningful feedback about the content they are being served?

How can we create easy ways to give feedback to the system about content?

With a swiping motion, can the user make multiple selections mid-swipe?

We propose two main methods of “teaching” Google Now what content the user wants, adjusting the two main feedback locations to better fit the context of the user journey:

Removing the More Options Menu icon from the content card and adding it directly to the article would allow the user to give more specific feedback on the content.

Adding NUI and clear system feedback through swiping: swiping left lets the user throw the card away, signaling negative feedback about the content, while swiping right lets the user reinforce that the content was appropriate, signaling positive feedback.

Google Now's current menu drop down method

This new menu allows for feedback to the system and a less overwhelming user experience

We decided to minimize the types of feedback tied to swiping; having multiple options was difficult in terms of dexterity (replicating the full More Options menu in a NUI was not possible). As users become more comfortable with pressure-based gestures, this is an area we could revisit in further iterations.
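For illustration only, a minimal SwiftUI sketch of the swipe interaction described above, assuming a simple card view. The onDismiss and onLike callbacks are hypothetical names for whatever feedback signals the system would record; the 100-point threshold is an arbitrary example value.

import SwiftUI

// A feed card that can be swiped left to dismiss (negative signal)
// or swiped right to reinforce (positive signal).
struct FeedCardView: View {
    let title: String
    var onDismiss: () -> Void   // "throw the card away"
    var onLike: () -> Void      // "show me more like this"

    @State private var offset: CGFloat = 0

    var body: some View {
        Text(title)
            .padding()
            .frame(maxWidth: .infinity)
            .background(Color.gray.opacity(0.15))
            .cornerRadius(12)
            .offset(x: offset)
            .gesture(
                DragGesture()
                    .onChanged { value in offset = value.translation.width }
                    .onEnded { value in
                        if value.translation.width < -100 {
                            onDismiss()
                        } else if value.translation.width > 100 {
                            onLike()
                        }
                        // Snap the card back to its resting position.
                        withAnimation { offset = 0 }
                    }
            )
    }
}

Keeping only two outcomes per swipe reflects the dexterity finding above: one gesture, one unambiguous signal.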

 

Refinement

We were surprised to get repeated feedback from users that they did not understand or enjoy a More Options Menu Icon on the cards in the feed, so we removed it entirely from the card content.

We think the More Options Menu icon is important for giving the system feedback, but we decided to keep it only on the article view and to change the options it contains.

 

What we didn’t do and why

One of the critical things we heard from users was that the lengthy list in the More Options dropdown was too much to read, and the differences between the options were difficult to decipher (such as “don’t show me content like this” versus “don’t show me content from this source”). Some of the options were unnecessary or redundant, such as “Edit URL” and “Copy URL”, so we adjusted the options within the dropdown significantly.

Onboarding

Currently the onboarding process is lengthy, and afterward users are pushed into a generic feed. The onboarding information is sometimes confusing, including static illustrations that don’t show how to use or interact with the information. A slideshow approach to onboarding isn’t as effective as in-context information.

How do we provide an easy discovery process for users that is tiered and presented in a way that seems appropriate?

What are the ways that they might discover core functions on their own?

 

Where would users likely make mistakes, and how could we intervene? Peekaboo: the first-time user gets a tease, maybe a jiggle, a reveal, etc.

Will a “character” that guides a user through the onboarding process be successful?

With our users in mind we made the decision to use integrated, natural language to guide users to take full advantage of provided features. We included ‘peeks’ of functionality through in-feed animations (such as orienting users on how to swipe content).

Through subtle animations and branded colors appropriately integrated into the feed, we can orient users to Google Now’s main functions. Peeks of functions use animation to attract users’ attention, but are not so distracting that users can’t ignore them if needed.

Refinement

We tried adding a pulsing “G” interspersed within the feed (a kind of onboarding entity) to call out functions and draw users in. When the pulsing G appeared on a content card, users thought it was sponsored content. Our solution was to include the pulsing G only on onboarding cards, never on content cards.
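A rough sketch, purely for illustration, of what such a pulsing indicator could look like in SwiftUI; the view name and timing values are our own assumptions, not Google’s.

import SwiftUI

// A "G" badge that gently pulses to draw attention to an onboarding card.
struct PulsingOnboardingBadge: View {
    @State private var pulsing = false

    var body: some View {
        Text("G")
            .font(.title)
            .fontWeight(.bold)
            .foregroundColor(.blue)
            .scaleEffect(pulsing ? 1.2 : 1.0)
            .opacity(pulsing ? 1.0 : 0.6)
            .onAppear {
                // Loop a subtle scale-and-fade pulse for as long as the badge is visible.
                withAnimation(.easeInOut(duration: 0.8).repeatForever(autoreverses: true)) {
                    pulsing = true
                }
            }
    }
}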

Our hypothesis was that the onboarding process should not rely on pop-ups but instead use specific cards integrated within the feed. The spaces in between cards create a neutral teaching space for written information or suggestions.

Everything should be clickable within the onboarding and there should be an option offered to learn more.

A full personality in onboarding was not well received in interviews with users. They responded better to a more neutral, information delivery approach.

VUI Predictive Assistance User Journey Video

We mapped out an average day for our main persona. We identified areas where she would likely interact with her phone, pinpointing opportunities and pain points. We noticed that she is constantly multitasking, which is an area where voice user interface could really be used to improve her life. Google Now’s existing predictive technology that uses machine learning to provide relevant content and updates can be extended to offer smart, customized, in-context suggestions. We took a day-in-the-life approach to showcase the possible VUI and artificial intelligence possibilities of Google Now. The resulting video shows interaction with the prototype in a relatable visual narrative.

We thought about several phrasing options the persona could use to input information into the VUI system in a way that would be natural and plausible for the system to understand. We know users are still getting used to VUIs and don’t have the same level of confidence with them as they do with GUIs, so we combined the two for many of the VUI tasks, providing the persona with VUI feedback as well as contextually relevant GUI confirmations.

In-home device connectivity is quickly becoming a reality, and we wanted to incorporate “hands free” appropriate experiences. We got feedback on our script from a professional working on the Amazon Echo about what would be necessary for a Google product VUI on iOS, which helped us keep in mind that a wake word is likely to be necessary. It is difficult for us to know the exact details of the privacy and legal restrictions that would affect design for a cross-platform integration, but we kept in mind that our design ideas are contingent on those issues.

Final takeaways

Technology is moving at such a rapid rate that it will be a continual challenge to stay ahead of the curve. One of the most interesting (and at times disheartening) issues we ran into during the course of the project was that many of the design solutions we were coming to were simultaneously being enacted in Google Now iOS updates.

Examples of Google updates during our process

Added a suggestion bar to individual cards for “more content like this”

Language of the Meatball menu was updated

Added location, time, and event reminders


Google Calendar added a function called “Goals,” which uses artificial intelligence to find times in your calendar to work on a project or goal; this syncs to Google Now.

The subline at the bottom of every card now includes: ‘You’ve shown interest in…’

There may not be one path to follow when working on a UX project; what is right for one project will not work for every project. During this work we got the chance to grow our library of research and UX process tools. Interviews with working professionals and users gave us incredible insights and often sent us back to square one to rethink our previous conclusions. These are learning experiences we look forward to bringing to future projects. Overall, this project taught us to think about how to deliver the greatest return with the least user input.