When it comes to building mobile apps, there’s a lot you can learn from one of the biggest players in the game. In the early 2000s, the introduction of web apps made large-scale user data easily available, so Google’s human-computer interaction researchers put their formidable brains together and created the Google HEART framework. They wanted to bridge the gap between user-centered and data-driven metrics; to shift from business-centered goals to a focus on user experience.
Up until then, user experience data had come mainly from usability studies, but the web made data from real users far easier to collect and measure at scale. Their HEART UX metrics approach, while no longer new, is grounded in simplicity and in knowing how to interpret quantitative metrics elegantly.
In this article, we’ve adapted and updated their approach so that owners of mobile apps can confidently track their progress with a focus on the right metrics for the right reasons.
Measuring User Experience Straight From the HEART
The Google HEART framework was crafted by researchers to measure the pulse of the user experience. It incorporates attitudinal and behavioral metrics, combining quantitative and qualitative data with human insight to really understand users. When you understand users’ attitudes and thinking, they reasoned, you’ll be better able to predict patterns in behavior and iterate on the experience.
The Google HEART framework measures signals for Happiness, Engagement, Activation, Retention, and Task success. The specific figures you’ll obtain in each category must be relevant to your app’s purpose and goals, but we’ll give you some general ideas to get you started. These metrics can be applied at either the product level or the feature level, depending on your app needs.
Getting Started With the Goals-Signals-Metrics Process
How do you know which combination of metrics is most relevant to your app? How do you know which questions to ask? If you don’t know where to start, Google’s Goals-Signals-Metrics process will guide you through your strategy development step by step. Let’s do some procedural goal setting.
Step One: Goals
Choosing user-centered metrics is just one part of a thoughtful UX strategy. First, you must identify the goals of your product or feature. In order for you to get value from your data, you’re going to choose metrics that will be directly related to your goals. That way, your progress will be measurable and your data will be actionable.
Step Two: Signals
How will you know if you’ve achieved your goal? Here’s where you identify which signals will demonstrate success. Decide how success (or failure) will be manifested in user behavior or attitudes. Some of Google’s favorite signals include logs and surveys.
Choose signals that are sensitive and specific to your goal. You want the needle to move only when the user experience changes, not for unrelated reasons. Not sure how to track a certain goal? Try thinking about the opposite of what you want. Sometimes failure is easier to identify than success (such as task abandonment, for instance).
Step Three: Metrics
Now that you have chosen your signals, you’ll have the direction you need to choose specific, measurable metrics that can be tracked over time on a dashboard.
Some tips on metrics from Google: raw numbers will grow as your user base does, so you’ll need to normalize them. Stick to ratios, percentages, or averages per user rather than overall counts. In addition, if you’re comparing your app to competitors, find out what their standards of measurement are and incorporate those as well.
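As a concrete illustration, here is a minimal Python sketch of that normalization step. The event log and the `uploads_per_user` helper are hypothetical, not part of any Google or Instabug API; the point is simply that an average per active user stays comparable as your user base grows, while a raw total does not.

```python
from collections import Counter

# Hypothetical raw event log: one (user_id, event_name) entry per action.
events = [
    ("u1", "photo_upload"), ("u1", "photo_upload"), ("u1", "photo_upload"),
    ("u2", "photo_upload"),
    ("u3", "photo_upload"), ("u3", "photo_upload"),
]

def uploads_per_user(events):
    """Normalize a raw event count into an average per active user."""
    counts = Counter(user for user, event in events if event == "photo_upload")
    return sum(counts.values()) / len(counts) if counts else 0.0

# 6 uploads across 3 active users -> 2.0 uploads per user
print(uploads_per_user(events))  # -> 2.0
```

The same shape works for any of the raw counts mentioned in this article: divide by the number of users who generated them before putting the figure on a dashboard.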
The Structure of the Google HEART Framework
Now that you know more about choosing metrics, let’s hit each one of the five categories in the sections below. These groups are comprehensive enough to cover all your user experience needs while being flexible enough to accommodate new methodologies as they develop.
Happiness
The Happiness category encompasses your users’ attitudes about your app experience. You’ll want to collect their feelings about satisfaction, visual appeal, the likelihood of recommendation, and ease of use. You can track these attitudes with periodic in-app surveys and monitor your progress over time.
Here are some signals that would fit into the Happiness category:
- NPS (Net Promoter Score) surveys: Learn how likely your users are to recommend your app to their friends. This common one- or two-question survey is simple but can tell you a surprising amount.
- Custom in-app surveys: Send your users periodic short surveys about their experiences so you learn how you’re doing straight from them. For best results, keep questions clear, concise, and strategically targeted to the right user segments.
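To make the NPS item above concrete, here is a short Python sketch of the standard Net Promoter Score calculation: on the usual 0-10 scale, respondents scoring 9-10 count as promoters and 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. The response data is invented for illustration.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical responses to "How likely are you to recommend this app?"
responses = [10, 9, 9, 8, 7, 6, 3]
# 3 promoters, 2 detractors, 7 responses
print(round(nps(responses), 1))  # -> 14.3
```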
Once you’ve collected your data, kick-start your analysis by looking for the root cause of any issue. Quantitative metrics aren’t always cut and dried. For example, the Google HEART authors mention a time when they updated an internal employee homepage and subsequently saw a decline in user satisfaction numbers. Rather than acting, they waited, and their numbers not only recovered but rose. The dip was caused by change aversion rather than bad design. Always look for the full picture.
Engagement
Engagement can’t be defined with a single number, but gathering behavioral metrics can help you gauge your users’ level of involvement with your mobile app. Try measuring the frequency or intensity of interaction with your app over a specified period of time. Remember to report totals as averages per user rather than raw counts, so your data isn’t skewed by a few power users or by overall growth in your user base.
Examples of Engagement metrics:
- Visits per week
- Comments per day
- Photos uploaded
- Levels played per session
- Virality (social involvement, e.g. inviting friends or sending items)
Engagement metrics should be specific to your app and business goals. “Seven-day active users” was an industry standard, but Google dumped it for Gmail, reasoning that regular email users would be logging in far more often than once a week. Instead, they counted how many users logged in five or more out of every seven days. This was far more useful for their situation and also successfully predicted long-term retention patterns.
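A minimal Python sketch of a Gmail-style “n of 7 days” metric might look like the following. The login log and function name are hypothetical; the idea is to count the share of users who were active on at least n of the last seven days.

```python
from datetime import date, timedelta

def n_of_7_active(login_days_by_user, window_end, n=5):
    """Fraction of users active on at least n of the last 7 days."""
    if not login_days_by_user:
        return 0.0
    window = {window_end - timedelta(days=i) for i in range(7)}
    active = sum(
        1 for days in login_days_by_user.values()
        if len(days & window) >= n
    )
    return active / len(login_days_by_user)

# Hypothetical login log: user -> set of dates they opened the app
end = date(2024, 1, 7)
logins = {
    "u1": {date(2024, 1, d) for d in (1, 2, 3, 4, 5, 6)},  # 6 of 7 days
    "u2": {date(2024, 1, d) for d in (1, 4)},              # 2 of 7 days
}
print(n_of_7_active(logins, end))  # -> 0.5
```

Tuning `n` to your app’s natural usage rhythm is exactly the judgment call Google made for Gmail.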
Adoption and Retention
Adoption and retention metrics are fairly straightforward but should be combined to get the full picture. Collecting figures for both will help you separate new users from returning ones and predict which ones will stick around. After all, it costs far more to acquire a new customer than to keep an existing one. Knowing what your longtime users have in common will help you understand where people find value in your application.
When measuring retention, decide what counts as valid usage for your app, depending on your goals: should a person be counted as a returning user simply for logging in? Or must they complete an in-app task for it to count? That’s up to you. The ideal interval for measuring retention also varies: for some apps it might be week to week, while for others it might be monthly or quarterly.
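One way to keep that definition explicit is to make “valid usage” a pluggable predicate. This Python sketch uses invented session records and names; it shows how a loose definition (any login) and a strict one (a completed in-app task) produce different returning-user counts from the same data.

```python
def returning_users(sessions, is_valid_usage):
    """Distinct returning users under a configurable 'valid usage' rule."""
    return {s["user"] for s in sessions if is_valid_usage(s)}

# Hypothetical session records from users who came back this period
sessions = [
    {"user": "u1", "event": "login"},
    {"user": "u2", "event": "login"},
    {"user": "u2", "event": "completed_task"},
]

# Loose definition: any login counts as a return
print(len(returning_users(sessions, lambda s: s["event"] == "login")))  # -> 2
# Strict definition: only a completed in-app task counts
print(len(returning_users(sessions, lambda s: s["event"] == "completed_task")))  # -> 1
```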
Remember that retention isn’t just a product-level metric; use it to keep your eyes on how your users are responding to changes within your app. If you’re releasing a new feature or redesigning your experience, retention will help you decide if the updates make the cut.
Here are a couple of examples to help inspire you:
- How many users have downloaded your app within the last week/month/year?
- How many users have downloaded your latest update or tried out your newest feature?
- Of your new accounts, how many of them are using the app regularly a month later? Three months later? Longer?
- Say you’ve implemented a new feature. How many people who started using it are still using it a week later?
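The last question in the list above could be computed like this. It is a sketch with invented usage data, under the assumption that a feature adopter counts as retained if they used the feature again at least seven days after first trying it.

```python
def week1_retention(first_use, last_use):
    """Share of feature adopters still using it 7+ days after first use.

    first_use / last_use: dicts mapping user -> day index (since launch)
    of their first and most recent use of the feature.
    """
    if not first_use:
        return 0.0
    retained = {
        u for u in first_use
        if last_use.get(u, first_use[u]) - first_use[u] >= 7
    }
    return len(retained) / len(first_use)

# Hypothetical usage days, counted from the feature's launch
first = {"u1": 0, "u2": 1, "u3": 2}
last = {"u1": 9, "u2": 3}  # u3 never came back
print(round(week1_retention(first, last), 2))  # 1 of 3 adopters -> 0.33
```

Swapping the seven-day threshold for a month or a quarter gives the longer-horizon cohort questions above.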
Google Finance experienced a spike in signups during the stock market crisis of 2008. Researchers used independent adoption and retention numbers to identify where the traffic was coming from: were these anxious existing users checking in on their investments, or brand-new users suddenly curious about stocks after reading the news? Knowing your users means knowing your opportunities. Google Finance used these insights to optimize the experience for different types of users.
Task Success
The next category measures how well people are able to use your app. Task success encompasses multiple behavioral metrics. Google HEART gives us the three “E”s of task success: efficiency (time to complete a task), effectiveness (percentage of tasks completed), and error rate (use of undo/erase functions, or failure to complete a task due to user error).
Here are some examples of tasks you can use with the three Es:
- Signing up
- Creating a profile
- Searching for something
- Purchasing something
- Using a feature
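Here is a simple Python sketch of the three E’s computed from task-attempt logs. The record fields and function name are assumptions for illustration, not an Instabug or Google API; efficiency is averaged over completed attempts only, while effectiveness and error rate cover all attempts.

```python
def task_success_metrics(attempts):
    """Compute the three E's from task-attempt records.

    Each attempt: {"seconds": float, "completed": bool, "errors": int}.
    """
    n = len(attempts)
    completed = [a for a in attempts if a["completed"]]
    return {
        # Efficiency: average time to complete (completed attempts only)
        "avg_seconds": sum(a["seconds"] for a in completed) / len(completed),
        # Effectiveness: share of attempts that were completed
        "completion_rate": len(completed) / n,
        # Error rate: average error actions (e.g. undo) per attempt
        "errors_per_attempt": sum(a["errors"] for a in attempts) / n,
    }

# Hypothetical attempts at a signup flow
attempts = [
    {"seconds": 40.0, "completed": True, "errors": 0},
    {"seconds": 65.0, "completed": True, "errors": 2},
    {"seconds": 20.0, "completed": False, "errors": 1},
]
m = task_success_metrics(attempts)
print(m["avg_seconds"], m["completion_rate"])  # -> 52.5, two thirds completed
```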
Task success measurement was a major influence on the design of the Google Maps search box. When Google Maps launched, it used two boxes: one for “what” (movie theater, sushi, etc.) and a second for “where” (NYC, Mongolia, etc.). During A/B testing, the team tried a single search box to see whether users would adapt their search strategy and become more efficient. The single search box you see in Google Maps today was implemented after testers responded positively to the change and showed that a single field could be quick and intuitive.
Wrapping It Up
Measuring the user experience can be a challenging area of research, but it doesn’t have to be. Methodically choosing goals, signals, and metrics will set you on the right path to getting clear and actionable data. Making sure you have metrics for all of the Google HEART framework categories will ensure that you’re getting a well-rounded look at your user experience. This methodology has been used successfully by Google for almost 10 years—and it’s a powerful, timeless addition to your mobile app UX strategy.