When it comes to mobile apps, it’s tempting to look only for big wins. Growth teams hunt for hacks that boost retention 5%, 10%, or even 15%, and anything less is often deemed a waste of time. But there are only so many big swings at retention you can take. And if your retention numbers aren’t dangerously low, no single feature release or UI element will flatten that retention curve.
The closer you get to great retention, the harder it is to find anything that will bump the number up further. That’s why at Apptimize, we do sweat the small stuff. Many of the most successful mobile apps, such as Glassdoor, Vevo, Glide, and Strava, run growth tests to find even the small wins. They look for every possible opportunity to increase retention by even a single percentage point, because they know those adjustments add up over time. Kevin Li, formerly of Yahoo Growth, swears by it:
“If there’s one takeaway it’s just that it’s okay to do small wins. Small wins are good, they will compound. If you’re doing it right the end result will be massive.”
So before your team brushes off testing as not effective enough, show them these huge upsides.
The snowball effect
Growth team members Sergei Sorokin and Kevin Li told us that they celebrated all the small wins at Yahoo, because they had watched those wins add up over time. Here’s the result of the compounding growth of over 120 tests across 10 weeks.
After just one test, the lift was small. But after they ran growth test after growth test, 122 in all, the combined results boosted CTR by 1,000%. Running growth experiments is the easiest way to continually lift that retention curve. Changing the copy of your CTA or tweaking your onboarding flow might only give 2-3% gains, but it’s not the individual experiments that matter; it’s the compounding gains. So if you’re having a hard time getting your product team to buy into testing, show them the long-term impact on real numbers.
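The arithmetic behind that compounding is worth spelling out. As a rough sketch (the 2% per-test figure below is an illustrative assumption, not Yahoo’s actual per-test lift), small wins multiply rather than add:

```python
# Small per-test lifts compound multiplicatively across sequential tests.
# The 2% per-test win is an illustrative assumption, not Yahoo's real number.
def compounded_lift(per_test_lift: float, num_tests: int) -> float:
    """Total multiplier on the baseline metric after num_tests winning tests."""
    return (1 + per_test_lift) ** num_tests

total = compounded_lift(0.02, 122)  # 2% average win, 122 tests
print(f"{total:.1f}x baseline, a {(total - 1) * 100:.0f}% lift")
```

A 2% average win repeated 122 times works out to roughly an 11x multiplier on the baseline, which is right in the neighborhood of a 1,000% lift.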
The growth team at Appcues created a free retention calculator that lets teams see the impact on their bottom line. By plugging in their current retention numbers, they’ll see how even a small lift affects MRR in the long run. Here’s what a compounded 10% lift would mean with some sample data:
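The calculator’s internals aren’t published, but a back-of-the-envelope version is easy to sketch. Everything below (cohort size, ARPU, retention rates) is made-up sample data, and retention is simplified to a flat month-over-month rate:

```python
# Back-of-the-envelope sketch of how a retention lift flows into MRR.
# All inputs are made-up sample numbers, not Appcues' actual model.
def projected_mrr(new_users_per_month: int, monthly_retention: float,
                  arpu: float, months: int) -> float:
    """MRR after `months`, assuming a fixed monthly cohort intake and a flat
    month-over-month retention rate applied to each surviving cohort."""
    mrr = 0.0
    for cohort_age in range(months):
        # Each past cohort still contributes ARPU per retained user.
        mrr += new_users_per_month * (monthly_retention ** cohort_age) * arpu
    return mrr

base = projected_mrr(1000, 0.60, 10, 12)
lifted = projected_mrr(1000, 0.66, 10, 12)  # a 10% relative retention lift
print(f"base ${base:,.0f}  lifted ${lifted:,.0f}  (+{(lifted / base - 1) * 100:.0f}%)")
```

With these sample inputs, a 10% relative lift in monthly retention (0.60 to 0.66) turns into roughly a 17% lift in MRR after a year, because each cohort survives longer and the gains stack across every cohort you acquire.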
Reap double the benefits
Testing doesn’t just improve your retention; it also helps you get to know your users better. Every growth test, whether you’re testing the copy on your landing page or changing your sign-up CTA, gives you more insight into what your users like and don’t like. You’re not only looking at users’ past behaviors on your analytics dashboard; you’re testing the potential for new behaviors. This approach lets you make the least risky, most data-informed product decisions possible.
The tough part is knowing where to start when your team doesn’t have any testing experience. Most people will turn to “best practices” they find across the web. But every app is different, so the most successful A/B test for your app won’t be the same as for another.
Instead, form your hypotheses for your tests based on your own analytics. Begin by looking at your overall N-day retention, where you’ll be able to locate some problem areas. Most mobile apps lose the majority of their users within the first few days, so that’s often a great place to start.
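If you want to compute N-day retention yourself rather than read it off a dashboard, the core calculation is simple. This is a minimal sketch with made-up data shapes; in practice you’d pull signup and activity events from your analytics store:

```python
from datetime import date

# Minimal sketch of N-day retention; the data shapes and users are made up.
def n_day_retention(signups: dict, activity: dict, n: int) -> float:
    """Fraction of signed-up users who were active exactly n days after signup."""
    retained = sum(
        1
        for user, signed_up in signups.items()
        if any((day - signed_up).days == n for day in activity.get(user, []))
    )
    return retained / len(signups)

signups = {"ana": date(2023, 1, 1), "ben": date(2023, 1, 1), "cy": date(2023, 1, 2)}
activity = {"ana": [date(2023, 1, 2)], "ben": [date(2023, 1, 3)]}
print(n_day_retention(signups, activity, 1))  # only "ana" returned on day 1 -> 1/3
```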
Let’s say you look at this chart and think that the first drop, to just under 40%, isn’t too bad. Maybe you have a freemium music app, so the barrier to entry isn’t too high. But the drop on day two or three is more concerning. From there, use Compass to see which events are highly predictive of retention in your time range.
In this example, you now know that people who post a comment in the community are likely to see value and stick around. So your growth tests might look something like this:
- Try including “how to post a comment” in the onboarding workflow.
- Try auto-posting a comment in the community as soon as users sign up.
- Send a lifecycle email to each user who joins the community, explaining how to post a comment.
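Whichever of these tests you run, each user needs to land in the same variant every time they open the app. One common technique is deterministic hashing; the variant names below are hypothetical, loosely matching the ideas above:

```python
import hashlib

# Hypothetical variant names matching the test ideas above; swap in your own.
VARIANTS = ["control", "onboarding_tip", "auto_comment", "lifecycle_email"]

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user, so they see the same variant every session."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

variant = assign_variant("user-42", "comment-activation")
```

Hashing on experiment name plus user ID means assignments are stable across sessions and independent across experiments, with no assignment table to store.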
If you know which behavior you’re guiding your users to, you’re not just poking around in the dark. You’re making data-informed hypotheses that have a high chance of affecting retention. And after each test, you’ll have more ammo for the next one. You’ll be able to weed out correlations that don’t denote causation, and you’ll better understand what works for your users and what doesn’t.
Don’t just think big
Back in the day, Kevin Li used to get frustrated at the end of most product meetings. Li told us that his manager used to say, “Week 3 and week 4 you have very little improvement. Your end goal is seemingly small. I don’t think you’re being ambitious enough. Why don’t you 4x that goal and come back?”
Thinking big is great, but not at the cost of small wins. When Li met with the team again weeks later, they were impressed by the 10-15% lift that all the compounded test results had produced. It’s this sort of attention to detail that was necessary to get the Yahoo Mail app into the top 5 free apps in the app store.
Rolling out big changes is both time-consuming and risky, especially if you already have a base of loyal customers who are happy with your product. But small adjustments are quick and low-risk thanks to feature flags.
You can test new features by rolling them out to just a portion of your users at a time, without having to redeploy.
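A percentage-based rollout takes surprisingly little code. Here’s a minimal, vendor-neutral sketch (the flag name and percentages are made up); in practice teams usually rely on a flagging service or library rather than rolling their own:

```python
import hashlib

# Minimal sketch of a percentage-based feature flag, not any vendor's API.
def flag_enabled(user_id: str, flag: str, rollout_pct: float) -> bool:
    """Hash user + flag into a stable bucket in [0, 100) and compare it to the
    rollout percentage, so widening the rollout needs no redeploy."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000 * 100  # stable value in [0, 100)
    return bucket < rollout_pct

# Ship the new onboarding flow to 10% of users first, then dial it up server-side.
show_new_onboarding = flag_enabled("user-42", "new_onboarding", 10.0)
```

Because the bucket is derived from the user and flag names, each user gets a stable yes/no, and raising the percentage only ever adds users to the enabled group.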
“We use feature flags liberally. Everywhere. And we use this for all our product rollouts.”
– Facebook Engineering Manager Girish Patangay
Growth teams at Yahoo, Facebook, Uber, Lyft, Twitter, and VSCO all use feature flags because flags let them iterate quickly with little cost to their existing user base. This way they can amass small wins quickly and see changes to their retention numbers in a matter of weeks.
Yahoo-like growth isn’t achieved through big-swing projects alone. Consistent, carefully calculated adjustments that add up over time are equally important. If your growth team is doing all the big things right, it might just take some smaller wins to give your app the extra edge.