If you want to have an amazing app, it’s not enough to track only individual events. You need to string the events into conversion funnels to really tell a story about what users are and are not doing in your app.
Conversion funnels reveal actionable mobile app metrics about where users are dropping off in a flow. By tracking the series of events that make up a given process, they let you pinpoint exactly where the problem lies. Doing so will help you identify drop-off and friction points in your app that you need to address, and greatly increase your understanding of your users.
After identifying friction or drop-off points in your app, you can strategize about changes to improve them. A/B testing allows you to implement these changes quickly and validate them on a smaller test group. It'll show you what is and isn't working, and let you take action accordingly.
A/B testing and conversion funnels are particularly effective when used together, because looking at the entire flow of user behavior shows you how a change actually affects the user over the course of the whole flow, rather than just at the initial button click.
Here’s an example of what setting up a conversion funnel for A/B testing looks like in practice.
Example: User Signup Process
Let’s say I’m trying to improve the conversion funnel for completed signups in my app. Once a user opens the app, they’re taken directly to the signup screen. Above is a fairly typical setup, with the skip button in the top right corner to follow the Apple Human Interface Guidelines. If we want to set up a conversion funnel for the signup process, we might track an event at each step of the flow: the app being opened, the signup screen loading, the signup being completed or skipped, and the home screen loading.
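To make this concrete, here is a minimal sketch of how step-to-step and top-to-bottom conversion rates for such a funnel might be computed. The event names and counts are hypothetical, chosen so the final step matches the 75% figure discussed next; a real analytics tool would pull these counts from tracked events.

```python
# Sketch: compute conversion rates through a signup funnel.
# Event names and counts are hypothetical illustrations, not real data.
funnel = [
    ("app_opened", 10000),
    ("signup_screen_viewed", 9800),
    ("signup_completed_or_skipped", 8200),
    ("home_screen_loaded", 7500),
]

def funnel_report(steps):
    """For each step after the first, report conversion from the
    previous step and from the top of the funnel."""
    top_count = steps[0][1]
    report = []
    for (name, count), (_prev_name, prev_count) in zip(steps[1:], steps):
        report.append({
            "step": name,
            "from_previous": count / prev_count,
            "from_top": count / top_count,
        })
    return report

for row in funnel_report(funnel):
    print(f"{row['step']}: {row['from_previous']:.0%} of previous step, "
          f"{row['from_top']:.0%} of all users")
```

The final `from_top` figure is how a number like "75% of users completed the process" falls out of raw event counts.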
Using these events as a conversion funnel, I can see that there is a sharp drop-off point on the signup screen itself. By tracking the number of home screens loaded, I can see that only 75% of users complete the process, a number we’d like to improve. Now that I know the percentage of home screens loaded, I can create a few hypotheses on how to improve it.
Once I’ve analyzed the issue, I can create and run an A/B test on one of those hypotheses. In this case I’ll choose to make the skip button a more prominent option: instead of an X in the top right corner, I’ll move it directly under the signup button and style it as a button as well. I isolate this one change in an A/B test and compare how it performs against the original version on a small group of users. Once the test is completed, I can see how it affected user behavior and my signup-specific metrics.
Now we can see precisely how making the skip button more prominent affected user behavior. Looking at the funnel results, the number of signups stayed roughly the same (there is a slight decrease, but it’s not statistically significant), while the number of skips increased sharply. Overall, the number of users reaching the home screen of the app increased by about 18%.
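That "not statistically significant" call is made with a significance test. As an illustration only (the counts below are hypothetical, and an experimentation platform such as Apptimize runs this kind of test for you), a two-proportion z-test on signup counts can be sketched with the standard library:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: signups in the control vs. the skip-button variant.
z, p = two_proportion_z(480, 1000, 465, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05 -> not significant
```

A slight drop in the raw count, as here, can easily be noise at this sample size; the test quantifies that.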
Because A/B testing isolates each change, we can attribute this effect directly to the more prominent skip button, rather than guessing which factors of a monthly release contributed to the increase.
This information gives invaluable insight into what’s working well and what isn’t. And the same approach works not only for minor changes like this one, but also for larger changes such as new feature releases and flow changes.
A Common Pitfall – Not Tracking Down the Entire Funnel
A common pitfall is making the conversion funnel too short. That is, not tying the conversion funnel to the bottom line or a key performance metric. Instead of focusing only on increasing the number of users who land on the home screen, we need a more holistic view, which in our app means the number of premium conversions. We therefore need to extend the conversion funnel to include the metrics that measure the app's core KPIs.
In the data above, you can see that the rate of premium conversions actually decreased in the variant where we made the skip button more prominent. Even though the overall number of users who made it to the home screen increased, the more important metric, premium conversions, performed worse in the variant than in our original app.
It’s possible that users who would have previously signed up now just skipped since the option was so visible. While increasing the prominence of the skip button increased the percentage of users who made it to the home screen, it’s probably not a change we want to implement. Using the data we’ve collected, we can tweak the change to optimize for premium conversions, or test out another method to increase the completion rate.
Without tracking through the entire funnel, we would likely have continued with the more prominent skip button variant, since it boosted the metrics we were focused on improving. However, by tracking the full funnel, we can get a clearer picture of how a change affects our entire app, and gain confidence that we’re making the best decisions about any changes.
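The full-funnel comparison can be sketched the same way. The counts below are invented to mirror the example's shape (home-screen completions up roughly 18% in the variant, premium conversions down), not real data:

```python
# Sketch: compare control vs. variant across the FULL funnel, down to
# the final KPI (premium conversions). All counts are hypothetical.
control = {"signup_screen": 1000, "home_screen": 700, "premium": 70}
variant = {"signup_screen": 1000, "home_screen": 826, "premium": 58}

def kpi_rates(counts, top="signup_screen"):
    """Express every funnel step as a share of users entering the funnel."""
    return {step: counts[step] / counts[top] for step in counts}

for name, counts in (("control", control), ("variant", variant)):
    rates = kpi_rates(counts)
    print(name, {step: f"{rate:.1%}" for step, rate in rates.items()})

# The home-screen rate improves (70.0% -> 82.6%), but the metric that
# matters, premium conversion, gets worse (7.0% -> 5.8%): judged on the
# full funnel, this variant should not ship.
```

Stopping the comparison at the home-screen step would have declared the wrong winner, which is exactly the pitfall described above.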
Summing It All Up
Creating conversion funnels will give you invaluable insight into what’s actually going on inside your app. Tracking metrics throughout a funnel will allow you to pinpoint exactly where the friction or drop-off points in your app are, and let you test out changes to improve them.
Using A/B testing will also show you precisely how a change affects user behavior, and let you quickly validate your ideas before pushing them out to all of your users. Mobile growth is all about agility: you want to make changes, get feedback on them, and respond accordingly in order to outpace the competition.
To learn more about how to iterate faster, optimize your app, and grow on native mobile, visit the Apptimize website.