For product owners, marketers, and entrepreneurs of consumer mobile products, the primary KPIs are the same: drive user activity on the platform, constantly improve app adoption, and grow revenue for the business.
Customer Acquisition Cost (CAC) is at an all-time high for consumer applications. With acquisition this expensive, it is imperative that the users you have already acquired spend significantly more time in your app, increasing revenue and raising Customer Lifetime Value (CLTV) per user. Every mobile product owner strives to increase product adoption.
One of the key drivers of app stickiness is giving users a positive experience with your brand and app. Getting them to interact more, and ultimately getting them hooked on your platform, is the key to your app's success.
In this post, we will cover one of the most data-driven approaches to improving the user experience: Screen A/B Testing.
Screen A/B testing is the practice of showing two different versions of the same screen (or a feature flow) to different users in order to determine which version performs better. The version that performs best (the winner variant) can then be deployed to the rest of your users.
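CleverTap assigns users to variants for you, but to make the mechanics concrete, here is a minimal sketch of one common approach: deterministic bucketing by hashing the user ID. The function name, hashing scheme, and 50/50 split are illustrative assumptions, not CleverTap's actual implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   variants=("default", "variant_a"),
                   weights=(0.5, 0.5)) -> str:
    # Hash the user ID together with the experiment ID so a user
    # always lands in the same bucket for a given experiment, while
    # different experiments split users independently.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0x100000000  # uniform value in [0, 1)
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if point < cumulative:
            return variant
    return variants[-1]  # guard against floating-point rounding

# Example: a 50/50 split between the default screen and one variant.
print(assign_variant("user-42", "checkout_screen_test"))
```

Because the assignment is a pure function of the user and experiment IDs, a user sees the same variant on every session without the client having to store any state.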
To test which version of your screens or features works best, you must invest in building more than one version. Each additional version consumes your team's bandwidth, a tradeoff you must weigh as the product owner.
Below are some typical use cases for when to use screen A/B testing:
Before releasing a significant new feature to the entire user base, a product team can test multiple versions of the same feature. This helps them identify the best-performing version, the one that delivers the maximum bang for the buck.
Mature product companies constantly release new features and updates. If a feature significantly impacts a core area of the product, it is wise to stage the release to a small percentage of your user population and measure the impact.
This use case is almost a growth hack on top of the A/B testing infrastructure: you put a large chunk of your users (say 90%) in a control group and release the new feature to only the remaining 10%, giving you far more control over the rollout (see the sketch below).
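To illustrate, the bucketing sketch from above doubles as a rollout gate when the weights are skewed to 90/10; the feature flag, experiment name, and weights here are hypothetical.

```python
# Staged rollout as a degenerate A/B test: 90% of users stay in the
# control group, 10% are exposed to the new feature. Reuses
# assign_variant from the earlier sketch.
def new_feature_enabled(user_id: str) -> bool:
    variant = assign_variant(
        user_id,
        "new_checkout_rollout",   # hypothetical experiment name
        variants=("control", "exposed"),
        weights=(0.90, 0.10),
    )
    return variant == "exposed"
```

Ramping the rollout is then just a matter of shifting the weights, say from 90/10 to 50/50 to 0/100, while the deterministic hashing keeps already-exposed users exposed.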
Leading indicators help product owners predict significant changes in product usage or key metrics. They are early signs that something good (or bad) is around the corner, and they are tricky to track precisely because, measured well, they surface long before changes in key metrics (e.g., average revenue per user, sticky quotient) become obvious.
For example, product quality is a classic leading indicator. Are there bugs in your mobile app that could lead to overall customer dissatisfaction? As a variation of the staged rollout use case above, you can expose parts of your app to only a small number of users, monitor their usage patterns, ask for qualitative feedback, and make appropriate adjustments before releasing to all users.
The following five items can be used as a simple checklist for evaluating test platforms. You should look for:
CleverTap’s screen A/B test capability meets all the above criteria. In addition, we integrate A/B testing as just one more form of experimentation you can carry out for increasing the time users spend within your apps.
For example: you can use timely, relevant, and contextual messages over channels like email, push notifications, and SMS to drive users into your app. Once they are in the app, you can experiment with in-app messages or an App Inbox carrying personalized messaging timed perfectly for every user. You can even retarget specific segments of your users on Facebook or Google through our integrations with their respective custom audience products. And, of course, you can experiment with screen A/B testing variants.
Inculcating an experimentation culture within your organization, and having the willingness to experiment with different channels, messaging, and screens, bodes well for your app and business growth.
Your A/B test can be set up within minutes by taking the following simple steps:
Once the experiment is complete, you can analyze the performance of each variant using advanced statistical analysis techniques and then publish the winning variant to the remaining user base.
For example, after running an experiment you might see a comparison like this (the uplift ranges are relative to the default screen):

| Variant | Charged | Item Wishlisted | Funnel: Added to Cart -> Charged | Retention (Days 3-7) | Average Revenue per User |
| --- | --- | --- | --- | --- | --- |
| Default (900 users) | baseline | baseline | baseline | baseline | 250 |
| Variant A (850 users) | -9% to -2% | 2% to 12% | -2% to 8% | -3% to 2% | 209 |
| Variant B (990 users) | 0% to 5% | 6% to 10% | 2% to 15% | 3% to 6% | 390 |
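The ranges above are confidence intervals on the uplift over the default. CleverTap's dashboard computes the winner for you; for the curious, here is a minimal sketch of the kind of comparison involved, a standard two-proportion z-test on a conversion metric such as Charged. The function and the conversion counts are illustrative assumptions, not CleverTap's actual implementation.

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rates
    between a control group and a variant (two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 180 of 900 control users converted vs.
# 240 of 990 users on the variant.
p = two_proportion_p_value(180, 900, 240, 990)
print(f"p-value = {p:.4f}")  # ~0.027: unlikely to be a chance difference
```

A p-value below your chosen threshold (commonly 0.05) suggests the variant's lift is real rather than noise, which is the point at which publishing the winner to the remaining user base becomes a safe call.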
Using our screen A/B test, you can control the entire experiment from the dashboard without having to involve your engineering team at every step of the way.
We will be releasing the Screen A/B test capability soon. Watch this space for updates.