This step involves selecting two or more message variants, say A and B, for testing. Usually, the first variant is your default choice and the second is a challenger competing to dislodge it.
This step involves choosing the right sample size for testing.
– Conversion Rate
The goal of A/B testing is to identify which of the two messages is likely to have the higher conversion rate, such as click-through rate (CTR). Though we do not know the conversion rate the campaign will ultimately achieve, we can make an educated guess based on the past conversion rates of similar campaigns.
While making this educated guess, it is highly unlikely that you could confidently predict the exact conversion rate. In practice, you estimate a range.
For example, instead of exactly 3%, you might predict the conversion rate to lie between 2.7% and 3.3%. In other words, you have assumed an error of +/- 0.3 percentage points.
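As a quick sketch of the arithmetic, the assumed range is simply the point estimate plus or minus the error; the 3% and 0.3% figures below are the example numbers from above:

```python
# Example figures from the text: a 3% point estimate for CTR
# with an assumed error of +/- 0.3 percentage points.
estimated_ctr = 0.03
error = 0.003

# The predicted range is the point estimate +/- the error.
low, high = estimated_ctr - error, estimated_ctr + error
print(f"Assumed CTR range: {low:.1%} to {high:.1%}")
```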
– Confidence Level
The confidence level lets you decide how sure you want to be that the sample results will generalize to the entire audience. If the actual CTRs for similar campaigns lie between 2.7% and 3.3%, a confidence level of 95% for the sample implies that 95% of the time, the actual CTR of the entire target audience would also fall in that range. Put differently, you are willing to let your predictions about the CTR for the target audience fail only 5% of the time.
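Putting the three inputs together, here is a minimal sketch of the standard sample-size formula for a proportion, n = z² · p · (1 − p) / e². The 1.96 z-score corresponds to 95% confidence, and the 3% and 0.3% figures are the example values from above:

```python
import math

def sample_size(p, error, z=1.96):
    """Minimum sample size per variant for an estimated conversion
    rate p, a margin of error `error`, and a z-score `z` for the
    chosen confidence level (1.96 for 95% confidence)."""
    return math.ceil(z**2 * p * (1 - p) / error**2)

# Example figures from the text: 3% expected CTR, +/- 0.3% error.
print(sample_size(p=0.03, error=0.003))
```

Note how the required sample grows quadratically as the error shrinks: halving the acceptable error roughly quadruples the sample size.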
After you have selected the right sample size and run the A/B test, this step involves evaluating the test results and selecting the winning variant, if any.
*(Results table comparing Variant A and Variant B.)*
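Once the test has run, one common way to check whether the observed difference between the variants is statistically significant is a two-proportion z-test. The sketch below assumes hypothetical click and send counts for each variant; the numbers are illustrative only:

```python
import math

def z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test: returns the z statistic comparing
    the conversion rates of variants A and B."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

# Hypothetical counts: at 95% confidence, |z| > 1.96 suggests
# the difference between the variants is real.
z = z_test(clicks_a=380, sends_a=12500, clicks_b=450, sends_b=12500)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

If |z| does not exceed the critical value for your chosen confidence level, the test is inconclusive and there is no winning variant.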
Now that you understand how to choose your sample size, confidence level, error, and conversion rate, you can start testing the components of your message one by one. This will help you find the optimal message for your campaign. Check the CleverTap website to find out more about A/B testing for push notifications.