The purpose of this article is to provide information to Outreach Users regarding A/B testing templates in Outreach.
Applies To:
- Outreach Users
A/B Testing Overview:
An A/B test is a randomized controlled experiment to scientifically identify and evaluate the impact of a change to a template or sequence step.
For Example: Would a different subject line or call to action in a currently running email template improve prospecting? If so, what other changes should be considered?
Running an A/B test provides the data necessary to make an informed decision on whether or not to update shared content by providing a precise measurement of increase or decrease in the pertinent metrics.
Using the example of testing a new email subject line: every prospect who is scheduled to receive the email is randomly assigned one of two variants.
Outreach runs the experiment for a couple of weeks, then computes the metrics of interest and performs a statistical analysis. Based on the analysis, users can determine whether the difference was caused by the implemented change.
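The kind of statistical analysis described above can be illustrated with a standard two-proportion z-test on reply rates. This is a minimal sketch of the general technique, not Outreach's actual implementation; the function name, example numbers, and 95% threshold are assumptions for illustration.

```python
import math

def two_proportion_z_test(replies_a, sent_a, replies_b, sent_b):
    """Two-proportion z-test on reply rates (illustrative sketch,
    not Outreach's actual implementation)."""
    p_a = replies_a / sent_a
    p_b = replies_b / sent_b
    # Pooled reply rate under the null hypothesis of no difference.
    p = (replies_a + replies_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical data: variant A got 30 replies out of 200 sends,
# variant B got 18 replies out of 200 sends.
z = two_proportion_z_test(30, 200, 18, 200)
# |z| > 1.96 corresponds to significance at the 95% confidence level.
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

With these hypothetical numbers the difference is not yet significant, which mirrors the "No clear winner found yet" state: more data or a longer run is needed before a winner can be declared.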
While the experiment is running, the following messages appear on the Sequences page in Outreach.
Experiment in progress.
The Experiment in progress notification informs the user that the experiment is in the process of collecting data to complete a statistical analysis.
No clear winner found yet.
The No clear winner found yet notification informs the user that a statistical analysis cannot be completed with the data collected thus far. This notification results from either the experiment not running long enough or the data pool being too small to produce clear results; wait longer or increase the number of touch points for this step.
Note: Updating the template or step the experiment is running on, while the experiment is running, can invalidate the experiment.
Issues discovered with your experiment.
Issues discovered with your experiment indicates the experiment is invalid.
For Example: If one template appears to be performing better, that result may be caused by issues with the setup rather than a true difference. Outreach checks two criteria to ensure the correctness of experiments:
- The number of emails sent to each variant should be about the same, with at least 150 emails in each variant.
- Both templates should be active during the same consecutive period of time.
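The first criterion above (balanced send counts with at least 150 emails per variant) can be sketched as a simple check. This is an illustrative sketch only: the 150-email minimum comes from the article, but the imbalance tolerance and function name are assumptions, and the second criterion (both templates active over the same consecutive period) cannot be verified from send counts alone.

```python
def sends_look_valid(sent_a, sent_b, min_per_variant=150, max_imbalance=0.2):
    """Illustrative check of the balanced-send-count criterion.
    The 20% imbalance tolerance is an assumed value."""
    # Each variant needs a minimum number of sent emails.
    if sent_a < min_per_variant or sent_b < min_per_variant:
        return False
    # Send counts should be roughly the same across variants.
    imbalance = abs(sent_a - sent_b) / max(sent_a, sent_b)
    return imbalance <= max_imbalance

print(sends_look_valid(200, 190))   # balanced and above the minimum
print(sends_look_valid(100, 200))   # one variant below 150 emails
print(sends_look_valid(300, 150))   # counts too far apart
```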
You have a winner.
You have a winner indicates that Outreach's statistical tests found the winning variant has a statistically higher reply rate than the other variant. Users can turn off the losing variant.
Note: Click View Results to see the data used to determine the winner.
While the experiment is running, it's important not to make any changes to the sequence step or the templates involved in the experiment; otherwise, the results of the experiment will become invalid. Outreach will warn you if you attempt a change that will affect any running experiment.
Editing a template or subject line that is part of a running experiment makes the results difficult to interpret, since part of the results would be based on one template or subject line and part on another. While Outreach will keep the experiment running, we recommend stopping and restarting the experiment by cloning the sequence step and deleting the old step.
Turning off the template will make the experiment invalid.
Turning off the sequence will make the experiment invalid.
Note that if you stop an experiment by pausing or deleting one of the templates, the messaging described above will go away. There is currently no place in Outreach to view historical experiments whose templates are no longer active or no longer exist.
Also note that the messaging described above will only appear for experiments that involve exactly two templates. Using more than two templates is not recommended because fewer emails go to each variant, requiring the experiment to run much longer. Therefore, guided template A/B testing is not available for more than two templates. If you want to evaluate three or more new options for the same email template, evaluate them one by one in two-template experiments.
A/B Testing Best Practices:
Put 100 Prospects Through Per Template Used.
Keep track of the volume of prospects you reach out to when A/B testing sequences. As prospects move through a sequence step with an A/B test enabled, Outreach randomly (but evenly) assigns each prospect one of the templates.
For Example: In an A/B test using two templates, your data becomes valuable after at least 150 prospects have passed through the step; this is when the model starts learning. Outreach therefore recommends applying 100-150 prospects per template, for a total of 200-300 data points. Any fewer will not provide enough relevant data to make an informed decision on the quality of the message.
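The "random but even" assignment described above can be sketched as follows. This is an illustrative sketch of the general technique, not Outreach's actual mechanism; the prospect names and seed are hypothetical.

```python
import random

def assign_variants(prospects, templates=("A", "B"), seed=None):
    """Randomly but evenly assign prospects to templates
    (illustrative sketch, not Outreach's actual mechanism)."""
    rng = random.Random(seed)
    # Repeat the template list so each variant gets an equal share,
    # then shuffle so the assignment order is random.
    slots = (list(templates) * (len(prospects) // len(templates) + 1))
    slots = slots[:len(prospects)]
    rng.shuffle(slots)
    return dict(zip(prospects, slots))

# 300 hypothetical prospects split across two templates.
prospects = [f"prospect_{i}" for i in range(300)]
assignment = assign_variants(prospects, seed=42)
counts = {t: sum(1 for v in assignment.values() if v == t) for t in ("A", "B")}
print(counts)  # 150 prospects per template
```

Note that 300 prospects across two templates yields exactly the 150-per-variant floor the article describes as the point where the data becomes valuable.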
A/B Test Where You Want to Improve.
Because creating A/B tests for various message types takes so little effort, Outreach recommends doing it often and across all types of sequences. Constant testing and refining will lead to a deep understanding of your customers and the best language to address them going forward.
Test Either the Open Rate or Reply Rate - Not Both.
Test either the open rate or the reply rate, one at a time, to ensure you are getting the purest results.
The open rate of your email corresponds to the subject line, while the reply rate corresponds to the body of the email. Outreach recommends starting by A/B testing the open rate of your first email: change the subject line and leave the body of the email the same.
Don't Test Too Many Templates at Once.
Adding too many templates to your A/B test means it will take much longer to get valuable data, and you run the risk of losing valuable analytic data.
How To Create an A/B Test on Sequenced Email Steps