How do you know it’s time to end an A/B test?
If your answer is based on time or statistical significance, your A/B tests may end up being more expensive than they need to be.
I would argue that the length of your testing should depend on your return on investment, not an arbitrary rule. To help you find your ROI and understand how long to run an A/B test, we’ve put together a free A/B test calculator.
Read on to learn more or grab the free calculator now by signing up for our newsletter.
Why Run an A/B Test at All?
A/B testing, also known as split testing, can help you learn more about what your audience wants and needs. Do they prefer a fact-based approach or an emotional one? Animation or live-action? Bold graphics or understated designs?
You can A/B test pretty much anything, from ads to email subject lines. We’ve even created two versions of videos for brands that want to split test video length.
Split testing helps you improve your marketing strategy by showing you which tactics, mediums, and approaches work best for your audience.
But running these tests usually costs time and money. You have to make and maintain two sets of marketing collateral, track the results, and crunch the numbers.
Ending a test too soon might save resources in the short term, but it can cost you results and conversions down the line. On the other hand, the longer a test runs, the more expensive the experiment becomes.
The “conventional wisdom” around ending A/B tests
Experts offer all kinds of recommendations for when you should end an A/B test:
- Neil Patel says you should test for at least 7 days, making sure you’ve reached statistical significance.
- Hubspot recommends a more flexible timeline but still focuses on statistical significance to determine when to end your test.
- Meta limits A/B testing duration on Facebook to between 1 and 30 days, although they too recommend a minimum of 7.
For what it’s worth, I think Hubspot has the right idea on timeline. Plan a window that gives you the best chance of gathering enough data: for an email, that could be as short as 24 to 48 hours; for an ad, you probably want a bigger window.
That brings us to the question of statistical significance. Marketers focus on statistical significance because they expect their results to show a clear preference for one variant over the other. Generally, they’re aiming for at least 95% confidence, meaning a p-value below 0.05.
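To make that concrete, here’s a minimal sketch of the kind of significance test marketers typically lean on: a two-proportion z-test. All the conversion counts below are made up for illustration; this isn’t the math inside our calculator.

```python
from math import erf, sqrt

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z-score and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail, both sides
    return z, p_value

# Hypothetical results: B converts 220/2000 visitors vs. A's 180/2000.
z, p = z_test(180, 2000, 220, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # "significant at 95%" means p < 0.05
```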
Here’s the problem with that approach: 95% is an arbitrary number. It’s based on a desire to feel certain, not on the real financial consequences of your test.
What I recommend instead
As I argued in an article for CXL, the right time to end an A/B test is when the opportunity cost of continuing the experiment becomes larger than the error cost you risk by ending it.
In other words: past a certain point, each additional day of testing costs you more than the extra certainty is worth.
The 95% standard is nothing more than a convention. It might be useful in some cases, but that doesn’t mean you should apply it blindly.
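If you want to see the shape of that trade-off, here’s a back-of-the-envelope sketch in Python. Every number and function in it is a hypothetical assumption for illustration, not the model inside our calculator.

```python
def opportunity_cost_per_day(rate_a, rate_b, daily_visitors, value_per_conv):
    """Dollars lost each extra day the apparent loser keeps half the traffic."""
    lift = abs(rate_b - rate_a)
    return lift * (daily_visitors / 2) * value_per_conv

def error_cost(p_wrong, rate_a, rate_b, horizon_visitors, value_per_conv):
    """Expected cost of stopping now and shipping the apparent winner,
    weighted by the chance (p_wrong) that it is actually the loser."""
    lift = abs(rate_b - rate_a)
    return p_wrong * lift * horizon_visitors * value_per_conv

# Hypothetical inputs: B looks 2 points better, 1,000 visitors/day,
# $50 per conversion, a 10% chance we're wrong, a 90-day horizon.
cont = opportunity_cost_per_day(0.09, 0.11, 1000, 50)
err = error_cost(0.10, 0.09, 0.11, 90 * 1000, 50)
print(f"continuing costs ~${cont:,.0f}/day; stopping risks ~${err:,.0f}")
```

In this made-up scenario, continuing costs about $500 a day against a stopping risk of roughly $9,000, so the two curves cross after about 18 days, regardless of whether the test has hit 95% significance by then.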
With the help of data scientist Wesley Engers, we created a calculator that can help you put that context around your A/B test. It reveals the opportunity cost (that is, the conversions lost) of continuing your split test.
Get the Free A/B Test Calculator
Sign up for our mailing list and we’ll immediately send you the free calculator as a thank you. It’s yours to keep whether you stay on the list or not.
That said, if you’re reading this post, you’re probably interested in video, marketing, and how you can make the most of both. That’s exactly what we cover in our newsletter every week.
So sign up for the calculator but stay for the insights.