Are Your A/B Test Goals Lying To You?

A/B testing tool vendors like Visual Website Optimizer and Optimizely offer the promise of simple and quick integrations. With a JavaScript drop-in, marketers can launch their own tests with point-and-click test and goal setup. The path to growth is simple – make some changes, set up goals, run the test, keep winning changes, and repeat. It’s A/B testing utopia…until the goals start lying to you.

To be fair, it’s not like the goals set out to tell you big, bald-faced lies. The problem is subtler: the testing tools don’t know what they don’t know. There is more to the story than they are telling you.

Two common goals set up in A/B tests are conversion rate goals and revenue goals. They get to the heart of whether a change you make is getting you more customers and more money. The problem is that the A/B testing tool can only measure gross revenue and conversion rate; it can’t give you a view of your net conversion rate and revenue. What is the impact of customer cancellations, refunds, increased support contacts, and more?
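To make the gap concrete, here is a minimal sketch of the two views. Every input is an assumption standing in for data you would pull from your own back-end systems, not anything VWO or Optimizely reports:

```ts
// What the testing tool can see: front-end events during the test window.
const grossConversionRate = (orders: number, visitors: number) =>
  orders / visitors;

// What your business sees: the same orders after the back end has its say.
// Cancellations, refunds, and support cost all live outside the test tool.
const netConversionRate = (orders: number, cancellations: number, visitors: number) =>
  (orders - cancellations) / visitors;

const netRevenue = (grossRevenue: number, refunds: number, chargebacks: number, supportCost: number) =>
  grossRevenue - refunds - chargebacks - supportCost;
```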

If you are only measuring gross revenue and conversion rate, it’s easy to run tests that look like winners on the front end but are losers on the back end. As an example, I recently helped a colleague evaluate an A/B test he had run on a product bundle. The bundle drove a higher average order value (AOV) without hurting conversion, which in turn pushed up revenue. The test was a clear winner in the A/B test dashboard. But the dashboard was giving a gross revenue view. Only by digging into the net revenue did it become clear that the bundle was also driving a significantly higher refund rate, resulting in lower net revenue.
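Here is how that failure mode plays out with hypothetical numbers (not the actual test data): same conversion, a higher AOV on the bundle, but a much higher refund rate.

```ts
// Hypothetical figures for illustration only.
const control = { orders: 50, aov: 50, refundRate: 0.04 };
const bundle  = { orders: 50, aov: 60, refundRate: 0.25 };

const gross = ({ orders, aov }: { orders: number; aov: number }) => orders * aov;

console.log(gross(control), gross(bundle));
// 2500 3000 — the dashboard calls the bundle a 20% revenue lift

console.log(gross(control) * (1 - control.refundRate),
            gross(bundle) * (1 - bundle.refundRate));
// 2400 2250 — net of refunds, the bundle actually loses money
```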

The good news is you can fix this; the bad news is that the fix isn’t a quick JavaScript drop-in. To get a clear picture you must track which tests your customers are part of and which variation they saw. This needs to be recorded in your customer database, your analytics software, or both (a sketch of one way to do it follows below). Then, as part of your analysis of both winning and losing tests (I advocate analyzing losing tests because I have learned a lot from them), you should create a full-funnel view of those customers with a cohort for each variation. Comparing key metrics from each variation’s cohort to the others and to your control will reveal whether you have a true winner.
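As a starting point, here is a minimal sketch of the tracking side. Everything in it is an assumption: `getAssignedVariation` stands in for however your testing tool exposes the active variation (check your vendor’s docs for the real call), and the `/api/experiment-assignments` endpoint is a hypothetical collection API you would build yourself.

```ts
// Stand-in for the vendor-specific lookup of the visitor's active variation.
declare function getAssignedVariation(experimentId: string): string;

// Record the assignment in your own systems so back-end outcomes
// (refunds, cancellations, support contacts) can be cohorted by variation later.
async function trackExperimentAssignment(customerId: string, experimentId: string): Promise<void> {
  const variation = getAssignedVariation(experimentId);
  await fetch("/api/experiment-assignments", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ customerId, experimentId, variation }),
  });
}
```

With assignments stored against customer IDs, the cohort analysis becomes a straightforward join: for each variation, pull that cohort’s refunds, cancellations, and support contacts, and compute the net metrics alongside the gross ones the dashboard already gave you.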