Most Unexpected A/B Test Results and What Could Cause Them

How A/B Testing Could Go Wrong

Running A/B split tests is not only a useful exercise for increasing conversion rates; it can also be quite fun, especially when an experiment produces unexpected results. What are the most unusual surprises you might run into?

  • Control performs better than any treatment variation

You spent all that time planning your experiment and driving visitors to the site, and at the end of the experiment the best-performing option is your control. Try convincing your boss you need A/B tests after that! So what could have gone wrong? Most likely, the changes made in the variations are not significant enough to increase conversions, and you need to find better hypotheses to test. It also makes sense to look at the data in more detail and check whether any other factors might have affected the experiment.
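Before concluding that the control "won", it is worth checking whether its lead is statistically significant at all. A minimal sketch of a two-proportion z-test in Python (the visitor and conversion numbers below are made up for illustration):

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z, p_value)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # normal CDF via the error function, two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control converts at 5.0%, treatment at 5.2% -- is the difference real?
z, p = z_test_two_proportions(500, 10000, 520, 10000)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = -0.64, p = 0.52
```

A p-value well above 0.05, as here, means the "win" is indistinguishable from noise, so neither celebrating the control nor mourning the treatment is warranted yet.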

  • Gender of visitors affects test results

If your traffic is not homogeneous, your results might be affected by how men and women react to different images. A study called “What’s Psychology Worth?” found that using a professional picture of a woman had an effect on men but no effect on women: the response rate of a mail marketing campaign increased by 4.5% among men when a picture of a woman was shown. Take your target audience into account when testing a variation that contains an image.
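In practice this means breaking your results down by segment rather than looking only at the overall rate. A minimal sketch, assuming you can attach a segment label (here, self-reported gender) to each visit record:

```python
from collections import defaultdict

def conversion_by_segment(visits):
    """Compute per-segment conversion rates.
    Each record is a (segment, converted) pair."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [visits, conversions]
    for segment, converted in visits:
        totals[segment][0] += 1
        totals[segment][1] += int(converted)
    return {seg: conv / n for seg, (n, conv) in totals.items()}

# Hypothetical records: a variation may lift one segment and not another
visits = [("male", True), ("male", False), ("male", True),
          ("female", False), ("female", True), ("female", False)]
print(conversion_by_segment(visits))  # {'male': 0.666..., 'female': 0.333...}
```

An overall rate of 50% here would hide the fact that the two segments react very differently to the same page.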

  • A change in website navigation performs worse than expected

When you introduce new navigation, your conversion rate can fall. Even if everybody in your company agrees that the new navigation is much better than the old one, it still might not perform as well as the control. The most common reason is that visitors are used to the old navigation, and the novelty adds confusion. Think of the store where you do your weekly shopping rearranging its layout: the minute the layout changes, you feel uncomfortable and it takes you longer to find what you are looking for, until you get used to it. Therefore, it’s better to test navigation changes on new visitors first, excluding returning visitors from the test.
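One way to do this, assuming you can tell new and returning visitors apart (via a cookie, for example), is to exclude returning visitors at assignment time. A minimal sketch; the function and bucket names are hypothetical:

```python
import hashlib

def assign_variation(visitor_id, is_returning,
                     variations=("control", "new_navigation")):
    """Deterministically bucket a visitor into a variation, keeping
    returning visitors out of the experiment entirely."""
    if is_returning:
        return "control"  # returning visitors always see the old navigation
    # A stable hash ensures the same visitor always lands in the same bucket
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

print(assign_variation("visitor-42", is_returning=True))   # control
print(assign_variation("visitor-42", is_returning=False))  # stable bucket
```

Hashing the visitor ID (rather than picking randomly on every visit) matters: a visitor who is reassigned on each page load would see the navigation flip back and forth, contaminating the experiment.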

  • Results are skewed by unpredicted factors

If your results are not what you expected, don’t take them at face value. Try drilling down into what might have caused the experiment to end this way. Certain things can affect results in ways you might not anticipate: website downtime, JavaScript errors, or visits by bots and employees. In one of its tests, Microsoft noticed that a new page design had a much lower conversion rate (a click on the “buy” button counted as a conversion), yet the number of page views per user was much higher for the treatment variation. On further investigation, they discovered that the site had a monitoring system that simulated clicks to detect click failures, retrying several times before raising an alarm. In the new treatment variation the monitoring system’s clicks didn’t work, so it made many attempts, which inflated page views and dragged down the click-through rate.
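A simple defence is to filter known bot and monitoring traffic out of your logs before computing conversion rates. A minimal sketch; the user-agent patterns and record layout here are illustrative assumptions, not a complete bot-detection solution:

```python
import re

# Illustrative pattern only -- real bot filtering needs a maintained list
BOT_PATTERN = re.compile(r"bot|spider|crawler|monitor", re.IGNORECASE)

def conversion_rate(events):
    """events: list of dicts with 'user_agent' and 'converted' keys.
    Ignores traffic whose user agent matches the bot pattern."""
    human = [e for e in events if not BOT_PATTERN.search(e["user_agent"])]
    if not human:
        return 0.0
    return sum(e["converted"] for e in human) / len(human)

events = [
    {"user_agent": "Mozilla/5.0", "converted": True},
    {"user_agent": "Mozilla/5.0", "converted": False},
    {"user_agent": "UptimeMonitor/1.0", "converted": False},  # simulated click
]
print(conversion_rate(events))  # 0.5 -- the monitor's traffic is ignored
```

Without the filter, the monitoring system's failed clicks would be counted as non-converting visits and the treatment would look worse than it really is, which is exactly what happened in the Microsoft example above.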

To conclude: if the results of a test look suspicious, there is probably a reason. Don’t take them at face value until you have ruled out any external factors that could have influenced the test. And don’t stop experimenting.