Does your website have lots of design faults and usability issues? Are you fully aware of what changes need to be made to improve the performance of your site? Do you lack some of the functionality that many of your competitors have on their websites? Have you employed website design experts to improve the performance of your website? Are you making changes to your website based upon best practice and customer feedback?
These are some of the justifications I have come across for why companies don’t conduct A/B testing on their websites. Despite the proven benefits of A/B and multivariate testing, many websites continue to make changes without running online experiments. Let’s examine the logic behind not testing.
Your website has lots of design faults and usability issues. Show me a website that doesn’t. There is no such thing as a website without usability and design issues. This is the nature of any choice architecture. Compromises have to be made when designing any website. The main unknown factor here is how these decisions influence visitor behaviour and conversion.
Many companies employ website design experts to recommend templates and changes to their websites. They may apply best practice principles and obtain visitor feedback. However, given that almost every website is unique, it is impossible for any consultant or best practice principle to predict exactly how a change will impact user behaviour and your conversion goals.
An A/B test allows the impact of proposed changes to be measured because it splits traffic randomly and employs a control (the existing webpage) to identify the difference in the performance of the alternative user experiences. This scientific approach provides the reassurance that any difference in the conversion rate between the two experiences is unlikely to be the result of changes in other drivers of conversion (e.g. the quality of traffic, competitor activity or the weather) or just random variance.
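To make that scientific approach concrete, here is a minimal sketch of the statistics behind reading an A/B test result: a two-proportion z-test comparing the control against the variant, using only Python’s standard library. The function name and the visitor numbers are illustrative assumptions, not figures from any real test; serious testing tools apply more sophisticated methods (sequential testing, Bayesian approaches), but the underlying question is the same.

```python
from math import sqrt, erf

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's, or plausibly just noise?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 2.0% vs 2.6% conversion on 10,000 visitors each
z, p = ab_test_z(200, 10000, 260, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be random variance alone; because the traffic was split randomly, the other drivers of conversion mentioned above affect both experiences equally.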
This scientific approach is essential for digital marketers as I have come across instances where a clearly inferior design from a usability perspective results in a higher conversion rate. Stakeholders may still want to consider improving the user experience, but unless you run a controlled test before you implement such changes you won’t be able to quantify the true costs or benefits.
Similarly, improved functionality, unless relevant and well integrated into the customer journey, may not have the desired outcome. It may be a distraction, or it may change visitor behaviour in a way that damages conversion.
Even if your site has low traffic or conversion rates, there are still strategies that may allow you to benefit from A/B testing. As Richard Page suggests in his article on how to test and improve your website, if your traffic is too low for A/B testing you should consider setting goals that take visitors towards your conversion objective, such as clicks on your call to action or engagement on the page.
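A rough sample-size calculation shows why this micro-goal strategy helps. The sketch below uses the standard normal-approximation formula for comparing two proportions at 95% confidence and 80% power; the function name and the example rates are illustrative assumptions, not figures from the article.

```python
from math import ceil, sqrt

def sample_size_per_variant(base_rate, relative_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a given
    relative lift (95% confidence, 80% power, normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(n)

# Detecting a 10% relative lift on a 2% purchase conversion rate
# needs vastly more traffic than the same lift on a 20%
# call-to-action click rate further up the funnel.
print(sample_size_per_variant(0.02, 0.10))
print(sample_size_per_variant(0.20, 0.10))
```

Because the click-through micro-goal has a much higher base rate than the final purchase, it reaches statistical significance with roughly a tenth of the visitors, which is what makes it viable on a low-traffic site.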
Finally, human behaviour is too unpredictable to make changes to your website based purely upon subjective opinions and gut instinct. Making changes to a website without testing is not optimisation, it is risk taking, and it could cost your business millions in lost revenues. Although most of my tests are successful, I am frequently surprised by tests that don’t turn out as expected. Given this, A/B testing is an essential tool for any digital marketer who wishes to understand how their content influences their prospects and customers.
Thank you for reading my post and I hope you find some of my other blog posts of interest.