A/B testing is, in simple terms, comparing two variants to see which one performs better. There is no use in having an enormous number of visitors but very little follow-through, so measuring the performance of each variation (A or B) lets you measure the rate at which visitors convert into goal achievers.
A/B testing is the way to make more out of your existing traffic. There are paid-for and free testing tools, but you have to know what you are doing or the results could leave you worse off than you were before. Here we explain things further:
What is split testing?
In marketing and business intelligence, A/B testing is a term for a randomized experiment with two variants, A and B, which are the control and variation in the controlled experiment. A/B testing is a form of statistical hypothesis testing with two variants leading to the technical term, two-sample hypothesis testing, used in the field of statistics. Other terms used for this method include bucket tests and split-run testing but these terms can have a wider applicability to more than two variants.
In online settings, such as web design (especially user experience design), the goal of A/B testing is to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement). Formally the current web page is associated with the null hypothesis.
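The two-sample hypothesis test mentioned above can be sketched in a few lines of code. This is a minimal illustration using made-up traffic numbers, not a substitute for a proper testing tool:

```python
# A minimal sketch of the two-sample (two-proportion) hypothesis test behind
# A/B testing, using only the standard library. All numbers are illustrative.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control (A): 200 conversions from 5,000 visitors; variant (B): 250 from 5,000
z, p = two_proportion_z_test(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) is what lets you reject the null hypothesis that the current page and the variant convert at the same rate.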
As the name implies, two versions (A and B) are compared, which are identical except for one variation that might affect a user's behaviour. Version A might be the currently used version (control), while version B is modified in some respect (treatment). For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales.
Significant improvements can sometimes be seen through testing elements like copy text, layouts, images and colours, but not always. The vastly larger group of techniques broadly referred to as multivariate or multinomial testing is similar to A/B testing, but may test more than two versions at the same time and/or use more controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations, as is common with survey data, offline data and other, more complex phenomena.
Multivariate testing is a technique for testing a hypothesis in which multiple variables are modified. The goal of multivariate testing is to determine which combination of variations performs the best out of all of the possible combinations.
Websites and mobile apps are made of combinations of changeable elements. A multivariate test will change multiple elements, like changing a picture and headline at the same time. Three variations of the image and two variations of the headline are combined to create six versions of the content, which are tested concurrently to find the winning variation.
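The combination arithmetic above can be sketched directly; the image and headline names here are hypothetical placeholders:

```python
# Three image variants and two headline variants combine into 3 x 2 = 6
# versions of the content, all tested concurrently in a multivariate test.
from itertools import product

images = ["hero_photo", "product_shot", "lifestyle_shot"]   # assumed variants
headlines = ["Free shipping", "20% off today"]              # assumed variants

versions = list(product(images, headlines))
print(len(versions))  # 6 combinations
```

This is also why multivariate tests need more traffic than A/B tests: every extra element multiplies the number of versions competing for the same visitors.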
A/B testing, which you may also have heard referred to as split testing, is a method of website optimization in which the conversion rates of two versions of a page — version A and version B — are compared to one another using live traffic. Site visitors are bucketed into one version or the other. By tracking the way visitors interact with the page they are shown — the videos they watch, the buttons they click, or whether or not they sign up for a newsletter — you can determine which version of the page is most effective.
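The "bucketing" step can be sketched as follows. This is one common approach, hashing a visitor ID so that the same visitor always sees the same version across visits; the identifiers and experiment name are made up:

```python
# A minimal sketch of bucketing visitors into version A or B. Hashing the
# visitor ID (rather than choosing randomly on each page view) keeps every
# visitor in the same bucket for the duration of the test.
import hashlib

def bucket(visitor_id, experiment="homepage_test"):
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 100 < 50 else "B"

assert bucket("visitor-42") == bucket("visitor-42")  # stable assignment
share_a = sum(bucket(f"v{i}") == "A" for i in range(10_000)) / 10_000
print(f"share of visitors in A: {share_a:.2f}")
```

Because the hash is effectively uniform, a large pool of visitors splits roughly 50/50 between the two versions without any server-side state.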
Where could you run A/B tests?
You can run a test on literally anything. You can even test how a different colour palette affects your conversions, though that is nowhere near the best use of the technique. Split testing is typically used to create variants of landing pages, as it is well known that many elements on your page, and even your ads, can affect user behaviour.
For example, it’s said that social buttons on an e-commerce page can decrease sales*, as they can drive a user off the site, after which they "forget" to proceed to checkout. It's much more common than you might imagine! It’s part of the character of social networking: once you’re in Facebook, or Twitter, your attention will be caught by something in your newsfeed, whether a kitten or a friend's post; it’s human nature. Once caught up in that distraction, you forget what you were doing beforehand and fail to complete your purchase.
To get a better idea of what you can run tests on, here are the most common elements:
Pay-per-click (PPC) advertising means, in plain English, that the more users click, the more a company pays, so you want to make sure those clicks are as "profitable" as possible. AdWords and every other PPC platform can be the most expensive ongoing cost in your marketing activities, so for your business's success it’s vital to get the maximum from them and avoid letting conversions slip through.
When you run tests on PPC, pay attention to click-through rate (CTR) and conversion rate, as they can vary from keyword to keyword, and some keywords may be useful for gathering information only. Fix a single goal from the start, as it won't be possible to run one test to improve both CTR and conversions at once.
It isn't enough to publish a banner and expect people to click on it. It can be a great success, but also an epic failure. There are several reasons why a banner may not be working:
- Wrong audience
- Weak value proposition: the needs it meets and the benefits for customers are not made clear
- Bad design: boring, poor or cluttered and overwhelming
- No call to action (CTA)
You only have to think back to the last banner ad you saw on a site and didn’t bother to follow up. Why was that? The product or service wasn’t applicable to you? The ad was cluttered with too much text and information and you couldn’t be bothered going through it all? It was boring and you lost interest? There was no immediate obvious benefit to clicking it? There was no CTA that made you want to click down on that mouse button?
You may think it is easy to throw together a banner, and it is. But designing a banner that will actually work takes much more in-depth thought.
Email marketing campaigns can reach many thousands of potential customers, so split testing allows you to trial different versions of a single campaign. In this way, you will be able to find which changes can have a big impact on your sales.
Do not underestimate how many elements you can test during an e-shot campaign, or how you split your database. For example, the address (the ‘from’ name) you send emails from can make a difference, and the subject line and the time you send them may also affect your campaign's performance.
Putting all your trust in a single conversion goal is not the easy answer you are looking for. For instance, suppose you offer a free gift on your website to entice people to sign up to your mailing list. You find it increases your list by 50%. That’s great, but when you look at the sales rate you notice it has actually decreased by 20%. Why? The new subscribers were only interested in the free gift; you hadn’t looked further and measured the impact of the variation on ALL your website goals.
Also, if you use services such as Groupon, can you be sure they are driving new customers to you, or are they simply sending you bargain hunters?
Walk down any busy high street and you’ll see the attention and effort the shops put into displaying products and dressing up shop windows and aisles, especially during holidays. They do it because these elements affect sales. Your website is no different from your shop: your web pages are those shop windows and sales aisles.
The greatest benefit you have in an e-commerce business is that you can make the same aisle look different to different clusters of customers. You can use this to improve sales, but be aware of the SEO risks you may incur.
Many website owners are concerned that multivariate testing will affect their SEO. There are two main concerns:
- Content Cloaking
- Duplicate Content
Every web marketer should know that Google penalises content cloaking, but the good news is that A/B testing isn't actually cloaking. To quote Google: "Cloaking refers to the practice of presenting different content or URLs to human users and search engines." And they cite as examples:
- Serving a page of HTML text to search engines, while showing a page of images or Flash to users
- Inserting text or keywords into a page only when the User-agent requesting the page is a search engine, not a human visitor
However, the final word comes from Google in this article: "[...]showing one set of content to humans, and a different set to Googlebot—is against our Webmaster Guidelines, whether you’re running a test or not. [...]"
They also suggest the following:
- Use rel=“canonical” to indicate that the original URL is the preferred version; they recommend using rel=“canonical” rather than a noindex meta tag
- Use 302 redirects, not 301s; you are running a test, which means it won't last forever. A 301 signals a permanent redirection, whereas a 302 is a temporary one.
- Only run the experiment as long as necessary (though we recommend at least one full week)
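Google's first suggestion amounts to a single tag in the head of each test variant. A minimal sketch, with placeholder URLs:

```html
<!-- On the variant page, e.g. https://example.com/landing-b -->
<head>
  <link rel="canonical" href="https://example.com/landing" />
</head>
```

This tells search engines that the original landing page is the preferred version, so the test variants are not treated as competing duplicate content.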
We will analyse this topic further, covering the most common variables to test and free and paid-for testing tools, in our next articles. In the meantime, if you wish to discuss any split-testing questions you may have, get in touch with us for expert advice from our web team on 01895 619 900.
*To read some research about this: