How A/B Testing Can Lead to a Drop in Conversion

Split tests help optimize conversion: marketers use them to check how effective changes to landing pages, conversion forms and other site elements are. However, A/B tests can also cause a sharp drop in conversion rate and cut sales. How does this happen? Read on.

What split testing means

An A/B test is a research method that lets you evaluate the effectiveness of changes on a site. At the same time, a split test is an applied marketing method for increasing the effectiveness of web pages. A/B testing is performed with special services such as Content Experiments, Visual Website Optimizer or Optimizely.

The essence of an A/B test is as follows:

  • You analyze the page's performance and formulate a hypothesis.
  • To test the hypothesis, you create a test version of the page.
  • Using a special service, you distribute traffic between the original and the test page.
  • After a certain time, you compare the metrics of the control and test pages. This allows you to confirm or refute the hypothesis (a minimal sketch of the whole workflow follows the list).
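To make the workflow concrete, here is a minimal sketch in Python. It is only an illustration under assumed conditions: the variant names, the hypothetical conversion rates and the hand-rolled random assignment stand in for what a service like Content Experiments or Optimizely does for you.

```python
import random
from collections import defaultdict

# Assumed for illustration: each visitor is randomly assigned to the
# original page ("control") or the changed page ("test") with equal probability.
def assign_variant():
    return random.choice(["control", "test"])

# Hypothetical underlying conversion rates, unknown to the experimenter.
TRUE_RATES = {"control": 0.030, "test": 0.036}

impressions = defaultdict(int)
conversions = defaultdict(int)

for _ in range(20_000):                        # simulated visitors
    variant = assign_variant()
    impressions[variant] += 1
    if random.random() < TRUE_RATES[variant]:  # did this visitor convert?
        conversions[variant] += 1

for variant in ("control", "test"):
    rate = conversions[variant] / impressions[variant]
    print(f"{variant}: {impressions[variant]} impressions, "
          f"{conversions[variant]} conversions, conversion rate {rate:.2%}")
```

Run it a few times and the observed rates wobble around the true ones, which is exactly why the comparison only becomes trustworthy once each variant has collected enough traffic.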

Very often the results of split tests are unexpected. For example, the conversion rate may grow after reviews from grateful customers are removed from the feedback page, and the number of orders may increase after the aggressive CTA "Sign up for the newsletter now" is replaced with the neutral text "I want to receive information about new products and discounts."

Sometimes an A/B test does not merely show that a change is ineffective, but actually leads to negative business results. This happens because of marketing or technical errors.

How a split test can bring down conversion: a mini-case

The owner of a popular English-language Internet marketing blog decided to test the effectiveness of a pop-up email subscription form. He created a test version of the form and split the traffic equally between the test and control versions using the AWeber service. A day later, the blogger checked the results of the experiment. The report showed the following:

The top row shows the results of the test version, the bottom row the control. The columns need decoding:

  • Probability: the planned distribution of traffic between the test and control pages.
  • Displays: users saw the test version 6055 times and the control version 610 times.
  • Subscribers: the test version brought in 47 subscriptions, the control version 19.
  • S/D: the ratio of subscriptions to displays, i.e. the conversion rate.

You do not need a degree to see that these results cannot be considered valid. Although the experimenter planned to split traffic equally between the test and control pages, the variant with the changes received almost 10 times more views. As it turned out, this was due to a technical failure on the AWeber platform.

It would seem the blog owner could take a breath, report the error and restart the experiment. But a healthy dose of skepticism makes him look again at the control form's S/D rate and the number of test-form impressions. Had the platform skewed the traffic the other way, those 6055 impressions could have been converted into roughly 187 subscriptions instead of 47. Realizing he had lost well over a hundred subscribers in a single day, the blogger may get seriously upset and lose faith in split testing.
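A quick back-of-the-envelope check shows where that estimate comes from. The figures are the ones from the report above; the small gap between the result and the ~187 mentioned in the text presumably comes from rounding in the original report.

```python
# Figures from the AWeber report described above.
test_impressions, test_subs = 6055, 47
control_impressions, control_subs = 610, 19

test_rate = test_subs / test_impressions             # S/D of the test form
control_rate = control_subs / control_impressions    # S/D of the control form

print(f"test S/D:    {test_rate:.2%}")     # about 0.78%
print(f"control S/D: {control_rate:.2%}")  # about 3.11%

# If the 6055 impressions had gone to the higher-converting control form,
# its S/D rate would have produced roughly this many subscriptions:
print(round(test_impressions * control_rate))  # ~189 instead of the 47 actually received
```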

In fact, no one is immune to technical failures, and A/B tests remain an effective marketing tool despite the incident described above. However, split tests can also show erroneous results and lead to a drop in conversion because of marketing errors.

When split tests lead to losses through the experimenter's fault

An experiment becomes a problem for the site and the business when the marketer makes serious mistakes at the planning stage. Below are the most common mistakes experimenters make.

  • Creating conditions that strongly distort the course of the experiment

Which text do you think will convert better: "Add the product to the cart" or "Place an order"? The answer to this question can be obtained from a split test. And which CTA will be more successful for the subscription form: "Sign up for our newsletter" or "Leave your email and get 1000 rubles in your Webmoney account"? You know the answer to that question without any A/B test.

The problem is not just that the results of the experiment will be distorted. 99% of the people who subscribe for the free 1000 rubles will unsubscribe from your newsletter within a few days. The only reason they will not do it immediately after receiving the money is the fear that you will haunt their dreams, shaking your head in reproach. So this mistake is dangerous not so much because it distorts the experimental results as because it generates false conversions.

  • Insufficient sample size

Many A/B testing services let you set an arbitrary share of traffic to take part in the experiment. If you let only an insignificant number of visitors into the experiment, the time required to obtain a valid result grows significantly. But that is not all.

Imagine the following situation: you are testing a new page design. Only 5% of visitors take part in the experiment and are sent to the test page; you are in no hurry and do not want to take risks. A month later it turns out that the conversion rate of the test page is 2.5 times higher than that of the control. Kicking yourself afterwards changes nothing compared with the profit already lost: the losses could have been much smaller if the traffic had been distributed differently at the start of the experiment.
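To make the trade-off concrete, here is a rough estimate using the standard two-proportion sample-size formula. The baseline rate, expected lift, confidence and power below are illustrative assumptions, not figures from the example above.

```python
from statistics import NormalDist

def visitors_per_variant(p_control, p_test, alpha=0.05, power=0.80):
    """Approximate visitors needed in each variant to reliably detect the
    difference between p_control and p_test (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_test) ** 2

# Assumed scenario: 3% baseline conversion, hoping to detect an increase to 4%.
n = visitors_per_variant(0.03, 0.04)
print(f"~{n:,.0f} visitors per variant")  # roughly 5,300
```

With a hypothetical 2,000 visitors a day, sending only 5% of them to the test page means about 100 test impressions per day, so collecting ~5,300 takes almost two months; a 50/50 split would get each variant there in under a week, and the weaker variant would be retired that much sooner.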

  • Testing several page elements at once

Imagine you are testing a new version of the text on a conversion button. At the last moment the designer also decides to change the color of the button on the test page. During the experiment it turns out that the new page converts twice as well as the old one. You attribute the effect to the new text and delete the old version.

After a while you notice that page conversion has dropped by 50%. Having cornered the designer in the smoking room, you squeeze a confession out of him about the changed button color. Now you can explain the rise in conversion during the test. You also realize that you have lost time and customers, since the new button text was actually less effective than the old one.

  • Incorrect metrics selection

Imagine you are testing the effectiveness of conversion buttons inviting users to download a free e-book. In this case a click on the button can be considered a conversion, since the book immediately starts downloading to the visitor's hard drive. You would evaluate the test results by the CTR of each button variant.

But how do you evaluate the effectiveness of a conversion button that invites users to add a product to the cart or place an order? A click on the button does not equal a conversion: after adding the product to the cart, the user can still change his mind. Perhaps the effectiveness of the button should be assessed by the number of completed transactions? Could a button with a higher CTR generate fewer completed transactions? And would it hurt the business if you mistakenly consider a button with a high CTR but a low payment rate to be the more effective one?
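The risk is easy to see with a toy comparison; all the numbers below are invented for illustration.

```python
# Hypothetical results for two "add to cart" button variants.
buttons = {
    "A": {"impressions": 10_000, "clicks": 800, "paid_orders": 40},
    "B": {"impressions": 10_000, "clicks": 600, "paid_orders": 54},
}

for name, stats in buttons.items():
    ctr = stats["clicks"] / stats["impressions"]
    order_rate = stats["paid_orders"] / stats["impressions"]
    print(f"Button {name}: CTR {ctr:.1%}, paid-order rate {order_rate:.2%}")

# Button A wins on CTR (8.0% vs 6.0%), but button B produces more completed
# transactions (0.54% vs 0.40%). Judging the test by clicks alone would
# crown the weaker variant.
```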

Strong medicine in the hands of the inexperienced is like a sharp sword in the hands of a madman

With these words, physicians of antiquity warned novice colleagues against the mindless use of drugs. An A/B test is a powerful marketing medicine that brings benefit only when used correctly. Split-testing errors can cost you not only lost time, but also a drop in conversion and sales. Be careful!

