July 29, 2025

4 Landing Page Testing Best Practices Brands Should Follow

Learn the best practices for implementing A/B tests on your landing pages.

Adam Ritchie
Ecommerce Contributor

Landing pages are one of the most important parts of your online store.

While other pages are burdened by the need to present many different options to the customer (buy this product if you want, or you can browse all of our other products, and here’s a button if you need to contact us, etc.), landing pages are focused on persuading visitors to complete one specific action, like subscribing to your newsletter or signing up for a loyalty program. With this concentrated approach, you’ll have a much easier time reaching whatever goal you happen to be targeting. 

In fact, landing pages are so important that you shouldn’t leave any aspect of them up to chance. From design to functionality to everything in between, A/B testing allows you to objectively determine the best way to approach each detail of these pages, unlocking extraordinary potential for your conversion rate.

Here’s how it works: in an A/B test, you can publish two versions of a landing page at the same time (the original version of the page vs. a new version with some kind of change you want to try out), randomly assign visitors to each variant, and then evaluate how these variants perform against each other to see whether your change was really a good idea or not.
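
To make the mechanics concrete, here’s a minimal sketch (in Python, with hypothetical test and visitor identifiers) of how a testing tool might split traffic behind the scenes: each visitor is hashed into a bucket so the same person always sees the same variant, while overall traffic stays divided according to the ratio you choose.

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str, split: float = 0.5) -> str:
    """Deterministically assign a visitor to 'A' (original) or 'B' (new variant).

    Hashing the visitor ID means a returning visitor always sees the same
    version, while overall traffic is divided according to `split`.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # a number between 0 and 1
    return "A" if bucket < split else "B"

# Example: a visitor (identified by a cookie value, say) gets a stable assignment
print(assign_variant("visitor-123", "newsletter-landing-page-headline"))
```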

But if you don’t know what you’re doing, you could end up wasting your time on tests that don’t produce any meaningful insights — or, worse, publishing a version of the page that actually hurts your conversion rate in the long run. In order to help you stay on the right track, we’ve covered all of the key landing page testing best practices that you need to know below.

Ecommerce experimentation has never been more accessible. Shogun A/B Testing’s user-friendly interface makes it easy for absolutely anyone to set up, manage, and evaluate landing page tests on Shopify. Get started now.

Best Practices for Developing Hypotheses

All landing page tests start out with some kind of hypothesis — this is an idea you have for an edit that might make your landing page more effective.

There’s an art to coming up with these ideas, as well as developing them to the point where they’re ready to be tested. Here are some tips to help you get started:

  • Do your research: The whole point of a landing page test is figuring out how to influence customer behavior. Who better to listen to than the customers themselves? You can take a look at your reviews, analyze support tickets, or even conduct customer surveys to identify potential areas for improvement in your site design and marketing strategy.
  • Choose your pages wisely: Every minute you dedicate to a landing page test is time that could have been spent on all the other tasks that go into running your store. Always ask yourself two questions before moving forward with a hypothesis: how difficult would it be to get a successful result, and how much of an impact would it make if the test was successful? For the sake of efficiency, you should be aiming for wins that are easy or big (or, ideally, both). Changes to high-traffic pages can make more of an impact on your bottom line than changes to less popular ones, and it’s easier to improve the performance of landing pages that currently have a low conversion rate than those that are already performing well.
  • Be specific: It’s best to only make changes to one variable in these experiments — otherwise, even if there is some difference in performance between Version A and Version B, you won’t be able to tell which of the changes you made caused it.

Let’s walk through a hypothetical example to demonstrate all of these points in action.

Hypothesis Testing in Action

  1. Identify High-Traffic, Low-Conversion Pages
    • Start by reviewing your data.
    • Pinpoint landing pages that attract a lot of visitors but have low conversion rates (a quick sketch of this filtering step appears after this list).
  2. Evaluate the Page Like a Visitor
    • Open the page with fresh eyes.
    • Avoid vague thoughts like “maybe the language could be more persuasive.”
    • Instead, develop a refined, testable hypothesis.
  3. Narrow Your Focus
    • Consider the different text elements: headings, paragraph copy, CTA buttons.
    • Choose one element to test first.
    • Example of a good hypothesis:
      “The heading is too bland. If we include a specific benefit in the heading, it may capture more visitor attention and increase the conversion rate.”
  4. Run the Test
    • Implement the hypothesis and monitor performance.
    • If there’s no improvement, don’t stop there.
  5. Iterate on Other Elements
    • Try updating other parts of the page:
      • Paragraph text.
      • CTA button labels.
    • Each failed test helps eliminate possibilities and refine your direction.
  6. Use Test Results to Spark New Ideas
    • Testing might lead to new hypotheses.
    • If content changes don’t work, consider:
      • Reviewing customer feedback.
      • Checking for design-related issues (e.g., readability).
  7. Test Design Adjustments
    • New hypothesis example:
      “If we improve readability by increasing text size and contrast, more visitors will understand the benefits and convert.”
    • Implement design changes based on this hypothesis.
    • Watch for significant improvements.
  8. Embrace Iteration
    • Don’t expect instant success.
    • Each cycle — even the “failures” — brings insights.
    • Continuous testing and refining is key to optimizing landing pages.
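
As a quick illustration of step 1, here’s a minimal sketch, assuming you’ve exported per-page traffic and conversion numbers from your analytics tool (the column names, figures, and thresholds below are made up for the example), that flags high-traffic, low-conversion pages as testing candidates:

```python
import pandas as pd

# Hypothetical export from your analytics tool: one row per landing page
pages = pd.DataFrame({
    "page": ["/lp/newsletter", "/lp/loyalty", "/lp/free-shipping"],
    "sessions": [12400, 850, 9600],
    "conversions": [110, 30, 310],
})

pages["conversion_rate"] = pages["conversions"] / pages["sessions"]

# Candidates: plenty of traffic, but converting below an (arbitrary) 2% threshold
candidates = pages[(pages["sessions"] >= 5000) & (pages["conversion_rate"] < 0.02)]
print(candidates.sort_values("sessions", ascending=False))
```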

Understanding Sample Size and Timing

Another reason to prioritize pages that get a lot of visitors for these experiments is that the more traffic you receive, the more accurate your results will be. 

To demonstrate why this is the case, let’s take an extreme example — imagine that just one person visited each of Version A and Version B before you immediately closed the test. Maybe the visitor to the original version converted and the visitor to the new variant didn’t. Does that mean your test failed?

Of course not. For one thing, conversion rates in ecommerce tend to be quite small (for Shopify sales, the average conversion rate is 1.4%, and even a good conversion rate is still just 3.2%). At the average rate, it takes roughly 70 visits just to get a single conversion, so you’ll need to wait until you’ve had at least hundreds of visitors before you can judge whether a variant is underperforming.

And then there’s the demographic issue. There are likely all kinds of people who visit your store — people of different ages, who live in different areas, make different amounts of money, etc. All of these factors can affect how likely it is that someone will be influenced by the changes you’ve made to the new variant in your test. 

If you end your test too early, then some of these segments may be underrepresented (or overrepresented, for that matter) in your sample. And if the sample population in your test doesn’t accurately reflect the entire population of the people who visit your store, then your results might point you in the completely wrong direction.

CXL provides an illustrative example of why you must collect enough data for your A/B tests. At first, the control in their experiment looked like the obvious winner:

The control initially looks like the winner in this experiment. 

But then, as more data came rolling in, it became clear that it was actually the new variant that performed better:

With more data, the new variant now appears to be the winner.

As a rule of thumb, the math shows that you should probably wait until each variant gets, at the very least, 355 visits before concluding your test.

Remember, this is a per-variant figure — if you want to test two new variants against the control in an experiment rather than just one (this is known as an A/B/C test), you should expect it to take more time to produce reliable results compared to an A/B test, as you’ll need more visitors. 

Using the 355 rule, an A/B test would need at least 710 visitors total (355 times two), while an A/B/C test would need 1,065 (355 times three). But depending on the circumstances, such as how far apart the results are between Version A and Version B, you may want to wait until you get many more visitors than that.
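
If you want something more precise than the rule of thumb, a standard power calculation can estimate the per-variant sample size for your own numbers. Here’s a minimal sketch using the statsmodels library; the baseline conversion rate and the lift you hope to detect are assumptions you would replace with your own:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.02   # current conversion rate of the page (assumed)
target_rate = 0.03     # the lift you hope to detect (assumed)

effect_size = abs(proportion_effectsize(baseline_rate, target_rate))
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,          # 95% confidence
    power=0.8,           # 80% chance of detecting a real effect of this size
    alternative="two-sided",
)
print(f"Visitors needed per variant: {n_per_variant:.0f}")
```

Notice how quickly the required sample grows when the effect you’re trying to detect is small relative to the baseline rate.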

The technical term for this threshold of how much data you need to produce reliable results is “statistical significance”. 

Specifically, statistical significance is reached once you’ve calculated that the p-value for your test is less than 0.05. Without getting too much into the math here, that means that if Version A and Version B were actually identical, there would be less than a 5% chance of seeing results at least this far apart. In practical terms, this is the 95% confidence level you’ll often hear about: you can be reasonably sure that the difference you’re seeing reflects the change you made rather than random luck.
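
As an illustration of what that calculation looks like in practice, here’s a minimal sketch of a two-proportion z-test using statsmodels; the visitor and conversion counts are made up:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results so far: [Version A, Version B]
conversions = [40, 62]
visitors = [2000, 2000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"p-value: {p_value:.4f}")

if p_value < 0.05:
    print("Statistically significant at the 95% confidence level.")
else:
    print("Not significant yet - keep collecting data.")
```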

Also, even once you’ve reached statistical significance, you should still keep your test running until you get to the same day of the week that you started on (e.g., if your test hits statistical significance on Day 10, keep it running until Day 14). 

The reason for this is that, for most stores, each day of the week has its own trends when it comes to traffic, conversion rate, average order value, etc. For example, maybe people tend to do more research on the weekend (so, more traffic) but make most of their purchasing decisions on weekdays (so, more revenue). 

Just like how you don’t want to underrepresent or overrepresent any audience demographics in your sample, you don’t want to underrepresent or overrepresent any days of the week either.
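
In practice, this just means rounding the test’s runtime up to a whole number of weeks once significance is reached, as in this small sketch:

```python
import math

def days_until_full_weeks(days_run_so_far: int) -> int:
    """Extra days to keep a test running so every day of the week is
    represented the same number of times (e.g., Day 10 -> run until Day 14)."""
    total_days = math.ceil(days_run_so_far / 7) * 7
    return total_days - days_run_so_far

print(days_until_full_weeks(10))  # 4 more days, ending on Day 14
```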

Tips for Interpreting Results

So, your test reached statistical significance and you kept it running until it had covered a whole number of weeks. Now what?

Well, if the new variant outperformed the original, you should go ahead and make it the only published version of the page. 

And if the new variant performed the same or worse, you should retire it and revert to the original version of the page.

Beyond that, here are a few other points to keep in mind:

  • Small changes in performance can make a big difference: Let’s say your test increased the conversion rate of a landing page promoting a certain product from 2% to 4%. That doesn’t sound like much, but another way of looking at it is that the test has doubled the sales performance of the page. Depending on how much traffic the page receives, that could mean many thousands (or, in some cases, even millions) of dollars in additional revenue for your store. 
  • Improvements in performance compound over time: Even if you don’t get much traffic, it’s important to keep the big picture in mind. Let’s take the same example from the bullet point above, except this time we’ll specify that it’s a low-traffic page, and the test that showed an increase in conversion rate from 2% to 4% was run over the course of a month. For that one month, the difference in raw sales performance may not look like much. But the page keeps converting at the higher rate month after month, so over a year those extra conversions stack up into twelve months’ worth of doubled performance (the quick calculation after this list puts numbers on this). That likely adds up to a nice bump in revenue no matter the size of your store.
  • Don’t forget about secondary metrics: For a landing page test, the metric you’re probably paying the most attention to is the conversion rate. But you would be wise to review all of the other information you’re collecting about each variant as well. For example, if your landing page links to one of your collections, maybe the new variant “wins” in terms of conversion rate but actually has a lower average order value due to people making bigger purchases when they do visit the collection through the original version of the landing page. That sort of finding could lead you to question why this is the case and develop entirely new, potentially fruitful hypotheses to investigate.
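
To put rough numbers behind the first two points above, here’s a minimal back-of-the-envelope sketch; the traffic figures and average order value are made up:

```python
# Hypothetical low-traffic landing page
monthly_visitors = 1500
old_rate, new_rate = 0.02, 0.04   # conversion rate before and after the winning variant
average_order_value = 60.0        # in dollars (assumed)

extra_orders_per_month = monthly_visitors * (new_rate - old_rate)
extra_revenue_per_month = extra_orders_per_month * average_order_value
extra_revenue_per_year = extra_revenue_per_month * 12

print(f"Extra orders per month: {extra_orders_per_month:.0f}")       # 30
print(f"Extra revenue per month: ${extra_revenue_per_month:,.0f}")   # $1,800
print(f"Extra revenue per year: ${extra_revenue_per_year:,.0f}")     # $21,600
```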

Also, you may be wondering — can the results of an experiment on one page be applied to other pages (e.g., if you find that a certain CTA button color works best on one of your existing landing pages, should you then make that detail part of the default style on future landing pages)?

Merchants who want to extrapolate results like this should do so with caution. There may be factors at play during your test on one page that wouldn’t affect other pages for whatever reason.

To help control for these unknown variables, you’ll need to keep your test running for even longer and collect much more data than you would have otherwise. Again, more traffic means more reliable results. A 95% confidence level is great, but if you want to extrapolate you should go beyond standard statistical significance and aim for something more like 99%. 
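
As a rough illustration of how much extra data a stricter threshold demands, you can rerun the earlier power calculation with a smaller alpha (again with assumed conversion rates):

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect_size = abs(proportion_effectsize(0.02, 0.03))  # same assumed 2% -> 3% lift

for alpha, label in [(0.05, "95% confidence"), (0.01, "99% confidence")]:
    n = NormalIndPower().solve_power(
        effect_size=effect_size, alpha=alpha, power=0.8, alternative="two-sided"
    )
    print(f"{label}: about {n:.0f} visitors per variant")
```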

Testing on Ecommerce Landing Pages

Shogun A/B Testing is a powerful, affordable, and easy-to-use option for conducting your own ecommerce experiments. 

This app allows you to create the variants in your A/B tests with Shopify’s built-in theme editor. Unlike with most other optimization platforms, you won’t need to figure out how to navigate any new design tools in order to set up your experiments. There’s absolutely no learning curve involved.

Shogun A/B Testing makes it easy to set up your experiments.

Shogun A/B Testing also gives you a high degree of control over your testing parameters. You can determine how much traffic goes to the original version of the page as well as how much goes to the new variant, and you can target your test to a specific segment of your audience if you prefer (e.g., if your hypothesis only applies to visitors from a certain country, you can set it up so that the test is only shown to visitors who happen to be located in that country). 

You can create custom audiences within Shogun A/B Testing.

Shogun A/B Testing conveniently keeps track of statistical significance for you. After you start running a test, you can check in to see whether you should keep waiting to collect more data or whether your results are ready — there’s no need to do any of the math on your end. 

Shogun A/B Testing tells you when your results are statistically significant.

And it’s also worth noting that Shogun A/B Testing keeps track of a variety of metrics for each variant in your test, including conversion rate, total sales, top products ordered, top clickthrough destinations, and much more. On top of your primary goal, you can review all sorts of additional context that will help you make highly informed decisions about what to do with your test results.

You can review a variety of different metrics for each variant in Shogun A/B Testing.

With the technical capabilities of Shogun A/B Testing and the best practices covered in this guide, you have everything you need to maintain an incredibly effective landing page testing process. 
