Good test design is a crucial part of any long-term, successful conversion optimisation programme. Companies that excel at it increase their testing velocity and quality, as well as the amount of knowledge they hold about their customers. This post is about structuring and designing the best possible test once you have found a strong hypothesis. The hypothesis generation process will be covered in a separate post; here we focus on how you can use our framework to design an iterative approach to structuring your experiments. The framework follows lean product development principles and is designed to help you learn faster, at a lower cost.

The P-M-D framework

Before designing a test, you will need to have:

  • a strong hypothesis
  • enough quantitative and/or qualitative data to support the hypothesis
  • defined goals and constraints: what are your business goals, what are your users’ goals, and what are your limitations (technical and analytical resources, traffic, time, legal, etc.)

The process to create these will be covered in a separate post (subscribe to our newsletter at the bottom of the page to be notified).
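As an illustration, here is a minimal sketch of one way to capture these prerequisites before design work begins. All field names and values are hypothetical, not a prescribed schema (TypeScript is used for the examples throughout this post):

```typescript
// A hypothetical record for capturing a test's prerequisites before design
// work starts. Field names are illustrative, not a prescribed schema.
interface TestPrerequisites {
  hypothesis: string;            // the falsifiable statement under test
  supportingEvidence: string[];  // quantitative/qualitative data points
  businessGoal: string;          // e.g. "increase completed bookings"
  userGoal: string;              // e.g. "book a suitable flight with confidence"
  constraints: {
    weeklyTraffic: number;       // visitors available to the experiment
    maxDurationWeeks: number;    // how long the business can wait for a result
    devDaysAvailable: number;    // engineering budget for the build
  };
}

const urgencyTest: TestPrerequisites = {
  hypothesis: "Users respond to urgency cues while browsing flight results",
  supportingEvidence: [
    "Session recordings show long hesitation on the results page",
    "Survey respondents cite fear of prices changing",
  ],
  businessGoal: "Increase booking conversion rate",
  userGoal: "Book a suitable flight with confidence",
  constraints: { weeklyTraffic: 50_000, maxDurationWeeks: 4, devDaysAvailable: 3 },
};
```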

Step 1: “P” – Proof of Concept

Test the conversion lever: what is the underlying assumption behind our hypothesis?

The pitfall most companies fall into is trying to build a complete test that answers the hypothesis from the start. Not only is this slow, but it also increases the cost of failure. In true lean fashion, the alternative is to reduce the hypothesis to its core assumption and test that. If the assumption is wrong, then the hypothesis is likely wrong too. This also allows you to fail fast and cheaply.

For example, instead of implementing complex urgency messaging showing how many seats are left on a plane (which requires connections to inventory systems to be built), test the core assumption: that users respond to pressure when they are in a browsing situation. You could show a more generic message such as “This is one of the busiest routes, book now to avoid disappointment” and prove that your users do indeed respond to it.
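A minimal sketch of what such a proof of concept might look like, assuming a simple client-side 50/50 split. The bucketing helper is illustrative; in practice your testing tool would handle assignment:

```typescript
// Proof-of-concept sketch: split traffic 50/50 and show the generic urgency
// message to the variant group only. Helpers are illustrative placeholders.
function assignBucket(userId: string): "control" | "variant" {
  // Deterministic hash so a returning user always sees the same experience.
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "control" : "variant";
}

function renderResultsBanner(userId: string): string {
  if (assignBucket(userId) === "variant") {
    return "This is one of the busiest routes, book now to avoid disappointment";
  }
  return ""; // control: no urgency messaging
}
```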

If the test is successful, proceed to building an MVP (step 2). If the test is flat or negative, either repeat this step and test the same underlying assumption differently (perhaps by making the change more prominent or placing it on more pages), or rule this hypothesis out and move on to another. Either way, you waste minimal time and resources and gain valuable insights.

Step 2: “M” – Minimum Viable Product

Test the execution: how to implement the hypothesis quickly?

Once the concept is proven, you can be confident that you have found an important conversion lever. Now the goal is to get ROI as quickly as possible by testing the execution of the hypothesis, as opposed to the lever, while still keeping costs low. Design a campaign that tests only a few variations, relying on the qualitative and quantitative data you have collected to inform your choices (what are other sites doing? What do users say in surveys?). By testing only as many variations as your traffic (or your technical limitations) allows at once, and by making those variations as varied as possible, you maximise your chances of detecting a significant change in a reasonable timeframe.
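To make the traffic trade-off concrete, here is a rough sketch of the standard two-proportion sample-size approximation at 95% confidence and 80% power (z-values hardcoded). The base rate, lift, and traffic figures are illustrative assumptions, not benchmarks:

```typescript
// Rough sample size per variation for detecting a relative lift in a
// conversion rate, using the standard two-proportion approximation.
function sampleSizePerVariation(baseRate: number, minDetectableLift: number): number {
  const p1 = baseRate;
  const p2 = baseRate * (1 + minDetectableLift);
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// E.g. a 3% base rate and a 10% relative lift need ~53,000 visitors per
// variation; at 50,000 visitors/week, a control plus three variants runs
// for roughly 4.3 weeks. Every extra variation extends that further.
const perVariation = sampleSizePerVariation(0.03, 0.10);
const weeks = (perVariation * 4) / 50_000; // 4 groups including control
console.log({ perVariation, weeks: weeks.toFixed(1) });
```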

For example, instead of implementing a large MVT that tests every combination of designs, positions, and complex features for a progress bar in your checkout, test a handful of simple combinations of those features. The key is to be open-minded and creative with these variants.

At the end of this test, if you have a clear winner, implement it on your site and move to step 3, Development, where you will fine-tune and further enhance your execution over the long term. If there is no winner, or the control wins, then try different ways of implementing your hypothesis, or go back to step 1 and validate the assumption differently.

Step 3: “D” – Development

Refine the execution: how to develop and improve the implementation further?

At this stage, you have proven that the conversion lever you are focusing on is worth investing in, and you have already shipped a better version of your site that responds to the initial hypothesis (even though you haven’t tested everything, and you shouldn’t). The data you have collected so far is critical to designing further iterations. The MVP(s) you experimented with will give you data about how people react to the change, and it’s time to dig deeper to identify interesting behaviours and segments.

Using this data, you can now design larger tests which will build on the basic design you have implemented and compare the importance and execution of various features. For example, if you have implemented a simple price filter on your search results page, you may now want to structure an MVT which tests some designs (sliding scale vs free-form vs pre-determined price categories) as well as introduce new functionalities which would have been complex to build earlier on (such as live update of the results when the filter is used).
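As a sketch of why such tests belong at this stage, the snippet below enumerates the full-factorial cells for the price-filter example; each cell needs its own sample, which is why traffic matters. Factor names and levels are illustrative assumptions:

```typescript
// Illustrative factors for a price-filter MVT (names and levels assumed).
const factors: Record<string, string[]> = {
  design: ["sliding scale", "free-form", "preset categories"],
  liveUpdate: ["on", "off"],
  position: ["above results", "sidebar"],
};

// Full factorial: every combination of every factor level.
function fullFactorial(f: Record<string, string[]>): string[][] {
  return Object.values(f).reduce<string[][]>(
    (combos, levels) => combos.flatMap((c) => levels.map((l) => [...c, l])),
    [[]]
  );
}

const cells = fullFactorial(factors);
console.log(cells.length); // 3 * 2 * 2 = 12 cells, each needing its own sample
```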

Depending on your other priorities and the technical complexity involved, aim for smaller, more focused tests here, treating the area as a continuous optimisation effort rather than a one-off project.

Notes

Knowledge Management

As your optimisation programme scales, it becomes crucial to record and share learnings. The P-M-D framework lets you distinguish learnings about concepts (users who know what they want use the search bar rather than the menu categories) from learnings about features (search algorithm A works better than search algorithm B).
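One hypothetical shape for such a learnings log, separating concept-level insights (about the lever) from feature-level insights (about the execution); all names and entries are illustrative:

```typescript
// An illustrative learnings log that keeps concept-level and feature-level
// insights distinct so they can be searched and reused separately.
type Learning =
  | { kind: "concept"; lever: string; insight: string; evidence: string }
  | { kind: "feature"; feature: string; insight: string; evidence: string };

const learningsLog: Learning[] = [
  {
    kind: "concept",
    lever: "navigation intent",
    insight: "Users who know what they want use the search bar, not menu categories",
    evidence: "link to proof-of-concept test write-up (placeholder)",
  },
  {
    kind: "feature",
    feature: "search ranking",
    insight: "Search algorithm A works better than search algorithm B",
    evidence: "link to development-stage test write-up (placeholder)",
  },
];
```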

MVT or A/B/n?

Multivariate tests (MVTs) are useful for determining the impact of specific features by testing them in combination, but this comes at a cost in time and traffic. We recommend running MVTs only at step 3 (and at step 2, if your traffic and technical resources allow).

Defining KPIs

To get the full benefit of the iterative methodology, it’s important to maintain a constant set of a few primary KPIs to measure success against (such as purchase or account creation). Secondary KPIs can be defined on a test-by-test basis to provide additional learnings about user behaviour.
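A small sketch of this separation, with hypothetical KPI names: the primary set stays fixed across the programme so results remain comparable, while secondary KPIs vary per test:

```typescript
// Primary KPIs are fixed for the whole programme; secondary KPIs are
// test-specific. All KPI names here are illustrative.
const PRIMARY_KPIS = ["purchase", "account_creation"] as const;

interface TestPlan {
  name: string;
  primaryKpis: readonly string[]; // always the same, for comparability
  secondaryKpis: string[];        // test-specific behavioural metrics
}

const urgencyPoc: TestPlan = {
  name: "Urgency messaging proof of concept",
  primaryKpis: PRIMARY_KPIS,
  secondaryKpis: ["results_page_exit_rate", "time_to_select_flight"],
};
```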

Sign up to the CRO Newsletter to stay up to date