CASE STUDY: ISAP

http://blog.vendoservices.com/vendo-blog/2017/09/06/case-study-isap

VPC

This is an introduction to ISAPs, a tool used at Vendo. It will kick off the VPC as a valuable tool for improving as leaders.

Matt was so shocked by the number that flashed onto his screen that he pushed his mouse completely off his desk. It hung by the USB cable. His hands were in the air, palms facing up. On his face was the universal expression of “WTF???” It’s recognizable on the faces of Wall Street traders after sudden movements in the market, and of natives of remote tribes when they first see an outsider.

The number on his screen was 25%. Impossible.

A few weeks earlier a HiPPO (Highest Paid Person in the room) had suggested a simple test. Remove the warning pages in front of tours. “I’ll bet we see a huge increase,” he’d said. Matt was skeptical. Now the test results were staring him in the face. A 25% increase in sales.

Matt picked up the dangling mouse from the side of his desk and placed it next to his keyboard.

He had some questions about this number. Where to begin?

Matt had learned a few things about testing over the years. He reviewed the key points as he thought about the 25%.

When do you test? Businesses are constantly growing, and testing is a key part of any business. But when do you test? You need a reason, a motive, or an initiative. Who decides when and why? In our organization we are testing all the time. If you become proficient at testing, it can be a competitive advantage. If you over-test, it can be a distraction and muddy the waters rather than clarify them. We generally test when the initiative can drive improvements.

What do you test? Anything can be tested once you define an objective. Without a clear objective, don’t get started. Objectives will be tied to metrics, but clarity around the objective is fundamental. Management steps in here to make sure the “what” is clear and quantifiable.

How do you test? Well, you need something to move the needle. It can be anything: a new tour, new banners or text to improve click-through, something to improve retention. Once you have the “what,” somebody or some team has to come up with the “how.” This can be individual work or brainstormed by a group. Again, it has to be concrete and clear.

What is the metric? Obvious, but not always so obvious. Classic metrics are conversion, sales, revenue, or retention. They could be click-through or bounce rate. Whatever it is, it has to tie back to the “what” and the “how” so that at the end of the test it gives you the information you need to make a decision.
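The classic metrics above are simple ratios. A minimal sketch, using made-up counts purely for illustration:

```python
# Hypothetical traffic numbers for one side of a test.
visitors = 10_000
clicks = 1_200   # visitors who clicked through
sales = 300      # visitors who purchased
bounces = 4_500  # visitors who left immediately

click_through_rate = clicks / visitors   # 0.12
conversion_rate = sales / visitors       # 0.03
bounce_rate = bounces / visitors         # 0.45

print(f"CTR: {click_through_rate:.1%}")        # CTR: 12.0%
print(f"Conversion: {conversion_rate:.1%}")    # Conversion: 3.0%
print(f"Bounce rate: {bounce_rate:.1%}")       # Bounce rate: 45.0%
```

Whichever metric you pick, measure it the same way on both sides of the test.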

Who does it? It could be an individual with the “when” and the “how,” a department, an analyst, or a team of analysts. We want to make sure the test is set up for success and has the least chance of failure. A failed test costs time and money. The “how” and the metric have to be clearly identified. The time frame to reach an outcome or statistical significance has to be monitored by an appropriate person or team with experience.

How long? Well, in a perfect world you want to test as fast as possible. The fewer resources and the less time spent getting to an answer, the better. Then you can make a decision, stop feeding a loser (or losers), and benefit from your test. The longer a test runs, the more chances there are for something to go wrong. When you want to test improvements to retention, you’re starting on a long journey, so you had better be prepared and your testing environment had better be airtight.
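One common way to decide whether a test has run long enough is a two-proportion z-test on conversion. A stdlib-only sketch with hypothetical traffic and sales counts (the function name and numbers are illustrative, not Vendo’s actual tooling):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Returns (z, p_value). A sketch, not a substitute for a
    proper experimentation platform.
    """
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: control vs. the no-warning-page variant.
z, p = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=375, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Decide only once the p-value is below a threshold you chose before the test started; peeking at the results and stopping early inflates false positives.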

What is the cost? Depending on what you’re testing, the cost could be feeding a loser over a winner. Or it could be more: a failed test due to corrupt data, or a failed setup where the distribution is not correct and one side of the test gets more traffic than the other.
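The uneven-traffic failure mode at the end of that paragraph can be caught automatically. A sketch of a sample-ratio-mismatch check, assuming a 50/50 split and hypothetical visitor counts (real platforms often use a chi-squared test instead):

```python
import math

def sample_ratio_mismatch(n_a, n_b, expected_ratio=0.5):
    """Flag a split whose observed traffic drifts too far from plan.

    One-sample z-test on the share of traffic sent to side A.
    """
    n = n_a + n_b
    observed = n_a / n
    se = math.sqrt(expected_ratio * (1 - expected_ratio) / n)
    z = (observed - expected_ratio) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical counts: 10,400 vs. 9,600 visitors on a planned 50/50 split.
z, p = sample_ratio_mismatch(n_a=10_400, n_b=9_600)
if p < 0.001:
    print("Traffic split looks broken; investigate before trusting results.")
```

A 52/48 split on 20,000 visitors is far outside normal random variation, so this check fires and the test results should not be trusted until the cause is found.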

Matt looked back at the 25%. He just didn’t buy it. He brought in Tim, who was leading the test, and told him to go back and double-check everything. What Tim found was an error in the setup. The warning pages have a join button that had not been accounted for. Once they accounted for those sales, they found that removing the warning page had no effect on overall sales.
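The accounting error can be illustrated with made-up numbers: if the control’s warning page itself produces sales that aren’t credited to the control side, a flat result can masquerade as a big lift.

```python
# Hypothetical numbers illustrating the setup error above.
control_tour_sales = 400
control_warning_page_sales = 100  # join-button sales missed in the analysis
variant_sales = 500               # warning page removed

apparent_lift = variant_sales / control_tour_sales - 1
true_lift = variant_sales / (control_tour_sales + control_warning_page_sales) - 1

print(f"Apparent lift: {apparent_lift:.0%}")  # Apparent lift: 25%
print(f"True lift:     {true_lift:.0%}")      # True lift:     0%
```

The lesson is about instrumentation, not statistics: every path to a sale on both sides of the test has to be counted before the metric means anything.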

Questions for discussion:

  1. What have you learned about testing?
  2. How can it be a competitive advantage for your organization?
  3. What are you planning on testing next? How will you do it?

