Tackling Low Completion Rates—A Compare.com Conundrum (C)
After his initial meeting with Amy Law on his first day at Compare.com (Compare), Kyle Brodie knew he
needed to make a few quick decisions on how to best execute the estimates test. He was especially excited to
tackle this project within Compare’s deep test-and-learn culture, and he started to consider the best way to build
an A/B test within the customer journey.
Test Parameters
First, Brodie needed to pick a few states in which to conduct the estimates test. Compare ran tests on a
state-by-state basis to prevent any overlap of simultaneous sprints. Running one Compare test at a time in any
given state ensured that the results could be isolated to that specific test. Additionally, each test was usually
conducted in multiple states to better demonstrate how the proposed changes would be received nationally
since each state had a different number of insurance carriers quoting on its panel. If a test worked in multiple
test states, the proposed change would then be developed and deployed nationally.
The second consideration was the duration each test should run. While there was not a rule of thumb on
test duration at Compare, the management team generally wanted enough customers to participate for test
results to be considered statistically significant.
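While the case does not specify how the team sized the test, a back-of-the-envelope calculation for a two-sided, two-proportion test illustrates what "enough customers" means here; the 12% baseline completion rate and two-point lift below are assumptions for illustration, not Compare figures.

```python
# Sketch: sample size per group for a two-sided, two-proportion z-test.
# The 12% baseline completion rate and 2-point lift are assumed inputs.
from scipy.stats import norm

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in each group to detect a change from p1 to p2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

print(round(sample_size_per_group(0.12, 0.14)))  # ~4,400 visitors per group
```

Under these assumed inputs, each test group would need several thousand visitors, which is why states with sufficient traffic mattered.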
Given these considerations, Brodie and Law agreed that Brodie’s test should be run in Georgia, Texas, and
Virginia. These states were selected because each one had sufficient customer traffic to generate enough data
points in just a few weeks, and they represented a good mix of state profiles. Additionally, these states had a
good balance of both total insurance carriers on the panel and average rates returned per quote. (See Table 1
for a summary of the test states chosen.)
Test Design
After setting these key parameters, Brodie wondered how to instruct the development team to design and
build the estimates test to ensure it delivered the clearest results possible. He spent the next week meeting with
the designers and developers to determine the optimal test. In the end, they decided to build something simple
and straightforward that they hoped would show clearly whether estimates lifted the site's completion rate.
Table 1. Carrier profiles of states used in estimates test (June 2016).
State       Carriers on Panel    Avg. Rates Returned
Georgia             9                     3
Texas              18                     7
Virginia           12                     4
Source: Unless otherwise noted, all data, figures, and exhibits are company
documents, used with permission.
The resulting test design (see Figure 1) included a banner graphic above the question form on the vehicle
page that featured either the lowest- or average-quoted premium amount for the three test states. For customers
with IP addresses in Virginia, for example, this banner graphic displayed the phrase “Drivers in Virginia were
quoted as low as $51 per month for state-minimum coverage!” where $51 represented either the average or
minimum quote from that state over the past six months.
Figure 1. Estimate banner displayed on vehicle page.
[Screenshot of the vehicle page with the banner display ad added above the question form.]
Several key decisions had been made regarding the design features of the test that yielded these graphics.
First, the group had decided to test both the lowest-quoted premium and the average-quoted premium amounts
on the test graphics. (Brodie was able to find the lowest-quoted premium and calculated the average-quoted
premium using the previous six months of data in Compare’s data warehouse for each state.) The decision to
test both premium amounts came as a result of lively debate within the management team. Some felt strongly
that Compare’s value proposition had always been the accuracy of its quotes, meaning that offering the average-quoted premium in each state would provide a more accurate, representative estimate of the monthly premium
quotes consumers would actually see. Others felt that consumers would only be concerned with the accuracy
of the estimate in relation to the lowest-quoted premium they were shown on their resulting quote page. In
other words, if customers in the Virginia test were shown an estimate of $51 per month before they started the
questionnaire, and their lowest-quoted premium ended up actually being $70, they would not feel a sense of
disappointment or false promise in the accuracy of their estimate. This group felt that utilizing the average-quoted premium amount would not incentivize customers to complete the form the way the lowest estimate
would. Both groups had valid arguments, so the decision was made to test both versions in order to see if one
had a greater effect on customer behavior than the other.
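The lowest- and average-quoted figures described above could be produced with a simple aggregation over six months of quote history; the sketch below is hypothetical, and the table name, columns, and sample rows are assumptions rather than Compare's actual warehouse schema.

```python
# Hypothetical aggregation over six months of quote history; `quotes`, its
# columns, and the sample rows are assumptions, not Compare's actual schema.
import pandas as pd

quotes = pd.DataFrame({
    "state": ["VA", "VA", "GA", "TX"],
    "monthly_premium": [51.0, 88.0, 64.0, 47.0],
    "quoted_at": pd.to_datetime(["2016-01-05", "2016-03-12", "2016-02-20", "2016-04-01"]),
})

cutoff = pd.Timestamp("2016-06-01") - pd.DateOffset(months=6)
recent = quotes[quotes["quoted_at"] >= cutoff]

banner_figures = recent.groupby("state")["monthly_premium"].agg(
    lowest_quoted="min", average_quoted="mean")
print(banner_figures)
```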
Second, given the wide range of driver profiles and insurance plans offered, the group decided to focus on
quoted premiums for state-minimum coverage for a specific “good” driver profile. The profile of a good driver
was someone who was currently insured, had no driving incidents (accidents or tickets), had a current, valid US
driver’s license, and had been licensed for at least five years. These categories were selected as they were some
of the most important factors that drove insurance-premium pricing, and including profiles of drivers with
accidents would have unfairly driven the estimates up for safe drivers.
Additionally, the group decided to place the test banner on the vehicle page of the questionnaire, because
it was the page that every customer had to complete at the start of the customer journey. Furthermore, the
customer drop-off data revealed that most people who did not complete the quote process dropped off after
completing the vehicle page. Since one of the goals of the test was to reduce the drop-off rate, the
group decided this page made the most sense for the test location.
Finally, in terms of design, the group had decided to use a banner graphic placed toward the top of the
screen to ensure that customers in the test saw the graphic before beginning the questionnaire. Based on internal
consumer research done by the company’s marketing department, Brodie knew that visitors to Compare’s
vehicle page generally looked toward the top of the page when they first arrived. Placing the test graphic directly
under the navigation banner would provide the best opportunity for customers to view the graphic versus any
other place on the web page (e.g., a left or right side panel). In addition to highlighting the graphic in yellow,
the team set the quoted monthly premium amount in bold text to further draw customers’ attention.
Test Execution
In implementing the test, the product team used an equally weighted, random assignment for each customer
to one of the test groups. This implementation technique was easier to use than screening for specific
characteristics of each consumer prior to assigning them to a test group. At the conclusion of the test, a check
was done of the test sample groups to confirm that each group had roughly the same demographic makeup.
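A minimal sketch of what equally weighted assignment and the post-test balance check might look like follows; the group names, hash-based bucketing, and age-band counts are illustrative assumptions rather than the team's actual implementation.

```python
# Illustrative assignment and balance check; group names, hashing scheme,
# and the age-band counts are assumptions, not the team's actual code.
import hashlib
from scipy.stats import chi2_contingency

GROUPS = ["control", "lowest_estimate", "average_estimate"]

def assign_group(visitor_id: str) -> str:
    """Map each visitor to one test group with equal weight (and stickiness)."""
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % len(GROUPS)
    return GROUPS[bucket]

# Post-test check: visitor counts by age band (columns) within each group (rows).
observed = [
    [412, 388, 205],   # control
    [420, 379, 198],   # lowest_estimate
    [405, 391, 210],   # average_estimate
]
chi2, p_value, _, _ = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")  # a large p-value suggests balanced groups
```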
With the results in hand, Brodie had to decide what it all meant. (See Figure 2 below for a summary of the
test results.) What could he infer about customer preferences and behaviors on Compare’s site when presented
with estimated quotes? What next steps should he advise the company to take in order to improve the customer
journey?
Figure 2. Summary of test results.
Quote Complete: Refers to customers who answered all questions on the Compare questionnaire required to lead
them to a final summary of available quotes on the Compare platform.
Click-Rate: Refers to customers who not only completed the questionnaires (as above), but who also clicked on a
specific quote, leading them to the insurer’s purchase page.
Source: Compare.com.
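One way a reader might compare completion rates across test groups is a two-proportion z-test; the visitor and completion counts below are placeholders for illustration, not the results shown in Figure 2.

```python
# Placeholder counts, not the case's actual results (those appear in Figure 2).
from statsmodels.stats.proportion import proportions_ztest

completions = [310, 365]    # quote completes: [control, estimate banner variant]
visitors = [2500, 2480]     # visitors reaching the vehicle page in each group

z_stat, p_value = proportions_ztest(count=completions, nobs=visitors)
print(f"z={z_stat:.2f}, p={p_value:.4f}")
# A small p-value would indicate the banner's completion-rate lift is unlikely
# to be due to chance alone.
```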