
27 January 2016

An Introduction to A/B Testing (Part 2) - Designing an email A/B Test

This is a continuation of our article An Introduction to A/B Testing (Part 1).

Designing an email A/B test

Email sent to prospects who have chosen to give your organization information is a unique medium in modern marketing in the following ways:
  • Your prospects have already expressed interest in your brand and have voluntarily shared information with you.
    • name
    • email address
    • phone number
    • product interest
  • Your systems also know something about the prospects
    • origin source/medium
    • keywords
    • previous website & media consumption
  • When you send email, your email automation system knows
    • if the email was received (potentially considered)
    • if (and when) the email was opened and by whom
    • if the CTA was clicked
    • the user on multiple devices (e.g. home computer, work computer, mobile phone, tablet, TV, …)
  • You do not have to pay to put the media in front of a prospect
  • The action rates (opens and clicks on CTAs) are much higher than other mediums
  • Prospects have trackable histories and futures
    • allowing you to do lead scoring
  • You can do follow-up experiments on prospects

Picking which email to test

Let’s start at the very beginning
A very good place to start
– “Do-Re-Mi” from “The Sound of Music”,
Rodgers & Hammerstein
When you are starting out, start at the beginning with the first email (the “thank you” email, which should be sent immediately after sign-up) and with the first scheduled campaign email (which is probably being sent when the prospect is not actively engaging with your content).
Early emails are where you win or lose the eyeballs of a fresh prospect with respect to future marketing. If the prospect does not find the emails useful, (s)he is less likely to open future emails. The improvements you make early in an email relationship can lift the results of successive emails and increase your sales/wins. (You can and should measure how each test group does on subsequent identical emails.)

Assigning prospects to groups

All of the email systems we have encountered assign identifiers to prospects.
An easy, recommended scheme for a 50/50 A/B test on new prospects is to assign even-numbered IDs to one group and odd-numbered IDs to the other group.
When you begin testing later emails, we recommend what are called “paired tests” in which you build several customer “segments” based on certain characteristics and then divide each segment into two groups (although the even/odd approach often works here, we tend to export the IDs in each segment, randomly assign groups and then import the group assignments). Paired tests reduce variance and often allow you to use smaller sample sizes to obtain statistically significant results.
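If your platform exports numeric prospect IDs, the even/odd split and the per-segment randomization described above can be sketched in a few lines of Python. This is a minimal sketch, not any particular email system's API; the segment names and ID ranges are assumptions for illustration.

```python
import random

def assign_even_odd(prospect_id: int) -> str:
    """Even-numbered IDs go to group A, odd-numbered IDs to group B."""
    return "A" if prospect_id % 2 == 0 else "B"

def assign_within_segments(segments: dict, seed: int = 42) -> dict:
    """Paired-test assignment: shuffle each segment's IDs and split the
    segment in half, so both groups get the same segment mix."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    assignment = {}
    for ids in segments.values():
        ids = list(ids)
        rng.shuffle(ids)
        half = len(ids) // 2
        for pid in ids[:half]:
            assignment[pid] = "A"
        for pid in ids[half:]:
            assignment[pid] = "B"
    return assignment
```

The resulting group labels can then be re-imported into the email system as a custom field on each prospect.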

Deciding what is a conversion

You have many choices: open, click, form completion, in-person visit, RPQ, price quote, sale, referral, …
Your email automation system is collecting opens and clicks on every email. If your system is website-integrated and things are properly set up, then you are probably also able to measure customer actions further down the funnel (form completions, quote requests, sales).
What you pick to use as a conversion depends on
  • business goals
  • the ability of your systems to automatically tie the conversion back to the email
  • conversion time
Our advice is to
  1. Measure as a conversion the logical next step after encountering the media (an open or a click)
  2. AVOID using anything that is too long-term as a conversion (you should still measure long-term performance, but it is important to move at “internet speed” — is there a milestone before the long-term event that can be used as an indicator?)
  3. Once you have decided what to test, measure (and report) the logical second step the prospect should take unprompted — if you drive up open rates but clicks on your CTA don’t increase, did you really improve?
  4. Measure and report the performance of the next email for each group — improving the open rate on a bad/low-utility email can do more harm than good — it is important to catch this early.

In the next article in this series, we will discuss "Measuring Results".

20 January 2016

Continuous improvement is better than delayed perfection.
– Samuel Clemens (a.k.a. Mark Twain)

Introduction

Sold/won.
Topline.
Revenue.
Your data tells a story. Listen. Sell more.™
The revenue opportunity with the best ROI for any organization is increasing the win rate of existing leads/prospects.
A/B testing is where you start. It is low cost and relatively quick.
The communication your organization shares with prospects has the goal of increasing conversions.
A/B testing helps you determine how to improve individual pieces of communication media to increase conversions and achieve your organization’s objectives.

What is A/B testing?

A/B testing is comparing two versions of a piece of media (an email, a web page, a digital advertisement, etc.) to see which one performs better.

The performance of the two variants is compared by showing them to similar users at the same time.
The variant with the statistically significantly better conversion rate wins.
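Whether a difference in conversion rates is statistically significant can be checked with a standard two-proportion z-test. Below is a stdlib-only Python sketch; the conversion counts in the usage line are made-up numbers for illustration.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    conv_* = number of conversions, n_* = number of users shown each variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided p-value
    return z, p_value

# Example: 120/1000 conversions for A vs. 150/1000 for B (illustrative numbers)
z, p = two_proportion_z_test(120, 1000, 150, 1000)
```

If `p` is below your chosen significance level (0.05 is the common convention), declare a winner; otherwise keep collecting data.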
A/B testing examples

Type | A | B
button color | (image) | (image)
buttons vs. links | “Visit our site” (button) | “Visit our site” (link)
graphical vs. text element | (image) | “visit us” (text)
subject line | Three-day Sale | 25% off this weekend
sender | From: Sales | From: Bob
personalization | Dear Customer | Dear Sue
menu order | Home, Buy Now, Contact Us | Home, Contact Us, Buy Now
It is best to use caution when generalizing the results of one A/B test to all of your emails. All the result tells you is that for this particular scenario one option outperforms the other. When extending the learning to prescribe changes in other emails, A/B tests should be conducted on those emails before the change is made globally.

Why should my organization do A/B testing?

Try this – you’ll be amazed
Construct a table like the following for your first three emails.
Email # | Opened? | Win Rate
1 | Yes | 25%
1 | No | 10%
2 | Yes | 30%
2 | No | 8%
3 | Yes | 32%
3 | No | 5%

Enter your values below:

Email # | Opened? | Win Rate
1 | Yes | ____%
1 | No | ____%
2 | Yes | ____%
2 | No | ____%
3 | Yes | ____%
3 | No | ____%
What you will discover is that the customers who open your email are more likely to convert to a win.
You can sell more simply by selling smarter.
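If you can export raw prospect records, the table above can be computed directly. Here is a minimal Python sketch; the `(email_number, opened, won)` record layout is an assumption about your export, not a standard format.

```python
from collections import defaultdict

def win_rates(records):
    """records: iterable of (email_number, opened, won) tuples, one per
    prospect per email. Returns the win rate for each (email, opened) cell."""
    counts = defaultdict(lambda: [0, 0])   # (email, opened) -> [wins, total]
    for email, opened, won in records:
        cell = counts[(email, opened)]
        cell[0] += int(won)
        cell[1] += 1
    return {key: wins / total for key, (wins, total) in counts.items()}
```

Calling `win_rates` on your export fills in the table: one rate per email number and open status.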

Cost & Time

A/B testing a particular piece of media is free (you’re already paying for the email marketing automation system, website, etc.).
A/B testing a single piece of media generally takes little time to set up and start.
The other things you can do to increase sales — Price (adjustment/discount), Promotion (advertising), People (hiring & training), Place (new location(s)/relocation(s)), etc. — tend to have meaningful OPEX/CAPEX costs. They also can take meaningful time to implement.
Low resource utilization & quick vs. high resource utilization & slow — if you want more conversions, A/B testing is the logical place to start.

Segmentation & Lead Scoring

How customers interact with a piece of optimized media lets you group them into segments, enabling more use-case/persona-specific messaging.
With an ensemble of responses to optimized media, you can group prospects into clusters, which are very valuable for lead scoring.
With reliable lead scoring, your sales teams will be able to prioritize higher-performing clusters. Your revenue will increase without additional investment — an ROI win!

What can be tested?

Email

  • Subject lines
  • Calls to action (CTA)
  • CTA colors
  • Cadence (the timing between emails)
  • Omission (whether a particular email in the sequence matters)
  • Sending day of week
  • Sending time of day
  • Deliverability

Web

  • Landing pages
  • Sign-up pages
  • Menus
  • CTAs
  • Site navigation elements
  • Responsive design layouts

Display advertising

  • Flash vs. animated GIFs
  • CTAs
  • Destination landing page
  • Dayparts
  • Behavioral/demographic targets

Search engine marketing

  • Headlines
  • Display URLs
  • Descriptions
  • Extensions
  • Dayparts

The next part of this series, "Designing an A/B Test", will be published in the next few days.

21 December 2015

Marginal Contribution & Shapley Values

Since our article Better Attribution: Using Clickstream Data and Shapley Analysis to Get More Accurate CPA & ROAS, there have been some questions asked about marginal contribution and Shapley values. 


Marginal Contribution

A straightforward way to understand marginal contribution is to consider the problem of how to allocate the cost of building a new runway between four aircraft that need different runway lengths.

Aircraft | Runway Required
A | 8
B | 11
C | 13
D | 18

[Figure: Runway Needs]
Aircraft D is the only one that needs the last 5 runway units.

Aircraft C and D are the only ones that need the penultimate 2 units.

B, C and D need 3 common units.

All four need 8 units.

One way to allocate cost is to take the marginal cost (MC) for each segment and divide it by the number of beneficiaries.

Benefitting aircraft | A+B+C+D | B+C+D | C+D | D
MC | 8 | 3 | 2 | 5
# Aircraft benefitting | 4 | 3 | 2 | 1
Cost per aircraft | 2 | 1 | 1 | 5

Thus, the cost/value allocated to each aircraft is as follows:


Cost to A | 2
Cost to B | 2 + 1
Cost to C | 2 + 1 + 1
Cost to D | 2 + 1 + 1 + 5

The total of each row in the runway problem is an equitable way to assign runway cost to each aircraft.

Aircraft | Components | Assigned Cost
A | 2 | 2
B | 2 + 1 | 3
C | 2 + 1 + 1 | 4
D | 2 + 1 + 1 + 5 | 9
Total | | 18
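The incremental allocation above generalizes to any set of runway requirements: sort the required lengths and split each marginal segment equally among the aircraft that use it. A small Python sketch of the same computation:

```python
# Incremental cost allocation for the shared runway.
runway = {"A": 8, "B": 11, "C": 13, "D": 18}   # required runway units

allocation = {aircraft: 0.0 for aircraft in runway}
prev = 0
for length in sorted(set(runway.values())):
    # Every aircraft that needs at least this length uses this segment.
    users = [a for a, need in runway.items() if need >= length]
    share = (length - prev) / len(users)       # marginal cost per beneficiary
    for a in users:
        allocation[a] += share
    prev = length

# allocation is {"A": 2.0, "B": 3.0, "C": 4.0, "D": 9.0}, summing to 18
```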

Shapley Value

Shapley values are essentially averages of the cost/benefit for each participant. They are normally used in scenarios where the different players can participate in different orders.

Thus in a 4-player scenario, the following permutations need to be considered when calculating the Shapley value:

ABCD
ABDC
ACBD
ACDB
ADBC
ADCB
BACD
BADC
BCAD
BCDA
BDAC
BDCA
CABD
CADB
CBAD
CBDA
CDAB
CDBA
DABC
DACB
DBAC
DBCA
DCAB
DCBA

In the runway scenario, rearranging the aircraft makes no sense.

The Glove Game

Order makes sense in many other cases, however.

Consider the following scenario where order matters:

You are searching for a pair of gloves in a box with 1 left glove and 2 right gloves. 



When you have a pair of gloves, the game is won.


If you want to figure out the value of each glove in the outcome, order matters.

Consider all six orders in which the gloves can be drawn:

Glove 1 | Glove 2 | Glove 3 | Win Credit
L | R1 | R2 | R1
L | R2 | R1 | R2
R1 | L | R2 | L
R1 | R2 | L | L
R2 | L | R1 | L
R2 | R1 | L | L

The Shapley value of the Left glove is ⅔ (it gets credit for 4 out of the 6 wins), the Shapley value of each Right glove is ⅙ (each gets credit for 1 out of 6 wins).
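The glove game is small enough to enumerate directly. Here is a Python sketch that reproduces the win-credit table above and the ⅔/⅙/⅙ Shapley values; crediting the glove whose draw completes the pair is the rule from the table.

```python
from itertools import permutations

gloves = ["L", "R1", "R2"]

def win_credit(order):
    """Credit the win to the glove whose draw first completes an L+R pair."""
    seen = []
    for glove in order:
        seen.append(glove)
        if "L" in seen and any(g.startswith("R") for g in seen):
            return glove
    return None

credits = [win_credit(order) for order in permutations(gloves)]
# Shapley value = share of the orderings in which each glove gets the credit
shapley = {g: credits.count(g) / len(credits) for g in gloves}
# shapley["L"] == 2/3; shapley["R1"] == shapley["R2"] == 1/6
```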

Application to Online Advertising

The application to online advertising is straightforward.

Build a Table of Media Permutations

First, build a table with all of the orderings.

If a particular media type contributes more than once, treat each contribution as a different media type for the purpose of building the table. You can tell whether a media type contributes more than once by looking at the source of the first pageview of each session and then building a table of all of the media types that brought a particular user to your site.

Media 1 | Media 2 | Media 3
L | R1 | R2
L | R2 | R1
R1 | L | R2
R1 | R2 | L
R2 | L | R1
R2 | R1 | L

Outcome Table

Next, figure out the outcome after each step, entering a "1" for a win. 

Order | Outcome @ 1 | Outcome @ 2 | Outcome @ 3
L, R1, R2 | 0 | 1 | 1
L, R2, R1 | 0 | 1 | 1
R1, L, R2 | 0 | 1 | 1
R1, R2, L | 0 | 0 | 1
R2, L, R1 | 0 | 1 | 1
R2, R1, L | 0 | 0 | 1

[In advertising, there will normally be wins in column #1 and the number of wins will generally increase from left to right; since we are extending the glove game example, column #1 always has zero wins and the number of wins doesn't change in column #3 if there was a win in column #2. Please see our previous article for a more realistic table.]

Marginal Contribution

Next, for each column after the first, subtract the value of the preceding column from the current column (for the first column, simply carry the value over).

Order | MC @ 1 | MC @ 2 | MC @ 3
L, R1, R2 | 0 | 1 | 0
L, R2, R1 | 0 | 1 | 0
R1, L, R2 | 0 | 1 | 0
R1, R2, L | 0 | 0 | 1
R2, L, R1 | 0 | 1 | 0
R2, R1, L | 0 | 0 | 1

If you are using Google Sheets, the formula for one of these subtraction columns looks like this: =ARRAYFORMULA(G14:G19-F14:F19)


Figuring Out the Value of Each Media Player in Each Row

Next, for each row, grab the value associated with each media type.
L | R1 | R2
0 | 1 | 0
0 | 0 | 1
1 | 0 | 0
1 | 0 | 0
1 | 0 | 0
1 | 0 | 0

If you are using Google Sheets, the formula for doing this looks like this: =SUMPRODUCT(($A14:$C14=L$13)*($I14:$K14))

Shapley Value

Finally, sum each column and divide by the number of rows -- this is the Shapley value.

L | R1 | R2
0 | 1 | 0
0 | 0 | 1
1 | 0 | 0
1 | 0 | 0
1 | 0 | 0
1 | 0 | 0
0.67 | 0.17 | 0.17

If you are using Google Sheets, the formula for doing this looks like this: =SUM(L14:L19)/ROWS(L14:L19)
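The full pipeline above — orderings, outcomes, marginal contributions, Shapley values — can also be expressed generically in code. The sketch below uses the glove game's win condition as a hypothetical stand-in; when applying this to media orderings, `is_win` would be replaced by your own conversion logic.

```python
from itertools import permutations

def is_win(coalition):
    """Glove-game win condition (a stand-in for real conversion logic):
    a win once the coalition contains a left and a right glove."""
    return "L" in coalition and any(m.startswith("R") for m in coalition)

media = ["L", "R1", "R2"]
orders = list(permutations(media))

marginal = {m: 0.0 for m in media}
for order in orders:
    prev = 0
    for i, m in enumerate(order):
        # Outcome after this step, minus the outcome before it,
        # is this media type's marginal contribution in this ordering.
        outcome = 1 if is_win(order[: i + 1]) else 0
        marginal[m] += outcome - prev
        prev = outcome

# Average the marginal contributions over all orderings.
shapley = {m: marginal[m] / len(orders) for m in media}
```

This reproduces the 0.67/0.17/0.17 column averages in the final table.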

Conclusion

Building a table of each ordering of media that contributes to a website conversion and then computing the marginal contribution at each step allows you to calculate the Shapley value of each media type.

Normalizing the Shapley values allows you to assign win/conversion percentages to each media type. 

As shown in our previous article, doing this allows you to more accurately assign cost and value to your online efforts and will allow you to make better decisions about your marketing investments.