Press    How Do MTA and DOE Work Together?

Original Publisher

MTA (multi-touch attribution) and DoE (design of experiments) are complementary because incrementality testing addresses many of the data and tracking gaps that currently impose severe limitations on MTA’s ability to measure marketing contribution across all addressable marketing channels. Currently, MTA has a major data gap in the so-called walled gardens (Facebook, AdWords, Instagram, Pinterest, YouTube, etc.), in which no customer-level data gathering is permitted. MTA has no answers for these channels and no clear avenues for improvement short of a 180-degree reversal on data sharing by the likes of Facebook (don’t hold your breath).

Even in trackable addressable media channels, pixel-related data loss can be severe, ranging from 5% in paid search to as much as 80% in channels like online video. While cookie-level data tracking has lower rates of data loss, its ongoing viability is in question after Google announced it would discontinue sharing the Google User IDs that this approach relies upon, beginning in Q1 2020.

DoE can both fill the gaps created by the so-called “walled garden” media channels and validate and inform media channels suffering from pixel-related data loss. As the market continues to evolve, and legislation addressing privacy concerns like GDPR proliferates, MTA measurement unsupported by DoE will likely become obsolete.

 

DoE can fill the gaps created by “walled gardens” as well as validate and inform media channels suffering from pixel-related data loss.


Press    What is A/B Testing for Media?


A/B testing is a specialized type of incrementality measurement. In A/B testing, randomized groups are each shown a variant of a single variable (web design, landing page, marketing creative, etc.) in order to determine which variant is more effective.

Incrementality measurements use A/B testing in certain media or general marketing channels, such as prospecting, where tracking a media exposure for both the test and control groups is required. In the case of media incrementality, the A group (test group) is shown business-as-usual media exposure while the B group (control group) has exposure withheld or is shown a null media exposure, typically a public service announcement (PSA) for a charity of the marketer’s choice. The more generic form of A/B testing is called Design of Experiments (DoE).
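The lift readout described above can be sketched in a few lines of Python; this is a hypothetical illustration with made-up counts, not any vendor’s actual implementation:

```python
# Minimal sketch: reading out incremental lift from a media A/B test.
# Group A (test) saw business-as-usual media; group B (control) saw a PSA.
def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Return (incremental conversion rate, lift as a fraction of the test rate)."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    incremental_rate = test_rate - control_rate
    return incremental_rate, incremental_rate / test_rate

# Example with made-up numbers: 100k users per group.
inc_rate, lift = incremental_lift(1200, 100_000, 950, 100_000)
print(f"incremental rate: {inc_rate:.4%}, lift of test rate: {lift:.1%}")
```

Here the test group converted at 1.2% and the PSA control at 0.95%, so 0.25 points of conversion, about a fifth of the test group’s rate, is truly incremental to the media.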

Easily Run A/B Tests with Measured

We have worked hard to plug in directly to 100+ media platforms and their APIs. Because of this, Measured provides incrementality measurement and testing with ease and speed. We can run hundreds of audience-level experiments with a quick one-time setup with your publishers. Find out media’s true contribution across all your addressable and non-addressable channels.
Learn more here

 

In marketing, A/B testing is a specialized type of incrementality measurement and is very effective when measuring the marginal lift of a media exposure.


Press    Can I Operationalize Media Channel Experimentation in Steady State?


In short, yes. Experimentation relies on steady-state operation of your media channels: the vast majority of your customer base, receiving business-as-usual media exposure, will comprise the all-important test group in the experimental design. Good experimentation carefully selects a small but representative subset of your customers and withholds media exposure at the channel level; this subset serves as the control group, which bears out the true incremental sales driven by channel-level media exposure.
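One common way to carve out such a holdout (a sketch under assumed customer IDs, not Measured’s actual method) is deterministic hash-based assignment, which keeps every customer in the same group for the life of the experiment:

```python
import hashlib

def assign_group(customer_id: str, holdout_pct: int = 5) -> str:
    """Deterministically assign a customer to the 'holdout' (control, media
    withheld) or 'bau' (business-as-usual test) group by hashing their ID."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return "holdout" if bucket < holdout_pct else "bau"

# Roughly 5% of customers land in the holdout, and assignment is stable:
groups = [assign_group(f"cust-{i}") for i in range(10_000)]
```

Because the assignment depends only on the customer ID, the same customer is never accidentally moved between test and control mid-flight, which would contaminate the read.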

 

Operationalize Media Channel Experimentation with Measured

We have worked hard to plug in directly to 100+ media platforms and their APIs. Because of this, Measured provides incrementality measurement and testing with ease and speed. We can run hundreds of audience-level experiments with a quick one-time setup with your publishers. Find out media’s true contribution across all your addressable and non-addressable channels.

Learn more here

 

A clean experimental read requires withholding media exposure from a small but carefully selected, representative subset of your customers.


Press    What Is a Design of Experiments (DOE) with Respect to Marketing?


Design of Experiments (DoE) in marketing is a systematic method for designing experiments that measure the impact of marketing campaigns. DoE ensures that variables are properly controlled, that the lift measurement at the end of the experiment is properly assessed, and that sample size requirements are properly estimated.

DoEs can be designed for most major types of media, like Facebook, Search, TV, display, etc.

How do you design an experiment for marketing campaigns?

Experiments are usually designed to understand the impact of a marketing campaign on desired marketing objectives. The simplest design for measuring a marketing stimulus like a TV campaign or a Facebook campaign is a 2-cell experiment, where the marketing campaign is shown to one group of users and withheld from another. The response behaviors of the two user groups are then observed over a period of time, and the impact of the marketing campaign is assessed as the difference in response rates between the two groups.
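Under that design, the readout reduces to a difference in response rates; the following Python sketch (illustrative numbers, standard pooled two-proportion z-statistic, not a vendor-specific formula) shows the basic arithmetic:

```python
import math

def two_cell_readout(conv_test, n_test, conv_control, n_control):
    """Difference in response rates between test and control cells, plus a
    pooled two-proportion z-statistic as a rough significance check."""
    p_t, p_c = conv_test / n_test, conv_control / n_control
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control))
    return p_t - p_c, (p_t - p_c) / se

diff, z = two_cell_readout(1200, 100_000, 950, 100_000)
# A z-statistic above ~1.96 suggests the measured lift is unlikely
# to be noise at the conventional 95% confidence level.
```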

The science of experimental design applied to marketing is about carefully selecting and controlling the variables that affect outcomes, designing the approach for sample size sufficiency, and tailoring the overall design to have enough power to read the phenomenon being observed.
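For example, sample size sufficiency for a 2-cell test can be estimated with the standard two-proportion power formula; this is a textbook approximation, not a Measured-specific method:

```python
import math

def sample_size_per_cell(p_control, relative_lift):
    """Approximate users needed per cell to detect a relative lift over the
    control response rate at alpha=0.05 (two-sided) with 80% power."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal quantiles for those settings
    p1, p2 = p_control, p_control * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a 10% relative lift on a 1% base conversion rate takes
# roughly 160k+ users per cell -- one reason underpowered tests fail to read.
n = sample_size_per_cell(0.01, 0.10)
```

The low base rates typical of marketing conversions are exactly why power and sample size have to be designed up front rather than checked after the fact.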

What are factors in experimental design for marketing campaigns?

The factors to be controlled depend on the phenomenon being measured, but in general the factors that play a critical role in marketing and are candidates to be controlled include: marketing spend, campaign reach, impression frequency, audience quality, audience type, conversion rates, seasonality, collinearity, and interaction effects.

Each marketing channel, like Facebook, TV, or Google Search, has its own unique campaign management levers to control audience reach, spend, frequency, etc. The challenge in designing proper experiments is to apply experimental design principles to the specific channels and to how marketers typically operate them.

Basic Principles of Experimental Design in marketing measurement

Learning objectives: The first and foremost task is to identify objectives that are meaningful to measure. Typically, these are sales and other business outcomes that marketing campaigns are looking to drive.

Audiences and Platforms: Each marketing platform, like Facebook and Google, has very specific ways to activate audiences and market to them. Experiments have to be designed around these campaign-specific levers to control the factors relevant to the marketing experiments.

Decisions: Marketers make specific decisions around campaigns, like campaign budgets, campaign bids, creative choices, audience choices, etc. Experiments have to be designed to inform these specific choices at a level of granularity that is meaningful for marketers.

How does experimental design differ from A/B testing?
Design of Experiments is a formal method for designing tests, and an A/B test is the simplest form of a two-cell experiment. Industrial-scale experiments are typically multivariate in nature, with two or more cells, and are carefully designed to control for various factors, so that flighting the experiment and collecting data in very specific ways yields a clean, usable read.


Design of Experiments (DoE) Examples

Many marketing platforms enable experimentation, either deliberately or incidentally. In platforms like Facebook it is possible to select audiences in randomized ways but target them differentially. This enables marketers to design experiments and test audiences against different marketing treatments. Similar approaches are taken in tactics like site retargeting, where audiences are split into segments and the segments are given differential treatments, such as retargeting some segments while holding out others, and response behavior is observed over a period of time.
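Observing response by treatment segment over the window is then a simple aggregation; a minimal sketch (segment names and events here are made up for illustration):

```python
from collections import defaultdict

def response_by_segment(events):
    """events: iterable of (segment, converted) pairs collected over the
    observation window. Returns the response rate per segment."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [conversions, users]
    for segment, converted in events:
        counts[segment][0] += int(converted)
        counts[segment][1] += 1
    return {seg: conv / n for seg, (conv, n) in counts.items()}

# Toy event log: retargeted users vs. a retargeting holdout.
rates = response_by_segment([
    ("retargeted", True), ("retargeted", False), ("retargeted", True),
    ("holdout", False), ("holdout", True), ("holdout", False),
])
```

Comparing the per-segment rates is what reveals whether retargeting drove incremental response or merely claimed conversions that would have happened anyway.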

How Do MTA Attribution and DoE Experiments Work Together?

MTA and DoE are complementary because incrementality testing addresses many of the data and data tracking gaps that currently serve as severe limitations to MTA’s ability to measure marketing contribution across all addressable marketing channels.

Currently, MTA has a major data gap in the so-called walled gardens (Facebook, AdWords, Instagram, Pinterest, YouTube, etc.), in which no customer-level data gathering is permitted. MTA has no answers for these channels and no clear avenues for improvement short of a 180-degree reversal on data sharing by the likes of Facebook (don’t hold your breath). Even in trackable addressable media channels, pixel-related data loss can be severe, ranging from 5% in paid search to as much as 80% in channels like online video. While cookie-level data tracking has lower rates of data loss, its ongoing viability is in question after Google announced it would discontinue sharing the Google User IDs that this approach relies upon, beginning in Q1 2020. DoE can both fill the gaps created by the so-called “walled garden” media channels and validate and inform media channels suffering from pixel-related data loss. As the market continues to evolve, and legislation addressing privacy concerns like GDPR and CCPA proliferates, MTA measurement unsupported by DoE will likely become obsolete.

 

Experiments must be designed to answer the specific questions marketers have about their paid media, at a level of granularity that is meaningful.
