Marketers Underuse Ad Experiments. That’s a Big Mistake.
Recently, I gave a talk to 30 senior digital growth managers on how to use business experimentation effectively. I started the session with a brief survey: Who had run experiments with their website and app — for example, testing different layouts, colors, designs, or onboarding experiences? Close to 90% of hands rose in response. Then I asked who had run experiments with their digital advertising, such as evaluating different audience targeting, frequency, or optimization regimes for their campaigns. Only about a third of those same hands went up.
This result did not surprise me; rather, it confirmed what I had seen during a year-long investigation of firm behavior on Facebook’s advertising platform: While business experimentation is — rightfully — framed as a gold standard by scholars and leading practitioners, the practice has yet to find its way into most firms’ day-to-day advertising strategy. Many firms are used to non-experimental approaches to advertising measurement, such as marketing mix models, and hesitate to adopt experimentation-based measurement in part because they overestimate its complexity. In reality, most digital advertising platforms offer “experimentation-as-a-service” tools, and while some of the testing parameters are more advanced than what has long been common in advertising, a digital team can help sort through optimization strategies, frequency regimes, and audience targeting options. Many firms simply start by testing their current digital approach against a control population, called a “holdout group,” that receives no messaging; this lets them gauge how their existing ads perform relative to no advertising at all. More sophisticated experimentation can then follow.
To quantify companies’ use of experiments, my colleagues and I at Facebook Marketing Science Research conducted an observational survey of leading firms’ use of randomized controlled trials (RCTs) to gauge the impact of a given ad campaign on various business outcomes relative to a control. Though we suspected that only a minority of firms used the practice, we were surprised by just how few: Only 12.6% of the 6,777 companies we looked at had conducted a recent RCT (see our academic paper).
As you might expect, companies’ use of experimentation varies by industry. Factors such as the pace of consumer consideration and purchase cycles or (as discussed below) the presence of organizational-culture barriers can influence adoption. At the top of the scale, somewhat more than 20% of e-commerce, telecommunications, and retail advertisers conducted experiments; however, only 6.7% of consumer packaged goods companies and 4.2% of automotive businesses did so. Firms that do experiment tend to run more than one experiment in a year, averaging 15 experiments per firm in e-commerce and close to 50 in the travel sector, and typically invest about 10% of their overall advertising budget in experimentally measured campaigns. This investment seems to pay off: E-commerce companies that conduct ad experiments see 2% to 3% better performance per experiment run (as measured by purchases achieved per advertising dollar spent). In our sample, an advertiser that ran 15 experiments (versus none) in a given year saw about 30% higher ad performance that year; those that ran 15 experiments in the prior year saw about a 45% increase in performance, highlighting the positive longer-term impact of this strategy.
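As a rough plausibility check on those figures, the sketch below compounds a fixed per-experiment gain over 15 experiments. The multiplicative-compounding assumption is mine, made purely for illustration; the study does not specify how per-experiment gains aggregate.

```python
# Rough plausibility check: compound a fixed per-experiment performance
# gain over 15 experiments. The multiplicative model is an illustrative
# assumption, not the study's methodology.

def compounded_gain(per_experiment_gain: float, n_experiments: int) -> float:
    """Total relative gain if each experiment adds a fixed multiplicative lift."""
    return (1 + per_experiment_gain) ** n_experiments - 1

for gain in (0.02, 0.03):
    print(f"{gain:.0%} per experiment x 15 -> {compounded_gain(gain, 15):.0%} total")
# Prints:
# 2% per experiment x 15 -> 35% total
# 3% per experiment x 15 -> 56% total
```

Under this simple model, per-experiment gains of roughly 2% are in the same ballpark as the 30% to 45% annual differences reported above.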
Given the powerful impact of ad experiments, why are they so underused? Through conversations with internal and external domain experts and scholars, I’ve identified several common organizational obstacles that may account for this.
Organizational inertia: The “Mad Men” days of advertising are (largely) over, but creativity continues to be an essential ingredient for lasting success. You may remember Don Draper’s tantrums when he faced research that challenged his preconceptions. Because marketing creatives can be skeptical about data-analytic approaches, empiricism and data-driven decision making need to be carefully introduced to — and ultimately combined with — creative and intuitive advertising operations. This process takes time.
Holdout aversion: Randomized controlled trials require a control group; as noted, in ad experimentation the control is often a holdout group that is not exposed to any advertising. This of course lowers the number of potential customers the campaign will reach, which does not sit well with many marketers, who may feel that the cost of excluding potential customers from a campaign outweighs the benefits of experimentation. But as noted, our study shows that advertisers who run experiments perform better than those who do not. While we cannot be certain that experimentation caused this outperformance, it is highly unlikely that using holdouts as a control is a net negative.
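To make the mechanics concrete, here is a minimal sketch of how a holdout test is read out: compare conversion rates between users eligible to see the campaign and a randomly held-out group. The function and all numbers are hypothetical; platform experimentation tools report this (along with significance tests) automatically.

```python
# Minimal sketch of reading out a holdout experiment: compare the
# conversion rate of users eligible to see ads (test) against a
# randomly held-out group that saw none (control). All numbers below
# are hypothetical.

def incremental_lift(test_conv: int, test_n: int,
                     control_conv: int, control_n: int) -> float:
    """Relative lift of the test group's conversion rate over control's."""
    test_rate = test_conv / test_n
    control_rate = control_conv / control_n
    return (test_rate - control_rate) / control_rate

# Hypothetical campaign: 1,000,000 users in test, 100,000 held out.
lift = incremental_lift(test_conv=12_400, test_n=1_000_000,
                        control_conv=1_000, control_n=100_000)
print(f"Incremental conversion lift: {lift:.1%}")  # Incremental conversion lift: 24.0%
```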
Requirement of inter-company alignment: Managers are used to regularly changing, adapting, and evaluating the firm’s websites and apps using in-house talent. Experimentation with advertising, on the other hand, commonly requires collaboration with other companies and the use of tools those firms provide. Additionally, many firms may lack the culture of collaboration between marketing and other functions needed to execute ad experimentation.
Entrenched legacy decision support tools: Marketing mix models are well established and trusted decision-support tools that are seen as less technologically complex and costly than setting up RCTs. But in fact, launching RCTs on digital channels doesn’t require unusually complicated technology, can be done at near zero cost, and can actually help optimize existing marketing mix models.
My colleagues and I at Facebook worked with the Boston Consulting Group to understand how leading firms successfully integrate experimentation into advertising operations. While there is no “one size fits all” solution, we identified the following four organizational characteristics among high-performing firms.
Executive endorsement: Successful experimentation initiatives need an executive sponsor who can align stakeholders and create the urgency required to change existing behavior and convictions. Advertising experiments can reveal new approaches that may require major marketing budget reallocations across channels. If, for example, an experiment identifies a strategy that would increase ROI five-fold but would require shifting millions of dollars in the budget against some executives’ will, an “intervention from the entire C-suite, with the CEO as the ultimate advocate,” may be needed “to break down broader business silos,” according to BCG’s recent report “Marketing Measurement Done Right.”
Culture: Leadership needs to foster a culture where robust empirical evidence can overrule preconceptions and opinions. This culture needs to be set (and lived!) at the top. Installing cross-functional committees and “forced mingling” in cross-departmental workshops can catalyze the required culture shift. The BCG report describes the case of a large automaker: “Management initiated a program of cross-functional conversations involving business units and functional departments. These sessions used multiunit data first to define profitability and then to determine how it would be measured across the company and what success would look like for the manufacturer as a whole.” While such initiatives benefit the firm more widely, they help establish a crucial basis for companies to be able to leverage the insights generated in large-scale advertising experiments.
Talent: The talent required to produce and understand complex data and experimental results needs to be acquired and trusted by senior stakeholders. A marketing data science team reporting to an executive function can take on this role. In the words of a digital leader in the telecommunications space quoted in the BCG report: “Our priority became hiring the best talent for our digital team…With the right people in place, we achieved 40% year-over-year sales growth with the same marketing spend.”
Persistence: Experimentation is a continuous, iterative process that is never completed and that shapes other internal processes. To reap its full benefits, firms need to view it not as a tactical task but as a strategic pillar of decision making and ideation. Rallying the troops behind it may take a long time. As the vice president of marketing for an automotive OEM that BCG surveyed reports: “It takes a year of everyone questioning a new model until we have enough history to believe in it.”
One pragmatic way to get advertising experimentation off the ground is to hire a dedicated digital marketing team and earmark a portion of the advertising budget for experimentation; as noted, 10% is typical among companies with a successful experimentation program. Then, in collaboration with other company functions such as finance, marketing leadership can identify a key performance indicator that advertising is to drive. Examples include return on marketing investment or a composite of brand and direct-response metrics. The marketing team can then develop new advertising approaches to influence the defined metric. The right culture can catalyze the team to rigorously adopt better-performing advertising approaches and abandon poorly performing ones — ultimately ensuring that existing and potential customers see better and more relevant ads.
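As an illustration of the kind of KPI readout such a team might standardize on, the sketch below computes a simple return on marketing investment from an experiment’s incremental conversions. Both the inputs and this particular ROMI definition are assumptions made for the example; firms define and measure the metric in different ways.

```python
# Illustrative KPI readout: return on marketing investment (ROMI)
# derived from an experiment's incremental conversions. The definition
# (incremental profit / ad spend) and all inputs are assumptions made
# for this sketch.

def romi(incremental_conversions: float,
         profit_per_conversion: float,
         ad_spend: float) -> float:
    """Incremental profit generated per dollar of ad spend."""
    return incremental_conversions * profit_per_conversion / ad_spend

# Hypothetical experiment: 2,400 incremental purchases attributable to
# the campaign, $50 profit each, on $80,000 of ad spend.
print(f"ROMI: {romi(2_400, 50.0, 80_000):.2f}")  # ROMI: 1.50
```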