November 2017

Sabermetrics and How to Build a Better Influencer Marketing Program with Statistical Analysis

Introduction

Defined by Bill James and brought to worldwide fame by Michael Lewis's book "Moneyball," Sabermetrics is the pursuit of objective knowledge about baseball through statistical analysis. Originally scorned by many baseball traditionalists, Sabermetrics is now broadly accepted as a critical tool that supplements the gut instincts of the game's insiders.

Fuse recently applied Sabermetrics principles to its influencer marketing data to improve its programs. We analyzed our micro-influencer programs (defined by entrepreneur.com as "…tastemakers, opinion-shapers and trend-forecasters who generally have between 1,000 and 50,000 followers") from the last 13 years, covering 10 industries.

We hired data scientists from academia and the social sciences to help build our model and then complete the following:

  • Descriptive Analysis, Correlation Analysis, and Test of Difference (see “Recommended Plan to Analyze” below for more information)
  • Review of nearly 70 variables to identify which elements of our influencer programs showed an association, trend, or causal relationship with program success or failure
  • Analysis of variables from each stage of influencer marketing programs, from planning and recruitment to onboarding, activation, and measurement

Five Sabermetrics Findings that Correlate to Better Program Results

New Influencers Work Better Than Recycled Ones

Brands often believe that reusing the same influencers for vastly different programs (for example, tapping an influencer for a gaming program because they previously succeeded as an apparel influencer) will save time and money during onboarding while still yielding positive results. Our data told a different story: programs that reused influencers from substantially different programs were 38% more likely to fail. Programs that used predominantly new influencers had a more motivated, engaged workforce that tapped into a new set of consumers, driving higher engagement and sales.

Influencer Recruitment

Programs that Failed
  • Did Not Hire New Influencers: 31%
  • Hired New Influencers: 11%
Programs that Met Goals
  • Did Not Hire New Influencers: 69%
  • Hired New Influencers: 61%
Programs that Exceeded Goals
  • Did Not Hire New Influencers: 0%
  • Hired New Influencers: 28%

Robust Interviewing of Influencers Pays Dividends

Interviewing potential influencers can be a long, time-consuming process, but the data indicated that it was worth the effort. Time spent on interviewing (measured below as the number of touchpoints the agency had with the candidates it interviewed) was associated with program success. The more touchpoints there were during the interviewing process, including one-on-one engagement, multiple interviews, and reference checks, the better the programs performed. Programs that exceeded goals averaged almost five touchpoints.

Average Number of Interview Touchpoints

[Chart: average number of interview touchpoints for Programs that Failed, Programs that Met Goals, and Programs that Exceeded Goals]

Reporting Tools

While our intuition was that reporting tools should be used to acquire more accurate information from influencers, the data suggests that tools like Traackr, Network Ninja, and others are even more valuable than first thought. Programs that did not use reporting tools had a failure rate 42% higher than programs that did.

Use of Reporting Software

Programs that Failed
  • Did Not Use Reporting Software: 33%
  • Used Reporting Software: 6%
Programs that Met Goals
  • Did Not Use Reporting Software: 67%
  • Used Reporting Software: 63%
Programs that Exceeded Goals
  • Did Not Use Reporting Software: 0%
  • Used Reporting Software: 31%

Better Budgets Generate an Almost Guaranteed Win

Small-budget programs failed 50% more often than programs with medium-sized budgets, and large-budget programs rarely failed. The key element here was the budget relative to the duration of the program: stretching a budget too thin by extending the program's duration was almost always a recipe for failure.

Relative Budget

Programs that Failed
  • Low Budget Relative to Program Duration: 33%
  • Medium Budget Relative to Program Duration: 22%
  • High Budget Relative to Program Duration: 0%
Programs that Met Goals
  • Low Budget Relative to Program Duration: 67%
  • Medium Budget Relative to Program Duration: 44%
  • High Budget Relative to Program Duration: 80%
Programs that Exceeded Goals
  • Low Budget Relative to Program Duration: 0%
  • Medium Budget Relative to Program Duration: 33%
  • High Budget Relative to Program Duration: 20%

You Can Compensate Influencers Too Much

Most marketers agree that compensation is one of the most important elements of an influencer program. Not surprisingly, under-compensating influencers resulted in a greater chance of failure. But over-compensating, that is, paying influencers above market rates, had a negative effect on outcomes too. Offering greater compensation generated more candidates, but it appears to have attracted unqualified candidates who, once hired, lacked an understanding of how to be a good influencer.

Influencer Compensation

Programs that Failed
  • Low Compensation: 50%
  • Medium Compensation: 5%
  • High Compensation: 40%
Programs that Met Goals
  • Low Compensation: 50%
  • Medium Compensation: 70%
  • High Compensation: 60%
Programs that Exceeded Goals
  • Low Compensation: 0%
  • Medium Compensation: 25%
  • High Compensation: 0%

Recommended Plan to Analyze

For brands that decide to analyze their own influencer programs, instead of hiring Fuse to do so, we recommend the following approach:

Descriptive Analysis: Examine the percentage of programs reporting each predictor to understand the overall distribution of the variables. The distribution represents the spread of a variable around its mean; if a variable shows no variation across programs, it cannot separate successful programs from failed ones, so drop it as a predictor and do not carry it into the next steps.
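
As a rough illustration, here is a minimal Python sketch of this step using pandas, assuming a hypothetical program-level dataset; the file name and column names (programs.csv, used_reporting_software, interview_touchpoints, and so on) are illustrative, not Fuse's actual data.

```python
import pandas as pd

# Hypothetical dataset: one row per influencer program.
# The file name and column names are illustrative only.
programs = pd.read_csv("programs.csv")

predictors = ["hired_new_influencers", "used_reporting_software",
              "interview_touchpoints", "relative_budget"]

# Distribution of a binary predictor: share of programs in each category.
print(programs["used_reporting_software"].value_counts(normalize=True))

# Spread of a continuous predictor around its mean.
print(programs["interview_touchpoints"].describe())

# Predictors with no variation cannot distinguish successful programs from failed ones.
no_variation = [col for col in predictors if programs[col].nunique() <= 1]
print("Drop these predictors:", no_variation)
```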

Correlation Analysis: Estimate correlations between the predictors and the outcome measure of success. This tests whether there is an association between each predictor and success. If there appears to be a meaningful association (i.e., a moderate to large correlation), proceed to the Test of Difference.
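
A sketch of this step on the same hypothetical dataset, assuming the outcome column is coded 0 = failed, 1 = met goals, 2 = exceeded goals; Spearman's rank correlation is one reasonable choice for an ordinal outcome, not necessarily the method Fuse used.

```python
import pandas as pd
from scipy import stats

# Same hypothetical dataset as above; the outcome column is assumed to be coded
# 0 = failed, 1 = met goals, 2 = exceeded goals.
programs = pd.read_csv("programs.csv")

# Spearman's rank correlation handles the ordinal outcome and skewed predictors.
rho, p_value = stats.spearmanr(programs["interview_touchpoints"], programs["outcome"])
print(f"rho = {rho:.2f}, p = {p_value:.3f}")

# A moderate-to-large correlation (e.g., |rho| >= 0.3) is worth carrying forward
# to the Test of Difference.
```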

Test of Difference: Run a statistical test to see whether the outcome differs depending on the mean or level of the predictor variable. The outcome is a three-level categorical variable (failed, met goals, exceeded goals). For continuous predictors, use an ANOVA, which tests whether the mean of the predictor differs across the levels of the outcome. For binary (no/yes) or categorical predictors, use a chi-square test. If the predictor and outcome are not associated, expect the predictor to be distributed equally across the levels of the outcome; if the observed distribution differs from that equal distribution, conclude that the variable is associated with program success.
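
A sketch of both tests with scipy, again on the hypothetical dataset: a one-way ANOVA for a continuous predictor (interview touchpoints) and a chi-square test of independence for a binary predictor (use of reporting software).

```python
import pandas as pd
from scipy import stats

# Same hypothetical dataset and outcome coding as above.
programs = pd.read_csv("programs.csv")

# ANOVA for a continuous predictor: does the mean number of interview touchpoints
# differ across the three outcome levels?
groups = [grp["interview_touchpoints"].values
          for _, grp in programs.groupby("outcome")]
f_stat, p_anova = stats.f_oneway(*groups)

# Chi-square test for a binary predictor: is use of reporting software
# associated with the outcome?
observed = pd.crosstab(programs["used_reporting_software"], programs["outcome"])
chi2, p_chi2, dof, expected = stats.chi2_contingency(observed)

# Small p-values indicate the observed distribution differs from the equal
# distribution expected under no association.
print(f"ANOVA p = {p_anova:.3f}, chi-square p = {p_chi2:.3f}")
```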