FB Ad Account Structure Analysis For 12 Accounts At Scale

I’m a firm believer that the major platforms have almost entirely automated away the media buying process. Running campaigns effectively is primarily an exercise in following a simple set of rules, staying organized, and channeling as much energy as you can muster into creative, LP, and offer testing (which is all much easier said than done).

To this day, I regularly read anecdotes in Facebook Groups/blogs/Slack channels/D2C Twitter/LinkedIn that start with: “What’s working now on FB is…”. Usually those anecdotes are focused on audience targeting (ex: “You must try this LAL audience stack”) and bidding (ex: “Cost Caps are working best right now”). Lately, there has been even more conjecture about how audience targeting has changed as a result of Apple’s iOS 14.5 rollout (one common and intuitive position is that LAL targeting is less efficacious, so interest targeting is more important as a result).

The best way I could think of to share what I’m actually seeing in the Facebook/Instagram universe was to do an aggregate analysis of a handful of ad accounts based on very recent data. For each ad account, I wanted to see (there’s a rough sketch of these calculations right after the list):

  • How many ad sets were they running?
  • How many of those ad sets were meeting Facebook’s learning phase guidance (namely >= 50 optimization events per week per ad set)?
  • How much spend did the top 5 spending ad sets account for?
  • What was the distribution of spend by bid type?
  • What was the distribution of spend between prospecting & retargeting?
  • What was the distribution of prospecting spend between broad, interest (aka Flexible Inclusions), and LAL?
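
For anyone who wants to reproduce these numbers outside of a spreadsheet, here’s a minimal sketch of how each question could be computed from an ad-set-level export. Everything here is a stand-in: the DataFrame, its column names (spend, opt_events, bid_type, etc.), and the sample values are hypothetical, not Facebook’s actual export format.

```python
import pandas as pd

# Hypothetical ad-set-level export for one account over a 7-day window.
# Column names and values are stand-ins, not Facebook's export headers.
ad_sets = pd.DataFrame({
    "ad_set_id":  ["a1", "a2", "a3", "a4", "a5", "a6"],
    "spend":      [12000.0, 800.0, 4500.0, 300.0, 2500.0, 150.0],
    "opt_events": [210, 12, 55, 9, 48, 3],  # optimization events that week
    "bid_type":   ["lowest_cost"] * 5 + ["cost_cap"],
    "funnel":     ["prospecting", "retargeting", "prospecting",
                   "prospecting", "retargeting", "prospecting"],
    "targeting":  ["broad", "lal", "interest", "broad", "lal", "broad"],
})

total_spend = ad_sets["spend"].sum()

# 1) How many ad sets had delivery?
num_ad_sets = len(ad_sets)

# 2) What share met Facebook's learning phase guidance (>= 50 events/week)?
pct_meeting_guidance = (ad_sets["opt_events"] >= 50).mean()

# 3) How much spend did the top 5 spending ad sets account for?
top5_spend_share = ad_sets.nlargest(5, "spend")["spend"].sum() / total_spend

# 4-6) Spend distribution by bid type, funnel stage, and targeting type.
for dim in ("bid_type", "funnel", "targeting"):
    print(ad_sets.groupby(dim)["spend"].sum() / total_spend)
```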

My Methodology

  • I reviewed 12 ad accounts. Obviously, there is tremendous bias in my sample set, but in an attempt to partially control for it, I included 6 Thesis accounts and 6 accounts from other internal teams/agencies.
  • Date range: July 27, 2021 - August 2, 2021.
  • I’ve rounded spend to the nearest hundred thousand.
  • These ad accounts combined for about $350k of spend per day, which translates to roughly $30m per quarter ($350k × ~90 days ≈ $31.5m). By my math they account for about .11% of Facebook’s quarterly ad revenue ($31.5m / $28.58b, using Facebook’s Q2 ad revenue figure)... so consider this analysis with a healthy grain of salt.
  • I manually classified each ad set as Prospecting or Retargeting and as Broad, Interest, or LAL. In any instance where Interest targeting was layered onto a LAL audience, I classified that ad set as LAL as the tiebreaker (this applied to a tiny percentage of ad sets; the classification sketch after this list shows the rule).
  • Technically speaking, what I’m labeling here as Interests is any ad set where any “Flexible Inclusions” were used, including demographic targeting, etc. 99% of the time, if Flexible Inclusions were used, they were regular old “Interests”.
  • I did not include KPIs like CPA, ROAS, etc., as that introduced a lot of complexity: not all of the ad sets in each ad account had consistent Optimization Event goals, not every account uses consistent exclusions, not every account uses consistent attribution, and so on.
  • I did all of this analysis by filtering the ad account for “Had Delivery” and exporting the results at the Ad Set level. I matched the campaign data (spend, conversions, etc.) to the exported account configuration file using Ad Set IDs. I did all of the analysis in Google Sheets, and I have no doubt that someone smarter than I am with API access could have done it faster and better. That said, I’m pretty sure I didn’t make any huge errors.
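
If you’d rather script the join and classification steps than do them by hand in Sheets, the shape of it might look like the sketch below. The file names, column names, and the two boolean flags are assumptions for illustration; only the Ad Set ID join and the LAL tiebreaker mirror what’s described above.

```python
import pandas as pd

# Assumed file and column names -- a real Facebook export will differ.
delivery = pd.read_csv("had_delivery_export.csv")    # spend, conversions, etc.
config = pd.read_csv("account_config_export.csv")    # targeting setup per ad set

# Match campaign data to the account configuration file on Ad Set ID.
merged = delivery.merge(config, on="ad_set_id", how="inner", validate="one_to_one")

def classify_targeting(row) -> str:
    """Broad vs. Interest vs. LAL, with the tiebreaker from above:
    Interests layered onto a LAL audience still count as LAL."""
    if row["uses_lookalike"]:               # hypothetical flag from the config export
        return "LAL"
    if row["uses_flexible_inclusions"]:     # almost always regular old Interests
        return "Interest"
    return "Broad"

merged["targeting"] = merged.apply(classify_targeting, axis=1)
```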

Ad Sets Analysis

These accounts had many more ad sets in use than I anticipated (Thesis clients included), with an average of 52. To be clear, I’m not saying the average account had 52 ad sets active, just that 52 ad sets had delivery during that seven-day period. I think this is primarily because most advertisers use dedicated, broad-targeted ad sets for their creative tests, which results in significant ad set bloat if you’re regularly testing creative.

Notably, only 20% of ad sets, on average, hit >= 50 optimization events for the week. Falling short of 50 optimization events per week does not guarantee an ad set will be stuck in Learning Limited, but there is definitely a strong correlation. That does not appear to be of grave concern for these accounts/brands. As an aside, I think the Learning Phase impact is real, but it strikes me as one of those areas where Facebook reps’ emphasis on the topic significantly outstrips its practical importance (Facebook Canvas is another one of those).


As with most things in life, the 80/20 rule generally applies here: the top 5 spending ad sets constituted 50% of total spend on average.


Bidding

With one exception, every advertiser in this sample is using Lowest Cost bidding.


Prospecting vs Retargeting


Prospecting accounted for 86% of spend on average, and a few accounts were 100% prospecting.


Broad vs LAL vs Interest


Broad targeting accounted for 50% of total spend on average. There were a few notable holdouts here that rely heavily on LAL or Interest targeting, but I think this data confirms what is now the industry’s mainline approach... lean into broad targeting and let the creative do most of the work.
