It’s no secret that campaign management (on Facebook & Google, and even on challenger platforms like Snap & TikTok) is fast becoming a largely automated process. The app install ecosystem is the furthest ahead on the automation curve thanks to Google’s Universal App Campaigns and Facebook’s Automated App Ads (for those less familiar with the mobile app ecosystem, in both cases you essentially upload creatives and walk away). Ad ops automation is without a doubt a positive trend for the industry, though it’s inherently disruptive to the standard digital agency model (Thesis included... unfortunately for us).
At Thesis, we often talk about these four pillars of successful, automation-supported paid acquisition efforts:
* Media buying
* Conversion rate optimization (often via landing pages)
In this post, we want to share a fifth pillar which can be a significant point of leverage: signal optimization.
In layman’s terms, every event that you fire back to an ad network via a pixel/SDK/server-to-server integration is a signal. The ad networks use those signals to guide their algorithmic targeting.
For example, in the D2C ecosystem most advertisers rely on the Purchase event as their key signal. Their ad sets/campaigns are set to Optimize for Purchase or Value and then Facebook/Google/TikTok etc. simply work their magic to find more Purchasers as efficiently as possible.
But, of course, not all purchasers are of equal value. Ideally, you’d minimize your campaigns’ payback periods/maximize your LTV / CAC ratios by acquiring higher value customers at an equal or lower cost. To do so, you could project an individual purchaser’s lifetime value and pass that projected LTV (pLTV) back to the ad networks as a new signal. With that new signal in hand, you could experiment with using different optimization events at the ad set level (ex: Purchasers Whose pLTV is >$100) which hopefully would result in a better payback period for your campaigns.
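To make this concrete, here’s a minimal Python sketch of what firing that new signal might look like server-to-server. The payload shape loosely follows Facebook’s Conversions API event format, but the `HighValuePurchase` event name, the $100 threshold, and the `predict_ltv` stub are all illustrative assumptions, not a production integration:

```python
import hashlib
import json
import time

def predict_ltv(order_value: float, is_subscriber: bool) -> float:
    """Hypothetical pLTV model -- a stand-in for whatever your data
    science team produces. Here we assume subscribers are worth 3x
    their first order and one-off buyers 1.2x."""
    return order_value * (3.0 if is_subscriber else 1.2)

def build_pltv_event(email: str, order_value: float, is_subscriber: bool) -> dict:
    """Build a server-to-server event payload carrying projected LTV.
    Field names loosely follow Facebook's Conversions API event format."""
    pltv = predict_ltv(order_value, is_subscriber)
    return {
        # Fire a distinct custom event only for purchasers above the
        # pLTV threshold, so an ad set can optimize for it directly.
        "event_name": "HighValuePurchase" if pltv > 100 else "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {
            # Networks expect hashed identifiers, never raw PII.
            "em": hashlib.sha256(email.strip().lower().encode()).hexdigest(),
        },
        "custom_data": {"value": round(pltv, 2), "currency": "USD"},
    }

event = build_pltv_event("jane@example.com", order_value=40.0, is_subscriber=True)
print(json.dumps(event, indent=2))
```

In a real integration you’d POST this payload to the network’s endpoint with your pixel ID and access token; the point here is just that the optimization event you expose to the ad set is a function of *projected* value, not of the raw purchase.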
With this approach you are simply providing a better feedback loop to the algorithms in real time. With stronger feedback, they should be better able to meet your stated objectives.
This subject matter can become very complex, especially once data scientists are involved. But at the end of the day, the goal is to provide as accurate an LTV prediction as possible to the ad networks as quickly as possible. In my experience, something (ex: an imprecise pLTV used for optimization) is usually better than nothing (ex: optimizing for all Purchasers equally).
I’ve seen models factor in:
* First and third party demographic data (ex: age)
* Third party data purchased from the likes of Experian/Acxiom/TransUnion (ex: credit scores)
* Last click attributed source data (ex: utm_source=Facebook)
* Behavioral data (ex: completed level 1)
* Sign up quiz data (ex: Do you subscribe to other meal kit services?)
* And much more….
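As a toy illustration of how those inputs might combine, here’s a hand-weighted linear scorer. The feature names, weights, and baseline below are all invented for the example; in practice a data science team would fit them (regression, gradient boosting, etc.) on historical customer data:

```python
# Illustrative pLTV scorer over the feature types listed above.
# All weights and the baseline are made-up numbers for the example.
WEIGHTS = {
    "age_over_35": 15.0,           # first-party demographic data
    "good_credit": 25.0,           # third-party data (e.g. a bureau score bucket)
    "source_facebook": -5.0,       # last-click attributed source
    "completed_level_1": 30.0,     # behavioral data
    "subscribes_elsewhere": 40.0,  # sign-up quiz answer
}
BASELINE = 50.0  # assumed average first-order value

def score_pltv(features: dict) -> float:
    """Linear combination of binary features -> projected LTV in dollars."""
    return BASELINE + sum(w for name, w in WEIGHTS.items() if features.get(name))

high = score_pltv({"good_credit": True, "subscribes_elsewhere": True})
low = score_pltv({"source_facebook": True})
```

Even something this crude lets you bucket users into a `pLTV > $100` optimization event versus a plain purchase event, which is the whole game.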
You might even consider extremely rudimentary questions to identify high and low value leads. For example, a real estate business might ask users: “Are you in the market for a home in the next 6 months?” It’s very likely that users who answer in the affirmative have a dramatically higher pLTV, and it’s also likely that Facebook’s targeting would benefit from that direction.
For some businesses it’s fundamentally easier to project LTV, and it’s in those industries where you find this sort of optimization most often. In my experience, subscription eComm and gaming apps are categories where you are most likely to run into pLTV optimization events. That said, the vast majority of subscription eComm businesses that I’ve seen in the last ~12 months are not using such an approach and simply optimize for subscription/trial starts.
Notably, this tactic can be game changing for lead generation businesses. Facebook in particular does an incredible job at driving leads/form fills at a low cost but the lead quality can vary tremendously. By providing a better feedback loop to Facebook, you might drastically change the types of users they target to fill your forms.
In the last year or two there has been a LOT of activity from startups attempting to productize these sorts of pLTV models for the purposes of acquisition campaigns. I’ve run into 5 (all of which have .ai domains):
https://www.voyantis.ai/ (PS: these guys have a great blog!)
It’s still common practice to do analysis of campaigns & customer value and use that analysis to restructure campaigns in such a way that maximizes customer value. For example, you might notice that women are 50% more valuable than men and choose to exclude men from your targeting altogether. Or you might identify that different keywords are driving different payback periods, and so you might use an Enhanced CPC strategy and set different bids on keywords based on the resultant user values.
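As a rough sketch of that keyword-level restructuring, you might scale each keyword’s bid by its observed customer value relative to the account average (the keywords, dollar values, and base CPC below are all made up):

```python
# Illustrative: derive keyword-level bids proportional to observed
# customer value, as in the account-restructuring approach described
# above. All figures are invented for the example.
avg_value_by_keyword = {
    "meal kit delivery": 180.0,
    "cheap meal kits": 60.0,
    "organic meal plans": 240.0,
}
account_avg = sum(avg_value_by_keyword.values()) / len(avg_value_by_keyword)

base_cpc = 2.00  # the bid you'd set if all customers were equal
bids = {
    kw: round(base_cpc * (value / account_avg), 2)
    for kw, value in avg_value_by_keyword.items()
}
```

Note that this bakes yesterday’s analysis into a static structure, which is exactly the contrast with pLTV optimization: the networks’ auto-bidding can do this same math continuously, per auction, if you feed them the value signal.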
I don’t think this approach is wrong (and in fact it can work very well) but it certainly does not jibe with where these ad networks are headed. They want you to opt in to all placements. They want you to use auto bidding. They want you to consolidate your campaigns. I believe that attempting to optimize for LTV through account structure is akin to swimming upstream. pLTV optimization may be a happier path in that you can follow the networks’ best practices while ensuring that those networks are still aligning their delivery with your desired outcomes.
I was describing this blog post to a friend who works in the direct mail ecosystem and he thought it was one of the dumbest things he had ever read. He could not fathom that all of the above is not already universally applied to digital marketing campaigns. For years, he’s done basic customer analysis and then used that analysis to inform which addresses to mail. That’s basically what pLTV optimization is, but in an online context.
I don’t mean to imply that anything about signal optimization in digital marketing is fundamentally revolutionary. But with all due respect to my friend referenced above, it is different from the approach you might take in direct mail/radio/OOH/TV simply because the targeting on these digital ad networks is effectively automated. My direct mail friend knows exactly which addresses he is mailing when he starts his campaign. Meanwhile, I have absolutely no idea at the user level whom Snapchat is targeting on any given day. It makes sense that I need to work a bit harder, and in real time, to give Snapchat the feedback it requires to make my campaigns work. I can’t afford to wait to do a retrospective analysis in a spreadsheet that then informs future campaign optimizations. That’s simply too slow.
As always I welcome feedback on the above and if you are interested in discussing this further please email me at adam at thesistesting.com!
PS: Credit to the Towards Data Science blog for the image.