Even in 2021, depending on who you talk to, Measurement can be quite a scary and anxiety-filled subject to approach. People tend to avoid it if they deem it unsolvable, too difficult to pin down, or if there's an undercurrent of doubt throughout the organisation about non-linear Media effectiveness. Even if your business completely buys into Digital Media Measurement, the structure below should help you formalise it in a concise way that's easily communicated to those above you who aren't as close to the Media.
The truth is, we all have a responsibility to take ownership of Measuring our activity. We lost the luxury of living in an unmeasured BAU world a good few years ago, and we should be meeting the conversation head on.
Tier Three: Fairly obvious, but bear with me. Establish a foundation of accurate historical data in the form of core media metrics, backdated as far as the data will go, with a select few highlighted for Benchmarking. This Benchmarking point is crucial, as it'll allow you to communicate how you've performed/are performing against your key media metrics in a way that's easy to comprehend.
Delivery: Spend / Impressions / CPM / CPC / Clicks / Video Views.
Engagement: CTR% / Video Completion Rate % / Conversion Rate %.
Quality: Viewability and Time-in View.
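All of the Delivery and Engagement metrics above are simple ratios of the raw numbers a platform reports. A minimal sketch of deriving them in one place (field names and figures are illustrative, not from any particular platform's export):

```python
# Minimal sketch: deriving core media metrics from raw delivery data.
# Field names and example figures are illustrative, not tied to any
# specific ad platform's export format.

def media_metrics(spend, impressions, clicks, conversions,
                  completed_views, video_views):
    """Return the core delivery/engagement metrics as a dict."""
    return {
        "cpm": spend / impressions * 1000,               # cost per thousand impressions
        "cpc": spend / clicks,                           # cost per click
        "ctr_pct": clicks / impressions * 100,           # click-through rate %
        "cvr_pct": conversions / clicks * 100,           # conversion rate %
        "vcr_pct": completed_views / video_views * 100,  # video completion rate %
    }

m = media_metrics(spend=5_000, impressions=1_000_000, clicks=8_000,
                  conversions=240, completed_views=45_000, video_views=60_000)
print(m)
```

Keeping the calculations in one helper like this also makes Benchmarking trivial: run the same function over last year's raw numbers and compare the outputs side by side.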
This data should feed into regular reporting of both historical and forecasted data, to give a view of "we said we'd achieve this, and we achieved it". Saying you'll do something and then doing it naturally instils confidence in the powers above. Don't get lost and tie yourself in knots chasing every single metric under the sun; only a few truly matter in the grand scheme of what you're trying to achieve. Also, don't look back too far when doing any kind of comparison – a couple of additional years at most – but you've got the data in your back pocket if you ever need to go further.
Forecasting: Data from the past should be used to inform data in the future and the more data you have, the easier that prediction becomes to make.
Key questions such as: how much traffic is my activity going to drive in Q4? Based on a +15% budget vs. last Black Friday, how much are we forecasting to take in September in linear Revenue? If we improve site Conversion Performance by "X"%, what uplift in Revenue does that lead to (useful when collaborating cross-team with Dev or CRO functions)? These targets should be set on an annual, quarterly and monthly basis, and refined as the forecasted period gets nearer, since next month or next quarter is easier to predict.
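Those forecasting questions can be answered with very simple arithmetic on your historicals. A hedged sketch of the two question types above – the elasticity damping factor is my own illustrative assumption (spend rarely converts to revenue one-to-one), and all figures are made up:

```python
# Hedged sketch of the two forecasting questions above.
# The elasticity factor and all figures are illustrative assumptions,
# not derived from any real dataset.

def forecast_revenue(last_year_revenue, budget_uplift_pct, assumed_elasticity=0.6):
    """Rough forecast: scale last year's revenue by the budget uplift,
    damped by an assumed spend-to-revenue elasticity (< 1, diminishing returns)."""
    return last_year_revenue * (1 + budget_uplift_pct / 100 * assumed_elasticity)

def revenue_from_cvr_uplift(sessions, cvr_pct, avg_order_value, cvr_uplift_points):
    """Incremental revenue if site conversion rate improves by a number of points."""
    baseline = sessions * (cvr_pct / 100) * avg_order_value
    improved = sessions * ((cvr_pct + cvr_uplift_points) / 100) * avg_order_value
    return improved - baseline

# "+15% budget vs. last Black Friday" style question:
print(forecast_revenue(200_000, 15))
# "If we improve site conversion by 0.5 points" style question:
print(revenue_from_cvr_uplift(500_000, 2.0, 60, 0.5))
```

The point isn't the exact formula – it's that once the Tier Three data is clean, these "what if" numbers take seconds to produce and refine.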
Additional note on data ownership: more often than not, I've found that businesses have varying degrees of control over their performance data. Sometimes that manifests itself in not being given access to Search or Social accounts, and therefore not being "allowed" to keep the historical data if that relationship breaks down in future. Make sure you work with your partners on this and establish a clean way of housing the data, such as a simple Data Studio dashboard.
Tier Two: Vendor, partner and Business Intelligence studies are a crucial part of moving beyond hard media metrics to demonstrating some kind of incremental uplift, whether in hard commercial metrics or in mid-to-upper-funnel metrics such as Awareness or Consideration.
Here are a few quick examples of what I mean when I talk about lift testing.
- Facebook Brand or Conversion Lift testing. A quick note here: Facebook will lose its ability to conduct Conversion lift tests with the advent of iOS 14 and its removal of the IDFA, but that could still be a few months away, so it's worth looking into sooner rather than later. Facebook (and all other Social platforms) offer some form of lift testing using a third party (usually Nielsen or Millward Brown) to add a verification layer to the test. You'll conduct a control vs. exposed study as usual, but will be able to delve into platform-specific insights to inform future testing.
- Direct to publisher lift testing, such as when buying directly with a publisher, who can conduct simple control vs. exposed tests and allow you to unearth detail on how your activity has performed at geo, demo and interest level splits. Did your campaign result in a +15% Awareness amongst your target demographic with this particular vendor? These are key questions you’ll look to answer here.
- Collaborating with Business Intelligence, CRM or data functions within your business to conduct uplift testing. Traditionally this'll involve showing a portion of your existing customers some activity vs. a hold-out group who didn't receive the media exposure, then analysing commercial performance: did the exposed group buy more product over the campaign period than the hold-out? In other words, was our media incremental?
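Whichever flavour of lift test you run, the readout boils down to comparing conversion rates between the two groups and checking the difference isn't just noise. A minimal sketch using a standard two-proportion z-test – the audience sizes and conversion counts here are made up for illustration:

```python
import math

# Illustrative control-vs-exposed readout for a conversion lift test.
# Counts are made up; in practice they'd come from your platform,
# publisher or CRM hold-out split.

def lift_readout(control_n, control_conv, exposed_n, exposed_conv):
    """Relative lift of exposed over control, plus a two-proportion z-score."""
    p_c = control_conv / control_n
    p_e = exposed_conv / exposed_n
    lift_pct = (p_e - p_c) / p_c * 100
    # Pooled standard error for the difference in proportions
    p_pool = (control_conv + exposed_conv) / (control_n + exposed_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / exposed_n))
    z = (p_e - p_c) / se
    return lift_pct, z

lift, z = lift_readout(control_n=50_000, control_conv=1_000,
                       exposed_n=50_000, exposed_conv=1_150)
print(f"lift: {lift:.1f}%, z: {z:.2f}")  # z above ~1.96 ≈ significant at 95%
```

This is also why the robustness rules below matter: small hold-outs or short durations shrink the z-score, and a "positive" lift can evaporate under scrutiny.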
The key here with these lift studies is to ensure you follow a couple of consistent rules throughout.
Firstly, try your best to have these tests third party verified so the vendor isn’t marking their own homework. It helps to no end when a reputable name has been involved in the testing for validation.
Secondly, ensure it's as robust as it possibly can be. No test is perfect, but control what you can: duration (4-week minimum), spend, exclusions to avoid overlap, and even the creative weighting (a Brand message mixed with a DR message may skew results, so keep it consistent and do either/or).
Keep the faith. Performance can sometimes drop off when running tests (set-up dependent), but it's a natural by-product of trying to learn more about your media. Communicate this beforehand and you'll likely see no problems keeping the test live. Leave it too late, however, and you can run into issues with those invested in upholding Performance at all costs.
Finally, remember to communicate the aim, methodology, findings and next steps to stakeholders – even if the results weren't positive. A test that yields negative results is still a learning, and that's what this whole process is about.
Tier One: This one has gone through some real changes over the past few years, and the increasing shut-up-shop approach from the walled gardens certainly hasn't made life easier for those of us trying to amalgamate data into one place to unearth attribution learnings.
This top tier is the most important because it's the furthest away from hard media metrics: it's used to infer cross-media assumptions based on what's happening at the top and bottom of the funnel, via something called Econometric modelling. Digital Attribution (or at least what it used to be) is now largely defunct thanks to changes in the ecosystem in response to GDPR and protecting user privacy. These changes continue to come into effect (see ITP, IDFA and the cookie migration), but they're positive ones for an industry often plagued with direct or indirect data challenges.
Essentially, an Econometric model can be developed by a third party or your in-house data science function, and involves using mathematical logic to build a model that fits your business, drawing on all of the media data you gathered back in Tier Three. It forms the basis of media decisioning that ultimately impacts a set of defined metrics across the funnel (such as Awareness, Consideration or Conversion), though it will be tailored to the funnel of your business.
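To make that less abstract, here's a deliberately toy sketch of the core idea: regress an outcome metric (weekly revenue) on channel spend to estimate how much each channel contributes. Real econometric models layer on adstock, saturation curves, seasonality and base demand, and the data below is synthetic – this only shows the shape of the technique:

```python
import numpy as np

# Toy econometric (media mix) sketch: regress weekly revenue on channel
# spend plus an intercept for base demand. Real models add adstock,
# saturation and seasonality; all data here is synthetic.

rng = np.random.default_rng(0)
weeks = 104                                            # two years of weekly data
spend = rng.uniform(1_000, 10_000, size=(weeks, 3))    # search, social, display
true_coefs = np.array([3.0, 1.5, 0.5])                 # revenue per unit of spend
revenue = 20_000 + spend @ true_coefs + rng.normal(0, 2_000, weeks)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(weeks), spend])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)

for name, c in zip(["base", "search", "social", "display"], coefs):
    print(f"{name}: {c:.2f}")
```

The fitted coefficients recover something close to the "true" channel contributions, which is exactly the kind of output that feeds the "so what?" conversations below – albeit a production model would be far richer than this.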
Here’s some elements to keep in mind throughout your journey with media modelling to form and maintain that confidence:
- Find the right partner, recruit the right talent. This isn’t something you want to muddle through and feel around in the dark with. Whoever you work with needs to have Media measurement experience and have an appreciation of building / maintaining / communicating modelled data.
- Buy into the methodology and sell it in, otherwise it’s useless. I’ve seen this a good few times. A business builds a model, one of the internal teams doesn’t believe in it, the model slowly gets used and believed less and less – leading to its eventual demise. Mould and influence the methodology and accept that it’s a source of truth like any other.
- Feed the best data in, in a uniform fashion. Don't feed in inaccurate, piecemeal data that hasn't been formatted correctly. Like anything, what you put in is what you get out, so take the time to cleanse and format any data that needs it, leaving you with one single data source ready to be analysed.
- Always use the “so what?” approach. This model is telling us that Programmatic Media influences Ad Recall positively, so what should we do about it? This model is telling us that Paid Search actually has a role to play for existing customers, so what should we do about it? Keep using this to keep unearthing next steps and further thought to roll out into channel.
That’s all for this one. I hope you enjoyed reading through my three simple tiers of breaking down Measurement in an attempt to communicate this effectively internally to your stakeholders. Feel free to drop a note below or an email through the contact page if you have any questions or further thoughts.