Using multiple DSPs

Unless they’ve had their head under a stone for the last two years, anyone in digital advertising will be aware of the current huge growth in programmatic media buying. In the US it currently makes up around 20% of digital budgets, with many marketers planning for this to grow to 40% or more. In the UK the H2 2014 estimates are around 15%, with predictions ranging up to 25%, which would represent a huge 83% of all display advertising.

This presents a variety of issues for the agency and client media buyer, as new DSPs, exchanges, and technologies are launched seemingly every day. They are all (if you took each media representative’s word for it) based on the most *amazing* technology, staffed with the cleverest PhD data boffins, and are therefore guaranteed to be the single best way to approach programmatic buying.

I have sat through so many PowerPoint presentations in the last year, each with differently coloured and shaped diagrams aiming to represent the technology stack and the billions of bidding decisions and data points being utilised, that I must admit to now being a bit PowerPoint-blind.

In most media buying scenarios, you have an easy way of testing such a competitive market: run them all over a set period of time and see whose technology wins according to the KPIs – volumes, CPAs, new business, what have you.

Sadly, testing multiple DSPs at the same time is at best impractical, and at worst, a way to invalidate results, waste a whole pile of cash – and annoy your customers all at the same time.

Here’s why:

1) Cookie bombing

In an all-out competitive bun-fight, it’s quite possible that one or more of the media partners will simply buy the cheapest mass inventory they can find, in the hope that having their cookies on as many devices as possible will mean that they’ll get (view-through) credit for more conversions, without actually having an impact on the user. This is especially risky in mass-market scenarios, where many users will be buying/interacting anyway, and serving ads willy-nilly (especially if over half of them aren’t even seen) is basically a bit like cheating. Think of it as the shotgun approach – something will get hit, if you spray wildly enough.
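To see why this pays off for the bomber, consider how a typical view-through lookback window works. The sketch below is illustrative only – the DSP names, dates, and 30-day window are hypothetical – but it shows that credit requires nothing more than a cookie drop inside the window, not a click or any proven influence:

```python
from datetime import datetime, timedelta

# Hypothetical lookback window: any impression within 30 days of a
# conversion earns that DSP view-through credit.
LOOKBACK = timedelta(days=30)

def view_through_credits(impressions, conversion_time):
    """Return the set of DSPs whose impressions fall inside the window."""
    return {dsp for dsp, ts in impressions
            if conversion_time - ts <= LOOKBACK}

conversion = datetime(2015, 3, 1)
impressions = [
    ("bomber_dsp", datetime(2015, 2, 27)),    # cheap, likely never seen
    ("bomber_dsp", datetime(2015, 2, 28)),
    ("targeted_dsp", datetime(2015, 1, 15)),  # outside the window
]
print(view_through_credits(impressions, conversion))  # {'bomber_dsp'}
```

The user may have converted regardless, but the sprayed cookies collect the credit either way.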

2) Frequency & Bidding against yourself

With multiple buyers aiming at the same goal, and especially where the market is smaller and necessitates re-targeting based on 1st party website data, there is a risk of this smaller target market seeing multiple ads from multiple providers, all in the same day. Not only is this wasteful, but it risks annoying the very people who are or should be your best customers. Of course you can cap frequency at a campaign level, but will it be real-time enough to capture all impressions in flight and prevent the risk entirely?
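The structural problem can be shown in a few lines. In this minimal sketch (the cap of 3 and the DSP names are assumptions for illustration), each DSP enforces the cap correctly against its own counter, yet the user still sees a multiple of the intended frequency because no shared counter exists:

```python
from collections import Counter

CAP = 3  # intended impressions per user per day

def can_serve(counter, user):
    """Serve only if this counter's tally for the user is under CAP."""
    if counter[user] < CAP:
        counter[user] += 1
        return True
    return False

# Per-DSP counters: each DSP caps correctly in isolation...
per_dsp = {d: Counter() for d in ("dsp_a", "dsp_b", "dsp_c")}
served = sum(can_serve(per_dsp[d], "user_1")
             for d in per_dsp for _ in range(10))
print(served)  # 9 -- the user sees 3x the intended cap

# ...whereas the cap really needs a single counter shared across buyers.
shared = Counter()
served_shared = sum(can_serve(shared, "user_1")
                    for d in per_dsp for _ in range(10))
print(served_shared)  # 3
```

Each DSP is behaving exactly as configured; the waste comes from the fact that none of them can see the others’ impressions.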

To make matters worse, if more than one DSP is bidding for the same user simultaneously, then they could be bidding against each other, inflating the price – which will, of course, be passed on to the advertiser.
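The mechanics are easy to demonstrate with a toy second-price auction (the bid values and DSP names below are invented for illustration). In a second-price auction the winner pays the runner-up’s bid, so when two DSPs represent the same advertiser, one becomes the other’s price-setter:

```python
def second_price_clearing(bids):
    """Sealed-bid second-price auction: winner pays the runner-up's bid.
    bids: dict of bidder name -> bid (CPM)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# One DSP in the auction: a rival's 1.20 bid sets the clearing price.
one_dsp = {"dsp_a": 2.00, "rival": 1.20}
print(second_price_clearing(one_dsp))   # ('dsp_a', 1.2)

# Two DSPs for the same advertiser: dsp_b's 1.80 now sets the price,
# so the advertiser pays 1.80 instead of 1.20 for the same user.
two_dsps = {"dsp_a": 2.00, "dsp_b": 1.80, "rival": 1.20}
print(second_price_clearing(two_dsps))  # ('dsp_a', 1.8)
```

Same user, same winning creative – 50% more cost, paid entirely by the advertiser running the test.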

3) Attribution & Fair testing

In a sector as DR-driven as digital, display advertising naturally struggles for justification against such monsters as the SERP. Justifying impression-based sales, and showcasing brand building and early-funnel strategies through clever use of first-touch and assist data, are critical to ensuring display gets the credit it deserves.

Adding complexity to the campaigns, and competition for each customer touch point, makes the true impact harder to see amongst the vast weeds of data and is not going to result in a clearer view. The risk is that the widest, most scatter-gun approach will appear to pay off, potentially leaving clever targeting and higher-impact placements losing out.
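A stripped-down sketch makes the stakes concrete. The conversion path below is hypothetical, but it shows how the choice of attribution rule alone decides which DSP “wins”: under last-touch, a scatter-gun buyer’s final cheap impression takes the credit from the targeted placement that actually drove the click:

```python
# Two naive single-touch attribution rules over an ordered conversion
# path of (dsp, event) pairs. Path data is invented for illustration.
def first_touch(path):
    """Credit the DSP behind the first touch point."""
    return path[0][0]

def last_touch(path):
    """Credit the DSP behind the last touch point before conversion."""
    return path[-1][0]

path = [
    ("targeted_dsp", "click"),     # considered, high-impact placement
    ("bomber_dsp", "viewable_imp"),
    ("bomber_dsp", "unseen_imp"),  # cheap below-the-fold impression
]
print(first_touch(path))  # targeted_dsp
print(last_touch(path))   # bomber_dsp -- sprayed impressions grab the credit
```

With several DSPs stacking touch points onto every path, neither rule tells you whose technology actually moved the user.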

If you can’t see the true result of your test, then the test becomes pointless. This is where the amazing data capabilities of digital become a handicap rather than a help: it becomes impossible to make business decisions based on the results, which then add nothing except data for data’s sake.

It doesn’t mean that you have to stick with what you have – far from it – but there are measured ways to test more than one programmatic provider, which I’ll be covering in my next post.