12 Oct 2009

The Truth About Attribution Modeling [UPDATED]

Ah, Attribution Modeling.

It seems to be the new buzz phrase right now, and it’s what a lot of marketers are scrambling to achieve because they believe all other marketers have it. The reality is quite different: very few have reached this evolutionary point, and most are still quite some distance from doing so.

Definitely not a sexy topic, but an important one if your advertising programs are going to be optimized.

For some marketers, attribution is just sneaking on to their radar as they hear more and more about the benefits of investing in this area – whilst the marketer of 2008 wasn’t looking for this solution in earnest, the marketer of 2010 will be. Those who have begun to explore have uncovered a series of barriers, some technical and some organizational. It remains on their ‘to do’ list, but is perhaps being pushed further and further down by more pressing matters.

Last week I spoke on an SMX panel in NY to discuss the ‘attribution battle’ alongside Sara Holoubek (SEMPO President and chair of our panel), Roger Barnette (SearchIgnite), Kevin Lee (Didit), Alan Osetek (iProspect) and Tony Wright (WrightIMC). The key takeaway was that anyone solving this problem is ahead of the masses – in a room of approx 120 people, half had budgets of over $50k, about 8 of those were doing any kind of attribution, and only 2 kept their hands up when asked if they were happy with it.

I suspect the dissatisfaction is partly due to the expectations people carry into this work: perhaps hoping to see accurate models that explain the impact of every click, impression and social mention, and to tie each one back to the revenue generated. If so, they will definitely be disappointed, as attribution should be considered a macro exercise, not a micro analysis tool.

Hopefully this article can provide an overview of the important factors in attribution modeling, and of some of the choices that lie ahead of you.


Firstly, why do you want attribution modeling?
Have you actually stopped to think about that question? According to a recent Forrester survey, more than half of web decision makers think it will make them smarter and give them a better understanding of their customers’ online behaviour. When I talk to our iCrossing clients and push for a more granular reason, what I hear typically falls into one of two buckets – the need to understand the right media mix, and the need to take view-thru data from display into account.


Let’s take a typical client setup before attribution is introduced.

There will be our proprietary I2A tracking solution, which we use for monitoring the performance of our SEO and SEM campaigns and which can also feed us data on direct load. There will often be some affiliate software, an ad server (typically DoubleClick for us) if the client is running display, and finally a site-side analytics solution – Omniture and CoreMetrics being the most common for our clients. Each of those tools has a different approach to tracking, and each will have tracking code that fires at a different point in the click stream. Already we can see why 100% accuracy is not possible!

CoreMetrics will provide reports that show how all marketing efforts add up nicely to 100% of the revenue generated through the website. Whilst there are many ways to implement such a tool, we very commonly see a 30-day cookie window using a last-click look-back model. By its very nature this favours some channels over others, and it has no sight at all of post-impression display data, as no click was generated.
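To make that bias concrete, here is a minimal sketch of the last-click, 30-day look-back logic in Python (the field names and example figures are mine, not any vendor’s implementation). Note that display impressions never enter the calculation at all, because only clicks are recorded:

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(days=30)  # the common 30-day cookie window

def last_click_attribution(clicks, conversion_time, revenue):
    """Credit 100% of revenue to the most recent click inside the window.

    `clicks` is a list of (timestamp, channel) tuples -- display
    *impressions* never appear here, so post-impression display gets nothing.
    """
    eligible = [(ts, ch) for ts, ch in clicks
                if conversion_time - LOOKBACK <= ts <= conversion_time]
    if not eligible:
        return {}  # no click in the window: the sale shows up as 'direct'
    _, winning_channel = max(eligible)  # latest click takes all the credit
    return {winning_channel: revenue}

# Example: paid search gets full credit even though natural search assisted.
clicks = [
    (datetime(2009, 9, 20, 10, 0), "natural search"),
    (datetime(2009, 10, 1, 14, 30), "paid search"),
]
print(last_click_attribution(clicks, datetime(2009, 10, 2, 9, 0), 120.00))
# {'paid search': 120.0}
```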

Hence attribution modeling comes into play, to give display back the value we and the industry know it has. (See the case study at the end of this article for our latest numbers on the uplift display delivers to search and site traffic, or click here.)

The results from this exercise will naturally help solve the second common request, which is to understand media mix modeling and where to invest your spend. If the model demonstrates an impact on natural search from certain placements, then it might make sense to invest further in that area even if the ROI is low.


Secondly, let’s address the barriers to getting this done:
We have already considered the technical mismatch of data above, and the more you dig, the more potential problems will be uncovered. A nice place to start, though, is to do a little housekeeping: check that your cookies are all capturing orders and revenue in the same way (i.e. net or gross, inclusive or exclusive of sales tax, etc.), and then check that the cookie windows are set equally. A 30-day window is by far the most common, but that doesn’t mean it’s the right choice for your business – make sure you are considering the buying cycle of your product or service.
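As a sketch of what that housekeeping looks like in practice (the tax rate and the list of gross-reporting systems below are purely illustrative – check how your own tools are actually configured):

```python
# A rough sketch of revenue normalisation across tracking systems.
# Which systems report gross, and at what tax rate, are assumptions here.

SALES_TAX_RATE = 0.08          # hypothetical rate for illustration
REPORTS_GROSS = {"analytics"}  # systems assumed to include sales tax

def normalise_revenue(source, amount):
    """Convert every system's reported revenue to net of sales tax."""
    if source in REPORTS_GROSS:
        return amount / (1 + SALES_TAX_RATE)
    return amount

orders = [("ad_server", 100.00), ("analytics", 108.00)]
for source, amount in orders:
    print(source, round(normalise_revenue(source, amount), 2))
# Both lines print 100.0 -- only now do the two systems agree on the order.
```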

But are you ready for the internal battle? If your organisation has invested in a tool like CoreMetrics, that investment goes a long way beyond just the licence fee; you can bet a lot of folks have spent a lot of time tweaking it to be just so, and a lot on extensive training. And so a barrier can sometimes be reliance on legacy systems and ways of working.

CoreMetrics et al are great tools; the only problem in this case is their inability to see the effects of post-impression display. Therefore we are looking at a longer-term educational program, helping your teams understand why you want to consider an additional model that allows all marketing elements to be included.


Thirdly, how do you model the data?
On our panel, SearchIgnite showed a few screenshots of their attribution reports, and they definitely seem to do the job – though I couldn’t help wondering if they work too well. Roger explained how they tried several models for the client in question before settling on a cascading attribution model. What were they looking for, though? Were different models tried because the previous ones had not revealed the result everyone was looking for in the first place?

SearchIgnite have taken the right path by remaining flexible in their technology as this is such a new area, but does that flexibility create more problems than it solves? How do you determine which model is right for you?

First click?
Last click?
Weighted attribution?
Equal attribution?
Cascading attribution?

The problem with flexibility is its ability to be flexible!
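To show how much the choice matters, here is a small Python sketch that applies four of these models to the same four-touch path. The path, revenue figure and weights are all hypothetical, and ‘cascading’ is interpreted here as each later touch earning double the credit of the one before – one plausible reading, not SearchIgnite’s actual implementation:

```python
# Apply different attribution models to one conversion path.
# The weighting schemes below are illustrative choices, not industry standards.

def attribute(path, revenue, model):
    n = len(path)
    if model == "first_click":
        weights = [1.0] + [0.0] * (n - 1)
    elif model == "last_click":
        weights = [0.0] * (n - 1) + [1.0]
    elif model == "equal":
        weights = [1.0 / n] * n
    elif model == "cascading":  # assume each later touch is worth double the previous
        raw = [2 ** i for i in range(n)]
        weights = [r / sum(raw) for r in raw]
    credit = {}
    for channel, w in zip(path, weights):
        credit[channel] = credit.get(channel, 0) + round(w * revenue, 2)
    return credit

path = ["display", "natural search", "email", "paid search"]
for model in ("first_click", "last_click", "equal", "cascading"):
    print(model, attribute(path, 100.00, model))
```

Same order, four very different stories about where the $100 came from – which is exactly why no one in the room can agree on a model.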

If I am a marketer responsible for a display budget, I am going to push for a very different model than the marketer holding the SEM budget. One possible solution to the argument is to take the “daddy says so” approach – what do Forrester say? Conveniently, they have developed a model that is easy to understand and replicate, and it would make an ideal starting point.


But there must be a really simple solution?
Actually not, but there are many companies, approaches and tools that can help you in one form or another.

Technology vendors such as ClearSaleing and TagMan provide a universal tracking code that can be dropped on to your site and will tag all your other marketing pixels with the same unique code, so that the data matching can be done more efficiently. A tool like TagMan also manages your pixels away from the site, so tag changes no longer need IT resources.

iCrossing (and other agencies) approach this from the dashboard perspective. Our analytics team accept that clients have historical tagging situations, and they work to collect the data from those legacy systems and map it together to achieve the same outcome. The data can be presented quantitatively in Excel or qualitatively in a management dashboard.
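A much simplified sketch of that mapping step might look like the following (the record layouts and the shared order ID are hypothetical – real legacy exports rarely join this cleanly):

```python
from collections import defaultdict

# Toy exports from two legacy systems, keyed on a shared order ID.
# Real exports would need the revenue normalisation discussed earlier first.
ad_server_rows = [
    {"order_id": "A1", "channel": "display", "event": "impression"},
    {"order_id": "A1", "channel": "paid search", "event": "click"},
]
analytics_rows = [
    {"order_id": "A1", "channel": "natural search", "event": "click"},
]

# Merge both systems' records into one view per order.
merged = defaultdict(list)
for row in ad_server_rows + analytics_rows:
    merged[row["order_id"]].append((row["channel"], row["event"]))

for order_id, touches in merged.items():
    print(order_id, touches)
# A1 [('display', 'impression'), ('paid search', 'click'),
#     ('natural search', 'click')]
```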

There are also options to use ad serving tools and what is being called ‘path to conversion’ analysis; both DoubleClick and Atlas have moved in this direction, but the solution requires all data to flow through their system, and they typically only work for media spend, not NSO (natural search), affiliate etc.
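Stripped to its essentials, a ‘path to conversion’ report is doing something like this sketch (the event names and log format are hypothetical): order each user’s touch points by time and keep the sequences that end in a sale. The limitation above follows directly – in the ad-server tools, the log only contains media that the server itself delivered:

```python
from itertools import groupby
from operator import itemgetter

# Toy event log: (user_id, timestamp, channel).
events = [
    ("u1", 1, "display impression"),
    ("u1", 2, "paid search click"),
    ("u1", 3, "conversion"),
    ("u2", 1, "paid search click"),
]

# Sort by user then time, group per user, and keep converting paths only.
events.sort(key=itemgetter(0, 1))
for user, rows in groupby(events, key=itemgetter(0)):
    path = [channel for _, _, channel in rows]
    if path[-1] == "conversion":
        print(user, " > ".join(path[:-1]))
# u1 display impression > paid search click
```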

And Akamai could very well be one to watch. Akamai’s primary business is as a CDN (content delivery network), working with busy websites and ad servers to distribute their content globally across servers so that every viewer has a speedy experience. But this means a site’s content is already flowing through Akamai’s servers, and we can see from the click stream that they are dropping a cookie from their own domain. It may well be that they are seeing enough data to attribute across channels, but only time will tell.


What’s the payoff?
I felt like our audience at SMX were a little deflated by the end of the session; many had come for that one piece of info that would solve the problem for them, but left having discovered it’s actually harder than they first thought!

The trick is to get started, to take small steps and chip away at the problem.

To me it’s like when I was a kid and I would go find my kite that was inevitably at the bottom of the cupboard somewhere, all tangled and knotted up. I could spend the best part of the day untangling every knot in that string and miss the best part of flying time, or I could get about 80% of it done and go out and fly my kite. The next weekend perhaps I could invest a few minutes into unravelling the remaining 20% and fly it that bit higher.

Of course, I could have cut the string off completely and just started again! And some of you reading this will have such a complex legacy tracking system that you could spend your career trying to unravel those knots. A fresh start is an option, consider cutting off the string.

But when attribution works and the problems are solved, the resulting data can be very insightful.

iCrossing published a capabilities deck on a travel client that showed what sort of information becomes available when this problem is solved. There have been many research papers looking at the display and SEM overlap, but very few that also took NSO and direct load into account.

You can view the report here. The headlines from the campaign are listed below, and what’s important to note is that without this data, the client would likely have removed their display budget and seen their overall marketing ROI go down.

- 13.7% increase in natural search visitors
- 2.5% increase in unique visitors
- 14.8% increase in paid search click thru rate
- 11.2% decrease in paid search cost per click

[UPDATED: See follow up - An update on attribution modeling - little has changed in a year]

[UPDATED: Aug 2011: See 3 Simple Alternatives to Attribution Modeling on Search Engine Land]

1 comment:

Ecommspark said...

Before the marketer starts building an attribution model, he/she should address the questions below:

- Does the company plan to build an online or combined online and offline attribution model?

If the company is involved in online activities only, then an online attribution model is the solution. It definitely gets more sophisticated when offline initiatives are added to the mix.

- Does the company have enough human, financial and technical resources?

Marketers should understand that building a custom attribution model is a very challenging process which requires human, technical and financial resources. It is not as easy as going into DoubleClick, for instance, setting up the rules in the Multichannel solution and getting immediate results from the queries running on the back end.

- Does the company have access to the data needed (sufficient historical data)?

The marketer will need at least 12 months of historical data.

- What is the source of the information?

Is it a web analytics tool or an ad-server platform? Any analytical tool that tracks marketing campaigns’ performance might become a valid source of information.

- How do the processes work on the back end?

The marketer should understand how the data is collected on the back end, and how it is linked and presented in the reports.



http://www.ecommspark.com/steps-to-take-before-building-attribution-model/