A common problem in online advertising is identifying which online ads are the most successful. For example, let’s assume that we are using 10 different online ads to promote a new mobile app. Each ad has a different number of impressions and clicks. Which ads are the best performing ones?
Let’s say we have the following set of data:
|Ad ID|Clicks|Impressions|Click Through Rate|
|---|---|---|---|
Just by eyeballing these numbers, it is difficult to see which ads perform best and which do not. Some challenges are:
- Ads with very different impressions might have similar performance (e.g. Ad3 and Ad4). So, are they both the same in terms of performance?
- Some ads have a small number of impressions. How confident can we be in our assessment of their performance?
- Which ad is the best overall taking into account both the impressions and the clicks?
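The article doesn’t spell out the statistical model it uses, but one common way to rank ads while accounting for sample size is to sort by the lower bound of a confidence interval on the click-through rate, rather than by the raw rate itself. Here is a minimal Python sketch using the Wilson score interval; the two ad figures come from later in the article (Ad 8: 54 clicks / 100 impressions, Ad 9: 10 / 110), and the helper name is my own:

```python
import math

def wilson_lower_bound(clicks, impressions, z=1.96):
    """Lower bound of the 95% Wilson score interval for the true CTR."""
    if impressions == 0:
        return 0.0
    p = clicks / impressions
    denom = 1 + z * z / impressions
    centre = p + z * z / (2 * impressions)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * impressions)) / impressions)
    return (centre - margin) / denom

ads = {"Ad8": (54, 100), "Ad9": (10, 110)}
ranked = sorted(ads, key=lambda a: wilson_lower_bound(*ads[a]), reverse=True)
```

The lower bound penalises ads with few impressions, so a high raw CTR on a small sample ranks lower than the rate alone would suggest.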
You can see here that the ads are ranked in order of performance, from best to worst. Secondly, we get a measure of confidence in each conclusion.
As a marketer, you want two things:
- Choose the best performing ads with a good degree of confidence.
- Give the well-performing ads with low confidence some extra impressions, until you learn whether their performance is actually good or was just a coincidence.
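One standard way to get this explore/exploit behaviour (the article doesn’t name its method, so this is a sketch of one option, not the author’s implementation) is Thompson sampling: model each ad’s CTR as a Beta posterior and serve the ad whose sampled CTR is highest. A minimal Python sketch with hypothetical statistics:

```python
import random

def choose_ad(stats):
    """Pick the next ad to show: sample a plausible CTR for each ad
    from its Beta(clicks + 1, misses + 1) posterior and take the max."""
    samples = {
        ad: random.betavariate(clicks + 1, impressions - clicks + 1)
        for ad, (clicks, impressions) in stats.items()
    }
    return max(samples, key=samples.get)

# Hypothetical stats: (clicks, impressions)
stats = {"Ad1": (400, 1000), "Ad8": (54, 100), "Ad9": (10, 110)}
next_ad = choose_ad(stats)
```

Ads with few impressions have wide posteriors, so they occasionally win the sample and earn extra impressions — exactly the “give low-confidence ads a few more tries” behaviour described above.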
So, in this case, if we had to pick one ad, it would be Ad 1. Ad 1 looks like a good performer, and we are also extremely confident in that conclusion. It would also be good to display Ad 8 a few more times, to see whether it is indeed as good a performer as we suspect it might be.
This approach is far more powerful than simply using the click-through rate as an indicator of performance, as it is based on a robust statistical model and takes confidence into account as well. Simply load all your ads in the required format and you’re good to go! Feel free to get in touch with any comments, questions or suggestions.
Ad 8, which has 54 clicks over 100 impressions, gets the best performance score (close to a 40% click-through rate), but we are not confident in our conclusion, due to the small number of impressions. For Ad 9, however, which has a similar number of impressions to Ad 8 (110) but far fewer clicks (10), we are extremely confident in our conclusion that it is a decent performer. Why? Because its results do not deviate much from the mean measured across the rest of the ads. This intuitively makes sense: if an ad is not much different from the rest of the ads, it is easy to say with confidence that its performance is typical. However, for ads whose performance differs from the mean (either much better or much worse), we need more samples to reach a confident conclusion.
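To make the confidence point concrete: under a simple Beta-Binomial view (a simplification — the article’s model also pools information across ads, which this sketch ignores), the posterior over Ad 8’s true CTR is much wider than Ad 9’s. A Monte Carlo sketch in Python:

```python
import random

def credible_interval(clicks, impressions, draws=20000, seed=42):
    """Approximate 95% credible interval for the true CTR
    under a uniform prior, by sampling the Beta posterior."""
    rng = random.Random(seed)
    samples = sorted(
        rng.betavariate(clicks + 1, impressions - clicks + 1)
        for _ in range(draws)
    )
    return samples[int(0.025 * draws)], samples[int(0.975 * draws)]

lo8, hi8 = credible_interval(54, 100)   # Ad 8: high CTR, few impressions
lo9, hi9 = credible_interval(10, 110)   # Ad 9: low CTR, similar impressions
# Ad 8's interval is wider: a high CTR on a small sample is uncertain.
```

The wider interval for Ad 8 is exactly why a high raw CTR on few impressions deserves more impressions before we commit to it.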