- Posted by neefischer
- On 20 November 2018
First: why not use Drafts & Experiments by Google?
In my case I have a large number of campaigns that should be part of the same test. Google's A/B testing options always work at the campaign level. If all campaigns are included in one test, the number of cases is much higher and you get results more quickly.
Some thoughts on how to set up the A/B testing environment
- We have to split the account structure randomly into partitions that perform similarly.
- We have to ensure that the A/B testing partitions do not overlap. For example, one product may have keywords in different match types – if we split the same product between the A and B groups, the test results can be misleading.
- The splitting logic also depends on which test is run. If you test a feature that is only available at campaign level, e.g. Target CPA bidding, then you have to make the split at campaign level. If you test device bid adjustments at ad group level, you can split at ad group level, and so on.
- It is better to create more than two partitions. That gives you the option of also running an A/A test alongside your A/B test.
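The splitting logic above can be sketched in a few lines. This is a minimal, hypothetical example (the product-key field and the number of partitions are assumptions, not anything from a real account): hashing a stable product key instead of the campaign name guarantees that all campaigns and match types of the same product land in the same partition, and using four partitions leaves room for an A/A pair next to the A/B pair.

```python
import hashlib

def partition(product_key: str, n_partitions: int = 4) -> int:
    """Deterministically map a product to a test partition.

    Hashing the product key (not the campaign name) ensures that all
    campaigns / match types of one product end up in the same partition,
    so the A and B groups never overlap on the same product.
    """
    digest = hashlib.md5(product_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_partitions

# Hypothetical campaigns: (product key, match type)
campaigns = [
    ("product-123", "exact"),
    ("product-123", "broad"),
    ("product-456", "exact"),
]
groups = {(p, m): partition(p) for p, m in campaigns}
```

With four partitions, two could serve as the A/B pair while the other two run as an A/A sanity check. A real split would additionally be validated against historical performance so the partitions actually perform similarly before the test starts.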
The test case: Can a custom bidding model outperform Google's Target CPA bidding?
Google will tell you how smart their algorithms are. Machine learning. Some more buzzwords. Just trust the black box! Ehhhhm, wait a minute. Google has a generic bidding approach that has to work for every customer, and they do not know our business like we do. We also have more data than Google (YES – really!) to use for our own bidding solution. Simplified, the process looks like this:
- data munging
- feature engineering with the help of your business knowledge
- choose a model that works well for your target variable
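A minimal sketch of the first two steps, using only hypothetical data (all field names, numbers, and the smoothing prior are assumptions for illustration): raw report rows are cleaned, then a business-driven feature – a smoothed conversion rate per device – is engineered for a downstream bidding model.

```python
from collections import defaultdict

# 1) Data munging: raw report rows (assumed schema) -> typed records
raw_rows = [
    {"adgroup": "ag1", "device": "mobile",  "clicks": "120", "conversions": "6"},
    {"adgroup": "ag1", "device": "desktop", "clicks": "80",  "conversions": "8"},
    {"adgroup": "ag2", "device": "mobile",  "clicks": "0",   "conversions": "0"},
]
clean = [
    {**r, "clicks": int(r["clicks"]), "conversions": int(r["conversions"])}
    for r in raw_rows
]

# 2) Feature engineering: conversion rate per device, with a smoothing
#    prior so zero-click rows do not produce misleading 0/0 features.
PRIOR_CONVERSIONS, PRIOR_CLICKS = 1, 20  # assumed business prior
totals = defaultdict(lambda: [0, 0])
for r in clean:
    totals[r["device"]][0] += r["conversions"]
    totals[r["device"]][1] += r["clicks"]
features = {
    device: (conv + PRIOR_CONVERSIONS) / (clicks + PRIOR_CLICKS)
    for device, (conv, clicks) in totals.items()
}

# 3) Model choice: `features` would feed a regression model predicting
#    the target variable, e.g. a linear regression or a random forest.
```

The third step is deliberately left open here; as the next paragraph argues, the choice between model families usually matters less than the quality of the engineered features.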
The main difference to Google's bidding solutions is the feature engineering part. It can make a BIG difference in model quality. And that difference is normally more important than the lift from choosing model A (random forest regression) over model B (linear regression) or model C (stacked/ensembled models). Enough 🙂 Let's test!