Show Notes
Marketers can iterate on advertising and other paid media by running experiments and then mining the data to identify the most effective path to the audience.
How to iterate on advertising or paid media:
- There are two types of paid media placements: Direct Response and Brand Awareness.
- Direct Response is ideal for experimentation because we are asking the prospect to take action, which produces measurable results.
- Brand Awareness does not typically have a call to action, which makes measuring its effectiveness difficult.
- This applies to both digital and traditional advertising.
How to set up experiments:
- Design experiments to identify the most cost-effective path to your audience. These can include:
- Placement Methods
  - Direct vs. Programmatic
  - Programmatic vs. Network
- Targeting Methods
  - First-Party Data vs. Behavioral (Third-Party)
  - Behavioral vs. Contextual
  - Contextual vs. Look-Alike Audiences
  - Look-Alike Audiences vs. Individual Site Whitelisting
Resource: Designing An Effective Marketing Experiment
Resource: The Role of Experiments In Iterative Marketing
How to decide what experiment type to use:
- A/B Testing
- When testing creative, A/B is a solid option. It allows for quick iteration and knowledge gathering.
Resource: How Statistical Significance Makes Your Results More Reliable
Resource: How to Ensure Your A/B Testing Gets Results
- Multivariate Testing
- The lack of a single controlled variable makes this type of experiment harder to interpret, though it is often more practical when testing many media targeting options at once.
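When comparing two placements in an A/B test, the statistical-significance check the resources above describe can be sketched with a standard two-proportion z-test on click-through rates. This is a minimal illustration, not a tool from the show; all impression and click counts below are hypothetical.

```python
from math import sqrt, erfc

def ab_significance(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test on click-through rates.

    Returns (z, two_sided_p). A small p-value (commonly < 0.05)
    suggests the difference in CTR is unlikely to be chance.
    """
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # Pooled rate under the null hypothesis that both variants share one CTR.
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical example: direct placement (A) vs. programmatic (B).
z, p = ab_significance(clicks_a=120, imps_a=50_000, clicks_b=165, imps_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p < 0.05, so B's lift looks real
```

The same check applies to conversion rate instead of click rate when you have enough conversions; as discussed later in the episode, optimizing on clicks alone can mislead.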
Keys to a successful experiment:
- Never let time skew your experiments. Run an experiment for at least 2 weeks and no more than 8 weeks, and run all variants concurrently. Keep in mind that day of week, holidays and business cycles (month-end, for example) can all impact media performance.
- Apply what you learned from direct response ads to your brand awareness advertising.
- Document any insights learned from direct response experiments, and update relevant personas and customer journeys accordingly.
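When a multivariate media test spreads spend across several targeting methods, the read boils down to cost per action per variant, plus the caution from the episode that small segments lack the data to be trusted (and may need to be grouped). A rough sketch of that comparison, with entirely hypothetical spend and conversion figures and an arbitrary minimum-conversion threshold:

```python
# Hypothetical spend/conversion figures for four targeting methods.
variants = {
    "first-party": {"spend": 1200.0, "conversions": 48},
    "behavioral":  {"spend": 1500.0, "conversions": 30},
    "contextual":  {"spend":  900.0, "conversions": 36},
    "look-alike":  {"spend":  800.0, "conversions":  6},
}

MIN_CONVERSIONS = 25  # below this, treat the result as directional only

def rank_by_cpa(variants, min_conv=MIN_CONVERSIONS):
    """Rank variants by cost per action, sinking low-data variants last."""
    rows = []
    for name, v in variants.items():
        cpa = v["spend"] / v["conversions"]
        reliable = v["conversions"] >= min_conv
        rows.append((name, cpa, reliable))
    # Cheapest CPA first; variants without enough data sort to the bottom.
    return sorted(rows, key=lambda r: (not r[2], r[1]))

for name, cpa, reliable in rank_by_cpa(variants):
    flag = "" if reliable else "  (too few conversions -- consider grouping)"
    print(f"{name:12s} CPA ${cpa:6.2f}{flag}")
```

With these made-up numbers, first-party and contextual tie as the cheapest paths, while look-alike is flagged as too thin to judge rather than declared a loser.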
Iterate for future use:
- Make sure that any media consumption insights are documented in the persona for future use.
Charity of the Week:
- Lighthouse Youth Center (lighthouseyouthcenter.com), a community center offering youths ages 10 to 18 a safe place to gather. Submitted by Steve James of Stream Creative.
We hope you want to join us on our journey. Find us on IterativeMarketing.net, the hub for the methodology and community. Email us at podcast@iterativemarketing.net, follow us on Twitter at @iter8ive or join The Iterative Marketing Community LinkedIn group.
The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste.
Producer: Heather Ohlman
Transcription: Emily Bechtel
Music: SeaStock Audio
Onward and upward!
Transcription
Steve Robinson: Hello, Iterative marketers! Welcome to the Iterative Marketing podcast, where each week, we give marketers and entrepreneurs actionable ideas, techniques and examples to improve your marketing results. If you want notes and links to the resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. Now, let’s dive into the show.
Hello everyone, and welcome to the Iterative Marketing podcast. I am your host, Steve Robinson, and with me as always is the ever dependable Elizabeth Earin. How are you doing, Elizabeth?
Elizabeth Earin: I am great, Steve. How are you?
Steve Robinson: I am good. I am one baby heavier. I don’t know. We added a child to our family.
Elizabeth Earin: And how is that going?
Steve Robinson: Well, I would say a little bit different than anticipated. So this is our third child and I figured that we’d kind of be old-hat about the baby thing at this point, and from that standpoint, I was right. We have got the rhythm of getting what little sleep we can get and we are not surprised by any bumps in the road with the baby. What I didn’t anticipate is exactly what the impact of being outnumbered would be, moving from a man-to-man to a zone defense.
Elizabeth Earin: Yeah, because you went from two to three now.
Steve Robinson: Right, right. And one would think that would be like a small incremental change, but no, this threw our entire world into a complete tizzy and tailspin. So what are we talking about today?
Elizabeth Earin: Today, we are talking about continuous improvement in paid media.
Steve Robinson: As we roll through this topic today, we are going to touch on how we can iterate or experiment on paid media and specifically which media is right for experimentation, which media you can’t experiment directly on. We’ll talk about some keys to success and then wrap up on how to get the most out of this experimentation.
Elizabeth Earin: You mentioned that there’s some media that you can experiment on and some that you can’t. What are those two types of media?
Steve Robinson: Well, the key is to separate your media into direct response and awareness. And we will link to a blog post I wrote on this topic a while back, but basically if you have a direct response ad, you are asking your prospect to do something, whereas awareness may not even have a call to action on it. You are simply trying to raise awareness of a brand or a particular message, and you don’t want to clutter it by asking the prospect to do something while they are trying to read that ad. Direct response works really well for experimentation because you keep the creative the same and simply change where you are putting it. When we are trying to get the right message to the right person at the right time, if we take message out of the equation and make sure the time stays the same, all that’s left is the right person. So we are able to experiment with different means of delivering that same message, find out which means has the highest response rate, and therefore know how to reach our consumer better in the future.
Elizabeth Earin: Now you mentioned that this doesn’t really apply to awareness or brand advertising because we don’t have that same call to action. We don’t have that thing that we can measure the effectiveness of them taking action on but we still have the opportunity to apply what we have learned with our direct response to our brand advertising, correct?
Steve Robinson: Yeah, and we’ll talk a little bit more about that in depth later. I think the key right now is when you are looking at where you want to run an experiment, make sure that you have some action, some metric on the other side of that media that you can measure as a success.
Elizabeth Earin: I think another important point is that this is not limited to digital advertising. If you have got a strong call-to-action and you have got a way to measure that, whether it be a phone number or a vanity URL, then you still have the ability to test this with your traditional advertising as well.
Steve Robinson: Absolutely. You just want to make sure that you can accurately predict that all of your audience is going to follow your direction, right? Because you don’t want them just going and Googling your name and coming in through a side door that you aren’t measuring. But in general, yeah, if you have got the tracking in place and the KPI, then you know exactly how to optimize it.
Elizabeth Earin: Wonderful. Now there are a few different ways that we can experiment. We are not limited to just one thing. There’s actually quite a few where we have the opportunity to really refine our ads and make them more effective. Do you want to talk about being able to test different placements?
Steve Robinson: Yeah, absolutely. And in all cases, you are just trying to figure out what is the most effective path to get my message to my audience. And so anytime you are presented with multiple different paths to get there, it’s a great opportunity for an experiment. The most obvious and one that we run with a few of our clients is direct placement ads, where you are calling up a pub and negotiating a buy versus going through either programmatic or an ad network, where you don’t really know where your ads are going to run. It’s a great experiment to run.
Elizabeth Earin: So direct placements definitely have an advantage in some ways. You know exactly what publication you are going to be appearing in. You know exactly what audience it is that you are reaching and so there are some benefits to that, yet it’s not always necessarily the best case and we actually had a client where this was very apparent.
Steve Robinson: Yeah, for one client, it was an absolute landslide where we found that running programmatically, we were able to reach the same audience at significantly reduced cost per impression, cost per click, cost per action, every metric we could find was better, but that’s not always necessarily the case. And we have had instances where we have advised clients in the other direction, where even though programmatic slightly outperformed a direct placement, there are those intangibles you can get with the direct placement that might make that balance a little bit different. So, for example, if you are B2B and you are running in a trade magazine that has a lot of clout in that industry, you are actually going to get a little lift in authority simply by having your brand placed right next to theirs. That’s not going to come out in your direct response measurements, so you are going to want to account for that and kind of discount your results if programmatic or network shows only a slight improvement. Either way, it’s still a very valid comparison to run.
Elizabeth Earin: So another way that we can experiment is actually within the targeting methods of either programmatic or network buys. When we are taking a look at demand-side providers and ad networks, they often give us a multitude of ways to target our media, so we can look at things like contextual targeting, behavioral, look-alike audiences, individual site white-listing, these are all different ways that we can go and we can test the best way to get our message to the right people at the right time.
Steve Robinson: Yeah. Often, there are multiple ways even inside of those different targeting methods. So beyond individual site whitelisting, oftentimes you’ll get multiple third-party behavioral audiences that look like they are a match for your audience, and you can actually pit those individual third-party audiences against each other and figure out which one is the best way to reach your audience.
Elizabeth Earin: And that’s a great way to experiment because that behavioral data can be kind of expensive, kind of pricey at times. So rather than choosing blindly from the ten you have to pick from, it’s a great way to find the one or two or three that are most effective for you.
Steve Robinson: And we have also found that if you segment your audience and split them into bands of different demographics, whether that’s male/female, education, income level or simply age, that you’ll find that there are sweet spots that you can significantly improve your cost per action or cost per thousand.
Elizabeth Earin: Another way to experiment is with creative sizes and ad formats. And this really comes down to the idea that different sizes perform better or worse depending on the targeting and the creative. And a great example of this is when you are looking at GPS or geo-targeting, a lot of times this is limited to mobile ad sizes. And so in this particular case, you want to make sure that you are including mobile ad sizes because other ad sizes aren’t going to perform as well. We have also seen this based on industry. I was actually reading a really interesting article a few weeks ago that talked about the different ad sizes, and there’s so many ad sizes out there, but there’s this one ad size, 88 x 31, a very, very small, kind of oddly shaped ad, and it makes up 2% of ad impression share. And that’s because it is almost exclusively used in the financial industry, so while no one else is using it, they are using it at such a high rate that it makes up 2% of ad impression share, which I find to be so incredibly interesting. So that’s another great place to test: finding the right ad format, or even the right creative sizes, to deliver.
Steve Robinson: That’s fascinating. That’s such an oddball size. I never would have expected that it would have that sort of penetration.
Elizabeth Earin: I want to see what one looks like because I’m having a really hard time visualizing what your actual message would be on an 88 x 31–
Steve Robinson: Yeah.
Elizabeth Earin: –size ad.
Steve Robinson: That’s itty-bitty and weirdly shaped. Another place we found is animation. This one you have to be a little bit careful with, because animation is expensive to produce, so if you run an experiment only to find out it didn’t work, that can be an issue. But animation has been proven to either help or hurt depending on the circumstances. I can’t tell you the number of blog posts I have read that said animated ads outperform static ones, and then a couple weeks later, that static ads outperform animated ads. The fact of the matter is that the conditions of your individual program or campaign, or whatever you call it, will drastically impact the performance of animation versus static, and it’s definitely something worth testing.
Elizabeth Earin: And there’s a lot of different variables there. I mean, you are talking about where it’s being placed, what your messages are, who it is you are trying to reach and what other messages you are competing with, so that’s definitely something to add to the list in terms of determining what the right creative is for you.
Steve Robinson: You can also experiment on whole channels. You are not just limited to experimentation within one channel. So you can now compare the effectiveness of Twitter versus Facebook, or display versus video versus social and understand where you are going to have the lowest cost per action or the best response rate for the given program that you are trying to run.
Elizabeth Earin: Yeah. We actually ran a test for an industrial client of ours where they were targeting the automotive industry and we wanted to test the effectiveness of Twitter, Facebook, display and YouTube, and we found that for some audiences, Twitter was not a good channel for us to be advertising in, while for others, it was great. In this particular case, it was Mexico. We had some ads targeted to people in Mexico and Twitter just did a phenomenal job, whereas in the United States, not so much.
Steve Robinson: The only thing you want to keep in mind is when you change channel, you are probably also going to be changing formats, so you don’t know for sure whether the channel performed better or if the restrictions or features of that particular format performed better. And with that same – actually, I think it was the same program. We ran YouTube against a variety of other channels and found that from a direct response standpoint, obviously YouTube didn’t perform very well. So we pulled it out of the experiment, and said we are going to look at this as if it were a brand or awareness advertising. We are not going to look at it as if it’s direct response and we are not going to hold it to the same metrics. We will leave it in there because we know that the targeting works from other experimentation we were running with that same targeting but we can’t – it’s apples to oranges. We can’t include this in the experiment.
Elizabeth Earin: So we have talked about what we can experiment on. How do we set up these experiments?
Steve Robinson: I think it’s important to note that this is different from testing creative. When you are testing creative, the A/B test is king, because in an A/B test you have a control, version A of your creative, and a variable, which is version B of your creative. You are changing one thing, and that one thing gives you the insight that you are able to walk away from that particular experiment with. When you are testing media and how to reach your particular audience, oftentimes an A/B test just isn’t really practical, and a multivariate test, where you have different versions, different opportunities to reach your audience, all tested simultaneously, works better. A multivariate test is basically like throwing spaghetti at the wall and seeing which piece sticks. You have A, B, C, D, E and maybe F that you are throwing out there simultaneously.
Elizabeth Earin: Yeah. And if you have several ways to target your media, this could be the most effective way to figure out which is best. We actually had a client that was using a variety of different third-party audiences that we wanted to test, and we started out using A/B testing and very quickly realized, oh my gosh, there were something like 20 lists, and it was going to take us forever to get through them all and figure out what was effective. We were also running into some of the issues we are going to talk about in the next few minutes about setting up a successful experiment, so we ended up switching from an A/B test to a multivariate test, and we were able to more quickly identify which of those lists made the most sense for this particular client.
Steve Robinson: And the only trick to a multivariate test is making sure that each one of your versions, whether it’s different targeting methods or different individual sites that you are whitelisting, has enough data behind it to have some statistical significance. So for smaller buys or smaller publications or smaller third-party audiences, sometimes the audience isn’t big enough, and in that case, we found that grouping things together can help overcome that issue. I think this is a great opportunity for us to take a quick break. So without further ado, let’s figure out how we can help some people.
Elizabeth Earin: Before we continue, I’d like to take a quick moment to ask you Iterative Marketers a small but meaningful favor and ask that you give a few dollars to a charity that’s important to one of our own. This week’s charitable cause was sent in by Steve James of Stream Creative. Steve asks that you make a contribution to the Lighthouse Youth Center, a community center offering youths age 10 to 18 a safe place to gather for recreational activities, to get help with homework and enjoy the positive influence of adult Christian mentors. Learn more at lighthouseyouthcenter.com or visit the link in the show notes. If you would like to submit your cause for consideration for our next podcast, please visit iterativemarketing.net/podcast and click the “Share a Cause” button. We love sharing causes that are important to you.
Steve Robinson: And we are back. So before the break, we talked about the different places that you can run an experiment and what type of creative you want to run an experiment versus what type you don’t. We talked about A/B versus multivariate testing. Let’s hit on some keys to success and how you can really apply the learnings that you have from your media experiments.
Elizabeth Earin: And we kind of mentioned this in our last example, but time is very important and we want to make sure that we don’t let time impact our experiment. Do you want to talk a little bit about what that means?
Steve Robinson: Yeah. I think there are a number of ways time can really screw up an experiment if you run different versions at different times, whether that’s a single version A or multiple versions in a multivariate test. The biggest impact, I think, when we are looking at media is something called the mere exposure effect. What the mere exposure effect says is that simply being exposed to a message repeatedly over time increases the viewer’s trust in, and response rate to, that message. So if you run an experiment with an audience that has seen your messaging more than another audience has, you are going to run into problems. The same thing is true on the other side: as a message stays in market too long, eventually it saturates, and that particular creative drops in effectiveness. So you want to make sure that you are running your experiments concurrently, so that no one group has an opportunity to see the message more than another group.
Elizabeth Earin: And that way, you are also helping to control for any of those other variables by making sure that you are using the same timeframe. A good example of this, especially from a B2B standpoint, is fiscal budgets: if you are running at month-end, or at a time when budget money has to be spent, that could be impacting your response rate in a way that you may not see any other time of the year.
Steve Robinson: Yeah. And running during the summer, during people’s summer vacations, even days of the week, if you happen to have more Saturdays in one group than you have in another group, that will totally skew your responses and may give you either a false response or make you think something is significant that isn’t.
Elizabeth Earin: So what’s the right timeframe to take into account holidays and days of the week, and the right length of an experiment? In our experience, we found running an experiment for a minimum of two weeks and a maximum of eight weeks is really that sweet spot. Obviously, there are always other factors that come into play and it may be different for your particular industry or for your business model, but in general, what we found is that two- to eight-week period seems to work best.
Steve Robinson: Yeah. And another thing you want to try and control for, related to time, is frequency. Often, you’ll be comparing two different channels, or direct versus programmatic or ad network placements. And if you looked at the raw data that came back out, you might find that the direct placement ran at a significantly higher frequency than the programmatic did, where you were hitting the same people over and over and over again. That frequency is going to change the response rate. If you can at all control for it, it’s definitely worth the effort of setting frequency caps where you can, to make sure that you are delivering those ads at the same frequency to the audience you are reaching through those different mediums, because otherwise it sort of invalidates your experiment.
Elizabeth Earin: Something interesting that we have seen is that we often have clients come to us and say that some content has outperformed what they have run before. So, for example, we have got a client who’s running a spider chart creative right now. A few months ago, they were running a white paper. And so when they looked at the two of those, they tried to compare which one performed better than the other. That’s hard to do because in the time between when that old content ran and when this new content has run, the landscape has changed, and it’s changed in a number of different ways. One, the audience that we are targeting has greater awareness of our brand because they originally saw the white paper. They have seen some brand ads since then and now they are seeing this new spider chart, so they are more familiar with our brand. They trust us more. They are more likely to click on our direct response ads. Another factor that may be influencing the response rate is that market conditions may have changed. Again, if we were targeting one over summer and now we are in fall, how people are interacting and what’s top-of-mind for them may all be different things factoring into how people respond to our ads.
Steve Robinson: So bottom line: be very mindful of time and timing and frequency, and this idea that when matters just as much as how you are directing that media. Control for it to the best of your ability when you are executing these experiments. Another, completely different key to success is making sure that you are looking at the right metrics to determine success. If you can’t optimize on the action the consumer takes after the click because there just isn’t enough data there, you at least want to be looking at it and not just optimizing on click rate, because delivering ads to an audience that is click-happy doesn’t necessarily translate to an audience that is ready to buy. A key example of this, and one that comes up over and over and over again, is when we pit mobile versus desktop, because you’ll find that your mobile click-through rate is significantly higher than your desktop click-through rate. That’s not because you have magically found this awesome audience that is glued to their smartphones, right? It’s because we have fat thumbs, and your click-through rate will always be higher on a mobile device than it will be on a desktop device. When you go and dig a little bit deeper and actually start looking at the conversion rates or engagement rates behind that audience, you’ll find it tells a very different story: a lot of that traffic is bouncing right away, sometimes even before your analytics suite has time to load and tell you that they got there.
Elizabeth Earin: You know, Steve, I’m so glad you brought that up because this really is a phenomenal point. I think that so often we get hung up on surface metrics, on what the data appears to be telling us, and we are not actually looking deeper into the data. Yes, you may have those clicks, but if they are not converting, if they are not leading to your ultimate goal, then they are not necessarily helping our program advance or helping our business accomplish what it needs. So I think this is a really great point. I am glad that you touched on it.
Steve Robinson: We have been beating this direct response horse this whole episode, talking about click-through rate and conversion rate and how you know you have got the right audience because they are taking action. And we said early on that you can take what you learn in your direct response experiments and apply it to brand, so let’s talk briefly about how you do that.
Elizabeth Earin: Because our direct response ads – again, we mentioned this before, but since they have an action that we are asking someone to take, we can measure the effectiveness. Our brand awareness ads don’t have that. However, with that being said, we are typically targeting the same audiences. We are using similar channels and so we can take what we learned from those experiments and then apply them over to our awareness ads. And so through that experimentation, we are able to figure out how best to reach our audience even if it’s with a brand awareness ad.
Steve Robinson: So for example, we have a software client that we are currently working with where we are targeting different direct response ads depending on where people are in their buyer’s journey. So if you are early in the buyer’s journey, at the “See” stage, you are going to get a different ad versus later in the journey, but along the whole journey, we are making sure that we are reinforcing the brand through some pure awareness ads, and those don’t even have a call to action on them. So we have no way of measuring them, but since we are running direct response ads through the whole journey, we do have the ability to take what we learned there and then target those brand ads the same way that we are targeting the direct response ads. Every time we make a tweak to direct response, we go and make the same tweak over on brand, and now we are able to make sure that we are getting the brand ads in front of that right-fit audience for the direct response. Likewise, we also learned that as people fall later in the journey, our audience gets smaller and smaller, so we can’t optimize our direct response ads late in the journey in isolation. We have to again take what we learned from those broader audiences earlier in the journey and apply those same optimizations to the ads that we are targeting later in the customer journey, because in that case we just don’t have enough data. The audiences are too small.
Elizabeth Earin: So we can take what we have learned from our direct response ads and apply that to brand awareness, but we can also apply what we are learning from our direct response ads across our other marketing efforts, and this ties specifically back to our personas and, in some cases, our customer journeys. So it’s important, as we are moving through this process and gaining these insights, that we update our personas based on what we have learned. Oftentimes, what we learn about our media targeting through these experiments will help us figure out how we want to target specific personas in the future. We ran into this in a really very clear-cut case with a food service equipment client of ours, where we ran an experiment testing all of their direct buys to determine which publications were really hitting their target audience, their target personas. This helped them because it allowed them to cut those underperforming placements and transfer that spend to placements where the ROI was better and more in line with their marketing goals. A side piece that came out of this, though, is that once we figured out which publications were really hitting their target audience, this could be applied to other marketing efforts, specifically PR. As they were working on press releases and some sponsored advertising pieces, they were able to determine which of the publications we had learned about in the experiment made the most sense to target with those PR efforts.
Steve Robinson: The key is taking that information and putting it in a central repository which is usually going to be your persona. Because if you are targeting media along the lines of a persona or a group of personas, you are going to be able to understand what their media consumption habits are, where they hang out, and be able to document it there. Let’s take a moment to sum up here. So, what have we learned today Elizabeth?
Elizabeth Earin: So I think number one, we have learned that experimenting with your direct response ads is the right thing to do. Brand awareness, not so much, but you can take the lessons that you have applied from those direct response ads and apply them to your awareness advertising.
Steve Robinson: We learned you can experiment on different channels, different placements, different formats, different targeting methods within the same channel. All of them are ripe for experimentation. Anytime there’s a fork in the road and there are multiple ways you can go to reach your audience, you are going to want to take the time to put an experiment in place.
Elizabeth Earin: And you can use A/B or multivariate testing to find the best way to reach your audience. Either one works. There are pros and cons to both. See which one makes the most sense and then run your experiment.
Steve Robinson: And then we also want to make sure that any media consumption habits, any insights we are able to identify through our experimentation, get documented. Update our personas, and then try and use them in other ways through our other marketing efforts, especially PR. I mean, that’s just ripe for this data. At this point, I want to thank everybody for making time to join us again this week. We really appreciate your listenership. If you have a moment, we’d love it if you popped out to iTunes and gave us a quick review, please, please, please. It’s kind of dead out there. We know you are listening. We know you will come back and you love us. Show us some love on iTunes. In the meantime, until next week, onward and upward!
Elizabeth Earin: If you haven’t already, be sure to subscribe to the podcast on YouTube or your favorite podcast directory. If you want notes and links to resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. You can also follow us on Twitter, where our username is @iter8ive, or email us at podcast@iterativemarketing.net.
The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste. Our producer is Heather Ohlman with transcription assistance from Emily Bechtel. Our music is by SeaStock Audio, Music Production and Sound Design. You can check them out at seastockaudio.com. We will see you next week. Until then, onward and upward!