Show Notes
- Experiments must use direct-response content, which asks the target to do something. That measurable action generates the data you optimize against.
- Test creative and content on landing pages, banner ads, Google AdWords, native ads and email.
- Insights gained from testing carry over to other mediums: digital testing insights can inform traditional marketing, and direct-response insights can inform brand advertising.
- Experiments must include a control and a variable. The control is what stays the same; the variable is the one thing you change. Document both to stay organized.
- Design experiments to test only one variable at a time. Isolating one variable proves that a result or effect from the test can be attributed to a single change. For example, if testing calls to action on banner ads, keep the banner color, timeframe, channels, visual hierarchy and font the same on both banners, and change only your message or CTA.
- When you place content, make sure each viewer sees the same version on every occurrence. Tools like Convert.com, Optimizely or Google Experiments accomplish this.
- Determine a KPI (key performance indicator) and get as close to the money as you can. For example, a manufacturer with complex distribution would measure actions that indicate leads, like clicks out to a distributor’s website or inbound phone calls.
- Segment experiments by persona. Every persona responds differently to the same stimulus, and those differences give us insights into the persona.
- Limiting each experiment to a single persona shows you the most efficient way to reach that persona.
- Channel tests are valuable too: test channel against channel, or test targeting within a single channel. For example, run the same content with interest targeting (control) versus behavioral targeting (variable) on Facebook to learn which yields the better traffic.
- Strive for statistical significance when analyzing results: if you ran this experiment again, what is the probability the outcome would be the same? Use a statistical significance calculator to determine that confidence. The higher the percentage (95% or more), the more significant the result. But keep other factors in mind when applying insights based on statistical significance.
- Experimentation and optimization is the fifth actionable component of Iterative Marketing.
Charity of the Week:
Blue Dog Rescue
The Six Actionable Components are the actions we take as marketers to implement Iterative Marketing. They don’t have to be implemented all at once. They are:
- Brand Discovery: Uncover how your buying audience feels about your product or service and how they rationalize the decision to buy.
- Persona Discovery: Document the individuals involved in the buying process in a way that allows us to empathize with them in a consistent way.
- Journey Mapping: Plot the stages and paths of the persona lifecycle, documenting each persona’s unique state of mind, needs and concerns at each stage.
- Channel and Content Alignment: Align every piece of content and marketing channel/activity to a primary persona and primary marketing stage, identifying new channels and content needs where opportunities exist.
- Experimentation and Optimization: Conduct thoughtful experiments designed to produce statistically significant business insights and apply the results to optimize performance.
- Reporting and Feedback: Report and review data and insights to drive decisions in content and strategy, as well as information to be used by the organization as a whole.
We hope you want to join us on our journey. Find us on IterativeMarketing.net, the hub for the methodology and community. Email us at [email protected], follow us on Twitter at @iter8ive or join The Iterative Marketing Community LinkedIn group.
The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste.
Producer: Heather Ohlman
Transcription: Emily Bechtel
Music: SeaStock Audio
Onward and upward!
Transcription
Steve Robinson: Hello, Iterative Marketers! Welcome to the Iterative Marketing Podcast, where each week, we give marketers and entrepreneurs actionable ideas, techniques and examples to improve your marketing results. If you want notes and links to the resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. Now, let’s dive into the show!
Hello everyone, and welcome to the Iterative Marketing podcast! I’m your host, Steve Robinson, and with me, as always, is the enlightening and entertaining Elizabeth Earin. How are you doing today, Elizabeth?
Elizabeth Earin: I’m good, Steve. How are you?
Steve Robinson: I am doing great. Spent last night planning our family vacation this summer.
Elizabeth Earin: Oh, fun! Where are you guys going?
Steve Robinson: We are going to beautiful South Haven, Michigan. Not someplace I’d probably normally go, but we have a wedding to go to there, and it turns out it’s a beautiful place, so we are going to rent a house and hang out for the week.
Elizabeth Earin: I think some of the best vacations I have been on are those ones that are maybe kind of off the beaten path but they end up being so much fun.
Steve Robinson: Yeah, it will be nice. It will be nice and low key. And we are renting a house, so the kids can be as loud as they want. We are not going to disturb the people above us, and we are not trying to get in and out of a hotel. It’ll be good, it’ll be a great, relaxing vacation, as relaxing as a vacation can be with a 2-year-old and a 4-year-old, right?
Elizabeth Earin: Very true, very true.
Steve Robinson: Excellent. So what are we talking about today?
Elizabeth Earin: Today, we are talking about what it takes to design an effective marketing experiment.
Steve Robinson: As we get started here, there are two ways that we use experiments, right? One of them is to refine our content, to make sure our content is more on target, and to develop the insights that we get out of doing that. And the other is to refine our channels, or how we are delivering that content. Should we talk about channels or content first?
Elizabeth Earin: Why don’t we start with content?
Steve Robinson: When we talk about content, the first thing that comes to my mind is that we need to emphasize that when you are running an experiment on content, you are usually running direct-response content, so it’s either a landing page or an ad or an email that’s asking someone to do something, right? So, can you think of a good example of content or creative that would be asking someone to do something?
Elizabeth Earin: Yeah. Actually, I think if we take a look at the real estate market, it’s something that we all have experience with, and so an example of direct response would be to request a showing. It’s an action that we are asking the user to take and it’s something that we are able to then measure.
Steve Robinson: Excellent. And so taking that example a little bit further, you would test a couple of different creatives, each asking someone to book a showing, and then based on the responses, you would gain some insights. Maybe you did your testing digitally, but then you could apply those insights if you were doing direct mail, print advertising or some other things offline, right?
Elizabeth Earin: Yeah. By using the direct response, we can measure those results and then we can take that into things where we may not be able to measure as easily like traditional advertising. We also have the ability to apply the insights that we have learned from those direct response ads to our brand advertising as well.
Steve Robinson: That implies that the results that we get from these experiments really derive some insights around our personas, right? Because we can’t just be testing a button color and expect to take that over to brand ads. We have to be learning more about our target and understanding what really motivates them, whether from an emotional standpoint or from a features-and-benefits standpoint, right?
Elizabeth Earin: Definitely. And so in terms of our real estate example, you can test landing pages and look at the various types of properties. You can use banner ads to test different types of properties. With your Google AdWords, you can set up experiments that may be targeting not so much the buyers of the homes but the sellers, trying to get new business that way. In terms of email marketing, you can do a split test with some of the new property listings that you may be sending out to people. And then there are a couple of other opportunities. But really, it comes down to identifying those things where you can make some small changes and trying to figure out what’s most effective for the people you are trying to target.
Steve Robinson: When you talk about making the small changes, we should talk about how you set up that experiment and where you make those changes, right?
Elizabeth Earin: And I think this is something – I love that we are going to talk about this, because we are going to get into controls and variables. As marketers, we want to experiment; we want to know what’s most effective. But this is where you kind of start to wonder, okay, how exactly do I do that? Not only how do I do it, but how do I set up an effective experiment? And it all starts with the controls and the variables.
Steve Robinson: Yeah. So, the control is going to be – basically, you are going to create two different versions of your ad, a version A and a version B. Version A is also called your control; it is the one where you don’t change anything. That’s what you are running right now; you know it works and it’s consistent. Version B is your variable, and that’s the one where you change one thing and one thing only. Coming back to our real estate example, we might change the visual and the background of the ad from highlighting the property first and foremost to highlighting the agent first and foremost, and see which one of those performs a little bit better. But you are going to want to keep some things the same, right?
Elizabeth Earin: And I think the first thing you want to keep the same is the objective, what it is that you are trying to achieve. For the experiment to really make sense, we want to have – we want to compare apples to apples, and to do that, we need to be making sure we are asking them to take the same action and accomplish the same thing. And so the first thing that we want to do is make sure that our objective is the same for both our A test and our B test.
Steve Robinson: And we also want to make sure that we are only changing one other thing, right? So, if we are changing the headline, we want to keep the visuals the same. If we are changing the visuals, we want to keep the headline the same. We want to keep the same colors, the same typography. And the one that I know we have run into on a regular basis as being the hardest thing to keep the same is the visual hierarchy. So, if we have to make a change between version A and version B that tweaks the layout of the ad or the homepage or the landing page, all of a sudden things move, and they have to maintain the same visual weight. Your eye has to be drawn to the same things in approximately the same order. If that doesn’t happen, you invalidate the test, and you don’t know if version A won because the headline ended up becoming bigger or because it had different words or a different visual.
Elizabeth Earin: Yeah. Every time there is an additional change, if there is more than one variable between your two ads, then when you get your results back, you don’t necessarily know which change they are tied to. If you are able to limit it to that one variable and you find a difference between the two ads, it is very clear what caused that change. And again, once you have that information, you can apply it to your traditional advertising and your brand advertising, and that effectiveness spreads across your entire program.
Steve Robinson: I think there are usually three different places where we see these changes occurring. One is with the actual message itself, so what is the core message of the ad? Usually you have a message and a call to action. If we change the message, how does that impact the number of people who take advantage of the ad or the landing page and convert? We can also change the way in which that message is delivered. So, this could be tweaking wording or a visual that supports that message in order to see if we can pull at a different emotional string in order to get the person to act. And then the third thing that we can change is the actual call to action, asking them to do something different.
Elizabeth Earin: So, coming back to our real estate example. When we are talking about imagery, we could be testing something along the lines of making the agent’s face more prominent versus making the home that we are advertising more prominent, and seeing which one the target audience responds to best is going to help determine that advertising moving forward. In terms of headlines, we could be comparing location versus house attributes to see which one appeals to your target market. And in terms of the call to action, it can be as simple as changing the action you want them to take. Even though it may be the same action on their end, a click, how we word it can greatly influence whether they take it. For example, “request a showing” versus “set up an appointment,” and which one resonates best with your audience.
Steve Robinson: The key thing is making sure that you are only changing one thing at a time, right?
Elizabeth Earin: Yes. And that’s, I think, the hardest part. We run into that time and time again in different ads that we have helped our clients with, and even some of our own: really trying to limit those changes so we make sure we are getting the cleanest possible data back that we can get.
Steve Robinson: Let’s talk a little bit about the technology that goes into this, because when you make these changes, one of the tricks is you want to make sure that the same people are seeing the same particular version every time they get a chance to be exposed to the ad or the landing page. That’s a technological feat, right? Because if we just randomly split it so that you, Elizabeth, happen to get version A this week, and then next week you get version B, and then the next week you get version A again, maybe you see version A three times. We don’t really know if the fact that you saw version B three or four times before you saw version A is what actually made you click on version A. So in order to get really solid results, we have to make sure that you are getting version A every single time, and that requires some technology.
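For illustration (this sketch is not from the episode): split-testing tools typically keep a returning visitor on one version through deterministic bucketing, hashing a stable visitor ID so the same person always lands in the same variant. The function name, the MD5 choice and the 50/50 split below are assumptions for the sketch, not a description of any particular tool.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor: the same visitor_id always
    returns the same variant for a given experiment."""
    # Hash the visitor ID together with the experiment name so one
    # visitor can land in different buckets across different experiments.
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    # Map the hash onto [0, 1] and compare against the traffic split.
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"

# The same visitor sees the same version on every return visit:
assert assign_variant("visitor-123", "cta-test") == assign_variant("visitor-123", "cta-test")
```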
Elizabeth Earin: Thankfully, there are some really great tools out there that can be used, across all budget levels. So, Convert.com, we have used that before. Optimizely, we have used that. They are very user friendly, easy to set up, and very intuitive. And then we also have Google Experiments, which is a free option but a little bit harder to use, though it’s still much easier than trying to run this experiment on your own.
Steve Robinson: And those are the tools for landing pages. There are other tools if you are testing banner ads; you are usually going to use whatever tool you are using to distribute the banner ads. If you are testing two different promoted posts on Facebook, that actually gets a little bit tricky, because Facebook likes to take the winner and run with it, and the next thing you know, you don’t really have enough data to get to statistical significance, which is something we are going to be talking about a little bit later. So Facebook is a bit tricky, but when it comes to general advertising, the platform on which you are placing the ads will usually let you set up a split test. For email, the email tool that you are using will usually let you set up an A/B test or a split test there as well. And split test, A/B test, those are other names for what we call experiments. You are looking for whatever the tool is that’s letting you put the content out there and asking what its mechanism is for this. When it comes to web pages, as Elizabeth mentioned, there are Convert.com, Optimizely and Google Experiments, and then there are a number of more enterprise-class tools; Adobe has a tool for this, and other, more expensive platforms have tools as well.
So we have set up the technology to do our split test. What are we measuring? What are we trying to figure out? What determines a winner? I mean, we have got A, we have got B, we have got a bunch of data that comes back. How do we decide whether A actually was better than B?
Elizabeth Earin: Well, I think it all comes down to what it is that we are trying to do. Typically, we are either trying to make more money or we are trying to spend less money, or a combination of both. And so, to really determine effectiveness, we want to get as close to the money as we can. When it’s possible, we want to tie this to direct revenue; that is the ideal situation. Unfortunately, that’s not always possible in every industry. Another way to go is taking a look at leads, specifically qualified leads. What we mean when we say qualified leads is this: it’s great if you get 50 people coming into your website. Along the lines of our real estate example, you have 50 people that come in and say they are interested in buying a home. If those 50 people are just looky-loos who don’t have any timeline to buy and don’t have a budget, are they really qualified leads? Whereas if you have someone coming in who has been pre-approved for a loan and knows they want to buy in the next three months, that’s something you can put a timeframe around and actively work with, and you know that if you are able to find the right home for them, you are going to have revenue coming in. What counts as a qualified lead is, again, going to vary by industry and by what it is you are trying to accomplish. Like I said, in our real estate example, we have the timeline in which they want to buy and whether they are approved for a loan, but within your own industry, you are going to find that there are different things. Your sales team has great insight into this: when they get a prospect, what the triggers and signs are that someone is going to convert for them.
Steve Robinson: And if you are in e-commerce, you can just measure the money. You can literally measure the revenue, so which one of these produces more revenue? If you can get to lifetime revenue, that’s an even better metric, if you have some indication that this is the type of customer that’s going to be a repeat buyer, but just getting to the revenue, I think, is plenty great. Now, if you are in, say, manufacturing, then you might have a problem, because you have a more complex distribution channel: we generate a bunch of demand, but then that demand pops out to a distributor, and maybe we get an order back on the other side, and what happens inside the distributor is a black box to us. That gets a little bit more complicated, and at that point, you just do the best you can: you measure the signals that are coming in. Usually that’s how many people came into our website and then clicked out to a distributor’s website. We want to look at the quantity of those clicks out to the distributor, or the quality of the traffic on our site: how many pages are they looking at, etc.
Elizabeth Earin: One other thing we want to keep in mind when we are talking about our experiments with our creative and our content is segmenting by persona. It’s really important when we set these up that we keep our personas in mind, because each persona is very different. They have different wants, needs and motivations in their lives and different contexts in which they are viewing our ads. And so one of the things we really want to look at is how each of them responds to what we are putting out there. Coming back to our real estate example, a first-time homebuyer is going to be very different in their wants, their needs and how they respond than an empty nester. By setting up experiments that take that into account, we are able to get better insights, and that influences our programs moving forward.
Steve Robinson: And so to do that mechanically, the key is to make sure that when you get the data back out of your analytics, you are able to split up the numbers based on the persona that went in. You do that by sending signals into your analytics software, usually using a UTM parameter or something along those lines for Google Analytics. You want to send data into your analytics software about which persona you were targeting, so that when you get the numbers back out, you can split them out and see what our conversion rate was, or how many conversions we got, for persona Lucy versus persona Mary, right? And be able to figure out what that difference is. Work with your tech team to figure out how to do that, because that data is really key. I think we are probably at a good point for a break, so why don’t we talk about helping someone.
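As a hypothetical illustration of the UTM tagging Steve describes (the parameter values and helper function below are assumptions for the sketch, not from the show), a destination URL might carry the persona in a UTM parameter so Google Analytics can split conversions out by persona later:

```python
from urllib.parse import urlencode

def tagged_url(base_url: str, persona: str, experiment: str, variant: str) -> str:
    """Append UTM parameters so analytics can segment results by persona."""
    params = {
        "utm_source": "facebook",               # illustrative channel
        "utm_medium": "paid-social",
        "utm_campaign": experiment,
        "utm_content": f"{persona}-{variant}",  # e.g. lucy-a vs. mary-a
    }
    return f"{base_url}?{urlencode(params)}"

print(tagged_url("https://example.com/listing", "lucy", "cta-test", "a"))
# https://example.com/listing?utm_source=facebook&utm_medium=paid-social&utm_campaign=cta-test&utm_content=lucy-a
```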
Elizabeth Earin: Before we continue, I would like to take a quick moment to ask you iterative marketers a small but meaningful favor. We don’t have sponsors for this podcast. Instead of asking you to get a free digital scale or enter some code for 10% off your website, we ask that you give a few dollars to a charity that’s important to one of our own. This week’s charitable cause was submitted by Bill Bowman of Magaya, who asks that you make a donation to Blue Dog Rescue, an all-volunteer charitable organization that works to find permanent loving homes for at-risk homeless dogs. Learn more at www.bluedogrescue.org or visit the link in the show notes. If you would like to submit your cause for consideration for our next podcast, please visit iterativemarketing.net/podcast and click the share a cause button. We want to support what’s important to you.
Steve Robinson: And we are back. For the first half of the show today, we were focused on talking about content and setting up experiments to optimize your content. But content is only one area that you can experiment with. The other area is channels. As you refine your marketing programs, as you get better and better, one of the things you want to get better at is getting the content in front of the eyeballs of the right consumers. And you do that by setting up thoughtful experiments with your channels.
Elizabeth Earin: And one of the things we want to keep in mind is that we want to limit our experiment to targeting one persona per experiment. We talked right before the charity break about the importance of, and the insights we can get from, experiments targeted at our personas, but to really get the maximum effect, we have to limit that. If we have two personas, Bob and Bill, and they are very different people, and we lump them together and try to test a headline on them, they could potentially end up canceling each other out, and it’s going to come back looking like you have inconclusive results, when in reality, if we had split those two personas out, we would have seen that version A really resonated with Bob while version B really resonated with Bill. And those are great insights that can then be applied across our traditional advertising and our brand advertising, and even used to update our personas themselves.
Steve Robinson: And the same thing is true when we talk about channels. We may find that Bob loves Facebook and responds really well there, whereas Bill just doesn’t really click on anything in Facebook, and if we want to get to Bill, we need to be using banner ads or some other type of advertising. So when you are testing your channels, that means you have to have each channel set up in such a way that it has a primary target: you have targeted that channel down to an individual persona. If you haven’t done that, then you are missing a prerequisite to being able to effectively execute experiments with your channels. Again, just as with the creative, we are going to focus our experimentation on direct response. You can’t really test brand ads, because you are not asking the viewer to do anything. If you want to change the way somebody feels, you don’t want to use those ads to test your different channels, because you have no way to measure whether you hit the right person and whether your change was effective. But if you are testing direct response and your creative is targeted at the same persona as your channel, you now have an opportunity to measure: okay, we delivered a message to 2,000 people on this channel and 200 of them converted; we delivered the message to 2,000 people on this other channel, and only 50 converted. Obviously the first one is more effective than the second.
Elizabeth Earin: So, in terms of the different tests that we can run, we can do channel versus channel, and we can also test within the same channel. Is that correct?
Steve Robinson: That’s correct. You can run either channel versus channel or targeting versus targeting within one channel as your experiments. If you are doing channel versus channel, we are going to see whether Facebook or DoubleClick is more successful at achieving our result. If we are running within one channel, that could be two different types of targeting within Facebook. Within Facebook, you can target your advertising based on somebody’s interests, the things that they have liked, or you can target based on behaviors, which are usually signals that Facebook has acquired from other sources. Both are great ways to target; sometimes behaviors work better than interests, and sometimes interests work better than behaviors. So by running the same message in Facebook across both behaviors and interests and then measuring the results, you can figure out which one is better for that given persona, because it’s also going to vary from one persona to the next.
Elizabeth Earin: So, what are the KPIs that we’re using to measure our channels?
Steve Robinson: Unlike with creative, where we are really looking at the number of conversions, for channels we’re looking at the cost per conversion. So you are going to measure your cost for every instance of the desired action you are trying to get. In our real estate example, what is our cost per showing? You usually want to set a quality threshold, though, so not every showing counts. To Elizabeth’s earlier example, we might say only showings where the prospect is pre-approved and has a defined time to buy count. So now we are measuring the cost for a quality showing, and that’s your key metric. If you are in e-commerce, it’s going to be essentially your cost per dollar of revenue, usually measured as advertising as a percentage of revenue. Either way, you are trying to get at the most efficient channel, and efficiency is measured by how much it costs for each person we bring in, right?
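To make the channel KPI concrete, here is a minimal sketch with made-up numbers, assuming a simple quality threshold like the pre-approved-buyer example above. It shows how a channel with more raw conversions can still cost more per qualified conversion:

```python
def cost_per_qualified_conversion(spend: float, conversions: int, qualified_rate: float) -> float:
    """Cost per conversion that passes the quality threshold
    (e.g. pre-approved buyers with a defined timeframe to buy)."""
    qualified = conversions * qualified_rate
    return spend / qualified

# Made-up numbers: two channels, same persona and creative, $1,000 spend each.
print(cost_per_qualified_conversion(1000.00, 40, 0.5))  # 50.0  -> $50.00 per qualified showing
print(cost_per_qualified_conversion(1000.00, 80, 0.2))  # 62.5  -> $62.50, despite more raw conversions
```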
Elizabeth Earin: I think that makes a lot of sense in terms of content and in terms of channel, but now we get to the part that I know I always struggle with, and that is actually analyzing the results. I know that the way we analyze results is going to apply to both creative and channel, even though we are using different KPIs for each. But really, this is the part that a lot of marketers struggle with, and the key is getting to statistical significance. Which, first of all, is a phrase I stutter over, so I will apologize now. But can you go into a little bit about what statistical significance is and why it’s important to our experiments?
Steve Robinson: Yeah, and I’ll preface this by saying I hated statistics in high school and college. It was not my thing – I was good at math and hated statistics. Even so, the good news is there are some great tools out there to make this easy, so you don’t have to get into the math. You don’t even have to know the math, as long as you understand the numbers that go in and the numbers that come out.
Elizabeth Earin: I like that. I like that a lot.
Steve Robinson: Statistical significance is the one key number whose definition you do need to understand. What statistical significance means, in a nutshell, is: if I ran this experiment over and over and over again, what’s the probability I am going to get the same result?
Elizabeth Earin: So pretty much, it’s proving that the result isn’t attributable to chance.
Steve Robinson: Exactly. Exactly. Because if you don’t have strong enough results, then it could have been a fluke. It could have been influenced by any number of little things; a butterfly flapping its wings somewhere across the Pacific somehow made your experiment end up one way when it could have gone the other way. The tool that lets you check this is a statistical significance calculator. We have linked to a great post by Heather Ohlman, where she has embedded a statistical significance calculator you can download off of iterativemarketing.net. You are essentially putting in four numbers. If you have an A/B test, you have two numbers for version A, whether that’s creative or channel, and then two numbers for version B. The two numbers are the sample size, so how many opportunities somebody had to convert, and the number of conversions. You put those two numbers in for version A and for version B and hit go, and out comes a statistical significance. Based on that, you can know whether your winner, whichever of version A or version B had more conversions, would win every time or whether it could go either way. The number you are looking for is 95% or higher. If you get something in the 80% to 95% range, you can sometimes make a judgment call and say, you know what? We are going to run with the data we got. But really, to have confidence that you have gotten a good result, you are looking for that 95%.
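The calculator from Heather Ohlman’s post is not reproduced here, but a common way to compute this kind of confidence from the four numbers Steve describes is a two-proportion z-test. The sketch below is one standard approach, not necessarily the exact method the linked calculator uses:

```python
import math

def significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: returns the confidence (0 to 1) that the
    observed difference between versions A and B is not due to chance."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no real difference.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = abs(p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# Example: 2,000 impressions each; A converts 200 times, B converts 250 times.
conf = significance(2000, 200, 2000, 250)
print(f"Confidence: {conf:.1%}")  # ~98.8%, above the 95% threshold
```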
Elizabeth Earin: I like the point you made about having 80% or 85%, because I think it’s sometimes frustrating as marketers when we run an experiment and people always say it’s the 95% that matters, and you are at 92%, and you wonder, do I throw all this out? To your point, that becomes a judgment call, and you have to decide what’s right for your business as well as how the insight is going to be used and applied across the organization, and whether it’s something you are comfortable running with or something you want to tweak and run again, or get a larger sample size for.
Steve Robinson: You can always let it run longer, unless you can’t. I think that’s pretty much a wrap for this week. Join us next week, when we talk about reporting and feedback: taking all this wonderful data and these insights that we have produced and getting them into the hands of the right people in a way that they understand.
Elizabeth Earin: Sounds great.
Steve Robinson: Well, until next week…
Elizabeth Earin: Bye!
If you haven’t already, be sure to subscribe to the podcast on YouTube or your favorite podcast directory. If you want notes and links to resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. You can also follow us on Twitter. Our user name is @iter8ive or email us at [email protected].
The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste. Our producer is Heather Ohlman, with transcription assistance from Emily Bechtel. Our music is by SeaStock Audio, Music Production and Sound Design. You can check them out at seastockaudio.com. We will see you next week. Until then, onward and upward!