Show Notes
Experiments in the context of Iterative Marketing serve a bigger function than simply increasing conversion rates: they help us gain insight into and knowledge of our target audience. This episode explores the role of experiments in Iterative Marketing and shares how marketers can get the most out of their own experiments.
What are we talking about when we say experiments?
- Experiments compare the status quo against a new idea and are usually conducted in the form of an A/B or split test.
- Experiments in the context of Iterative Marketing should meet two qualifiers: The scientific method must be applicable, beginning with a hypothesis and using controls and statistical measures; and they must generate insights beyond simply improved conversions.
- Reference: Podcast Episode 22 – Let’s Talk Statistics
- Reference: Podcast Episode 7 – Designing an Effective Marketing Experiment
What is the role of experimentation within Iterative Marketing?
- Experiments play two roles in the process: testing a tactic in a small way before going big, and driving continuous improvement of programs that are already running.
- Starting small ties into a fundamental truth of Iterative Marketing: run a well-controlled, minimally viable test before committing real budget to a new program, strategy or tactic.
Why do we experiment?
- The market never stands still: competitors enter and exit, and consumer preferences shift, so running the same thing eventually stops working.
- Not experimenting carries an opportunity cost: you lose both time and the compounding insights that each test builds on.
- Experiments take opinion and emotion out of creative decisions by replacing “I don’t like it” with audience data.
- Framing work as an experiment makes failure okay: success becomes the insight gained, not just whether the creative worked.
What are the limitations of how many experiments you should run?
- How many experiments you can run is limited by three things: resources, audience size and available experiment slots.
- Experiments consume resources for setup, administration, extra creative and sometimes extra media spend; audiences must be large enough to yield statistically significant results; and each audience offers only one slot at a time per creative or targeting intersection.
How do you get the most out of experiments at your organization?
- Run as many experiments as your resources, audience and programs allow, so that each experiment builds on the last.
- Keep a ledger of insights, organized by audience segment or persona, to surface knowledge gaps and ideas for future experiments.
- Apply what you learn: update personas, feed future ideation and expand programs.
- Report results up the ladder and laterally across the organization; other departments can apply consumer insights in ways marketing can’t predict.
Charity of the Week:
Your local school district: Many students cannot afford basic supplies. Please contact your local school to learn how you can donate.
We hope you want to join us on our journey. Find us on IterativeMarketing.net, the hub for the methodology and community. Email us at podcast@iterativemarketing.net, follow us on Twitter at @iter8ive or join The Iterative Marketing Community LinkedIn group.
The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste.
Producer: Heather Ohlman
Transcription: Emily Bechtel
Music: SeaStock Audio
Onward and upward!
Transcription
Steve Robinson: Hello, Iterative Marketers! Welcome to the Iterative Marketing Podcast, where each week, we give marketers and entrepreneurs actionable ideas, techniques and examples to improve your marketing results. If you want notes and links to the resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. Now let’s dive into the show.
Hello everyone, and welcome to the Iterative Marketing podcast. I’m your host, Steve Robinson, and with me as always is the smart and imaginative Elizabeth Earin. How are you doing, Elizabeth?
Elizabeth Earin: I am good, Steve. How are you?
Steve Robinson: Well, as you can tell by the sound of my voice, I’m getting over a cold that made me lose my voice earlier in the week.
Elizabeth Earin: Yeah. I am not doing much better myself. I have got the onset of a cold coming on as well.
Steve Robinson: I thought we were supposed to be out of cold and flu season here.
Elizabeth Earin: I think that when you have toddlers and small children, it’s a year-round thing. It doesn’t quite go away.
Steve Robinson: Yeah, they are really efficient at the whole like finding germs, incubating them and distributing them.
Elizabeth Earin: Oh yes, definitely, and they are disguised as sweet, lovely kisses.
Steve Robinson: And snuggles and yeah, yeah. So what are we talking about today?
Elizabeth Earin: Today, we are talking about experiments.
Steve Robinson: Excellent. We are really going to get into how experiments fit into Iterative Marketing because we have talked about experiments in the past, right?
Elizabeth Earin: We have. We have done a couple of episodes and will refer to those throughout the episode and refer to them in the show notes as well.
Steve Robinson: I think it’s important that we say that when we talk about experiments, we are talking about experiments within the context of Iterative Marketing which may be a little different from how most marketers approach what they would consider experiments or A/B tests or split tests.
Elizabeth Earin: So we’ll get into why that’s a little bit different. And then we are going to follow up with how to use experiments to generate actionable insights which is also kind of a component of Iterative Marketing.
Steve Robinson: And then finally, we’ll give you some tips and tricks on how to make sure that you are getting the most out of experiments at your organization. So let’s start off with what we mean when we say experiments.
Elizabeth Earin: So experiments usually compare the status quo against a new idea or a new way of delivering that idea, any sort of change you are making to what you are currently running.
Steve Robinson: And the idea is that you are going to pit the old idea or your control or version A against the new idea or your version B or your variable. And in doing so, you are going to execute what’s called a split test or an A/B test in order to find out which one is the better idea or the better delivery mechanism for getting your idea to your audience.
Elizabeth Earin: And a lot of marketers are doing this. They are running their own split tests or A/B tests today. The difference though is when we start talking about these tests as it relates to Iterative Marketing. And so what makes it different?
Steve Robinson: Well, we have a somewhat more precise definition of what an experiment is, above and beyond an A/B test. It really has to qualify under two additional parameters, and one of those is: is it scientific? Are we using scientific methods? Do you have a control? Do you have a variable? Are you using statistical significance and statistical calculations in order to determine your result? And the other is: are you generating more than just a bump or a lift in conversion rates or revenue? Are you going after insights as well? Do you have a hypothesis before you start your test?
Elizabeth Earin: So let’s jump back to scientific real fast. I want to dig into that, because anyone who has listened to the podcast before knows that the scientific piece, the data piece, is not something that comes naturally to me, yet I see the importance of it. And so I want to make sure our listeners who are in the same boat as me understand that this isn’t scary and it is doable. We actually have a really great episode about that, Episode 22, Let’s Talk Statistics, that is a really great reference point. I highly recommend listening to it because it helps put this into perspective and helps you really gain an understanding of what it is, so that you can apply this and make sure that you do have statistically significant results.
Steve Robinson: You don’t need a lab coat. You don’t need a degree in engineering or mathematics. It’s not rocket science here but it is adhering to some pretty sound principles to make sure that the results that you are generating are the results that you mean to generate.
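For listeners who want to see what those statistical calculations look like in practice, here is a minimal sketch of a two-proportion z-test, one common way to check whether a variant’s conversion rate beats the control with statistical significance. The numbers are hypothetical and the approach is a standard one, not something prescribed in the episode.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 120 of 4,000 visitors, variant 156 of 4,000
z, p = two_proportion_z_test(120, 4000, 156, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 clears the usual 95% bar
```

The significance calculators mentioned in Episode 22 do essentially this arithmetic for you.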
Elizabeth Earin: And that’s important when we get into the second sort of difference here with Iterative Marketing and experimentation. And that’s the insights, because if we are going to go to our organization and say “this is what we have learned,” we want to be confident that what we are saying is true.
Steve Robinson: Exactly, exactly. And again, we talk about that more in Episode 7 where we talk about designing an effective experiment and going after those insights and not just which button color is better or which home page works better.
Elizabeth Earin: And again, the point here when we talk about these insights — and this is one of the things that I love about Iterative Marketing and I feel kind of sets it apart and that really comes through in experimentation — is that we are not just throwing those two different ideas up there. We are testing for a specific hypothesis. We are testing to find out which one performs better and it goes beyond, to your point, button color. It’s something that can extend across the organization and so this kind of gets into other topics that we have talked about before, but really helping to increase the value of marketing across the organization as a whole.
Steve Robinson: And we’ll talk more about that at the end of the podcast when we get into how to make sure that the experiments you are running are delivering the most value to you and your organization. Should we jump into what the role of experiments are within Iterative Marketing?
Elizabeth Earin: Yeah, I think that’s a great place to go next. And there are two separate roles when we talk about experiments within Iterative Marketing and the process. The first of those is testing a tactic in a small way before going big, and that really ties into our fundamental truth of starting small. The second is continuous improvement for running your programs. And I think we probably want to dive into both of those. So why don’t we start with starting small?
Steve Robinson: Sure. So before you go to start a new program or a new strategy or tactic within an existing program, you probably don’t want to throw a ton of money at something until you have some indicators that you are going to find some success there. And so you can execute a well-controlled experiment to compare the new idea of how you might bring an idea to life or take it to market compared to what you have been doing to date. You just need to make sure that your control, what you have been doing to date, and your variable have the same objective and then you can go to town testing something. Alternatively, you can still run an experiment without necessarily having a control just to determine if something is going to be effective based on an objective or a goal or baseline of what you need to hit in order to determine success.
Elizabeth Earin: Do you have an example that maybe you could share with the audience?
Steve Robinson: Sure. So let’s say you have an existing program and you sell poker sets for example and your existing program discounts the price by 15% using a coupon code to see if you can get more immediate business and get people across the finish line here to buy. And you have an idea. Well, what if instead of discounting our price, we added more value and maybe we gave away a free deck of premium cards along with every order? How does that compare to the discounted price? So you could set up a very simple experiment running just a limited amount of Facebook creative and compare the response rate and the conversion rate of that small test against the existing program to see if going the route of adding value or going the route of including an added bonus is as effective or more effective than a discounted price.
Elizabeth Earin: And again, you are not talking about launching this out as part of the overall strategy. We are talking about starting with a really small audience, so we are just looking at Facebook. It’s very targeted, small and concise, using just a small part of the budget, but the insights that we gain from this experiment can then be applied across the program going forward, right?
Steve Robinson: Right. You can take that same insight that you learn there and now apply it to your display advertising, your print advertising and even, fundamentally, to how you approach servicing your clients. If you understand that your customers would really appreciate added value versus a discounted price, that could be a fundamental shift in your entire go-to-market strategy if you start to apply and test that elsewhere.
Elizabeth Earin: Yeah. We actually had this exact scenario with a client. We had a technology client who wanted to know — they were ready to expand and wanted to know what the right market was to go into and they were considering biosciences, healthcare and manufacturing. And so we set up a very simple Facebook program for each of these three audiences, very minimal spend and based on that, we were able to help them determine that manufacturing was actually their best option.
Steve Robinson: Exactly. The key is start small. Figure out the minimally viable test you can run that’s going to have statistically significant results. In the episode where we talked about statistics, we referred to a couple of tools, a confidence calculator and a sample size calculator, so you want to use those to make sure that your test isn’t too small, and then execute it to see if you can get better results than you are getting with your existing marketing activities. If it works, go big.
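The sample size calculator Steve mentions answers the question of how small is too small. As a rough illustration, the sketch below uses the standard two-proportion power calculation at 95% confidence and 80% power; the conversion rates are made up for the example.

```python
from math import ceil, sqrt

def sample_size_per_variant(p1, p2):
    """Approximate visitors needed in EACH arm of an A/B test to detect a
    change from baseline rate p1 to rate p2, at 95% confidence / 80% power."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical: baseline 3% conversion, and we want to detect a lift to 4%
print(sample_size_per_variant(0.03, 0.04))  # roughly 5,000+ visitors per variant
```

Small lifts on small baseline rates demand surprisingly large audiences, which is why the hosts keep stressing that a test can be too small to trust.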
Elizabeth Earin: Yeah, exactly. And it’s great because if you start small, you actually have the ability to — if your audience size is big enough, actually test more than one thing. So in this particular scenario with our client, we were able to explore multiple opportunities at once which shortened the length of time before we were able to recommend and the client was able to make a decision. And so that’s one of those things that you can consider as well if you are able to set that up and you have the audience size that starting small allows us to explore more than one opportunity and decide which of those options makes the most sense for our business.
Steve Robinson: Yep. So that’s the starting small component. What does it look like from a continuous improvement standpoint?
Elizabeth Earin: So continuous improvement really gets to the heart of Iterative Marketing. Experimentation allows us to continuously optimize our existing programs, which leads to that continuous improvement that we strive for.
Steve Robinson: So for example, if you already had a program running and it was successful on Facebook but you wanted to make it better, you might take a look at how you are targeting your audience on Facebook. Maybe there are a couple of different targeting mechanisms between interests, demographics and behavioral targeting, and you could split off a separate segment, target that segment differently using, say, behavioral versus interest-based targeting, run both concurrently and see which one produces better results.
Elizabeth Earin: Yeah. And keeping all things equal, you are using the same creative, so really, we are just looking at those different audiences and setting the test up that way. We have actually done this with a client. And we have an industrial client that was targeting a very specific audience, very specific, and that audience was available through third-party data but we weren’t sure how viable those lists were because this was such a specific audience. And so we looped through those different audiences that are available to us and tested until we figured out what the right audience was for that specific program.
Steve Robinson: The key is that you are keeping things the same as far as everything else goes. So you are keeping the same creative. You are still targeting the same audience with the same objective and all you are doing is splitting your targeting mechanisms to see which one performs better.
Elizabeth Earin: Yeah, and with the example of our industrial client, that program had been up and running for months. It was very successful, but just because it was doing well doesn’t mean it couldn’t do better, and so that’s the opportunity we were looking for. We looked for that opportunity to improve on their current results, which is the continuous improvement that we strive for. And because of how this was set up and what it was we were testing, this could actually run at the same time that we had other experiments running, so we had insights coming in left and right. It’s fantastic, because then we have the ability to apply that and, again, build on that continuous improvement.
Steve Robinson: So I think it’s a great lead-in to kind of overall covering why we run these experiments. Why not run Iterative Marketing without experiments?
Elizabeth Earin: Experimentation is important. Maybe you can enlighten me, but I don’t see how you can do Iterative Marketing without experimentation.
Steve Robinson: I think it’s really core to the iteration part of it; otherwise you are just shooting in the dark and simply replacing things for the sake of replacing them, right? Experimentation lets you do that in an educated way.
Elizabeth Earin: And you have said this before, if you are not experimenting all the time, then you are wrong. And that sounds so severe but can you go into kind of the thought process behind that and why not experimenting means you are wrong?
Steve Robinson: Well, if the rest of the world stood still, you’d be fine, but that’s not a reality, at least not for most of us. What you have is — there is constantly changing forces externally to your own marketing programs. So those changes in market dynamics, you have new competitors coming in and coming out of the same space that you are operating in, there is changes in consumer preference or in features and benefits. And so if you are not consistently iterating and testing and experimenting, you are running the same thing, and the same thing will eventually stop working because something is going to change around you to break it. And by experimenting, you are at least keeping pace if not consistently improving upon past successes.
Elizabeth Earin: I think there’s also a loss portion to this: if you are not focused on experimentation, there’s the opportunity cost of the gains you forgo by not testing. We have got a small window to test in, and each test builds on top of the last, so if we have a period of time where we are not testing, we have lost that opportunity and can’t get that time back. But not only that, we have lost the opportunity to build on whatever insights would have come from that experimentation. And so not experimenting actually puts you behind the game.
Steve Robinson: If you could produce leads at $80 a lead but you are producing leads at $100 a lead, that’s a $20 per lead loss that you are not able to recover because you are not running the experiments to identify it. The other place where you have the opportunity cost is in launching new initiatives and new programs. You now have to place a bigger bet when you go and launch something, because you didn’t test it first. And so by executing effective experiments on the front end, you are able to minimize your risk and maximize your return on new programs, new initiatives, new ideas, new creative, new targeting methods, all of them.
Elizabeth Earin: One other thing that I really like about experimentation, and I think we have all probably been in this scenario before, but experiments sort of — it takes the opinion out of the question. And we have all been in that scenario where we are looking at creative and someone’s like, well, I don’t like it. And it’s their own personal opinion but now that creative may be off the table or you have got to work even harder to try and get it to move forward because someone didn’t like it. And they don’t necessarily have a reason for why they didn’t like it. And so experimentation helps to take that emotion out of those group decisions because rather than saying that someone doesn’t like it, you now have proof saying, well, even if you don’t agree with that, our audience loves it. It resonates. We have got a higher conversion rate. We have got a higher click-through rate. And you can’t argue with that, or it’s harder to argue with that.
Steve Robinson: The best is when somebody comes in and says that creative isn’t going to work because “I know that our audience hates purple,” for example. That’s an extreme, but now you can say, well, can we test that assumption? Because I am not sure that our audience hates purple, and I would really love to find out and document this in a scientific way. How would anybody say no? If they really think they are right, then you are giving them the opportunity to prove it, and either way you end up with a great insight. And you are able to test the assumptions that the organization has been operating under in countless other areas. I think the other key thing that experimentation gives us, if it’s used correctly, is that it really helps make failure okay. We as marketers can’t be perfect. We never have enough data. We don’t really know when things are shifting out there in the marketplace in real time. And by setting up the expectation with those around us and above us that we are running an experiment and we don’t know what the outcome is going to be, success becomes insight. It’s knowledge. It’s finding out the best path forward. It’s not whether or not the creative that we launched worked. And so by operating under experiments, we actually become a lot more comfortable as marketers, because we are able to set the expectation that we are going to learn and then apply what we learn to make money, not just that we are going to make money.
Elizabeth Earin: I like this because I think so often marketers are asked to be fortune-tellers. Well, is this creative going to work? Well, I don’t know. I mean, I think so. I wouldn’t have presented the idea if I didn’t think it was going to work. But now you have this extra level of fear of, oh gosh, it better work or they are not going to trust me next time I want to pitch an idea. And this takes that out of it. It gives us the power to say, I am not sure but this is my reasoning for why I think it’s going to work. Let’s test it, let’s see. And it kind of opens you up and increases that value within the organization because you become someone who is using the data and who is making sure that the decisions that we are making going forward are very well-thought-out and we have got a plan for them.
Steve Robinson: Well, I think this is a great point for us to take a quick break and go help some people.
Elizabeth Earin: Before we continue, I’d like to take a quick moment to ask you iterative marketers a small but meaningful favor, and ask that you give back to your community. Usually, we ask that you make a donation to a charity or a cause submitted by one of our listeners. However, this week, we are doing something a little different. With the start of the school year right around the corner, we are asking that our listeners donate to their local schools. Many schools have students that cannot afford basic school supplies and we kindly ask that you contact your local district and find out how you can help ensure that something as simple as a pencil or paper will not be the barrier to a child’s academic success. Next week, we will return to highlighting causes submitted by our listeners. If you would like to submit a cause for consideration for our next podcast, please visit iterativemarketing.net/podcast and click the “Share a Cause” button. We love sharing causes that are important to you.
Steve Robinson: And we are back. So before the break, we defined what an experiment is. We talked through how experiments fit within Iterative Marketing and why they are so important. Now let’s talk a little bit about the logistics of how you execute them within Iterative Marketing. One of the questions I know we get on a regular basis is: how many experiments should I be running right now?
Elizabeth Earin: And that really depends on three things, and I know that doesn’t make this an easy answer, but it’s resources, audience size and how many opportunities you have to experiment.
Steve Robinson: Yeah. So we all have limited resources, and the fact of the matter is that experiments aren’t free. They take resources in setting up the experiment, administering it, measuring it and watching it, so you have to have that resource, whether it’s internal or external, working on that. Oftentimes when you are running an experiment, if it’s regarding creative or some digital experience, you are going to have to create two versions of the creative or the digital experience, which means paying for additional creative or development costs. And then finally, sometimes we need to throw a few extra dollars of media at something just to get the audience size where it needs to be to execute an experiment; we have seen that situation occur here and there. So at some point, if you try to run experiments everywhere, a lot of organizations will simply run out of resources for their current budget. The good news is that if you are consistently generating insights, and those insights are improving your return on investment and helping prove your value to those that hold the purse strings, that budget will generally go up over time and you’ll be able to execute more experiments. But in the meantime, budgets are what they are.
Elizabeth Earin: So that’s looking at it from a resource perspective. Taking a look at it from an audience size perspective, our audience size limits the number of experiments that we’re able to run. This comes back to some of the resources we referred to in our statistics podcast: to get statistically significant results, you need a large enough audience. The tools we referenced in Episode 22, the sample size calculator and the statistical significance calculator, will help you determine what that is. But again, not every audience is big enough to run an experiment on. As an example, we were working with a manufacturer that has a key persona whose audience is very, very limited; there are about a hundred people in it. That is not large enough to run any experimentation that’s going to produce statistically significant results, results we would feel comfortable taking back to the client and saying, yes, we know this is repeatable; this is something we can reproduce in the future, and we should make decisions based on this data.
Steve Robinson: Yeah. And the last area where you are going to hit a wall in how many experiments you can run concurrently is what we call experiment slots. What do I mean by a slot? For every audience that you have, you have a limited number of positions in their customer journey where you can run an experiment, because you can really only run one experiment at a time for every intersection of an audience and a creative, or every intersection of an audience and a targeting mechanism. So for example, if you wanted to test two creatives against a particular direct response program for a particular audience, that slot is now filled and you cannot test any other creatives when you are marketing to that particular audience. The same thing is true on the targeting side. Say, for example, you are running some Twitter advertising and you think you can target it in a better way, so you want to run an experiment. Well, now you have occupied your Twitter targeting slot with this experiment, and until it runs its course, you are not going to be able to introduce any other test regarding targeting on Twitter.
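The slot constraint Steve describes can be pictured as a simple lookup keyed on the audience and the dimension being tested. This sketch, including all of its names, is purely illustrative and not from the episode:

```python
# Each (audience, dimension) pair is one experiment slot; it stays occupied
# until the running experiment concludes. All names here are hypothetical.
occupied = {}

def try_schedule(audience, dimension, experiment):
    """Schedule an experiment if its slot is free; return False if it's taken."""
    slot = (audience, dimension)
    if slot in occupied:
        return False  # slot filled until the current test runs its course
    occupied[slot] = experiment
    return True

def conclude(audience, dimension):
    """Free the slot once the experiment has run its course."""
    occupied.pop((audience, dimension), None)

try_schedule("poker-buyers", "creative", "value-add vs. discount")            # fills the slot
try_schedule("poker-buyers", "creative", "new headline")                      # blocked
try_schedule("poker-buyers", "twitter-targeting", "behavioral vs. interest")  # different slot, fine
```

Keeping even an informal inventory like this makes it obvious which slots are free for the next experiment.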
Elizabeth Earin: At some point, we have got experiments running everywhere that we have a large enough audience. And that’s the perfect world that we are getting to, because again, you only have so much time available to run these, and so when you are able to really maximize those slots based on your audience, then you can maximize those insights.
Steve Robinson: So last thing I want to make sure that we talked about today is really getting the value out of your experiments that you are running. I think at this point, we probably sold you on experiments. If not, then we didn’t do our jobs. Go back and start over and listen again to see if it works the second time. Let’s talk about if you are running experiments, how do you extract the most value from the resources that you did have to dedicate to them.
Elizabeth Earin: And I sort of just mentioned this right before we moved into this point, but we really want to run as many experiments as our resources, our audience and our programs allow for because, again, each experiment that we run is going to build off of each other, so if we are able to very strategically plan these out so that we constantly have experiments running, we are constantly building on what we learned, then we can maximize our results.
Steve Robinson: The next thing that we want to do is make sure that we keep a ledger of those insights, those results that we are generating, because if you learn something and then don’t do anything with it, or don’t even write it down, then you have no chance of improving on it. What you’ll find, particularly if you are keeping this ledger in the context of your audience segments as you have defined them, say by persona, is that now you can loop through: okay, what do we know about Kathy? Okay, Kathy does like this and doesn’t like this, and we want to avoid trying to target Kathy this way because it just doesn’t work. And all of a sudden, you are reading through that and you realize there are several gaps in our knowledge that would make great experiments. It can also spark great ideas for future content or creative, or entire future programs for a given audience segment.
Elizabeth Earin: You said it’s important to keep a log of those insights, but it’s important to apply those insights as well. As we are running the experiments and writing them down, we want to make sure that those insights are being applied. It’s not just about killing what’s not performing; it’s also seeing where those insights can be applied. To your point, they can be applied to updating the personas. They can lead to future ideation for upcoming experiments. They can lead to program expansions. In the example we gave of our client who wanted to go into a new market, we were able to determine which market they should go into, which opened up an entirely new program for them, an entirely new audience. So again, making the most of your experimentation means applying those insights.
Steve Robinson: And then finally, you want to make sure that you are reporting what you learn. And so this is not just reporting it up the ladder to your boss or to the CMO or to the CEO. This is also reporting it laterally throughout the organization because every little bit that you learn about the consumer of your product or service can help others within the organization probably in ways that you can’t even think of, and so get the results out there.
Elizabeth Earin: And I think that’s a great point in ways you can’t think of. You understand marketing so well, but the people that run other departments understand their aspect of the business. So if you give them that knowledge, the possibilities of what they could do with that are endless and that’s where, again, we really work to increase marketing’s value within the organization because we are not just focused on the brand awareness and the advertising component. We are now sharing insights that can be applied and impact operations across the organization.
Steve Robinson: Absolutely. Well, I think that wraps it up for this week. So I want to thank everybody for making time for us this week and putting up with my raspy voice. Until next week, onward and upward.
Elizabeth Earin: If you haven’t already, be sure to subscribe to the podcast on YouTube or your favorite podcast directory. If you want notes and links to resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. You can also follow us on Twitter, where our username is @iter8ive, or email us at podcast@iterativemarketing.net.
The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste. Our producer is Heather Ohlman with transcription assistance from Emily Bechtel. Our music is by SeaStock Audio, Music Production and Sound Design. You can check them out at seastockaudio.com. We will see you next week. Until then, onward and upward!