Iterative Marketing


May 9, 2017 by Steve Robinson

Podcast Episode 46: How To Run An Effective A-B Test

Show Notes

A-B testing is the core of experimentation. With the right execution, it not only provides uplift in click-through rate and conversions, but also serves as an audience insight generator. This podcast explores how six things — sample size, random sample, controls, duration, statistical confidence and testing for insight — can make an A-B test effective and beneficial to all departments in an organization.

What is an A-B Test (2:59 – 4:29)

  • The testing of two different versions of the same content to determine which results in a better outcome
  • A-B tests are important to Iterative Marketing because they are the core of experimentation
  • Can apply to any medium (print, banner ads, direct mail, email, etc.)
  • Tools for A-B testing (Optimizely, Convert, Google Optimize) are becoming more user-friendly. Many testing tools are embedded in platforms like Marketo and Pardot.

Why A-B Testing Is Important (4:30 – 6:06)

  • Bases decisions about how to allocate marketing resources on data, rather than on gut feelings or personal preference
  • Helps multiple departments find out definitively what the audience prefers

Six Things That Make an A-B Test Work (6:07 – 7:06)

  • Sample size
  • Random sample
  • Controls
  • Duration
  • Statistical confidence
  • Testing for insight

1) Sample Size (7:07 – 9:42): The number of times you need to present version A or version B to determine a clear winner

  • Sample size calculators can help you determine how big an audience you need to achieve 90% or 95% confidence (a rough sketch of the math behind such a calculator appears after this list).
  • Do not attempt a test if the sample will not be big enough. It’s important to determine this BEFORE you start the A-B test so you do not waste resources.
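
The tools and calculators mentioned above handle this math for you. As a rough illustration of what a sample size calculator computes, here is a minimal Python sketch; the baseline conversion rate, the relative lift we hope to detect, and the 80% power assumption are hypothetical inputs, not figures from the episode.

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_lift,
                              confidence=0.95, power=0.80):
    """Rough visitors needed per variation for a two-proportion A-B test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)  # rate we hope version B achieves
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# Hypothetical inputs: 3% baseline conversion rate, hoping to detect a 20% relative lift
print(sample_size_per_variation(0.03, 0.20))  # roughly 14,000 visitors per variation
```

If the traffic you expect during the test window cannot reach that number for each version, scrap the test and pick something else to test, as discussed above.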

2) Random Sample (9:43 – 11:42): The sample must not only be large enough; it must also be split randomly.

  • Many tools handle random assignment for us (a minimal do-it-yourself sketch follows below for cases such as direct mail, where you split the list yourself)
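
Most testing platforms randomize the split for you. For the occasional case where you divide the list yourself, such as a direct mail test, here is a minimal sketch of one deterministic way to do it; the hashing approach and the example recipient IDs are illustrative assumptions, not a method prescribed in the episode.

```python
import hashlib

def assign_version(recipient_id, experiment_name="spring-mailer-headline"):
    """Assign a recipient to version A or B by hashing their ID.

    Hashing avoids splitting along a fault line (geography, signup date,
    alphabetical order) and gives the same recipient the same version
    every time the script runs.
    """
    key = f"{experiment_name}:{recipient_id}".encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

# Hypothetical mailing list split
mailing_list = ["customer-0001", "customer-0002", "customer-0003", "customer-0004"]
print({recipient: assign_version(recipient) for recipient in mailing_list})
```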

Charity Break – American Foundation for Suicide Prevention – (11:43 – 12:30)

3) Controls (12:32 – 17:10): The efforts put in place to make sure the thing being tested is the only thing that’s different between the experience of those getting version A and those getting version B.

  • Test only one variable at a time so you know which change is producing the result
  • Design version A and version B as exact replicas in layout, font size, color, etc., except for the one variable being changed, so that what is being tested stays isolated
  • Run version A and version B at the same time so breaking news, weather, or other external factors do not change the outcome of the test
  • Make sure your audience has not seen either version before the test starts

4) Duration (17:11 – 18:49): How long to run an A-B test

  • In our experience, do not run a test longer than 90 days because too many factors may impact the result
  • If the test relies on browser cookies, note that cookies are not reliable for more than a few weeks
  • A test should be run long enough to factor in various business cycles
  • Ex: Running a test Thurs-Mon favors weekend habits, while running it Mon-Thurs favors weekday habits.

5) Statistical Confidence (18:50 – 21:35): The math that tells us whether an A-B test result is repeatable or the result of chance

  • We have an easy-to-use A-B confidence calculator on our website. Simply plug in your impressions or sessions and your clicks or conversions to find the statistical significance (a sketch of what such a calculator computes appears after this list).
  • Usually expressed as a percentage, which represents a probability.
    • Ex: If results are 95%, it means if you ran the same test 100 times, you’d expect 95 of those to work out with the same winner.
    • Resource: Podcast Episode 22: Let’s Talk Statistics
  • Marketers usually strive for 95% confidence, although we have taken the results of a test with 90% confidence as usable information, or as a good working hypothesis until a better test can be run.
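
The confidence calculator mentioned above does this for you. As a rough illustration of what such a calculator computes, here is a minimal two-proportion z-test sketch in Python; the impression and conversion counts are made-up numbers, not results from the episode.

```python
from math import sqrt
from statistics import NormalDist

def ab_confidence(impressions_a, conversions_a, impressions_b, conversions_b):
    """Approximate confidence that versions A and B truly perform differently."""
    p_a = conversions_a / impressions_a
    p_b = conversions_b / impressions_b
    pooled = (conversions_a + conversions_b) / (impressions_a + impressions_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (p_b - p_a) / std_err
    # Two-sided test: probability the observed gap is not just chance
    return 2 * NormalDist().cdf(abs(z)) - 1

# Hypothetical counts: 5,000 impressions of each version
print(f"{ab_confidence(5000, 150, 5000, 190):.1%}")  # about 97% for these made-up numbers
```

A result at or above 95% is usually enough to act on; around 90% can still serve as a working hypothesis until a better test can be run, as noted above.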

6) Testing for Insight (21:36 – 26:12): Learning more about our audience beyond gaining an increase in click-through rate or conversions.

  • The best A-B tests probe the psychographics of an audience segment to gain insight that can be applied to multiple departments in an organization.
  • To get started, brainstorm a hypothesis for how you expect your audience to act and why. Then, build an A-B test to validate or invalidate that hypothesis.
    • Ex: A bad hypothesis would be — “The headline, ‘Don’t make these three massive mistakes’ will result in more conversions than the headline, ‘Use these three tips to amp-up your results.’”
    • This hypothesis is not audience-specific, and it applies only to this one piece of content.
    • Ex: A good hypothesis would be — “Mary (our persona) will be more likely to convert when presented with an offer that limits her risk because Mary prefers avoiding risk over new opportunity.”

Summary (26:14 – 28:49)

We hope you want to join us on our journey. Find us on IterativeMarketing.net, the hub for the methodology and community. Email us at [email protected], follow us on Twitter at @iter8ive or join The Iterative Marketing Community LinkedIn group.

The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste.

Producer: Heather Ohlman
Transcription: Emily Bechtel
Music: SeaStock Audio

Onward and upward!


Transcription

Steve Robinson: Hello, Iterative Marketers! Welcome to the Iterative Marketing podcast, where each week, we give marketers and entrepreneurs actionable ideas, techniques and examples to improve your marketing results. If you want notes and links to the resources discussed on the show, sign up to get them emailed to you each week at iterativemarketing.net. There, you’ll also find the Iterative Marketing blog and our community LinkedIn group, where you can share ideas and ask questions of your fellow Iterative Marketers. Now, let’s dive into the show.

Hello everyone and welcome to the Iterative Marketing podcast. I am your host Steve Robinson and with me as always is the observant and analytical and not sick Elizabeth Earin. How are you doing, Elizabeth?

Elizabeth Earin: I am well, Steve. How are you?

Steve Robinson: I have a little bit of a tickle in my throat again today, so such is the life of small children, right.

Elizabeth Earin: It is. It is.

Steve Robinson: So, are you getting geared up for summer here?

Elizabeth Earin: Yeah, we are. My husband has a new bike of some sort. I don’t know if it’s a mountain bike or I’m not a bike person obviously. And my son has his – Oh! shoot, what are they called? A balance bike.

Steve Robinson: Okay.

Elizabeth Earin: He’s two, so he has got a balance bike and then I have this adorable beach cruiser, custom made beach cruiser that my husband’s friend made for me a few years ago but I am deathly afraid of bikes and so I keep putting off. I’m like, well, it looks overcast today. Maybe we shouldn’t go on a bike ride. So, I don’t know how much longer I’m going to get away with this.

Steve Robinson: So you have a custom built bicycle but you are deathly afraid of bicycles.

Elizabeth Earin: Yes. Apparently, there’s an appropriate age to learn how to ride a bike and I missed it. We lived in Arkansas when I was very little when it was kind of bike riding age and we didn’t have a yard.

Steve Robinson: They don’t have bicycles in Arkansas.

Elizabeth Earin: We lived on the side of a mountain and so we had a deck around the house. So I could ride around the deck but that was about it. So I really missed out on that like very important developmental phase of learning how to ride a bike. So anytime I get on a two wheeler now I get – I am very wobbly, get scared, it’s – it’s kind of sad and pathetic.

Steve Robinson: Maybe this is the summer to overcome that.

Elizabeth Earin: We’ll see. We’ll see what happens.

Steve Robinson: Excellent. So we’re not talking about bicycle riding today. What are we talking about?

Elizabeth Earin: Today we’re talking about how to run an effective A/B test.

Steve Robinson: A/B tests are really core to Iterative Marketing and to what we do because they’re – they’re the core of experiments, right?

Elizabeth Earin: Um-hmm Um-hmm. So today in our episode we’re going to talk about what an A/B test is, we’re going to run through not only what they are but why they’re important. And then we’re going to outline the six things that make them effective and I think this is really important because just running an A/B test isn’t enough. There are some core elements, some key things that you need to hit on to be able to make sure that you’re making the most of the time that you’re spending setting up and running this test.

Steve Robinson: I think we should probably start by explaining what an A/B test actually is, right?

Elizabeth Earin: So an A/B test tests two different versions of the same content to determine which is going to have a better outcome.

Steve Robinson: And good A/B tests also try to control for external factors so that you’re really only testing what you need to be testing and that you don’t have a bunch of other things impacting the outcome.

Elizabeth Earin: What’s nice about A/B tests is that they can be applied to any medium, banner ads, landing pages, direct mail, even print. There’s a lot of opportunities to run an A/B test and not only to run an A/B test but to run a series of A/B tests that results in some really interesting and unique insights that can be passed through to the rest of the organization.

Steve Robinson: And A/B testing has gotten a lot easier because it’s actually built into a lot of the tools that we use every day and/or there are tools outside. So if you’re running an A/B test in emails, it’s probably built into MailChimp or the marketing automation platform they’re running, like Pardot or Marketo. If you’re running it on a website we’ve got great tools, Optimizely is out there as a paid solution. Convert.com is the one we use, it’s a little bit cheaper. And then there’s even a free one that Google just revamped and is great. It’s the Google content experiments as well.

Elizabeth Earin: And then some of the advertising platforms have some as well. Facebook is releasing an A/B – some A/B testing capabilities in an update that should be happening any day now if it hasn’t already happened and I’m just behind the times on that. And so again it really makes it easy for marketers to be able to set these up and run them.

Steve Robinson: So why are – why are we talking about A/B testing today? Why is this important? I know I alluded to it earlier but why is it important to you as a marketer?

Elizabeth Earin: Yeah. It’s one of those things when you talk about testing and experimentation I think so many marketers get scared because we’re going to use some words later that I know make me nervous like statistical significance and some other things that are common in your vocabulary but I think most marketers are a little afraid of. They took statistics because it was a required business course and they passed it barely and that was the extent of it. And that’s what is nice about A/B test is that it’s very simple very easy to run and it has such a big impact on what we’re doing, specifically on impacting the effectiveness of our marketing and it does so in a way that allows us to figure out definitively the best way to allocate our marketing resources. It’s not about my gut feeling, it’s not about my CEO’s gut feeling if he likes this picture over that picture. He thinks this headline is better. It’s not about the personal preference. It is about the cold hard facts what the data shows our audience is going to respond to.

Steve Robinson: Yeah. I can’t tell you the number of conflicts that I’ve resolved by saying, why don’t we just run that as an A/B test.

Elizabeth Earin: It’s amazing and it’s one of those things too. I have found in my experience that if you’re butting heads with – When you work with marketing you’re working with so many other departments and you may be butting heads with that department head who thinks that they know their audience so well and not saying that they don’t. But again testing that is a really great way to put those arguments or to put those disagreements to an end and find out definitively what it is that the audience really prefers.

Steve Robinson: So today we’re going to go through six core keys, I hate that. I think we say keys to this and keys to that way too often. So six things you really got to – you really got to watch to have a good A/B test, right?

Elizabeth Earin: And these are important because this is – these six items really separate just running a test against running an effective test. And I think at the end of the day if we’re going to spend the time we’re going to put the resources towards this, we all want to have an effective experiment.

Steve Robinson: And those six things are and these are kind of in no particular order because they’re all really important but they are having the right sample size, making sure you have a random sample, we’re talking about controls and what those are and how to make sure you’ve got them. We’ll talk about duration and then statistical confidence and finally the one that’s nearest and dearest to my heart – I know I said there was no order of importance – but it is testing for insight and not just conversions.

Elizabeth Earin: Steve definitely has a favorite here.

Steve Robinson: I’ll take that. I’ll take that. So let’s talk about sample size. What are we talking about when we’re talking about sample size?

Elizabeth Earin: Again, you said none of these are really more important than the others. But I have a feeling when we get to each one I’m going to be like this one’s really important and sample size is really important because what sample size is, is the number of times you need to present either version A or version B to determine a clear winner. And this is important because when we’re talking about a clear winner we’re talking about statistical confidence and we’ll get into that later and again this is one of those areas where I’m like I just do what the calculator tells me and you have a better understanding of it. But what we’re looking for is we’re looking for a confidence level either 90% confident or preferably 95% confident that we would be able to reproduce this experiment again.

Steve Robinson: Right. And to your point a calculator would help you determine how big of an audience you need to have a reproducible experiment. But if your audience is too small it’s falling somewhat to luck and it takes time and it takes resources to run a good A/B test. So the last thing you want to do is dedicate these resources only to find out that while your tests produced a result it’s not a repeatable result. And you are basically doing just as well as you would have done if you flipped a coin.

Elizabeth Earin: And I think why I see this one as being so important and we talked about this earlier is you only have so many opportunities to run a test and we’ll get into that a little bit later and some of the other items that we talked about but you only have so long to run a test and so you want to make sure you’re running the most effective test possible and what’s nice about looking at your sample size is that you’re able to go into the experiment before you even start it and know how many people or how many clicks or conversions or actions that you need to have happen to get to that level that you’re looking for that confidence level. And so you can figure that out and then and look at your own website traffic and determine if you’re even able to make that happen or not.

Steve Robinson: Yeah. So before you start run the numbers of what you think you’re going to have as far as the amount of traffic or opportunities people are going to get to see your two different versions, run those numbers into a confidence or I’m sorry, into a sample size calculator and Optimizely has a great one, we will link to in the show notes, run that before you start your test. If it turns out that you don’t have a big enough sample size then scrap that test and come up with something else to test because otherwise you’re just – you’re wasting resources and you’re wasting time.

Elizabeth Earin: Yeah, definitely.

Steve Robinson: Now that’s not the only thing we need to talk about when we talk about sample though because just because you have a big enough sample size doesn’t mean that it’s an effective test. You also need to have a random sample.

Elizabeth Earin: Yes. And this is very important and I think this is one of the places that we as marketers have a tendency to sometimes get hung up on because it’s like, well, everyone is created equal but they’re not all created equal and that we have to be careful that when we’re segmenting that list if we’re segmenting that list that we’re not putting these groups in giving version A delivering it to one group that may be acting differently than version B and I think a great example is if we have an ad that we’re testing and we give version A to the people in Cleveland and version B to the people in Cincinnati we don’t know if the difference in how they react is because people in Cleveland and Cincinnati are just different. We don’t know if that’s what it is or if it’s the version of creative that we’ve shown them.

Steve Robinson: Yeah. People in Cincinnati are weird. I’m kidding.

Elizabeth Earin: We are going to get hate mail now.

Steve Robinson: We are. We are. Exactly, you don’t want to – you don’t want to split your list across some fault line that could result in some difference. You want to use a random number generator or another system to randomly split your list and assign different people to different groups. The nice thing is most of the time we don’t have to worry about this. Most of the time this is baked into the tool that is administering our tests, so if that’s like an Optimizely or convert.com, it’s divvying up your website visitors and putting group A in group A and group B in group B and making sure that group A only sees version A and group B only sees version B for you and it’s doing that randomly on your behalf. The same thing is true of if you’re using an A/B split test tool within your e-mail client. The only time you have to worry is if you’re doing something like an A/B test in direct mail where you are the one splitting the list and in that instance you just want to make sure that you use a tool to do it randomly. I think this is a great time for us to take a quick break, so let’s go help some people.

Elizabeth Earin: Before we continue I’d like to take a quick moment to ask you Iterative Marketers a small but meaningful favor and ask that you give a few dollars to a charity that’s important to one of our own. This week we are asking that you make a donation to the American Foundation for Suicide Prevention. Their mission is to save lives and bring hope to those affected by suicide. They have set a bold goal to reduce the annual suicide rate in the US 20% by the year 2025. To find out how you can help please visit afsp.org or visit the link in the show notes. If you would like to submit your cause for consideration for our next podcast please visit iterativemarketing.net/podcast and click the share a cause button. We love sharing causes that are important to you.

Steve Robinson: And we are back. My voice is holding out so far. We’ll see if we can make it to the end of the episode. But before we left we talked about samples and sample sizes and making sure that we had a large enough sample size and that our sample was random. Those were our first two tips or keys or important things, right. Our next one gets into controls and this can be a hard thing for people to get their head around.

Elizabeth Earin: Yeah, the controls are the efforts that you put in place to make sure that the one thing that you’re testing is the only thing that’s different between the experience that you’re trying to create for version A and version B.

Steve Robinson: And there are a couple of things that make this up. It means first of all testing only one thing. You’re going to want to make sure that if you’re changing a headline and a banner ad on a landing page, you want to make sure that you’re only testing one of those at a time so that you know whether it was the headline or the banner ad, or the headline or the photo on the landing page that influence the outcome because if you test three or four different things are different between version A and version B you don’t really know which one is the key.

Elizabeth Earin: The other thing is it means making sure that version A and version B are the same size, same color. All of the other elements that make up what it is that you’re testing are completely the same.

Steve Robinson: So, for example, if you had two different versions of a landing page and one of them had a headline that said download our ebook and nail your next webinar and you made that one green and then the other version was download our ebook and avoid these costly webinar mistakes and you made that one red. Now you don’t know if it’s because one had a green headline and the other one had a red headline that was what resulted in your test or if it’s because – because of what you were meaning to test the two different messages resulted in the difference in outcome between version A and version B.

Elizabeth Earin: That’s a great point. I see font size a lot too. If you’re looking at a headline and one is bigger than the other and reduce the font size so that it fits or it’s breaking on two lines. Those are all little elements that you’re going to want to keep in mind.

Steve Robinson: Absolutely, absolutely. It also means running a test at the same time. Sometimes it’s hard to administer the test where you have some system divvying up your users so that they’re getting version A or version B at the same time. And so we become tempted to, well, we’re going to run version A this week and then will run version B next week and compare the results. The thing is your audience is fickle, little little dumb things will impact their interaction with your stuff and you see that when you look at the lines in Google Analytics and if you go to the daily view and it’s going up and down like this what was different between Tuesday and Wednesday. The reality is I mean things like weather or what’s in the news that week might change how somebody reacts to your very content. So if you run version A this week and version B next week and some crazy news item hits next week it could totally change the outcome of your tests and in ways that you might not even realize, so you always want to run your two versions at the same time and that’s an example of controlling for time.

Elizabeth Earin: The other thing you want to try and control for is making sure that the audience hasn’t seen either of your versions before you actually start your test and I’ve seen this happen before where someone will take the test – the version that they’ve been running and they say you know, I think maybe this headline might be better so I’m going to create version B and then just launch a version B to run as well. Well, if you’ve already had version A in the market people have already seen that, they’ve built an affinity to it. They’ve gotten used to that message and that’s going to skew the results that you get.

Steve Robinson: This is just a couple of examples of controls or things you need to control for as an experiment, right? As a person administering this A/B test. This is just the tip of the iceberg. You really want to sit down take a look at your test objectively and say, am I making sure that there’s nothing else that’s changing between version A and version B either because I’m running things at a different time or they may have seen something beforehand or inadvertently we ended up shifting something else on this page in order to make version B work and now really these are two different experiences in more ways than we intend, come back and figure out how to control for that and get version A and version B really only testing one thing, the one thing you want to test.

Elizabeth Earin: The next thing we want to look at is duration and specifically being careful how long we’re running our A/B test.

Steve Robinson: In our experience we found that you never really want to run a test longer than 90 days. If you run it longer than 90 days bad things happen from a couple of different fronts. You get noise in your test because the landscape is shifting while you’re trying to run the test and more often than not that just further ensures that you’re going to end up with a non-result because early indications that A was winning will be cancelled out by indications that B is winning later but it also gets into some technical components of how these platforms run these tests and if anything is relying on cookies in a browser. Cookies aren’t reliable beyond a few weeks to be consistent. And so you want to make sure that you just don’t run longer than 90 days.

Elizabeth Earin: At the same time you want to make sure that you’re running your test long enough that it factors in any business cycles that might be impacting that test. So, for example, if you’re running a test Thursday to Monday and running it over the weekend then you’re favoring weekend habits. Whereas if you’re running your test Monday to Thursday then you’re favoring weekday habits and depending on your B2B or B2C this could really skew the results that you get back. Another thing you want to keep in mind when you’re talking about making sure you’re running it for long enough is taking a look at month end cycles especially if you’re in the B2B market, what’s happening month end or even year end can be very different than what’s happening at other times of the year. And so those are both periods that you want to take into account when you’re looking at the time frame or the time period that you’re running your experiment for.

Steve Robinson: Our fifth item here is statistical confidence and this is the one where we start getting into geeky words and Elizabeth’s eyes glaze over so, but statistical confidence isn’t that complicated. This is not that complicated of a concept and once you have your head around it and you can speak and educate others in your organization about this, you’ll look like a rock star.

Elizabeth Earin: For those of you that disagree with what Steve just said and you do think it’s complicated, don’t let this scare you. We have an easy to use confidence calculator on our website. Again very easy to use. You simply plug in your impressions or sessions compared to either your clicks or conversions and it’s going to tell you, for both your control and your variable, it’s going to come back and let you know what percentage confidence rate you’ve come back with.

Steve Robinson: Yeah. Confidence is just that, it’s a percentage, at least most of the time it’s presented as a percentage and it represents a probability and it represents the probability that if you ran this test over and over again how likely are you to get the same result that – How likely are others to get the same result that you just got. And you don’t want to run with what you think are facts that turn out to not necessarily be valid because if somebody else runs the test they’re going to get a different result. So you want to see a statistical confidence ideally at about a 95% range. Now if you’re in academia and you were running a scientific experiment you’d be looking for 99% statistical confidence. Thankfully we’re not in academia. We don’t have to do that. 95% is pretty good and enough for us to run with that at least as a working hypothesis. I have run with confidences less than that if it was something where, you know what, I needed at least a working frame of reference in order to move forward, I’ll run with something at a 90%. But I’ll make a note to come back and test that later because I’m going to want to – I’m going to want to validate that a little bit further with a future test.

Elizabeth Earin: You explained this – explained confidence to me this way once and it made a lot of sense. So, if you don’t mind I’d like to share this.

Steve Robinson: Yeah.

Elizabeth Earin: But if you’re looking at a confidence rate of 95% that means that if you were to run the test 100 times we would expect that 95 of those would work out with the same exact winner but then 5 or so would work out to the other version. Again it’s looking at the probability of being able to repeat this over and over and over and so we know that it’s not some fluke. We know it’s not something funny or funky that happened. And when we are able to prove this when we can come back and we can say with this confidence it’s a lot easier to go to the department head that you might be having a disagreement with and say, no, look, the data shows me and you can’t argue with the data.

Steve Robinson: Exactly. Exactly. So this brings us to my little pet point here, right, of testing for insight. I think it’s great when you can run an A/B test that’s going to impact the bottom-line and you’re going to get more leads or better leads or higher conversion rate or higher click through rate and that’s all well and good but I think you’re leaving a ton on the table if you’re not also testing for insights.

Elizabeth Earin: When we’re talking about A/B tests, we’ve talked about this before and button color is for whatever reason a really popular test to run. The problem is that that doesn’t give us the insights that we need, the insights that can be applied not only to our own marketing but to other departments within the organization. And so one of the things we really want to come back to when we’re looking at these A/B tests is finding ways to test the psychographics of a very narrow audience segment so that we can discover those insights and share them with the organization and improve our performance across all departments.

Steve Robinson: It lets us take the results of this test out of the microcosm, the environment in which we operated the tests, so if we ran the test as subject lines for email, if we can take a learning from that and make it universally applicable for all of our other marketing, that’s a huge win whereas if we come away just knowing how to make better subject lines, well yeah, okay, that’s useful information but not nearly as useful as learning more about our audience in ways that we can apply elsewhere.

Elizabeth Earin: So, the key to doing this is coming up with a hypothesis for how we expect our audience to act and why and then building an A/B test that validates or invalidates that hypothesis. And in doing this, we are very deliberate in what we are testing.

Steve Robinson: So, for example, this would be a bad hypothesis. If we wanted our hypothesis to be as follows: We believe that the headline, “Don’t make these three massive mistakes,” will result in more conversions than the headline, “Use these three tips to amp up your results.” That’s a bad hypothesis and it’s bad for a few reasons. It’s not specific to an individual audience, so we don’t know exactly who we can apply this learning to and it’s also very specific to this piece of content. If we wanted to turn that around and turn that into a good hypothesis it would look something like, we believe that Mary, our persona in this case, will be more likely to convert if presented with an offer that limits her risk versus an offer that offers a greater reward because Mary is more concerned with limiting her risk. Now we’ve learned something about our audience. We’ve learned something about Mary and if we make sure that our test is the best possible test we can use to validate or invalidate that hypothesis we’re able to take that learning and apply it elsewhere, it becomes a true insight.

Elizabeth Earin: Yeah. We’re not just updating the headline here, we’re updating other landing pages that may be targeted at Mary. We’re using it to create content that is going to resonate with Mary and we’re sharing this with our operational departments who are interacting with Mary and maybe revising call scripts and other internal documents that help our internal team members to connect better with our Marys.

Steve Robinson: Now by testing for insights there’s good and there’s bad associated with trying to do this because it means we have to have a narrower audience if we want to be able to apply this insight to Mary’s then that means that we’re really only testing Mary’s. Now this is good because if we’ve done a good job of segmenting our audience then Mary’s are going to act consistently, so we’re more likely to get a good result because if we were testing Marys and Lucases at the same time and Lucas goes left when Mary goes right they cancel each other out, so it can actually benefit us in our test results. But it can also hurt us because we now only are testing Marys and maybe we can’t get a sample size big enough according to our sample size calculator to even run a test against only Marys and so it becomes this balancing act that we have to run that can sometimes be really challenging to run.

Elizabeth Earin: At the same time if you are testing with a narrow audience the nice thing is that you aren’t necessarily limited. We talked about wanting to make sure that you’re only testing one thing at a time but if you’re testing something with your Mary audience you can also be testing something with your Lucas audience at the same time and so you’re generating insights across personas which makes your marketing even better.

Steve Robinson: So we talked about a ton again today. Of course all of this will be detailed in the show notes if you haven’t selected the show notes but just to kind of briefly recap here. We’re talking about A/B testing and the importance of A/B testing. If you’re not doing this obviously we encourage you to get started today and it’s easy, the tools are probably built into the tools that you’re using.

Elizabeth Earin: By testing two different versions of the same content we’re able to determine which is more effective which allows us to quantitatively figure out the best way to allocate our marketing resources. And if we do it correctly also provides us insights that make us more informed and more effective marketers.

Steve Robinson: But if we’re going to do this well, if we’re going to get the true value out of A/B testing there are six things that we need to be looking at. And those six things include sample size. How big is your audience.

Elizabeth Earin: Random sample.

Steve Robinson: Controls.

Elizabeth Earin: Duration.

Steve Robinson: Statistical significance.

Elizabeth Earin: And Steve’s favorite, testing for insight.

Steve Robinson: I want to thank everybody for making time for us again today. If you haven’t already we would love it if you pop out to iTunes and leave us a review, we’ll be sure to give you a shout out on a future podcast. And if you haven’t signed up to receive the show notes in your e-mail there’re some great links and valuable content and takeaways there and you don’t have to worry about accessing them on your mobile device, so please pop out to iterativemarketing.net, sign up to get those show notes in your email inbox. You don’t have to try and reference them from the car while you’re driving, please don’t. Until next week onward and upward.

Elizabeth Earin: If you haven’t already be sure to subscribe to the podcast on YouTube or your favorite podcast directory. If you want notes and links to resources discussed on the show sign up to get them emailed to you each week at iterativemarketing.net. There, you will also find the Iterative Marketing blog in our community LinkedIn group where you can share ideas and ask questions of your fellow Iterative Marketers. You can also follow us on Twitter, our username is @iter8ive or email us at [email protected]. The Iterative Marketing podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste. Our producer is Heather Ohlman with transcription assistance from Emily Bechtel. Our music is by Seastock Audio music production and sound design. You can check them out at Seastockaudio.com. We will see you next week, until then onward and upward.
