The fifth component of Iterative Marketing is “experimentation and iteration.” Without experimentation and iteration, Iterative Marketing would be… just marketing. It’s the basis for continuous improvement within the methodology. Let’s discuss why optimization is so important and how to ensure success as we set up our experiments and iterate.
Optimizations are powerful. When we make an optimization to a continuously running program, we aren’t just having an impact on today’s results. Our impact extends indefinitely into the future of that program. Just as interest compounds in a bank account, our optimizations build on each other. Today’s 10% improvement in program performance turns next week’s 10% improvement into an 11% improvement.
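The compounding arithmetic above can be sketched in a few lines of Python. The baseline and the 10% lifts are illustrative numbers, not figures from any real program.

```python
# A minimal sketch of how optimizations compound. The baseline metric
# (e.g., conversions per month) and the lifts are hypothetical.
def apply_optimizations(baseline: float, lifts: list[float]) -> float:
    """Apply successive relative improvements to a baseline metric."""
    result = baseline
    for lift in lifts:
        result *= 1 + lift
    return result

baseline = 100.0  # say, 100 conversions per month
improved = apply_optimizations(baseline, [0.10, 0.10])
print(round(improved, 2))  # 121.0 -- the second 10% lift is worth 11% of the original
```

Each lift multiplies the already-improved baseline, which is exactly why the second 10% improvement adds 11 conversions rather than 10.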
Optimizations are also portable. If you are thoughtful about what you choose to analyze and improve, those same improvements can carry over into other channels. For example, if we test two banner ads against each other to learn which one evokes more action and the only thing we change is the emotional appeal of the headline, we can then take that appeal and apply it to our outdoor advertising, our print advertising and even radio.
In that same example, we learned something about our target audience and which appeal drives action. That insight helps us round out the targeted persona and could inform customer service, product development or even the C-suite as they make future decisions about corporate direction. Thoughtful experiments and optimizations result in insights, and those insights have value not only to us as marketers, but to the organization at large.
While we report on the great insights that our experiments have produced, we also have the opportunity to report on how we are improving our results. If there’s one thing that executives and management like, it’s graphs that consistently go up and to the right. If you are regularly optimizing, you have an opportunity to regularly report on progress toward better and better results.
Keys to Successful Optimization
Below are eight keys to successful optimization. Follow these eight rules and your experimentation and optimization efforts will improve significantly.
1. Set Aside Resources: One of the key challenges for mature marketing organizations trying to adopt Iterative Marketing is the allocation of budget and resources to allow for continuous improvement. It takes time and money to run experiments, both to administer the experiments and to create alternate versions of creative. The compounding benefits more than pay for the effort, but if the resources aren’t there before you start, optimization becomes a non-starter.
2. Start With an MVMP: Starting with a minimum viable marketing program (MVMP) reduces complexity and allows us to take the lessons we learn from our initial experiments and carry them through every tactic, additional channel and future content as we grow the program.
3. Measure First, Optimize Second: The famous quote, often misattributed to Peter Drucker, “what gets measured, gets managed” seems obvious in the context of optimization. However, we often forget to make measurement a priority early in the planning process and then attempt to bolt it on afterwards. By putting measurement first, we have the opportunity to strategically roll out new creative on channels which are easy to measure, identify the key performance indicators (KPIs) to watch in the process, and have the tools in place to measure them before going to market.
4. Measure Outcomes, not Actions: When optimizing, it is tempting to jump to conclusions based on the simple actions our target audience takes when interacting with our content. “More people clicked on this ad, therefore it must be better,” can seem obvious in the moment, but the answer is always more nuanced. If I publish an ad that reads, “FREE MONEY! Click Here!,” you can bet that I’ll get plenty of clicks. However, if the landing page is offering carpet cleaning services for office buildings, the quantity and quality of leads I’ll receive from those clicks will be horrible when compared to the ad that reads, “Find out how clean your office carpets should be.” It’s important to follow the actions down to the metrics that connect to the desired outcome – conversions, leads, sales, etc. Ideally, we measure the quality of these outcomes as well as their quantity.
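One way to put this rule into practice is to score ads on an outcome metric rather than an action metric. The numbers below are hypothetical stand-ins for the two example ads, and qualified leads per 1,000 impressions is just one reasonable choice of outcome metric.

```python
def per_1000_impressions(count: int, impressions: int) -> float:
    """Normalize an event count to a rate per 1,000 impressions."""
    return 1000 * count / impressions

# Hypothetical results for the two example ads over 10,000 impressions each.
# The clickbait ad wins on the action metric (clicks)...
print(per_1000_impressions(800, 10_000))  # "FREE MONEY!" clicks: 80.0
print(per_1000_impressions(120, 10_000))  # honest ad clicks: 12.0

# ...but loses badly on the outcome metric (qualified leads).
print(per_1000_impressions(1, 10_000))    # "FREE MONEY!" qualified leads: 0.1
print(per_1000_impressions(9, 10_000))    # honest ad qualified leads: 0.9
```

Ranking by clicks picks the wrong winner; ranking by qualified leads picks the right one.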
5. Use the Right Tools: I have tried a good number of tools out there for setting up experiments and measuring results. Every tool has its pros and cons. It’s extremely important to learn, test and understand the tradeoffs of the tools you use before you make decisions based on the data they produce. I have had good results with Convert and Optimizely, but the key is to take the time to identify the right tool for you and your infrastructure.
6. Have Enough Data: We are all wired to draw conclusions from the data available, even if there isn’t enough data to truly draw a conclusion. If the outcome of your experiment could have gone the other way because the third visitor to your site happened to get a phone call while completing your lead form, chances are your result isn’t valid. When architecting your experiments, make sure that you have enough positive results to hit a target statistical confidence level. We call this having a statistically significant result. While the math isn’t hard, it’s definitely the topic for another article. A good rule to follow, though, is if you cannot count on at least a double-digit number of successful outcomes, your experiment might be weak.
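The math really isn’t hard: Python’s standard library is enough for a quick check. Below is one common approach, a two-proportion z-test; the article doesn’t prescribe a method, so treat this as an illustrative choice, and the conversion counts are made up.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference between two conversion rates
    (standard two-proportion z-test with a pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Single-digit successes: 3 vs 5 conversions out of 100 visitors each.
print(two_proportion_z_test(3, 100, 5, 100) < 0.05)     # False -- not significant
# Ten times the successes at the same rates: now the gap is real.
print(two_proportion_z_test(30, 1000, 55, 1000) < 0.05)  # True
```

Note how the same conversion rates (3% vs 5.5%-ish) only become significant once each variant has a double-digit count of successes, which is the rule of thumb above in action.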
7. Manage Complexity: If you are testing multiple things (or variables) at once, you must be certain they don’t interfere with each other. You can’t test headlines on the same landing page where you are testing the call to action. Doing so sets up what’s called a multivariate test, which is significantly more complicated to administer. Test one thing at a time along the prospect’s path so you can be certain of the insights you glean and can move on to the next test.
8. Manage Noise: Say you are running television advertising at the same time as a digital program you are working to optimize. The television media is flighted (running intermittently) such that it’s on for one week and off for two. While the television ads are running, your landing page version A outperforms landing page version B. During the break in TV ads, the results of the landing page test flip, showing version B outperforming A. There’s an insight here, but if you’re only averaging the results over the duration of the whole test, you might find that version A’s winning streak cancels out version B’s and conclude that the two are equivalent. The key is to avoid starting or stopping any other tactic that might impact a test while the test is running. This also means you are best off running shorter tests when possible, to avoid the possibility that the landscape changes in such a way as to sabotage your data.
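The flighted-TV scenario can be reproduced with toy numbers. Everything below is hypothetical; the point is only that pooling over the whole test hides a flip that segmenting by flight reveals.

```python
# Hypothetical conversions per 1,000 visitors for each of three weeks.
# Week 1 has the TV flight on; weeks 2-3 are the TV break.
a = [80, 40, 40]              # version A wins only while TV is running
b = [40, 60, 60]              # version B wins during the TV break
tv_on = [True, False, False]

# Pooled over the whole test, the two versions look identical...
print(sum(a) / 3, sum(b) / 3)

def segmented_avg(rates: list[int], flags: list[bool], on: bool) -> float:
    """Average only the weeks whose TV-flight status matches `on`."""
    weeks = [r for r, f in zip(rates, flags) if f == on]
    return sum(weeks) / len(weeks)

# ...but segmenting by flight reveals the flip described above.
print(segmented_avg(a, tv_on, True), segmented_avg(b, tv_on, True))    # 80.0 40.0
print(segmented_avg(a, tv_on, False), segmented_avg(b, tv_on, False))  # 40.0 60.0
```

If you can’t hold the TV schedule constant, segmenting your test results by the confounding tactic, as sketched here, at least keeps the insight from averaging away.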
Remember: Good experiment design may or may not lead to good results, but poor experiment design will produce poor results every single time.