The time has come. Digital has taken center stage, and the perception is that if it’s bits and bytes, it’s measurable. This means the spotlight is on marketing to justify how we spend our time and money. Are you feeling the pressure? Today I lay out three common marketing practices we should reevaluate to find greater efficiency and impact in our marketing efforts.
Wasteful Practice #1: Big Campaign Thinking
There was a time when the world moved a little more slowly. Our ability to measure marketing results was limited to the anecdotal feedback we got from sales and the executive team. At that time, big campaigns were a great way to go. They grabbed the attention of the executives and the creative demonstrated the power and capabilities of the marketing team.
Today, digital marketing gives us the ability to measure the response to our messages and our channels in ways we never could before. It allows us to get near-instant feedback on whether our content resonates or turns off our audience.
With measurement comes an obligation to act on that measurement. If we build out an entire campaign before placing any of it in market, we lose the opportunity to get feedback on the efficacy of the creative and make improvements as we go.
It is far better to start small and get some data behind what’s working before investing more. Instead of big campaign thinking, start with a minimum viable marketing program (MVMP): the minimum required to test the message and primary channels. If we start small, we can make improvements and enhancements as we grow the program, adding new channels and more complex messages. In doing so, we can greatly improve the end result and avoid gambling a big chunk of our budget.
Wasteful Practice #2: Time-Boxing
Before we had the technology to measure marketing effectiveness in near real-time, all we could do was take a best guess at how long our creative should stay in market before either changing creative or retiring the campaign. We were forced to set arbitrary start and end dates for our media and messages.
With modern marketing technology, we are now able to get feedback from the audience on the effectiveness of each channel and our creative as a whole. If we are effectively measuring and optimizing our campaigns, then we should know when that campaign ceases to be effective. Then, and only then, does it make sense to update creative.
Taking the idea of measuring and optimizing one step further raises the question of what happens to that knowledge when the campaign ends. If, when we change creative, we also change the mechanics of the campaign, we end up throwing the baby out with the bathwater. Much of the measurement and optimization we did may no longer apply to the next campaign, essentially requiring us to start over.
It’s for these reasons that the Iterative Marketing community suggests that instead of time-boxed marketing campaigns, we run indefinite, continuous and repeatable marketing programs. In fact, we try to avoid using the word “campaign” at all.
Wasteful Practice #3: Optimizing Only for Conversions
Conversions are important. Ultimately, any A/B testing we do should net a higher quantity or quality of conversions. But when we choose what we want to test to improve conversions, we as marketers rarely look to what other insights our A/B test might produce. There are two primary ways in which we lose opportunities to gain insights: limiting the scope of our insights and testing too many things at once.
Limiting the Scope of Our Insights
This common trap’s poster child is the “button color test,” where marketers test whether a green button gets more conversions than a red button. We have a tendency to test obvious visual changes to our pages and ads because these tests are popular and easy to borrow from others.
These easy-to-apply tests cause us to miss out on real insights into how our audiences respond to different messages and visuals. Far better tests involve swapping the focus of a visual or the primary call to action. For example, should the ad feature the doctor or a patient? Or, which is the better motivator, “act now to get the most savings” vs “don’t waste another day without our product”?
Applying tests like these allows us to take the insight created and carry it outside the realm of the test itself. If you know ads featuring patients outperform those featuring doctors, that same idea can be applied beyond digital, to outdoor, radio, print and social. If you know “don’t waste another day without our product” works best, then let the sales team know so they can work that into their scripts.
Testing Too Many Things at Once
When we change eight different things between version A and version B, our test can only produce one insight: version B is better than version A. In this case, we could switch all traffic over to version B and see an uptick in conversions, but we still don’t know why our audience prefers version B. We don’t walk away from the test with any information we can apply outside of this specific landing page or creative.
Instead, we should limit the number of changes in any given test to one. For example, by only changing the headline of the landing page from “tastes great” to “less filling,” it is clear that any improvement in results is due to one headline resonating better with our audience. The insight garnered from this landing page test can be applied to the ads driving traffic to the page and even to other media, including offline.
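A single-variable test also lends itself to a simple statistical check before we declare a winner. As a minimal sketch (with entirely hypothetical visit and conversion numbers), a two-proportion z-test can tell us whether the difference between the two headlines is likely real or just noise:

```python
# Minimal sketch: is headline B really outperforming headline A?
# Two-proportion z-test; all numbers below are hypothetical.
from math import sqrt, erfc

def ab_test(conv_a, visits_a, conv_b, visits_b):
    """Return the z-score and two-sided p-value comparing two conversion rates."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical results: "tastes great" (A) vs "less filling" (B)
z, p = ab_test(conv_a=120, visits_a=4000, conv_b=165, visits_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```

Because only the headline changed between the two versions, a significant result here is an insight about the headline itself, one we can carry into ads, email and even offline media.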
Let’s Trim the Fat
There is no reason in today’s measurement-driven world for us to be operating under antiquated tradition or limiting the scope of the insights we can generate. Let’s create more value with less time and money and ultimately become more effective marketers.