Product | 6 min read

Rockerboxer Roundup: Eddie Chou on Experimentation

Written by Alyssa Jarrett on September 21, 2021

Last week, we had the pleasure of hosting a webinar where we went behind the scenes of our recent product launch of Rockerbox Experiments. We explained the importance of integrating attribution and experimentation to drive more effective marketing performance, as well as how Rockerbox makes achieving this possible.

In the first installment of our new blog series, “Rockerboxer Roundup,” we’re asking Product Manager Eddie Chou what marketers at direct-to-consumer brands need to know to run effective ad experiments and build a more data-driven organization.

Rockerboxer Roundup: Eddie Chou

1. Tell us about yourself, Eddie, and your role at Rockerbox.

I’m the Product Manager leading Experiments at Rockerbox. I also lead other parts of our product, primarily focused on platform integrations and reporting. Prior to that, I led our first fifty customers through onboarding on our platform. 

Before Rockerbox, I spent 15 years in SaaS start-ups, and before that I had a brief two-year stint in manufacturing. I have a passion for technology and for solving problems, and the two often go hand in hand.

Outside of work, I enjoy exploring the Pacific Northwest with my partner and two dogs, cooking, gardening and training Brazilian jiu-jitsu.


2. Let’s get on the same page. What is an experiment?

One key learning from engaging with our customers and prospects is that they’re eager to test their marketing! They test different creatives, messages, landing pages, geos, audiences, and so on. To set up these tests, they would run them in individual ad platforms and rely on those platforms’ reporting to analyze the results.

This created disparate sources of data: these same customers were already incorporating Rockerbox into their marketing analysis and decision-making, so they would look to Rockerbox for cross-channel performance and to individual platforms for test results. That wasn’t ideal, so we built Rockerbox Experiments to bring it all into one platform, which is how our customers want to operate.

Rockerbox Experiments therefore allows our customers to review the results of third-party platform tests within Rockerbox, alongside their existing marketing attribution.


3. How is experimentation different from attribution and other measurement approaches?

Each measurement approach has its strengths and purposes:

  • Attribution: Best for determining the impact of each reported touchpoint and which channel gets credit for a given conversion. Most marketers are interested in multi-touch attribution, which uses the full user-level path to conversion to credit each touchpoint.
  • Experimentation and incrementality: Best for determining the impact of individual campaigns and ads against a baseline. This approach usually requires upfront investment to design and execute tests. 
  • Media mix modeling: Best for determining correlations between spend and results. This approach is great for high-level analysis and suited for budget allocation, but offers limited agility and insight at a tactical level.

With these approaches in mind, Rockerbox Experiments was built for brands that run tests frequently and need to see those results alongside their multi-touch attribution results.


4. What are the biggest challenges marketers face with experimentation today?

Experimentation is challenging because it requires several critical components:

  • Upfront investment in design. You need to design solid experiments because, as the old adage goes, garbage in, garbage out.
  • Budget allocation. Some experiments are bound to fail. You need to be willing to set aside part of your budget, accepting that it’s buying learnings about what truly works rather than guaranteed returns.
  • Analysis that measures lift and statistical significance. This is ultimately what brands are after: they need to know which campaigns and creatives perform better, and that there is enough data to be confident in those results (see the sketch after this list).
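
To make that last point concrete, here is a minimal sketch in Python of how lift and significance might be computed for a test campaign against a baseline, using a standard two-proportion z-test. The campaign numbers are made up, and this is a generic illustration rather than Rockerbox’s actual methodology.

from math import sqrt
from statistics import NormalDist

# Hypothetical results: a test campaign vs. a baseline (holdout) group.
test_conversions, test_users = 540, 20000
base_conversions, base_users = 450, 20000

test_rate = test_conversions / test_users
base_rate = base_conversions / base_users

# Lift: the relative improvement of the test over the baseline.
lift = (test_rate - base_rate) / base_rate

# Two-proportion z-test for statistical significance.
pooled = (test_conversions + base_conversions) / (test_users + base_users)
std_err = sqrt(pooled * (1 - pooled) * (1 / test_users + 1 / base_users))
z = (test_rate - base_rate) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"lift: {lift:.1%}, z: {z:.2f}, p-value: {p_value:.3f}")

If the p-value comes in below your chosen threshold (commonly 0.05), you can be reasonably confident the measured lift is not just noise.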

5. What are things to keep in mind before running an experiment?

An experiment’s results are only as good as its design. It’s extremely important that prior to setting up an experiment, you ask yourself the following questions:
  • What do you want to learn from running this experiment? Perhaps you’re looking to see if a new creative or message resonates with your audience, or whether a similar message performs in different geographical markets. Set your intention, as the kids say these days, and make sure you know what you want to get out of your tests before you put too much into them.
  • How much budget will you need to generate statistically significant results? If you need help figuring out how much to spend, there are tools that can estimate the sample size, and therefore the budget, your test will require (a rough sketch follows this list).
  • Can you determine whether your experiment is “clean”? In other words, you want to avoid running campaigns that have a significant degree of overlapping users. This is the classic scientific method: only test one variable at a time!
  • Can the platform you are using support the experiment design? Some platforms excel at testing audiences, while others have more granular geo targets, and still others easily support creative variants. Pick the right platform for the type of experiments you’re looking to run. 
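
On the budget question above, here is a rough sketch of the kind of calculation those planning tools perform: estimating how many users per arm you need to detect a given lift over a given baseline conversion rate, using a standard two-proportion power calculation. The baseline rate and target lift below are hypothetical examples, not Rockerbox defaults.

from statistics import NormalDist

def required_users_per_arm(base_rate, min_lift, alpha=0.05, power=0.8):
    # Approximate sample size per arm to detect a relative lift over a
    # baseline conversion rate with a two-sided two-proportion z-test.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = base_rate, base_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# Hypothetical: 2% baseline conversion rate, hoping to detect a 10% lift.
users = required_users_per_arm(0.02, 0.10)
print(f"~{users:,.0f} users per arm")

Multiply that user count by your expected cost per impression or click to get a ballpark test budget.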

6. Tell us about the advantages of Rockerbox Experiments. How does it solve the challenges you mentioned?

We view Rockerbox as not only a measurement platform, but also core data infrastructure for a marketing organization. To that end, we’re primarily focused on analytics and don’t involve ourselves in media execution. Rather, we see ourselves as arming our marketers with a wealth of information to make better business decisions.

The great news is that if you’re already running those experiments in other ad platforms, then all you need to do is identify those campaigns within Rockerbox Experiments, and you’ll immediately see the lift and statistical significance of those results. Already started an experiment? Don’t worry: as long as you’ve set up standard Rockerbox tracking, you’ll be able to review the analysis in Experiments. No wasted test budget!

Running multiple test campaigns that you want to compare against a single baseline? Now you can identify them all and view each result individually.


7. How does Rockerbox Experiments report on performance?

Rockerbox Experiments leverages all of the existing products we’ve built to provide as much depth of marketing and conversion reporting as we have available. That means we use clicks, impressions, and other techniques, including synthetic events, to measure performance. If we report on a piece of marketing performance in our attribution platform, we will also report on it in Experiments.

A related question we’re often asked is which attribution type we use, meaning: is performance based on last touch, first touch, or a modeled approach? For Experiments, our goal is to focus on the performance lift of each test, and for that it made the most sense to use a concept called “assisted conversions.”

An “assisted conversion” is any conversion where the marketing touchpoint appears somewhere on the path to conversion, regardless of the sequence of events. We chose this metric because it corresponds most closely to what each ad platform reports on its own, and it separates the test campaign from the noise of other campaigns.
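
As a rough illustration of that definition (not Rockerbox’s actual implementation), here is how assisted conversions could be counted from user-level conversion paths; the paths and campaign names below are hypothetical.

# Hypothetical user-level paths: each list is the ordered set of marketing
# touchpoints (campaigns) a user saw before converting.
conversion_paths = [
    ["facebook_prospecting", "email", "test_campaign_a"],
    ["test_campaign_a", "branded_search"],
    ["display_retargeting", "email"],
    ["branded_search"],
]

def assisted_conversions(paths, campaign):
    # Count conversions where the campaign appears anywhere on the path,
    # regardless of whether it was the first, last, or a middle touch.
    return sum(1 for path in paths if campaign in path)

print(assisted_conversions(conversion_paths, "test_campaign_a"))  # -> 2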


Continually Improve With Experimentation

Rockerbox is all about upholding our core values, one of which is continually improving to strive for better results. That’s why we’re dedicated to enabling marketers with the tools they need to achieve their business goals.

You can watch our webinar on Rockerbox Experiments on-demand in case you missed it, and if you’d like to learn more about experimentation with Rockerbox, reach out today to schedule a demo.
