Eppo News
Today, Eppo is excited to launch Contextual Bandits - a powerful tool that lets data, ML, and growth teams customize bandit algorithms that automatically personalize user experiences to optimize outcomes.
Traditional A/B testing has been the gold standard for evidence-based decision-making, and the core of Eppo’s experimentation platform. However, some problems require real-time optimization instead. That’s where Contextual Bandits come in. Eppo’s Contextual Bandits offer an easy way for you to automatically optimize and personalize user experiences in real-time to ensure you’re not leaving money on the table.
Few machine learning techniques pique Marketing and Product teams’ interest more than bandit algorithms. These reinforcement learning algorithms balance “exploration” (or learning) with “exploitation” (or optimization) with each incremental user, introducing the efficiency of machine learning in place of the rigor of an A/B test.
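The explore/exploit tradeoff can be sketched in a few lines. This is a generic epsilon-greedy bandit for illustration, not Eppo's implementation: with probability epsilon the algorithm tries a random arm (exploration), otherwise it serves the arm with the best observed mean reward (exploitation).

```python
import random

def epsilon_greedy_choice(counts, rewards, epsilon=0.1):
    """Pick an arm: explore with probability epsilon, else exploit the best mean."""
    if random.random() < epsilon:
        return random.randrange(len(counts))  # exploration: try any arm
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(means)), key=means.__getitem__)  # exploitation

# Simulate two arms with made-up conversion rates: arm 1 pays off 30% of
# the time, arm 0 only 10%.
random.seed(0)
true_rates = [0.10, 0.30]
counts, rewards = [0, 0], [0.0, 0.0]
for _ in range(5000):
    arm = epsilon_greedy_choice(counts, rewards)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward
# Over time the bandit shifts most traffic toward the better arm.
```

Unlike a fixed 50/50 A/B split, the bandit reallocates traffic toward the winner while the run is still in progress, which is the efficiency gain described above.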
The standard bandit algorithms, however, lack one important capability: context. They presume that there is only one single best action (or treatment), and their goal is to zero in on that single optimal choice quickly. Of course, there is rarely a single optimal choice across all your users — preferences often vary. Contextual bandits introduce the ability to consider key information known about a user when predicting which treatment to serve them.
A simpler way to think about the most common use case of contextual bandits is that they enable 1:1 personalization (as opposed to broader, rules-based personalization). If we can identify the important characteristics or “features” that might be relevant in determining the optimal user experience, a contextual bandit will help us make a one-to-one match of the right experience for each individual user, at scale, automatically.
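To make the role of context concrete, here is a toy sketch (again, not Eppo's algorithm) in which reward estimates are keyed by a user segment as well as by action. The segment names, action names, and reward rates are invented purely for illustration. The same epsilon-greedy rule now learns a different best action for each segment, which is exactly the one-to-one matching described above.

```python
import random
from collections import defaultdict

# Invented ground truth: the best action differs by user segment.
TRUE_RATES = {
    ("new", "discount_banner"): 0.25, ("new", "feature_tour"): 0.10,
    ("returning", "discount_banner"): 0.05, ("returning", "feature_tour"): 0.20,
}
ACTIONS = ["discount_banner", "feature_tour"]

counts = defaultdict(int)    # keyed by (context, action)
rewards = defaultdict(float)

def choose(context, epsilon=0.1):
    """Epsilon-greedy, but reward estimates are conditioned on the context."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    def mean(a):
        c = counts[(context, a)]
        return rewards[(context, a)] / c if c else 0.0
    return max(ACTIONS, key=mean)

random.seed(1)
for _ in range(10000):
    ctx = random.choice(["new", "returning"])
    action = choose(ctx)
    reward = 1.0 if random.random() < TRUE_RATES[(ctx, action)] else 0.0
    counts[(ctx, action)] += 1
    rewards[(ctx, action)] += reward
# The policy converges on a different "best" action per segment.
```

A context-free bandit would be forced to pick a single winner and leave value on the table for one of the two segments; the contextual version serves each segment its own best experience.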
There are a few “contextual bandit” solutions commercially available today, but these are usually “off the shelf” models that consider some pre-determined list of characteristics - typically ones that are easy to detect on any website: which browser is the user on, where are they located, what time of day is it, are they a new or returning user?
The real power of contextual bandits, though, is unlocked by building bandits that are specific to each use case. You can use far more informative characteristics, which goes a long way towards actually achieving positive business impact. You can even use contextual bandits as a way to “operationalize” existing AI/ML models, using their outputs as inputs in determining the right user experience to serve and making sure you’re getting all the value possible out of what you’ve already built.
This is what we’ve built in Eppo Contextual Bandits.
Teams often get excited about bandit algorithms only to find them time-consuming to build, and difficult to validate. Eppo does the hard work for you - just supply actions and their contexts to the Eppo SDK, tell us about the business metric you want to optimize, and let us do the rest.
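The integration might look roughly like the following sketch. To be clear, the client, method, and parameter names here are hypothetical placeholders chosen to show the shape of the call, not Eppo's actual SDK API; a stub client is included so the sketch runs on its own.

```python
class StubBanditClient:
    """Stand-in for a bandit service client, so this sketch runs locally.
    A real client would score each candidate action against the context."""
    def get_bandit_action(self, flag_key, subject_key, subject_attributes,
                          actions, default):
        return default  # the stub just falls back to the default action

def serve_experience(client, user_id, user_attributes):
    # The application supplies the candidate actions and the user's context;
    # the service returns the action the bandit selected for this user.
    # All names below are illustrative placeholders.
    return client.get_bandit_action(
        flag_key="onboarding-experience",
        subject_key=user_id,
        subject_attributes=user_attributes,  # the "context"
        actions=["discount_banner", "feature_tour", "video_demo"],
        default="feature_tour",  # served if the bandit is unavailable
    )

action = serve_experience(StubBanditClient(), "user-1", {"country": "US"})
```

The point of the shape: the application owns the candidate actions and the context, while the service owns model training, scoring, and traffic allocation.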
Eppo’s Contextual Bandits also integrate easily with the rest of your stack and provide a simplified developer experience: use a single SDK for your feature flagging, experimentation, and bandits, and all configuration is handled in a unified UI.
The direct integration between Contextual Bandits and the rest of Eppo’s experimentation platform also allows for direct observation of the true impact your bandit algorithm is having on key business metrics - a key challenge for many teams today.
Your goal isn’t just to run a bandit algorithm; you want to make sure it actually improves outcomes. Traditionally, however, it has been difficult to prove the ROI or business impact of implementing a bandit algorithm. Bandit algorithms generally work best when optimizing short-term metrics, but business metrics are often measured on a longer timescale.
There are also more technical challenges to measuring a bandit’s impact accurately: observations across users are not independent, because the actions the bandit takes today depend on its historic actions and their outcomes. If the bandit had taken a different action on day 1, that could lead to a very different policy by day 10.
To solve for these challenges, Eppo’s Contextual Bandits are tightly integrated with our experimentation analysis tools and leverage a holdout strategy to measure performance. This lets you rigorously understand performance across any metric you care about, measure guardrail metrics, and conduct deep-dive investigations to understand exactly what is happening under the hood. It also means that you have all the same world-class statistical tools available as any other experiment on Eppo: CUPED, sequential tests, Bayesian analysis, etc.
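One common way to implement such a holdout is sketched below, under the assumption of a deterministic hash-based split (the general pattern, not necessarily Eppo's exact mechanism): a stable slice of users always receives the status-quo experience, while everyone else is served by the bandit, so the difference in business metrics between the two groups estimates the bandit's end-to-end impact despite the dependence between the bandit's own observations.

```python
import hashlib

def in_holdout(user_id, holdout_pct=10, salt="bandit-holdout-v1"):
    """Deterministically assign a stable ~holdout_pct% of users to the holdout."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < holdout_pct

def assign(user_id, bandit_policy, status_quo_action="control_experience"):
    # Holdout users always see the status-quo experience; everyone else is
    # served by the bandit. Comparing the two groups' metrics measures the
    # bandit's lift as a whole, like any other experiment.
    if in_holdout(user_id):
        return "holdout", status_quo_action
    return "bandit", bandit_policy(user_id)

# Example with a trivial placeholder policy:
group, action = assign("user-42", bandit_policy=lambda uid: "feature_tour")
```

Because assignment is a pure function of the user ID, each user's group is stable across sessions, which keeps the comparison clean over the life of the bandit.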
Contextual Bandits are powerful tools for making personalized decisions at scale, without the heft or cold start problem of recommendation systems. There are potential use cases anywhere a more tailored user experience may improve outcomes. Here are some questions to consider when exploring if contextual bandit algorithms are the right tool for you:
Do you have many options to choose between? (likely tens to hundreds)
Does the optimization problem have a short-ish timeframe? (weeks instead of months or years)
Do you have informative data to use as context? (e.g. a logged-in user)
If so, Eppo Contextual Bandits can help you shortcut the hard work of building and implementing bandits from scratch and let you get straight to what matters - setting your personalization strategy and driving real business results.
With the launch of Contextual Bandits, Eppo reaffirms its commitment to empowering teams with the decision-making tools they need to succeed in a competitive digital landscape. We're excited to see how our customers will leverage this new product to achieve unprecedented levels of personalization and efficiency in their optimization efforts. Welcome to the future of data-driven decision making — where the gold standard of randomized controlled experiments meets the cutting edge of machine learning, powered by Eppo.
Eppo customers can start using this feature today. For those considering Eppo, we invite you to request a demo and see how it can enhance your experimentation (and optimization) efforts.