Why do Bayesian and frequentist claims sound so different?
Make every A/B test count. Discover proven techniques to reduce Type I errors, maximize ROI on your experiments, and drive better business outcomes.
Escaping dogma that harms experimentation programs
Measuring impact when you cannot hold out a control group
Don't let these violations invalidate your experiment results
A holistic comparison of statistical methods for online experimentation
My personal takeaways and highlights from a bigger and seemingly more diverse CODE conference.
Eppo makes CUPED widely available, allowing teams to run experiments up to 65% faster than before.
Bayesian and frequentist approaches are fundamentally different, so why do they sometimes yield the same results?
How to understand statistical power, multiple testing, and peeking using the definition of a p-value.
If you're a business running low-powered experiments, you risk missing out on insights worth thousands of dollars.
The solution that improves everything