It can be not only extremely useful but also deeply satisfying to occasionally dust off one’s math skills. In this article, we approach the classical problem of conversion rate optimization—which is frequently faced by companies operating online—and derive the expected utility of switching from variant A to variant B under some modeling assumptions. This information can then be used to support the corresponding decision-making process.
An R implementation of the math below and more can be found in the following repository:
However, it was written for personal exploratory purposes and has no documentation at the moment. If you decide to dive in, you will be on your own.
Suppose, as a business, you send communications to your customers in order to increase their engagement with the product. Furthermore, suppose you suspect that a certain change to the usual way of working might increase the uplift. In order to test your hypothesis, you set up an A/B test. The only decision you care about is whether to switch from variant A to variant B, where variant A is the baseline (the usual way of working). The twist is that, from the perspective of the business, variant B comes with its own gain if it is the winner, and its own loss if it is the loser. The goal is to incorporate this information in the final decision, making the necessary assumptions along the way.
Let $A$ and $B$ be two random variables modeling the conversion rates of the two variants, variant A and variant B, respectively. Furthermore, let $f$ be the probability density function of the joint distribution of $A$ and $B$. In what follows, concrete values assumed by the two random variables are denoted by $a$ and $b$, respectively.
Define the utility function as

$$
U(a, b) = G(a, b) \, \mathbb{1}(a \leq b) + L(a, b) \, \mathbb{1}(a > b),
$$

where $G$ and $L$ are referred to as the gain and loss functions, respectively, and $\mathbb{1}$ denotes the indicator function. The gain function takes effect when variant B has a higher conversion rate than that of variant A, and the loss function takes effect when variant A is better than variant B, which is what is enforced by the two indicator functions (the placement of the equality is not essential). The expected utility is then as follows:

$$
\mathbb{E}(U) = \int_0^1 \int_0^1 U(a, b) f(a, b) \, da \, db.
$$
We assume further that the gain and loss are linear:

$$
G(a, b) = k_G (b - a) \quad \text{and} \quad L(a, b) = k_L (b - a).
$$
In the above, $k_G$ and $k_L$ are two non-negative scaling factors, which can be used to encode business preferences. Then we have that

$$
\mathbb{E}(U)
= k_G \left( \iint_{a \leq b} b f(a, b) \, da \, db - \iint_{a \leq b} a f(a, b) \, da \, db \right)
+ k_L \left( \iint_{a > b} b f(a, b) \, da \, db - \iint_{a > b} a f(a, b) \, da \, db \right).
$$
For convenience, denote the four integrals by $I_1$, $I_2$, $I_3$, and $I_4$, respectively, in which case we have that

$$
\mathbb{E}(U) = k_G (I_1 - I_2) + k_L (I_3 - I_4).
$$
Now, suppose the distributions of and are estimated using Bayesian inference. In this approach, the prior knowledge of the decision-maker about the conversion rates of the two variants is combined with the evidence in the form of data continuously streaming from the A/B test. It is natural to use a binomial distribution for the data and a beta distribution for the prior knowledge, which results in a posterior distribution that is also a beta distribution due to conjugacy.
A posteriori, we have the following marginal distributions:

$$
A \sim \text{Beta}(\alpha_A, \beta_A) \quad \text{and} \quad B \sim \text{Beta}(\alpha_B, \beta_B),
$$

where $\alpha_A$ and $\beta_A$ are the shape parameters of the posterior distribution of $A$, and $\alpha_B$ and $\beta_B$ are those of the posterior distribution of $B$. Assuming that the two random variables are independent given the parameters,

$$
f(a, b) = f(a) f(b),
$$

where $f(a)$ and $f(b)$ denote the two marginal beta densities.
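As an illustration, the conjugate update amounts to adding the observed successes and failures to the prior shape parameters. The following Python sketch assumes a flat $\text{Beta}(1, 1)$ prior, and the trial counts are made up for the sake of the example:

```python
# A sketch of the conjugate beta-binomial update. The Beta(1, 1) prior and the
# trial counts below are hypothetical and serve only as an illustration.
def posterior(successes, failures, alpha_prior=1.0, beta_prior=1.0):
    """Return the shape parameters of the beta posterior distribution."""
    return alpha_prior + successes, beta_prior + failures

# Variant A: 64 conversions out of 1000 trials; variant B: 81 out of 1000
alpha_A, beta_A = posterior(64, 1000 - 64)
alpha_B, beta_B = posterior(81, 1000 - 81)
print(alpha_A, beta_A, alpha_B, beta_B)  # 65.0 937.0 82.0 920.0
```

The update can be applied incrementally as the data stream in, which fits the continuous nature of an A/B test.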
We can now compute the expected utility. The first integral is as follows:

$$
I_1 = \iint_{a \leq b} b f(a) f(b) \, da \, db
= \frac{B(\alpha_B + 1, \beta_B)}{B(\alpha_B, \beta_B)} \, h(\alpha_A, \beta_A, \alpha_B + 1, \beta_B),
$$

where, with a slight abuse of notation, $B$ is the beta function, and $h(\alpha_A, \beta_A, \alpha_B, \beta_B)$ denotes the probability that a $\text{Beta}(\alpha_B, \beta_B)$ random variable exceeds a $\text{Beta}(\alpha_A, \beta_A)$ one, that is, the probability of variant B beating variant A. The identity holds because multiplying the density of $B$ by $b$ yields the density of a $\text{Beta}(\alpha_B + 1, \beta_B)$ random variable up to the above ratio of beta functions. By the same reasoning, the second integral is

$$
I_2 = \iint_{a \leq b} a f(a) f(b) \, da \, db
= \frac{B(\alpha_A + 1, \beta_A)}{B(\alpha_A, \beta_A)} \, h(\alpha_A + 1, \beta_A, \alpha_B, \beta_B).
$$
The function $h$ can be computed analytically, as shown in the blog posts listed at the end of this article. Specifically,

$$
h(\alpha_A, \beta_A, \alpha_B, \beta_B)
= \sum_{i = 0}^{\alpha_B - 1} \frac{B(\alpha_A + i, \beta_A + \beta_B)}{(\beta_B + i) \, B(1 + i, \beta_B) \, B(\alpha_A, \beta_A)},
$$

provided that $\alpha_B$ is an integer.
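For concreteness, here is a minimal Python sketch of this closed-form evaluation of the probability of variant B beating variant A, using only the standard library; the function and variable names are of my own choosing, and the log-gamma function is used for numerical stability:

```python
import math

def log_beta(x, y):
    # Logarithm of the beta function via the log-gamma function
    return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)

def h(alpha_A, beta_A, alpha_B, beta_B):
    # Probability of variant B beating variant A, assuming alpha_B is an integer
    return sum(
        math.exp(
            log_beta(alpha_A + i, beta_A + beta_B)
            - math.log(beta_B + i)
            - log_beta(1 + i, beta_B)
            - log_beta(alpha_A, beta_A)
        )
        for i in range(int(alpha_B))
    )

# With identical posteriors, either variant wins with probability one half
print(round(h(2, 2, 2, 2), 6))  # 0.5
```

A useful sanity check is that $h(\alpha_A, \beta_A, \alpha_B, \beta_B) + h(\alpha_B, \beta_B, \alpha_A, \beta_A) = 1$ for continuous distributions.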
Regarding the last two integrals in the expression of the expected utility, integrating over the whole unit square instead of one of its halves yields the corresponding expected values, and hence

$$
I_3 = \iint_{a > b} b f(a) f(b) \, da \, db = \mathbb{E}(B) - I_1 = \frac{B(\alpha_B + 1, \beta_B)}{B(\alpha_B, \beta_B)} - I_1
$$

and

$$
I_4 = \iint_{a > b} a f(a) f(b) \, da \, db = \mathbb{E}(A) - I_2 = \frac{B(\alpha_A + 1, \beta_A)}{B(\alpha_A, \beta_A)} - I_2.
$$
Assembling the integrals together, we obtain

$$
\mathbb{E}(U) = k_G (I_1 - I_2) + k_L \left( \frac{B(\alpha_B + 1, \beta_B)}{B(\alpha_B, \beta_B)} - \frac{B(\alpha_A + 1, \beta_A)}{B(\alpha_A, \beta_A)} - I_1 + I_2 \right).
$$
At this point, we could call it a day, but there is some room for simplification. Note that, in the case of the assumed linear model, we have the following relationship between $I_3 - I_4$ and $I_1 - I_2$:

$$
I_3 - I_4 = d - (I_1 - I_2),
$$

where

$$
d = \frac{B(\alpha_B + 1, \beta_B)}{B(\alpha_B, \beta_B)} - \frac{B(\alpha_A + 1, \beta_A)}{B(\alpha_A, \beta_A)} = \frac{\alpha_B}{\alpha_B + \beta_B} - \frac{\alpha_A}{\alpha_A + \beta_A}
$$

is the difference between the above two ratios of beta functions, which is simply the difference between the posterior mean conversion rates of the two variants. Therefore,

$$
\mathbb{E}(U) = (k_G - k_L)(I_1 - I_2) + k_L \, d.
$$
The decision-maker is now better equipped to take action. Having obtained the posterior distributions of the conversion rates of the two variants, the derived formula allows one to assess whether variant B is worth switching to, considering its utility to the business at hand.
The reason the expected utility can be evaluated in closed form in this case is the linearity of the utility function $U$. More nuanced preferences require a different approach. The most flexible candidate is simulation, which is straightforward and should arguably be the go-to tool regardless of the availability of a closed-form solution, as it is less error-prone.
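For instance, a simulation-based estimate can be sketched as follows for an arbitrary utility function; the posterior parameters and the sample size below are arbitrary, and the linear utility with $k_G = k_L = 1$ is used so that the estimate can be compared against the exact value, $\mathbb{E}(B) - \mathbb{E}(A)$:

```python
import random

def simulate_expected_utility(alpha_A, beta_A, alpha_B, beta_B, utility,
                              samples=100_000, seed=42):
    # Average the utility over draws from the two posterior distributions
    generator = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        a = generator.betavariate(alpha_A, beta_A)
        b = generator.betavariate(alpha_B, beta_B)
        total += utility(a, b)
    return total / samples

# The linear utility with k_G = k_L = 1, whose exact value is E(B) - E(A)
estimate = simulate_expected_utility(65, 937, 82, 920, lambda a, b: b - a)
```

Any utility function, no matter how irregular, can be plugged in without rederiving anything, at the cost of Monte Carlo error that shrinks as the number of samples grows.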
Please feel free to reach out if you have any thoughts or suggestions.
- Chris Stucchio, “Easy evaluation of decision rules in Bayesian A/B testing,” 2014.
- David Robinson, “Is Bayesian A/B testing immune to peeking? Not exactly,” 2015.
- Evan Miller, “Formulas for Bayesian A/B testing,” 2014.