Part one: Toward a theory of everything

Uncertainty, market efficiency and the Kelly Criterion are three of the most common discussion points amongst bettors. In his latest guest contribution for Pinnacle, @PlusEVAnalytics tries to answer the questions that often arise out of these discussions. Read on to find out more.

In physics, a “theory of everything” is an attempt to conceptualise all aspects of the known universe in a single theoretical framework. My aim here is far less grandiose but along the same lines. Over the past few weeks I’ve been involved in several discussions (and at times heated debates) on Twitter on topics that have more in common than people may realize:

– What sample size of results is sufficient to convince a modeler that their edge is or is not real?

– What is “regression to market” and how should it be done?

– How important is closing line value (CLV)?

– What is better, full Kelly or fractional Kelly? If fractional, what’s the best way to determine the fraction?

In this article I will tie all of these ideas together into the betting version of a theory of everything.

The generator

Let’s start by defining a “generator” as a process that creates a sequence of random outcomes. We can divide generators into two categories:

Artificial generators like dice, cards or roulette wheels are carefully controlled so that everything there is to know about their probability distribution is already known. (For the sake of this discussion we’ll ignore things like loaded dice, marked cards and biased wheels!)

Natural generators like sporting events, elections and weather result from complex interactions of many factors. They too have a probability distribution, but it is unknown and unknowable – the best we can do is build models to estimate it. Sports betting exists entirely in the realm of natural generators.

For a model used to estimate the behaviour of a natural generator, let’s define the “true probability” as the thing that’s unknown and unknowable, the “model probability” as the modeler’s best estimate, and the “model error” as the difference between them.

Model probability = true probability + model error

As the difference between a known quantity and an unknown quantity, the model error is also unknown and unknowable.

Process uncertainty and parameter uncertainty

Next, let’s use this idea of generators to discuss uncertainty. The quantification of uncertainty is the center of all gambling mathematics. What people often neglect is that uncertainty comes from two distinct sources, and the nature of each is quite different.

“Process uncertainty” comes from the randomness that’s built into the generator itself. It’s why repeated spins of the same wheel don’t necessarily give the same result, and it’s also why the same two tennis players can play each other multiple times under identical conditions and each win some of the matches.

“Parameter uncertainty” comes from our incomplete understanding of how the generator works. In some more high-brow publications it may be referred to as “epistemic uncertainty”, rooted in epistemology – the philosophy of knowledge.

For example, suppose you give a soccer team a 60% probability of winning, you bet on them at even money, and they lose. Why did you lose your bet? Perhaps you were correct in your assessment, but you were unlucky – the 40% event happened, and you lost your bet. This is process uncertainty – good bet, unlucky result.

On the other hand, perhaps you were incorrect in your assessment – the true probability may have been 50%, or 30%, or even 1%. You made a bet that you thought was a good bet but in reality was a bad bet. This is parameter uncertainty. Because the true probability is unknown, it’s very difficult to figure out how much of your results – both good and bad – are driven by process uncertainty as opposed to parameter uncertainty.
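
To make the distinction concrete, here is a small Python sketch of that even-money soccer bet repeated 500 times. The 55% “true” probability is an assumption made purely for illustration – in reality it is exactly the thing we cannot observe.

```python
import numpy as np

rng = np.random.default_rng(7)

n_bets = 500          # identical even-money bets
model_prob = 0.60     # what the model believes
true_prob = 0.55      # unknowable in practice; assumed here for illustration

# Process uncertainty: even with a perfect model, individual results vary.
wins = rng.random(n_bets) < true_prob
realised_profit = np.where(wins, 1.0, -1.0).sum()   # 1-unit stakes at even money

# Parameter uncertainty: the 5% model error biases every bet the same way,
# so it does not diversify away as the number of bets grows.
expected_profit_model = n_bets * (2 * model_prob - 1)
expected_profit_true = n_bets * (2 * true_prob - 1)

print(f"Realised profit:         {realised_profit:+.0f} units")
print(f"Expected profit (model): {expected_profit_model:+.0f} units")
print(f"Expected profit (true):  {expected_profit_true:+.0f} units")
```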

Let’s pause here and make some statements that will prove useful as we continue this exploration into generators and uncertainty:

1. Artificial generators include process uncertainty only. Natural generators include both process uncertainty and parameter uncertainty.

2. Recall that we defined model error as the difference between the model probability and the true probability. Therefore, model error is a reflection of parameter uncertainty.

3. Process uncertainty impacts each result in a series independently of the others. This is why roulette scoreboards provide useless information.

4. Parameter uncertainty impacts each result in a series in a correlated manner. If you use the same model to produce probabilities for 100 different games, any errors in your model will likely propagate in a similar way across some or all of the 100.

5. Corollary of #3: Over a large sample size, the impact of process uncertainty will shrink towards zero. The proportion of snake eyes in repeated rolls of two dice will approach the theoretical probability of 1/36. This is known as the “law of large numbers”.

6. Corollary of #4 and #5: Repeatedly observing the results of a natural generator allows one to continually learn from the observations to improve one’s understanding of how the generator works. This is the basis of Bayesian modelling.
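
Statements #5 and #6 are easy to see numerically. The sketch below first checks the law of large numbers on an artificial generator (two dice), then shows a simple Bayesian update for a natural generator; the Beta prior and the 60-wins-in-100 record are hypothetical numbers chosen only to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Statement 5: law of large numbers for an artificial generator.
rolls = rng.integers(1, 7, size=(1_000_000, 2))      # a million rolls of two dice
snake_eyes_rate = np.mean(rolls.sum(axis=1) == 2)
print(f"Observed snake-eyes rate: {snake_eyes_rate:.5f} (theory: {1/36:.5f})")

# Statement 6: Bayesian updating for a natural generator.
# Beta(50, 50) prior on a team's win probability, updated after 60 wins in 100 games.
prior_a, prior_b = 50, 50
wins, games = 60, 100
post_a, post_b = prior_a + wins, prior_b + (games - wins)
print(f"Prior mean win probability:     {prior_a / (prior_a + prior_b):.3f}")
print(f"Posterior mean win probability: {post_a / (post_a + post_b):.3f}")
```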

Regression to market

Model error is, as we’ve shown, quite a slippery creature. We know it exists, but it’s impossible to quantify. There is, however, one important property of parameter uncertainty that we can infer from the theory of market efficiency:

In a market that is at least somewhat efficient, the parameter uncertainty will tend to “push” in the direction of the probabilities that are implied by the market price. The more efficient the market, the stronger the push.

For all the numerical examples that follow, we will use a point spread bet with market odds of -105 / -105 for the sake of simplicity. The math works equally well for any odds, but the calculations become more difficult.

In this case, the market price implies a win probability of 50% for each side. Suppose your model projects a win probability of 55% for one side. Market efficiency dictates that the model error is much more likely to be +2% (meaning a true probability of 53%) than -2% (meaning a true probability of 57%). To restate this same example in terms of expected values – if your model projects a theoretical expected value of +7.4%, your true expected value is much more likely to be +3.5% than +11.3%.
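
These expected values can be reproduced in a few lines. A quick sketch, assuming standard American odds notation (at -105 you risk 105 to win 100):

```python
def expected_value(win_prob: float, american_odds: int = -105) -> float:
    """Expected profit per unit staked at the given American odds."""
    # Profit per unit staked when the bet wins.
    payout = 100 / abs(american_odds) if american_odds < 0 else american_odds / 100
    return win_prob * payout - (1 - win_prob)

for p in (0.55, 0.53, 0.57):
    print(f"Win probability {p:.0%} -> expected value {expected_value(p):+.1%}")
```

Running this reproduces the +7.4%, +3.5% and +11.3% figures above.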

This asymmetry means that using the Kelly Criterion with an assumed edge of +7.4% is likely to cause you to over-bet your true edge, which is an extremely dangerous thing to do from a bankroll management perspective.
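
To see the size of the problem, compare the full Kelly stake implied by the model’s 55% with the stake that the (assumed) true 53% would actually justify:

```python
def kelly_fraction(win_prob: float, american_odds: int = -105) -> float:
    """Full-Kelly stake as a fraction of bankroll for a single bet."""
    b = 100 / abs(american_odds) if american_odds < 0 else american_odds / 100
    return win_prob - (1 - win_prob) / b   # (b*p - q) / b

stake_model = kelly_fraction(0.55)   # stake implied by the model probability
stake_true = kelly_fraction(0.53)    # stake the assumed true probability justifies

print(f"Kelly stake from model probability: {stake_model:.1%} of bankroll")
print(f"Kelly stake from true probability:  {stake_true:.1%} of bankroll")
print(f"Over-bet factor:                    {stake_model / stake_true:.1f}x")
```

With these numbers you would be staking more than twice full Kelly relative to your true edge, which is exactly the over-betting danger described above.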

Bettors generally have two strategies to guard against this – either they use a fractional Kelly approach, or they take their model’s projected probabilities and “regress them to market” by using a weighted average of the model probabilities and the market implied probabilities. The weights can be chosen subjectively, or they can be estimated using a method such as maximum likelihood.
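
For the maximum-likelihood route, one simple approach (a sketch, not the only way to do it – the function and the five-bet history below are purely illustrative) is to search over candidate weights and keep the one that maximises the log-likelihood of your past results under the blended probabilities:

```python
import numpy as np

def estimate_regression_weight(model_probs, market_probs, outcomes):
    """Grid-search the weight w in [0, 1] that maximises the log-likelihood
    of the observed outcomes under w * model + (1 - w) * market."""
    model_probs = np.asarray(model_probs, dtype=float)
    market_probs = np.asarray(market_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)   # 1 = win, 0 = loss

    best_w, best_ll = 0.0, -np.inf
    for w in np.linspace(0.0, 1.0, 101):
        p = np.clip(w * model_probs + (1 - w) * market_probs, 1e-9, 1 - 1e-9)
        ll = np.sum(outcomes * np.log(p) + (1 - outcomes) * np.log(1 - p))
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w

# Hypothetical history of five bets, for illustration only.
model = [0.55, 0.58, 0.52, 0.60, 0.54]
market = [0.50, 0.55, 0.50, 0.57, 0.52]
results = [1, 0, 1, 1, 0]
print(f"Estimated regression weight: {estimate_regression_weight(model, market, results):.2f}")
```

In practice you would need far more than five bets for an estimate like this to mean anything.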

Note that because bet size under Kelly is directly proportional to the size of the edge, the two methods above are mathematically equivalent. The formula to convert between them is

Regression weight = (Kelly multiple * model prob + (1 – Kelly multiple) * market prob * (1 + bookmaker’s margin) – market prob) / (model prob – market prob)

In our example, the bookmaker’s margin is 1 – 0.5 * (205/105) = 2.4%. So, a bet size of 1/4 Kelly would be equivalent to a regression weight of (0.25 * 0.55 + (0.75 * 0.50 * 1.024) – 0.50) / (0.55 – 0.50) = 0.429, meaning the equivalent regression would be 0.429 x model probability + 0.571 x market probability.
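
The conversion is easy to check numerically (the function name below is mine, not part of any standard library):

```python
def kelly_to_regression_weight(kelly_multiple: float, model_prob: float,
                               market_prob: float, margin: float) -> float:
    """Equivalent regression-to-market weight for a given fractional-Kelly multiple."""
    blended = (kelly_multiple * model_prob
               + (1 - kelly_multiple) * market_prob * (1 + margin))
    return (blended - market_prob) / (model_prob - market_prob)

margin = 1 - 0.5 * (205 / 105)                 # roughly 2.4% at -105 / -105
w = kelly_to_regression_weight(0.25, 0.55, 0.50, margin)
print(f"1/4 Kelly is equivalent to a regression weight of {w:.3f}")   # ~0.429
```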

The “regression to market” approach has clear advantages, but it also has some disadvantages. The proper weights may change over time as the market evolves – this is true both on a micro level (from opening line to the time you evaluate your bet to the closing line) and on a macro level (as individuals enter and exit the market).

Also, the logic is a bit of a tightrope walk – it’s based on the idea that the market is efficient, but in a fully efficient market it’s impossible to even have an edge to begin with. This exact criticism can be applied to the idea of measuring one’s edge using “closing line value” (CLV) – that is, measuring a model’s success by how correlated it is with the movement from the line at the time the bet is placed to the closing line.

This is a good measure of success if, and only if, the market is reacting to the same “signal” as your model, just later in time. If you find a betting angle that nobody else in the world has, there is no reason why the market would catch up to you – you may have a great deal of positive expected value, but your CLV will hover around zero.

Source: pinnacle.com
