I ended an earlier post by suggesting a new definition of a probability distribution. I glossed over the bulk of the math in that post, so I’ll attempt to remedy that now.
Definition: Let $\Sigma$ be a $\sigma$-algebra. A neoprobability is a ray within $\mathcal{M}(\Sigma)$, the space of measures over $\Sigma$.
Ray?
A ray is a semi-infinite line segment emanating from the origin of a vector space (the origin itself excluded). The reason they are called “rays” is self-evident.
We can also think of rays in terms of symmetry by considering the group of positive real numbers with group product given by multiplication, $(\mathbb{R}^+, \times)$. For any real vector space $V$, we can define the group action of $\mathbb{R}^+$ on $V$ by $c \cdot v = cv$, where $cv$ is just the rescaling of $v$ by $c$. That this is a group action is basically a bullet point within the definition of a vector space. A ray is nothing but an element of the quotient space $V_0 / \mathbb{R}^+$, where $V_0 = V \setminus \{0\}$.
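The equivalence-class view is easy to make concrete in code. Here is a minimal sketch (the `same_ray` helper and its tolerance are my own, purely illustrative): two nonzero vectors represent the same ray exactly when one is a *positive* scalar multiple of the other.

```python
import numpy as np

def same_ray(u, v, tol=1e-12):
    """Check whether two nonzero vectors lie on the same ray,
    i.e. whether v == c * u for some scalar c > 0."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    if not u.any() or not v.any():
        raise ValueError("the zero vector lies on no ray")
    # Candidate scale factor, read off the largest-magnitude coordinate of u.
    i = np.argmax(np.abs(u))
    c = v[i] / u[i]
    return bool(c > 0 and np.allclose(v, c * u, atol=tol))

print(same_ray([1.0, 2.0], [3.0, 6.0]))    # True: rescaling by 3
print(same_ray([1.0, 2.0], [-1.0, -2.0]))  # False: c = -1 is not positive
```

Note that negating a vector lands you on a *different* ray; the quotient is by $\mathbb{R}^+$, not by all nonzero scalars.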
In our case, the vector space under consideration is the space of measures $\mathcal{M}(\Sigma)$ over some $\sigma$-algebra $\Sigma$. We’ll denote the space of rays by $\mathbb{P}(\Sigma)$, with the quotient projection $\pi: \mathcal{M}(\Sigma) \to \mathbb{P}(\Sigma)$.^{1}
You lost me at quotient space
It probably seems needlessly abstract to define rays as elements of a quotient space, but let me bring it back home. We were talking about probability a moment ago, and the notion of rescaling comes up a lot there. Recall, a probability is a measure of unit mass. Given a finite measure $\mu$ over some $\sigma$-algebra $\Sigma$ on a set $X$, we can define a probability $P = \mu / \mu(X)$. Note that the measures contained in the ray $\pi(\mu)$ all map to the same probability distribution. In other words, the unit-mass measures embed into the ray space of measures, $\mathbb{P}(\Sigma)$. This is how our new “ray-based” theory of probability corresponds to the classical “unit-measure-based” theory of probability. However, this embedding is not surjective. It is only injective, which is to say that the “ray-based” theory is more general.
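To make the correspondence concrete, here is a toy sketch over a finite sample space (representing a measure as a dict of masses is just an illustration, not part of the theory): every measure on the same ray normalizes to the same classical probability distribution.

```python
def normalize(mu):
    """Map a finite measure (dict: outcome -> nonnegative mass) to the
    classical probability distribution shared by its whole ray."""
    total = sum(mu.values())          # mu(X); must be finite and positive
    if total == 0:
        raise ValueError("the zero measure has no associated probability")
    return {x: m / total for x, m in mu.items()}

mu = {"a": 1.0, "b": 3.0}
# Every measure on the ray {c * mu : c > 0} normalizes identically:
print(normalize(mu))                                   # {'a': 0.25, 'b': 0.75}
print(normalize({x: 10 * m for x, m in mu.items()}))   # {'a': 0.25, 'b': 0.75}
```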
Meh. So what?
You might have missed it, but there is a big restriction placed on the sort of measures which can be normalized. In particular, the measure must be finite ($\mu(X) < \infty$). There are many measures, such as the uniform measure on $\mathbb{R}$, which statisticians use all the time but which are not finite. For example, when we do least-squares linear regression, we are (implicitly) assuming a uniform prior over the coefficients. It’s disturbing that these infinite measures are occasionally treated as if they were probability distributions, yet are prohibited from the classical theory. I think this means that we should have a more inclusive theory.^{2} Using rays instead of unit-mass measures serves as a solution, because the ray $\pi(\mu)$ is well defined even when $\mu$ is not finite.
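A tiny sketch of what survives when normalization fails. The uniform measure on $\mathbb{R}$ assigns mass $b - a$ to an interval $[a, b]$; its total mass is infinite, so there is no classical normalization, but the *ray* still carries real information, because ratios of masses are invariant under rescaling:

```python
def uniform_mass(a, b):
    """Mass the uniform measure on R assigns to the interval [a, b]."""
    return b - a

# mu(R) is infinite, so mu / mu(R) is undefined: no classical probability.
# But any statement invariant under rescaling still makes sense, e.g.
# "[0, 2] carries twice the mass of [5, 6]" holds for every member of the ray.
print(uniform_mass(0, 2) / uniform_mass(5, 6))  # 2.0
```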
Previous work
After a little digging, it seems that other people have thought along similar lines. I noticed it pop up in this blog post, which cites this recent preprint, which in turn cites both Rényi and Kolmogorov… (yeah, that Kolmogorov). Sorry to name-drop like that, but since this idea may seem a tad screwy, I felt compelled to note that other people besides me have been drawn to this line of thought.
I also asked a friend of mine for his thoughts, a professor of probability theory and statistics at Princeton, and he said that this redefinition did not feel so radical. In fact, he felt it was implicitly adopted in certain scenarios (which we will get to in this post).
Conditional probability using the classical definition
I mentioned this in my earlier post, but didn’t really go through the details. Let $\Sigma$ be a $\sigma$-algebra over a set $X$ and let $B \in \Sigma$ be a measurable set. Then $\Sigma_B := \{A \cap B : A \in \Sigma\}$ is a sub-$\sigma$-algebra of $\Sigma$.
If we restrict a unit-mass measure $P$ (i.e. a classical probability distribution) to the $\sigma$-algebra $\Sigma_B$, the resulting restricted measure has mass $P(B)$, which is generally not unit mass, and is therefore not a probability measure! You would need to normalize by $P(B)$ in order to obtain the conditional probability $P(\cdot \mid B)$. Therefore, the classical definition of a probability distribution does not transform properly under restrictions.
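Here is that failure, and its classical repair, in a toy discrete setting (the dict representation is illustrative only): restriction destroys unit mass, and renormalizing by the restricted mass recovers the familiar conditional probability.

```python
def restrict(P, B):
    """Restrict a measure to the event B: keep only the mass inside B."""
    return {x: m for x, m in P.items() if x in B}

def condition(P, B):
    """Classical conditional probability: restrict, then renormalize."""
    restricted = restrict(P, B)
    mass = sum(restricted.values())   # P(B), generally < 1
    return {x: m / mass for x, m in restricted.items()}

P = {"a": 0.5, "b": 0.25, "c": 0.25}   # a unit-mass measure
B = {"b", "c"}
print(sum(restrict(P, B).values()))    # 0.5 -- no longer unit mass!
print(condition(P, B))                 # {'b': 0.5, 'c': 0.5}
```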
Transforming properly is important
An entity “transforms properly” if there is a set of morphisms which allows one to form a category. Apparently, defining probability distributions as unit-mass measures prohibits us from forming a category if we wish to do things like restrict to a subset. That’s f’d up.
I know this might not mean much to many (most?) statisticians and probabilists, but virtually every other subdiscipline of mathematics studies an entity which is contained in a category.
I can’t describe why this is important very quickly, and I will probably need another post to do the point justice. In the meantime, here are two lists. First, a non-exhaustive list of major branches of mathematics where the central entity transforms properly:
- Topology (topological spaces via continuous maps)
- Differential Geometry (smooth manifolds via smooth maps)
- Analysis (function spaces via continuous maps)
- Logic and set theory (Boolean algebras via maps between sets)
- Algebraic geometry (varieties via regular maps)
- Algebraic anything, really (algebraic things via homomorphisms)
- Group theory (groups via group homomorphisms)
- Linear Algebra (vector spaces via linear maps)
- Category theory (functors via natural transformations)
- Measure theory (measure spaces via measurable maps)
Here is an exhaustive list of major mathematical subdisciplines which lack a notion of “morphism”:
- applied mathematics^{3}
- my 3-year-old’s counting book
- whatever this guy does^{4}
- Kabbalah
- probability theory (probability spaces)
- and of course, Time Cube
Probability theory deserves a better home.
Conditional probability with rays
As mentioned, measure spaces form a category. Specifically, we can consider the category whose objects are $\sigma$-algebras and whose morphisms are … well, I’d like to save that for part III, but certainly measurable maps are among the morphisms. The mapping $\Sigma \mapsto \mathcal{M}(\Sigma)$, which sends a $\sigma$-algebra to the cone of measures over that algebra, yields a functor to a new category where the objects are spaces of measures. We can (quite literally) project that latter category onto the relevant ray space. This is because the map $\pi$ yields a functor.
How is this related to conditional probability? Well, we can restrict any measure $\mu$ to $\Sigma_B$ by pullback to get a measure $\mu|_B$. This is just fancy notation: $\mu|_B(A) = \mu(A \cap B)$ for arbitrary $A \in \Sigma$. Additionally, we can project each of the measures $\mu$ and $\mu|_B$ to their respective ray spaces. So now we can draw a commutative diagram (because $\pi$ yields a functor):
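In the notation above, the square looks like this (restriction along the top, ray projection down the sides; I am sketching it with plain arrows rather than any particular diagram package):

```latex
\begin{array}{ccc}
\mathcal{M}(\Sigma) & \xrightarrow{\;\cdot|_B\;} & \mathcal{M}(\Sigma_B) \\
\big\downarrow{\scriptstyle\pi} & & \big\downarrow{\scriptstyle\pi} \\
\mathbb{P}(\Sigma) & \xrightarrow{\phantom{\;\cdot|_B\;}} & \mathbb{P}(\Sigma_B)
\end{array}
```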
and, spoiler, conditional probability will arise from the bottom arrow in the diagram. If $\mu$ is finite, we can identify the ray $\pi(\mu)$ with the (classical) probability measure “$A \mapsto \mu(A)/\mu(X)$” for each $A \in \Sigma$. Similarly, if $\mu|_B$ is finite, then the ray of $\mu|_B$ would be identified with “$A \mapsto \mu(A \cap B)/\mu(B)$”… That last expression is nothing but the conditional probability $\mu(A \mid B)$… So that’s telling.
However, this way of thinking about Bayesianism is more robust. If $\mu$ is not finite, the classical theory explodes, while the ray-based theory is honey badger.
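A quick sketch of that robustness (the density-function representation is my own illustration): take the counting measure on the integers, which has infinite total mass and hence no classical probability to start from, yet conditioning on a finite event only ever needs the ray.

```python
def condition_on(density, B):
    """Condition a measure (given by its density on a countable space) on a
    finite event B: restrict to B, then normalize the restricted measure.
    Only the ray of the original measure matters -- its total mass may be
    infinite."""
    masses = {x: density(x) for x in B}
    total = sum(masses.values())   # mu(B), finite because B is finite
    return {x: m / total for x, m in masses.items()}

# Counting measure on the integers: mu(Z) is infinite, so the classical
# theory has nothing to normalize...
counting = lambda x: 1.0
# ...yet conditioning on a finite event is perfectly well defined:
print(condition_on(counting, {3, 5, 8}))  # uniform on {3, 5, 8}
```

Any positive rescaling of `counting` gives the same answer, which is exactly the statement that conditioning factors through the ray.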
Final thoughts
This has been a long post, just to derive the classical notion of conditional probability. Bad news though. I don’t think this is the best way to think about conditional probability. Just like I want to do away with the classical definition of probability, I’d also like to do away with the classical definition of conditional probability. I don’t want to think of it simply as a map from a $\sigma$-algebra to a probability measure. Instead I’d like to think of conditional probabilities as arrows within a category. But this post is too long, so I’ll save that for part III.
Footnotes:

Technically, the domain of $\pi$ is not all of $\mathcal{M}(\Sigma)$, since there is no ray associated to the zero measure. Perhaps that’s an indication we are on the right track (there is no probability distribution associated with the zero measure, either). ↩

Admittedly, not all measures are reasonable. For example, there is a measure over the reals which serves as a crazy prior for determining the mean of a Gaussian random variable, because it puts you in a situation where any finite number of samples gives you basically no knowledge (see Craig Gidney’s post). However, throwing out all improper priors because a subset of them are pernicious in a subset of situations strikes me as throwing the baby out with the bathwater. In that same post, two other improper priors are used as examples of reasonable priors for this scenario. ↩

Applied mathematics gets a pass with respect to the “I transform properly” certification in my mind. There is no central entity, and the field is defined by its borderline mathematical status. ↩

He looks like this. Here is a video you should not watch. ↩