Short story:
Given a $k$-level nominal input variable in a Bayesian regression model, let $\alpha_j$ be the effect of level $j$. To achieve a symmetric prior on the effects, with a prior mean of $0$ and prior variance of $\sigma^2$ for each $\alpha_j$, the prior on the vector $\beta$ of regression coefficients should have the form
$$\beta \sim \mathrm{N}\!\left(0,\; X^{-1} S \left(X^{-1}\right)^{\top}\right)$$
where
- $S$ is a square matrix of $k - 1$ rows/columns: $S_{ij} = \sigma^2$ if $i = j$, and $S_{ij} = -\sigma^2/(k-1)$ otherwise.
- $X$ is a square matrix of $k - 1$ rows/columns giving the level encodings: the $j$-th row of $X$ is $x_j$.
- $x_j$ is the (row-vector) encoding of level $j$, for $1 \le j \le k - 1$.
- The encoding of level $k$ is $x_k = -\sum_{j=1}^{k-1} x_j$.
- No row of $X$ is a linear combination of the other rows.
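As a concrete illustration (my own addition, not part of the write-up), here is a minimal NumPy sketch of this recipe; the function names and the $k = 4$ example values are assumptions of mine.

```python
import numpy as np

def effects_cov(k, sigma):
    """Covariance S of the first k-1 effects: sigma^2 on the diagonal,
    -sigma^2 / (k-1) off the diagonal."""
    S = np.full((k - 1, k - 1), -sigma**2 / (k - 1))
    np.fill_diagonal(S, sigma**2)
    return S

def beta_prior_cov(X, sigma):
    """Prior covariance X^{-1} S X^{-T} for the regression coefficients,
    given the (k-1) x (k-1) encoding matrix X with rows x_1, ..., x_{k-1}."""
    k = X.shape[0] + 1
    X_inv = np.linalg.inv(X)
    return X_inv @ effects_cov(k, sigma) @ X_inv.T

# Example: k = 4 with effects coding, where X is the identity matrix,
# so the prior covariance for beta is just S itself.
k, sigma = 4, 1.0
print(beta_prior_cov(np.eye(k - 1), sigma))
```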
Long story: (Full write-up)
Effects
Suppose that we have a $k$-level nominal input variable used in a Bayesian regression analysis, with each level $j$ encoded as a row vector
$$x_j = \left(x_{j,1},\, x_{j,2},\, \ldots\right).$$
Let $\beta_i$ be the regression coefficient corresponding to the $i$-th element of the encoding, so that a level of $j$ contributes the term
$$\alpha_j = x_j \beta = \sum_i x_{j,i}\, \beta_i$$
to the overall regression sum. We call $\alpha_j$ the effect of level $j$.
Any prior on $\beta$ defines a corresponding joint prior on the effects via the above equation. Our goal is to construct an appropriate prior distribution for $\beta$ using as our only prior information some notion of how large any of the effects may plausibly be: we want the prior mean for each $\alpha_j$ to be $0$, and the prior variance to be some given value $\sigma^2$. Since this information makes no distinction between the levels, the joint prior for the effects should be symmetric: reordering the levels should leave this joint prior unchanged.
We would like the effects to indicate the differences between levels, and not include any constant (independent of level) contribution to the overall regression sum; thus we require that
$$\sum_{j=1}^{k} \alpha_j = 0.$$
This implies that the joint distribution for the effects is degenerate. In the remainder of this note we therefore define the vector $\alpha$ to be the first $k - 1$ effects,
$$\alpha = \left(\alpha_1, \ldots, \alpha_{k-1}\right)^{\top},$$
and use
$$\alpha_k = -\sum_{j=1}^{k-1} \alpha_j.$$
Encodings
Using $\alpha_j = x_j \beta$, the constraint $\alpha_k = -\sum_{j=1}^{k-1} \alpha_j$ gives
$$x_k \beta = -\sum_{j=1}^{k-1} x_j \beta,$$
and so, assuming a non-degenerate (full-dimensional) prior over $\beta$, the level encodings must satisfy
$$x_k = -\sum_{j=1}^{k-1} x_j.$$
We therefore define the matrix $X$ to be the first $k - 1$ row vectors $x_j$ stacked,
$$X = \begin{pmatrix} x_1 \\ \vdots \\ x_{k-1} \end{pmatrix},$$
and use
$$x_k = -\sum_{j=1}^{k-1} x_j.$$
Our equation defining the effects then becomes
$$\alpha = X \beta,$$
and so, to have a one-to-one correspondence between effects vectors $\alpha$ and regression-coefficient vectors $\beta$, we require that $X$ be invertible. That is,
- $X$ must be a square matrix (we require the encodings to have length $k - 1$);
- no level encoding $x_j$, $1 \le j \le k - 1$, may be expressible as a linear combination of the remaining level encodings (excluding $x_k$).
One example of an encoding satisfying these requirements is effects coding:
$$x_{j,i} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases} \qquad 1 \le j \le k - 1,$$
with $x_k = (-1, \ldots, -1)$.
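If it helps, here is a quick NumPy check (my own illustration, not from the write-up) that effects coding meets the two requirements above; the $k = 5$ value is arbitrary.

```python
import numpy as np

def effects_coding(k):
    """Full k x (k-1) effects-coding matrix: row j is the j-th unit vector
    for j < k, and the last row (level k) is all -1's."""
    return np.vstack([np.eye(k - 1), -np.ones(k - 1)])

E = effects_coding(5)
X = E[:-1]                                     # encodings of levels 1, ..., k-1
assert X.shape[0] == X.shape[1]                # X is square
assert np.linalg.matrix_rank(X) == X.shape[0]  # no row is a combination of the others
assert np.allclose(E[-1], -X.sum(axis=0))      # x_k = -(x_1 + ... + x_{k-1})
```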
An obvious prior that doesn’t work
With effects coding the obvious symmetric prior for $\beta$,
$$\beta_i \sim \mathrm{N}\!\left(0, \sigma^2\right) \quad \text{independently},$$
leads to a very asymmetric prior for the effects: for $j < k$ we have
$$\alpha_j \sim \mathrm{N}\!\left(0, \sigma^2\right)$$
independently ($\mathrm{Cov}(\alpha_i, \alpha_j) = 0$ for distinct $i, j < k$), but for $\alpha_k$ we have
$$\alpha_k \sim \mathrm{N}\!\left(0,\, (k-1)\,\sigma^2\right),$$
and for $j < k$ the covariance between $\alpha_j$ and $\alpha_k$ is $-\sigma^2$.
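A small numerical check of this asymmetry (my own sketch; the $k = 4$ value is arbitrary):

```python
import numpy as np

k, sigma = 4, 1.0
E = np.vstack([np.eye(k - 1), -np.ones(k - 1)])   # effects coding, all k rows

# With beta_i ~ N(0, sigma^2) i.i.d., Cov(beta) = sigma^2 I, so the
# covariance matrix of the k effects alpha = E beta is sigma^2 E E^T.
cov_effects = sigma**2 * E @ E.T
print(np.diag(cov_effects))   # [1. 1. 1. 3.]: Var(alpha_k) = (k-1) sigma^2
print(cov_effects[0, -1])     # -1.0: Cov(alpha_j, alpha_k) = -sigma^2
```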
Solution strategy
We find an appropriate prior for $\beta$ by first constructing a symmetric prior for the effects themselves, then solving for the corresponding prior on $\beta$. The prior we derive for $\alpha$ turns out to be a multivariate normal with mean vector $0$ and a covariance matrix $S$ defined later. Since
$$\beta = X^{-1} \alpha,$$
the required prior for $\beta$ is
$$\beta \sim \mathrm{N}\!\left(0,\; X^{-1} S \left(X^{-1}\right)^{\top}\right).$$
For the effects coding, $X$ is just the identity matrix (remember that $X$ only has $k - 1$ rows, omitting $x_k$), and so the prior covariance matrix for $\beta$ is just $S$ itself.
We seek to construct the most diffuse, least informative prior distribution for $\alpha$ satisfying
$$\mathrm{E}\!\left[\alpha_j\right] = 0 \quad \text{and} \quad \mathrm{E}\!\left[\alpha_j^2\right] = \sigma^2$$
for all $j$, $1 \le j \le k$. We do so using the method of maximum entropy: our prior will be the maximum-entropy distribution satisfying the given constraints. (See references 1, 2, and 3.)
The entropy of a distribution is a measure of how little information the distribution provides about the variable(s) in question: the greater the entropy, the greater the uncertainty and the less informative the distribution. The entropy of a distribution with pdf $p(x)$ is defined as
$$H(p) = -\int p(x) \log \frac{p(x)}{m(x)} \,\mathrm{d}x,$$
where $m(x)$ is a reference measure chosen to coincide with some notion of maximal ignorance. Note that the entropy is invariant under a change of variables, because the density and the reference measure transform in the same way.
Form of the maximum-entropy solution
In general, the maximum-entropy distribution satisfying a set of constraints
$$\mathrm{E}\!\left[f_i(x)\right] = c_i, \qquad 1 \le i \le n,$$
has a pdf of the form
$$p(x) = \frac{1}{Z}\, m(x) \exp\!\left(\sum_{i=1}^{n} \lambda_i f_i(x)\right)$$
for some $n$-vector $\lambda$ of parameter values and corresponding normalizing constant $Z$. Applying this to the problem at hand, and using the uniform measure $m(x) = 1$, we find that the pdf for the maximum-entropy distribution on $\alpha$ having $\mathrm{E}[\alpha_j] = 0$ and $\mathrm{E}[\alpha_j^2] = \sigma^2$ for all $j$ is
$$p(\alpha) = \frac{1}{Z} \exp\!\left(\sum_{j=1}^{k} \lambda_j \alpha_j + \sum_{j=1}^{k} \mu_j \alpha_j^2\right) \qquad (1)$$
for some choice of parameters $\lambda_j$ and $\mu_j$, and corresponding normalizing constant $Z$. (Here $\alpha_k$ is shorthand for $-\sum_{j=1}^{k-1} \alpha_j$, so this is a density over the $(k-1)$-vector $\alpha$.)
Rather than directly solving for the parameters $\lambda_j$ and $\mu_j$, we note the following:
- Since the exponent in (1) is quadratic in $\alpha$ we can complete the square to re-express $p(\alpha)$ as a multivariate normal density with some mean vector $\nu$ and covariance matrix $S$.
- Since $\mathrm{E}[\alpha_j] = 0$ we know that $\nu = 0$.
- Our constraints are symmetric: if $\alpha'$ is any vector obtained from $\alpha$ by reordering its elements, the constraints on $\alpha'$ are equivalent to identical constraints on $\alpha$. Therefore the maximum-entropy distribution for $\alpha$ is also symmetric: the distributions for $\alpha'$ and $\alpha$ are identical. That is, $S$ must remain unchanged after any permutation of its rows and columns.
This last observation implies that
- the diagonal elements of $S$ are all the same; and
- the off-diagonal elements of $S$ are all the same.
Combining this with the requirement that $\mathrm{E}[\alpha_j^2] = \sigma^2$ for all $j \le k - 1$, we see that we must have
$$S_{ij} = \begin{cases} \sigma^2 & \text{if } i = j, \\ c\, \sigma^2 & \text{otherwise,} \end{cases}$$
for some value $c$.
The full write-up shows that a solution of this form can be written in the maximum-entropy form of equation (1).
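A quick sketch of why this is so (my own condensation; the full write-up has the details): since $S$ has constant diagonal and constant off-diagonal, it has the form $S = p I + q J$ (with $J$ the all-ones matrix), and its inverse has the same form, say $S^{-1} = a I + b J$. Then, using $\alpha_k = -\sum_{j=1}^{k-1} \alpha_j$,
$$-\tfrac{1}{2} \alpha^{\top} S^{-1} \alpha
= -\tfrac{a}{2} \sum_{j=1}^{k-1} \alpha_j^2 - \tfrac{b}{2} \Big(\sum_{j=1}^{k-1} \alpha_j\Big)^{2}
= \sum_{j=1}^{k} \mu_j \alpha_j^2,$$
with $\mu_j = -a/2$ for $j < k$, $\mu_k = -b/2$, and all $\lambda_j = 0$, which is exactly the exponent in equation (1).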
Solving for the common covariance
At this point we have satisfied all of the constraints except for $\mathrm{E}[\alpha_k^2] = \sigma^2$, and we choose $c$ accordingly. We find that
$$\mathrm{Var}(\alpha_k) = \mathrm{Var}\!\left(\sum_{j=1}^{k-1} \alpha_j\right) = \left[(k-1) + (k-1)(k-2)\,c\right] \sigma^2,$$
and so we require
$$(k-1) + (k-1)(k-2)\,c = 1.$$
A bit of algebra then gives
$$c = -\frac{1}{k-1}.$$
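As a sanity check (my own sketch, not part of the write-up), we can confirm numerically that this choice of $c$ gives $\mathrm{Var}(\alpha_k) = \sigma^2$:

```python
import numpy as np

k, sigma = 6, 2.0
c = -1.0 / (k - 1)

# Covariance S of the first k-1 effects: sigma^2 on the diagonal, c * sigma^2 off it.
S = sigma**2 * ((1 - c) * np.eye(k - 1) + c * np.ones((k - 1, k - 1)))

# alpha_k = -(alpha_1 + ... + alpha_{k-1}), so Var(alpha_k) = 1^T S 1.
ones = np.ones(k - 1)
print(ones @ S @ ones)   # prints sigma**2 (= 4.0), as required
```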
Final notes
The full write-up verifies that
- $S$ as we have defined it is positive definite, and hence a legitimate covariance matrix; and
- this solution is symmetric: $\mathrm{Cov}(\alpha_i, \alpha_j) = -\sigma^2/(k-1)$ for any $i \ne j$, even when one of $i$ or $j$ is $k$ (both points are illustrated numerically below).
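A minimal numerical check of both points (my own sketch; the $k = 5$ and $\sigma = 1.5$ values are arbitrary):

```python
import numpy as np

k, sigma = 5, 1.5
S = np.full((k - 1, k - 1), -sigma**2 / (k - 1))
np.fill_diagonal(S, sigma**2)

# Positive definite: all eigenvalues strictly positive.
print(np.linalg.eigvalsh(S).min() > 0)   # True

# Extend to all k effects via alpha_k = -sum(alpha_j): the full k x k covariance
# A S A^T (A stacks the identity and a row of -1's) has sigma^2 on the diagonal
# and -sigma^2/(k-1) everywhere off the diagonal.
A = np.vstack([np.eye(k - 1), -np.ones(k - 1)])
full_cov = A @ S @ A.T
off_diag = full_cov[~np.eye(k, dtype=bool)]
print(np.allclose(np.diag(full_cov), sigma**2))     # True
print(np.allclose(off_diag, -sigma**2 / (k - 1)))   # True
```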
I first derived this prior circa 2005, but did not publish it. Lenk and Orme (reference 4) independently proposed an “effects prior” using the same covariance matrix described here, in the context of a hierarchical regression model. Their derivation assumes an effects coding and proceeds from different premises than those used herein.
References
- Jaynes, Edwin T. (1957). “Information Theory and Statistical Mechanics,” Physical Review, Series II 106 (4): 620–630.
- Jaynes, Edwin T. (1957). “Information Theory and Statistical Mechanics II,” Physical Review, Series II 108 (2): 171–190.
- Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science, Cambridge University Press, pp. 351–355.
- Lenk, Peter and Bryan Orme (2009). “The Value of Informative Priors in Bayesian Inference with Sparse Data,” Journal of Marketing Research 46 (6): 832–845.