Steve Omohundro tells of the following strange result.
In three dimensions there is a spherical Gaussian distribution with standard deviation 1.
In other words the probability density is
(2π)^(−3/2) e^(−((x−x')² + (y−y')² + (z−z')²)/2)
about some unknown point <x', y', z'>.
We are given one unbiased sample from the distribution:
<xₛ, yₛ, zₛ>.
What is the best estimate for <x', y', z'>?
Intuition insists that the sample itself is the only reasonable estimate.
Steve quotes a result asserting that the best estimate is on the line segment between the sample and the "origin", at some fixed proportion of
the distance between the two.
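To see how such a result can arise, here is a small numerical sketch of my own. It assumes a particular prior for the unknown mean — a spherical Gaussian of standard deviation σ0 centered at the origin — and the values of σ0 and the sample are arbitrary illustrative choices, not part of the quoted result.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma0 = 2.0                        # assumed prior std dev for the unknown mean (illustrative)
s = np.array([1.0, 2.0, 2.0])       # the single observed sample (illustrative)

# Draw many candidate means from the prior, then weight each candidate by the
# likelihood of the observed sample under a unit-variance spherical Gaussian
# centered there (self-normalized importance sampling).
cand = rng.normal(0.0, sigma0, size=(1_000_000, 3))
w = np.exp(-0.5 * np.sum((s - cand) ** 2, axis=1))
post_mean = (w[:, None] * cand).sum(axis=0) / w.sum()

print(post_mean)                          # lies between the origin and s
print(s * sigma0**2 / (sigma0**2 + 1))    # exact posterior mean for this prior
```

With this prior the posterior mean lands at the fixed fraction σ0²/(σ0² + 1) of the way from the origin to the sample, which has exactly the shape of the quoted claim, though under an assumed prior rather than the implicit uniform one.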
Strictly speaking the problem is ill-posed, for it implicitly assumes a uniform prior distribution for the unknown mean, and no such uniform distribution over an infinite space exists.
It is clear that the maximum likelihood estimate is indeed the sample. From the Bayesian perspective I take the estimate we seek to be the point whose coordinates are the respective expected values of x', y' and z' under the posterior distribution in light of the one sample.
Bayes requires us to pick a prior distribution. There being no uniform distribution over all of R³, I will explore the limiting cases for two prior distributions:
At this point I will switch to the n-dimensional case, as the notation is briefer and we want to generalize anyway. Let r² = Σ xⱼ², where j ranges over the n dimensions.
The Bayesian perspective transforms and simplifies the word problem to the following: we have a random variable C with a prior distribution, here centrally symmetric about the origin. We also have an independent random variable U, normally distributed with σ = 1 about the origin. We learn a sample value of U + C and want the centroid of the posterior distribution of C in light of that sample; i.e., we want to estimate C from one sample of U + C.
It seems that I need to do a 1D case first.
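Before the algebra, here is a crude numerical preview of that 1-D case, again of my own devising: the Gaussian prior for C, its σ of 2, and the observed value are all illustrative assumptions. The idea is to simulate the model forward many times, keep only the runs in which U + C lands near the observed sample, and average the corresponding values of C.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma0 = 2.0          # assumed prior std dev for C (illustrative)
s_obs = 1.5           # the one observed value of U + C (illustrative)

# Forward-simulate the model many times: C from the prior, U standard normal.
c = rng.normal(0.0, sigma0, size=5_000_000)
u = rng.normal(0.0, 1.0, size=c.size)
s = u + c

# Crude conditioning: keep the draws whose simulated sample lands near the
# observed one and average their C values; this approximates E[C | U+C = s_obs].
near = np.abs(s - s_obs) < 0.01
print(c[near].mean())
print(s_obs * sigma0**2 / (sigma0**2 + 1))   # exact value for this prior
```

The two printed numbers should agree to a couple of decimal places.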
Call the densities of U and C u(x) and c(x), and let s be the sample of U + C.
What posterior distribution c' should we ascribe to C in light of this sample?
It seems clear that c'(x) = c(x)u(s−x) / ∫ c(z)u(s−z) dz
and that the centroid of this distribution is at m = ∫ z c(z)u(s−z) dz / ∫ c(z)u(s−z) dz
and this centroid is the posterior expected value of C.
If we recast x, s, z and m as vectors, then the formulas still hold.
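As a sanity check on these formulas, here is a direct numerical quadrature of the two integrals, once more assuming a Gaussian prior for C with the same illustrative σ0 = 2 and s = 1.5; for that prior the centroid should come out at s·σ0²/(σ0² + 1) = 1.2.

```python
import numpy as np

sigma0, s = 2.0, 1.5    # assumed prior std dev and observed sample (illustrative)

def u(x):               # density of U: standard normal
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def c(x):               # assumed Gaussian prior density of C
    return np.exp(-0.5 * (x / sigma0) ** 2) / (sigma0 * np.sqrt(2 * np.pi))

z = np.linspace(-30.0, 30.0, 200_001)
dz = z[1] - z[0]
norm = np.sum(c(z) * u(s - z)) * dz            # ∫ c(z) u(s−z) dz
m = np.sum(z * c(z) * u(s - z)) * dz / norm    # centroid of the posterior c'

print(m)                                       # ≈ 1.2
print(s * sigma0**2 / (sigma0**2 + 1))         # closed form for this prior
```

For a prior that factors over the coordinates, as this one does, the vector case is just this computation applied coordinate by coordinate.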
We now do the case of a Gaussian prior distribution.