课程: Probability Foundations for Data Science
MAP applications
- [Instructor] In this video, you will review a few examples of how you can use Maximum A Posteriori estimation to estimate parameters for some probability distributions. Let's start with an example for the normal distribution. Suppose you have a set of observations X1 to XN that you assume are drawn from a normal distribution with known variance sigma squared and an unknown expected value of mu. Let's estimate mu using MAP estimation. First, let's assume a normal prior distribution for mu. That is, mu is assumed to be normally distributed with mean mu zero and variance tau squared. Next, you'll write down the likelihood function for mu, which is given by the following equation: the probability of X given mu equals the product from I equal one to N of one divided by the square root of two multiplied by pi multiplied by sigma squared, multiplied by E to the negative of XI minus mu squared divided by two multiplied by sigma squared. Next, let's get the posterior distribution. Remember, the…
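As a companion to the narration, here is a minimal sketch of the MAP estimate this setup leads to. It assumes the standard closed-form result for a normal likelihood with known variance and a normal prior on mu: the MAP estimate is a precision-weighted average of the prior mean and the data. The function name `map_estimate_mu` and the example numbers are illustrative, not from the video.

```python
import numpy as np

def map_estimate_mu(x, sigma2, mu0, tau2):
    """MAP estimate of mu for a normal likelihood with known variance
    sigma2, under a normal prior N(mu0, tau2) on mu.

    Standard closed form: a precision-weighted average of the prior
    mean mu0 and the data (hedged: the video is cut off before this
    step, so the formula here is the textbook result, not a quote).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    # Posterior precision = prior precision + n * likelihood precision.
    post_precision = 1.0 / tau2 + n / sigma2
    # Posterior mean (= MAP estimate, since the posterior is normal).
    return (mu0 / tau2 + x.sum() / sigma2) / post_precision

# Example: data centered near 5, a diffuse prior centered at 0.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=200)
print(map_estimate_mu(x, sigma2=4.0, mu0=0.0, tau2=10.0))
```

With many observations the estimate is pulled close to the sample mean; with few observations, or a tight prior (small tau squared), it shrinks toward mu zero.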