11 Lecture 02 - 2019

11.1 Joint prior distribution

The joint prior distribution is the prior probability of the data and the parameters together, before any observation is made

11.2 Example: inflatable world

11.2.1 Design the model

p: water proportion

1-p: land proportion

11.2.2 Condition

Bayes updating: converts priors to posteriors

  • Data can be added all at once or one observation at a time; the result is the same
  • Each posterior becomes the prior for the next observation
  • Sample size is embodied in the posterior
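The claim that batch and one-at-a-time updating agree can be checked with a small grid computation (variable names are illustrative):

```r
# Grid of candidate water proportions
p_grid <- seq(0, 1, length.out = 1000)
prior  <- rep(1, 1000)                      # flat prior

# All data at once: 6 waters in 9 tosses
post_batch <- prior * dbinom(6, size = 9, prob = p_grid)
post_batch <- post_batch / sum(post_batch)

# One observation at a time: each posterior is the next prior
tosses   <- c(1, 0, 1, 1, 0, 1, 1, 0, 1)    # W = 1, L = 0
post_seq <- prior
for (w in tosses) {
  post_seq <- post_seq * dbinom(w, size = 1, prob = p_grid)
  post_seq <- post_seq / sum(post_seq)
}

all.equal(post_batch, post_seq)             # identical up to numerical tolerance
```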

11.2.3 Evaluate

Golem must be supervised

  • Did it malfunction?
  • Does the answer make sense?

11.2.4 Define parameters

N: number of tosses, fixed by the experimenter

W: count of water observations, modeled with a probability distribution, in this case a binomial distribution

Observed sequence: WLWWLWWLW (6 waters in 9 tosses)

dbinom(6, size = 9, prob = 0.5)

p: given a prior probability distribution, in this case uniform

11.2.5 Joint model

W ~ Binomial(N, p)

p ~ Uniform(0, 1)

(W is distributed binomially with probability p on each toss; p is uniform between 0 and 1)
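The joint model can be simulated directly by drawing p from its prior and then W conditional on p (a prior predictive simulation); a minimal sketch:

```r
set.seed(1)                                    # for reproducibility
N     <- 9                                     # number of tosses
p_sim <- runif(1e4, min = 0, max = 1)          # p ~ Uniform(0, 1)
W_sim <- rbinom(1e4, size = N, prob = p_sim)   # W ~ Binomial(N, p)
table(W_sim)   # before seeing data, any count from 0 to 9 is plausible
```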

11.2.6 Posterior

Posterior = (probability of the observed data * prior) / normalizing constant

(If the prior is uniform, it does not affect the shape of the posterior; a non-uniform prior would influence the shape)
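A quick grid check of this point (variable names are illustrative): with a flat prior the posterior has the same shape as the normalized likelihood, while a non-flat prior changes it.

```r
p_grid <- seq(0, 1, length.out = 1000)
lik    <- dbinom(6, size = 9, prob = p_grid)

flat_post <- lik * rep(1, 1000)
flat_post <- flat_post / sum(flat_post)
lik_norm  <- lik / sum(lik)
all.equal(flat_post, lik_norm)             # flat prior leaves the shape unchanged

# A non-flat prior does change the shape:
step_prior <- ifelse(p_grid < 0.5, 0, 1)   # "more than half water"
step_post  <- lik * step_prior
step_post  <- step_post / sum(step_post)   # zero below 0.5, renormalized above
```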

11.3 Grid approximation

Grid approximation: consider only a finite, discrete set of candidate parameter values

For example, 1000 values between 0 and 1

  1. Generate a sequence of candidate values: seq_sol <- seq(0, 1, length.out = 1000)
  2. Prior = uniform 1 across the sequence: prior <- rep(1, length(seq_sol))
  3. Probability of data = binomial: prob_data <- dbinom(6, size = 9, prob = seq_sol)
  4. Posterior numerator: posterior_num <- prior * prob_data
  5. Standardized posterior: posterior <- posterior_num / sum(posterior_num)
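The five steps assembled into one runnable script:

```r
seq_sol   <- seq(0, 1, length.out = 1000)         # 1. grid of candidate values
prior     <- rep(1, length(seq_sol))              # 2. flat prior
prob_data <- dbinom(6, size = 9, prob = seq_sol)  # 3. probability of the data
posterior_num <- prior * prob_data                # 4. unstandardized posterior
posterior <- posterior_num / sum(posterior_num)   # 5. standardize to sum to 1

seq_sol[which.max(posterior)]   # posterior mode, close to 6/9
```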

11.3.1 Sampling from the posterior

Approximate the posterior on a grid, then sample from it: samples <- sample(seq_sol, size = 1e4, replace = TRUE, prob = posterior)

Summarize

  • above/below some value
  • Percentile interval
  • Highest posterior density interval
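A sketch of sampling and the first two summaries; the grid posterior is rebuilt so the block stands alone, and the HPDI line assumes the rethinking package is available:

```r
# Grid posterior for 6 waters in 9 tosses, flat prior
p_grid    <- seq(0, 1, length.out = 1000)
posterior <- dbinom(6, size = 9, prob = p_grid)
posterior <- posterior / sum(posterior)
samples   <- sample(p_grid, size = 1e4, replace = TRUE, prob = posterior)

mean(samples < 0.5)               # posterior mass below some value
quantile(samples, c(0.1, 0.9))    # 80% percentile interval
# HPDI needs a helper; the rethinking package provides HPDI(samples, prob = 0.8)
```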

Predictive checks

  • rbinom(1e4, size = 9, prob = samples)
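A runnable version of the predictive check, reconstructing the grid posterior first (variable names are illustrative): each posterior sample of p simulates 9 new tosses, and the spread of predicted counts can be compared with the observed W = 6.

```r
p_grid    <- seq(0, 1, length.out = 1000)
posterior <- dbinom(6, size = 9, prob = p_grid)
posterior <- posterior / sum(posterior)
samples   <- sample(p_grid, size = 1e4, replace = TRUE, prob = posterior)

# Simulate 9 new tosses for each sampled p
w_sim <- rbinom(1e4, size = 9, prob = samples)
table(w_sim)    # distribution of predicted water counts
```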