Calibrating Interval Estimates with Binary Observations
A Polling Nightmare
\[ \begin{array}{r|rrrr|r} i & 1 & 2 & \dots & 625 & \bar{Y}_{625} \\ Y_i & 1 & 1 & \dots & 0 & 0.72 \\ \end{array} \]
\[ \begin{array}{r|rrrrrr|r} j & 1 & 2 & 3 & 4 & \dots & 7.23M & \bar{y}_{7.23M} \\ y_{j} & 1 & 1 & 1 & 0 & \dots & 1 & 0.70 \\ \end{array} \]
\[ \begin{array}{r|rr|rr|r|rr|r} \text{call} & 1 & & 2 & & \dots & 625 & & \\ \text{poll} & J_1 & Y_1 & J_2 & Y_2 & \dots & J_{625} & Y_{625} & \overline{Y}_{625} \\ \hline \color[RGB]{7,59,76}1 & \color[RGB]{7,59,76}869369 & \color[RGB]{7,59,76}1 & \color[RGB]{7,59,76}4428455 & \color[RGB]{7,59,76}1 & \color[RGB]{7,59,76}\dots & \color[RGB]{7,59,76}1268868 & \color[RGB]{7,59,76}1 & \color[RGB]{7,59,76}0.68 \\ \color[RGB]{239,71,111}2 & \color[RGB]{239,71,111}600481 & \color[RGB]{239,71,111}0 & \color[RGB]{239,71,111}6793745 & \color[RGB]{239,71,111}1 & \color[RGB]{239,71,111}\dots & \color[RGB]{239,71,111}1377933 & \color[RGB]{239,71,111}1 & \color[RGB]{239,71,111}0.71 \\ \color[RGB]{17,138,178}3 & \color[RGB]{17,138,178}3830847 & \color[RGB]{17,138,178}1 & \color[RGB]{17,138,178}5887416 & \color[RGB]{17,138,178}1 & \color[RGB]{17,138,178}\dots & \color[RGB]{17,138,178}4706637 & \color[RGB]{17,138,178}1 & \color[RGB]{17,138,178}0.70 \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ \color[RGB]{6,214,160}1M & \color[RGB]{6,214,160}1487507 & \color[RGB]{6,214,160}1 & \color[RGB]{6,214,160}393580 & \color[RGB]{6,214,160}1 & \color[RGB]{6,214,160}\dots & \color[RGB]{6,214,160}1247545 & \color[RGB]{6,214,160}0 & \color[RGB]{6,214,160}0.72 \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ \end{array} \]
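The nightmare above can be sketched in R. The population size and 70% support rate come from the tables; the number of repeated polls is shrunk from 1M to 1,000 to keep the sketch fast.

```r
# Simulate repeated polls: n = 625 calls sampled without replacement from a
# population of m voters, 70% of whom answer 1. (Numbers from the tables above;
# only 1,000 polls here rather than 1M, to keep the sketch fast.)
set.seed(1)
m <- 7230000
y <- rep(c(1, 0), times = c(round(0.7 * m), round(0.3 * m)))  # the population's responses
polls <- replicate(1000, mean(sample(y, 625)))                # one poll mean per replication
round(range(polls), 2)                                        # means scatter around 0.70
```

Every poll mean lands near 0.70, but almost never exactly on it — that's the nightmare.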
Sampling Distributions of Sums of Binary Random Variables
This’ll be a slog. I’ll explain why we’ve bothered when we get to the end.
\(j\) | name |
---|---|
\(1\) | Rush |
\(2\) | Mitt |
\(3\) | Al |
\(j\) | name | \(y_j\) |
---|---|---|
\(1\) | Rush | \(0\) |
\(2\) | Mitt | \(0\) |
\(3\) | Al | \(1\) |
\(p\) | \(J_1\) | \(Y_1\) |
---|---|---|
\(\frac13\) | \(1\) | \(0\) |
\(\frac13\) | \(2\) | \(0\) |
\(\frac13\) | \(3\) | \(1\) |
\(p\) | \(\color[RGB]{64,64,64}J_1\) | \(Y_1\) |
---|---|---|
\(\color[RGB]{239,71,111}\frac13\) | \(\color[RGB]{64,64,64}1\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{239,71,111}\frac13\) | \(\color[RGB]{64,64,64}2\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{17,138,178}\frac13\) | \(\color[RGB]{64,64,64}3\) | \(\color[RGB]{17,138,178}1\) |
\(p\) | \(Y_1\) |
---|---|
\(\color[RGB]{239,71,111}\frac23\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{17,138,178}\frac13\) | \(\color[RGB]{17,138,178}1\) |
\(j\) | \(y_j\) |
---|---|
\(1\) | 1 |
\(2\) | 1 |
\(3\) | 1 |
\(4\) | 0 |
⋮ | ⋮ |
\(m\) | 1 |
\(p\) | \(J_1\) | \(Y_1\) |
---|---|---|
\(1/m\) | \(1\) | \(1\) |
\(1/m\) | \(2\) | \(1\) |
\(1/m\) | \(3\) | \(1\) |
\(1/m\) | \(4\) | \(0\) |
⋮ | ⋮ | ⋮ |
\(1/m\) | \(m\) | \(1\) |
\(p\) | \(Y_1\) |
---|---|
\(\underset{\color{gray}\approx 0.30}{\sum\limits_{j:y_j=0} \frac{1}{m}}\) | \(0\) |
\(\underset{\color{gray}\approx 0.70}{\sum\limits_{j:y_j=1} \frac{1}{m}}\) | \(1\) |
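The same marginalization in R, with a small made-up population (ten voters, seven answering 1) standing in for the real one:

```r
# Marginal distribution of a single draw Y1: summing 1/m over the j's with each
# response value is just tabulating the population's relative frequencies.
y <- c(1, 1, 1, 0, 1, 0, 1, 1, 0, 1)   # a made-up population, m = 10
table(y) / length(y)                    # P(Y1 = 0) = 0.3, P(Y1 = 1) = 0.7
```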
\(j\) | name | \(y_j\) |
---|---|---|
\(1\) | Rush | \(0\) |
\(2\) | Mitt | \(0\) |
\(3\) | Al | \(1\) |
Aren’t you happy that, even though you can’t make a 3-sided die, I didn’t ask you to use a 4-sided one?
\(p\) | \(J_1\) | \(J_2\) | \(Y_1\) | \(Y_2\) |
---|---|---|---|---|
\(\color[RGB]{239,71,111}1/9\) | \(1\) | \(1\) | \(0\) | \(0\) |
\(\color[RGB]{239,71,111}1/9\) | \(1\) | \(2\) | \(0\) | \(0\) |
\(\color[RGB]{17,138,178}1/9\) | \(1\) | \(3\) | \(0\) | \(1\) |
\(\color[RGB]{239,71,111}1/9\) | \(2\) | \(1\) | \(0\) | \(0\) |
\(\color[RGB]{239,71,111}1/9\) | \(2\) | \(2\) | \(0\) | \(0\) |
\(\color[RGB]{17,138,178}1/9\) | \(2\) | \(3\) | \(0\) | \(1\) |
\(\color[RGB]{17,138,178}1/9\) | \(3\) | \(1\) | \(1\) | \(0\) |
\(\color[RGB]{17,138,178}1/9\) | \(3\) | \(2\) | \(1\) | \(0\) |
\(\color[RGB]{6,214,160}1/9\) | \(3\) | \(3\) | \(1\) | \(1\) |
\(p\) | \(Y_1\) | \(Y_2\) | \(Y_1 + Y_2\) |
---|---|---|---|
\(\color[RGB]{239,71,111}\frac{4}{9}\) | \(0\) | \(0\) | \(0\) |
\(\color[RGB]{17,138,178}\frac{2}{9}\) | \(0\) | \(1\) | \(1\) |
\(\color[RGB]{17,138,178}\frac{2}{9}\) | \(1\) | \(0\) | \(1\) |
\(\color[RGB]{6,214,160}\frac{1}{9}\) | \(1\) | \(1\) | \(2\) |
\(p\) | \(Y_1 + Y_2\) |
---|---|
\(\color[RGB]{239,71,111}\frac{4}{9}\) | \(0\) |
\(\color[RGB]{17,138,178}\frac{4}{9}\) | \(1\) |
\(\color[RGB]{6,214,160}\frac{1}{9}\) | \(2\) |
\(p\) | \(J_1\) | \(J_2\) | \(Y_1\) | \(Y_2\) | \(Y_1 + Y_2\) |
---|---|---|---|---|---|
\(\frac{1}{m^2}\) | \(1\) | \(1\) | \(y_1\) | \(y_1\) | \(y_1+y_1\) |
\(\frac{1}{m^2}\) | \(1\) | \(2\) | \(y_1\) | \(y_2\) | \(y_1+y_2\) |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | |
\(\frac{1}{m^2}\) | \(1\) | \(m\) | \(y_1\) | \(y_m\) | \(y_1+y_m\) |
\(\frac{1}{m^2}\) | \(2\) | \(1\) | \(y_2\) | \(y_1\) | \(y_2+y_1\) |
\(\frac{1}{m^2}\) | \(2\) | \(2\) | \(y_2\) | \(y_2\) | \(y_2+y_2\) |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | |
\(\frac{1}{m^2}\) | \(m\) | \(m\) | \(y_m\) | \(y_m\) | \(y_m+y_m\) |
\(p\) | \(Y_1\) | \(Y_2\) | \(Y_1 + Y_2\) |
---|---|---|---|
\(\textcolor[RGB]{239,71,111}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1},y_{j_2} = 0,0}} \frac{1}{m^2}}\) | \(0\) | \(0\) | \(\textcolor[RGB]{239,71,111}{0}\) |
\(\textcolor[RGB]{17,138,178}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1},y_{j_2} = 0,1}} \frac{1}{m^2}}\) | \(0\) | \(1\) | \(\textcolor[RGB]{17,138,178}{1}\) |
\(\textcolor[RGB]{17,138,178}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1},y_{j_2} = 1,0}} \frac{1}{m^2}}\) | \(1\) | \(0\) | \(\textcolor[RGB]{17,138,178}{1}\) |
\(\textcolor[RGB]{6,214,160}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1},y_{j_2} = 1,1}} \frac{1}{m^2}}\) | \(1\) | \(1\) | \(\textcolor[RGB]{6,214,160}{2}\) |
\(p\) | \(Y_1 + Y_2\) |
---|---|
\(\textcolor[RGB]{239,71,111}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1}+y_{j_2} = 0}} \frac{1}{m^2}}\) | \(\textcolor[RGB]{239,71,111}{0}\) |
\(\textcolor[RGB]{17,138,178}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1}+y_{j_2} = 1}} \frac{1}{m^2}}\) | \(\textcolor[RGB]{17,138,178}{1}\) |
\(\textcolor[RGB]{6,214,160}{\sum\limits_{\substack{j_1,j_2 \\ y_{j_1}+y_{j_2} = 2}} \frac{1}{m^2}}\) | \(\textcolor[RGB]{6,214,160}{2}\) |
To find the distribution of a sum of two responses, \(Y_1+Y_2\), we do the same thing.
We start with the joint distribution of two dice rolls.
Then we add columns that are functions of the rolls: \(Y_1\), \(Y_2\), and \(Y_1+Y_2\).
Then we marginalize to find the distribution of the sum. We’ll do this in two steps.
\[ \begin{aligned} \sum\limits_{\substack{j_1,j_2\\ y_{j_1},y_{j_2} = a,b}} \frac{1}{m^2} &= \sum\limits_{\substack{j_1 \\ y_{j_1}=a}} \qty{ \sum\limits_{\substack{j_2 \\ y_{j_2}=b}} \frac{1}{m^2} } \\ &= \sum\limits_{\substack{j_1 \\ y_{j_1}=a}} \qty{ m_b \times \frac{1}{m^2} } \qfor m_y = \sum\limits_{j:y_j=y} 1 \\ &= m_a \times m_b \times \frac{1}{m^2} = \theta_a \times \theta_b \qfor \theta_y = \frac{m_y}{m} \\ &= \theta_1^{s} \theta_0^{2-s} \qfor s = a+b \end{aligned} \]
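The two-draw marginalization can be checked by brute force, enumerating all \(m^2 = 9\) equally likely pairs of rolls for the Rush/Mitt/Al population:

```r
# Brute-force check of the two-draw example: y = (0, 0, 1) is the
# Rush/Mitt/Al population from the tables above.
y <- c(0, 0, 1)
m <- length(y)
rolls <- expand.grid(j1 = 1:m, j2 = 1:m)   # all m^2 equally likely (J1, J2) pairs
s <- y[rolls$j1] + y[rolls$j2]             # Y1 + Y2 for each pair
table(s) / m^2                             # should be 4/9, 4/9, 1/9
```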
\(p\) | \(J_1\) | … | \(J_n\) | \(Y_1\) | … | \(Y_n\) |
---|---|---|---|---|---|---|
\(\frac{1}{m^n}\) | \(1\) | … | \(1\) | \(y_1\) | … | \(y_1\) |
\(\frac{1}{m^n}\) | \(1\) | … | \(2\) | \(y_1\) | … | \(y_2\) |
⋮ | ⋮ | ⋱ | ⋮ | ⋮ | ⋱ | ⋮ |
\(\frac{1}{m^n}\) | \(1\) | … | \(m\) | \(y_1\) | … | \(y_m\) |
\(\frac{1}{m^n}\) | \(2\) | … | \(1\) | \(y_2\) | … | \(y_1\) |
\(\frac{1}{m^n}\) | \(2\) | … | \(2\) | \(y_2\) | … | \(y_2\) |
⋮ | ⋮ | ⋱ | ⋮ | ⋮ | ⋱ | ⋮ |
\(\frac{1}{m^n}\) | \(m\) | … | \(m\) | \(y_m\) | … | \(y_m\) |
\(p\) | \(Y_1 + \ldots + Y_n\) |
---|---|
\(\color[RGB]{239,71,111}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = 0}} \sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{1}{m^n}\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{17,138,178}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = 1}} \sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{1}{m^n}\) | \(\color[RGB]{17,138,178}1\) |
⋮ | ⋮ |
\(\color[RGB]{6,214,160}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = n-1}} \sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{1}{m^n}\) | \(\color[RGB]{6,214,160}n-1\) |
\(\color[RGB]{255,209,102}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = n}} \sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{1}{m^n}\) | \(\color[RGB]{255,209,102}n\) |
\(p\) | \(Y_1\) | \(Y_2\) | … | \(Y_n\) | \(Y_1 + \ldots + Y_n\) |
---|---|---|---|---|---|
\(\color[RGB]{239,71,111}\sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 0 \ldots 0}} \frac{1}{m^n}\) | 0 | 0 | … | \(0\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{17,138,178}\sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 0 \ldots 1}} \frac{1}{m^n}\) | 0 | 0 | … | \(1\) | \(\color[RGB]{17,138,178}1\) |
⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋮ |
\(\color[RGB]{17,138,178}\sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 1 \ldots 0}} \frac{1}{m^n}\) | 0 | 1 | … | \(0\) | \(\color[RGB]{17,138,178}1\) |
⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋮ |
\(\color[RGB]{6,214,160}\sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 1 \ldots 1}} \frac{1}{m^n}\) | 0 | 1 | … | \(1\) | \(\color[RGB]{6,214,160}n-1\) |
\(\color[RGB]{255,209,102}\sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 1, 1 \ldots 1}} \frac{1}{m^n}\) | 1 | 1 | … | \(1\) | \(\color[RGB]{255,209,102}n\) |
To find the distribution of a sum \(Y_1 + \ldots + Y_n\), we do the same thing.
We start by writing out the joint distribution of \(n\) dice rolls.
Then we marginalize in two steps.
This isn’t a class about counting, so this stuff won’t be on the exam.
\[ \begin{aligned} \sum_{\substack{j_1 \ldots j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{1}{m^n} &= \sum_{\substack{j_1 \ldots j_{n-1} \\ y_{j_1} \ldots y_{j_{n-1}} = a_1 \ldots a_{n-1}}} \sum_{\substack{j_n \\ y_{j_n}=a_n}} \frac{1}{m^n} \\ &= \sum_{\substack{j_1 \ldots j_{n-1} \\ y_{j_1} \ldots y_{j_{n-1}} = a_1 \ldots a_{n-1}}} m_{a_n} \times \frac{1}{m^n} \\ &= \sum_{\substack{j_1 \ldots j_{n-2} \\ y_{j_1} \ldots y_{j_{n-2}} = a_1 \ldots a_{n-2}}} \sum_{\substack{j_{n-1} \\ y_{j_{n-1}}=a_{n-1}}} m_{a_n} \times \frac{1}{m^n} \\ &= \sum_{\substack{j_1 \ldots j_{n-2} \\ y_{j_1} \ldots y_{j_{n-2}} = a_1 \ldots a_{n-2}}} m_{a_{n-1}} m_{a_n} \times \frac{1}{m^n} \\ &= m_{a_1} \ldots m_{a_{n}} \times \frac{1}{m^n} \qqtext{after repeating $n-2$ more times} \\ &= \theta_{a_1} \ldots \theta_{a_n} \\ &= \prod_{i:a_i=1} \theta_1 \prod_{i:a_i=0} \theta_0 = \theta_1^{s} \theta_0^{n-s} \qfor s = a_1 + \ldots + a_n \end{aligned} \]
\[ \begin{aligned} \sum\limits_{\substack{j_1 \ldots j_n \\ y_{j_1} + \ldots + y_{j_n} = s}} \frac{1}{m^n} &= \sum\limits_{\substack{a_1 \dots a_n \\ a_1 + \ldots + a_n = s}} \ \ \sum_{\substack{j_1 \ldots j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{1}{m^n} \\ & =\sum\limits_{\substack{a_1 \dots a_n \\ a_1 + \ldots + a_n = s}} \theta_1^{s}\theta_0^{n-s} \\ &= \binom{n}{s} \theta_1^{s} \theta_0^{n-s} \qqtext{ where } \binom{n}{s} = \sum\limits_{\substack{a_1 \dots a_n \\ a_1 + \ldots + a_n = s}} 1 \end{aligned} \]
\[ P\qty(\sum_{i=1}^n Y_i = s) = \binom{n}{s} \theta_1^{s}\theta_0^{n-s} \ \ \text{ where } \ \ \binom{n}{s} \text{ is the number of binary sequences $a_1 \ldots a_n$ summing to $s$.} \]
The `choose` function in R will do it for us: \(\binom{n}{s}\) is `choose(n,s)`. The `dbinom` function will give us the whole probability: \(\binom{n}{s}\theta_1^s\theta_0^{n-s}\) is `dbinom(s, n, theta_1)`. And the `rbinom` function will draw samples from this distribution: `rbinom(10000, n, theta_1)` gives us 10,000.

\[ \begin{aligned} &\overset{\color{gray}=P\qty(\frac{1}{n}\sum_{i=1}^n Y_i = \frac{s}{n})}{P\qty(\sum_{i=1}^n Y_i = s)} = \binom{n}{s} \theta_1^{s}\theta_0^{n-s} \\ &\qfor n =625 \\ &\qand \theta_1 \in \{\textcolor[RGB]{239,71,111}{0.68}, \textcolor[RGB]{17,138,178}{0.7}, \textcolor[RGB]{6,214,160}{0.72} \} \end{aligned} \]
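These three sampling distributions can be computed directly with `dbinom`:

```r
# The sampling distribution of the poll's sum for n = 625 calls,
# at each of the three support levels theta_1 in {0.68, 0.70, 0.72}.
n <- 625
s <- 0:n
p <- sapply(c(0.68, 0.70, 0.72), function(theta) dbinom(s, n, theta))
colSums(p)                     # each distribution sums to one
s[apply(p, 2, which.max)] / n  # each peaks at a mean near its theta
```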
To estimate this sampling distribution, you plug your point estimate \(\hat\theta\) into the Binomial formula. \[ \hat P\qty(\sum_{i=1}^n Y_i = s) = \binom{n}{s} \hat\theta^{s} (1-\hat\theta)^{n-s} \qqtext{ estimates } P\qty(\sum_{i=1}^n Y_i = s) = \binom{n}{s} \theta^{s} (1-\theta)^{n-s} \]
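Here's a sketch of the plug-in estimate in use. The point estimate 0.72 matches the poll above; sample quantiles stand in for the homework's `width` function, which I don't reproduce here.

```r
# Draw from the estimated sampling distribution and read off an interval
# covering 95% of the draws. theta_hat = 0.72 is the observed poll mean.
set.seed(1)
n <- 625
theta_hat <- 0.72
draws <- rbinom(10000, n, theta_hat) / n   # 10,000 simulated poll means
quantile(draws, c(0.025, 0.975))           # middle 95% of the draws
```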
To calibrate your interval estimate, you …

- use `rbinom` to draw 10,000 samples from this estimate of the sampling distribution.
- use `width` from the Week 1 Homework to find an interval that covers 95% of them.

Another Example
This won’t be on the Exam.
\(p\) | \(J_1\) | … | \(J_n\) | \(Y_1\) | … | \(Y_n\) |
---|---|---|---|---|---|---|
\(\frac{(m-n)!}{m!}\) | \(1\) | … | \(1\) | \(y_1\) | … | \(y_1\) |
\(\frac{(m-n)!}{m!}\) | \(1\) | … | \(2\) | \(y_1\) | … | \(y_2\) |
⋮ | ⋮ | ⋱ | ⋮ | ⋮ | ⋱ | ⋮ |
\(\frac{(m-n)!}{m!}\) | \(1\) | … | \(m\) | \(y_1\) | … | \(y_m\) |
\(\frac{(m-n)!}{m!}\) | \(2\) | … | \(1\) | \(y_2\) | … | \(y_1\) |
\(\frac{(m-n)!}{m!}\) | \(2\) | … | \(2\) | \(y_2\) | … | \(y_2\) |
⋮ | ⋮ | ⋱ | ⋮ | ⋮ | ⋱ | ⋮ |
\(\frac{(m-n)!}{m!}\) | \(m\) | … | \(m\) | \(y_m\) | … | \(y_m\) |
\(p\) | \(Y_1 + \ldots + Y_n\) |
---|---|
\(\color[RGB]{239,71,111}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = 0}} \sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{(m-n)!}{m!}\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{17,138,178}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = 1}} \sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{(m-n)!}{m!}\) | \(\color[RGB]{17,138,178}1\) |
⋮ | ⋮ |
\(\color[RGB]{6,214,160}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = n-1}} \sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{(m-n)!}{m!}\) | \(\color[RGB]{6,214,160}n-1\) |
\(\color[RGB]{255,209,102}\sum\limits_{\substack{a_1 \ldots a_n \\ a_1 + \ldots + a_n = n}} \sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{(m-n)!}{m!}\) | \(\color[RGB]{255,209,102}n\) |
\(p\) | \(Y_1\) | \(Y_2\) | … | \(Y_n\) | \(Y_1 + \ldots + Y_n\) |
---|---|---|---|---|---|
\(\color[RGB]{239,71,111}\sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 0 \ldots 0}} \frac{(m-n)!}{m!}\) | 0 | 0 | … | \(0\) | \(\color[RGB]{239,71,111}0\) |
\(\color[RGB]{17,138,178}\sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 0 \ldots 1}} \frac{(m-n)!}{m!}\) | 0 | 0 | … | \(1\) | \(\color[RGB]{17,138,178}1\) |
⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋮ |
\(\color[RGB]{17,138,178}\sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 1 \ldots 0}} \frac{(m-n)!}{m!}\) | 0 | 1 | … | \(0\) | \(\color[RGB]{17,138,178}1\) |
⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋮ |
\(\color[RGB]{6,214,160}\sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 0, 1 \ldots 1}} \frac{(m-n)!}{m!}\) | 0 | 1 | … | \(1\) | \(\color[RGB]{6,214,160}n-1\) |
\(\color[RGB]{255,209,102}\sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1}, y_{j_2} \ldots y_{j_n} = 1, 1 \ldots 1}} \frac{(m-n)!}{m!}\) | 1 | 1 | … | \(1\) | \(\color[RGB]{255,209,102}n\) |
To find the distribution of a sum \(Y_1 + \ldots + Y_n\) when we sample without replacement, we do the same thing.
We start by writing out the joint distribution of \(n\) draws.
Then we marginalize in two steps.
\[ \begin{aligned} \sum_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} \ldots y_{j_n} = 0 \ldots 0}} \frac{(m-n)!}{m!} &= \sum_{\substack{j_1 \neq \ldots \neq j_{n-1} \\ y_{j_1} \ldots y_{j_{n-1}} = 0 \ldots 0}} \sum_{\substack{j_n \not\in j_1 \ldots j_{n-1} \\ y_{j_n}=0}} \frac{(m-n)!}{m!} \\ &= \sum_{\substack{j_1 \neq \ldots \neq j_{n-1} \\ y_{j_1} \ldots y_{j_{n-1}} = 0 \ldots 0}} (m_{0} - n + 1) \times \frac{(m-n)!}{m!} \\ &= \sum_{\substack{j_1 \neq \ldots \neq j_{n-2} \\ y_{j_1} \ldots y_{j_{n-2}} = 0 \ldots 0}} (m_0 - n + 2) (m_{0} - n + 1) \times \frac{(m-n)!}{m!} \\ &= m_0 (m_0 - 1) \ldots (m_0 - n + 1) \times \frac{(m-n)!}{m!} = \frac{m_0!}{(m_0-n)!} \times \frac{(m-n)!}{m!} \qqtext{after repeating $n-2$ more times} \\ &= \frac{m_0!}{(m_0-s_0)!} \times \frac{m_1!}{(m_1-s_1)!} \times \frac{(m-n)!}{m!} \qqtext{ generally, for $s_0$ zeros and $s_1$ ones among $a_1 \ldots a_n$. } \end{aligned} \]
\[ \begin{aligned} \sum\limits_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} + \ldots + y_{j_n} = s}} \frac{(m-n)!}{m!} &= \sum\limits_{\substack{a_1 \dots a_n \\ a_1 + \ldots + a_n = s}} \ \ \sum_{\substack{j_1 \neq \ldots \neq j_n \\ y_{j_1} \ldots y_{j_n} = a_1 \ldots a_n}} \frac{(m-n)!}{m!} \\ & =\sum\limits_{\substack{a_1 \dots a_n \\ a_1 + \ldots + a_n = s}} \frac{m_0!}{(m_0-s_0)!} \times \frac{m_1!}{(m_1-s_1)!} \times \frac{(m-n)!}{m!} \\ &= \binom{n}{s} \frac{m_0!}{(m_0-s_0)!} \times \frac{m_1!}{(m_1-s_1)!} \times \frac{(m-n)!}{m!} \qfor s_1 = s, \ \ s_0 = n - s \end{aligned} \]
\[ P\qty(\sum_{i=1}^n Y_i = s) = \binom{n}{s} \frac{m_0!}{(m_0-(n-s))!} \times \frac{m_1!}{(m_1-s)!} \times \frac{(m-n)!}{m!} \]
The `dhyper` function will give us the probability of a sum \(s\): that's `dhyper(s, m_1, m_0, n)` (R puts the success count \(m_1\) first). And the `rhyper` function will draw samples: `rhyper(10000, m_1, m_0, n)` gives us 10,000.

You say that's how often, anyway. Hence ‘nominal’.
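Checking the formula against `dhyper`, and against the binomial: with \(m\) in the millions and \(n = 625\), sampling with and without replacement are nearly indistinguishable. The population counts below are the assumed ones from the polling example.

```r
# Hypergeometric vs. binomial for the polling example: m = 7.23M voters,
# 70% answering 1, polls of n = 625 calls.
m   <- 7230000
m_1 <- round(0.7 * m)   # voters answering 1
m_0 <- m - m_1          # voters answering 0
n   <- 625
s   <- 0:n
max(abs(dhyper(s, m_1, m_0, n) - dbinom(s, n, m_1 / m)))   # vanishingly small
```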
In technical terms, these estimates are draws from your estimator’s sampling distribution.
Impossible. I know.
We’re coloring it black instead of green here. It’s hard to see green on a green background.