In [1]:

```
# Run this cell to set up packages for lecture.
from lec18_imports import *
```

- Homework 4 is due **tomorrow at 11:59PM**.
- Lab 5 is due **Thursday at 11:59PM**.

- Recap: Standard units.
- The Central Limit Theorem.
- Using the Central Limit Theorem to create confidence intervals.

Suppose $x$ is a numerical variable, and $x_i$ is one value of that variable. Then, $$x_{i \: \text{(su)}} = \frac{x_i - \text{mean of $x$}}{\text{SD of $x$}}$$

represents $x_i$ in **standard units** – the number of standard deviations $x_i$ is above the mean.

SAT scores range from 0 to 1600. The distribution of SAT scores has a mean of 950 and a standard deviation of 300. Your friend tells you that their SAT score, in standard units, is 2.5. What do you conclude?
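One way to reason about this, as a quick sketch using the numbers above: convert the claimed score from standard units back to the original scale and see whether it's even possible.

```python
# Convert 2.5 standard units back to the original SAT scale,
# using the mean (950) and SD (300) stated above.
mean_sat = 950
sd_sat = 300
claimed_su = 2.5

claimed_score = mean_sat + claimed_su * sd_sat
print(claimed_score)  # 1700.0 — impossible, since SAT scores max out at 1600
```

Since the implied score exceeds the maximum possible score, your friend's claim can't be right.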

The distribution of flight delays that we've been looking at is *not* roughly normal.

In [2]:

```
delays = bpd.read_csv('data/united_summer2015.csv')
delays.plot(kind='hist', y='Delay', bins=np.arange(-20.5, 210, 5), density=True, ec='w', figsize=(10, 5), title='Population Distribution of Flight Delays')
plt.xlabel('Delay (minutes)');
```

In [3]:

```
delays.get('Delay').describe()
```

Out[3]:

```
count    13825.00
mean        16.66
std         39.48
             ...
50%          2.00
75%         18.00
max        580.00
Name: Delay, Length: 8, dtype: float64
```

- We used bootstrapping to estimate **the distribution of a sample statistic (e.g. sample mean or sample median)**, using just a single sample.

- We did this to construct confidence intervals for a population parameter.

**Important**: For now, we'll suppose our parameter of interest is the population mean, **so we're interested in estimating the distribution of the sample mean**.

- What we're soon going to discover is a technique for **finding the distribution of the sample mean and creating a confidence interval, without needing to bootstrap**. Think of this as a shortcut to bootstrapping.

Since we have access to the population of flight delays, let's remind ourselves what the distribution of the sample mean looks like by drawing samples repeatedly from the population.

- This is **not bootstrapping**.
- This is also **not practical**. If we had access to a population, we wouldn't need to understand the distribution of the sample mean – we'd be able to compute the population mean directly.

In [4]:

```
sample_means = np.array([])
repetitions = 2000
for i in np.arange(repetitions):
    sample = delays.sample(500) # Not bootstrapping!
    sample_mean = sample.get('Delay').mean()
    sample_means = np.append(sample_means, sample_mean)
sample_means
```

Out[4]:

array([18.17, 18.21, 18.01, ..., 17.49, 17.46, 14.66])

In [5]:

```
bpd.DataFrame().assign(sample_means=sample_means).plot(kind='hist', density=True, ec='w', alpha=0.65, bins=20, figsize=(10, 5));
plt.scatter([sample_means.mean()], [-0.005], marker='^', color='green', s=250)
plt.axvline(sample_means.mean(), color='green', label=f'mean={np.round(sample_means.mean(), 2)}', linewidth=4)
plt.xlim(5, 30)
plt.ylim(-0.013, 0.26)
plt.legend();
```

The Central Limit Theorem (CLT) says that the probability distribution of the **sum or mean** of a large random sample drawn with replacement will be roughly normal, regardless of the distribution of the population from which the sample is drawn.

**Shape**: The CLT says that the distribution of the sample mean is roughly normal, no matter what the population looks like.

**Center**: This distribution is centered at the population mean.

**Spread**: What is the standard deviation of the distribution of the sample mean? How is it impacted by the sample size?

The function `sample_mean_delays` takes in an integer `sample_size`, and:

- Takes a sample of size `sample_size` directly from the population.
- Computes the mean of the sample.
- Repeats steps 1 and 2 above 2000 times, and returns an array of the resulting means.

In [6]:

```
def sample_mean_delays(sample_size):
    sample_means = np.array([])
    for i in np.arange(2000):
        sample = delays.sample(sample_size)
        sample_mean = sample.get('Delay').mean()
        sample_means = np.append(sample_means, sample_mean)
    return sample_means
```

Let's call `sample_mean_delays` on several values of `sample_size`.

In [7]:

```
sample_means = {}
sample_sizes = [5, 10, 50, 100, 200, 400, 800, 1600]
for size in sample_sizes:
    sample_means[size] = sample_mean_delays(size)
```

Let's look at the resulting distributions.

In [8]:

```
plot_many_distributions(sample_sizes, sample_means)
```

What do you notice? 🤔

- As we increase our sample size, the distribution of the sample mean gets narrower, and so its standard deviation decreases.
- Can we determine exactly how much it decreases by?

In [9]:

```
# Compute the standard deviation of each distribution.
sds = np.array([])
for size in sample_sizes:
    sd = np.std(sample_means[size])
    sds = np.append(sds, sd)
sds
```

Out[9]:

array([18.65, 12.78, 5.51, 3.83, 2.76, 1.96, 1.35, 0.9 ])

In [10]:

```
observed = bpd.DataFrame().assign(
SampleSize=sample_sizes,
StandardDeviation=sds
)
observed.plot(kind='scatter', x='SampleSize', y='StandardDeviation', s=70, title="Standard Deviation of the Distribution of the Sample Mean vs. Sample Size", figsize=(10, 5));
```

- As the sample size increases, the standard deviation of the distribution of the sample mean *decreases quickly*.

- Here's the mathematical relationship describing this phenomenon: $$\text{SD of distribution of sample mean} = \frac{\text{population SD}}{\sqrt{\text{sample size}}}$$

- This is sometimes called the **square root law**. Its proof is outside the scope of this class; you'll see it if you take an upper-division probability course.

**Note**: This is **not** saying anything about the standard deviation of a sample itself! It is a statement about the distribution of all possible sample means. If we increase the size of the sample we're taking:

- It **is not true** ❌ that the SD of our sample will decrease.
- It **is true** ✅ that the SD of the distribution of all possible sample means of that size will decrease.
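We can check the square root law against the simulated SDs above. A minimal sketch, plugging in the population SD of 39.48 minutes from the `describe` output earlier rather than rerunning the simulation:

```python
import numpy as np

pop_sd = 39.48  # Population SD of flight delays, from the describe output above.
sample_sizes = np.array([5, 10, 50, 100, 200, 400, 800, 1600])

# Square root law: SD of the sample mean's distribution = population SD / sqrt(n).
predicted_sds = pop_sd / np.sqrt(sample_sizes)
print(np.round(predicted_sds, 2))
```

These predictions land close to the simulated standard deviations computed above.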
If we were to take many, many samples of the same size from a population, and take the mean of each sample, the distribution of the sample mean will have the following characteristics:

**Shape**: The distribution will be roughly normal, regardless of the shape of the population distribution.

**Center**: The distribution will be centered at the population mean.

**Spread**: The distribution's standard deviation will be described by the square root law: $$\text{SD of distribution of sample mean} = \frac{\text{population SD}}{\sqrt{\text{sample size}}}$$

**🚨 Practical Issue**: The mean and standard deviation of the distribution of the sample mean both depend on the original population, but we typically **don't have access to the population**!

**Idea**: The sample mean and SD are likely to be close to the population mean and SD. So, use them as approximations in the CLT!

- As a result, **we can approximate the distribution of the sample mean, given just a single sample, without ever having to bootstrap!**
- In other words, the CLT is a shortcut to bootstrapping!

Let's take a single sample of size 500 from `delays`.

In [11]:

```
np.random.seed(42)
my_sample = delays.sample(500)
my_sample.get('Delay').describe()
```

Out[11]:

```
count    500.00
mean      13.01
std       28.00
           ...
50%        3.00
75%       16.00
max      209.00
Name: Delay, Length: 8, dtype: float64
```

In [12]:

```
resample_means = np.array([])
repetitions = 2000
for i in np.arange(repetitions):
    resample = my_sample.sample(500, replace=True) # Bootstrapping!
    resample_mean = resample.get('Delay').mean()
    resample_means = np.append(resample_means, resample_mean)
resample_means
```

Out[12]:

array([12.65, 11.5 , 11.34, ..., 12.59, 11.89, 12.58])

In [13]:

```
bpd.DataFrame().assign(resample_means=resample_means).plot(kind='hist', density=True, ec='w', alpha=0.65, bins=20, figsize=(10, 5));
plt.scatter([resample_means.mean()], [-0.005], marker='^', color='green', s=250)
plt.axvline(resample_means.mean(), color='green', label=f'mean={np.round(resample_means.mean(), 2)}', linewidth=4)
plt.xlim(7, 20)
plt.ylim(-0.015, 0.35)
plt.legend();
```

The CLT tells us what this distribution will look like, without having to bootstrap!

Suppose all we have access to in practice is a single "original sample." If we were to take many, many samples of the same size from this original sample, and take the mean of each resample, the distribution of the (re)sample mean will have the following characteristics:

**Shape**: The distribution will be roughly normal, regardless of the shape of the original sample's distribution.

**Center**: The distribution will be centered at the **original sample's mean**, which should be close to the population's mean.

**Spread**: The distribution's standard deviation will be described by the square root law: $$\text{SD of distribution of sample mean} \approx \frac{\text{sample SD}}{\sqrt{\text{sample size}}}$$

Let's test this out!

Using just the original sample, `my_sample`, we estimate that the distribution of the sample mean has the following mean:

In [14]:

```
sample_mean_mean = my_sample.get('Delay').mean()
sample_mean_mean
```

Out[14]:

13.008

and the following standard deviation:

In [15]:

```
sample_mean_sd = np.std(my_sample.get('Delay')) / np.sqrt(my_sample.shape[0])
sample_mean_sd
```

Out[15]:

1.2511114546674091

In [16]:

```
norm_x = np.linspace(7, 20)
norm_y = normal_curve(norm_x, mu=sample_mean_mean, sigma=sample_mean_sd)
bpd.DataFrame().assign(Bootstrapping=resample_means).plot(kind='hist', density=True, ec='w', alpha=0.65, bins=20, figsize=(10, 5));
plt.plot(norm_x, norm_y, color='black', linestyle='--', linewidth=4, label='CLT')
plt.title('Distribution of the Sample Mean, Using Two Methods')
plt.xlim(7, 20)
plt.legend();
```

**Key takeaway**: Given just a single sample, we can use the CLT to estimate the distribution of the sample mean, **without bootstrapping**.

In [17]:

```
show_clt_slides()
```

Now, we can make confidence intervals for population means **without needing to bootstrap**!

- Previously, we bootstrapped to construct confidence intervals.
- Strategy: Collect one sample, repeatedly resample from it, calculate the statistic on each resample, and look at the middle 95% of resampled statistics.

- But, **if our statistic is the mean**, we can use the CLT.
    - Computationally cheaper – no simulation required!

- In both cases, we use just a single sample to construct our confidence interval.
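The CLT-based recipe can be packaged into a small helper. A minimal sketch, assuming a 1D NumPy array of observations (the name `clt_confidence_interval` is ours, not from the lecture code):

```python
import numpy as np

def clt_confidence_interval(sample_values):
    """Approximate 95% confidence interval for the population mean,
    via the CLT: sample mean +/- 2 * (sample SD / sqrt(sample size))."""
    sample_mean = sample_values.mean()
    sd_of_sample_mean = np.std(sample_values) / np.sqrt(len(sample_values))
    return [sample_mean - 2 * sd_of_sample_mean,
            sample_mean + 2 * sd_of_sample_mean]
```

No resampling loop is needed: one pass over the sample gives the interval.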

We already have a single sample, `my_sample`. Let's bootstrap to generate 2000 resample means.

In [18]:

```
my_sample.get('Delay').describe()
```

Out[18]:
```
count    500.00
mean      13.01
std       28.00
           ...
50%        3.00
75%       16.00
max      209.00
Name: Delay, Length: 8, dtype: float64
```

In [19]:

```
resample_means = np.array([])
repetitions = 2000
for i in np.arange(repetitions):
    resample = my_sample.sample(500, replace=True)
    resample_mean = resample.get('Delay').mean()
    resample_means = np.append(resample_means, resample_mean)
resample_means
resample_means
```

Out[19]:

array([14.37, 13.93, 11.34, ..., 16.84, 14.46, 11.4 ])

In [20]:

```
bpd.DataFrame().assign(resample_means=resample_means).plot(kind='hist', density=True, ec='w', alpha=0.65, bins=20, figsize=(10, 5));
plt.scatter([resample_means.mean()], [-0.005], marker='^', color='green', s=250)
plt.axvline(resample_means.mean(), color='green', label=f'mean={np.round(resample_means.mean(), 2)}', linewidth=4)
plt.xlim(7, 20)
plt.ylim(-0.015, 0.35)
plt.legend();
```

In [21]:

```
left_boot = np.percentile(resample_means, 2.5)
right_boot = np.percentile(resample_means, 97.5)
[left_boot, right_boot]
```

Out[21]:

[10.6359, 15.61205]

In [22]:

```
bpd.DataFrame().assign(resample_means=resample_means).plot(kind='hist', y='resample_means', alpha=0.65, bins=20, density=True, ec='w', figsize=(10, 5), title='Distribution of Bootstrapped Sample Means');
plt.plot([left_boot, right_boot], [0, 0], color='gold', linewidth=10, label='95% bootstrap-based confidence interval');
plt.xlim(7, 20);
plt.legend();
```

But we didn't *need* to bootstrap to learn what the distribution of the sample mean looks like. We could instead use the CLT, which tells us that the distribution of the sample mean is normal. Further, its mean and standard deviation are approximately:

In [23]:

```
sample_mean_mean = my_sample.get('Delay').mean()
sample_mean_mean
```

Out[23]:

13.008

In [24]:

```
sample_mean_sd = np.std(my_sample.get('Delay')) / np.sqrt(my_sample.shape[0])
sample_mean_sd
```

Out[24]:

1.2511114546674091

So, the distribution of the sample mean is approximately:

In [25]:

```
plt.figure(figsize=(10, 5))
norm_x = np.linspace(7, 20)
norm_y = normal_curve(norm_x, mu=sample_mean_mean, sigma=sample_mean_sd)
plt.plot(norm_x, norm_y, color='black', linestyle='--', linewidth=4, label='Distribution of the Sample Mean (via the CLT)')
plt.xlim(7, 20)
plt.legend();
```

**Question**: What interval on the $x$-axis captures the **middle 95%** of this distribution?

| Range | All Distributions (via Chebyshev's inequality) | Normal Distribution |
|---|---|---|
| mean $\pm \ 1$ SD | $\geq 0\%$ | $\approx 68\%$ |
| mean $\pm \ 2$ SDs | $\geq 75\%$ | $\approx 95\%$ |
| mean $\pm \ 3$ SDs | $\geq 88.8\%$ | $\approx 99.73\%$ |
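The normal-distribution column can be verified numerically with `scipy.stats` (assuming the `stats` imported by the lecture setup is SciPy's, as the later cells suggest):

```python
from scipy import stats

# Proportion of a standard normal distribution within k SDs of the mean.
for k in [1, 2, 3]:
    proportion = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(k, round(proportion, 4))  # 1 → 0.6827, 2 → 0.9545, 3 → 0.9973
```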

As we saw last class, if a variable is roughly normal, then approximately 95% of its values are within 2 standard deviations of its mean.

In [26]:

```
normal_area(-2, 2)
```

In [27]:

```
stats.norm.cdf(2) - stats.norm.cdf(-2)
```

Out[27]:

0.9544997361036416

Let's use this fact here!

- Approximately 95% of a normal distribution's values fall within $\pm$ 2 SDs of the mean.

- The distribution in question here is the distribution of the sample mean. **Don't confuse the sample SD with the SD of the sample mean's distribution**!

- So, our interval is given by: $$\left[\text{sample mean} - 2 \cdot \frac{\text{sample SD}}{\sqrt{\text{sample size}}}, \ \text{sample mean} + 2 \cdot \frac{\text{sample SD}}{\sqrt{\text{sample size}}}\right]$$

In [28]:

```
left_normal = sample_mean_mean - 2 * sample_mean_sd
right_normal = sample_mean_mean + 2 * sample_mean_sd
[left_normal, right_normal]
```

Out[28]:

[10.50577709066518, 15.510222909334818]

In [29]:

```
plt.figure(figsize=(10, 5))
norm_x = np.linspace(7, 20)
norm_y = normal_curve(norm_x, mu=sample_mean_mean, sigma=sample_mean_sd)
plt.plot(norm_x, norm_y, color='black', linestyle='--', linewidth=4, label='Distribution of the Sample Mean (via the CLT)')
plt.xlim(7, 20)
plt.ylim(0, 0.41)
plt.plot([left_normal, right_normal], [0, 0], color='#8f6100', linewidth=10, label='95% CLT-based confidence interval')
plt.legend();
```