In [1]:

```
# Set up packages for lecture. Don't worry about understanding this code, but
# make sure to run it if you're following along.
import numpy as np
import babypandas as bpd
import pandas as pd
from matplotlib_inline.backend_inline import set_matplotlib_formats
import matplotlib.pyplot as plt
set_matplotlib_formats("svg")
plt.style.use('ggplot')

np.set_printoptions(threshold=20, precision=2, suppress=True)
pd.set_option("display.max_rows", 7)
pd.set_option("display.max_columns", 8)
pd.set_option("display.precision", 2)

# Imports for animation.
from lec18 import sampling_animation
from IPython.display import display, IFrame, HTML, YouTubeVideo

def show_permutation_testing_summary():
    src = "https://docs.google.com/presentation/d/e/2PACX-1vSovXDonR6EmjrT45h4pY1mwmcKFMWVSdgpbKHC5HNTm9sbG7dojvvCDEQCjuk2dk1oA4gmwMogr8ZL/embed?start=false&loop=false&delayms=3000"
    width = 960
    height = 569
    display(IFrame(src, width, height))

def show_bootstrapping_slides():
    src = "https://docs.google.com/presentation/d/e/2PACX-1vS_iYHJYXSVMMZ-YQVFwMEFR6EFN3FDSAvaMyUm-YJfLQgRMTHm3vI-wWJJ5999eFJq70nWp2hyItZg/embed?start=false&loop=false&delayms=3000"
    width = 960
    height = 509
    display(IFrame(src, width, height))
```

- Lab 5 is due **Saturday 11/5 at 11:59pm**.
- Homework 5 is due **Tuesday 11/8 at 11:59pm**.

- Permutation testing examples.
- Are the distributions of weight for babies 👶 born to smoking mothers vs. non-smoking mothers different?
- Are the distributions of pressure drops for footballs 🏈 from two different teams different?

- Bootstrapping 🥾.

Permutation tests help answer questions of the form:

I have two samples, but no information about any population distributions. Do these samples look like they were drawn from the same population?

- Are the distributions of weight for babies 👶 born to smoking mothers vs. non-smoking mothers different?

- Are the distributions of pressure drops for footballs 🏈 from two different teams different?

In [2]:

```
babies = bpd.read_csv('data/baby.csv').get(['Maternal Smoker', 'Birth Weight'])
babies
```

Out[2]:

| | Maternal Smoker | Birth Weight |
|---|---|---|
| 0 | False | 120 |
| 1 | False | 113 |
| 2 | True | 128 |
| ... | ... | ... |
| 1171 | True | 130 |
| 1172 | False | 125 |
| 1173 | False | 117 |

1174 rows × 2 columns

**Null Hypothesis**: In the population, birth weights of smokers' babies and non-smokers' babies have the same distribution, and the observed differences in our samples are due to random chance.

**Alternative Hypothesis**: In the population, smokers' babies have lower birth weights than non-smokers' babies, on average. The observed differences in our samples cannot be explained by random chance alone.

- Test statistic: Difference in mean birth weight of non-smokers' babies and smokers' babies.

- Strategy:
- Create a "population" by pooling data from both samples together.
- Randomly divide this "population" into two groups of the same sizes as the original samples.
- Repeat this process, calculating the test statistic for each pair of random groups.
- Generate an empirical distribution of test statistics and see whether the observed statistic is consistent with it.

- Implementation:
- To randomly divide the "population" into two groups of the same sizes as the original samples, we'll just shuffle the group labels and use the shuffled group labels to define the two random groups.
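The shuffling strategy can be sketched with plain NumPy on a small made-up dataset (the labels and weights below are invented for illustration; the lecture's version, which follows, uses babypandas on the real data):

```python
import numpy as np

# Made-up data for illustration: True/False labels mark the two groups.
labels = np.array([True, True, True, False, False, False, False])
weights = np.array([110.0, 112.0, 115.0, 120.0, 124.0, 126.0, 130.0])

def diff_in_means(labels, values):
    # Test statistic: mean of the False group minus mean of the True group.
    return values[~labels].mean() - values[labels].mean()

observed = diff_in_means(labels, weights)

# Shuffle ONLY the labels; the values stay put, so each shuffle randomly
# divides the pooled "population" into groups of the original sizes.
rng = np.random.default_rng(42)
simulated = np.array([
    diff_in_means(rng.permutation(labels), weights)
    for _ in range(1000)
])

# p-value: proportion of shuffles at least as extreme as the observed statistic.
p_value = np.count_nonzero(simulated >= observed) / len(simulated)
```

Note that only the labels are permuted; permuting the values instead would produce the same random groupings, which is the point of the question posed later in this lecture.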

In [3]:

```
babies_with_shuffled = babies.assign(
    Shuffled_Labels=np.random.permutation(babies.get('Maternal Smoker'))
)
babies_with_shuffled
```

Out[3]:

| | Maternal Smoker | Birth Weight | Shuffled_Labels |
|---|---|---|---|
| 0 | False | 120 | False |
| 1 | False | 113 | True |
| 2 | True | 128 | True |
| ... | ... | ... | ... |
| 1171 | True | 130 | False |
| 1172 | False | 125 | False |
| 1173 | False | 117 | False |

1174 rows × 3 columns

The `'Maternal Smoker'` column defines the original groups. The `'Shuffled_Labels'` column defines the random groups.

For the original groups:

In [4]:

```
original_groups = babies.groupby('Maternal Smoker').mean()
original_groups
```

Out[4]:

| Maternal Smoker | Birth Weight |
|---|---|
| False | 123.09 |
| True | 113.82 |

In [5]:

```
original_means = original_groups.get('Birth Weight')
observed_difference = original_means.loc[False] - original_means.loc[True]
observed_difference
```

Out[5]:

9.266142572024918

For the random groups:

In [6]:

```
def difference_in_group_means(weights_df):
    group_means = weights_df.groupby('Shuffled_Labels').mean().get('Birth Weight')
    return group_means.loc[False] - group_means.loc[True]

# Shuffling the labels again.
babies_with_shuffled = babies.assign(
    Shuffled_Labels=np.random.permutation(babies.get('Maternal Smoker'))
)
difference_in_group_means(babies_with_shuffled)
```

Out[6]:

-1.157965781495193

In [7]:

```
n_repetitions = 500 # The dataset is large, so it takes too long to run if we use 5000 or 10000.

differences = np.array([])
for i in np.arange(n_repetitions):
    # Step 1: Shuffle the labels.
    shuffled_labels = np.random.permutation(babies.get('Maternal Smoker'))

    # Step 2: Put them in a DataFrame.
    shuffled = babies_with_shuffled.assign(Shuffled_Labels=shuffled_labels)

    # Step 3: Compute the difference in group means and add the result to the differences array.
    difference = difference_in_group_means(shuffled)
    differences = np.append(differences, difference)

differences
```

Out[7]:

array([-0.39, 2.23, -2.16, ..., -2.59, 0.49, 1.47])

In [8]:

```
(bpd.DataFrame()
.assign(simulated_diffs=differences)
.plot(kind='hist', bins=20, density=True, ec='w', figsize=(10, 5))
);
```

- Note that the empirical distribution of the test statistic (difference in means) is centered around 0.
- This matches our intuition – if the null hypothesis is true, there should be no difference in the group means on average.

In [9]:

```
(bpd.DataFrame()
.assign(simulated_diffs=differences)
.plot(kind='hist', bins=20, density=True, ec='w', figsize=(10, 5))
);
plt.axvline(observed_difference, color='black', linewidth=4, label='observed difference in means')
plt.legend();
```

In [10]:

```
smoker_p_value = np.count_nonzero(differences >= observed_difference) / n_repetitions
smoker_p_value
```

Out[10]:

0.0

- Under the null hypothesis, we rarely see differences as large as 9.26 ounces.

- Therefore, we reject the null hypothesis: the evidence implies that the groups do not come from the same distribution.

## Can we conclude that smoking *causes* lower birth weight?

No, we cannot. This was an observational study; there may be confounding factors. For instance, maybe smokers are more likely to drink caffeine, and caffeine causes lower birth weight.

In [11]:

```
show_permutation_testing_summary()
```

Recall, `babies` has two columns.

In [12]:

```
babies.take(np.arange(3))
```

Out[12]:

| | Maternal Smoker | Birth Weight |
|---|---|---|
| 0 | False | 120 |
| 1 | False | 113 |
| 2 | True | 128 |

To randomly assign weights to groups, we shuffled the `'Maternal Smoker'` column. Could we have shuffled the `'Birth Weight'` column instead?

- A. Yes
- B. No

- On January 18, 2015, the New England Patriots played the Indianapolis Colts for a spot in the Super Bowl.
- The Patriots won, 45-7. They went on to win the Super Bowl.
- After the game, **it was alleged that the Patriots intentionally deflated footballs**, making them easier to catch. This scandal was called "Deflategate."

- Each team brings 12 footballs to the game. Teams use their own footballs while on offense.
- NFL rules stipulate that **each ball must be inflated to between 12.5 and 13.5 pounds per square inch (psi)**.
- Before the game, officials found that all of the Patriots' footballs were at about 12.5 psi, and that all of the Colts' footballs were at about 13.0 psi.
  - This pre-game data was not written down.
- In the second quarter, the Colts intercepted a Patriots ball and notified officials that it felt under-inflated.
- At halftime, two officials (Clete Blakeman and Dyrol Prioleau) independently measured the pressures of as many of the 24 footballs as they could.
  - They ran out of time before they could finish.
- Note that the relevant quantity is the **change in pressure** from the start of the game to halftime.
  - The Patriots' balls *started* at a lower psi (which is not an issue on its own).
  - The allegations were that the Patriots **deflated** their balls during the game.

In [13]:

```
footballs = bpd.read_csv('data/footballs.csv')
footballs
```

Out[13]:

| | Team | Pressure | PressureDrop |
|---|---|---|---|
| 0 | Patriots | 11.65 | 0.85 |
| 1 | Patriots | 11.03 | 1.48 |
| 2 | Patriots | 10.85 | 1.65 |
| ... | ... | ... | ... |
| 11 | Colts | 12.53 | 0.47 |
| 12 | Colts | 12.72 | 0.28 |
| 13 | Colts | 12.35 | 0.65 |

14 rows × 3 columns

- There are only 14 rows (11 for Patriots footballs, 3 for Colts footballs) since the officials weren't able to record the pressures of every ball.
- The `'Pressure'` column records the average of the two officials' measurements at halftime.
- The `'PressureDrop'` column records the difference between the estimated starting pressure and the average recorded `'Pressure'` of each football.

Did the Patriots' footballs drop in pressure more than the Colts'?

- We want to test whether two samples came from the same distribution – this calls for a permutation test.
- **Null hypothesis**: The drop in pressures for both teams came from the same distribution.
  - By chance, the Patriots' footballs deflated more.
- **Alternative hypothesis**: No, the Patriots' footballs deflated more than one would expect due to random chance alone.

Similar to the baby weights example, our test statistic will be the difference between the teams' average pressure drops. We'll calculate the mean drop for the `'Patriots'` minus the mean drop for the `'Colts'`.

In [14]:

```
means = footballs.groupby('Team').mean().get('PressureDrop')
means
```

Out[14]:

```
Team
Colts       0.47
Patriots    1.21
Name: PressureDrop, dtype: float64
```

In [15]:

```
# Calculate the observed statistic.
observed_difference = means.loc['Patriots'] - means.loc['Colts']
observed_difference
```

Out[15]:

0.7362500000000001

The average pressure drop for the Patriots was about 0.74 psi more than the Colts.

We'll run a permutation test to see if 0.74 psi is a significant difference.

- To do this, we'll need to repeatedly shuffle either the `'Team'` or the `'PressureDrop'` column.
- We'll shuffle the `'PressureDrop'` column.
- Tip: It's a good idea to simulate one value of the test statistic before putting everything in a `for`-loop.

In [16]:

```
# For simplicity, keep only the columns that are necessary for the test:
# One column of group labels and one column of numerical values.
footballs = footballs.get(['Team', 'PressureDrop'])
footballs
```

Out[16]:

| | Team | PressureDrop |
|---|---|---|
| 0 | Patriots | 0.85 |
| 1 | Patriots | 1.48 |
| 2 | Patriots | 1.65 |
| ... | ... | ... |
| 11 | Colts | 0.47 |
| 12 | Colts | 0.28 |
| 13 | Colts | 0.65 |

14 rows × 2 columns

In [17]:

```
# Shuffle one column.
# We chose to shuffle the numerical data (pressure drops), but we could have shuffled the group labels (team names) instead.
shuffled_drops = np.random.permutation(footballs.get('PressureDrop'))
shuffled_drops
```

Out[17]:

array([1.48, 0.85, 1.65, 1.18, 0.28, 1.8 , 0.47, 0.72, 1.23, 0.65, 1.35, 0.42, 1.38, 0.47])

In [18]:

```
# Add the shuffled column back to the DataFrame.
shuffled = footballs.assign(Shuffled_Drops=shuffled_drops)
shuffled
```

Out[18]:

| | Team | PressureDrop | Shuffled_Drops |
|---|---|---|---|
| 0 | Patriots | 0.85 | 1.48 |
| 1 | Patriots | 1.48 | 0.85 |
| 2 | Patriots | 1.65 | 1.65 |
| ... | ... | ... | ... |
| 11 | Colts | 0.47 | 0.42 |
| 12 | Colts | 0.28 | 1.38 |
| 13 | Colts | 0.65 | 0.47 |

14 rows × 3 columns

In [19]:

```
# Calculate the group means for the two randomly created groups.
team_means = shuffled.groupby('Team').mean().get('Shuffled_Drops')
team_means
```

Out[19]:

```
Team
Colts       0.91
Patriots    1.03
Name: Shuffled_Drops, dtype: float64
```

In [20]:

```
# Calculate the difference in group means (Patriots minus Colts) for the randomly created groups.
team_means.loc['Patriots'] - team_means.loc['Colts']
```

Out[20]:

0.12375000000000003

- Repeat the process many times by wrapping it inside a `for`-loop.
- Keep track of the difference in group means in an array, appending each time.
- Optionally, create a function to calculate the difference in group means.

In [21]:

```
def difference_in_mean_pressure_drops(pressures_df):
    team_means = pressures_df.groupby('Team').mean().get('Shuffled_Drops')
    return team_means.loc['Patriots'] - team_means.loc['Colts']
```

In [22]:

```
n_repetitions = 5000 # The dataset is much smaller than in the baby weights example, so a larger number of repetitions will still run quickly.

differences = np.array([])
for i in np.arange(n_repetitions):
    # Step 1: Shuffle the pressure drops.
    shuffled_drops = np.random.permutation(footballs.get('PressureDrop'))

    # Step 2: Put them in a DataFrame.
    shuffled = footballs.assign(Shuffled_Drops=shuffled_drops)

    # Step 3: Compute the difference in group means and add the result to the differences array.
    difference = difference_in_mean_pressure_drops(shuffled)
    differences = np.append(differences, difference)

differences
```

Out[22]:

array([-0.22, -0.38, 0.48, ..., 0.35, 0.13, -0.08])

In [23]:

```
bpd.DataFrame().assign(SimulatedDifferenceInMeans=differences).plot(kind='hist', bins=20, density=True, ec='w', figsize=(10, 5))
plt.axvline(observed_difference, color='black', linewidth=4, label='observed difference in means')
plt.legend();
```

It doesn't look good for the Patriots. What is the p-value?

- Recall, the p-value is the probability, under the null hypothesis, of seeing a result **as or more extreme** than the observation.
- In this case, that's the probability of the difference in mean pressure drops being greater than or equal to 0.74 psi.

In [24]:

```
np.count_nonzero(differences >= observed_difference) / n_repetitions
```

Out[24]:

0.0044

This p-value is low enough to consider this result to be *highly* statistically significant ($p<0.01$).

- We reject the null hypothesis, as it is unlikely that the difference in mean pressure drops is due to chance alone.
- But this doesn't establish **causation**.
- That is, we can't conclude that the Patriots **intentionally** deflated their footballs.

Quote from an investigative report commissioned by the NFL:

“[T]he average pressure drop of the Patriots game balls exceeded the average pressure drop of the Colts balls by 0.45 to 1.02 psi, depending on various possible assumptions regarding the gauges used, and assuming an initial pressure of 12.5 psi for the Patriots balls and 13.0 for the Colts balls.”

- Many different methods were used to determine whether the drop in pressure was due to chance, including physics.
- We computed an observed difference of 0.74 psi, which is in line with the findings of the report.

- In the end, Tom Brady (quarterback for the Patriots at the time) was suspended 4 games and the team was fined $1 million.
- The Deflategate Wikipedia article is extremely thorough; give it a read if you're curious!

To actually establish causation, we need the following two statements to be true:

- The data must come from a randomized controlled trial, to mitigate the effects of confounding factors.

- A permutation test must show a statistically significant difference in the outcome between the treatment and control group.

If both of these conditions are met, then we can conclude that the treatment **causes** the outcome.

In [25]:

```
population = bpd.read_csv('data/2021_salaries.csv')
population
```

Out[25]:

| | Year | EmployerType | EmployerName | DepartmentOrSubdivision | ... | EmployerCounty | SpecialDistrictActivities | IncludesUnfundedLiability | SpecialDistrictType |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 2021 | City | San Diego | Police | ... | San Diego | NaN | False | NaN |
| 1 | 2021 | City | San Diego | Police | ... | San Diego | NaN | False | NaN |
| 2 | 2021 | City | San Diego | Police | ... | San Diego | NaN | False | NaN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 12302 | 2021 | City | San Diego | Fire-Rescue | ... | San Diego | NaN | False | NaN |
| 12303 | 2021 | City | San Diego | Fleet Operations | ... | San Diego | NaN | False | NaN |
| 12304 | 2021 | City | San Diego | Fire-Rescue | ... | San Diego | NaN | False | NaN |

12305 rows × 29 columns

When you load in a dataset that has so many columns that you can't see them all, it's a good idea to look at the column names.

In [26]:

```
population.columns
```

Out[26]:

Index(['Year', 'EmployerType', 'EmployerName', 'DepartmentOrSubdivision', 'Position', 'ElectedOfficial', 'Judicial', 'OtherPositions', 'MinPositionSalary', 'MaxPositionSalary', 'ReportedBaseWage', 'RegularPay', 'OvertimePay', 'LumpSumPay', 'OtherPay', 'TotalWages', 'DefinedBenefitPlanContribution', 'EmployeesRetirementCostCovered', 'DeferredCompensationPlan', 'HealthDentalVision', 'TotalRetirementAndHealthContribution', 'PensionFormula', 'EmployerURL', 'EmployerPopulation', 'LastUpdatedDate', 'EmployerCounty', 'SpecialDistrictActivities', 'IncludesUnfundedLiability', 'SpecialDistrictType'], dtype='object')

We only need the `'TotalWages'` column, so let's `get` just that column.

In [27]:

```
population = population.get(['TotalWages'])
population
```

Out[27]:

| | TotalWages |
|---|---|
| 0 | 359138 |
| 1 | 345336 |
| 2 | 336250 |
| ... | ... |
| 12302 | 9 |
| 12303 | 9 |
| 12304 | 4 |

12305 rows × 1 columns

In [28]:

```
population.plot(kind='hist', bins=np.arange(0, 400000, 10000), density=True, ec='w', figsize=(10, 5),
title='Distribution of Total Wages of San Diego City Employees in 2021');
```

Consider the question: what is the median salary of all San Diego city employees?

What is the right tool to answer this question?

- A. Standard hypothesis testing
- B. Permutation testing
- C. Either of the above
- D. None of the above

- We can use `.median()` to find the median salary of all city employees.
- This is **not** a random quantity.

In [29]:

```
population_median = population.get('TotalWages').median()
population_median
```

Out[29]:

74441.0

- In practice, it is costly and time-consuming to survey **all** 12,000+ employees.
  - More generally, we can't expect to survey all members of the population we care about.
- Instead, we gather salaries for a random sample of, say, 500 people.
- Hopefully, the median of the sample is close to the median of the population.

- The full DataFrame of salaries is the **population**.
- We observe a **sample** of 500 salaries from the population.
- We want to determine the **population median (a parameter)**, but we don't have the whole population, so instead we use the **sample median (a statistic) as an estimate**.
- Hopefully, the sample median is close to the population median.

Let's survey 500 employees at random. To do so, we can use the `.sample` method.

In [30]:

```
np.random.seed(38) # Magic to ensure that we get the same results every time this code is run.
# Take a sample of size 500.
my_sample = population.sample(500)
my_sample
```

Out[30]:

| | TotalWages |
|---|---|
| 599 | 167191 |
| 10595 | 18598 |
| 837 | 157293 |
| ... | ... |
| 2423 | 122785 |
| 7142 | 62808 |
| 5792 | 78093 |

500 rows × 1 columns

We won't reassign `my_sample` at any point in this notebook, so it will always refer to this particular sample.

In [31]:

```
# Compute the sample median.
sample_median = my_sample.get('TotalWages').median()
sample_median
```

Out[31]:

72016.0

- Our estimate depended on a random sample.

- If our sample was different, our estimate may have been different, too.

**How different could our estimate have been?**

- Our confidence in the estimate depends on the answer to this question.

- The sample median is a random number.

- It comes from some distribution, which we don't know.

- How different could our estimate have been, if we drew a different sample?
- "Narrow" distribution $\Rightarrow$ not too different.
- "Wide" distribution $\Rightarrow$ quite different.

**What is the distribution of the sample median?**

- One idea: repeatedly collect random samples of 500 **from the population** and compute the median of each.
  - This is what we did in Lecture 14 to compute an empirical distribution of the sample mean of flight delays.
- The animation below visualizes the process of repeatedly collecting a sample and computing its median.
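If we did have access to the full population, this repeated-sampling idea could be sketched directly with NumPy. Here a synthetic, right-skewed population of salaries (an assumption, generated with an exponential distribution) stands in for the real data; the lecture's `sampling_animation` helper does the same thing visually:

```python
import numpy as np

# Synthetic stand-in for a full population of ~12,305 salaries (assumption:
# in practice we rarely have the whole population, which motivates the bootstrap).
rng = np.random.default_rng(38)
population = rng.exponential(scale=75_000, size=12_305)

# Repeatedly draw a sample of 500 (without replacement) and record its median.
sample_medians = np.array([
    np.median(rng.choice(population, size=500, replace=False))
    for _ in range(1000)
])

# The spread of these medians shows how different our estimate could have been.
spread = sample_medians.max() - sample_medians.min()
```

A narrow spread would mean our single sample median is likely close to the population median; a wide spread would mean it could have been quite different.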

In [32]:

```
%%capture
anim, sample_medians = sampling_animation(population);
```

In [33]:

```
HTML(anim.to_jshtml())
```

Out[33]: