# Lecture 26 – Review

## DSC 10, Winter 2023

### Announcements

• The Final Exam is this Saturday 3/18 from 3-6PM in Galbraith Hall 242.
• See this EdStem announcement for details.
• Assigned seats will be emailed to you by Friday.
• We will check IDs.
• You'll have 2 hours, 50 minutes to work on the exam.
• No questions during the exam.
• The DSC 10 Reference Sheet will be provided. No calculators or other notes.
• Practice with old exam problems at practice.dsc10.com.
• If at least 80% of the class fills out both CAPEs and the End of Quarter Survey, the entire class receives 0.5% extra credit on their overall grade.

### Agenda

• No new material – just review!
• If you're attending lecture, fill in the code in the notebook as we go. We'll post the solutions later today.

## The data: Restaurants 🍟

Our data comes from data.sfgov.org.

We won't look at many of the columns in our DataFrame, so let's just get the ones we're interested in.
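As a sketch of this step (the real dataset isn't shown here, so the rows and column names below are hypothetical stand-ins), selecting a subset of columns looks like:

```python
import pandas as pd

# Hypothetical miniature version of the San Francisco restaurants data.
restaurants = pd.DataFrame({
    'business_name':    ['Taco Spot', 'Bake Shop', 'Noodle House'],
    'inspection_score': [92, 91, 90],
    'risk_category':    ['Low Risk', 'High Risk', 'Low Risk'],
    'business_phone':   ['555-0101', '555-0102', '555-0103'],
})

# Keep only the columns we're interested in.
restaurants = restaurants[['business_name', 'inspection_score', 'risk_category']]
```

(This uses plain pandas bracket indexing; babypandas's `.get` with a list of column names behaves the same way.)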

## At-risk restaurants ⚠️

For each restaurant, we have an inspection score.

In the preview above, we see...

• A restaurant with an inspection score of 92 being classified as 'Low Risk',
• A restaurant with an inspection score of 91 being classified as 'High Risk', and
• A restaurant with an inspection score of 90 being classified as 'Low Risk'.

This means that inspection scores don't directly translate to risk categories. Let's investigate the difference between the inspection scores of low risk and high risk restaurants.

Let's start by visualizing the distribution of inspection scores for low risk and high risk restaurants.

### Concept Check ✅ – Answer at cc.dsc10.com

We want to compare low risk restaurants to high risk restaurants and see if their inspection scores are significantly different. What technique should we use?

A. Standard hypothesis testing

B. Permutation testing

C. Bootstrapping

D. The Central Limit Theorem

Answer: Permutation testing.

Let's keep only the relevant information.

Now, let's try shuffling a single one of the columns above. (Does it matter which one?)

Let's assign this shuffled column back into our original DataFrame. The resulting DataFrame is called original_and_shuffled.
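The shuffle-and-assign step can be sketched like this (with a hypothetical miniature dataset and a hypothetical column name `'shuffled_risk'`, since the real code cell isn't shown):

```python
import numpy as np
import pandas as pd

np.random.seed(23)  # for reproducibility in this sketch

# Hypothetical miniature dataset with just the relevant columns.
restaurants = pd.DataFrame({
    'inspection_score': [92, 91, 90, 84, 88, 79],
    'risk_category':    ['Low Risk', 'High Risk', 'Low Risk',
                         'High Risk', 'Low Risk', 'High Risk'],
})

# Shuffle one column. It doesn't matter which one we shuffle --
# either way, the pairing between scores and risk labels is broken.
shuffled_labels = np.random.permutation(restaurants['risk_category'])

# Assign the shuffled labels back into the DataFrame.
original_and_shuffled = restaurants.assign(shuffled_risk=shuffled_labels)
```

Note that `np.random.permutation` returns a shuffled copy, so the original `'risk_category'` column is untouched.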

Let's now visualize the distribution of inspection scores for low risk and high risk restaurants, in both our original dataset and after shuffling the labels.

### Concept Check ✅ – Answer at cc.dsc10.com

It looks like the two groups in the first histogram are substantially more different than the two groups in the second histogram.

What test statistic(s) can we use to quantify the difference between the two groups displayed in a given histogram?

A. Total variation distance
B. Difference in group means
C. Either of the above

Answer: Difference in group means. TVD helps compare two categorical distributions, but we're dealing with two numerical distributions.

Let's compute the difference in mean inspection scores for the low risk group and high risk group (low minus high).

First, for our observed data:
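A sketch of this computation, using a hypothetical miniature dataset (the real scores aren't shown here):

```python
import pandas as pd

# Hypothetical observed data.
restaurants = pd.DataFrame({
    'inspection_score': [92, 88, 90, 84, 79, 81],
    'risk_category':    ['Low Risk', 'Low Risk', 'Low Risk',
                         'High Risk', 'High Risk', 'High Risk'],
})

# Mean inspection score within each risk category.
group_means = restaurants.groupby('risk_category')['inspection_score'].mean()

# Difference in group means: low minus high.
observed_difference = group_means.loc['Low Risk'] - group_means.loc['High Risk']
```

The same computation run on the shuffled labels gives one simulated value of the test statistic.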

Then, for our shuffled data:

We're going to need to shuffle the 'risk_category' column many, many times, and compute this difference in group means each time.

Let's put some of our code in a function to make it easier to repeat.

Each time we call this function, it shuffles the 'risk_category' column and returns the difference in group means (again, by taking low minus high).

We need to simulate this difference in group means many, many times. Let's call our function many, many times and keep track of its result in an array.
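The function-plus-loop pattern can be sketched as follows; the dataset and the function name `one_simulated_difference` are hypothetical stand-ins for what the real notebook cells define:

```python
import numpy as np
import pandas as pd

np.random.seed(23)  # for reproducibility in this sketch

# Hypothetical miniature dataset.
restaurants = pd.DataFrame({
    'inspection_score': [92, 88, 90, 84, 79, 81],
    'risk_category':    ['Low Risk', 'Low Risk', 'Low Risk',
                         'High Risk', 'High Risk', 'High Risk'],
})

def one_simulated_difference():
    # Shuffle the labels, breaking the pairing between scores and labels.
    shuffled = restaurants.assign(
        risk_category=np.random.permutation(restaurants['risk_category'])
    )
    means = shuffled.groupby('risk_category')['inspection_score'].mean()
    # Difference in group means: low minus high.
    return means.loc['Low Risk'] - means.loc['High Risk']

# Call the function many times, collecting the results in an array.
simulated_differences = np.array([])
for i in np.arange(1000):
    simulated_differences = np.append(simulated_differences,
                                      one_simulated_difference())
```

Under the null hypothesis, the labels carry no information, so these simulated differences should be centered near 0.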

Now that we've done that, let's visualize the distribution of the simulated test statistics, and also see where the observed statistic lies:

What's the p-value? Well, it depends on what our alternative hypothesis is. Here, our alternative hypothesis is that low risk restaurants have higher inspection scores on average than high risk restaurants.

Since our test statistic was

$$\text{low risk mean} - \text{high risk mean}$$

larger values of the test statistic favor the alternative.
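One way to compute such a p-value, sketched here with hypothetical simulated values and a hypothetical observed statistic (the real arrays come from the simulation above):

```python
import numpy as np

np.random.seed(23)  # for reproducibility in this sketch

# Hypothetical simulated statistics under the null (centered at 0),
# and a hypothetical observed statistic.
simulated_differences = np.random.normal(0, 2, size=10_000)
observed_difference = 8.67

# Larger values of low minus high favor the alternative, so the p-value
# is the proportion of simulated statistics at least as large as the
# observed one.
p_value = (np.count_nonzero(simulated_differences >= observed_difference)
           / len(simulated_differences))
```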

The p-value is lower than any conventional cutoff we'd consider (e.g. 0.05 or 0.01), so we'd reject the null hypothesis that the inspection scores of low risk and high risk restaurants come from the same distribution.

## Bakeries 🧁

We'll load in a version of the restaurants dataset that has many more rows, some of which contain null values.

Let's look at just the restaurants with 'Bake' in the name that we know the inspection score for.

.str.contains can help us here.

Some bakeries may have 'bake' in their name, rather than 'Bake'. To account for this, we can convert the entire Series to lowercase using .str.lower(), and then use .str.contains('bake').
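A small sketch of that chain of string methods, using hypothetical names:

```python
import pandas as pd

# Hypothetical restaurant names with different capitalizations of "bake".
names = pd.Series(['Bake Shop', 'Sunrise bakery', 'Noodle House', 'BAKED Goods'])

# Lowercase every name first, so 'Bake', 'bake', and 'BAKED' all match.
is_bakery = names.str.lower().str.contains('bake')
```

The resulting Boolean Series can then be used to query the DataFrame for bakeries.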

We can plot the population distribution, i.e. the distribution of inspection scores for all bakeries in San Francisco.

For reference, the mean and standard deviation of the population distribution are calculated below.

In this case we happen to have the inspection scores for all members of the population, but in reality we won't. So let's instead take a random sample of 200 bakeries from the population.

Aside: Does the .sample method sample with or without replacement by default?
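Answer to the aside: `.sample` samples without replacement by default (`replace=False`). A quick sketch with a hypothetical population:

```python
import numpy as np
import pandas as pd

# Hypothetical population of 100 bakery inspection scores.
population = pd.DataFrame({
    'inspection_score': np.random.choice(np.arange(60, 101), size=100)
})

# By default, .sample draws WITHOUT replacement (replace=False),
# so no bakery can appear twice in our sample.
sample = population.sample(20)
```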

We can plot the sample distribution:

Note that since we took a large, random sample of the population, we expect that our sample looks similar to the population and has a similar mean and SD.

Indeed, the sample mean is quite close to the population mean, and the sample standard deviation is quite close to the population standard deviation.

Let's suppose we want to estimate the population mean (that is, the mean inspection score of all bakeries in SF).

One estimate of the population mean is the mean of our sample.

However, our sample was random and could have been different, meaning our sample mean could also have been different.

Question: What's a reasonable range of possible values for the sample mean? What is the distribution of the sample mean?

### The Central Limit Theorem

The Central Limit Theorem (CLT) says that the probability distribution of the sum or mean of a large random sample drawn with replacement will be roughly normal, regardless of the distribution of the population from which the sample is drawn.

To see an empirical distribution of the sample mean, let's take a large number of samples directly from the population and compute the mean of each one.

Remember, in real life we wouldn't be able to do this, since we wouldn't have access to the population.

Unsurprisingly, the distribution of the sample mean is bell-shaped. The CLT told us that!

The CLT also tells us that

$$\text{SD of Distribution of Possible Sample Means} = \frac{\text{Population SD}}{\sqrt{\text{sample size}}}$$

Let's try this out.

Pretty close! Remember that sample_means is an array of simulated sample means; the more samples we simulate, the closer that np.std(sample_means) will get to the SD described by the CLT.
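The empirical check described above can be sketched end to end like this (with a hypothetical population of scores, since the real one isn't shown here):

```python
import numpy as np

np.random.seed(23)  # for reproducibility in this sketch

# Hypothetical population of inspection scores.
population = np.random.choice(np.arange(60, 101), size=5000)
population_sd = np.std(population)

# Simulate many sample means, each from a sample of size 200
# drawn with replacement (as the CLT assumes).
sample_means = np.array([])
for i in np.arange(2000):
    sample = np.random.choice(population, size=200, replace=True)
    sample_means = np.append(sample_means, sample.mean())

# The CLT's prediction for the SD of the distribution of sample means.
clt_sd = population_sd / np.sqrt(200)
```

With enough simulated samples, `np.std(sample_means)` lands very close to `clt_sd`.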

Note that in practice, we won't have the SD of the population, since we'll usually just have a single sample. In such cases, we can use the SD of the sample as an estimate of the SD of the population:

Using the CLT, we have that the distribution of the sample mean:

• is roughly normal,
• is centered at the population mean (for which the sample mean is an estimate), and
• has a standard deviation of $\frac{\text{Population SD}}{\sqrt{\text{sample size}}}$ (which can be estimated using $\frac{\text{Sample SD}}{\sqrt{\text{sample size}}}$).

Using this information, we can build a confidence interval for where we think the population mean might be. A 95% confidence interval for the population mean is given by

$$\left[ \text{sample mean} - 2\cdot \frac{\text{sample SD}}{\sqrt{\text{sample size}}}, \ \text{sample mean} + 2\cdot \frac{\text{sample SD}}{\sqrt{\text{sample size}}} \right]$$
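This CLT-based interval can be computed directly from a single sample; the sample below is a hypothetical stand-in for the 200 bakeries:

```python
import numpy as np

np.random.seed(23)  # for reproducibility in this sketch

# Hypothetical sample of 200 bakery inspection scores.
sample = np.random.choice(np.arange(70, 101), size=200)

sample_mean = sample.mean()
sample_sd = np.std(sample)

# Estimated SD of the distribution of possible sample means.
sd_of_sample_mean = sample_sd / np.sqrt(len(sample))

# 95% confidence interval: sample mean plus/minus 2 of those SDs.
left = sample_mean - 2 * sd_of_sample_mean
right = sample_mean + 2 * sd_of_sample_mean
```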

### Concept Check ✅ – Answer at cc.dsc10.com

Using a single sample of 200 bakeries, how can we estimate the median inspection score of all San Francisco bakeries that have an inspection score? What technique should we use?

A. Standard hypothesis testing

B. Permutation testing

C. Bootstrapping

D. The Central Limit Theorem

Answer: Bootstrapping. The CLT only applies to sample means (and sums), not to any other statistics.

There is no CLT for sample medians, so instead we'll have to resort to bootstrapping to estimate the distribution of the sample median.

Recall, bootstrapping is the act of sampling from the original sample, with replacement. This is also called resampling.

Let's resample repeatedly.

Note that this distribution is not at all normal.

To compute a 95% confidence interval, we take the middle 95% of the bootstrapped medians.
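The whole bootstrap procedure can be sketched as follows (again with a hypothetical sample of 200 scores standing in for the real one):

```python
import numpy as np

np.random.seed(23)  # for reproducibility in this sketch

# Hypothetical sample of 200 bakery inspection scores.
sample = np.random.choice(np.arange(70, 101), size=200)

# Resample from the original sample, WITH replacement, many times,
# recording the median of each resample.
boot_medians = np.array([])
for i in np.arange(2000):
    resample = np.random.choice(sample, size=len(sample), replace=True)
    boot_medians = np.append(boot_medians, np.median(resample))

# 95% confidence interval: the middle 95% of the bootstrapped medians.
left = np.percentile(boot_medians, 2.5)
right = np.percentile(boot_medians, 97.5)
```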

### Discussion Question

Which of the following interpretations of this confidence interval are valid?

1. 95% of SF bakeries have an inspection score between 85 and 88.
2. 95% of the resamples have a median inspection score between 85 and 88.
3. There is a 95% chance that our sample has a median inspection score between 85 and 88.
4. There is a 95% chance that the median inspection score of all SF bakeries is between 85 and 88.
5. If we had taken 100 samples from the same population, about 95 of these samples would have a median inspection score between 85 and 88.
6. If we had taken 100 samples from the same population, about 95 of the confidence intervals created would contain the median inspection score of all SF bakeries.
Answer: Option 2 and Option 6.

## Next time

• One more review example.
• A high-level overview of the quarter.
• Some parting thoughts.