Sampling Distribution of Difference Between Means
As you increase the sample size you use each time you take the average, two things are happening: the sampling distribution is becoming more normal, and its standard deviation is getting smaller. So the question might arise, well, is there a formula? So if I know the standard deviation-- so this is my standard deviation of just my original probability density function. This is the mean of my original probability density function. So if I know the standard deviation, and I know n-- n is going to change depending on how many samples I'm taking every time I do a sample mean.
If I know my standard deviation, or maybe if I know my variance. The variance is just the standard deviation squared. If you don't remember that, you might want to review those videos. But if I know the variance of my original distribution, and if I know what my n is, how many samples I'm going to take every time before I average them in order to plot one thing in my sampling distribution of my sample mean, is there a way to predict what the mean of these distributions are?
The standard deviation of these distributions.
Standard Error of the Mean
And to make it so you don't get confused between that and that, let me say the variance. If you know the variance, you can figure out the standard deviation because one is just the square root of the other. So this is the variance of our original distribution. Now, to show that this is the variance of our sampling distribution of our sample mean, we'll write it right here.
This is the variance of our sample mean. Remember, our true mean is this, that the Greek letter mu is our true mean. This is equal to the mean.
While an x with a line over it means sample mean. So here, what we're saying is this is the variance of our sample means. Now, this is going to be a true distribution. This isn't an estimate. If we magically knew the distribution, there's some true variance here. And of course, the mean-- so this has a mean. This, right here-- if we can just get our notation right-- this is the mean of the sampling distribution of the sample mean.
So this is the mean of our means. It just happens to be the same thing. This is the mean of our sample means. It's going to be the same thing as that, especially if we do the trial over and over again. But anyway, the point of this video, is there any way to figure out this variance given the variance of the original distribution and your n?
And it turns out, there is. And I'm not going to do a proof here. I really want to give you the intuition of it. And I think you already have the sense that with every trial you take, if you take a large n, you're much more likely, when you average those samples out, to get close to the true mean than if you took an n of 2 or an n of 5.
You're just very unlikely to be far away from the true mean if you take a lot of samples each trial as opposed to taking five. So I think you know that, in some way, it should be inversely proportional to n.
The larger your n, the smaller the standard deviation. And it actually turns out it's about as simple as possible. It's one of those magical things about mathematics.
And I'll prove it to you one day. I want to give you a working knowledge first. With statistics, I'm always struggling whether I should be formal in giving you rigorous proofs, but I've come to the conclusion that it's more important to get the working knowledge first in statistics, and then, later, once you've gotten all of that down, we can get into the real deep math of it and prove it to you.
But I think experimental proofs are all you need for right now, using those simulations to show that they're really true. So it turns out that the variance of your sampling distribution of your sample mean is equal to the variance of your original distribution-- that guy right there-- divided by n.
That's all it is. So let's say this up here has a variance of 20. I'm just making that number up. And then let's say your n is 20. Then the variance of your sampling distribution of your sample mean for an n of 20-- well, you're just going to take the variance up here, 20, divided by your n, 20. So here, your variance is going to be 20 divided by 20, which is equal to 1.
This is the variance of your original probability distribution. And this is your n.
What's your standard deviation going to be? What's going to be the square root of that? Standard deviation is going to be the square root of 1.
Well, that's also going to be 1. So we could also write this.
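You don't have to take this on faith; the claim that the variance of the sampling distribution equals the source variance divided by n can be checked numerically. Here's a minimal sketch in Python, using the worked numbers from above (variance 20, n of 20); the choice of an exponential source distribution is my own, picked because it is deliberately non-normal:

```python
import random

# Source distribution: an exponential scaled to have variance 20.
# Expovariate(1) has variance 1, so multiplying by sqrt(20) gives variance 20.
scale = 20 ** 0.5
n = 20            # samples averaged per trial
trials = 200_000  # number of sample means we plot

means = []
for _ in range(trials):
    sample = [random.expovariate(1.0) * scale for _ in range(n)]
    means.append(sum(sample) / n)

grand_mean = sum(means) / trials
var_of_means = sum((m - grand_mean) ** 2 for m in means) / trials

print(var_of_means)  # should come out close to 20 / 20 = 1
```

Run it a few times: the empirical variance of the sample means hovers right around 1, exactly as the formula predicts, even though the source distribution is badly skewed.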
We could take the square root of both sides of this and say, the standard deviation of the sampling distribution of the sample mean is often called the standard deviation of the mean, and it's also called-- I'm going to write this down-- the standard error of the mean. All of these things I just mentioned, these all just mean the standard deviation of the sampling distribution of the sample mean.
That's why this is confusing. Because you use the word "mean" and "sample" over and over again. And if it confuses you, let me know.
I'll do another video or pause and repeat or whatever. But if we just take the square root of both sides, the standard error of the mean, or the standard deviation of the sampling distribution of the sample mean, is equal to the standard deviation of your original function, of your original probability density function, which could be very non-normal, divided by the square root of n.
I just took the square root of both sides of this equation. Personally, I like to remember this, that the variance is just inversely proportional to n, and then I like to go back to this, because this is very simple in my head. You just take the variance divided by n. Oh, and if I want the standard deviation, I just take the square roots of both sides, and I get this formula.
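That square-root relationship is short enough to write as a one-line helper. This is a sketch (the function name `sem` is my own), plugging in the two cases worked in this video:

```python
import math

# Standard error of the mean: the standard deviation of the sampling
# distribution of the sample mean, from the source variance and sample size.
def sem(variance, n):
    return math.sqrt(variance / n)

print(sem(20, 20))    # 1.0
print(sem(20, 100))   # 0.447..., i.e. 1 / sqrt(5)
```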
So here, when n is 20, the standard deviation of the sampling distribution of the sample mean is going to be 1. Here, when n is 100, our variance of the sampling distribution of the sample mean-- or our variance of the sample mean, we could say-- is going to be equal to 20, this guy's variance, divided by n.
So it equals 20 over 100, which is one fifth. Now, this guy's standard deviation, or the standard deviation of the sampling distribution of the sample mean, or the standard error of the mean, is going to be the square root of that. So 1 over the square root of 5. And so this guy's standard deviation will be a little bit under one half, while this guy had a standard deviation of 1. So you see it's definitely thinner. Now, I know what you're saying.
Well, Sal, you just gave a formula. I don't necessarily believe you. Well, let's see if we can prove it to ourselves using the simulation. So just for fun, I'll just mess with this distribution a little bit. So that's my new distribution. And let me take two n's it's easy to take the square root of, because we're looking at standard deviations. So let's say we take an n of 16 and an n of 25. And let's do 10,000 trials. So in this case, in every one of the trials, we're going to take 16 samples from here, average them, plot it here, and then do a frequency plot.
Here, we're going to do 25 at a time and then average them. I'll do it once animated just to remember. So I'm taking 16 samples, plot it there. Or I take 25 samples, as described by this probability density function, and plot it down here. Now, if I do that 10,000 times, what do I get?
What do I get?
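You can rerun the same experiment yourself without the simulation applet. A minimal sketch in Python (the exponential source distribution is my stand-in for the distribution drawn in the video; it has standard deviation 1, so theory predicts 1/4 for n = 16 and 1/5 for n = 25):

```python
import random
import statistics

# Draw n samples from a non-normal distribution, average them, repeat
# 10,000 times, and measure the spread of the resulting sample means.
def sampling_sd(n, trials=10_000):
    means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

sd16 = sampling_sd(16)
sd25 = sampling_sd(25)
print(sd16, sd25)  # the n = 25 spread is smaller, near 0.2 vs 0.25
```

The larger n visibly squeezes the sampling distribution together, matching sigma divided by the square root of n.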
So here, just visually, you can tell that when n was larger, the standard deviation here is smaller. This is more squeezed together. But actually, let's write this stuff down.

As you might expect, the mean of the sampling distribution of the difference between means is:

μ_M1-M2 = μ_1 - μ_2

For example, if the mean test score of one age group in a population is 34 and the mean of a second group is lower, the mean of the sampling distribution of the difference between sample means is simply the difference between the two population means. From the variance sum law, we know that (for independent samples):

σ²_M1-M2 = σ²_M1 + σ²_M2

Recall the formula for the variance of the sampling distribution of the mean:

σ²_M = σ²/N

Since we have two populations and two sample sizes, we need to distinguish between the two variances and sample sizes.
We do this by using the subscripts 1 and 2. Using this convention, we can write the formula for the variance of the sampling distribution of the difference between means as:

σ²_M1-M2 = σ²_1/n_1 + σ²_2/n_2

Since the standard error of a sampling distribution is the standard deviation of the sampling distribution, the standard error of the difference between means is:

σ_M1-M2 = sqrt(σ²_1/n_1 + σ²_2/n_2)

The subscripts M1 - M2 indicate that it is the standard deviation of the sampling distribution of M1 - M2.
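That formula translates directly into code. A minimal sketch (the function name `se_difference` and the sample numbers are my own, chosen only for illustration):

```python
import math

# Standard error of the difference between two independent sample means:
# sqrt(var1/n1 + var2/n2), per the formula above.
def se_difference(var1, n1, var2, n2):
    return math.sqrt(var1 / n1 + var2 / n2)

# Two made-up populations: variances 4 and 9, sample sizes 16 and 25.
print(se_difference(4, 16, 9, 25))  # sqrt(4/16 + 9/25) = sqrt(0.61) ≈ 0.781
```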
Now let's look at an application of this formula. Assume there are two species of green beings on Mars. The mean height of Species 1 is 32 while the mean height of Species 2 is 22. The variances of the two species are 60 and 70, respectively, and the heights of both species are normally distributed.
You randomly sample 10 members of Species 1 and 14 members of Species 2. What is the probability that the mean of the 10 members of Species 1 will exceed the mean of the 14 members of Species 2 by 5 or more? Without doing any calculations, you probably know that the probability is pretty high, since the difference in population means is 10. But what exactly is the probability? First, let's determine the sampling distribution of the difference between means.
Using the formulas above, the mean is:

μ_M1-M2 = 32 - 22 = 10

The standard error is:

σ_M1-M2 = sqrt(60/10 + 70/14) = sqrt(11) ≈ 3.317

The sampling distribution is shown in Figure 1. Notice that it is normally distributed with a mean of 10 and a standard deviation of 3.317. The area above 5 is shaded blue.
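The whole calculation can be reproduced with the standard library. This sketch uses the numbers from the example above; the `normal_cdf` helper is my own, built from `math.erf`:

```python
import math

# Cumulative distribution function of a normal distribution,
# expressed via the error function.
def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mean_diff = 10                      # mu1 - mu2, from the example
se = math.sqrt(60 / 10 + 70 / 14)   # sqrt(6 + 5) = sqrt(11) ≈ 3.317

# P(M1 - M2 >= 5): the area above 5 under a normal with mean 10, SD 3.317.
p = 1 - normal_cdf(5, mean_diff, se)
print(round(p, 3))
```

The probability comes out a little above 0.93, confirming the intuition that a difference of 5 or more is very likely when the population means already differ by 10.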