# Relationship between z-scores and z-statistics

### How to calculate Z-scores (formula review) (article) | Khan Academy

The basic z-score formula for a single observation is z = (x – μ) / σ. A z-score tells you how many standard deviations a value lies from the mean: a positive z-score means the value is above the mean, a negative one means it is below. For example, a grade two standard deviations below the mean has a z-score of −2. The key idea is that statistics like x̄ and s² are random variables: if we take different samples, we will get different values, and z-scores put those values on a common scale. P-values, by contrast, are probabilities.
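The one-sample formula above can be sketched directly. This is a minimal illustration; the function name `z_score` and the example numbers (mean 80, standard deviation 5) are assumptions, not from the original.

```python
def z_score(x, mu, sigma):
    """Standard score: how many population standard deviations x lies from the mean."""
    return (x - mu) / sigma

# A grade two standard deviations below the mean (hypothetical mu=80, sigma=5):
print(z_score(70, 80, 5))  # -2.0
```

A score of 70 here maps to z = −2, matching the "two standard deviations below the mean" example in the text.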

Now, we usually don't know what this is either. And the central limit theorem told us that, assuming we have a sufficient sample size, this thing right here, the standard deviation of the sampling distribution of the sample mean, is going to be the same as the standard deviation of our population divided by the square root of our sample size.

So this thing right over here can be rewritten as our sample mean minus the mean of our sampling distribution of the sample mean, divided by our population standard deviation over the square root of our sample size.

And this is essentially our best sense of how many standard deviations away from the actual mean we are.

And this thing right here, as we've learned before, is a z-score, or, when it's derived from a sample statistic like the sample mean, we call it a z-statistic. And then we could look it up in a z-table or in a normal distribution table to say what's the probability of getting a value of this z or greater.

So that would give us that probability. So what's the probability of getting that extreme of a result?
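The table lookup described above can also be done numerically. A minimal sketch using only the standard library: for a standard normal, the upper-tail probability P(Z ≥ z) equals 0.5 · erfc(z / √2). The function name `upper_tail_p` is an assumption for illustration.

```python
import math

def upper_tail_p(z):
    """P(Z >= z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

print(round(upper_tail_p(1.96), 3))  # ≈ 0.025, the familiar one-tail cutoff
```

This replaces the z-table: instead of looking up 1.96, you evaluate the tail probability directly.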

## Two-Tailed z-test Hypothesis Test By Hand

Now, normally, when we've done this in the last few videos, we also do not know what the standard deviation of the population is. So in order to approximate it, we say that the z-score, or the z-statistic, is approximately going to be the same numerator over an estimate of this denominator: we estimate the population standard deviation using our sample standard deviation.

And this is OK if our sample size is greater than 30. Or another way to think about it: this will be normally distributed if our sample size is greater than 30. Even this approximation will be approximately normally distributed. Now, if your sample size is less than 30, especially if it's a good bit less than 30, all of a sudden this expression will not be normally distributed.

So let me re-write the expression over here. Sample mean minus the mean of your sampling distribution of the sample mean divided by your sample standard deviation over the square root of your sample size.
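The expression just described can be sketched as follows, using only the standard library. The function name `z_statistic` and the sample data are assumptions for illustration; `statistics.stdev` computes the sample standard deviation (with the n − 1 denominator).

```python
import math
import statistics

def z_statistic(sample, mu0):
    """(x̄ − μ₀) / (s / √n): sample mean minus hypothesized mean,
    over the sample standard deviation divided by the root of the sample size."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical sample with mean 5, tested against mu0 = 4:
print(z_statistic([2, 4, 4, 4, 5, 5, 7, 9], 4))
```

With only n = 8 observations this quantity would, per the discussion below, really follow a t-distribution rather than a normal; the arithmetic is the same either way.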

### Z-test - Wikipedia

We just said: if this thing is well over 30, or at least 30, then this value right here, this statistic, is going to be normally distributed. If it's not, if the sample size is small, then this is going to have a t-distribution. And then you do the exact same thing you did here, but now you assume that the bell is no longer a normal distribution; in this example it was normal.

All z-statistics are normally distributed. Over here you would use a t-distribution, and it will actually be a standardized t-distribution, because we subtracted out the mean.

In a z-test, the population standard deviation is assumed to be known, despite the fact that only sample data is available, and so the normal test can be applied. The assumptions are: all sample observations are independent; the sample size should be more than 30; and the distribution of Z is normal, with mean zero and variance 1.

The t-test can be understood as a statistical test which is used to compare and analyse whether the means of two populations differ from one another when the standard deviation is not known. By contrast, the z-test is a parametric test, applied when the standard deviation is known, to determine whether the means of two datasets differ from each other.

On the contrary, z-test relies on the assumption that the distribution of sample means is normal.

## What is a Z score What is a p-value

However, they differ in the sense that a t-distribution has less probability mass in the centre and more in the tails. One of the important conditions for adopting the t-test is that the population variance is unknown.

Conversely, the population variance should be known, or assumed to be known, in the case of a z-test. The z-test is used when the sample size is large, i.e., more than 30 units. Conclusion: by and large, the t-test and z-test are similar tests, but the conditions for their application are different, meaning that the t-test is appropriate when the size of the sample is not more than 30 units.

However, if it is more than 30 units, the z-test should be performed. Similarly, there are other conditions which make it clear which test is to be performed in a given situation.
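The decision rule summarized above can be sketched as a small helper. This is only the rule of thumb stated in the text, not a complete decision procedure, and the function name `choose_test` is an assumption.

```python
def choose_test(n, sigma_known):
    """Rule of thumb from the text: use a z-test when the population standard
    deviation is known or the sample is large (n > 30); otherwise use a t-test."""
    if sigma_known or n > 30:
        return "z-test"
    return "t-test"

print(choose_test(12, sigma_known=False))  # t-test: small sample, sigma unknown
print(choose_test(50, sigma_known=False))  # z-test: large sample
```

In practice many texts simply use the t-test whenever σ is estimated, since the t-distribution converges to the normal as n grows; the 30-unit cutoff is a convention.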