Calculating adequate sample sizes

Science

sh76 (Civis Americanus Sum, New York) · 11 Oct 11

I figure that mathematics is a science, so I hope I have the right forum.

I need to determine what level of sample size is necessary to generate statistically reliable data for exam results. In other words, if I give an exam to X random college students, I want to be able to assert that based on those results, I can be confident that Y% of random students will generate the same average score, within Z points either way.

I know very little about statistics in general (never even had a college course in it), so I hope:

1) That I am clear enough as to what I'm looking for; and

2) Someone can give me a layman's tip on how to make that determination.

Thanks!

googlefudge · 11 Oct 11

Originally posted by sh76
I figure that mathematics is a science, so I hope I have the right forum.

I need to determine what level of sample size is necessary to generate statistically reliable data for exam results. In other words, if I give an exam to X random college students, I want to be able to assert that based on those results, I can be confident that Y% of random students wi ...[text shortened]... or; and

2) Someone can give me a layman's tip on how to make that determination.

Thanks!
Depending on the significance you want, X may be quite large. I doubt you are going to get particularly reliable data with a sub-hundred sample, and ideally you would want a thousand plus, depending on what kind of exam you are conducting and what you want to find out from it.

It might be possible to get a significant result with fewer subjects depending on what it is you are looking for.

More information on what it is you are trying to determine would be helpful (as well as interesting). So: what kind of exam is it? Are you trying to determine how many people will get certain grades (A, B, C)? What questions are you trying to answer? Etc.


In statistics, the question you ask and how you ask it can sometimes be more important than the answer.

Also, I would endeavour to learn more about statistics; it's more useful than most people realise.

sh76 · 11 Oct 11

Originally posted by googlefudge
Depending on the significance you want, X may be quite large. I doubt you are going to get particularly reliable data with a sub-hundred sample, and ideally you would want a thousand plus, depending on what kind of exam you are conducting and what you want to find out from it.

It might be possible to get a significant result with few ...[text shortened]... so, I would endeavour to learn more about statistics; it's more useful than most people realise.
You're right about statistics. I'll put it on my list.

I actually know what X is; it's 845 (exam takers).

I need to show that the exams in question are valid. Suffice it to say that the results of the 845 takers have been within the acceptable average range for a college exam. The subjects are specialty subjects, but that's not really important. The course content has already been approved; we just need to demonstrate an acceptable level of difficulty in the exams by pointing to mean and grade-distribution data. I can do that, but I also want to show that our sample size makes the data statistically significant.

For example, if I can say: "Based on these results, we can project with 95% confidence that random exam takers will generate a mean within 3.5 points of the mean of our exam." or something to that effect, that would be great. I just don't quite know how to determine those other numbers.

I understand that curves and distributions are also important, but the standard deviation of our exam takers is too high for my comfort level, so I'm going to ignore that aspect unless they bring it up. If they ask for that info, I'll cross that bridge when I come to it.

Thanks for the response and your help! 🙂

amolv06 · 11 Oct 11

I think what you're looking for is the standard deviation of the mean. If I recall correctly, that's simply the standard deviation divided by the root of the sample size. Assuming a standard deviation of 15 points or so with a 100-student class, you can be reasonably sure that the average of your next class will be within 5 points of the average of the previous class, all other things being equal. At least from what I recall.
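For a quick illustration of that rule of thumb, here is a minimal Python sketch; the 15-point SD and 100-student class are just the hypothetical numbers from this post, not real data.

```python
import math

# Hypothetical numbers from the post above: SD of 15 points, class of 100 students.
sd = 15.0
n = 100

# Standard deviation of the mean (standard error): SD / sqrt(n).
sem = sd / math.sqrt(n)
print(f"standard error of the mean: {sem:.2f} points")  # 1.50

# Rough 95% band for the next class's average: +/- 1.96 * SEM.
print(f"95% half-width: {1.96 * sem:.2f} points")  # 2.94
```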

amolv06 · 11 Oct 11

In reply to your last post, which I saw after I posted, what was your standard deviation?

sh76 · 11 Oct 11

Originally posted by amolv06
In reply to your last post, which I saw after I posted, what was your standard deviation?
According to Excel, the standard deviation of all 845 scores is 12.08%

From what I recall from high school math, I think that means roughly 68% of takers will get between a 73 and a 97, assuming a mean of 85. Obviously, I don't know much about this, but that SD seems pretty high to me (and it "felt" pretty high as I was reading the scores). But hey, if that's not so high, then great.
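That recollection checks out; a quick Python check of the one-SD rule, using the mean and SD quoted above:

```python
from scipy import stats

# Fraction of a normal distribution within one SD of the mean: ~68.27%.
print(stats.norm.cdf(1) - stats.norm.cdf(-1))  # 0.6826894921370859

# With a mean of 85 and an SD of 12.08, about 68% of scores should fall
# between 85 - 12.08 and 85 + 12.08, i.e. roughly 73 to 97.
print(85 - 12.08, 85 + 12.08)  # 72.92 97.08
```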

amolv06 · 11 Oct 11

So you should be 95% confident that the average of a similar sample will be within one percentage point of the average of your current sample, all else being equal. At least according to my calculations. A standard deviation of 12 seems pretty good to me regarding exam grades.

Edit 1: Your standard deviation of the mean was approximately .4.

Edit 2: This is pretty good, I think: http://en.wikipedia.org/wiki/Standard_error_%28statistics%29#Standard_error_of_the_mean

WoodPush (Pusher of wood, Los Gatos, CA) · 11 Oct 11

Originally posted by sh76
For example, if I can say: "Based on these results, we can project with 95% confidence that random exam takers will generate a mean within 3.5 points of the mean of our exam." or something to that effect, that would be great. I just don't quite know how to determine those other numbers.
OK, I think I understand what you want. You want to calculate the confidence interval that your mean is correct.

When you calculate a mean from a sample, like:

sum(all test scores) / N

you aren't really calculating the "true mean" for the test - you are estimating what it is.

If you give the same test to a different batch of students, your calculation of the mean will change, but the "true mean" -- the mean you'd calculate if you gave the test to an infinite number of students -- won't change.

What you want (I think) is to calculate the likelihood that your calculated mean for a given sample is accurate, correct?

The normal way to do this is to calculate what's called a confidence interval.

It's rather involved to explain how to calculate it, and I'm sure there are good resources on the web for this. Wikipedia has a good discussion with what might be a pretty good practical example:

http://en.wikipedia.org/wiki/Confidence_interval

But it sounds like you want to do this in Excel anyway. I think the CONFIDENCE function does what you want.
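For anyone checking outside Excel, here is a minimal Python sketch of the same calculation. The SD and n are the figures quoted earlier in the thread; the mean of 85 is a placeholder, not sh76's actual figure. Excel's CONFIDENCE(alpha, sd, n) returns the same half-width under a normal approximation.

```python
import math
from scipy import stats

# SD and n from earlier in the thread; the mean is a placeholder.
mean, sd, n = 85.0, 12.08, 845

# Half-width of the 95% CI for the mean,
# matching Excel's CONFIDENCE(0.05, sd, n).
z = stats.norm.ppf(0.975)  # ~1.96
half_width = z * sd / math.sqrt(n)
print(f"95% CI: {mean - half_width:.3f} to {mean + half_width:.3f}")
```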

WoodPush · 11 Oct 11

Originally posted by amolv06
I think what you're looking for is standard deviation of the mean. If I recall correctly, that's simply the standard deviation divided by the root of the sample size. Assuming a standard deviation of 15 points or so with a 100 student class, you can be reasonably sure that the average of your next class will be within 5 points of the average of the previous class, all other things being equal. At least from what I recall.
Maybe I'm reading his post wrong, but I don't think that's really what he wants.

I think what he wants is something to tell him the accuracy of his mean calculation, not the "spread" of the scores around the mean...

The standard deviation just tells how much individual students' scores will vary.

The confidence interval will tell how much his mean is likely to vary across different groups of students taking the same test.

amolv06 · 11 Oct 11

But the standard deviation of the mean is a measure of how much your mean will vary, not the variance in the scores of the individual students.

sh76 · 11 Oct 11

Originally posted by WoodPush
OK, I think I understand what you want. You want to calculate the confidence interval that your mean is correct.

When you calculate a mean from a sample, like:

sum(all test scores) / N

you aren't really calculating the "true mean" for the test - you are estimating what it is.

If you give the same test to a different batch of students, you ...[text shortened]... like you want to do this in Excel anyway. I think the CONFIDENCE function does what you want.
Thank you; this is very helpful. It's not exactly what I had in mind (I was looking more to predict the likely range of future examinees' scores), but I definitely think I can use it.

Earlier, I'd found this site that calculates confidence intervals:

http://www.surveysystem.com/sscalc.htm

But it seemed to me that this was only for binary results (e.g., "Will you vote for ABC for President?" ) where you were looking to show the odds that a result will land within the margin of error. Is it equally valid for exam results? In other words, if the CI at 95% is 4 points, does that mean there's a 95% chance that the true mean is within 4 points of my calculated mean?
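For what it's worth, survey calculators like that one use the standard error of a proportion, while exam scores call for the standard error of a mean. A small sketch showing the two formulas side by side, using the n and SD from the thread and assuming the worst-case p = 0.5 that survey calculators typically use:

```python
import math

z = 1.96   # 95% confidence
n = 845

# Binary survey question: margin of error for a proportion.
p = 0.5    # worst case, as survey calculators typically assume
moe_prop = z * math.sqrt(p * (1 - p) / n)
print(f"proportion margin of error: {100 * moe_prop:.1f}%")  # ~3.4%

# Exam scores: margin of error for a mean.
sd = 12.08
moe_mean = z * sd / math.sqrt(n)
print(f"mean margin of error: {moe_mean:.3f} points")  # ~0.815
```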

sh76 · 11 Oct 11

I think I got it.

With a calculated mean of 86.41 and an SD of 13.01 (I miscalculated earlier) and n = 845, from Excel and an independent tool I found on the web, the 95% confidence level appears to apply to the "real" mean being between 85.533 and 87.287 (a margin of error of 0.8772).

If this is correct (and I think it is), this is exactly what I need.

Thanks, guys! 🙂

amolv06 · 11 Oct 11

Yup, those numbers are exactly the same as the ones I got.

SDM = standard deviation of the mean = standard deviation / sqrt(845) = 13.01 / 29.069 = 0.44756

For a 95% confidence interval you want 1.96 standard deviations of the mean: 1.96 * 0.44756 = 0.8772.
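The same arithmetic in a few lines of Python, as a sanity check against the numbers above:

```python
import math

# Figures from sh76's post: mean 86.41, SD 13.01, 845 exam takers.
mean, sd, n = 86.41, 13.01, 845

sem = sd / math.sqrt(n)   # standard deviation of the mean
half_width = 1.96 * sem   # 95% z-value
print(f"SEM: {sem:.4f}")                                           # 0.4476
print(f"95% half-width: {half_width:.4f}")                         # 0.8772
print(f"CI: {mean - half_width:.3f} to {mean + half_width:.3f}")   # 85.533 to 87.287
```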

WoodPush · 11 Oct 11

Originally posted by amolv06
But the standard deviation of the mean is a measure of how much your mean will vary, not the variance in the scores of the individual students.
The standard deviation of a sample set is a measure of how much the individual students' scores vary within that sample set.

Standard deviation of the means of several different sample sets is a measure of how much the means of those sample sets vary.

So yes, if you're measuring how several different classes all scored on the test, and how their means varied, then I guess standard deviation is an OK indicator... but you'd really need a lot of different classes' scores on those tests for that to be useful?

amolv06 · 11 Oct 11

The standard deviation of the mean can be calculated from a single sample of data. At least this is the way I've learned it. I study physics, so I don't really have a theoretical foundation for this, but what we were told was that the standard deviation of the mean can be found by taking the standard deviation and dividing by the root of the sample size. My numbers seem to concur with the ones sh76 found using Excel.
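A quick simulation backs this up: the SD/sqrt(n) estimate computed from a single sample closely tracks how much the mean actually varies across many independent samples. The normal distribution below is synthetic, purely for illustration, with the mean and SD borrowed from sh76's figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 845

# 1000 independent "classes" drawn from the same score distribution.
samples = rng.normal(loc=86.41, scale=13.01, size=(1000, n))

# How much the sample means actually vary across classes...
spread_of_means = samples.mean(axis=1).std()

# ...versus the SD / sqrt(n) estimate computed from a single class.
sem_one_sample = samples[0].std(ddof=1) / np.sqrt(n)

print(f"observed spread of means: {spread_of_means:.4f}")  # ~0.45
print(f"SEM from one sample:      {sem_one_sample:.4f}")   # ~0.45
```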
