Pearson's chi-square test

A Pearson's chi-square test can be used as an inferential test of the independence of two nominal variables.

The value of the test statistic is

χ^2 = ∑ (Oi - Ei)^2 / Ei

where

χ^2 = the test statistic, which asymptotically approaches a χ^2 distribution;
Oi = the observed frequencies;
Ei = the expected frequencies (asserted by the null hypothesis).
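As a concrete illustration, here is a minimal Python sketch of that formula; the observed and expected counts below are made-up numbers, not data from the course.

```python
# A minimal sketch of Pearson's chi-square statistic: sum of (O - E)^2 / E.
# The counts are hypothetical, purely to show the arithmetic.

def pearson_chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 22, 20, 40]   # hypothetical observed frequencies
expected = [25, 25, 25, 25]   # expected frequencies under the null
print(pearson_chi_square(observed, expected))  # 12.32
```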

Pearson’s Chi Square (χ^2)

In-Class Notes: CGU PSYCH 308D, “Categorical Data Analysis” (Dale Berger), taken on 2014-04-01 by Josh Penman.


Parametric Chi Square

χ^2 Rules of Thumb

  1. The expected frequencies should each be at least 10, if you want to be conservative about it.

If you take a true normal distribution, you have Z scores. If you randomly sampled a Z score from that distribution and squared it, what is the lowest value you could get?

A: Zero (0).

If you plotted the distribution of Z^2, what shape do you think it would have? Would it be a nice normal distribution? No: it would be a rapidly descending curve, with most of its mass near zero and a long right tail.
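A quick simulation makes this concrete; this is a sketch assuming numpy is available, not part of the original notes.

```python
# Square 100,000 standard-normal draws: the minimum is 0, the mean is near 1,
# and the histogram falls off rapidly - the shape of chi-square with 1 df.
import numpy as np

rng = np.random.default_rng(0)
z_squared = rng.standard_normal(100_000) ** 2

print(z_squared.min())   # >= 0; squaring removes negative values
print(z_squared.mean())  # close to 1, the expected value of chi-square(1)
counts, _ = np.histogram(z_squared, bins=[0, 1, 2, 3, 4, 50])
print(counts)            # rapidly descending counts, not a bell curve
```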

Some formulas that need explaining!

Z^2 = χ^2(df=1)

Z1^2 + Z2^2 = χ^2(df=2)

The degrees of freedom count how many independent squared Z scores are being summed.

E[((Xi - μ)/σx)^2] = 1

(The expected value of a squared standardized score is 1, which is why χ^2 with 1 df has a mean of 1.)

The expected value of Chi Square is equal to its degrees of freedom. If someone says, “oh, I found a Chi Square of 15. . .”, that means little until you also know the degrees of freedom.

Some statisticians use the notation “ν” (pronounced “nu,” as in “new”) for degrees of freedom - and there are other beautiful relationships here:

χ^2(ν) = ∑ Zi^2 (the sum of ν independent squared Z scores)

Mean = ν

Variance = 2ν

Kurtosis (excess) = 12/ν

Mode:

  1. If ν ≤ 2 then Mode = 0
  2. If ν > 2 then Mode = ν - 2

(These moments are checked numerically in the sketch below.)
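As a sanity check on those moments, a short scipy sketch (not from the lecture):

```python
# chi2.stats returns mean, variance, skewness, and excess kurtosis.
from scipy.stats import chi2

for nu in (1, 2, 5, 10):
    mean, var, skew, kurt = chi2.stats(nu, moments="mvsk")
    print(nu, mean, var, kurt)  # mean = nu, variance = 2*nu, kurtosis = 12/nu
```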

Example

If you wanted to test a die to see whether it’s a fair die, what would you do?

Class Answer: Roll it over and over and see what kind of distribution you get.

  1. If we roll this die 60 times, how many times would you expect to get a roll of 1? A: 10 (each face has probability 1/6, and 1/6 * 60 = 10).
    1. Prior to the Pearson’s Chi Square era, we’d say “this doesn’t look too bad. . .”
    2. If you randomly rolled a die 60 times and got exactly 10 of each. . . that would be extremely improbable!
    3. Let’s start by taking the observed frequency minus the expected frequency. (See the Google Example Sheet.)
  2. There are 5 degrees of freedom here. (J: How do you calculate degrees of freedom? Here, df = number of categories - 1 = 6 - 1 = 5, because the six frequencies must sum to 60.)
  3. So here, you have Chi Square with 5 degrees of freedom; for χ^2(5) at the .05 α level, the cutoff value is 11.07.
  4. So, our value is 9.6: have we proven that the die is fair? A: No - we have only failed to reject the null hypothesis.
  5. In general, if we collect more information, we’ll get a better answer. Here, we don’t have statistical significance at the α = .05 level. (A scipy sketch of this test follows below.)
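Here is what that goodness-of-fit test might look like in scipy. The actual rolls live in the class spreadsheet, so the counts below are hypothetical stand-ins summing to 60.

```python
# Fair-die goodness-of-fit: chisquare defaults to equal expected frequencies
# (10 per face for 60 rolls), which is exactly the null hypothesis here.
from scipy.stats import chisquare

observed = [14, 8, 9, 10, 7, 12]        # hypothetical counts for faces 1-6
result = chisquare(observed)            # df = 6 - 1 = 5
print(result.statistic, result.pvalue)  # reject only if statistic > 11.07
```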


Now, in gambling: suppose you shave a little bit off the 1 side and a little bit off the 6 side of a die. We want to test whether getting a 1 or a 6 has become more likely. If this is a theory driven not by the data, but by what we know about our friend and the die, we might run a separate, more focused test. . .

  1. What is the observed frequency of getting a 1 or a 6? A: 31.
  2. What is the observed frequency of everything else? A: 60 - 31 = 29.
  3. What is the expected frequency of a 1 or a 6, and of everything else? A: 20 (2/6 of 60); 40 for everything else.
  4. Observed minus expected. . . let’s get the Chi Square:
    1. = (31 - 20)^2/20 + (29 - 40)^2/40
    2. = 6.05 + 3.02
    3. = 9.07
  5. Now, how many degrees of freedom do we have here?
    1. Two observed frequencies.
    2. But we have a constraint: the total is 60, so there’s only one independent piece of information - if we know the frequency of getting a 1 or a 6, we also know the frequency of everything else. So df = 1.
  6. So, for Chi Square with one degree of freedom at α = .005, the cutoff is 7.88. If you were to calculate the actual p value for this, it would be about .0026 (see the sketch below).
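The same collapsed test in scipy, using the counts from the notes; expected frequencies are passed explicitly since they are no longer equal.

```python
# "1 or 6" vs. everything else: observed 31 and 29, expected 20 and 40.
# With two categories and one constraint (the total of 60), df = 2 - 1 = 1.
from scipy.stats import chisquare

result = chisquare([31, 29], f_exp=[20, 40])
print(result.statistic)  # 9.075, matching the hand calculation above
print(result.pvalue)     # about .0026, below the alpha = .005 cutoff
```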

The first test we had, you could call a “blob test” - with 5 degrees of freedom, it is sensitive to any pattern of departure from fairness. A test with 1 degree of freedom is more of a “laser test”: it is focused on one specific, pre-stated hypothesis.


S^2 = (∑(Xi - x̅)^2)/(n - 1)

Application of Chi Square

Fisher had 3 degrees of freedom - with ν = 3, the mode is at 1 (ν - 2), and so forth. He took what Mendel predicted, and what Mendel observed.

The expected value of the Chi Square was 3, but the Chi Square Fisher computed was very small: Mendel’s data fit the model more closely than one would expect by chance. Fisher had a charitable explanation: that Mendel had over-enthusiastic assistants.

Others have analyzed this too, and have argued that maybe it wasn’t as bad as Fisher made it out to be. But if the data fit the model too well, that may imply that something fishy (i.e., suspicious) is going on.

Here’s an issue: If you did a χ^2 test with 1 degree of freedom, is this a 1-Tailed test or a 2-Tailed test?

  1. If you get a big χ^2 with one degree of freedom, it’s actually a 2-tailed test: if you want to do a 1-tailed test, you would need to take the signed square root, Z = ±√(χ^2), and compare it to a one-tailed Z critical value (equivalently, halve the χ^2 p value). A sketch follows below.
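A sketch of that conversion, using the 1-df value from the die example above (this snippet is an illustration added to the notes):

```python
# A 1-df chi-square corresponds to a two-tailed Z test; for a directional
# test, take the signed square root and use the one-tailed Z probability.
from math import sqrt
from scipy.stats import chi2, norm

chi_sq = 9.075
p_two_tailed = chi2.sf(chi_sq, 1)  # about .0026
z = sqrt(chi_sq)                   # 3.01; sign it by the observed direction
p_one_tailed = norm.sf(z)          # about .0013, half the chi-square p
print(p_two_tailed, p_one_tailed)
```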

Question: With Mendel’s work, in genetics, do we really expect a random distribution? A: If two parents, each with one blue-eyed gene and one brown-eyed gene, have children, what’s the probability that a child will have blue eyes? 1/4. (J: I’m not sure if this is biologically accurate. . .) Same thing with the peas.

  1. You have a null hypothesis that you’re dealing with here: each plant is a cross of two particular peas, and 1/4th of them should turn out one way or the other.

Q: How did you get the 11.07 as the cutoff value above? A: That comes from a χ^2 table.

Other Applications

Now we have a little two-by-two table; you can find this in Packet 5.

  1. See Step 2 in the Google Spreadsheet
  2. What % of people with BA are on Salary? A: 85%
  3. What % of people without a BA are on Salary? A: 60%

If there is no relationship between salary and education level, what would we expect those frequencies to be? (Refer to Step 2.1, Independence Hypothesis, here.)

  1. The null hypothesis is “Independence.”
  2. You would expect 25% of 40 = 10 as the expected frequency.


The df = (# of Rows - 1) * (# of Columns - 1)

  1. In this case, with a 2-by-2 table, it’s (2-1)*(2-1) = 1*1 = 1; so df (Degrees of Freedom) = 1. (A scipy sketch follows below.)
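A scipy sketch of the whole 2x2 test; the table below is hypothetical (chosen to match the 85% and 60% salary figures above, with 20 people per row), since the real counts are in Packet 5.

```python
# chi2_contingency derives expected frequencies from the margins and uses
# df = (rows - 1) * (columns - 1). correction=False gives the plain Pearson
# statistic; the default (True) applies Yates' correction, discussed below.
from scipy.stats import chi2_contingency

table = [[17, 3],   # hypothetical: BA holders, salary vs. hourly (85%)
         [12, 8]]   # hypothetical: non-BA, salary vs. hourly (60%)
chi_sq, p, df, expected = chi2_contingency(table, correction=False)
print(chi_sq, p, df)  # df = 1 for a 2x2 table
print(expected)       # expected counts under the independence hypothesis
```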

There’s a neat little hand calculation you can do here:

  1. For a 2x2 table:
    1. χ^2 = N*(a*d - b*c)^2 / ((a+b)*(c+d)*(a+c)*(b+d)) (see the sketch after this list)
      1. Where:
      2. a is top left cell;
      3. b is top right cell;
      4. c is bottom left cell;
      5. d is bottom right cell
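That shortcut is easy to write out directly; a small sketch, reusing the hypothetical table from above:

```python
# The 2x2 hand formula: chi^2 = N(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Agrees with chi2_contingency(..., correction=False) on the same cells.
print(chi_square_2x2(17, 3, 12, 8))  # about 3.13
```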

Now, if you wanted to apply a continuity correction - sometimes called Yates’ Correction for continuity - people have talked about how important it is to apply this. (See https://docs.google.com/spreadsheets/d/12s4TcLMNEvfKl_rVmAmXA5IWqJ2EcxTbzH_PFZtZJ-E/edit#gid=1502721222 Example 3.)

There’s only one circumstance where applying this is appropriate; the situation is called Fixed Marginals - the Median Test is like this:

  1. You have 80 people, and you’re going to sample them and divide them into two groups.
    1. You decide ahead of time that you’re going to have 40 people in each group.
  2. You expect marginal values of 40 for each; if you know the margins before collecting the data, that is called fixed margins.
  3. That is the only case in which you will use the continuity correction (J: in the Behavioral Sciences, as of 2014-04-01).
  4. Now, for your expected frequencies: if you had independent samples, you would expect to find 20 people in each cell.
  5. Reduce |fo - fe| by .5 (“fo” = Frequency Observed; “fe” = Frequency Expected; “|__|” = absolute value). A sketch of this correction follows below.
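A sketch of the correction itself, with hypothetical cell counts for a median test where all margins are fixed at 40 and every expected frequency is 20:

```python
# Yates' correction: shrink each |fo - fe| by .5 before squaring.
def yates_chi_square(observed, expected):
    return sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))

observed = [26, 14, 14, 26]  # hypothetical 2x2 cells; every margin is 40
expected = [20, 20, 20, 20]  # fixed margins imply 20 per cell
print(yates_chi_square(observed, expected))  # 6.05
```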


Non-Parametric Chi Square and Fisher’s Exact Test

For cases where you have very small frequencies, you can’t apply this test of independence. Let’s take an example, where we’ve got a program where people start, then drop out.

Now, for this you can calculate the probability of the exact outcome:

  1. The probability of this exact outcome, given the observed marginals, is
  2. ((a+b)! * (c+d)! * (a+c)! * (b+d)!)/(n! * a! * b! * c! * d!)
    1. = (5! * 6! * 4! * 7!)/(11! * 0! * 5! * 4! * 2!)
      1. = (6*5*4*3*2*1 * 7!)/(11*10*9*8 * 7! * 2!)
      2. = 720/(11*10*9*8*2)
      3. = 1/22
      4. ≈ .045
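The factorials above correspond to the 2x2 table a = 0, b = 5, c = 4, d = 2 with n = 11. A scipy sketch of the same computation:

```python
# hypergeom.pmf gives the probability of this exact table given the margins:
# population 11, row-1 total 5, column-1 total 4, cell a = 0.
from scipy.stats import fisher_exact, hypergeom

print(hypergeom.pmf(0, 11, 5, 4))  # 1/22, about .045, as computed above

# Fisher's exact test p value sums all tables at least this extreme.
odds, p = fisher_exact([[0, 5], [4, 2]])
print(p)
```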

McNemar’s Test edit

Now, let’s look at McNemar’s Test (pronounced “Mac-no-mar” by the professor). (See Example 5.)

  1. p(+|Judge 1) = p(+|Judge 2)
  2. The null hypothesis says that the given data are a random outcome of a 50-50 split (a binomial distribution). People have various formulas to approximate this. You can use:
  3. The Chi Square formula: χ^2 = (|b - c| - 1)^2/(b + c)
    1. = (|15 - 5| - 1)^2/(15 + 5)
    2. = 9^2/20 = 81/20 = 4.05
    3. We would say that Judge 2 is ___ (J: QUESTION 1!). This is an approximation.
  4. The Z formula for approximation of the binomial distribution (with Yates’ Correction). What you actually want to apply is the binomial distribution itself (also called the binomial test):
    1. Z = (|x - Np| - .5)/SQRT(N*p*q), where:
      1. x = the number of hits you got; the count observed in that cell (use cell b, so x = 15)
      2. p = the likelihood of a hit for cell b = .5
      3. q = the likelihood of a miss for cell b = .5
      4. N = b + c = 20
    2. Resulting p = .021 < .05, therefore we have statistical significance. (Note: current best practice reports a 95% Confidence Interval, not just p values.) An exact-binomial sketch follows below.
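A sketch of the exact binomial version in scipy, using b = 15 and c = 5 from Example 5:

```python
# Under the null, the 20 discordant pairs split 50-50, so test x = 15
# against Binomial(n = 20, p = .5) rather than the chi-square approximation.
from scipy.stats import binomtest

b, c = 15, 5
print(binomtest(b, n=b + c, p=0.5).pvalue)                         # two-sided, about .041
print(binomtest(b, n=b + c, p=0.5, alternative="greater").pvalue)  # one-sided, about .021
```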


With a small set of data, McNemar’s Chi Square Test is not accurate; I would suggest you never use it. In SPSS’s table for Chi Square tests, we had a superscript indicating that a binomial distribution was used for the McNemar Test - that could be SPSS’s way of telling you not to use McNemar’s Chi Square approximation, as well. :)

Example 6: Other applications: How people would vote from one year to the next.

Example 7:


See also