[SciPy-Dev] chi-square test for a contingency (R x C) table

Bruce Southey bsouthey at gmail.com
Fri Jun 18 10:11:52 EDT 2010


On 06/17/2010 07:43 PM, Neil Martinsen-Burrell wrote:
> On 2010-06-17 11:59, Bruce Southey wrote:
>> On 06/17/2010 10:45 AM, josef.pktd at gmail.com wrote:
>>> On Thu, Jun 17, 2010 at 11:31 AM, Bruce Southey <bsouthey at gmail.com>
>>> wrote:
>>>
>>>>> On 06/17/2010 09:50 AM, josef.pktd at gmail.com wrote:
>>>>
>>>>> On Thu, Jun 17, 2010 at 10:41 AM, Warren Weckesser
>>>>> <warren.weckesser at enthought.com>    wrote:
>>>>>
>>>>>
>>>>>> Bruce Southey wrote:
>>>>>>
>>>>>>
>>>>>>> On 06/16/2010 11:58 PM, Warren Weckesser wrote:
>
> [...]
>
>>>>>>> The handling for a one way table is wrong:
>>>>>>> >>> print 'One way', chisquare_nway([6, 2])
>>>>>>> (0.0, 1.0, 0, array([ 6.,  2.]))
>>>>>>>
>>>>>>> It should also do the marginal independence tests.
>>>>>>>
>>>>>> As I explained in the description of the ticket and in the 
>>>>>> docstring,
>>>>>> this function is not intended for doing the 'one-way' goodness of 
>>>>>> fit.
>>>>>> stats.chisquare should be used for that.  Calling chisquare_nway 
>>>>>> with a
>>>>>> 1D array amounts to doing a test of independence between 
>>>>>> groupings but
>>>>>> only giving a single grouping, hence the trivial result.  This is
>>>>>> intentional.
>>>>
>>>> In expected_nway, you say that "While this function can handle a 1D
>>>> array," but clearly it does not handle it correctly.
>>>> If it was your intention not to do one-way tables, then you *must*
>>>> check the input and reject one-way tables!
>>>>
>>>>>> I guess the question is: should there be a "clever" chi-square 
>>>>>> function
>>>>>> that figures out what the user probably wants to do?
>>>>>>
>>>>>>
>>>> My issue is that the chi-squared test statistic is still calculated in
>>>> exactly the same way for n-way tables where n > 0. So it is pure,
>>>> unnecessary duplication of functionality if you require a second
>>>> function for the one-way table. I also prefer the one-stop-shopping
>>>> approach.
>>>>
>>> Just because it's chisquare doesn't mean it's the same kind of test.
>>> This is a test for independence or association that only makes sense
>>> if there are at least two random variables.
>>
>> Wrong!
>> See for example:
>> http://en.wikipedia.org/wiki/Pearson's_chi-square_test
>> "Pearson's chi-square is used to assess two types of comparison: tests
>> of goodness of fit and tests of independence."
>>
>> The exact same test statistic is being calculated; only the hypothesis
>> is different (which is the user's problem, not the function's problem).
>> So please separate the hypothesis from the test statistic.
>
> It is only the exact same test statistic if we know the expected cell 
> counts.  How these expected cell counts are determined depends 
> completely on the type of test that is being carried out.  In a 
> goodness-of-fit test (chisquare_oneway) the proportions of each cell 
> must be specified in the null hypothesis.  For an independence test 
> (chisquare_nway), the expected cell counts are computed from the given 
> data and the null hypothesis of independence.  The fact that the 
> formula involving observed and expected numbers is the same should not 
> obscure the fact that the expected numbers come from two completely 
> different assumptions in the n=1 and n>1 cases.  Can you explain how 
> the expected cell counts should be determined in the 1D case without 
> the function making assumptions about the user's null hypothesis?
The user's hypothesis is totally irrelevant here because you are 
computing the TEST STATISTIC (the sum over cells of observed minus 
expected, squared, divided by expected), which is given in multiple 
statistics books and, for a not-so-good reference, at:
http://en.wikipedia.org/wiki/Pearson's_chi-square_test

This is one reason that the current stats.chisquare function accepts 
observed and expected values as input.
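
For example, something along these lines (the counts here are made up 
purely for illustration, and the call assumes the documented 
stats.chisquare(f_obs, f_exp=...) signature):

    import numpy as np
    from scipy import stats

    observed = np.array([16, 18, 16, 14, 12, 12])
    # the expected counts can come from whatever hypothesis the user has
    # in mind; here they are simply supplied directly
    expected = np.array([16, 16, 16, 16, 16, 8])

    # chisq is the Pearson statistic, p the chi-squared tail probability
    chisq, p = stats.chisquare(observed, f_exp=expected)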

So given a set of observed values and the corresponding set of expected 
values, you calculate the test statistic and consequently a p-value (you 
can assume that the test statistic has a chi-squared distribution, but 
nothing stops you from using some other approach such as bootstrapping). 
Yet neither the calculation of the actual test statistic nor that of the 
p-value makes any reference to the user's assumptions.
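
A rough sketch of that, just to make the point concrete (the helper name 
pearson_statistic is made up, and the chi2.sf call is the usual 
chi-squared approximation with whatever degrees of freedom the user's 
hypothesis implies):

    import numpy as np
    from scipy.stats import chi2

    def pearson_statistic(observed, expected):
        """Sum over cells of (observed - expected)**2 / expected."""
        observed = np.asarray(observed, dtype=float)
        expected = np.asarray(expected, dtype=float)
        return ((observed - expected) ** 2 / expected).sum()

    stat = pearson_statistic([6, 2], [4, 4])
    # p-value under the chi-squared assumption; the degrees of freedom
    # depend entirely on the hypothesis that produced the expected values
    p = chi2.sf(stat, 1)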

Where the user's assumptions enter is in how the expected values are 
computed, as in your expected_nway function, which is NOT your 
chisquare_nway function. How these expected values are defined does 
depend on the user's hypothesis, and if the user does not define them 
then the function has to guess what the user wants. That guess depends 
partly on the input (obviously independence is irrelevant for a one-way 
table) and partly on what the developer wants, such as (marginal) 
independence for higher-order tables.
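
For a two-way table, for instance, the independence-based expected 
values are just row total times column total divided by the grand total 
for each cell. A sketch of that calculation (the function name is only 
illustrative, not your expected_nway):

    import numpy as np

    def expected_under_independence(table):
        """Row total * column total / grand total for each cell."""
        table = np.asarray(table, dtype=float)
        row_totals = table.sum(axis=1)
        col_totals = table.sum(axis=0)
        return np.outer(row_totals, col_totals) / table.sum()

    observed = np.array([[10, 20],
                         [30, 40]])
    # marginals are [30, 70] and [40, 60], total 100, so the expected
    # table is [[12., 18.], [28., 42.]]
    expected = expected_under_independence(observed)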

Thus it is up to the user to know what assumptions they have made and 
what hypothesis is being tested when they interpret the p-value, because 
scipy is not an expert system that somehow knows what the user wants in 
all cases.

>
> I believe that we CANNOT separate the test statistic from the user's 
> null hypothesis, and that is the reason that chisquare_oneway and 
> chisquare_nway should be separate functions.  The information required 
> to properly do a goodness-of-fit test is qualitatively different from 
> that required to do an independence test.  I support your suggestion 
> to reject 1D arrays as input for chisquare_nway (with appropriate 
> checking for arrays such as np.array([[[1, 2, 3, 4]]])).
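
That check, by the way, presumably just amounts to squeezing out the 
length-1 axes and rejecting anything that is effectively 
one-dimensional; a rough sketch (the error message is only illustrative):

    import numpy as np

    def _reject_effectively_1d(table):
        """Reject tables that are 1D after removing length-1 axes."""
        table = np.asarray(table)
        if np.squeeze(table).ndim < 2:
            raise ValueError("need at least a two-way table; "
                             "use stats.chisquare for goodness of fit")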

All you need to calculate the Pearson chi-squared statistic is the 
observed and expected values. I don't care how you obtain these values, 
but once you do, everything afterwards is the same until the user 
decides how to interpret the p-value.

Under your argument there needs to be a new function for every different 
hypothesis that a user wants. So a goodness-of-fit test must have a 
separate function (actually probably a good idea) from a one-way test 
function, and so on. But that is pure code bloat when the test statistic 
is identical.

>
>>> I don't like mixing shoes and apples.
>>>
>> Then please don't.
>
> Great.  I'm glad to see that we all agree that chisquare_oneway and 
> chisquare_nway should remain separate functions. :)
>
> -Neil
Sure, since we need more code duplication to increase the size of scipy 
and confuse every user!

Bruce


