Guido sees the light: PEP 8 updated

Larry Martell larry.martell at gmail.com
Tue Apr 19 13:06:29 EDT 2016


On Tue, Apr 19, 2016 at 11:50 AM, Steven D'Aprano <steve at pearwood.info> wrote:
> On Wed, 20 Apr 2016 12:54 am, Rustom Mody wrote:
>
>
>> I wonder who the joke is on:
>>
>> | A study comparing Canadian and Chinese students found that the latter
>> | were better at complex maths
>
> Most published studies are wrong.
>
> http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/
>
> - Has that study been replicated by others? Have people tried to
>   replicate it? Were negative findings published, or do they
>   languish in some researcher's bottom drawer? (Publication bias
>   is a big problem in research.)
>
> - Was the study well-designed, and the given conclusions supported
>   by the study? How well did it survive the critical attention of
>   experts in that field? Did the study account for differences in
>   mathematics education?
>
> - Did the study have sufficient statistical power to support the
>   claimed results? Many published studies simply lack the power to
>   justify their conclusions: the sample is too small to reliably
>   detect the effect being claimed. (A quick simulation of power
>   follows this list.)
>
> - Is the effect due to chance? Remember, with a p-value of 0.05 (the
>   so-called 95% significance level), one in twenty experiments on a
>   true null hypothesis will give a "positive" result just by chance.
>   A p-value of 0.05 does not mean "these results are proven"; it
>   means "if there were no real effect, and every single thing about
>   this experiment were perfect, results at least this extreme would
>   still turn up by chance about 1 time in 20".
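
To make the power point concrete, here is a rough Monte Carlo sketch
(mine, not the study's; the 0.3-standard-deviation effect and the group
sizes are invented for illustration, and it leans on scipy's two-sample
t-test):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def estimated_power(n, effect=0.3, trials=10_000, alpha=0.05):
        """Fraction of simulated experiments reaching p < alpha when a
        real effect of `effect` standard deviations truly exists."""
        hits = 0
        for _ in range(trials):
            a = rng.normal(0.0, 1.0, n)     # control group
            b = rng.normal(effect, 1.0, n)  # treated group, genuine effect
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / trials

    for n in (20, 50, 200):
        print(n, estimated_power(n))  # roughly 0.15, 0.32, 0.85

With 20 subjects per group, a real but modest effect is missed about
85% of the time, so "no significant difference" from such a study says
very little.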
>
> Anyone who has played (say) Dungeons and Dragons, or other role-playing
> games, will know that events with a probability of 1 in 20 occur very
> frequently. To be precise, they occur one time in twenty.
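
That 1-in-20 rate is easy to check empirically. A minimal sketch (again
mine; both groups are drawn from the same population, so the null
hypothesis is true by construction and every "significant" result is a
fluke):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    trials = 10_000

    # Both samples come from the same population: any p < 0.05 here is
    # the statistical equivalent of rolling a natural 20.
    false_positives = sum(
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue < 0.05
        for _ in range(trials)
    )
    print(false_positives / trials)  # ~0.05, i.e. one time in twenty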
>
> Even if the claimed results are correct, how strong is the effect?
>
> (a) On average, Canadian students score 49.0% on a standard exam on which
> Chinese students score 89.0%.
>
> (b) On average, Canadian students score 49.0% on a standard exam on which
> Chinese students score 49.1%.
>
> The level of statistical significance is not related to the strength of the
> effect: with enough data we can be very confident of a tiny effect, and
> with little data only weakly confident of a large one.
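
Steven's (a)/(b) contrast is easy to reproduce numerically. In this
sketch (all means, spreads and sample sizes are invented for
illustration), a 0.1-point gap becomes overwhelmingly "significant"
with a million students per group, while a 40-point gap measured on
four students per group often is not:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # (b) tiny effect, enormous sample: extremely "significant"
    canada_big = rng.normal(49.0, 10.0, 1_000_000)
    china_big  = rng.normal(49.1, 10.0, 1_000_000)
    print(stats.ttest_ind(canada_big, china_big).pvalue)  # ~1e-12

    # (a) huge effect, tiny sample: often fails to reach p < 0.05
    canada_small = rng.normal(49.0, 30.0, 4)
    china_small  = rng.normal(89.0, 30.0, 4)
    print(stats.ttest_ind(canada_small, china_small).pvalue)

Significance tells you the data are inconsistent with pure chance; it
says nothing about whether the difference is big enough to matter.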

85% of all statistics are made up.


