certification (Brainbench)

Bengt Richter bokr at oz.net
Sat Feb 16 02:41:59 EST 2002


On Fri, 15 Feb 2002 15:59:16 -0500, "Dr. David Mertz" <mertz at gnosis.cx> wrote:

>|>In other words, I don't believe that Brainbench's Python exam
>|>is a *good* testing tool.
>
>|What would your thoughts be re an automated exam as a PSF - licensed free
>|collaborative project? Maybe even the exam definition itself could somehow
>|feed off a wiki-like process, if you constrained formats appropriately
>|for defining test elements?
>
>I don't necessarily have anything against testing in general.  It is
>possible--at least to a first approximation--to learn something about
>people's skill levels by means of tests.  And I wouldn't even go so far

Actually, I was thinking more in terms of a test as a self-assessment/tutorial
tool than as an employer's screening thing.

>as to suppose that a complete novice would get the same result as Alex
>Martelli on the Brainbench exam.  It's not as good as it *should* be, but
>it's not completely random either.
>
I'd guess it would be more effective used privately to assess one's own
mastery (or lack thereof), since one is privy to one's own internal
reactions, whereas an employer can only see external signs and end results.

>Then again, I don't have so much faith in testing to really think making
>an exam is the best direction for PSF resources (just think what they

I didn't mean using PSF resources (except as the usual side effect of
Python jocks and peacocks doing their thing). I just meant an open
project under a PSF-style license, with copyrights assigned to the PSF.
Maybe it could be useful in the CP4E context also? (Maybe there is even
a grant to be pursued?)

>could accomplish if Tim Peters were to sell his second hand copy of _How
>to Form a Non-Profit_, and spend the proceeds on something important).

>Nonetheless, if someone wants to pay me more money than Brainbench did,
>I would be happy to prepare a better exam than Brainbench let me do :-).
>
I haven't seen the Brainbench tests. I was musing in terms of a kind of
topic-mastery validation suite, run as a script, with the opportunity to
type in actual code snippets as well as T/F and multiple-choice answers
for evaluation and comparison against expected results.
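
Just to make that concrete, something like the following is what I have in
mind. It's only a back-of-the-envelope sketch (all the function names and
questions are invented on the spot): one multiple-choice item plus one
typed-in snippet that actually gets executed and compared to an expected
result.

# Hypothetical sketch of a script-driven quiz item: one multiple-choice
# question plus one "type in a snippet" question whose answer is checked
# by actually executing what the user enters.

def ask_multiple_choice(prompt, choices, answer):
    print(prompt)
    for letter, text in choices:
        print("  %s) %s" % (letter, text))
    return input("your choice: ").strip().lower() == answer

def ask_snippet(prompt, expected):
    print(prompt)
    source = input(">>> ")
    namespace = {}
    try:
        exec(source, namespace)      # dynamic evaluation of the user's code
    except Exception as exc:
        print("raised %s: %s" % (type(exc).__name__, exc))
        return False
    return namespace.get("result") == expected

if __name__ == "__main__":
    ok1 = ask_multiple_choice("What does len('spam') return?",
                              [("a", "3"), ("b", "4"), ("c", "a TypeError")],
                              "b")
    ok2 = ask_snippet("Bind a list of the squares of 0..4 to a name "
                      "called result:", [0, 1, 4, 9, 16])
    print("score: %d of 2" % (ok1 + ok2))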

Dynamic evaluation could allow some interesting test/tutorial questions,
and modules could have try-again/skip options. The whole test/exercise
could also be automatically configured to take prior correct results into
account, so you could concentrate on filling in the holes, much like some
(natural) language learning programs do.
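
A rough sketch of how prior results might steer a session (the scores.json
file name and the (name, run) module interface are placeholders I just made
up):

# Hypothetical sketch: remember which topic modules have already been
# passed, so a later session concentrates on the holes.  The scores.json
# file and the (name, run) module interface are invented for illustration.

import json
import os

SCORE_FILE = "scores.json"

def load_scores():
    if os.path.exists(SCORE_FILE):
        with open(SCORE_FILE) as f:
            return json.load(f)
    return {}

def save_scores(scores):
    with open(SCORE_FILE, "w") as f:
        json.dump(scores, f, indent=2)

def run_session(modules):
    """modules is a list of (topic_name, run_callable) pairs;
    each run_callable returns True if the user passed that topic."""
    scores = load_scores()
    for name, run in modules:
        if scores.get(name) == "passed":
            continue                      # already mastered: skip it
        while True:
            if run():
                scores[name] = "passed"
                break
            choice = input("try again or skip? [t/s] ").strip().lower()
            if choice != "t":
                scores[name] = "skipped"
                break
    save_scores(scores)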

Collaborative development of the material is a way to get around the
preconditions imposed by a direct profit motivation and its likely
narrowing effect on focus and ideas. (So here are my OTTOMH preconceptions ;-)
There would just have to be some pattern for a test module, so that it could
be easily integrated into the whole. (The test suite that comes with the
Python distribution probably has a lot of usable stuff.)
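
For instance, the "pattern" might be nothing fancier than each contributed
module exposing a topic name, a pointer into the regular docs, and a list
of question tuples the driver script knows how to ask. Purely illustrative,
all names invented:

# Hypothetical sketch of what a contributed tutorial/test module might
# look like: just a TOPIC string, a pointer into the regular docs, and a
# list of QUESTIONS in a form the driver script knows how to ask.
# Everything here (names, doc pointer, questions) is invented.

TOPIC = "slicing"
DOC = "tut/node5.html"      # placeholder pointer into the standard docs

QUESTIONS = [
    # (kind, prompt, expected answer)
    ("tf",      "s[:] makes a shallow copy of the list s.", True),
    ("choice",  "What is 'spam'[1:3]?  a) 'sp'  b) 'pa'  c) 'am'", "b"),
    ("snippet", "Using a slice, bind the last two items of "
                "range(10) to a name called result.", [8, 9]),
]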

IOW, people who had trouble (or particularly enjoyed an aha moment,
as the case may be) might be motivated to contribute a tutorial/test
module that would make sure the user knew there was something to know,
and found out whether s/he knew it or not.

Eventually it might be cross-indexed with the regular documentation
(e.g., results in HTML with links from wrongly answered questions
to relevant docs).
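
Off the cuff, the report writer might be no more than this (the result
tuples and doc links are made up for show):

# Hypothetical sketch of the cross-indexing idea: write an HTML report in
# which each wrongly answered question links to the relevant section of
# the regular documentation.  The result tuples are made up for show.

def write_report(results, path="report.html"):
    # results: list of (question, passed, doc_url) tuples
    lines = ["<html><body><h1>Results</h1><ul>"]
    for question, passed, doc_url in results:
        if passed:
            lines.append("<li>ok: %s</li>" % question)
        else:
            lines.append('<li>missed: %s -- see <a href="%s">the docs</a></li>'
                         % (question, doc_url))
    lines.append("</ul></body></html>")
    with open(path, "w") as f:
        f.write("\n".join(lines))

write_report([
    ("What is 'spam'[1:3]?", True,  "lib/typesseq.html"),
    ("What does dict.get(k, d) return for a missing key?", False,
     "lib/typesmapping.html"),
])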

Just musing...

Regards,
Bengt Richter