[Edu-sig] Python Programming: Procedural Online Test

Rodrigo Senra rsenra at acm.org
Mon Dec 5 22:53:00 CET 2005


On 5 Dec 2005, at 7:50 AM, damon bryant wrote:

> One of the main reasons I decided to use an Item Response Theory (IRT)
> framework was that the testing platform, once fully operational, will not
> give students questions that are either too easy or too difficult for them,
> thus reducing anxiety and boredom for low and high ability students,
> respectively. In other words, high ability students will be challenged with
> more difficult questions and low ability students will receive questions
> that are challenging but matched to their ability.

So far so good...
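
For readers who haven't met IRT before, here is a minimal sketch of
what adaptive item selection might look like under a one-parameter
(Rasch) model. The item bank, the starting estimate, and the crude
step update are all invented for illustration; I am not claiming this
is how damon's platform actually works:

    import math
    import random

    def p_correct(theta, difficulty):
        # Rasch (1PL) model: chance of a correct response given the
        # student's ability (theta) and the item's difficulty.
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    def pick_item(theta, bank):
        # Pick the unanswered item closest in difficulty to the
        # current ability estimate -- the most informative one.
        return min(bank, key=lambda d: abs(d - theta))

    def adaptive_test(bank, true_theta, n_items=5, step=0.5):
        theta = 0.0                    # start at the population mean
        for _ in range(n_items):
            item = pick_item(theta, bank)
            bank.remove(item)
            correct = random.random() < p_correct(true_theta, item)
            # Crude update: nudge the estimate up or down. A real
            # platform would use maximum-likelihood or Bayesian
            # scoring instead of a fixed step.
            theta += step if correct else -step
        return theta

    bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
    print(adaptive_test(bank, true_theta=1.0))

Nothing wrong with that part; matching difficulty to ability is a
sensible way to keep students engaged.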

> Each score is on the same scale, although some students will not
> receive the same questions. This is the beautiful thing!

I'd like to respectfully disagree; I'm afraid that would cause more
harm than good. One side of student evaluation is to give feedback
*for* the students, and that feedback is a relative measure: each
student's performance against his/her peers.

If I understood correctly, the proposal is to give a "hard" A to some
students and an "easy" A to others, so that everybody has A's
(A == 'good score'). Is that it? That sounds like sweeping the dirt
under the carpet. Students will know. We have to prepare them to
tackle failure as well as success.

I do not mean such efforts are not worthwhile, quite the reverse. But
I strongly disagree with an adaptive scale. There should be a single
scale for the whole spectrum of tests. If some students excel, their
results must show it; likewise, if some students perform poorly, that
should not be hidden from them. Give them a goal and the means to
pursue it.

If I got your proposal all wrong, I apologize ;o)

best regards,
Senra


Rodrigo Senra
______________
rsenra @ acm.org
http://rodrigo.senra.nom.br
