[Edu-sig] Python Programming: Procedural Online Test

Scott David Daniels Scott.Daniels at Acm.Org
Tue Dec 6 04:04:45 CET 2005


damon bryant wrote:
> Hi Rodrigo!
> 
>> If I understood correctly, the proposal is to give a "hard"-A to some
>> and an "easy"-A
>> to others, so everybody has A's (A=='good score'). Is that it?
> 
> No, students are not receiving a hard A or an easy A. I make no 
> classifications such as those you propose. My point is that questions are 
> placed on the same scale as the ability being measured (called a theta 
> scale). Grades may be mapped to that scale, but a hard A or an easy A 
> will not be assigned under the conditions described above.
> 
> Because all questions in the item bank have been linked, two students can 
> take the same computer adaptive test but have no items in common between the 
> two administrations. However, scores are on the same scale. Research has 
> shown that even low-ability students, despite their performance, prefer 
> computer adaptive tests over static fixed-length tests. Adaptive testing 
> has also been shown to lower test anxiety while serving the same purpose 
> as fixed-length linear tests: educators extract the same level of 
> information about student achievement or aptitude without banging a 
> student's head up against questions that he/she may have a very low 
> probability of getting correct. High-ability students, instead of being 
> bored, receive questions on the higher end of the theta scale that are 
> appropriately matched to their ability, so as to challenge them.
> 
>> That sounds like
>> sweeping the dirt under the carpet. Students will know. We have to
>> prepare them to
>> tackle failure as well as success.
> 
> .... The item is appropriately matched for Examinee B because s/he has 
> approximately a 50% chance of getting this one right - not a very high 
> chance or a very low chance of getting it correct, but an equi-probable 
> opportunity of either a success or a failure....
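The 50%-chance matching above falls out directly of how items and examinees
share the theta scale. The post doesn't name the IRT model in use, so as an
illustrative sketch, assuming a one-parameter (Rasch) logistic model:

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) model: probability that an examinee of ability
    `theta` answers an item of difficulty `b` correctly.  Both
    quantities live on the same logit ("theta") scale, which is
    what lets two different item sets yield comparable scores."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An item whose difficulty matches the examinee's ability gives a
# 50% chance of success:
print(p_correct(0.5, 0.5))   # 0.5
# An item two logits too hard gives a much lower chance:
print(p_correct(0.5, 2.5))   # ~0.12
```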

Two comments:
   (1) You may find that targeting a higher probability of a correct
       answer gives a better subjective experience without significantly
       increasing the test length required to be confident of the score.
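Under the same (assumed) Rasch model, retargeting the success probability
is just a shift in the difficulty of the items you select; a minimal
illustrative sketch:

```python
import math

def item_difficulty_for(theta, target_p):
    """Difficulty, on the theta scale under a Rasch model, that an
    item should have so an examinee of ability `theta` has
    probability `target_p` of answering it correctly."""
    return theta - math.log(target_p / (1.0 - target_p))

# Targeting 50% selects items exactly at the examinee's ability:
print(item_difficulty_for(1.0, 0.5))   # 1.0
# Targeting ~70% correct selects somewhat easier items:
print(item_difficulty_for(1.0, 0.7))   # ~0.15
```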

   (2) You should track each question's history against the final score
       of each test-taker.  This practice can help validate your scoring,
       as well as help you weed out mis-scored questions.
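One common way to do the tracking in (2) is a point-biserial check:
correlate each item's right/wrong outcomes with test-takers' total scores,
and flag items whose correlation is near zero or negative.  A small sketch
(names and data are illustrative, not from the post):

```python
def point_biserial(item_correct, total_scores):
    """Correlation between one item's 0/1 outcomes and the same
    test-takers' total scores.  A near-zero or negative value
    suggests the item is mis-scored or unrelated to the trait
    being measured."""
    n = len(item_correct)
    mean_x = sum(item_correct) / n
    mean_y = sum(total_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(item_correct, total_scores)) / n
    var_x = sum((x - mean_x) ** 2 for x in item_correct) / n
    var_y = sum((y - mean_y) ** 2 for y in total_scores) / n
    return cov / (var_x * var_y) ** 0.5

# An item that high scorers get right and low scorers miss
# correlates strongly with the total score:
print(point_biserial([0, 0, 1, 1], [10, 12, 25, 27]))
```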

--Scott David Daniels
Scott.Daniels at Acm.Org


