Table 14.3 Item option & distractor frequencies in entry order

(controlled by Distractors=Y, OSORT=, CFILE=, PTBIS=)


ITEM OPTION FREQUENCIES are output if Distractors=Y. These show occurrences of each of the valid data codes in CODES=, and also of MISSCORE= in the input data file. Counts of responses forming part of extreme scores are included. Only items included in the corresponding main table are listed. These statistics are also in DISFILE=, which includes entries even if the code is not observed for the item.


OSORT= controls the ordering of options within items. The standard is the order of data codes in CODES=.


         ITEM CATEGORY/OPTION/DISTRACTOR FREQUENCIES:  ENTRY ORDER                                                   


|ENTRY   DATA  SCORE |     DATA   |      ABILITY     S.E.  INFT OUTF PTMA |                       |                  

|NUMBER  CODE  VALUE |  COUNT   % |    MEAN    P.SD  MEAN  MNSQ MNSQ CORR.| ITEM                  |                  


|   13   4       *** |     11  16#|     .79     1.41  .45             .08 |M. STAIRS              | 4 75% Independent

|        1         1 |     30  52 |   -1.85      .99  .18   .8  1.0  -.89 |                       | 1 0% Independent 

|        3         3 |      5   9 |    1.07      .80  .40   .6   .4   .10 |                       | 3 50% Independent

|        5         5 |     15  26 |    2.63     1.06  .28  1.7  1.5   .56 |                       | 5 Supervision    

|        6         6 |      7  12 |    3.25      .81  .33  1.2  1.2   .44 |                       | 6 Device         

|        7         7 |      1   2 |    4.63      .00        .7   .6   .23 |                       | 7 Independent    

|        MISSING *** |      1   1#|   -1.99      .00                 -.12 |                       |                  


 * Average ability does not ascend with category score                                                               

 # Missing % includes all categories. Scored % only of scored categories                                             



ENTRY NUMBER is the item sequence number.
The letter next to the sequence number is used on the fit plots.


DATA CODE is the response code in the data file.
MISSING means that the data code is not listed in the CODES= specification.
Codes with no observations are not listed.


SCORE VALUE is the value assigned to the data code by means of NEWSCORE=, KEY1=, IVALUEA=, etc.
*** means the data code is missing and so is ignored, i.e., regarded as not administered. (MISSCORE=1 would instead score missing data as "1".)


DATA COUNT is the frequency of the data code in the data file (unweighted). This includes observations for both non-extreme and extreme persons and items. For counts weighted by PWEIGHT=, see DISFILE=.


DATA % is the percent of scored data codes. For dichotomies, the % values are the proportions-correct for the options.
For data with score value "***", the percent is of all data codes, indicated by "#".
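Using the counts from the example table for item 13 above, the two kinds of percentage can be reproduced. This is a sketch of the arithmetic, not Winsteps code:

```python
# Counts taken from the example table for item 13.
scored_counts = {"1": 30, "3": 5, "5": 15, "6": 7, "7": 1}   # codes with score values
unscored_counts = {"4": 11, "MISSING": 1}                    # score value "***"

all_total = sum(scored_counts.values()) + sum(unscored_counts.values())  # 70
scored_total = sum(scored_counts.values())                               # 58

# Scored codes: percent of scored data codes only.
pct_code_1 = round(100 * scored_counts["1"] / scored_total)   # 52, as in the table
# "***" codes: percent of all data codes, flagged "#" in the table.
pct_code_4 = round(100 * unscored_counts["4"] / all_total)    # 16, shown as "16#"
```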


ABILITY MEAN is the observed, sample-dependent, average measure of persons (relative to each item) in this analysis who responded in this category (adjusted by PWEIGHT=). This is equivalent to a "Mean Criterion Score" (MCS) expressed as a measure. It is a sample-dependent quality-control statistic for this analysis. (It is not the sample-independent value of the category, which is obtained by adding the item measure to the "score at category", in Table 3.2 or higher, for the rating (or partial credit) scale corresponding to this item.) For each observation in category k, there is a person of measure Bn and an item of measure Di. Then: average measure = sum( Bn - Di ) / count of observations in category.
An "*" indicates that the average measure for a higher score value is lower than for a lower score value. This contradicts the hypothesis that "higher score value implies higher measure, and vice-versa".
The "average ability" for missing data is the average measure of all the persons for whom there is no response to this item. This can be useful. For instance, we may expect the "missing" persons to be high or low performers, or to be missing at random (in which case their average measure would be close to the sample average).

These values are plotted in Table 2.6.
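The average-measure computation above can be sketched as follows (the person and item measures are hypothetical, chosen only for illustration):

```python
# average measure for a category = sum(Bn - Di) / count of observations in it,
# where Bn is a person measure and Di the item measure.
def average_measure(person_measures, item_measure):
    return sum(b - item_measure for b in person_measures) / len(person_measures)

# Three persons (measures 1.2, 0.8, 1.6) chose this category
# on an item of measure 0.50:
avg = average_measure([1.2, 0.8, 1.6], 0.50)   # (0.7 + 0.3 + 1.1) / 3 = 0.70
```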


ABILITY P.SD is the population standard deviation of the ABILITY values = √(Σ (ABILITY - (ABILITY MEAN))²/COUNT)


S.E. MEAN is the standard error of the mean (average) measure of the sample of persons from a population who responded in this category (adjusted by PWEIGHT=) = √(Σ (ABILITY - (ABILITY MEAN))²/(COUNT*(COUNT-1)))
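The two dispersion formulas above differ only in their divisor. A sketch, assuming unweighted abilities (PWEIGHT= would add weights to each term):

```python
import math

def psd(abilities):
    """Population standard deviation: divisor COUNT."""
    n = len(abilities)
    m = sum(abilities) / n
    return math.sqrt(sum((a - m) ** 2 for a in abilities) / n)

def se_mean(abilities):
    """Standard error of the mean: divisor COUNT*(COUNT-1)."""
    n = len(abilities)
    m = sum(abilities) / n
    return math.sqrt(sum((a - m) ** 2 for a in abilities) / (n * (n - 1)))

vals = [1.0, 2.0, 3.0]
# psd(vals) = sqrt(2/3) ≈ .82; se_mean(vals) = sqrt(2/6) ≈ .58
# Note that se_mean = psd / sqrt(n - 1).
```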


INFT MNSQ is the Infit Mean-Square for observed responses in this category (weighted by PWEIGHT=, and omitting responses in extreme person scores). Values greater than 1.0 indicate unmodeled noise. Values less than 1.0 indicate loss of information.


OUTF MNSQ is the Outfit Mean-Square for observed responses in this category (weighted by PWEIGHT=, and omitting responses in extreme person scores). Values greater than 1.0 indicate unmodeled noise. Values less than 1.0 indicate loss of information.


PTMA CORR is the point-correlation between occurrence (scored 1) or non-occurrence (scored 0) of this category or distractor and the person raw scores or measures, as chosen by PTBISERIAL=. The computation is described in Correlations. Example: for categories 0,1,2, the correlation for each target score is between [1 for that score and 0 for the other two scores] and the ability measures of the persons producing each score.
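A sketch of this point-correlation, with hypothetical codes and person measures (the real computation applies the PTBISERIAL= choice of raw scores or measures, and PWEIGHT=):

```python
import math

def point_correlation(codes, measures, target):
    """Pearson correlation between the 0/1 indicator for `target`
    and the person measures."""
    x = [1.0 if c == target else 0.0 for c in codes]
    n = len(x)
    mx, my = sum(x) / n, sum(measures) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, measures))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in measures)
    return sxy / math.sqrt(sxx * syy)

codes = ["0", "1", "1", "2"]
measures = [-1.0, 0.0, 0.5, 1.5]
r_top = point_correlation(codes, measures, "2")   # positive: ablest person chose "2"
r_bot = point_correlation(codes, measures, "0")   # negative: least able chose "0"
```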


ITEM (or ACT, etc., depending on ITEM=) is the name or label of the item.


Data codes and Category labels are shown to the right of the box, if CLFILE= or CFILE= is specified.


* Average ability does not ascend with category score. The average ability of the persons observed in this category is lower than the average ability of the persons in the next lower category. This contradicts the Rasch-model assumption that "higher categories <-> higher average abilities."


# Missing % includes all categories. Scored % only of scored categories. The percentage for the missing category is based on all the COUNTs. The percentages for the SCOREd categories are based only on the scored-category COUNTs.


"BETTER FITTING OMIT" appears in fit-ordered Tables, where items fitting better than FITI= are excluded.



Multiple-Choice (MCQ) Distractors


The distractor table reports what happened to the original observations after they were scored dichotomously.

Item writers often intend the MCQ options to be something like:

A. Correct option (scored 1) - high-ability respondents - highest positive point-biserial

B. Almost correct distractor (scored 0) - almost-competent respondents - somewhat positive point-biserial

C. Mostly wrong distractor (scored 0) - slightly competent respondents - zero to negative point-biserial

D. Completely wrong distractor (scored 0) - ignorant respondents - highly negative point-biserial

The distractor table reports whether this happened.

You will obtain exactly the same Rasch measures if you score your MCQ items in advance and submit a 0-1 dataset to Winsteps.
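That pre-scoring can be sketched as below (a hypothetical key and response string, not the Winsteps KEY1= implementation itself): each response matching the key becomes "1", everything else "0", producing the same 0-1 dataset.

```python
def score_against_key(responses, key):
    """Dichotomize MCQ responses: 1 if the response matches the keyed
    (correct) option for that item, else 0."""
    return ["1" if r == k else "0" for r, k in zip(responses, key)]

# Four items, key "BACA"; this person answered "BACD":
scored = score_against_key("BACD", "BACA")   # first three correct, last wrong
```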






Example 1: Missing observations are scored "1"





codes = 01A
misscore = 1   ; codes not in CODES= are scored "1"







A001   ; A is in Codes= but scored "missing"

B101   ; B is not in Codes= but is scored 1 by MISSCORE=1





|--------------------+------------+--------------------------+------|     Code with Score

|    1   A       *** |      1  25*|    -.70             -.50 |I0001 | A {0,0,1,-}{2,1,1,2}

|        0         0 |      1  33 |    -.01         .7  1.00 |      | 0 {1,0,-,-}{2,1,1,2}

|        1         1 |      1  33 |    -.01*       1.4 -1.00 |      | 1 {0,1,-,-}{2,1,1,2}

|        MISSING   1 |      1  33 |    1.26         .4   .50 |      | B {0,0,-,1}{2,1,1,2}



Example 2: We want summary statistics that exclude responses in extreme scores.

To delete extreme scores from an analysis:


1. Do a standard analysis

2. Output the IFILE= and PFILE= to Excel

3. Sort the Excel spreadsheets on "STATUS".


4. For the extreme persons, in the control file:

PDELETE= (paste the entry numbers for persons with STATUS less than 1 here)



5. For the extreme items, in the control file:

IDELETE= (paste the entry numbers for items with STATUS less than 1 here)



6. Save the control file and reanalyze. Extreme scores will now be excluded from the summary statistics.
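If the IFILE= and PFILE= have been exported to CSV rather than Excel, the entry numbers to paste can be pulled out programmatically. This sketch assumes "ENTRY" and "STATUS" column headings in the export (adjust the names to match your file):

```python
import csv

def extreme_entries(csv_path):
    """Return the ENTRY numbers of rows whose STATUS is less than 1,
    i.e., the extreme persons or items to paste into PDELETE=/IDELETE=."""
    with open(csv_path, newline="") as f:
        return [int(row["ENTRY"])
                for row in csv.DictReader(f)
                if float(row["STATUS"]) < 1]
```

For example, calling extreme_entries("pfile.csv") on a person file where persons 2 and 3 are extreme returns [2, 3], ready to paste after PDELETE=.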

Help for Winsteps Rasch Measurement Software: Author: John Michael Linacre
