Table 26.3 Item option & distractor frequencies 
(controlled by Distractors=Y, OSORT=, CFILE=, PTBIS=)
ITEM OPTION FREQUENCIES are output if Distractors=Y. These show occurrences of each of the valid data codes in CODES=, and also of MISSCORE= in the input data file. Counts of responses forming part of extreme scores are included. Only items included in the corresponding main table are listed. These statistics are also in DISFILE=, which includes entries even if the code is not observed for the item.
OSORT= controls the ordering of options within items. The standard is the order of data codes in CODES=.
ITEM CATEGORY/OPTION/DISTRACTOR FREQUENCIES: ENTRY ORDER

-------------------------------------------------------------------------------
|ENTRY   DATA   SCORE |   DATA     | ABILITY        S.E. INFT OUTF  PTMA |
|NUMBER  CODE   VALUE | COUNT   %  |  MEAN   P.SD   MEAN MNSQ MNSQ  CORR.| ITEM
|---------------------+------------+-------------------------------------+
|   13      4     *** |    11  16# |   .79   1.41    .45             .08 | M. STAIRS   4 75% Independent
|           1      1  |    30  52  |  1.85    .99    .18   .8  1.0   .89 |             1 0% Independent
|           3      3  |     5   9  |  1.07    .80    .40   .6   .4   .10 |             3 50% Independent
|           5      5  |    15  26  |  2.63   1.06    .28  1.7  1.5   .56 |             5 Supervision
|           6      6  |     7  12  |  3.25    .81    .33  1.2  1.2   .44 |             6 Device
|           7      7  |     1   2  |  4.63    .00          .7   .6   .23 |             7 Independent
|     MISSING     *** |     1   1# |  1.99    .00                    .12 |
-------------------------------------------------------------------------------

* Average ability does not ascend with category score
# Missing % includes all categories. Scored % only of scored categories
ENTRY NUMBER is the item sequence number.
The letter next to the sequence number is used on the fit plots.
DATA CODE is the response code in the data file.
MISSING means that the data code is not listed in the CODES= specification.
Codes with no observations are not listed.
SCORE VALUE is the value assigned to the data code by means of NEWSCORE=, KEY1=, IVALUEA=, etc.
*** means the data code is missing and so ignored, i.e., regarded as not administered. MISSCORE=1 scores missing data as "1".
DATA COUNT is the frequency of the data code in the data file (unweighted); this includes observations for both non-extreme and extreme persons and items. For counts weighted by PWEIGHT=, see DISFILE=.
DATA % is the percent of scored data codes. For dichotomies, the % are the proportion-correct-values for the options.
For data with score value "***", the percent is of all data codes, indicated by "#".
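The two percentage bases can be checked against the counts in the table above. This is a sketch of the arithmetic only, not Winsteps code:

```python
# Counts from the table above: scored categories vs. score-value-"***" rows.
scored_counts = {"1": 30, "3": 5, "5": 15, "6": 7, "7": 1}
starred_counts = {"4": 11, "MISSING": 1}  # score value "***"

scored_total = sum(scored_counts.values())               # 58 scored responses
all_total = scored_total + sum(starred_counts.values())  # 70 data codes in all

# Scored categories are percentaged on the scored total:
print(round(100 * scored_counts["1"] / scored_total))  # 52
# "***" rows (flagged "#") are percentaged on all data codes:
print(round(100 * starred_counts["4"] / all_total))    # 16
```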
ABILITY MEAN is the observed, sample-dependent, average measure of persons (relative to each item) in this analysis who responded in this category (adjusted by PWEIGHT=). This is equivalent to a "Mean Criterion Score" (MCS) expressed as a measure. It is a sample-dependent quality-control statistic for this analysis. (It is not the sample-independent value of the category, which is obtained by adding the item measure to the "score at category", in Table 3.2 or higher, for the rating (or partial credit) scale corresponding to this item.) For each observation in category k, there is a person of measure Bn and an item of measure Di. Then: average measure = Σ(Bn - Di) / (count of observations in the category).
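The formula above can be sketched in Python. This is an illustration of the arithmetic, not Winsteps source code; the observation list and measures are invented:

```python
def average_measure(observations, category):
    """observations: list of (response_category, person_measure_Bn, item_measure_Di).
    Returns sum(Bn - Di) / count over the observations in the given category."""
    diffs = [bn - di for cat, bn, di in observations if cat == category]
    return sum(diffs) / len(diffs)

obs = [
    (1, 0.5, -0.2),  # person Bn = 0.5, item Di = -0.2, responded in category 1
    (1, 1.1, -0.2),
    (2, 2.0, -0.2),
]
print(round(average_measure(obs, 1), 2))  # 1.0
```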
An "*" indicates that the average measure for a higher score value is lower than for a lower score value. This contradicts the hypothesis that "higher score value implies higher measure, and vice-versa".
The "average ability" for missing data is the average measure of all the persons for whom there is no response to this item. This can be useful. For instance, we may expect the "missing" people to be high or low performers, or to be missing at random (and so their average measure would be close to the average of the sample).
These values are plotted in Table 2.6. 
ABILITY P.SD is the population standard deviation of the ABILITY values = √(Σ(ABILITY - ABILITY MEAN)² / COUNT)
S.E. MEAN is the standard error of the mean (average) measure of the sample of persons from a population who responded in this category (adjusted by PWEIGHT=) = √(Σ(ABILITY - ABILITY MEAN)² / (COUNT×(COUNT-1)))
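Both dispersion formulas can be sketched together. This is the plain arithmetic, not Winsteps code; the sample values are invented:

```python
import math

def ability_psd(abilities):
    """Population SD: sqrt( sum((x - mean)^2) / COUNT )."""
    m = sum(abilities) / len(abilities)
    return math.sqrt(sum((x - m) ** 2 for x in abilities) / len(abilities))

def se_mean(abilities):
    """Standard error of the mean: sqrt( sum((x - mean)^2) / (COUNT*(COUNT-1)) )."""
    n = len(abilities)
    m = sum(abilities) / n
    return math.sqrt(sum((x - m) ** 2 for x in abilities) / (n * (n - 1)))

vals = [1.0, 2.0, 3.0]
print(round(ability_psd(vals), 3))  # 0.816
print(round(se_mean(vals), 3))      # 0.577
```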
INFT MNSQ is the Infit Mean-Square for observed responses in this category (weighted by PWEIGHT=, and omitting responses in extreme person scores). Values greater than 1.0 indicate unmodeled noise. Values less than 1.0 indicate loss of information.
OUTF MNSQ is the Outfit Mean-Square for observed responses in this category (weighted by PWEIGHT=, and omitting responses in extreme person scores). Values greater than 1.0 indicate unmodeled noise. Values less than 1.0 indicate loss of information.
PTMA CORR is the point-correlation between the occurrence of this category or distractor (occurrence scored 1, non-occurrence scored 0) and the person raw scores or measures chosen by PTBISERIAL=. The computation is described in Correlations. Example: for categories 0, 1, 2, the correlation for each target category is between [1 for responses in the target category (0, 1, or 2) and 0 for responses in the other two categories (1 and 2, 0 and 2, or 0 and 1)] and the ability measures of the persons producing each response.
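The recode-and-correlate computation described above can be sketched as follows. This illustrates the arithmetic for the correlation-with-measures choice only, and is not Winsteps code; the responses and measures are invented:

```python
import math

def point_correlation(responses, measures, target):
    """Pearson correlation between the 0/1 indicator of the target
    category and the person measures."""
    ind = [1.0 if r == target else 0.0 for r in responses]
    n = len(ind)
    mi, mm = sum(ind) / n, sum(measures) / n
    cov = sum((i - mi) * (m - mm) for i, m in zip(ind, measures))
    vi = sum((i - mi) ** 2 for i in ind)
    vm = sum((m - mm) ** 2 for m in measures)
    return cov / math.sqrt(vi * vm)

# Categories 0,1,2 observed for four persons with these measures:
responses = [0, 1, 2, 2]
measures = [-1.0, 0.5, 1.5, 2.0]
print(round(point_correlation(responses, measures, 2), 2))  # 0.87
```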
ITEM (here, M. STAIRS) is the name or label of the item.
Data codes and Category labels are shown to the right of the box, if CLFILE= or CFILE= is specified.
* Average ability does not ascend with category score. The average ability of the persons observed in this category is lower than the average ability of the persons in the next lower category. This contradicts the Rasch-model expectation that "higher categories <-> higher average abilities."
# Missing % includes all categories. Scored % only of scored categories. The percentage for the MISSING category is based on all the COUNTs. The percentages for the scored categories are based only on the scored-category COUNTs.
"BETTER FITTING OMITTED" appears in fit-ordered tables, where items fitting better than FITI= are excluded.
Multiple-Choice (MCQ) Distractors
The distractor table reports what happened to the original observations after they were scored dichotomously.
Item writers often intend the MCQ options to be something like:
A. Correct option (scored 1) - high-ability respondents - highest positive point-biserial
B. Almost-correct distractor (scored 0) - almost-competent respondents - somewhat positive point-biserial
C. Mostly-wrong distractor (scored 0) - slightly competent respondents - zero to negative point-biserial
D. Completely wrong distractor (scored 0) - ignorant respondents - highly negative point-biserial
The distractor table reports whether this happened.
You will obtain exactly the same Rasch measures if you score your MCQ items in advance and submit a 0/1 dataset to Winsteps.
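As a sketch of that pre-scoring, applying an answer key to the raw responses produces the 0/1 matrix directly. The key and response string below are invented for illustration:

```python
def score_record(record, key):
    """Score one person's MCQ responses against the key: 1 = correct, 0 = incorrect."""
    return [1 if resp == k else 0 for resp, k in zip(record, key)]

key = "ACDB"  # hypothetical correct option for each of 4 items
print(score_record("ACDA", key))  # [1, 1, 1, 0]
```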
Example 1: Missing observations are scored "1"
ni=4
codes=01
name1=1
item1=1
codes = 01A ; this later CODES= replaces the earlier codes=01
ptbis=YES
misscore=1
&end
END LABELS
0110
1001
A001 ; A is in Codes= but scored "missing"
B101 ; B is not in Codes= but is scored 1 by MISSCORE=1

-----------------------------------------------------------------------
|ENTRY   DATA   SCORE |   DATA     | AVERAGE  S.E.  OUTF  PTBSE |
|NUMBER  CODE   VALUE | COUNT   %  | MEASURE  MEAN  MNSQ  CORR. | ITEM    Correlation: Code with Score
|---------------------+------------+----------------------------+
|    1      A     *** |     1  25# |     .70               .50  | I0001   A {0,0,1,} {2,1,1,2}
|           0      0  |     1  33  |     .01          .7  1.00  |         0 {1,0,,} {2,1,1,2}
|           1      1  |     1  33  |     .01*        1.4  1.00  |         1 {0,1,,} {2,1,1,2}
|     MISSING      1  |     1  33  |    1.26          .4   .50  |         B {0,0,,1} {2,1,1,2}
-----------------------------------------------------------------------
Example 2: We want summary statistics that exclude responses in extreme scores.
Delete the extreme scores from the analysis. Here's how:
1. Do a standard analysis.
2. Output the IFILE= and PFILE= to Excel.
3. Sort the Excel spreadsheets on "STATUS".
4. For the extreme persons, in the control file:
PDFILE=*
(paste the entry numbers for persons with STATUS less than 1 here)
*
5. For the extreme items, in the control file:
IDFILE=*
(paste the entry numbers for items with STATUS less than 1 here)
*
6. Save the control file and reanalyze. Extreme scores will be deleted from the summary statistics.
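The sort-and-paste steps can also be scripted instead of done in Excel. This sketch assumes the PFILE= has been saved in CSV format with ENTRY and STATUS columns; the column names and values are illustrative, so match them to your own file's header:

```python
import csv
import io

# Hypothetical PFILE= content saved as CSV (STATUS < 1 marks extreme scores).
pfile_csv = """ENTRY,MEASURE,STATUS
1,2.31,1
2,6.20,0
3,-5.87,-1
"""

rows = csv.DictReader(io.StringIO(pfile_csv))
extreme = [r["ENTRY"] for r in rows if int(r["STATUS"]) < 1]

# Emit a PDFILE= deletion list ready to paste into the control file.
print("PDFILE=*")
for entry in extreme:
    print(entry)
print("*")
```

The same loop with IDFILE= in place of PDFILE= handles the extreme items from the IFILE=.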
Help for Winsteps Rasch Measurement Software: www.winsteps.com. Author: John Michael Linacre