Rating (or partial credit) scale (or Response model) = 
The Rating (or partial credit) scale= statement provides a simple way to supply further information about the scoring model beyond that in the Model= specification. You can name each category of a scale, provide Rasch-Andrich threshold values (step calibrations) for anchoring or starting values, and recode observations.
Components of Rating (or partial credit) scale= 

Format: 
Rating scale = user name, structure, scope, numeration 
user name of scale or response model 
any set of alphanumeric characters, e.g., "Likert". To be used, it must match exactly a user name specified in a Model= statement.

structure 
D, R, B, P = any scale code in table below, except a user name 
scope 
S = Specific (or # in a Model= specification) means that each occurrence of this scale name in a different Model= specification refers to a separate copy of the scale, with its own Rasch-Andrich thresholds (step calibrations), though each has the same number of categories, category names, etc. G = General means that every reference to this scale in any Model= specification refers to the same, single manifestation of the scale. 
O = Ordinal means that the category labels are arranged ordinally, representing ascending, adjacent, qualitative levels of performance regardless of their values. K = Keep means that the category labels are cardinal numbers, such that all intermediate numbers represent levels of performance, regardless of whether they are observed in any particular data set. 
Model= and Scale codes 
Meaning for this model 
D 
Dichotomous data. Only 0 and 1 are valid. 
Dn 
Dichotomize the counts. Data values 0 through n-1 are treated as 0. Data values n and above are treated as 1. E.g., "D5" recodes "0" through "4" as 0, and "5" and above as 1. 
R 
The rating scale (or partial credit) categories are in the range 0 to 9. The actual valid category numbers are those found in the data. Use RK to maintain unobserved intermediate categories in the category ordering. 
Rn 
The rating scale (or partial credit) categories are in the range 0 to "n". Data values above "n" are missing data. The actual valid category numbers are those found in the data. If 20 is the largest category number used, then specify "R20". 
RnK 
Suffix "K" (Keep) maintains unobserved intermediate categories in the category ordering. If K is omitted, the categories are renumbered consecutively to remove the unobserved intermediate numbers. 
M 
Treat all observations matching this model as Missing data, i.e., a way to ignore particular data, effectively deleting these data. 
Bn 
Binomial (Bernoulli) trials, e.g., "B3" means 3 trials. In the Model= statement, put the number of trials. In the Data= statement, put the number of successes. Use Rating Scale= for anchored discrimination. 
B1 
1 binomial trial, which is the same as a dichotomy, "D". 
B100 
Useful for ratings expressed as percentages %. Use Rating Scale= for anchored discrimination. 
P 
Poisson counts, with theoretical range of 0 through infinity, though observed counts must be in the range 0 to 255. Use Rating Scale= for anchored discrimination. 
the name of a user-defined scale 
A name such as "Opinion". This name must match a name given in a Rating (or partial credit) scale= specification. 
Components of category description lines 

Format: 
Rating scale = myscale, R5
category number, category name, measure value, anchor flag, recoded values, reordered values
category number, category name, measure value, anchor flag, recoded values, reordered values
.....
*
category number 
quantitative count of ordered qualitative steps, e.g., 2. "-1" is treated as missing data and is used when recoding data. 
category name 
label for category, e.g., "agree" 
measure value 
These provide starting or preset fixed values. 
anchor flag 
For rating scales (or partial credit items), ",A" means Anchor this category at its preset Rasch-Andrich threshold (step calibration) value. If omitted, or any other letter, the logit value is only a starting value. Anchoring a category with a preset Rasch-Andrich threshold forces it to remain in the estimation even when there are no matching responses. Anchor ",A" the lowest category with "0" to force it to remain in the estimation. For binomial trials and Poisson counts: ",A" entered for category 0 means anchor (fix) the scale discrimination at the assigned value. 
recoded values 
Data values to be recoded, separated by "+" signs (optional). Numeric ranges to be recoded are indicated by "-". Examples: 5+6+Bad recodes "5", "6" or "Bad" in the data file to the category number. "5-8" recodes "5", "6", "7" or "8" to the category number. 1+5-8 recodes "1", "5", "6", "7", "8" to the category number. 
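To illustrate the recode syntax just described ("+" separates values, "-" marks a numeric range), here is a minimal sketch in plain Python. It is not part of Facets; the function name and behavior are assumptions based only on the syntax rules above.

```python
def expand_recodes(spec: str) -> list[str]:
    """Expand a Facets-style recode list such as "1+5-8" or "5+6+Bad"
    into the individual data values it matches. Illustrative sketch only."""
    values = []
    for part in spec.split("+"):
        # A numeric range like "5-8" expands to 5, 6, 7, 8.
        # (A leading "-" is a sign, e.g. "-1", not a range marker.)
        if "-" in part and not part.startswith("-"):
            lo, hi = part.split("-", 1)
            if lo.isdigit() and hi.isdigit():
                values.extend(str(v) for v in range(int(lo), int(hi) + 1))
                continue
        # Anything else ("Bad", "5") matches literally.
        values.append(part)
    return values

print(expand_recodes("1+5-8"))    # ['1', '5', '6', '7', '8']
print(expand_recodes("5+6+Bad"))  # ['5', '6', 'Bad']
```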
Example 1: Anchor a rating scale (or partial credit) at preset Rasch-Andrich thresholds (step calibrations).
Model=?,?,faces,1 ; the "Liking for Science" faces
*
Rating (or partial credit) scale=faces,R3
1=dislike,0,A ; always anchor bottom category at "0"
2=don't know,-0.85,A ; anchor first step at -0.85 Rasch-Andrich threshold
3=like,0.85,A ; anchor second step at +0.85 Rasch-Andrich threshold
* ; as usual, Rasch-Andrich thresholds sum to zero.
Example 2: Center a rating scale (or partial credit) at the point where categories 3 and 4 are equally probable. Note: usually a scale is centered where the first and last categories are equally probable. A more detailed rating-scale anchoring example is available elsewhere in this Help.
Model=?,?,friendliness,1 ; the scale
*
Rating (or partial credit) scale=friendliness,R4
1=obnoxious
2=irksome
3=passable
4=friendly,0,A ; Forces categories 3 and 4 to be equally probable at a relative logit of 0.
Example 3: Define a Likert scale of "quality" for persons and items, with item 1 specified to have its own Rasch-Andrich thresholds (scale calibrations). Recoding is required.
Model=
?,1,quality,1 ; a scale named "quality" for item 1
?,?,quality,1 ; a scale named "quality" for all other items
*
Rating (or partial credit) scale=quality,R3,Specific ; the scale is called "quality"
0=dreadful
1=bad
2=moderate
3=good,,,5+6+Good ; "5","6","Good" recoded to 3.
; ",,," means logit value and anchor status omitted
-1=unwanted,,,4 ; "4" was used for "no opinion", recoded to -1 so ignored
* ; "0","1","2","3" in the data are not recoded, so retain their values.
Example 4: Define a Likert scale of "intensity" for items 1 to 5, and "frequency" for items 6 to 10. The "frequency" items are each to have their own scale structure.
Model=
?,1-5,intensity ; "intensity" scale for items 1-5
?,6-10#,frequency ; "frequency" scale for items 6-10 with "partial credit" format
*
Rating (or partial credit) scale=intensity,R4 ; the scale is called "intensity"
1=none
2=slightly
3=generally
4=completely
*
Rating (or partial credit) scale=frequency,R4 ; the scale is called "frequency"
1=never
2=sometimes
3=often
4=always
*
The components of the Rating (or partial credit) scale= specification:
Rating (or partial credit) scale=quality,R3,Specific ; the scale is called "quality"
"quality" (or any other name you choose)
is the name of your scale. It must match the scale named in a Model= statement.
R3 an Andrich rating scale (or partial credit) with valid categories in the range 0 through 3.
Specific: each model statement referencing quality generates a scale with the same structure and category names, but different Rasch-Andrich thresholds (step calibrations).
Example 5: Items 1 and 2 are rated on the same scale with the same Rasch-Andrich thresholds. Items 3 and 4 are rated on scales with the same categories, but different Rasch-Andrich thresholds:
Model=
?,1,Samescale
?,2,Samescale
?,3,Namesonly
?,4,Namesonly
*
Rating (or partial credit) scale=Samescale,R5,General
; only one set of Rasch-Andrich thresholds is estimated for all model statements
; category 0 is not used ; this is a potentially 6 category (0-5) rating scale (or partial credit)
1,Deficient
2,Satisfactory
3,Good
4,Excellent
5,Prize winning
*
Rating (or partial credit) scale=Namesonly,R3,Specific
; one set of RaschAndrich thresholds per model statement
0=Strongly disagree ; this is a 4 category (0-3) rating scale (or partial credit)
1=Disagree
2=Agree
3=Strongly Agree
*
Example 6: Scale "flavor" has been analyzed, and we use the earlier values as starting values.
Rating (or partial credit) scale=Flavor,R
0=weak ; bottom categories always have 0.
1=medium,-3 ; the Rasch-Andrich threshold from 0 to 1 is -3 logits
2=strong,3 ; the step value from 1 to 2 is 3 logits
* ; The sum of the anchor Rasch-Andrich thresholds is the conventional zero.
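Example 6's starting values can be sanity-checked against the rating-scale model itself, where P(x) is proportional to exp(x·(B-D) minus the sum of the first x Rasch-Andrich thresholds). The sketch below is plain Python, not Facets; it assumes the first threshold is -3 so that the pair sums to zero, as the comment on the closing line requires.

```python
import math

def category_probs(b_minus_d: float, thresholds: list[float]) -> list[float]:
    """Rasch-Andrich rating-scale category probabilities.

    P(x) is proportional to exp(x*(B-D) - sum of the first x thresholds).
    `thresholds` are F1..Fm. Illustrative sketch only, not Facets code."""
    numerators = []
    cum = 0.0
    for x in range(len(thresholds) + 1):
        if x > 0:
            cum += thresholds[x - 1]
        numerators.append(math.exp(x * b_minus_d - cum))
    total = sum(numerators)
    return [n / total for n in numerators]

# Example 6 anchor values: F1 = -3, F2 = +3 (they sum to zero).
probs = category_probs(0.0, [-3.0, 3.0])

# Categories 0 and 1 are equally probable where B - D = F1 = -3:
p = category_probs(-3.0, [-3.0, 3.0])
print(round(p[0], 3), round(p[1], 3))  # 0.499 0.499
```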
Example 7: Collapsing a four category scale (0-3) into three categories (0-2):
Rating (or partial credit) scale=Accuracy,R2
0=wrong ; no recoding. "0" remains "0"
1=partial,,,2 ; "2" in data recoded to "1" for analysis.
; "1" in data remains "1" for analysis, ",,," means no preset logit value and no anchoring.
2=correct,,,3 ; "3" in data recoded to "2" for analysis.
; "2" in data already made "1" for analysis.
*
data=
1,2,0 ; 0 remains category 0
4,3,1 ; 1 remains category 1
5,4,2 ; 2 recoded to category 1
6,23,3 ; 3 recoded to category 2
13,7,4 ; since 4 is not recoded and is too big for R2, Facets terminates with the message:
Data is: 13,7,4
Error 26 in line 53: Invalid datum value: non-numeric or too big for model
Execution halted
Example 8: Recoding nonnumeric values.
Categories do not have to be valid numbers, but must match the data file exactly, so that, for a data file which contains "R" for right answers, and "W" or "X" for wrong answers, and "M" for missing:
Rating (or partial credit) scale=Keyed,D ; a dichotomous scale called "Keyed"
0=wrong,,,W+X ; both "W" and "X" recoded to "0", "+" is a separator
1=right,,,R ; "R" recoded to "1"
-1=missing,,,M ; "M" recoded to "-1", ignored as missing data
*
data=
1,2,R ; R recoded 1
2,3,W ; W recoded 0
15,23,X ; X recoded 0
7,104,M ; M recoded to -1, treated as missing data
Example 9: Maintaining the rating scale (or partial credit) structure with unobserved intermediate categories. Unobserved intermediate categories can be kept in the analysis.
Model=?,?,Multilevel
Rating (or partial credit) scale=Multilevel,R2,G,K ; means that 0, 1, 2 are valid
; if 0 and 2 are observed, 1 is forced to exist.
Dichotomies can be forced to 3 categories, to match 3-level partial credit items, by scoring the dichotomies 0=wrong, 2=right, and modeling them R2,G,K.
Example 10: An observation is recorded as "percents". These are to be modelled with the same discrimination as in a previous analysis, 0.72.
Model=?,?,Percent
Rating (or partial credit) scale=Percent,B100,G ; model % at 0100 binomial trials
0=0,0.72,A ; Anchor the scale discrimination at 0.72
Example 11: Forcing structure (step) anchoring with dichotomous items. Dichotomous items have only one step, so usually the Rasch-Andrich threshold is at zero logits relative to the item difficulty. To force a different value:
Facets = 2
Model = ?,?,MyDichotomy
Rating scale = MyDichotomy, R2
0 = 0, 0, A ; anchor bottom category at 0 - this is merely a placeholder
1 = 1, 2, A ; anchor the second category at 2 logits
2 = 2 ; this forces Facets to run a rating scale model, but it drops from the analysis because the data are 0, 1.
*
If the items are centered, this will move all person abilities by 2 logits. If the persons are centered, the item difficulties move by 2 logits.
Example 12: The itemperson alignment is to be set at 80% success on dichotomous items, instead of the standard 50% success.
Model = ?,?,?, Dichotomous
Rating scale = Dichotomous, R1 ; define this as a rating scale with categories 0,1 rather than a standard dichotomy (D)
0 = 0, 0, A ; Placekeeper for bottom category
1 = 1, -1.39, A ; Anchor the Rasch-Andrich threshold for the 0-1 transition at -1.39 logits
*
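The 1.39-logit magnitude in Example 12 is simply the log-odds of 80% versus 20% success in the dichotomous Rasch model: with a person located at the item's difficulty (B = D), 80% success requires the threshold to sit ln(0.80/0.20) logits below zero. A quick check in plain Python (not Facets):

```python
import math

# Dichotomous Rasch model: P(success) = exp(B - D - F) / (1 + exp(B - D - F)).
# With B = D, P = 0.80 requires -F = ln(0.80 / 0.20), i.e. |F| = 1.39 logits.
offset = math.log(0.80 / 0.20)
print(round(offset, 2))  # 1.39
```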
Table 6, standard 50% offset (kct.txt): vertical ruler (Measr | Children | Tapping items) in which the point of 80% probability of success on item 10 lies below the main cluster of children.

Table 6, 80% offset (-1.39 logits): the same vertical ruler after anchoring; item 10 is now targeted at the children who have an 80% probability of success on it.

Example 13. Data has the range 0-1000, but Facets only accepts 0-254. Convert the data with the Rating Scale= specification:
models = ?,?,...,spscale
rating scale=spscale,R250,Keep ; keep unobserved intermediate categories in the rating scale structure
0,0-1,,,0+1 ; "0-1" is the category label.
1,2-5,,,2+3+4+5 ; this can be constructed in Excel, and then pasted into the Facets specifications
2,6-9,,,6+7+8+9
3,10-13,,,10+11+12+13
....
248,990-993,,,990+991+992+993
249,994-997,,,994+995+996+997
250,998-1000,,,998+999+1000
*
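The specification suggests building this recode table in Excel; it can equally be generated with a short script. The sketch below is an illustration (plain Python, not Facets) that reproduces the block-of-four mapping shown above, with category 0 covering 0-1 and category 250 covering 998-1000.

```python
# Example 13's recode table maps data values 0-1000 down to categories
# 0-250 in blocks of four. Each emitted line follows the specification
# format above: category number, category name, ,, recoded values.
lines = []
for cat in range(251):
    lo = max(4 * cat - 2, 0)      # category 0 covers only 0-1
    hi = min(4 * cat + 1, 1000)   # category 250 covers 998-1000
    values = "+".join(str(v) for v in range(lo, hi + 1))
    lines.append(f"{cat},{lo}-{hi},,,{values}")

print(lines[0])    # 0,0-1,,,0+1
print(lines[1])    # 1,2-5,,,2+3+4+5
print(lines[250])  # 250,998-1000,,,998+999+1000
```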
Example 14: The rating-scale anchor values are the relative log-odds of adjacent categories. For instance, if the category frequencies are
0 20
1 10
2 20
and all other measures (person abilities, item difficulties, etc.) are 0. Then Facets would show:
Rating (or partial credit) scale=RS1,R2,G,O
0=,0,A,, ; this "0" is a conventional value to indicate that this is the bottom of the rating scale
1=,0.69,A,, ; this is log(frequency(0)/frequency(1)) = loge(20/10)
2=,-0.69,A,, ; this is log(frequency(1)/frequency(2)) = loge(10/20)
In rating-scale applications, we may want to impose the constraint that the log-odds values increase. If so, we will only accept rating scales which conceptually have a category structure similar to:
0 10
1 20
2 10
This would produce:
Rating (or partial credit) scale=RS1,R2,G,O
0=,0,A,, ; this "0" is a conventional value to indicate that this is the bottom of the rating scale
1=,-0.69,A,, ; this is log(frequency(0)/frequency(1)) = loge(10/20)
2=,0.69,A,, ; this is log(frequency(1)/frequency(2)) = loge(20/10)
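The arithmetic behind Example 14's anchor values is a one-liner; here is a sketch in plain Python (not Facets) that computes the thresholds from adjacent category frequencies, assuming all other measures are zero as the example states.

```python
import math

def threshold_anchors(freqs: list[int]) -> list[float]:
    """Rasch-Andrich threshold anchors from adjacent category frequencies,
    as in Example 14: Fk = ln(freq(k-1) / freq(k)) when all other measures
    (person abilities, item difficulties, etc.) are zero. Sketch only."""
    return [math.log(freqs[k - 1] / freqs[k]) for k in range(1, len(freqs))]

# Frequencies 20, 10, 20 give disordered thresholds:
print([round(f, 2) for f in threshold_anchors([20, 10, 20])])  # [0.69, -0.69]
# Frequencies 10, 20, 10 give ordered (increasing) thresholds:
print([round(f, 2) for f in threshold_anchors([10, 20, 10])])  # [-0.69, 0.69]
```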
Example 15: The test has a 7-category rating scale, but some items are oriented forward and others are reversed, inverted, negative:
Facets = 3 ; Facet 3 is the items
Models =
?, ?, 1, Forward
?, ?, 2, Reversed
?, ?, 3-8, Forward
?, ?, 9-12, Reversed
*
Rating scale= Forward, R7, General
1 =
2 =
3 =
4 =
5 =
6 =
7 =
*
Rating scale= Reversed, R7, General
1 = , , , 7
2 = , , , 6
3 = , , , 5
4 = , , , 4
5 = , , , 3
6 = , , , 2
7 = , , , 1
*
Example 16: The rating scales do not have the same number of rating categories: fluency 40-100, accuracy 0-70, coherence 0-30, etc.
Let's assume the items are facet 3,
1. Every category number between the lowest and the highest is observable
Models=
?, ?, #, R100K ; K means "keep unobserved intermediate categories"
*
2. Not every category number between the lowest and the highest is observable, but every observable category has been observed in this dataset
Models=
?, ?, #, R100 ; unobserved categories will be collapsed out of the rating scales
*
3. Only some categories are observable, but not all of those have been observed
Models=
?, ?, 1, Fluency
?, ?, 2, Accuracy
?, ?, 3, Coherence
*
Rating scale = Fluency, R100, Keep
0 = 0
1 = 10, , , 10 ; rescore 10 as "1"
2 = 20, , , 20
....
10 = 100, , , 100
*
Rating scale = Accuracy, R70, Keep
0 = 0
1 = 10, , , 10 ; rescore 10 as "1"
2 = 20, , , 20
....
7 = 70, , , 70
*
Rating scale = Coherence, R30, Keep
0 = 0
1 = 5, , , 5 ; rescore 5 as "1"
2 = 10, , , 10 ; rescore 10 as "2"
3 = 15, , , 15
4 = 20, , , 20
5 = 25, , , 25
6 = 30, , , 30
*
And there are more possibilities ...
Help for Facets Rasch Measurement Software: www.winsteps.com Author: John Michael Linacre.