Model to be used in the analysis = ?,?,D,1 
This specifies how the facets interact to produce the data. Its form parallels that of the data. One indicator, such as "?", is specified for each facet in the data, followed by another indicator, such as "D", for the measurement model that is specified to produce the data. Additional models can be listed after the Model= statement and followed by an "*". Model weighting can be specified after the model-type indicator.
How Models= functions with data: Matching data with measurement models
The process is:
Suppose 6 experts rate 19 items for quality of manufacture on a rating scale from 1 to 4:
A. Decide how many facets you have.
There are 2 facets: experts and items
Let's call experts facet 1 and items facet 2.
Then, in your Facets specification file, you will have
Facets=2
B. Identify the individual elements in the two facets
Then, in your Facets specification file, you will have
Labels=
1, experts
1, first expert
.....
6, sixth expert
*
2, items
1, first item
2, second item
....
19, nineteenth item
*
C. Decide how they interact
Any expert (indicated by "?") can interact with any item (also indicated by "?").
D. Decide on the response structure.
It is a rating scale, indicated by "R", with the highest category "4"
So in your Facets specification file you will have
Models = ?,?,R4 ; any element in facet 1 (expert) can interact with any element in facet 2 (item) to produce a rating on a scale whose highest category is 4.
E. The data will look like:
element from facet 1, element from facet 2, rating
3, 18, 2 ; expert 3 gave item 18 a rating of 2.
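The data-line layout above can be sanity-checked with a short script; a minimal sketch, assuming the 6-expert, 19-item, 1-4 rating setup from the walkthrough (check_line is a hypothetical helper, not part of Facets):

```python
# A minimal validity check for the data lines built in steps A-E above.
# Each line: expert element, item element, rating (the defaults below
# come from the walkthrough: 6 experts, 19 items, ratings 1-4).
def check_line(line, n_experts=6, n_items=19, top=4):
    """Return True if an "expert, item, rating" data line is plausible."""
    expert, item, rating = (int(x) for x in line.split(","))
    return 1 <= expert <= n_experts and 1 <= item <= n_items and 1 <= rating <= top

print(check_line("3, 18, 2"))  # True: expert 3 gave item 18 a rating of 2
```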
A wide variety of models can be constructed to enable measures to be estimated from many types of qualitative data. Facets are specified in the same order in the Model= specification as they are in the data lines. Each model definition includes one entry for each facet specified in the Labels= specification, unless overridden by an Entry= specification. Zero terms, "0", in the Entry= specification are bypassed and not referenced in Model= specifications.
Each model specification includes
a) one control character for each facet, such as "?", except for "0" facets in an Entry= specification.
b) a code specifying the type of scale (dichotomy, rating scale, partial credit, etc.), or giving the name of a scale explicitly defined by a Rating (or partial credit) scale= specification.
c) Optionally, a weight to be assigned to data matching this model. The standard value is 1.
Weights are always arbitrary, based on other information and value judgements external to the data. Use weights only when nonmeasurement considerations have a specific, justifiable priority, e.g., when a 100 item MCQ test and one essay graded on a 5 point scale are to be given equal weight in the final, combined measure.
d) Optionally, as a final parameter following the weight, a scale description,
Model=?,?,R,,Farley stress scale ; ",," indicates the standard weight of 1 applies
Control characters can be difficult to understand at first. On first reading, skip down to the "Examples" to get the feel of what this is all about.
Facets are positioned in the same order in the Model= specification as they are positioned in the data lines. Each model definition includes one entry for each facet in a data line. Zero terms, "0", in the Entry= specification are bypassed.
Model= Facet control characters 
Meaning for this facet 
? or ?? or $

Model can match a datum with any element number in this facet, e.g., any examinee. 
# or ## 
Model can match a datum with any element number in this facet, e.g., any item. Also, each element of this facet matched to this model has its own rating scale, i.e., "#" specifies a "partial credit" model. 
blank 
Ignore this facet when matching the Model= statement to a datum, but verify that the element number for this facet for a datum that matches this model statement is listed after Labels=. Typed as ",,". 
X 
Ignore this facet when matching data to this Model= statement. Do not check for the element number in this facet for validity when a match occurs. 
0 or Keep zero= value 
Model can only match a datum in which this facet does not participate, i.e., when element number 0 is used for this facet in the datum reference. 
- e.g., -? or -# 
Reverses the orientation of the measure of the element of this facet, when combined with another facet control character. "-?" means the model can match any element, but with the element's measure reversed in direction and sign. When using -?, it is recommended that the data be entered twice, once with each facet as -?, and the Models= be weighted 0.5. 
an element number, e.g., 23 
Model can only match a datum with exactly this element number in this facet. Element labels are not allowed. 
number-number, e.g., 23-36 
Model can match a datum with any element number from the specified range in this facet. 
number#, e.g., 23# or 23-36# 
Model can match a datum with any element number from the specified value or range in this facet, but each element number is associated with a unique "partial credit" scale. 
B e.g., Model= ?B,?,?B,R 
Generate Bias interaction estimates for combinations of the elements of each facet marked by "B". At least two "B" terms are needed. The "B" is appended to one of the other facet control characters, e.g., "?B". The bias interactions are coded in one or more model statements, but act as though they are coded in all model statements. Model statements with different combinations of facets marked by "B" each produce separate sets of bias estimates for all the data. 
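To get a feel for how these control characters work together, here is a minimal sketch (not Facets' internal code) of trying model statements in order against one datum, supporting "?", an exact element number, and a "number-number" range; all names are illustrative:

```python
def facet_matches(control, element):
    """Does one facet control character match one element number?"""
    control = control.strip()
    if control in ("?", "X"):          # wildcard: any element in this facet
        return True
    if "-" in control:                 # a range, e.g. "23-36"
        lo, hi = (int(x) for x in control.split("-"))
        return lo <= element <= hi
    return int(control) == element     # an exact element number

def first_matching_model(models, elements):
    """Models are tried in order; the first that matches every facet wins."""
    for controls, scale in models:
        if all(facet_matches(c, e) for c, e in zip(controls, elements)):
            return scale
    return None  # unmatched data are ignored

models = [(("?", "4"), "D"),   # Model=?,4,D
          (("?", "5"), "R")]   # ?,5,R
print(first_matching_model(models, (3, 5)))  # examinee 3, item 5 -> R
```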
Model= and Scale codes 
Meaning for this model 
D 
Dichotomous data. Only 0 and 1 are valid. 
Dn 
Dichotomize the counts. Data values 0 through n-1 are treated as 0. Data values n and above are treated as 1. E.g., "D5" recodes "0" to "4" as 0, and "5" and above as 1. 
R 
The rating scale (or partial credit) categories are in the range 0 to 9. The actual valid category numbers are those found in the data. Use "RK" to maintain unobserved intermediate categories in the category ordering. 
Rn 
The rating scale (or partial credit) categories are in the range 0 to "n". Data values above "n" are treated as missing data. The actual valid category numbers are those found in the data. If 20 is the largest category number used, then specify "R20". 
RnK 
Suffix "K" (Keep) maintains unobserved intermediate categories in the category ordering. If K is omitted, the categories are renumbered consecutively to remove the unobserved intermediate numbers. 
M 
Treat all observations matching this model as missing data, i.e., a way to ignore particular data, effectively deleting these data. 
Bn 
Binomial (Bernoulli) trials, e.g., "B3" means 3 trials. In the Model= statement, put the number of trials. In the Data= statement, put the number of successes. Use Rating Scale= for anchored discrimination. 
B1 
1 binomial trial, which is the same as a dichotomy, "D". 
B100 
Useful for ratings expressed as percentages %. Use Rating Scale= for anchored discrimination. 
P 
Poisson counts, with theoretical range of 0 through infinity, though observed counts must be in the range 0 to 255. Use Rating Scale= for anchored discrimination. 
the name of a user-defined scale 
A name such as "Opinion". This name must match a name given in a Rating (or partial credit) scale= specification. 
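As an illustration of the "Dn" dichotomizing rule in the table above (a sketch, not Facets code; the function name is made up):

```python
def dichotomize(datum, n):
    """Dn: data values 0 through n-1 become 0; values n and above become 1."""
    return 1 if datum >= n else 0

# "D5" recodes "0" to "4" as 0, and "5" and above as 1:
print([dichotomize(v, 5) for v in [0, 4, 5, 9]])  # [0, 0, 1, 1]
```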
Example: A test in which every item has a different rating scale (partial credit) with a different number of categories. The highest numerical category of any item is "6".
Model = ?, ?, #, R6 ; this allows items with categories such as 0,1,2 and also 1,2,3,4,5,6
There are more examples at Model statement examples.
Data weighting: This specifies the weight to be assigned to each datum in estimating measures, fit statistics and bias analyses. This is entered in the Model= specification after the scale code, e.g., Model=?,12,D,2 specifies a weight of 2 for responses to item 12.
Model= Weighting control 
Meaning for this model 
1 (the standard) 
Give the datum the standard weight of 1 in estimating measures, fit statistics and bias sizes. 
n 
Give the datum a weight of "n", e.g., 2.5, in estimating measures and fit statistics. This gives it greater influence than data with lesser weights. 
0 
Give the datum zero weight, i.e., treat the datum as missing (but report, if possible, in the residual file.) 
Adjust the weights as a set so that the standard errors reported for the persons by the weighted and unweighted analyses are about the same. This prevents the weighting from misleading you about test reliability, etc. 
Data replication: Data point replication or weighting can be done by prefixing R (or another replication character) + weight before the data point after Data=, e.g.,
R12.5 , 1 , 2, 3 means that the observation of "3" for element 1 of facet 1 and element 2 of facet 2 is weighted 12.5 times.
Multiple identical sets of observations can be replicated on the same line by preceding the data for one observation with R and the number of replications, e.g., 20 replications are indicated by R20.
Example 1: Survey data has been summarized by response rating. 237 people responded in category 3 on item 27.
Data=
R237,27,3
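The R-prefix convention can be sketched as a small parser (illustrative only; Facets' own parsing supports more options than this):

```python
def parse_data_line(line, replication_char="R"):
    """Split an optional replication/weight prefix from a data line.
    Returns (weight, remaining fields). A sketch of the R-prefix
    convention described above; unprefixed lines get weight 1."""
    parts = [p.strip() for p in line.split(",")]
    weight = 1.0
    if parts[0][:1].upper() == replication_char.upper():
        weight = float(parts[0][1:])
        parts = parts[1:]
    return weight, parts

print(parse_data_line("R237,27,3"))        # (237.0, ['27', '3'])
print(parse_data_line("R12.5 , 1 , 2, 3"))  # (12.5, ['1', '2', '3'])
```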
Example 1: The basic Rasch model for dichotomous interactions between objects and agents is specified by:
Model=?,?,D
"?,?,D" specifies that any element of the first facet (the first "?") can interact with any element of the second facet (the second "?") to produce a dichotomous observation (the "D"). Record a dichotomous observation as a "1" for success/right/more, or a "0" for failure/wrong/less. This implements the basic Rasch dichotomous model:
log(Pni1/Pni0) = Bn - Di
where
Pni1 is the probability of person n getting item i right
Pni0 is the probability of person n getting item i wrong
Bn is the ability of person n
Di is the difficulty of item i.
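The dichotomous formula can be checked numerically; rearranging log(Pni1/Pni0) = Bn - Di gives the familiar logistic form (a sketch, not Facets output):

```python
import math

def p_right(B, D):
    """P(X=1) under the dichotomous Rasch model log(Pni1/Pni0) = Bn - Di."""
    return 1.0 / (1.0 + math.exp(-(B - D)))

# When ability equals difficulty, success and failure are equally likely:
print(round(p_right(1.0, 1.0), 2))  # 0.5
# Ability one logit above difficulty:
print(round(p_right(2.0, 1.0), 2))  # 0.73
```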
Example 2: The Andrich rating scale model for judges, persons and items is specified by:
Model=?,?,?,R
"?,?,?,R" states that any judge,"?", can rate any person, "?", on any item, "?", using a common rating scale, "R".
This implements an Andrich rating scale model:
log(Pnijk/Pnijk-1) = Bn - Di - Cj - Fk
where
Pnijk is the probability that person n is awarded, on item i by judge j, a rating of k
Pnijk-1 is the probability that person n is awarded, on item i by judge j, a rating of k-1
Bn is the ability of person n
Di is the difficulty of item i
Cj is the severity of judge j
Fk is the Rasch-Andrich threshold (step calibration) of step k of the rating scale. This is the location on the latent variable (relative to the item difficulty) where categories k and k-1 are equally probable.
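The rating scale formula can likewise be turned into category probabilities by accumulating the adjacent-category log-odds (a sketch; the example thresholds are made up):

```python
import math

def category_probs(B, D, C, F):
    """Category probabilities for the Andrich rating scale model above.
    F holds the Rasch-Andrich thresholds F1..Fm (F0 = 0 implicitly).
    Adjacent categories satisfy log(Pnijk/Pnijk-1) = Bn - Di - Cj - Fk."""
    logits = [0.0]
    for Fk in F:                       # accumulate the adjacent log-odds
        logits.append(logits[-1] + (B - D - C - Fk))
    expv = [math.exp(l) for l in logits]
    total = sum(expv)
    return [e / total for e in expv]   # normalize so probabilities sum to 1

# Hypothetical thresholds for a 3-category (0,1,2) scale:
probs = category_probs(B=1.0, D=0.5, C=0.0, F=[-1.0, 1.0])
print([round(p, 2) for p in probs])  # [0.12, 0.55, 0.33]
```

Note that when B - D - C equals a threshold Fk, categories k and k-1 come out equally probable, matching the definition above.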
Example 3: More than one model can be specified. See: Matching data with models. A multiple-model analysis of only items 4 and 5. Item 4 is a true/false dichotomous item, but item 5 is a Likert rating scale (or partial credit) item. The examinees are facet 1, and the items are facet 2:
Model=?,4,D
?,5,R
*
or, all models may be specified on lines following Model=,
Model=
?,4,D
?,5,R
*
Example 4: I have a two-rating-scale instrument of 32 items. The first 19 items are on one 6-category rating scale and the remaining 13 items are on a different rating scale. There are 4 facets, and the items are the 4th facet.
Facets = 4
Models =
?,?,?,1-19,R6 ; items 1-19 are on a rating scale with highest category numbered "6"
?,?,?,20-32,R10K ; items 20-32 are on another rating scale with highest category numbered "10". Unobserved intermediate categories are to be maintained in the ordering, "K".
*
Example 5: If one item is to have more weight than another, e.g., a correct answer on item 31 is worth 2 points.
Models=
?, ?, 31, D, 2 ; weight 2
?, ?, ? , D, 1 ; default weighting of 1
*
Example 6: Some responses are to be treated as missing data
Facets=3
Models=
2,1,20,M ; this is the "missing data" model
2,1,24,M
2,2,20,M
?,?,?,R
*
Example 7: Two different items to the same rating scale:
Facets=3
Models =
1,?,?,MyScale ; item 1 uses MyScale
4,?,?,MyScale ; item 4 uses MyScale
?,?,?,D ; everything else is a dichotomy
*
Rating Scale = MyScale,R, G ; G means all General, so items 1 and 4 share the same scale
or
Rating Scale = MyScale,R, S ; S means all Specific, so items 1 and 4 have different versions of MyScale
Example 8: More examples of model statements
Example 9: Weighting:
Two Cases: A and B. Four aspects: Taste, Touch, Sound, Sight.
Case A: Taste is weighted twice as important as the rest.
Case B: Sound is weighted twice as important as the rest.
Labels =
1, Examinees
1-1000
*
2, Case
1=A
2=B
*
3, Aspect
1=Taste
2=Touch
3=Sound
4=Sight
*
Models=
?, 1, 1, MyScale, 2 ; Case A Taste weighted 2
?, 2, 3, MyScale, 2 ; Case B Sound weighted 2
?, ?, ?, MyScale, 1 ; everything else weighted 1
*
Rating scale = MyScale, R9, General ; this rating scale is the same for all models
If you want to keep the "reliabilities" and standard errors meaningful, then adjust the weights:
Original total weights = 2 cases x 4 aspects = 8
New total weights = 2 + 2 + 6 = 10
Weight adjustment to maintain total weight is 8/10.
So adjusted weighting is:
Models=
?, 1, 1, MyScale, 1.6 ; Case A Taste
?, 2, 3, MyScale, 1.6 ; Case B Sound
?, ?, ?, MyScale, 0.8 ; everything else
*
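The adjustment above generalizes: rescale the weights so their total equals the unweighted total (a sketch; adjust_weights is a hypothetical helper, not a Facets feature):

```python
def adjust_weights(weights):
    """Rescale model weights so their total equals the unweighted total
    (one weight of 1 per data slot), keeping standard errors comparable."""
    n = len(weights)            # unweighted total = number of data slots
    factor = n / sum(weights)   # e.g. 8/10 = 0.8 in the example above
    return [round(w * factor, 4) for w in weights]

# 2 cases x 4 aspects = 8 slots; Taste (Case A) and Sound (Case B) doubled:
raw = [2, 2] + [1] * 6
print(adjust_weights(raw))  # [1.6, 1.6, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8]
```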
Help for Facets Rasch Measurement Software: www.winsteps.com Author: John Michael Linacre.