Model to be used in the analysis = ?,?,D,1

This is for 32-bit Facets 3.87. Here is Help for 64-bit Facets 4

This specifies how the facets interact to produce the data. Its form parallels that of the data. One indicator, such as "?", is specified for each facet in the data, followed by an indicator, such as "D", for the measurement model specified to produce the data. Additional models can be listed after the Model= statement, with the list terminated by an "*". Model weighting can be specified after the model-type indicator.

 

How Models= functions with data: Matching data with measurement models

and Model statement examples

 


The process is:

 

Suppose 6 experts rate 19 items for quality of manufacture on a rating scale from 1 to 4:

 

A. Decide how many facets you have.

There are 2 facets: experts and items

Let's call experts facet 1 and items facet 2.

 

Then, in your Facets specification file, you will have

Facets=2

 

B. Identify the individual elements in the two facets

Then, in your Facets specification file, you will have

Labels=

1, experts

1, first expert

.....

6, sixth expert

*

2, items

1, first item

2, second item

....

19, nineteenth item

*

 

C. Decide how they interact

Any expert, indicated by "?", can interact with any item, also indicated by "?".

 

D. Decide on the response structure.

It is a rating scale, indicated by "R", with the highest category "4".

 

So in your Facets specification file you will have

Models = ?,?,R4  ; any element in facet 1 (expert) can interact with any element in facet 2 (item) to produce a rating on a scale whose highest category is 4.

 

E. The data will look like:

element from facet 1, element from facet 2, rating

3, 18, 2  ; expert 3 gave item 18 a rating of 2.
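Putting steps A through E together, the complete specification file for this design might look like this (a sketch: the Title= line and the comment wording are illustrative, not requirements):

Title = 6 experts rate 19 items
Facets = 2          ; step A
Models = ?,?,R4     ; steps C and D
Labels =            ; step B
1, experts
1, first expert
.....
6, sixth expert
*
2, items
1, first item
.....
19, nineteenth item
*
Data =              ; step E
3, 18, 2            ; expert 3 gave item 18 a rating of 2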

 


 

A wide variety of models can be constructed to enable measures to be estimated from many types of qualitative data. Facets are specified in the same order in the Model= specification as they are in the data lines. Each model definition includes one entry for each facet specified in the Labels= specification, unless overridden by an Entry= specification. Zero terms, "0", in the Entry= specification are bypassed and not referenced in Model= specifications.

 

Each model specification includes

 

a) control characters, such as "?", except for "0" facets in an Entry= specification.

 

b) a code specifying the type of scale (dichotomy, rating scale, partial credit, etc.), or giving the name of a scale explicitly defined by a Rating (or partial credit) scale= specification.

 

c) Optionally, a weight to be assigned to data matching this model. The standard value is 1.

Weights are always arbitrary, based on other information and value judgements external to the data. Use weights only when non-measurement considerations have a specific, justifiable priority, e.g., when a 100-item MCQ test and one essay graded on a 5-point scale are to be given equal weight in the final, combined measure.

 

d) Optionally, as a final parameter following the weight, a scale description,

Model=?,?,R,,Farley stress scale ; ",," indicates the standard weight of 1 applies

 


 

Control characters can be difficult to understand at first. On first reading, skip down to the "Examples" to get the feel of what this is all about.

 


 

Model= Facet control characters

Meaning for this facet

? or ?? or $

Model can match a datum with any element number in this facet, e.g., any examinee.

# or ##

Model can match a datum with any element number in this facet, e.g., any item. Also, each element of this facet matched to this model has its own rating scale, i.e., "#" specifies a "partial credit" model. Usually only one facet has # in a PCM model specification.

blank

Ignore this facet when matching the Model= statement to a datum, but verify that the element number for this facet for a datum that matches this model statement is listed after Labels=. Typed as ",,".

X

Ignore this facet when matching data to this Model= statement. Do not check the element number in this facet for validity when a match occurs.

0 or Keep zero= value

Model can only match a datum in which this facet does not participate, i.e., when element number 0 is used for this facet in the datum reference.

- e.g., -? or -#

Reverses the orientation of the measure of the element of this facet, when combined with other facet control characters. "-?" means the model can match any element, but with the element's measure reversed in direction and sign. When using -?, it is recommended that the data be entered twice, once with each facet as -?, and the Models= be weighted 0.5.

an element number, e.g., 23

Model can only match a datum with exactly this element number in this facet. Element labels are not allowed.

number-number, e.g., 23-36

Model can match a datum with any element number from the specified range in this facet.

number#, e.g., 23# or 23-36#

Model can match a datum with any element number from the specified number or range in this facet, but each element number is associated with a unique "partial credit" scale.

@ or @number or @number-number

This facet is used for reference by Dvalues= or for model selection. It is ignored (not used) for measurement.

B e.g., Model= ?B,?,?B,R

Generate Bias interaction estimates for combinations of the elements of each facet marked by "B". At least two "B" terms are needed. The "B" is appended to one of the other facet control characters, e.g., "?B". The bias interactions are coded in one or more model statements, but act as though they are coded in all model statements. Model statements with different combinations of facets marked by "B" each produce separate sets of bias estimates for all the data.

For example: for 2-way interactions between 4 facets (also from Output Tables menu):

Models=

?,?,?,?,R4

?B,?B,?,?,R4  ; interaction between facets 1 and 2

?B,?,?B,?,R4  ; interaction between facets 1 and 3

?B,?,?,?B,R4  ; interaction between facets 1 and 4

?,?B,?B,?,R4  ; interaction between facets 2 and 3

?,?B,?,?B,R4  ; interaction between facets 2 and 4

?,?,?B,?B,R4  ; interaction between facets 3 and 4

; 3-way interaction, only from Specification file

?B,?,?B,?B,R4  ; interaction between facets 1, 3 and 4

*

 

More than one model can be specified. See: Matching data with models.

 

Model= and Rating Scale= scale codes

Meaning for this model

D

Dichotomous data. Only 0 and 1 are valid.

Dn

Dichotomize the counts. Data values 0 through n-1 are treated as 0. Data values n and above are treated as 1. E.g., "D5" recodes "0" to "4" as 0 and "5" and above as 1.

R

The rating scale (or partial credit) categories are in the range 0 to 9. The actual valid category numbers are those found in the data. Use "RK" to maintain unobserved intermediate categories in the category ordering.

Rn

The rating scale (or partial credit) categories are in the range 0 to "n". Data values above "n" are treated as missing data. The actual valid category numbers are those found in the data. If 20 is the largest category number used, then specify "R20".

RnK

Suffix "K" (Keep) maintains unobserved intermediate categories in the category ordering. If K is omitted, the categories are renumbered consecutively to remove the unobserved intermediate numbers.

M

Treat all observations matching this model as missing data, i.e., a way to ignore particular data, effectively deleting these data.

Bn

Binomial (Bernoulli) trials, e.g., "B3" means 3 trials. In the Model= statement, put the number of trials. In the Data= statement, put the number of successes. Use Rating Scale= for anchored discrimination.

B1

1 binomial trial, which is the same as a dichotomy, "D".

B100

Useful for ratings expressed as percentages (%). Use Rating Scale= for anchored discrimination.

P

Poisson counts, with theoretical range of 0 through infinity, though observed counts must be in the range 0 to 255.  Use Rating Scale= for anchored discrimination.

the name of a user-defined scale

A name such as "Opinion". This name must match a name given in a Rating (or partial credit) scale= specification.
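The Dn and Rn/RnK recoding rules above can be sketched in Python (an illustration of the recoding logic only; the function names are mine, not part of Facets):

```python
def recode_dn(value, n):
    # Dn: data values 0 through n-1 are treated as 0; values n and above as 1.
    return 1 if value >= n else 0

def renumber_categories(observed):
    # Without the K suffix, observed category numbers are renumbered
    # consecutively, removing the unobserved intermediate numbers.
    ordered = sorted(set(observed))
    return {old: new for new, old in enumerate(ordered)}

print(recode_dn(4, 5))                 # 0: "D5" recodes "0" to "4" as 0
print(recode_dn(5, 5))                 # 1: and "5" and above as 1
print(renumber_categories([0, 2, 5]))  # {0: 0, 2: 1, 5: 2}
```

With the K suffix, no renumbering takes place: categories 0, 2, 5 keep their numbers and the unobserved intermediate categories remain in the ordering.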

 

Example: A test in which every item has a different rating scale (partial credit) with a different number of categories. The highest numerical category of any item is "6".

Model = ?, ?, #, R6 ; this allows items with categories such as 0,1,2 and also 1,2,3,4,5,6

 

There are more examples at Model statement examples.

 


 

Data weighting: This specifies the weight to be assigned to each datum in estimating measures, fit statistics and bias analyses. This is entered in the Model= specification after the scale code, e.g., Model=?,12,D,2 specifies a weight of 2 for responses to item 12.

 

Model= Weighting control

Meaning for this model

1 (the standard)

Give the datum the standard weight of 1 in estimating measures, fit statistics and bias sizes.

n

Give the datum a weight of "n", e.g., 2.5, in estimating measures and fit statistics. This gives it greater influence than data with lesser weights.

0

Give the datum zero weight, i.e., treat the datum as missing (but report, if possible, in the residual file.)

Adjust the weights as a set so that the standard errors reported for the persons by weighted and unweighted analyses are about the same. This prevents the weighting from misleading you about test reliability, etc.

 

Data replication: Data point replication or weighting can be specified by prefixing the data point with R (or another replication character) followed by the weight, after Data=, e.g.,

R12.5 , 1 , 2, 3 means: weight by 12.5 the observation of "3" for element 1 of facet 1 and element 2 of facet 2.

 

Multiple identical sets of observations can be replicated on the same line, by preceding the data for one observation by R and the number of replications, e.g., 20 replications are indicated by R20.

 

Example: Survey data has been summarized by response rating. 237 people responded in category 3 on item 27.

Data=

R237,27,3

 


 

Example 1: The basic Rasch model for dichotomous interactions between objects and agents is specified by:

Model=?,?,D

 

"?,?,D" specifies that any element of the first facet (the first "?") can interact with any element of the second facet (the second "?") to produce a dichotomous observation (the "D"). Record a dichotomous observation as a "1" for success/right/more, or a "0" for failure/wrong/less. This implements the basic Rasch dichotomous model:

log(Pni1/Pni0) = Bn - Di

where

Pni1 is the probability of person n getting item i right

Pni0 is the probability of person n getting item i wrong

Bn is the ability of person n

Di is the difficulty of item i.
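The dichotomous model above can be evaluated numerically with a short Python sketch (illustrative only; Facets estimates Bn and Di from the data, this merely evaluates the formula):

```python
import math

def p_right(b_n, d_i):
    # log(Pni1 / Pni0) = Bn - Di  =>  Pni1 = exp(Bn - Di) / (1 + exp(Bn - Di))
    return 1.0 / (1.0 + math.exp(-(b_n - d_i)))

print(p_right(1.0, 1.0))  # 0.5: when ability equals difficulty,
                          # success and failure are equally likely
```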

 

Example 2: The Andrich rating scale model for judges, persons and items is specified by:

Model=?,?,?,R

"?,?,?,R" states that any judge, "?", can rate any person, "?", on any item, "?", using a common rating scale, "R".

This implements an Andrich rating scale model:
log(Pnijk/Pnijk-1) = Bn - Di - Cj - Fk

where

Pnijk is the probability that person n is awarded, on item i by judge j, a rating of k

Pnijk-1 is the probability that person n is awarded, on item i by judge j, a rating of k-1

Bn is the ability of person n

Di is the difficulty of item i

Cj is the severity of judge j

Fk is the Rasch-Andrich threshold (step calibration) of step k of the rating scale. This is the location on the latent variable (relative to the item difficulty) where categories k and k-1 are equally probable.
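The adjacent-category equation above determines the full set of category probabilities. A Python sketch (the threshold values here are made up for illustration; Facets estimates them from the data):

```python
import math

def category_probs(b_n, d_i, c_j, thresholds):
    # Andrich rating scale model: log(Pnijk / Pnij(k-1)) = Bn - Di - Cj - Fk.
    # Accumulate the log-odds of each category relative to category 0,
    # then normalize so the probabilities sum to 1.
    logits = [0.0]
    for f_k in thresholds:
        logits.append(logits[-1] + (b_n - d_i - c_j - f_k))
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# 4 categories (0-3) with illustrative thresholds F1..F3
probs = category_probs(0.5, 0.0, 0.0, [-1.0, 0.0, 1.0])
```

Each adjacent pair of categories satisfies the model equation: probs[k] / probs[k-1] equals exp(Bn - Di - Cj - Fk).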

 

Example 3: More than one model can be specified. See: Matching data with models. A multiple-model analysis of only items 4 and 5. Item 4 is a true/false dichotomous item, but item 5 is a Likert rating scale (or partial credit) item. The examinees are facet 1, and the items are facet 2:
Model=?,4,D
?,5,R
*

 

or, all models may be specified on lines following Model=,
Model=
?,4,D
?,5,R
*

 

Example 4: I have a two-rating-scale instrument of 32 items. The first 19 items share one rating scale and the remaining 13 items are on a different rating scale. There are 4 facets, and the items are the 4th facet.

Facets = 4

Models =

?,?,?,1-19,R6  ; items 1-19 are on a rating scale with highest category numbered "6"

?,?,?,20-32,R10K ; items 20-32 are on another rating scale with highest category numbered "10". Unobserved intermediate categories are to be maintained in the ordering "K".

*

 

Example 5: If one item is to have more weight than another, e.g., a correct answer on item 31 is worth 2 points.

Models=

?, ?, 31, D, 2 ; weight 2

?, ?, ? , D, 1 ; default weighting of 1

*

 

Example 6: Some responses are to be treated as missing data

Facets=3

Models=

2,1,20,M ; this is the "missing data" model

2,1,24,M

2,2,20,M

?,?,?,R

*

 

Example 7: Two different items share the same rating scale:

Facets=3

Models =

1,?,?,MyScale ; item 1 uses MyScale

4,?,?,MyScale ; item 4 uses MyScale

?,?,?,D  ; everything else is a dichotomy

*

Rating Scale = MyScale,R, G ; G means all General, so items 1 and 4 share the same scale

or

Rating Scale = MyScale,R, S ; S means all Specific, so items 1 and 4 have different versions of MyScale

 

Example 8: More examples of model statements

 

Example 9: Weighting:

Two Cases: A and B. Four aspects: Taste, Touch, Sound, Sight.

Case A: Taste is weighted twice as important as the rest.

Case B: Sound is weighted twice as important as the rest.

 

Labels =

1, Examinees

1-1000

*

2, Case

1=A

2=B

*

3, Aspect

1=Taste

2=Touch

3=Sound

4=Sight

*

Models=

?, 1, 1, MyScale, 2 ; Case A Taste weighted 2

?, 2, 3, MyScale, 2 ; Case B Sound weighted 2

?, ?, ?, MyScale, 1 ; everything else weighted 1

*

Rating scale = MyScale, R9, General ; this rating scale is the same for all models

 

If you want to keep the "reliabilities" and standard errors meaningful, then adjust the weights:

 

Original total weights = 2 cases x 4 aspects = 8

New total weights = 2 + 2 + 6 = 10

Weight adjustment to maintain total weight is 8/10.

 

So adjusted weighting is:

Models=

?, 1, 1, MyScale, 1.6 ; Case A Taste

?, 2, 3, MyScale, 1.6 ; Case B Sound

?, ?, ?, MyScale, 0.8 ; everything else

*
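The 8/10 adjustment above can be checked with a few lines of Python (a sketch of the arithmetic only):

```python
# Raw weights for the 2 cases x 4 aspects = 8 cells of the design
raw = {
    ("A", "Taste"): 2.0, ("A", "Touch"): 1.0, ("A", "Sound"): 1.0, ("A", "Sight"): 1.0,
    ("B", "Taste"): 1.0, ("B", "Touch"): 1.0, ("B", "Sound"): 2.0, ("B", "Sight"): 1.0,
}
unweighted_total = len(raw)            # 8
raw_total = sum(raw.values())          # 2 + 2 + 6 = 10
factor = unweighted_total / raw_total  # 8/10 = 0.8
adjusted = {cell: w * factor for cell, w in raw.items()}
print(adjusted[("A", "Taste")])        # 1.6
print(adjusted[("B", "Touch")])        # 0.8
```

The adjusted weights sum back to 8, the unweighted total, so the reported standard errors stay comparable to an unweighted analysis.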


Help for Facets Rasch Measurement and Rasch Analysis Software: www.winsteps.com Author: John Michael Linacre.
 
