Glossary

Glossary - Dictionary - Lexicon of Rasch Measurement Terminology

Spanish glossary (Glosario Español): www.rasch.org/rmt/glosario.htm

Ability

the level of successful performance of the objects of measurement on the variable.

Agent of Measurement

the tool (items, questions, etc.) used to define a variable and position objects or persons along that variable.

Analytic rating

a rating of a specific aspect of a performance (cf. Holistic rating)

Anchor

the process of using anchor values to ensure that different analyses produce directly comparable results.

Anchor Value

a pre-set logit value assigned to a particular object, agent or step to be used as a reference value for determining the measurements (calibrations) of other objects, agents or steps.

Anchor Table

the table of Anchor Values used during Rasch analysis of an Input Grid and so included in the Results Table produced. The Anchor Table has the same format as the Results Table.

Anchoring

the process of using anchor values to ensure that different analyses produce directly comparable results.

Best Test Design

Wright, B.D. & Stone, M.H., Best Test Design: Rasch Measurement. Chicago: Mesa Press, 1979

Bias

A change in logit values based on the particular agents or objects measured.

BOTTOM

The value shown in the Results Table for an agent on which all objects were successful (so it was of bottom difficulty), or for an object which had no success on any agent (so it was of bottom ability)

Bottom Category

the response category at which no level of successful performance has been manifested.

Calibration

a difficulty measure in logits used to position the agents of measurement along the variable. Also "step calibration"

CAT Test

Computer-Adaptive Test. A test administered by computer in which the display of the next item depends on the response to the previous item.

Categories

CATS

levels of performance on an observational or response format.

Cell

Location of data in the spreadsheet, given by a column letter designation and row number designation e.g. B7

Classical Test Theory

CTT

Item analysis in which the raw scores are treated as linear numbers.

PMLE

Pairwise Maximum Likelihood Estimation. This was devised by Bruce Choppin and is used in RUMM2030. A customized version is implemented in Facets for Models = ?,-?

Common Scale

a scale of measurement on which all agents and objects can be represented.

Column

Vertical line of data in the Spreadsheet data, usually representing in an Input Grid all responses to a particular item, or in a Results Table, all statistics measuring the same attribute of agents or objects.

Comment

A semi-colon ; followed by text. This is ignored by Winsteps and Facets

Complete data

Data in which every person responds to every item, making a completely-filled rectangular data matrix.

Computer-Adaptive Test

CAT Test. A test administered by computer in which the display of the next item depends on the response to the previous item.

Construct validity

The correlation between the item difficulties and the latent trait as intended by the test constructor. "Is the test measuring what it is intended to measure?"

Content

the subject area evoked and defined by an agent.

Continuation line

A separate line of text which Winsteps analyses as appended to the end of the previous line. These are shown with "+".

Contrast component

In the principal components analysis of residuals, a principal component (factor) which is interpreted by contrasting the items (or persons) with opposite loadings (correlations) on the component.

Control file

A DOS-text file on your disk drive containing the Winsteps control variables.

Control variable

In Winsteps, "control variable = value", is an instruction for controlling the computer program, e.g., "ITEM1 = 4".

Convergence

the point at which further improvement of the item and person estimates makes no useful difference in the results. Rasch calculation ends at this point.

Correlation

the relationship between two variables

CTT

Classical Test Theory

Data file

Winsteps: file containing the person labels and the responses to the items. It is part of the Control file if DATA= or MFORMS= are not used.

Demographics

Information about a person included in the person label, e.g., "F" for female or "M" for male

Deterministic

Exactly predictable without any uncertainty. This contrasts with Probabilistic.

Dichotomous Response

a response format of two categories such as correct-incorrect, yes-no, agree-disagree.

DIF Differential item functioning

Change of item difficulty depending on which person classification-group are responding to the item, also called "item bias"

Difficulty

the level of resistance to successful performance of the agents of measurement on the variable.

Dimension

a latent variable which is influencing the data values.

Discrepancy

one or more unexpected responses.

Distractor

Incorrect answer to a multiple-choice question, which is intended to distract the examinee away from the correct option. Sometimes all the options, correct and incorrect, are called "distractors".

Disturbance

one or more unexpected responses.

Diverging

the estimated measures at the end of an iteration are further from convergence than at the end of the previous iteration.

Easiness

the level of susceptibility to successful performance of the agents of measurement on the latent variable. An item with high easiness has a high marginal score.

Eigenvalue

The value of a characteristic root of a matrix, the numerical "size" of the matrix

Element

Individual in a facet, e.g., a person, an item, a judge, a task, which participates in producing an observation.

Empirical

Based on observation or experiment

Empirical data

data derived from observation or experimentation

END LABELS

END NAMES

Winsteps: the end of the list of item identifying labels. This is usually followed by the data.

Equating

Putting the measures from two tests in the same frame of reference

Estimate

A value obtained from the data. It is intended to approximate the exactly true, but unknowable value.

EXP

Expected value

Value predicted for this situation based on the measures

Expected Response

the predicted response by an object to an agent, according to the Rasch model analysis.

EXP()

Exponential

Mathematical function used in estimating the Rasch measures

Exponential form

The Rasch model written in terms of exponentials, the form most convenient for computing response probabilities.

Facet

The components conceptualized to combine to produce the data, e.g., persons, items, judges, tasks.

Fit Statistic

a summary of the discrepancies between what is observed and what we expect to observe.

Focal group

The person classification-group which is the focus of a differential-item-functioning investigation

Frame of reference

The measurement system within which measures are directly comparable

Guttman

Louis Guttman (1916-1987) organized data into Scalograms intending that the observed response by any person to any item could be predicted deterministically from their positions in the Scalogram.

Guttman pattern

Success on all the easier items. Failure on all the more difficult items.
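A minimal Python sketch of the idea (the function name and the sample response strings are illustrative, not part of any Rasch program):

```python
def is_guttman_pattern(responses):
    """True when responses (1 = success, items ordered easiest to
    hardest) show success on all easier items and failure on all
    harder ones, i.e., all the 1s come before all the 0s."""
    return list(responses) == sorted(responses, reverse=True)

print(is_guttman_pattern([1, 1, 1, 0, 0]))  # True: a perfect Guttman pattern
print(is_guttman_pattern([1, 0, 1, 0, 0]))  # False: an unexpected failure
```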

Heading

an identifier or title for use on tables, maps and plots.

Holistic rating

One rating which captures all aspects of the performance (cf. Analytic rating)

Hypothesis test

Fit statistics report on a hypothesis test. Usually the null hypothesis to be tested is something like "the data fit the model", "the means are the same", "there is no DIF". The null hypothesis is rejected if the results of the fit test are significant (p<.05) or highly significant (p<.01). The opposite of the null hypothesis is the alternate hypothesis.

Imputed data

Data generated by the analyst or assumed by the analytical process instead of being observed.

Independent

Not dependent on which particular agents and objects are included in the analysis. Rasch analysis is independent of agent or object population as long as the measures are used to compare objects or agents which are of a reasonably similar nature.

Infit

an information-weighted fit statistic that focuses on the overall performance of an item or person, i.e., the information-weighted average of the squared standardized deviations of observed performance from expected performance. The statistic plotted and tabled by Rasch is this mean square normalized.

Interval scale

Scale of measurement on which equal intervals represent equal amounts of the variable being measured.

Item

agent of measurement, not necessarily a test question, e.g., a product rating.

Item bank

Database of items including the item text, scoring key, difficulty measure and relevant statistics, used for test construction or CAT tests

Iteration

one run through the data by the Rasch calculation program, done to improve estimates by minimizing residuals.

Knox Cube Test

a tapping pattern test requiring the application of visual attention and short term memory.

Latent Trait

The idea of what we want to measure. A latent trait is defined by the items or agents of measurement used to elicit its manifestations or responses.

Link

Relating the measures derived from one test with those from another test, so that the measures can be directly compared.

LN()

Logarithm

Natural or Napierian logarithm. A logarithm to the base e, where e = 2.718... This contrasts with logarithms to the base 10.

Local origin

Zero point we have selected for measurement, such as sea-level for measuring mountains, or freezing-point for Celsius temperature. The zero point is chosen for convenience. In Rasch measurement, it is often the average difficulty of the items.

Log-odds

The natural logarithm of the ratio of two probabilities (their odds).

Logit

Log-odds unit: the unit of measure used by Rasch for calibrating items and measuring persons. A log odds transformation of the probability of a correct response.
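The transformation can be sketched in Python (a minimal illustration; the probabilities shown are examples, not output from any Rasch program):

```python
import math

def logit(p):
    """Log-odds unit: the natural log of the odds of a correct response."""
    return math.log(p / (1 - p))

print(logit(0.5))              # 0.0 : a 50% chance of success is 0 logits
print(round(logit(0.731), 2))  # 1.0 : about 73% success is roughly +1 logit
```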

Logistic curve-fitting

an estimation method in which the improved value of an estimate is obtained by incrementing along a logistic ogive from its current value, based on the size of the current raw-score residual.

Logistic ogive

the relationship between linear measures and the probabilities of dichotomous outcomes.

Logit-linear

The Rasch model written in terms of log-odds, so that the measures are seen to form a linear, additive combination

Map

a bar chart showing the frequency and spread of agents and objects along the variable.

Matrix

a rectangle of responses with rows (or columns) defined by objects and columns (or rows) defined by agents.

MCQ

Multiple-Choice Question.

This is an item format often used in educational testing where the examinee selects the letter corresponding to the answer.

Mean-square

MnSq

Also called the relative chi-square and the normed chi-square. A mean-square fit statistic is a chi-square statistic divided by its degrees of freedom (d.f.). Its expectation is 1.0. Values below 1.0 indicate that the data are too predictable = overly predictable = overfit of the data to the model. Values above 1.0 indicate that the data are too unpredictable = underfit of the data to the model.
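As a minimal sketch of the computation (the residual values are hypothetical):

```python
def mean_square(std_residuals):
    """Unweighted mean-square: the average of the squared standardized
    residuals, i.e., a chi-square divided by its degrees of freedom.
    Expectation 1.0; below 1.0 = overfit, above 1.0 = underfit."""
    squares = [z * z for z in std_residuals]
    return sum(squares) / len(squares)

z = [0.5, -1.2, 0.8, -0.3, 2.0]    # hypothetical standardized residuals
print(round(mean_square(z), 2))    # 1.28 : slightly noisier than modeled
```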

Measure

Measurement

the location (usually in logits) on the latent variable. The Rasch measure for persons is the person ability. The Rasch measure for items is the item difficulty.

Menu bar

This is at the top of a program's window, and shows a list of standard program operations

Misfit

Any difference between the data and the model predictions. Misfit usually refers to "underfit". The data are too unpredictable.

Missing data

Data which are not responses to the items. They can be items which the examinees did not answer (usually scored as "wrong") or items which were not administered to the examinee (usually ignored in the analysis).

Model

Mathematical conceptualization of a relationship

Muted

Overfit to the Rasch model. The data are too predictable. The opposite is underfit, excessive noise.

Newton-Raphson iteration

A general method for finding the solution of non-linear equations

Normal

a random distribution, graphically represented as a "bell" curve which has a mean value of 0 and a standard deviation of 1.

Normalized

1. the transformation of the actual statistics obtained so that they are theoretically part of a unit-normal distribution. "Normalized" means "transformed into a unit-normal distribution". We do this so we can interpret the values as "unit-normal deviates", the x-values of the normal distribution. Important ones are ±1.96, the points on the x-axis for which 5% of the distribution is outside the points, and 95% of the distribution is between the points.

2. linearly adjusting the values so they sum to a predetermined amount. For instance, probabilities always sum to 1.0.

Not administered

an item which the person does not see. For instance, all the items in an item bank which are not part of a computer-adaptive test.

Object of Measurement

people, products, sites, to be measured or positioned along the variable.

OBS

Observed

Value derived from the data

Observation

Observed Response

the actual response by an object to an agent.

Odds ratio

Ratio of two probabilities, e.g., "odds against" is the ratio of the probability of losing (or not happening) to the probability of winning (or happening).

Outfit

an outlier sensitive fit statistic that picks up rare events that have occurred in an unexpected way. It is the average of the squared standardized deviations of the observed performance from the expected performance. Rasch plots and tables use the normalized unweighted mean squares so that the graphs are symmetrically centered on zero.

Outliers

unexpected responses usually produced by agents and objects far from one another in location along the variable.

Overfit

The data are too predictable. There is not enough randomness in the data. This may be caused by dependency or other constraints.

Perfect score

Every response "correct" or the maximum possible score. Every observed response in the highest category.

Person

the object of measurement, not necessarily human, e.g., a product.

Plot

an x-y graph used by Rasch to show the fit statistics for agents and objects.

Point Labels

the placing on plots of the identifier for each point next to the point as it is displayed.

Point-measure correlation

PT-MEASURE, PTMEA

The correlation between the observations in the data and the measures of the items or persons producing them.

Poisson Counting

a method of scoring tests based on the number of occurrences or non-occurrences of an event, e.g. spelling mistakes in a piece of dictation.

Polarity

The direction of the responses on the latent variable. If higher responses correspond to more of the latent variable, then the polarity is positive. Otherwise the polarity is negative.

Polytomous response

responses in more than two ordered categories, such as Likert rating-scales.

Population

Every person (or every item) with the characteristics we are looking for.

Predictive validity

This is the amount of agreement between results obtained by the evaluated instrument and results obtained more directly, e.g., the correlation between success level on a test of carpentry skill and success level making furniture for customers. "Do the person measures correspond to more and less of what we are looking for?"

Probabilistic

Predictable to some level of probability, not exactly. This contrasts with Deterministic.

Process

the psychological quality, i.e.,the ability, skill, attitude, etc., being measured by an item.

PROX

the normal approximation estimation formula, used by some Rasch programs for the first part of the iteration process.

Rack

Placing the responses to two tests in adjacent columns for each person, as though the items were being placed on a rack, cf. Stack.

Rasch, Georg

Danish Mathematician (1906-1980), who first propounded the application of the statistical approach used by Rasch.

Rasch measure

linear, additive value on an equal-interval scale representing the latent variable

Rasch Model

a mathematical formula for the relationship between the probability of success (P) and the difference between an individual's ability (B) and an item's difficulty (D). P=exp(B-D)/(1+exp(B-D)) or log [P/(1-P)] = B - D
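The formula can be computed directly. A minimal sketch (the ability and difficulty values are illustrative):

```python
import math

def p_success(b, d):
    """Dichotomous Rasch model: P = exp(B-D) / (1 + exp(B-D))
    for ability B and difficulty D in logits."""
    return math.exp(b - d) / (1 + math.exp(b - d))

print(p_success(1.0, 1.0))            # 0.5 : ability equals difficulty
print(round(p_success(2.0, 1.0), 2))  # 0.73 : person 1 logit above the item
```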

Rasch-Andrich Threshold

Step calibration. Location on the latent variable (relative to the center of the rating scale) where adjacent categories are equally probable.

Rating Scale

A format for observing responses wherein the categories increase in the level of the variable they define, and this increase is uniform for all agents of measurement.

Rating Scale Analysis

Wright, B.D. & Masters, G.N., Rating Scale Analysis: Rasch Measurement. Chicago: Mesa Press, 1982.

Ratio scale

Scale with a defined origin (reference point) so we can say that measure X is twice measure Y.

Raw score

the marginal score; the sum of the scored observations for a person, item or other element.

Reference group

The person classification-group which provides the baseline item difficulty in a differential-item-functioning investigation

Reliability

the ratio of sample or test variance, corrected for estimation error, to the total variance observed.
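A minimal sketch of this ratio (the measures and standard errors are hypothetical):

```python
def reliability(measures, standard_errors):
    """(Observed variance - error variance) / observed variance:
    the estimated ratio of 'true' variance to total observed variance."""
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / n
    error_var = sum(se ** 2 for se in standard_errors) / n
    return (observed_var - error_var) / observed_var

b = [-2.0, -1.0, 0.0, 1.0, 2.0]   # hypothetical person measures (logits)
se = [0.5] * 5                    # hypothetical standard errors
print(reliability(b, se))         # 0.875
```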

Residuals

the difference between data observed and values expected.

Response

The value indicating degree of success by an object on an agent, and entered into the appropriate cell of an Input Grid.

Response set

Choosing the same response on every item, such as always selecting option "C" on a multiple-choice test, or always selecting "Agree" on an attitude survey.

Results Table

a report of Rasch calculations.

Rigidity

when agents, objects and steps are all anchored, this is the logit inconsistency between the anchoring values, and is reported on the Iteration Screen and Results Table. 0 represents no inconsistency.

Row

a horizontal line of data on a Spreadsheet, usually used, in the Input Grid, to represent all responses by a particular object. The top row of each spreadsheet may be reserved for Rasch control information.

Rule-of-thumb

A tentative suggestion that is not a requirement nor a scientific formula, but is based on experience and inference from similar situations. Originally, the use of the thumb as a unit of measurement.

Sample

the persons (or items) included in this analysis

Scale

the quantitative representation of a variable.

Scalogram

Picture of the data in which the persons (rows) and items (columns) are arranged by marginal raw scores.

Score points

the numerical values assigned to responses when summed to produce a score for an agent or object.

Scoring key

The list of correct responses to multiple-choice (MCQ) items.

Scree plot

Plot showing the fraction of total variance in the data in each variance component.

Separation

the ratio of sample or test standard deviation, corrected for estimation error, to the average estimation error.

This is the number of statistically different levels of performance that can be distinguished in a normal distribution with the same "true" S.D. as the current sample. Separation = 2: high measures are statistically different from low measures.
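A minimal sketch of the computation (the measures and standard errors are hypothetical):

```python
import math

def separation(measures, standard_errors):
    """'True' standard deviation (observed S.D. corrected for estimation
    error) divided by the root-mean-square estimation error (RMSE)."""
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / n
    error_var = sum(se ** 2 for se in standard_errors) / n
    true_sd = math.sqrt(observed_var - error_var)
    rmse = math.sqrt(error_var)
    return true_sd / rmse

b = [-2.0, -1.0, 0.0, 1.0, 2.0]     # hypothetical measures (logits)
se = [0.5] * 5                      # hypothetical standard errors
print(round(separation(b, se), 2))  # 2.65
```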

Specification

Winsteps and Facets: A control-variable and its value, e.g., "Name1=17"

Stack

Analyzing the responses of the same person to multiple administrations of the same test as though they were made by separate persons, by "stacking" the person records in one long data file, cf. Rack.

Standard Deviation

the root mean square of the differences between the calculated logits and their mean.

Standard Error

an estimated quantity which, when added to and subtracted from a logit measure or calibration, gives the least distance required before a difference becomes meaningful.

Step calibration

Step difficulty

Rasch-Andrich threshold. Location on the latent variable (relative to the center of the rating scale) where adjacent categories are equally probable.

Steps

the transitions between adjacent categories ordered by the definition of the variable.

Strata

= (4*Separation+1)/3. This is the number of statistically different levels of performance that can be distinguished in a normal distribution with the same "true" S.D. as the current sample, when the tails of the normal distribution are due to "true" measures, not measurement error. Strata=3: very high, middle, and very low measures can be statistically distinguished.
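The formula above as a trivial Python sketch:

```python
def strata(separation):
    """Strata = (4 * Separation + 1) / 3: the number of statistically
    distinct performance levels when the distribution tails reflect
    'true' measures rather than measurement error."""
    return (4 * separation + 1) / 3

print(strata(2.0))  # 3.0 : three statistically distinguishable levels
```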

Sufficient statistic

A statistic (a number) which contains all the information in the data from which to estimate the value of a parameter.

Suffix

The letters added to a file name which specify the file format, e.g., ".txt" means "text file". If you do not see the suffix letters, instruct Windows to display them. See the Lesson 1 Appendix.

Table

Lists of words and numbers, arranged in columns, usually surrounded by "|".

Targeted

when the item difficulty is close to the person ability, so that the probability of success on a dichotomous item is near to 50%, or the expected rating is near to the center of the rating scale.

Targeting

Choosing items with difficulty equal to the person ability.

Task bar

This shows the Windows programs at the bottom of your computer screen

Template

a specially formatted input file.

Test length

The number of items in the test

Test reliability

The reliability (reproducibility) of the measure (or raw score) ordering (hierarchy) according to this sample for this test. The reported reliability is an estimate of (true variance)/(observed variance), as also are Cronbach Alpha and KR-20.

TOP

The value shown in the Results Table for an agent on which no objects were successful (so it was of top difficulty), or for an object which succeeded on every agent (so it was of top ability)

Top Category

the response category at which maximum performance is manifested.

UCON

the unconditional (or "joint" JMLE) maximum likelihood estimation formula, used by some Rasch programs for the second part of the iteration process.

Underfit

The data are too unpredictable. The data underfit the model. This may be because of excessive guessing, or contradictory dimensions in the data.

UNSURE

Rasch was unable to calibrate this data and treated it as missing.

Unweighted

the situation in which all residuals are given equal significance in fit analysis, regardless of the amount of the information contained in them.

Variable

the idea of what we want to measure. A variable is defined by the items or agents of measurement used to elicit its manifestations or responses.

Weighted

the adjustment of a residual for fit analysis, according to the amount of information contained in it.

Zero score

Every response "incorrect" or the minimum possible score. Every observed response in the lowest category.

&END

The end of the list of Winsteps control variables

&INST

The beginning of the list of Winsteps control variables. This line is optional.


Help for Facets Rasch Measurement Software: www.winsteps.com Author: John Michael Linacre.
 
