Soc 497 - Research Methods » Spring 2019 » Chapter 7


Question #1
A careful, systematic definition of a construct that is explicitly written down.
A.   Discrete Variables
B.   Conceptual Definition
C.   Conceptualization
D.   Operational Definition
Question #2
The process of moving from a construct's conceptual definition to specific activities or measures that allow a researcher to observe it empirically.
A.   Conceptualization
B.   Operationalization
C.   Stability Reliability
D.   Conceptual Definition
Question #3
The definition of a variable in terms of the specific operations or actions a researcher carries out to measure it.
A.   Conceptual Definition
B.   Operational Definition
C.   Face Validity
D.   Discrete Variables
Question #4
Reliability across time: a measure that yields consistent results at different points in time, assuming that what is being measured does not change.
A.   Representative Reliability
B.   Conceptual Definition
C.   Stability Reliability
D.   Equivalence Reliability
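
Stability reliability (Question #4) is often checked with a test-retest design: the same measure is given to the same respondents at two points in time and the two sets of scores are correlated. A minimal Python sketch, using invented scores, illustrates the idea:

# Hedged sketch: test-retest check of stability reliability. The same
# (invented) respondents answer the same attitude item at two time points;
# a correlation near 1.0 suggests the measure is stable over time.
from statistics import correlation  # Python 3.10+

time1 = [4, 5, 3, 2, 5, 4, 1, 3]  # hypothetical scores, wave 1
time2 = [4, 4, 3, 2, 5, 5, 1, 3]  # same respondents, wave 2

print(f"test-retest correlation: {correlation(time1, time2):.2f}")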
Question #5
A measure that yields consistent results for various groups or subpopulations.
A.   Representative Reliability
B.   Stability Reliability
C.   Equivalence Reliability
D.   Face Validity
Question #6
A measure that yields consistent results using different specific indicators, assuming that all measure the same thing.
A.   Stability Reliability
B.   Representative Reliability
C.   Equivalence Reliability
D.   Content Validity
Question #7
All of the following are ways to improve reliability EXCEPT:
A.   Conduct extensive research on the topic
B.   Clearly conceptualize all constructs
C.   Use multiple indicators of a variable
D.   Use pilot studies and replication
E.   Increase the level of measurement
Question #8
A type of measurement validity in which an indicator makes sense as a measure of a construct when judged by others in the scientific community.
A.   Face Validity
B.   Content Validity
C.   Criterion Validity
D.   Convergent Validity
Question #9
A type of measurement validity that requires that a measure represent all aspects of the conceptual definition of a construct.
A.   Face Validity
B.   Convergent Validity
C.   Criterion Validity
D.   Content Validity
Question #10
A type of measurement validity that uses some standard or criterion to indicate a construct accurately.
A.   Construct Validity
B.   Discriminant Validity
C.   Predictive Validity
D.   Criterion Validity
Question #11
When an indicator predicts future events that are logically related to the construct.
A.   Predictive Validity
B.   Construct Validity
C.   Criterion Validity
D.   Face Validity
Question #12
A type of measurement validity in which an indicator is associated with a preexisting indicator that has already been judged to be valid.
A.   Discriminant Validity
B.   Concurrent Validity
C.   Content Validity
D.   Discriminant Validity
Question #13
A type of measurement validity that relies on some independent, outside verification.
A.   Construct Validity
B.   Construct Validity
C.   Discriminant Validity
D.   Predictive Validity
Question #14
A type of measurement validity in which multiple indicators of the same construct act alike or operate in similar ways.
A.   Discriminant Validity
B.   Face Validity
C.   Construct Validity
D.   Convergent Validity
Question #15
A type of measurement validity for multiple indicators based on the idea that indicators of different or opposing constructs should diverge, that is, not be strongly associated with one another.
A.   Construct Validity
B.   Construct Validity
C.   Discriminant Validity
D.   Discriminant Validity
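
Convergent and discriminant validity (Questions #14 and #15) are often examined by comparing correlations: indicators of the same construct should correlate strongly, while indicators of different constructs should not. A minimal sketch with invented scores:

# Hedged sketch: invented indicator scores. Indicators of the same construct
# should correlate strongly (convergent validity); an indicator of a
# different construct should correlate weakly (discriminant validity).
from statistics import correlation  # Python 3.10+

trust_gov_a = [5, 4, 2, 1, 4, 3, 2, 5]  # hypothetical indicator 1 of "trust in government"
trust_gov_b = [5, 5, 2, 1, 3, 3, 1, 4]  # hypothetical indicator 2 of the same construct
tv_hours    = [3, 1, 2, 3, 4, 1, 3, 2]  # hypothetical indicator of an unrelated construct

print("convergent:   r =", round(correlation(trust_gov_a, trust_gov_b), 2))  # high here (~0.90)
print("discriminant: r =", round(correlation(trust_gov_a, tv_hours), 2))     # near zero here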
Question #16
Variables that are measured on a continuum and have a large number of values/attributes.
A.   Content Validity
B.   Exhaustive Attributes
C.   Continuous Variables
D.   Likert Scales
Question #17
Variables that have a relatively fixed and limited set of values/attributes.
A.   Mutually Exclusive Attributes
B.   Discrete Variables
C.   Construct Validity
Question #18
A system for organizing information in the measurement of variables.
A.   Mutually Exclusive Attributes
B.   Content Validity
C.   Levels of Measurement
D.   Content Validity
Question #19
The lowest, least precise level of measurement for which there is a difference in type only among the categories/attributes of a variable.
A.   Criterion Validity
B.   Continuous Variables
C.   Ordinal Level
D.   Nominal Level
Question #20
A level of measurement that identifies a difference among categories/attributes of a variable and allows the categories to be rank ordered.
A.   Ordinal Level
B.   Mutually Exclusive Attributes
C.   Interval Level
D.   Continuous Variables
Question #21
A level of measurement that identifies differences among variable attributes, ranks categories, and measures the distance between categories, but has no true zero.
A.   Interval Level
B.   Nominal Level
C.   Discrete Variables
D.   Ratio Level
Question #22
The highest, most precise level of measurement; variable attributes can be rank ordered, the distance between them precisely measured, and there is an absolute zero.
A.   Discrete Variables
B.   Nominal Level
C.   Ordinal Level
D.   Ratio Level
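
Questions #19 through #22 describe the four levels of measurement. A small sketch with hypothetical example variables summarizes which operations each level supports:

# Hedged sketch: hypothetical variables at each level of measurement and
# the operations that level supports.
examples = {
    "nominal":  ("religious affiliation", ["categorize"]),
    "ordinal":  ("letter grade (A-F)", ["categorize", "rank"]),
    "interval": ("temperature in Celsius", ["categorize", "rank", "measure distance"]),
    "ratio":    ("years of schooling", ["categorize", "rank", "measure distance", "true zero / form ratios"]),
}

for level, (variable, operations) in examples.items():
    print(f"{level:>8}: {variable:<22} -> {', '.join(operations)}")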
Question #23
A principle of good measurement: the attributes or categories of a measure are organized so that each response fits into only one category, with no overlap.
A.   Nominal Level
B.   Interval Level
C.   Mutually Exclusive Attributes
D.   Ordinal Level
Question #24
The principle that attributes or categories in a measure should provide a category for all possible responses.
A.   Nominal Level
B.   Ratio Level
C.   Exhaustive Attributes
D.   Ordinal Level
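
Questions #23 and #24 describe two principles of good measurement. A brief sketch with hypothetical age brackets shows a coding scheme that is both mutually exclusive (each age fits exactly one bracket) and exhaustive (every age fits some bracket):

# Hedged sketch: hypothetical age brackets for a survey item. Each age maps
# to exactly one bracket (mutually exclusive) and every age maps to some
# bracket (exhaustive, thanks to the open-ended top category).
brackets = [(0, 17), (18, 34), (35, 64), (65, float("inf"))]  # inclusive bounds

def categorize(age):
    matches = [b for b in brackets if b[0] <= age <= b[1]]
    assert len(matches) == 1, f"age {age} fits {len(matches)} brackets"
    return matches[0]

for age in (0, 17, 18, 64, 65, 99):
    print(age, "->", categorize(age))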
Question #25
A measure in which a researcher wants to capture the intensity, direction, level, or potency of a variable along a continuum.
A.   Scale
B.   Concurrent Validity
C.   Exhaustive Attributes
D.   Mutually Exclusive Attributes
Question #26
These scales are widely used in survey research. They were developed in the 1930s by Rensis Likert to provide an ordinal-level measure of a person's attitude.
A.   Predictive Validity
B.   Likert Scales
C.   Construct Validity
D.   Exhaustive Attributes
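
A Likert item (Question #26) typically offers ordered response categories, such as Strongly Disagree through Strongly Agree, that are coded numerically and often summed across items into an attitude index. A minimal sketch with invented responses:

# Hedged sketch: a hypothetical five-point Likert item and a summed
# attitude score across several items. Labels and data are invented.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

responses = ["Agree", "Strongly agree", "Neutral", "Agree"]  # one respondent, four items
score = sum(LIKERT[r] for r in responses)
print(f"summed attitude score: {score} (possible range {len(responses)} to {len(responses) * 5})")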
