Psychology 321 - Psychology Research Methods » Spring 2022 » Scale Reliability & Validity

Question #1
Which of the following is NOT true about reliability?
A.   An observed score is composed of a true score plus an error score.
B.   All these are true.
C.   Unreliable measures produce results that are meaningless.
D.   Reliability is not estimated; it is measured.
E.   There are multiple sources of error.
Question #2
Which of the following is NOT true about Cronbach's alpha (one form of internal consistency reliability)?
A.   Cronbach's alpha is the most popular measure of internal consistency.
B.   For widely used scales, the Cronbach's alpha should be .80 or above.
C.   In general, more items in a scale result in a higher Cronbach's alpha.
D.   All these are true.
E.   Cronbach's alphas range from 0 to 1, with scores closer to zero indicating lower reliability.
Question #3
Which of the following is NOT true about Cronbach's alpha (one form of internal consistency reliability)?
A.   A Cronbach's alpha score is generally lower than the actual reliability, thus, it is considered a conservative estimate.
B.   For an exploratory study, a Cronbach's alpha above .70 is considered acceptable.
C.   Cronbach's alphas are often used for Likert-type scales.
D.   For standardized test scores, a Cronbach's alpha of .90 or greater is recommended.
E.   All these are true.
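Questions #2 and #3 describe properties of Cronbach's alpha. As an illustrative sketch (not part of the quiz itself), the coefficient can be computed from a respondents × items score matrix; the function name and sample data below are hypothetical:

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-type responses: 4 respondents x 3 items
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [3, 3, 3]]
alpha = cronbach_alpha(scores)
```

Because the multiplier k/(k − 1) grows with the number of items, adding items tends to raise alpha, consistent with option C in Question #2.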
Question #4
Which of the following is NOT true?
A.   A split-halves reliability assesses whether scores on half the items relate to scores on the other half of the items.
B.   All these are true.
C.   An omega coefficient is probably better with ordinal level data than a Cronbach's alpha.
D.   A parallel-forms reliability assesses whether two different scales of the same construct are related to each other.
E.   A Kuder-Richardson reliability is designed for scales using dichotomous items.
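Question #4 mentions split-halves reliability. A minimal sketch of how it might be estimated, assuming an odd/even item split and the Spearman-Brown correction; the function name is hypothetical:

```python
import numpy as np

def split_half_reliability(items) -> float:
    """Split-half reliability with Spearman-Brown correction,
    splitting items into odd- and even-numbered halves."""
    items = np.asarray(items, dtype=float)
    half1 = items[:, 0::2].sum(axis=1)   # odd-numbered items
    half2 = items[:, 1::2].sum(axis=1)   # even-numbered items
    r = np.corrcoef(half1, half2)[0, 1]  # correlation between halves
    return 2 * r / (1 + r)               # Spearman-Brown correction
```

The Spearman-Brown step corrects for the fact that each half is only half as long as the full scale, so the raw half-to-half correlation understates the full scale's reliability.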
Question #5
Which of the following is NOT true?
A.   All these are true.
B.   Construct validity is the most rigorous validity test.
C.   Because face validity is so basic, it can be skipped.
D.   Expert opinion is often used to establish content validity.
E.   Content validity assesses how well the items represent the entire universe of items from which they are drawn.
Question #6
Measures how much the items in a scale relate to each other.
A.   Predictive validity
B.   Construct validity
C.   Convergent validity
D.   Internal consistency reliability
E.   Test-retest reliability
Question #7
Assesses the degree to which measures that should not be related are actually unrelated.
A.   Discriminant validity
B.   Interrater / Interobserver reliability
C.   Construct validity
D.   Convergent validity
E.   Scaling
Question #8
Measures whether theoretically related constructs are significantly correlated.
A.   Interrater / Interobserver reliability
B.   Construct validity
C.   Convergent validity
D.   Test-retest reliability
E.   Predictive validity
Question #9
The art of constructing an instrument that connects abstract constructs to measurable items.
A.   Test-retest reliability
B.   Construct validity
C.   Predictive validity
D.   Internal consistency reliability
E.   Scaling
Question #10
Measures consistency of agreement between multiple raters on a phenomenon.
A.   Interrater / Interobserver reliability
B.   Discriminant validity
C.   Internal consistency reliability
D.   Convergent validity
E.   Scaling
Question #11
A measure of how well a measurement instrument assesses an underlying construct.
A.   Face validity
B.   Construct validity
C.   Test-retest reliability
D.   Content validity
E.   Concurrent validity
Question #12
Measures how stable a score on a scale is over time.
A.   Interrater / Interobserver reliability
B.   Test-retest reliability
C.   Discriminant validity
D.   Scaling
E.   Convergent validity
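Question #12's test-retest reliability is typically estimated as the Pearson correlation between two administrations of the same scale. A minimal sketch with hypothetical scores:

```python
import numpy as np

# Hypothetical totals for the same five respondents,
# measured at two time points several weeks apart
time1 = np.array([10.0, 14.0, 9.0, 12.0, 15.0])
time2 = np.array([11.0, 13.0, 10.0, 12.0, 16.0])

# test-retest reliability = correlation across administrations
r_test_retest = np.corrcoef(time1, time2)[0, 1]
```

A high positive correlation indicates that respondents' scores are stable over time; how high it should be depends on how stable the underlying construct is expected to remain.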
Question #13
Measures whether a measurement instrument seems to be measuring the underlying construct to people beyond the researchers.
A.   Construct validity
B.   Convergent validity
C.   Interrater / Interobserver reliability
D.   Content validity
E.   Face validity
Question #14
Measures how correlated a scale is to another scale that is supposedly measuring the same underlying construct.
A.   Concurrent validity
B.   Internal consistency reliability
C.   Predictive validity
D.   Scaling
E.   Face validity
Question #15
Dr. Nefario assesses whether his new stress scale correlates highly with a well-established stress scale.
A.   Face validity
B.   Concurrent validity
C.   Convergent validity
D.   Discriminant validity
Question #16
Professor John Nerdelbaum Frink, Jr. assesses whether his new stress scale correlates with rumination, anxiety, depression, and quality of life (all variables that should be related to stress).
A.   Face validity
B.   Discriminant validity
C.   Convergent validity
D.   Concurrent validity
Question #17
Dr. Alphonse Mephisto assesses whether his new stress scale seems like it is measuring stress by asking people in South Park what they think about the scale.
A.   Discriminant validity
B.   Convergent validity
C.   Face validity
D.   Concurrent validity
Question #18
Professor Hubert J. Farnsworth assesses whether his new stress scale is related to future stress.
A.   Face validity
B.   Convergent validity
C.   Discriminant validity
D.   Predictive validity