Psychology 321 - Psychology Research Method » Spring 2022 » Scale Reliability & Validity
Question #1
Which of the following is NOT true about reliability?
A.
All these are true.
B.
Unreliable measures produce results that are meaningless.
C.
The observed score is computed from a true score plus an error score.
D.
There are multiple sources of error.
E.
Reliability is not estimated, it is measured.
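For reference, the true-score-plus-error idea these options draw on comes from classical test theory, which is usually written as

$$X = T + E, \qquad \rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E},$$

where $X$ is the observed score, $T$ the true score, $E$ the error, and $\rho_{XX'}$ the reliability, i.e., the proportion of observed-score variance attributable to the true score.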
Question #2
Which of the following is NOT true about Cronbach's alpha (one form of internal consistency reliability)?
A.
Cronbach's alphas range from 0 to 1, with scores closer to zero indicating lower reliability.
B.
For widely used scales, the Cronbach's alpha should be .80 or above.
C.
All these are true.
D.
In general, more items in a scale result in a higher Cronbach's alpha.
E.
Cronbach's alpha is the most popular measure of internal consistency.
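As an aside, Cronbach's alpha can be computed directly from item-level responses. A minimal Python sketch (the function name, the NumPy dependency, and the example data are illustrative assumptions, not part of the course materials):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total (summed) score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering a 4-item Likert-type scale
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(responses), 2))  # values closer to 1 indicate higher internal consistency
```

Adding more (reasonably correlated) items tends to push this value upward, which is the point option D is making.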
Question #3
Which of the following is NOT true about Cronbach's alpha (one form of internal consistency reliability)?
A.
Cronbach's alphas are often used for Likert-type scales.
B.
For standardized test scores, a Cronbach's alpha of .90 or greater is recommended.
C.
For an exploratory study, a Cronbach's alpha above .70 is considered acceptable.
D.
All these are true.
E.
A Cronbach's alpha score is generally lower than the actual reliability; thus, it is considered a conservative estimate.
Question #4
Which of the following is NOT true?
A.
An omega coefficient is probably better with ordinal level data than a Cronbach's alpha.
B.
A split-halves reliability assesses whether scores on half the items relate to scores on the other half of the items.
C.
A parallel form reliability assesses whether two different scales of the same construct are related to each other.
D.
All these are true.
E.
A Kuder-Richardson reliability is designed for scales using dichotomous items.
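Similarly, the split-halves reliability in option B above is just the correlation between scores on two halves of the scale, usually stepped up to full-scale length with the Spearman-Brown formula. A rough Python sketch, assuming the same respondents-by-items layout as in the earlier example:

```python
import numpy as np

def split_half_reliability(items):
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    items = np.asarray(items, dtype=float)
    half1 = items[:, 0::2].sum(axis=1)    # total score on the odd-numbered items
    half2 = items[:, 1::2].sum(axis=1)    # total score on the even-numbered items
    r = np.corrcoef(half1, half2)[0, 1]   # correlation between the two half-scores
    return 2 * r / (1 + r)                # Spearman-Brown step-up to full-length reliability

responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
]
print(round(split_half_reliability(responses), 2))
```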
Question #5
Which of the following is NOT true?
A.
Content validity assesses how well the items represent the entire universe of items from which they are drawn.
B.
Because face validity is so basic, it can be skipped.
C.
Construct validity is the most rigorous validity test.
D.
Expert opinion is often used to establish content validity.
E.
All these are true.
Question #6
Measures how much the items in a scale relate to each other.
A.
Internal consistency reliability
B.
Construct validity
C.
Convergent validity
D.
Test-retest reliability
E.
Predictive validity
Question #7
Assesses the degree to which measures that should not be related are not actually related.
A.
Discriminant validity
B.
Scaling
C.
Construct validity
D.
Convergent validity
E.
Interrater / Interobserver reliability
Question #8
Measures whether theoretically related variables are significantly correlated.
A.
Convergent validity
B.
Test-retest reliability
C.
Interrater / Interobserver reliability
D.
Construct validity
E.
Predictive validity
Question #9
The art of constructing an instrument to measure abstract constructs by connecting them to measurable items.
A.
Internal consistency reliability
B.
Test-retest reliability
C.
Construct validity
D.
Scaling
E.
Predictive validity
Question #10
Measures consistency of agreement between multiple raters on a phenomenon.
A.
Discriminant validity
B.
Interrater / Interobserver reliability
C.
Internal consistency reliability
D.
Convergent validity
E.
Scaling
Question #11
A measure of how well a measurement instrument assesses an underlying construct.
A.
Content validity
B.
Test-retest reliability
C.
Face validity
D.
Construct validity
E.
Concurrent validity
Question #12
Measures how stable a score on a scale is over time.
A.
Test-retest reliability
B.
Scaling
C.
Interrater / Interobserver reliability
D.
Discriminant validity
E.
Convergent validity
Question #13
Measures whether a measurement instrument seems, to people beyond the researchers, to be measuring the underlying construct.
A.
Convergent validity
B.
Interrater / Interobserver reliability
C.
Construct validity
D.
Face validity
E.
Content validity
Question #14
Measures how correlated a scale is to another scale that is supposedly measuring the same underlying construct.
A.
Concurrent validity
B.
Internal consistency reliability
C.
Scaling
D.
Predictive validity
E.
Predictive validity
Question #15
Dr. Nefario assesses whether his new stress scale correlates highly with a well-established stress scale.
A.
Discriminant validity
B.
Convergent validity
C.
Concurrent validity
D.
Face validity
Question #16
Professor John Nerdelbaum Frink, Jr. assesses whether his new stress scale correlates with rumination, anxiety, depression, and quality of life (all variables that should be related to stress).
A.
Face validity
B.
Face validity
C.
Convergent validity
D.
Discriminant validity
Question #17
Dr. Alphonse Mephisto assesses whether his new stress scale seems like it is measuring stress by asking people in South Park what they think about the scale.
A.
Convergent validity
B.
Discriminant validity
C.
Face validity
D.
Concurrent validity
Question #18
Professor Hubert J. Farnsworth assesses whether his new stress scale is related to future stress.
A.
Face validity
B.
Discriminant validity
C.
Predictive validity
D.
Convergent validity