PSY 402 TEST BANK EXAM 2 Q & A LATEST 2025-2026 GRADED A, Exams of Psychology

Typology: Exams

2024/2025

Available from 07/11/2025

BestieNurse


PSY 402 TEST BANK EXAM 2

QUESTIONS AND DETAILED ANSWERS

LATEST 2025-2026 GRADED A

"Subject Variables" - ANS
• Motivation
• Anxiety
• Illness
• Medications
• Hormones
• Sleep
• etc. -- it is possible that other such issues may end up affecting test scores

[di|poly]chotomous Issues - ANS
Pros:
• neutral, fair scoring
• Types of knowledge:
• Recall vs. Recognition
• Receptive vs. Expressive
• Skill =? test-taking ability
• Solution: Essay test format

4 Kinds of Validity - ANS Face, Content, Criterion, Construct

Accessing Knowledge - ANS Recalling information is different from Recognizing it
• Neuropsychology suggests different brain systems. Recall can be stronger or weaker than Recognition
• Issues for testing:
• What type of access is involved in polychotomous testing?
• Is it fair to test using a method which prefers one type over the other?

Advice from textbooks - ANS
• Don't use "all of the above" (80%)
• Don't use "none of the above" (75%)
• All choices should be plausible (70%)
• Avoid negative wording (55%)

All Validity is Construct Validity? - ANS Modern theory: only one type of validity -- Construct validity
• Other types of validity are just sub-types of Construct validity

Are your intestines too long? - ANS Expectancy effects are higher on tests with subjective scoring
• Cannell (1974): "yes" answers to physical symptoms increased when the interviewer gave an approving nod.
• "Yes" answers increased even to nonsense questions: "Do the ends of your hair itch?"
• Reactivity...
• Drift...
• Expectancies...

Behavioral Observation - ANS
• Deception and Malingering...
• The Low Base Rate / False Positive Paradox...

Category (Rating Scale) Format - ANS Similar to Likert format, but numbers are used instead
• Pros -- responses are more precise than with Likert scales (10 points vs. 5 or 6)
• Cons -- context effects are stronger
• Solution: clearly define endpoints
• Precision vs. Accuracy?

Category Example - ANS On a 1 to 10 scale, how much do you like your partner?
1 = Planning to break up, 2, 3, 4, 5, 6, 7, 8, 9,

10 = Planning to get married soon
• Issues:
• Unbalanced (is 5 or 6 the middle?)
• Hard to interpret: what does a "2" or "3" really mean?

Ceiling effect - ANS many scores near the top of the range of possible scores

Charles Spearman - ANS "The proof and measurement of association between two things" - Rho - correlation for Ordinal variables. Field: intelligence. Contributions: found that specific mental talents were highly correlated; concluded that all cognitive abilities share a common core, which he labeled 'g' (general ability)

Classical Test-Score Theory - ANS
• True score (T): the "actual" score that exists
• Observed score (X): score as measured by a test
• Error (E): difference between Observed and True score
• X = T + E
• E = X - T
• Assumptions: True scores have no variability. Errors are random (e.g. a normal distribution with mean of zero)
• Reliability = correlation between Observed score and True score

Confidence Interval - ANS "How likely is a true score to fall within a range"
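The classical test-score model above (X = T + E) is easy to simulate. This is a minimal sketch, with made-up numbers (SD of 15 for true scores, SD of 5 for errors): draw True scores and random zero-mean errors, then check that the Observed-True correlation behaves as the theory predicts.

```python
import random, math

random.seed(42)

# Simulate the classical model X = T + E (all parameters invented for illustration)
N = 10_000
true_scores = [random.gauss(100, 15) for _ in range(N)]     # T
errors      = [random.gauss(0, 5)    for _ in range(N)]     # E: random, mean zero
observed    = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# "Reliability = correlation between Observed score and True score":
# with SD(T) = 15 and SD(E) = 5, theory gives r(X, T) = 15 / sqrt(15**2 + 5**2), about 0.95
r_xt = pearson_r(observed, true_scores)
print(round(r_xt, 2))
```

Shrinking the error SD pushes the correlation toward 1; growing it pushes the correlation toward 0, which is the whole content of the model.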

psychological concepts (example: Intelligence / IQ)

• Problem: how to measure the validity of a test if the criterion can't be measured?

Construct Validity 2 - ANS Solution -- the world is complicated. In Psychology (as in other sciences) things can exist even if they aren't easy to measure.
• Method -- collect evidence for the construct via multiple methods, multiple sources, multiple subjects

Constructs & Measurement - ANS Psychology as "soft science"
• Construct: exists but can't be directly measured (examples)
• Measurement:
• "true value" - e.g. intelligence
• measured or observed value (e.g. IQ test score)
• discrepancy - "error"

Constructs, Factors, Facets - ANS Constructs may be "high" or "low" (also called "top" or "bottom")
• Top-level constructs are made of smaller constructs
• aka Factors, Facets, Dimensions, Domains...

Content Validity - ANS
• Do the test questions do a good job covering the content?

• Most related to educational settings (achievement/aptitude testing)
• E.g. does an Algebra test contain questions about Algebra?
• A logical, rather than statistical, argument
• Somewhat fuzzy definition
• Modern theories consider Content Validity a sub-set of other types of validity

Content validity - ANS Description: do the test questions cover the topic? Notes: logic and judgment - no stats to calculate

Content Validity 2 - ANS If a test is supposed to test a specific Construct, problems may arise:
• Construct underrepresentation: the test misses important information
• Construct-irrelevant variance: scores are influenced by outside factors (e.g. anxiety, reading comprehension, IQ, etc.)

Convergent Validity - ANS
• Multiple factors within a construct

(P = 1/M = 1/5 = 20%)
• A student takes the test, guesses on each item, and gets 20 correct (P*N = 0.2 * 100 = 20)
• Correction for guessing subtracts 1/(M-1) points for each wrong answer = 1/(5-1) = 1/4 = 0.25 points.
• Adjusted score?

Criterion Validity - ANS
• Criterion -- a well-defined measure of performance in the real world
• Criterion validity -- the correlation between a test score and the specific criterion
• Predictive vs. Concurrent:
• Predictive: High School SAT score (predictor) predicts later College GPA (criterion)
• Concurrent: work samples from mechanics

Criterion Validity - ANS Description: does the test predict a specific event? Notes: requires a well-defined criterion. Stat: Pearson's r correlation between test and criterion

Criterion-referenced Test - ANS Instead of an arbitrary criterion such as "70% = pass", use one with more validity.
  • Criteria = the learning outcome(s) desired
  • Method:
  • create a good test
  • give it to two groups of students
  • those who have had the material
  • those who have not
• Determine cut-point score from histogram

Deception - ANS People are very poor at detecting deception
  • Polygraph "Lie Detector" tests
  • invented in 1921 by John Larson (medical student and police officer in Berkeley CA)
  • developed to scientifically measure deception
• Theory: physiological responses happen when the subject lies, and can be measured objectively.

Definitions of Validity - ANS
• Agreement between test scores and the thing (construct) it claims to measure
  • Many other definitions; some confusing or incompatible with each other
• AERA/NCME (1985, 1999, 2012) "Standards for Educational and Psychological Testing"
  • One informal definition: Face Validity
  • Three formal definitions: Content, Criterion, Construct
• Point Biserial:
• p.b. correlation between item and test score
• low or negative values represent "bad" items

Disparate Impact - ANS
• aka "Adverse Impact"... disproportional... adverse effect... on a protected class

Disproportionate - ANS The proportion of a protected class affected by the behavior is different from a nonprotected class
  • 80% rule
  • Example
  • 50% of men are hired
  • 45% of women are hired
• 45/50 = 90%

Distractors? - ANS
• Too few distractors --> dichotomous
• Too many distractors --> slow, confusing
• Optimal is 3-5 distractors. Thus, most multiple-choice tests should have between 4 and 6 possible answers per question.
• Distractors should cover a wide range of abilities w/o being cute or trite

Divergent Validity - ANS Other factors (not part of a construct)
  • Should have low to zero correlation
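The Convergent vs. Divergent pattern in these cards can be illustrated with a small simulation. This is a sketch with invented variable names and numbers: two tests of the same construct should correlate highly, while a test of an unrelated construct should correlate near zero.

```python
import random, math

random.seed(7)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N = 5_000
anxiety   = [random.gauss(0, 1) for _ in range(N)]  # latent construct (unobservable)
shoe_size = [random.gauss(0, 1) for _ in range(N)]  # unrelated construct

# Two tests of the same construct: shared construct signal plus independent noise
test_a = [a + random.gauss(0, 0.5) for a in anxiety]
test_b = [a + random.gauss(0, 0.5) for a in anxiety]

print(round(pearson_r(test_a, test_b), 2))     # convergent: high (around 0.8 here)
print(round(pearson_r(test_a, shoe_size), 2))  # divergent: near zero
```

Both correlations come from the same formula; it is the pattern (high where the construct is shared, near zero where it is not) that supplies construct-validity evidence.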

Does Face Validity Matter? - ANS• Naive view = face validity

  • Tests with very little face validity...
  • what does the average test taker feel about the test?
  • motivation?
• confusion?

Domain Sampling - ANS (1) A sample of behaviors from all possible behaviors that could be indicative of a particular construct; (2) a sample of test items from all possible items that could be used to measure a particular construct.
• How to calculate r1T:
• r between any two tests
• r1j = average of all pairs

Domain Sampling - ANS
• Problem: no way to measure the True score / no possible way to measure every possible item
  • Sample a limited subset of items, do this in multiple ways
  • Create one or more tests
  • For two given tests, correlation between the two tests will be lower than the correlation between one test and the True score
  • r1t = √r1j
  • Different subjects
  • Different situation
• Different criterion

Example of a poor test item? - ANS What is 0.4 plus 0. (A) 0. (B) 0. (C) 0. (D).
• Is answering (A) better or worse than answering (D)?

Expectancies in Beh. Obs. - ANS Expectancy effects in Behavioral Observation situations are similar to those seen in Testing
  • Effects are subtle, small, but real and can be a significant problem in some contexts
• Effects seem to be largest when the Observer is rewarded for reporting certain behaviors.

Expectancy Effects - ANS Finding evidence biased towards a pre-existing hypothesis
• e.g. with a subject and an examiner, where the examiner gives a test to the subject, the examiner might ASSUME the student is smart and give them extra points

Expectancy Effects: Rosenthal (1966) - ANSRate faces on "Success" or "Failure"

  • All subjects get same faces, but
  • Half told faces were successful people.
• Result: ratings were about 1 point higher (on a 20-point scale)
  • Conclusion: expectation influences judgement
• Effects even seen when rating non-humans (e.g. rats)

Expectancy Effects: Testing - ANS Sattler et al. (1970): expectancy effects when rating an ambiguous response on an IQ test.
  • same response given to raters
  • Half told it is a "smart" child.
  • Results: "smart" children scored better.
• Sattler (1998): same result even when the test answer was not ambiguous.
• Literature review: results are inconsistent and small
• Conclusion: a small but real problem; design tests with clear scoring rules

External Criteria - ANS
• Internal Criteria = total test score
  • External Criteria = thing that actually matters (e.g. "do you crash the plane")
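The point-biserial item analysis mentioned in the validity cards above (the correlation between a dichotomous item and the total test score, where low or negative values flag "bad" items) can be sketched as follows. The response data here are invented purely for illustration.

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation; with a 0/1 variable this is the point-biserial r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: rows = examinees, columns = items (1 = correct, 0 = wrong).
# Items 0 and 1 track overall ability; item 2 is answered correctly by the LOW scorers.
responses = [
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 1],
]
totals = [sum(row) for row in responses]

# Item discrimination: correlate each item with the total score;
# negative values mean better students MISS the item -- a candidate "bad" item.
for item in range(3):
    scores = [row[item] for row in responses]
    r_pb = pearson_r(scores, totals)
    flag = "  <- candidate 'bad' item" if r_pb <= 0.2 else ""
    print(f"item {item}: r_pb = {r_pb:+.2f}{flag}")
```

On this toy data item 2 comes out negative and gets flagged; the 0.2 cutoff is an arbitrary choice for the sketch, not a standard value.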

single test reduces reliability

  • Methods:
  • Informal — remove items with poor face validity (chapter 5)
  • Statistical:
  • Discriminability Analysis (chapter 6)
• Factor Analysis (chapter 13)

Griggs v. Duke Power - 2 - ANS The Supreme Court found that if a test impacts different ethnic groups disparately, the business must demonstrate that the test is a "reasonable measure of job performance"
• In scientific terms: tests must be valid predictors of specific criteria.

Griggs v. Duke Power (1971) - ANS
• Group of 13 people employed as laborers -- sweeping & cleaning
  • Wanted to be promoted to next higher classification (coal handler)
  • Duke Power company required passing score on IQ test to be promoted
  • Of 95 employees at power station, 14 were Black, 13 of 14 were assigned sweeping/cleaning duties
  • Court case -- was the IQ test requirement valid or discriminatory?
• Supreme Court decision: "invalid"

Guessing : Expected Score - ANS The probability of getting any item correct using a random guessing strategy, P, is equal to 1 divided by the # of answers.
• On a dichotomous (T/F) test the probability P = 1/2 = 50% = 0.5
• On a multiple-choice test with M answers per question, the probability = 1/M. For a test with 4 answers per question, P = 1/4 = .25 = 25%
• Total score due to guessing = # of questions times average score per item, or N * P.
• Example: a 100-item test with 4 answers per question

Guessing : Probability - ANS
• M = # of answer choices per question
  • Pcorrect with random guessing = 1/M
  • On a dichotomous (T/F), P = ____
  • On a multiple choice test with M answers per question, the probability = ______
  • Total score from guessing:
• Nquestions x Pcorrect

How many choices in question? - ANS
• Research suggests optimal # of choices is between 4 and 7
  • Using up to 10 choices is OK if