Welcome to the Psychologist's Toolkit!
This is one of the most important chapters in your OCR Psychology course. Think of Research Methods as a "toolkit." Just as a builder needs a hammer and a saw to build a house, a psychologist needs these methods to build our understanding of human behaviour. Don't worry if some of the terms seem a bit like a new language at first. Once you see how they work in real life, they'll start to make perfect sense! In this chapter, we will learn how to design, carry out, and analyse psychological research.
1. The Big Four: Research Methods
Psychologists use four main ways to collect data. Every study you learn this year will use one of these.
A. Experiments
In an experiment, the researcher changes one thing to see if it causes a change in another. It’s all about cause and effect.
Key Terms:
• Independent Variable (IV): The thing you change (e.g., the amount of caffeine given).
• Dependent Variable (DV): The thing you measure (e.g., how fast someone runs).
• Extraneous Variables: "Nuisance" variables that might mess up your results if you don't control them (like the temperature of the room).
Types of Experiments:
1. Laboratory Experiment: Done in a controlled environment.
Example: Testing memory in a quiet university lab.
2. Field Experiment: Done in a real-world setting.
Example: Testing if people help a stranger on a busy street.
3. Quasi-experiment: The researcher doesn't change the IV because it already exists (like age or gender).
Example: Comparing the memory of 10-year-olds vs. 50-year-olds.
B. Observations
Sometimes we just want to watch what people do without interfering.
• Naturalistic: Watching behaviour in its normal setting.
• Controlled: Watching behaviour in a set-up situation (like a play laboratory).
• Participant: The researcher joins in with the group they are watching.
• Non-participant: The researcher watches from the sidelines.
• Overt: Participants know they are being watched.
• Covert: Participants are watched in secret (undercover).
C. Self-Reports
This is simply asking people what they think or feel.
• Questionnaires: Written questions. Can use Open questions (explain in your own words) or Closed questions (Yes/No or tick boxes).
• Interviews:
◦ Structured: Set list of questions, no straying.
◦ Unstructured: A free-flowing conversation.
◦ Semi-structured: A mix of both.
D. Correlations
A correlation looks at the relationship between two things (co-variables).
Analogy: For a positive correlation, picture two people in a lift: they rise (and fall) together. For a negative correlation, picture a see-saw: as one side goes up, the other goes down.
• Positive Correlation: As one variable increases, the other increases (e.g., more revision usually means higher grades).
• Negative Correlation: As one increases, the other decreases (e.g., more time playing games might mean less time sleeping).
• No Correlation: No link at all (e.g., shoe size and intelligence).
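The strength and direction of a correlation can be captured in one number, the correlation coefficient, which runs from -1 (perfect negative) through 0 (no link) to +1 (perfect positive). Here is a minimal Python sketch of Pearson's coefficient; the data values are invented purely for illustration:

```python
def correlation(x, y):
    """Pearson's correlation coefficient: -1, 0, or +1 at the extremes."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # How far each pair of scores moves together around their means
    covariance = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    spread_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return covariance / (spread_x * spread_y)

# Hypothetical data: hours of revision vs. test score (they rise together)
print(correlation([1, 2, 3, 4], [40, 55, 70, 85]))  # a perfect positive (+1)

# Hypothetical data: hours gaming vs. hours sleeping (one up, one down)
print(correlation([1, 2, 3, 4], [8, 7, 6, 5]))      # a perfect negative (-1)
```

Real co-variables almost never give exactly +1 or -1; most results land somewhere in between, and the closer to zero, the weaker the relationship.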
Quick Review: The "Big Four"
• Experiments: Do something to see what happens (Cause/Effect).
• Observations: Watch what happens.
• Self-Report: Ask what happened.
• Correlations: See if two things happen together.
2. Planning Research: Aims and Hypotheses
Before starting, you need a plan!
• Research Aim: A general statement of what you want to study ("I want to see if music affects focus").
• Hypothesis: A clear, testable prediction.
• Null Hypothesis: Predicts no difference or no relationship ("Music will not affect focus").
• Alternative Hypothesis: Predicts there will be an effect.
◦ One-tailed (Directional): Predicts exactly which way it will go ("Music will improve focus").
◦ Two-tailed (Non-directional): Predicts a difference but doesn't say which way ("Music will change focus levels").
Mnemonic for Hypotheses:
One-tailed = One specific direction (Up or Down).
Two-tailed = It could go either way (Any change).
3. Sampling: Choosing your Participants
You can't study everyone in the world! You have a Target Population (the group you are interested in) and you pick a Sample (the people you actually test).
• Random Sampling: Everyone has an equal chance (like pulling names from a hat).
• Opportunity Sampling: Using whoever is available at the time (like asking people in the canteen).
• Self-selected (Volunteer) Sampling: People sign up themselves (responding to an advert).
• Snowball Sampling: You find one person, and they find their friends, who find their friends.
Common Mistake to Avoid: Don't assume "Random" means "haphazard." In Psychology, a random sample is a very specific, mathematical way of choosing people to make it fair!
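The difference between random and opportunity sampling can be made concrete in a few lines of Python. The population of 100 named students below is entirely invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the example is repeatable

# Target population: everyone we are interested in (100 hypothetical students)
target_population = [f"Student {i}" for i in range(1, 101)]

# Random sampling: every student has an equal chance of being chosen
random_sample = random.sample(target_population, 10)

# Opportunity sampling: whoever is available (here, the first 10 on the list)
opportunity_sample = target_population[:10]

print(random_sample)       # a different 10 each time the seed changes
print(opportunity_sample)  # always the same 10 people, so likely biased
```

Notice that `random.sample` is the "names from a hat" method: mathematical and fair, not haphazard.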
4. Data Recording and Analysis
Once you have your data, you need to make sense of it.
Levels of Data
1. Nominal: Data in categories (e.g., "Yes/No," "Smoker/Non-smoker").
2. Ordinal: Data that can be put in order or ranked (e.g., 1st, 2nd, and 3rd place in a race).
3. Interval: Data measured on a scale with equal gaps (e.g., temperature or time).
Descriptive Statistics
These summarize your data:
• Measures of Central Tendency: Mean (average), Median (middle value), and Mode (most common).
• Measures of Dispersion: Range (highest minus lowest) and Standard Deviation (how much scores spread out around the mean).
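Python's built-in `statistics` module can compute all of these summaries directly. The memory-test scores below are invented for illustration:

```python
import statistics

scores = [4, 8, 6, 5, 8, 7, 6, 8]  # hypothetical memory-test scores

# Measures of central tendency
print(statistics.mean(scores))    # average: 52 / 8 = 6.5
print(statistics.median(scores))  # middle of the sorted scores: 6.5
print(statistics.mode(scores))    # most common score: 8

# Measures of dispersion
print(max(scores) - min(scores))  # range: 8 - 4 = 4
print(statistics.pstdev(scores))  # standard deviation around the mean
```

Note that the range uses only the two extreme scores, while the standard deviation uses every score, which is why it is the more informative measure of spread.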
Inferential Statistics
These tell us if our results are "Significant" or just a lucky fluke.
Did you know? Psychologists usually look for a significance level of p ≤ 0.05. This means there is no more than a 5% probability that the results occurred by chance alone!
You need to know when to use specific tests (based on the type of data and design):
• Mann-Whitney U
• Wilcoxon Signed Ranks
• Chi-square
• Binomial Sign test
• Spearman’s Rho
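Of the tests above, Spearman's Rho is the most approachable to compute by hand: rank both co-variables, then apply the formula rho = 1 - 6Σd² / (n(n² - 1)), where d is the difference between each pair of ranks. A sketch with invented judges' rankings (no tied ranks, which keeps this simple formula valid):

```python
def spearmans_rho(ranks_a, ranks_b):
    """Spearman's rank correlation for data with no tied ranks."""
    n = len(ranks_a)
    # d = difference between each pair of ranks; square each, then sum
    sum_d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * sum_d_squared) / (n * (n ** 2 - 1))

# Hypothetical data: two judges rank the same five paintings 1st to 5th
judge_a = [1, 2, 3, 4, 5]
judge_b = [2, 1, 4, 3, 5]
print(spearmans_rho(judge_a, judge_b))  # 0.8, a strong positive agreement
```

The result is then compared against a critical-values table (at p ≤ 0.05) to decide whether it is significant.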
Key Takeaway:
Descriptive stats describe the data you have. Inferential stats infer whether your results apply to the whole population.
5. Methodological Issues: Reliability and Validity
How do we know if a study is actually "good"?
Reliability = Consistency
If you did the test again, would you get the same result?
Analogy: A bathroom scale is reliable if it gives you the same weight three times in a row. It’s "broken" (unreliable) if it gives you three different weights in five minutes!
• Inter-rater reliability: Do two different observers see the same thing?
• Test-retest: Doing the test again with the same people later.
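A simple way to check inter-rater reliability is to count how often two observers code the same behaviour the same way. A minimal sketch, with the observation codes invented for illustration:

```python
def percent_agreement(rater_1, rater_2):
    """Simple inter-rater reliability: % of observations coded identically."""
    agreements = sum(a == b for a, b in zip(rater_1, rater_2))
    return 100 * agreements / len(rater_1)

# Hypothetical codings of 10 observed behaviours ("A" = aggressive, "P" = passive)
rater_1 = ["A", "P", "A", "A", "P", "P", "A", "P", "A", "A"]
rater_2 = ["A", "P", "A", "P", "P", "P", "A", "P", "A", "A"]
print(percent_agreement(rater_1, rater_2))  # 9 of 10 agree -> 90.0
```

A high percentage (here 90%) suggests the behaviour categories are clearly defined; a low one suggests the raters are interpreting them differently.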
Validity = Accuracy/Truth
Are you measuring what you claim to be measuring?
• Internal Validity: Is the IV really causing the change in the DV? (Or is it a mess of extraneous variables?)
• Ecological Validity: Does the behaviour in the study reflect real life?
• Face Validity: Does the test "look like" it measures what it should at first glance?
6. Ethics: Staying Professional
Psychologists must follow the BPS Code of Ethics, which rests on four key principles:
1. Respect: Includes Informed Consent (telling them what they will do), Right to Withdraw (letting them leave), and Confidentiality (keeping names secret).
2. Competence: The researcher must be qualified.
3. Responsibility: Protection of participants (no physical or mental harm) and Debriefing (explaining the study afterward).
4. Integrity: Avoiding Deception (misleading participants) unless it is absolutely necessary and justified.
7. How Science Works
Psychology aims to be a science. This means we value:
• Objectivity: Keeping feelings/opinions out of it; using facts.
• Replicability: Can someone else copy your study and find the same thing?
• Falsification: Being able to prove a theory wrong.
• Standardisation: Keeping the procedure exactly the same for every participant so it’s fair.
Final Summary:
Research Methods is the "how" of Psychology. By mastering Experimental Designs, Sampling, Statistics, and Ethics, you can look at any study and decide if it is a strong piece of evidence or a flawed experiment. Don't worry if the statistical tests seem hard—focus on why we use them first!