Welcome to the World of Psychological Research!
Ever wondered how psychologists actually "prove" their theories? They don't just guess; they use a specific toolkit of methods to investigate the human mind and behavior. This chapter is all about those tools. Think of yourself as a detective learning the science behind the investigation. Don't worry if it seems like a lot of terms at first—we'll break them down step-by-step!
1. Designing Your Research: The Blueprint
Before a psychologist starts, they need a plan. This starts with identifying variables—things that can change or be changed.
Independent vs. Dependent Variables
Imagine you want to see if drinking coffee helps students perform better on a memory test.
The Independent Variable (IV): This is the thing you change or manipulate. In our example, it's the amount of coffee given to the students.
The Dependent Variable (DV): This is what you measure. In our example, it's the score on the memory test.
Quick Review: The IV is the "Cause" and the DV is the "Effect."
The "Party Crashers": Extraneous Variables
Sometimes, other things can mess up your results. These are extraneous variables. There are two main types:
1. Situational Variables: Things in the environment (e.g., a loud noise during the test).
2. Participant Variables: Differences between the people in the study (e.g., one student naturally has a better memory than others).
How to Keep Control
To keep the research fair, psychologists use these tricks:
• Standardised procedures: Keeping everything exactly the same for every person (same room, same instructions).
• Counterbalancing: Half the group does Task A then Task B, the other half does Task B then Task A. This cancels out "order effects" (like practice or fatigue).
• Randomisation: Picking things by chance (like flipping a coin) to avoid bias.
• Single-blind: The participant doesn't know the aim of the study (or which condition they are in).
• Double-blind: Neither the participant nor the researcher knows who is in which group. This is the "gold standard" to prevent bias!
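If seeing it as code helps, here is a tiny Python sketch of randomisation in action. (The function and participant names are made up for illustration; this is just one simple way to allocate by chance.)

```python
import random

def randomly_allocate(participants, conditions=("coffee", "water")):
    """Shuffle the participants, then deal them alternately into the
    conditions -- allocation is decided by chance, not by the researcher."""
    pool = list(participants)
    random.shuffle(pool)  # chance, not choice: this is what avoids bias
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

groups = randomly_allocate(["Amy", "Ben", "Cal", "Dee"])
print(groups)  # e.g. {'coffee': ['Dee', 'Amy'], 'water': ['Ben', 'Cal']}
```

Because the shuffle is random, the exact groups change every run, but each group always ends up the same size.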
Key Takeaway: A good experiment changes the IV, measures the DV, and controls extraneous variables to ensure a "fair test."
2. Hypotheses: Making a Prediction
A hypothesis is a clear, testable statement about what you think will happen.
• Null Hypothesis: Predicts that nothing will happen or there will be no relationship. (e.g., "Drinking coffee will not affect test scores.")
• Alternative Hypothesis: Predicts that there will be a difference or relationship. (e.g., "Students who drink coffee will score higher than those who don't.")
3. Sampling: Choosing Your Participants
You can't study everyone in the world! You have a Target Population (the whole group you are interested in) and you pick a Sample (the small group you actually study).
Sampling Methods:
• Random Sampling: Everyone in the target population has an equal chance of being picked (like pulling names from a hat).
Strength: Very fair/unbiased. Weakness: Can take a long time, and by chance you could still end up with an unrepresentative sample.
• Stratified Sampling: The sample reflects the proportions of people in the target population (e.g., if the population is 60% girls, your sample is 60% girls).
Strength: Highly representative. Weakness: Very complicated and time-consuming to organise.
• Volunteer Sampling: People offer to take part (e.g., replying to an advert).
Strength: Easy to get people. Weakness: "Volunteer bias"—only certain types of people might sign up.
• Opportunity Sampling: Using whoever is available at the time (e.g., asking people in the canteen).
Strength: Quick and easy. Weakness: Not very representative of the whole population.
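Two of these methods are easy to sketch in Python. Below is an illustrative (not official!) version of random and stratified sampling, using the 60% girls example from above:

```python
import random

def random_sample(population, n):
    """Every member has an equal chance -- like pulling names from a hat."""
    return random.sample(population, n)

def stratified_sample(strata, n):
    """Sample from each subgroup in proportion to its size.
    `strata` maps a group label (e.g. 'girls') to its members."""
    total = sum(len(members) for members in strata.values())
    sample = []
    for label, members in strata.items():
        k = round(n * len(members) / total)  # keep the population's proportions
        sample.extend(random.sample(members, k))
    return sample

population = {"girls": [f"G{i}" for i in range(60)],   # 60% of the population
              "boys":  [f"B{i}" for i in range(40)]}   # 40% of the population
picked = stratified_sample(population, 10)
print(len(picked))  # 10 people: 6 girls + 4 boys, matching the 60/40 split
```

Notice how the stratified version needs to know the make-up of the whole population first, which is exactly why it is more complicated to organise than a simple random sample.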
4. Experimental Designs: How to Organise Your Groups
How do you split your participants into groups?
1. Independent Measures: Different people in each group (Group A does coffee, Group B does water).
2. Repeated Measures: The same people do both conditions (Everyone does water on Monday and coffee on Tuesday).
3. Matched Pairs: You pair up two similar people (e.g., two people with the same IQ) and put one in Group A and one in Group B.
Common Mistake to Avoid: Don't confuse "Sampling" (how you get the people) with "Experimental Design" (how you sort them into groups once you have them)!
5. Reliability and Validity: The "Big Two"
Psychologists always ask: "Can I trust these results?"
• Reliability = Consistency. If you did the study again, would you get the same results? (Think of a bathroom scale: if it gives you a different weight every 5 minutes, it's not reliable).
• Validity = Accuracy/Truth. Is the study measuring what it claims to measure? (If you use a history test to measure math ability, it's not valid).
Key Takeaway: You want your research to be both consistent (Reliable) and truthful (Valid).
6. Ethics: Doing the Right Thing
Psychologists must follow strict rules to protect participants. Use the mnemonic "I Do Care Really Pointedly" to remember them:
• Informed Consent: Participants must agree to take part after knowing what will happen.
• Deception: Avoid misleading participants; if deception is absolutely necessary for the study, they should be debriefed (told the truth) afterwards.
• Confidentiality: Keep names and personal data private.
• Right to Withdraw: Participants can leave at any time.
• Protection of Participants: No physical or mental harm allowed.
7. The Methods Themselves
Experiments:
• Laboratory: Controlled environment. (High control, but feels "fake").
• Field: Real-world setting. (More natural behavior, but harder to control variables).
• Natural: The researcher doesn't change the IV; it happens on its own (e.g., studying the effect of a natural disaster).
Other Methods:
• Questionnaires: Written questions. Closed-ended questions (Yes/No) give Quantitative data (numbers). Open-ended questions give Qualitative data (words/descriptions).
• Interviews: Face-to-face. Can be Structured (set questions), Unstructured (casual conversation), or Semi-structured (a mix).
• Correlation: Looking for a relationship between two variables. Important: Correlation does not prove that one thing caused the other!
• Case Study: An in-depth look at one unique person or small group.
• Observation: Watching and recording behavior.
8. Data Analysis: Making Sense of the Numbers
Don't let the math scare you! Here is what you need for the exam:
Descriptive Statistics
• Mean: The average. Add all scores together and divide by the number of scores.
• Median: The middle score when they are in order.
• Mode: The most common score.
• Range: The difference between the highest and lowest score (Highest minus Lowest).
• Normal Distribution: A "bell-shaped" curve where most people score in the middle.
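Python's built-in statistics module can check all four of these calculations for you. A quick sketch with a made-up set of test scores:

```python
import statistics

scores = [4, 7, 7, 9, 13]

mean = statistics.mean(scores)          # total / count = 40 / 5 = 8
median = statistics.median(scores)      # middle score when sorted = 7
mode = statistics.mode(scores)          # most common score = 7
value_range = max(scores) - min(scores) # highest minus lowest = 13 - 4 = 9

print(mean, median, mode, value_range)
```

Notice that the mean (8) and the median (7) don't have to agree: the high score of 13 pulls the mean upwards, which is why it's worth knowing more than one "average".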
Math Skills
You may need to calculate percentages, fractions, and ratios. You should also be able to use standard form (e.g., \( 1.2 \times 10^3 \)) and significant figures.
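You can practise these maths skills in Python too. The numbers below are just examples; the `.2g` format code is one way to round to 2 significant figures:

```python
# Percentage: part / whole * 100
score, total = 17, 20
percentage = score / total * 100        # 85.0

# Standard form: 1200 written as 1.2 x 10^3 (Python uses "e" notation)
number = 1.2e3                          # 1200.0

# Significant figures: round 0.004271 to 2 s.f.
# (the 'g' format spec keeps significant figures, not decimal places)
rounded = float(f"{0.004271:.2g}")      # 0.0043

print(percentage, number, rounded)
```

In an exam you'd do these by hand, of course, but checking your working this way is a good habit.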
Types of Data
• Primary Data: Data you collected yourself for your own study.
• Secondary Data: Data collected by someone else (e.g., from a book or website).
• Quantitative: Numerical data (numbers). Easy to graph.
• Qualitative: Descriptive data (words). Rich in detail.
Graphs to Know
• Bar Charts: For different categories.
• Histograms: For continuous data (the bars touch).
• Scatter Diagrams: Used for Correlations to show a relationship between two variables.
Quick Review Box:
Mean: Total / Count
Range: High - Low
IV: What I change
DV: What I measure
Don't worry if this seems tricky at first! The more you practice identifying IVs and DVs in different studies, the easier it becomes. You've got this!