Welcome to the Engine Room of Psychology!
Ever wondered how psychologists go from having a "hunch" to actually testing it scientifically? It all starts here, in Planning and Conducting Research. This is the most important part of the subject because even the most brilliant idea is useless if the study isn't planned correctly. Think of this chapter as the "blueprint" for building a house—if the blueprint is wrong, the house will fall down!
Don’t worry if some of these terms feel like a different language at first. We’ll break them down piece by piece with simple examples you can relate to.
1. Aims and Hypotheses: What are we looking for?
Before a psychologist starts, they need to know what they are doing and what they expect to find.
The Research Aim and Question
The research aim is a general statement of what the study is about.
Example: "To investigate if caffeine affects memory."
The research question is just that—a question!
Example: "Does drinking coffee help students remember words better?"
The Hypotheses (The Predictions)
A hypothesis is a testable statement. In your exam, you usually need to know four types:
- Null Hypothesis: This predicts no effect or no difference. It’s the "boring" prediction.
Example: "There will be no difference in memory scores between coffee drinkers and water drinkers."
- Alternative Hypothesis: This predicts that there will be an effect. It comes in two versions:
- One-tailed (Directional) Hypothesis: This predicts the direction of the difference.
Example: "Students who drink coffee will remember more words than those who drink water."
- Two-tailed (Non-directional) Hypothesis: This predicts a difference, but you aren't sure which way it will go.
Example: "There will be a difference in memory scores between students who drink coffee and students who drink water."
Memory Aid: Think of a dog’s tail! A one-tailed dog points in one specific direction (e.g., "It will get better"). A two-tailed dog is wagging all over the place—it knows something is happening, but it’s not pointing in just one direction!
Quick Review: Use a one-tailed hypothesis if previous research suggests what will happen. Use a two-tailed hypothesis if it’s a brand-new area of study.
2. Populations and Sampling: Who are we testing?
You can't test every single person on Earth, so you need a sample.
Key Terms
- Target Population: The whole group you are interested in (e.g., "All A Level students in the UK").
- Sample: The small group of people from that population who actually take part in your study.
Sampling Techniques
- Random Sampling: Every member of the population has an equal chance of being picked (like pulling names out of a hat).
Strength: Very fair and usually representative.
Weakness: Hard to do with large populations.
- Opportunity Sampling: You just ask whoever is available at the time (e.g., standing in the canteen and asking people who walk past).
Strength: Quick and easy.
Weakness: Biased! You only get a certain type of person who happens to be there.
- Self-selected (Volunteer) Sampling: People sign up themselves, usually in response to an advert.
Strength: People are willing to help and won't drop out easily.
Weakness: "Volunteer bias"—volunteers are often more motivated or helpful than the "average" person.
- Snowball Sampling: You find one person, and they "recruit" their friends, who recruit their friends.
Strength: Great for finding "hidden" groups (e.g., people with a rare hobby).
Weakness: The sample isn't very representative because everyone knows each other.
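If you like seeing ideas as code, here is a tiny Python sketch contrasting random and opportunity sampling. The population of 100 "students" and the sample size of 10 are invented purely for illustration:

```python
import random

# A made-up target population of 100 students.
population = [f"student_{i}" for i in range(1, 101)]

# Random sampling: every student has an equal chance of being picked,
# like pulling 10 names out of a hat.
random_sample = random.sample(population, 10)

# Opportunity sampling: simply take whoever is "available" first,
# e.g. the first 10 students who walk past in the canteen.
opportunity_sample = population[:10]

print(len(random_sample))      # 10 participants, drawn fairly
print(opportunity_sample[:3])  # always the same people -> risk of bias
```

Notice the opportunity sample never changes between runs, which is exactly why it risks being unrepresentative.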
Key Takeaway: The goal of sampling is representativeness. If your sample is like the population, you can generalise your results to everyone else.
3. Variables and Operationalisation: Making it Measurable
Psychology can be "fuzzy." We need to turn ideas into numbers.
IV and DV
- Independent Variable (IV): The thing the researcher changes or manipulates.
- Dependent Variable (DV): The thing the researcher measures.
The "Operationalisation" Trick: You must be specific. Don't just say "memory." Say "the number of words recalled from a list of 20." Don't just say "caffeine." Say "200mg of Nescafe instant coffee."
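To see why operationalisation matters, here is a quick Python sketch (the word lists are invented, and shortened to 5 words for the demo) that turns "memory" into a concrete, countable score:

```python
# The operationalised DV: number of words correctly recalled from a list.
word_list = ["apple", "river", "candle", "tiger", "cloud"]
recalled = ["river", "cloud", "banana"]  # "banana" wasn't on the list

def memory_score(recalled_words, original_list):
    """Count only correct recalls; guesses not on the list score nothing."""
    return len(set(recalled_words) & set(original_list))

print(memory_score(recalled, word_list))  # 2
```

Because the DV is now a number, two researchers scoring the same participant will always agree, which is the whole point of operationalising.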
Extraneous Variables
These are "annoying" variables that might mess up your results. If you are testing memory, an extraneous variable might be the noise in the room or how much sleep the participant had. If they affect the results, they become confounding variables.
Did you know? Standardisation (keeping everything exactly the same for every participant) is one of the best ways to control extraneous variables!
4. Experimental Designs: How do we group people?
Once you have your participants, how do you use them?
- Independent Measures: Different people in each condition (Group A drinks coffee, Group B drinks water).
Problem: "Participant variables"—Group A might just have better natural memories than Group B!
- Repeated Measures: The same people do both conditions (everyone drinks water on Day 1, and coffee on Day 2).
Problem: "Order effects"—participants might get better because they've practised, or worse because they are bored/tired.
- Matched Participants: You find two people who are very similar (e.g., same age, IQ, and gender). You put one in Group A and the other in Group B.
Problem: It’s very difficult and time-consuming to match people perfectly.
Quick Review Box:
- Independent: Quick, but watch out for individual differences.
- Repeated: Fewer people needed, but watch out for practice effects.
- Matched: The best of both worlds, but a nightmare to organise!
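As a sketch, here is how allocating people to these designs might look in Python (the 20 participant names are made up):

```python
import random

# 20 hypothetical participants.
participants = [f"participant_{i}" for i in range(1, 21)]

# Independent measures: shuffle, then split into two separate groups,
# so participant variables are spread between the groups by chance.
shuffled = random.sample(participants, len(participants))
coffee_group = shuffled[:10]
water_group = shuffled[10:]

# Repeated measures: the very same people appear in both conditions.
day1_water = list(participants)
day2_coffee = list(participants)
```

Random allocation doesn't guarantee the two groups are identical, but it removes any systematic bias in who ends up where.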
5. Designing Observations
Sometimes we don't experiment; we just watch. But we can't just "watch"—we need a plan.
- Behavioural Categories: Breaking down a behaviour into a checklist. Instead of "watching aggression," you look for "punching," "kicking," or "shouting."
- Coding Frames: Using symbols or numbers to record those behaviours quickly.
- Event Sampling: You count every single time a behaviour happens (e.g., every time a child kicks a ball).
- Time Sampling: You only record behaviour at set time intervals (e.g., "What are they doing every 60 seconds?").
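Event and time sampling can give quite different pictures of the same behaviour. This Python sketch uses an invented minute-long record of what a child is doing each second:

```python
# One invented observation per second, for 60 seconds.
behaviour_stream = (["playing"] * 20 + ["kicking"] * 5 +
                    ["playing"] * 25 + ["kicking"] * 10)

# Event sampling: count every single second the target behaviour occurs.
kick_seconds = behaviour_stream.count("kicking")

# Time sampling: only look every 10 seconds and note what is happening.
time_samples = behaviour_stream[::10]

print(kick_seconds)  # 15
print(time_samples)  # ['playing', 'playing', 'kicking', 'playing', 'playing', 'kicking']
```

Event sampling catches all 15 seconds of kicking, while time sampling only "sees" it twice, showing why the choice of technique matters.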
6. Designing Self-Reports (Questionnaires and Interviews)
This is where we ask people what they think or feel.
Question Types
- Open Questions: Participants answer in their own words. (Gives qualitative data - rich in detail).
- Closed Questions: Fixed options like "Yes/No" or multiple choice. (Gives quantitative data - easy to turn into a graph).
Rating Scales
- Likert Scale: Usually a 5 or 7-point scale from "Strongly Disagree" to "Strongly Agree."
- Semantic Differential Scale: Two opposite adjectives with a scale in between.
Example: Happy [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Sad
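Closed questions and rating scales are easy to score precisely because the options are fixed. Here is a sketch of scoring a 5-point Likert item in Python (the label-to-number mapping is a common convention, not a rule, and the responses are invented):

```python
# One common way to code a 5-point Likert item numerically.
likert_codes = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
                "Agree": 4, "Strongly Agree": 5}

# Invented responses from three participants to "I enjoy revising".
responses = ["Agree", "Strongly Agree", "Neutral"]

scores = [likert_codes[r] for r in responses]
mean_score = sum(scores) / len(scores)

print(scores)      # [4, 5, 3]
print(mean_score)  # 4.0
```

Turning words into numbers like this is what makes closed questions quantitative, and so easy to graph and compare.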
Common Mistake to Avoid: Avoid "Leading Questions." If you ask "How much do you hate smokers?", you are pushing the participant to give a negative answer. Keep it neutral!
Final Summary Takeaways
1. Hypotheses are your "bet" on what will happen. Use null for no effect and alternative for an effect.
2. Sampling is about getting a "slice" of the population. Random is usually best but hardest.
3. Experimental design is about whether you use the same people or different people in your groups.
4. Operationalisation is the secret to good science—be specific and make it measurable!
5. Observations and Self-reports need clear structures (like categories or scales) to avoid researcher bias.
You've got this! Understanding the "how" of research is 50% of the battle in Psychology. Keep practising writing those hypotheses!