Introduction: The Blueprint of Psychology
Welcome! Before a psychologist can discover why we dream or how we learn, they have to act like an architect and draw up a "blueprint" for their study. This is called planning and conducting research. In these notes, we will look at how researchers decide what to study, who to involve, and how to set up their experiments so the results are fair and accurate. Don't worry if the terms seem like a lot at first—we'll break them down piece by piece!
1. Aims and Hypotheses: What are we looking for?
Every study starts with an idea. But in science, we have to be very specific about that idea.
• Research Aim: This is a broad statement of what the researcher intends to investigate. It's the "purpose" of the study. Example: "To investigate if music helps students concentrate."
• Research Question: This is the aim turned into a specific question. Example: "Does listening to Mozart improve test scores in Year 12 students?"
The Hypothesis (The Prediction)
A hypothesis is a clear, testable prediction of what you think will happen. There are two main types:
• Alternative Hypothesis: Predicts that there will be a relationship or a difference.
• Null Hypothesis: Predicts that there will be no relationship or difference. It basically says any result you find is just down to chance. Memory Tip: "Null" sounds like "Nil"—as in zero difference!
Choosing a Direction
When you write an alternative hypothesis, you have to decide how specific to be:
• One-tailed (directional) hypothesis: You predict exactly which way the results will go. Example: "Students who drink coffee will score higher than those who don't."
• Two-tailed (non-directional) hypothesis: You say there will be a difference, but you aren't sure which way yet. Example: "There will be a difference in scores between coffee drinkers and non-coffee drinkers."
Quick Review: Use a one-tailed hypothesis if previous research suggests which way the result will go. Use a two-tailed one if the area is brand new!
Key Takeaway: Aims are the "goal," while hypotheses are the "prediction." You always need a Null hypothesis to stay scientifically "neutral."
2. Populations and Sampling: Who is in the study?
Psychologists usually want to understand everyone (the Target Population), but they can’t test billions of people! Instead, they pick a smaller group called a Sample.
Sampling Techniques
• Random Sampling: Every person in the target population has an equal chance of being picked (like pulling names out of a hat). It's the fairest method, but you need a complete list of everyone in the population, which is hard with large groups.
• Opportunity Sampling: You just ask whoever is available at the time. Example: Asking people in the cafeteria at lunch. It's quick, but might not represent everyone.
• Self-selected (Volunteer) Sampling: People offer to take part, usually in response to an ad. Example: Putting a poster in the common room. You get willing participants, but they might all be a certain "type" of person who likes helping out.
• Snowball Sampling: You find one person, and they "recruit" their friends, who recruit their friends. This is great for finding "hidden" groups, like members of an illegal street racing club.
Common Mistake to Avoid: Don't confuse the "Target Population" (the whole group you care about) with the "Sample" (the actual people you tested).
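To make the "names out of a hat" idea concrete, here is a minimal Python sketch contrasting random and opportunity sampling. The names, population size, and sample size are all made up purely for illustration.

```python
import random

# A made-up target population of 10 students (illustrative only)
target_population = ["Ava", "Ben", "Chloe", "Dev", "Ella",
                     "Finn", "Grace", "Hugo", "Isla", "Jack"]

# Random sampling: every student has an equal chance of being picked
random_sample = random.sample(target_population, k=4)

# Opportunity sampling: just take whoever happens to be "available" first
opportunity_sample = target_population[:4]

print(random_sample)       # 4 names, chosen by chance each run
print(opportunity_sample)  # always the same first 4 names
```

Notice that the opportunity sample never changes, which is exactly why it may not represent everyone, while the random sample differs from run to run.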
3. Experimental Designs: How do we group them?
If you are comparing two things (like "Music" vs. "Silence"), how do you organize your people?
• Independent Measures: Different people go into different groups. One group does the "Music" task, a totally different group does "Silence."
Analogy: A race where Group A runs in Nikes and Group B runs in Adidas.
• Repeated Measures: The same people do both tasks. They do the task with music, then come back later and do it in silence.
Analogy: You try the Nikes on Monday and the Adidas on Tuesday to see which you like better.
• Matched Participants: You use different people for each group, but you "pair them up" based on qualities like IQ or age so the groups are as similar as possible.
Key Takeaway: Independent measures avoids people getting bored or practicing (Order Effects), but Repeated measures is better for making sure "person differences" don't ruin the results.
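The independent measures split above can be sketched in a few lines of Python. The participant labels and group size are hypothetical; the point is simply that random allocation puts each person in exactly one group.

```python
import random

# Hypothetical participants for a "Music" vs "Silence" study
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

# Independent measures: randomly allocate each person to ONE group
shuffled = participants[:]        # copy so the original list is untouched
random.shuffle(shuffled)
music_group = shuffled[:3]
silence_group = shuffled[3:]

# Repeated measures would instead run BOTH conditions on the SAME
# people, usually swapping the order for half of them to reduce
# order effects.

print(music_group, silence_group)
```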
4. Research Designs: The Element of Time
Sometimes psychologists want to see how people change over time.
• Longitudinal Research: Following the same group of people for a long time (years or even decades). It’s like a "movie" of their life.
• Cross-sectional Research: Looking at different groups of people (e.g., a group of 5-year-olds, 10-year-olds, and 15-year-olds) all at the same time. It’s like a "snapshot" comparing different ages.
5. Variables: The Cause and the Effect
To be a science, we must measure things accurately.
• Independent Variable (IV): The thing the researcher changes or manipulates. (The "Cause").
• Dependent Variable (DV): The thing the researcher measures. (The "Effect").
• Extraneous Variables: These are "annoying" extra things that might mess up your results if you don't control them. These include:
1. Researcher variables: Does the researcher's body language accidentally give away the answer?
2. Situational variables: Is the room too hot? Is it too noisy?
3. Participant variables: Is one participant just naturally smarter or more tired than another?
Operationalisation: Being Super Specific
Don't let the word "operationalise" scare you! It just means defining your variables so they can be measured exactly.
Example: Instead of saying you are measuring "intelligence," you operationalise it as "the score achieved on a 20-minute IQ test."
Quick Review Box:
IV = What I change.
DV = What I measure.
Operationalise = How exactly am I measuring it?
6. Designing Observations and Self-Reports
Not every study is an experiment. Sometimes we just watch or ask.
Observations
• Behavioural Categories: Breaking down a stream of behaviour into a checklist. Instead of saying "the child played," you check boxes for "kicked ball," "ran," or "shouted."
• Event Sampling: Counting every time a specific behaviour happens (e.g., every time someone smiles).
• Time Sampling: Recording behaviour only at specific intervals (e.g., checking what the participant is doing every 60 seconds).
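The difference between event and time sampling can be shown with a short Python sketch. The observation log below is invented for illustration: imagine one recorded behaviour per minute.

```python
# A made-up minute-by-minute log of what a child was observed doing
log = ["ran", "smiled", "kicked ball", "smiled", "shouted",
       "smiled", "ran", "sat", "smiled", "ran"]

# Event sampling: count EVERY occurrence of the target behaviour
smile_count = log.count("smiled")

# Time sampling: only record what is happening at fixed intervals
# (here, every 3rd minute: positions 0, 3, 6, 9)
time_samples = log[::3]

print(smile_count)   # 4
print(time_samples)  # ['ran', 'smiled', 'ran', 'ran']
```

Note how time sampling misses some of the smiles: it trades completeness for being much easier to record in a live observation.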
Self-Reports (Questionnaires and Interviews)
• Open Questions: Participants answer in their own words. (Gives lots of detail).
• Closed Questions: Fixed answers like Yes/No or Tick Boxes. (Easy to turn into graphs).
• Rating Scales: Used to measure attitudes or feelings.
1. Likert Scale: Usually 5 points from "Strongly Disagree" to "Strongly Agree."
2. Semantic Differential: A scale between two opposite words (e.g., Happy [ ] - [ ] - [ ] - [ ] Sad).
3. Numerical Rating Scale: Rate your pain from 1 to 10.
Did you know? Likert scales are named after psychologist Rensis Likert. They are the most common way to measure "how much" someone likes or dislikes something!
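Scoring a Likert scale is simple arithmetic, with one common wrinkle: negatively worded items must be reverse-scored before totalling. The items and responses below are invented for illustration.

```python
# Made-up responses to a 3-item, 5-point Likert scale
# (1 = Strongly Disagree ... 5 = Strongly Agree)
responses = {
    "Music helps me focus": 4,
    "I enjoy working with music on": 5,
    "Music distracts me from my work": 2,   # negatively worded item
}

# Reverse-score the negative item so a high total = positive attitude.
# On a 5-point scale, the reversed score is (6 - original score).
reversed_item = 6 - responses["Music distracts me from my work"]

total = (responses["Music helps me focus"]
         + responses["I enjoy working with music on"]
         + reversed_item)

print(total)  # 13 out of a possible 15
```

Without reverse-scoring, a participant who loves music but honestly disagrees with the negative item would look less positive than they really are.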
Final Key Takeaway: Planning research is all about control and clarity. If a researcher is clear about their aim, picks a fair sample, and defines their variables perfectly, their study is much more likely to be respected by the scientific community.