Welcome to Planning and Conducting Research!
Ever wondered how psychologists actually "do" science? It’s not just about sitting in a chair and listening to people talk. It’s about building a solid plan to discover the truth about human behavior. This chapter is your "instruction manual" for designing a psychological study. Don't worry if it seems like a lot of steps at first—think of it like learning a recipe. Once you know the ingredients, you can cook up some great research!
1. Starting with a Plan: Aims and Hypotheses
Before a psychologist starts a study, they need to know exactly what they are looking for. They start with a research aim (a general statement of what they want to study) and a research question (the specific question they want to answer).
Alternative vs. Null Hypotheses
A hypothesis is a formal, testable prediction. In every study, we actually have two:
1. Alternative Hypothesis: This predicts that there will be a relationship or a difference. For example: "Students who eat breakfast will score higher on tests than those who don't."
2. Null Hypothesis: This predicts that there will not be a relationship or difference. It basically says any result found is just down to chance. For example: "There will be no difference in test scores between students who eat breakfast and those who don't."
Which Way Does It Go? Directional vs. Non-Directional
When writing your alternative hypothesis, you have two choices:
• One-tailed (directional) hypothesis: You predict exactly which way the results will go. (e.g., "Group A will be faster than Group B").
• Two-tailed (non-directional) hypothesis: You predict there will be a difference, but you aren't sure which way it will go. (e.g., "There will be a difference in speed between Group A and Group B").
Quick Review: If you are confident because of previous research, use a one-tailed hypothesis. If the area is brand new, play it safe with a two-tailed one!
Key Takeaway: Aims are general goals; hypotheses are specific, testable predictions that can either be directional (one-tailed) or non-directional (two-tailed).
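To see how the one-tailed/two-tailed choice actually changes the numbers, here is a small sketch using a permutation test (shuffling group labels to see what "just chance" looks like). The test scores are made-up illustrative data, not real results:

```python
import random

random.seed(42)

# Hypothetical test scores (illustrative numbers, not real data):
breakfast = [72, 68, 75, 80, 66, 74, 71, 78]
no_breakfast = [65, 70, 62, 68, 64, 69, 66, 63]

observed_diff = sum(breakfast) / len(breakfast) - sum(no_breakfast) / len(no_breakfast)

# Permutation test: shuffle the group labels many times and count how often
# chance alone produces a difference as big as the one we observed.
combined = breakfast + no_breakfast
n = len(breakfast)
count_one_tailed = 0  # chance difference >= observed (directional prediction)
count_two_tailed = 0  # |chance difference| >= |observed| (non-directional)
trials = 10_000
for _ in range(trials):
    random.shuffle(combined)
    diff = sum(combined[:n]) / n - sum(combined[n:]) / n
    if diff >= observed_diff:
        count_one_tailed += 1
    if abs(diff) >= abs(observed_diff):
        count_two_tailed += 1

p_one = count_one_tailed / trials
p_two = count_two_tailed / trials
print(f"Observed difference: {observed_diff:.2f}")
print(f"One-tailed p: {p_one:.4f}  Two-tailed p: {p_two:.4f}")
```

Notice that the two-tailed p-value can never be smaller than the one-tailed one: predicting a direction is a stronger claim, so the same data gives stronger evidence when the prediction comes true.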
2. Who Are We Studying? Populations and Samples
Psychologists usually want to understand a large group of people, like "all teenagers in the UK." This large group is the target population. However, we can't test everyone, so we pick a smaller group called a sample.
Sampling Techniques
How do we pick people for our sample?
• Random Sampling: Every person in the target population has an equal chance of being picked (like pulling names out of a hat). Strength: Very fair and usually representative. Weakness: You need a complete list of the target population, which can be hard to get.
• Snowball Sampling: You find one person, and they "recruit" their friends, who recruit their friends. Strength: Great for finding hard-to-reach groups (like secret societies!). Weakness: Friends tend to be similar to each other, so the sample may not be representative.
• Opportunity Sampling: You just ask whoever is available at the time (like people walking past you in the hallway). Strength: Quick and easy. Weakness: Not very representative.
• Self-selected (Volunteer) Sampling: People sign themselves up, usually by responding to an ad. Strength: People are motivated to take part. Weakness: "Volunteer bias"—volunteers might be different from the general public.
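The difference between random and opportunity sampling is easy to see in a quick sketch. The "population" below is a made-up list of student IDs used purely for illustration:

```python
import random

random.seed(1)

# Hypothetical target population: 30 students (made-up IDs).
population = [f"student_{i:02d}" for i in range(1, 31)]

# Random sampling: every member has an equal chance of selection,
# like pulling names out of a hat.
random_sample = random.sample(population, 5)

# Opportunity sampling: take whoever is available first, e.g. the
# first five people you happen to meet in the hallway.
opportunity_sample = population[:5]

print("Random sample:     ", random_sample)
print("Opportunity sample:", opportunity_sample)
```

The opportunity sample is always the same five "nearby" people, which is exactly why it tends not to represent the whole population.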
Key Takeaway: The sample must represent the target population for the results to be useful for everyone.
3. Variables: The Building Blocks
To keep things scientific, we need to define what we are changing and what we are measuring.
• Independent Variable (IV): The thing the researcher changes or manipulates (The "Cause").
• Dependent Variable (DV): The thing the researcher measures (The "Effect").
• Extraneous Variables: These are "annoying" extra variables that might mess up the results if we don't control them (like background noise during a memory test).
Operationalisation
This sounds like a scary word, but it just means making variables measurable.
Example: If you are studying "happiness," how do you measure that? You could operationalise it as "the number of times a person smiles in 10 minutes." Now it’s a number we can record!
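The smile-counting example can be written out as a tiny script. The recorded smile times are invented for illustration; the point is that once "happiness" is operationalised, it becomes a plain number:

```python
# Operationalising "happiness" as a smile count (illustrative, made-up data).
# Each entry is the time (in seconds) at which an observer recorded a smile.
smile_times = [12, 45, 130, 210, 305, 412, 555]

observation_window = 600  # a 10-minute observation, in seconds
smiles_in_window = sum(1 for t in smile_times if t < observation_window)
print(f"Operationalised happiness score: {smiles_in_window} smiles in 10 minutes")
```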
Memory Aid: IV is what I change. DV is the Data I collect.
4. Experimental Designs: How to Organize Groups
When doing an experiment, you need to decide how to use your participants across different conditions.
1. Independent Measures: Different participants are used in each condition. (Group A does the "coffee" test, Group B does the "water" test). No "practice effect," but you need more people.
2. Repeated Measures: The same participants do both conditions. (Everyone does the "water" test, then the "coffee" test). Fewer people needed, but they might get bored or better at the task by the second time (order effects).
3. Matched Participants: Different people in each group, but they are "paired up" based on similarities (like age or IQ). This is the "gold standard" but it’s very difficult and time-consuming to do.
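The first two designs can be sketched in code. Participant IDs here are hypothetical; the sketch shows random allocation for independent measures, and alternating condition order for repeated measures (a simple way to spread out order effects):

```python
import random

random.seed(7)

# Hypothetical participant pool (made-up IDs).
participants = [f"P{i}" for i in range(1, 13)]

# Independent measures: randomly allocate each participant to ONE condition.
shuffled = participants[:]
random.shuffle(shuffled)
group_a = shuffled[:6]  # e.g. the "coffee" condition
group_b = shuffled[6:]  # e.g. the "water" condition

# Repeated measures: every participant completes BOTH conditions.
# Alternating the order across participants helps balance out order effects.
orders = [("water", "coffee") if i % 2 == 0 else ("coffee", "water")
          for i, _ in enumerate(participants)]

print("Group A:", group_a)
print("Group B:", group_b)
print("First participant's order:", orders[0])
```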
Key Takeaway: Choose your design based on the problem you most need to avoid: Independent Measures avoids "order effects," while Repeated Measures avoids "participant variables."
5. Designing Observations
If you aren't doing an experiment, you might just watch people. To do this scientifically, you need a plan.
• Behavioural Categories: Breaking down a stream of behavior into specific, measurable actions (e.g., "hitting," "shouting," "pushing").
• Coding Frames: Using symbols or numbers to record these behaviors quickly.
• Event Sampling: You count every time a specific behavior happens (e.g., every time a child smiles).
• Time Sampling: You only record behavior at set time intervals (e.g., what is the person doing every 60 seconds?).
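Event sampling and time sampling can give different counts from the very same behaviour. The minute-by-minute timeline below is made-up data for illustration:

```python
# A hypothetical minute-by-minute record of one child's behaviour
# during a 10-minute observation (made-up data).
timeline = ["play", "smile", "play", "shout", "smile", "play",
            "smile", "play", "play", "smile"]

# Event sampling: count EVERY occurrence of the target behaviour.
event_count = timeline.count("smile")

# Time sampling: record only what is happening at set intervals,
# here every other minute (minutes 0, 2, 4, 6, 8).
time_sampled = [timeline[i] for i in range(0, len(timeline), 2)]
time_sample_count = time_sampled.count("smile")

print(f"Event sampling: {event_count} smiles recorded")
print(f"Time sampling:  {time_sample_count} smiles recorded")
```

Time sampling misses the smiles that happen between the chosen intervals, which is the trade-off you accept for a much lighter recording workload.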
Did you know? Without clear behavioural categories, two observers might see the same thing differently. One might call a "nudge" play, while another calls it aggression!
6. Designing Self-Reports (Questionnaires and Interviews)
Sometimes the best way to find out what someone is thinking is to just ask them!
Question Types
• Open Questions: Allow participants to answer in their own words. (e.g., "How do you feel today?"). Provides rich detail (qualitative data).
• Closed Questions: Provide fixed options. (e.g., "Are you happy? Yes/No"). Easy to turn into graphs (quantitative data).
Rating Scales
Psychologists love scales to measure attitudes:
• Likert Scale: Usually a 5-point scale from "Strongly Agree" to "Strongly Disagree."
• Semantic Differential Scale: Participants mark a point between two opposite adjectives (e.g., Happy [---X---] Sad).
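Scoring a Likert scale just means turning the answer labels into numbers. This sketch uses an invented 4-item questionnaire; note the common trick of reverse-scoring a negatively worded item so that a high total always means the same thing:

```python
# Scoring a hypothetical 4-item Likert questionnaire (1-5 scale).
SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly Agree": 5}

# One participant's answers; item 3 (index 2) is negatively worded
# (e.g. "I rarely feel happy"), so it must be reverse-scored.
answers = ["Agree", "Strongly Agree", "Disagree", "Neutral"]
reverse_items = {2}  # indices of negatively worded items

scores = []
for i, answer in enumerate(answers):
    raw = SCALE[answer]
    scores.append(6 - raw if i in reverse_items else raw)  # 6 - x flips a 1-5 scale

total = sum(scores)
print("Item scores:", scores, "Total:", total)
```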
Common Mistake: Don't forget that closed questions are easier to analyze, but open questions tell you why someone feels that way. Most good researchers use a mix of both!
Key Takeaway: Designing a study is all about consistency. Whether it’s how you pick your sample, how you define your variables, or how you ask your questions, the goal is to be as clear and objective as possible.