Welcome to Research Methods 1!

Ever wondered how psychologists actually "prove" their theories? They don't just guess; they use a toolkit of Research Methods. Think of this chapter as your "Psychology Toolkit." We are going to learn how to design a study, how to pick the right people to test, and how to make sense of the numbers at the end. Don't worry if it seems like a lot of technical terms at first—we will break it down step-by-step!

Part 1: The Methods (How we collect data)

1. Experimental Methods

An experiment is the only method that allows psychologists to claim that one thing caused another. We change one thing (the Independent Variable) to see if it affects something else (the Dependent Variable). There are three main types:

Laboratory Experiments: Done in a controlled environment (like a lab).
Example: Testing memory by asking students to learn a list of words in a quiet room.
Strengths: You have high control over everything.
Limitations: It can feel "fake" (low ecological validity), so people might not act naturally.

Field Experiments: Done in a real-world setting, but the researcher still changes something.
Example: A researcher "accidentally" drops a book in a busy shopping mall to see who helps.
Strengths: Behavior is more natural.
Limitations: Harder to control outside "noise" or distractions.

Natural Experiments: The researcher doesn't change anything because the change happens on its own (like a new law being passed).
Strengths: Allows us to study things that would be unethical to create (like the effect of a natural disaster).
Limitations: You can't be 100% sure the "change" was the only thing affecting the results.

2. Observation Techniques

This is when a researcher watches and records behavior. It’s like being a "fly on the wall."

Naturalistic vs. Controlled: Naturalistic happens in the person's own environment (like a playground). Controlled happens in a space the researcher has set up (like a play-lab).
Covert vs. Overt: In Covert, the people don't know they are being watched (undercover). In Overt, they know you are there with a clipboard.
Participant vs. Non-participant: In Participant, the researcher joins the group (e.g., joining a cult to study them). In Non-participant, the researcher stays outside the group.

3. Self-Report Techniques

Sometimes the best way to find out what someone thinks is to just ask them!

Questionnaires: A list of written questions. They can be Open (letting people write whatever they want) or Closed (Yes/No or 1-10 scales).
Interviews: Face-to-face or over the phone. They can be Structured (fixed questions) or Unstructured (more like a free-flowing chat).

4. Correlations

This looks at the relationship between two things (co-variables).
Example: Does the amount of revision time relate to exam scores?
Important: Correlation does not equal causation! Just because two things happen together doesn't mean one caused the other (e.g., ice cream sales and shark attacks both go up in summer, but ice cream doesn't cause shark attacks!).
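
Curious how a correlation is actually measured? Below is a minimal Python sketch of one common correlation coefficient (Pearson's r), using invented revision-time and exam-score data. An r near +1 means a strong positive correlation, near -1 a strong negative one, and near 0 no correlation.

```python
from math import sqrt

# Invented data: hours of revision and exam scores for five students
revision_hours = [2, 4, 6, 8, 10]
exam_scores = [50, 55, 65, 70, 80]

def pearson_r(x, y):
    """Pearson's correlation coefficient between two co-variables."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # How much the two variables vary together...
    together = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    # ...relative to how much each varies on its own
    spread_x = sqrt(sum((xi - mean_x) ** 2 for xi in x))
    spread_y = sqrt(sum((yi - mean_y) ** 2 for yi in y))
    return together / (spread_x * spread_y)

print(round(pearson_r(revision_hours, exam_scores), 2))  # 0.99: strong positive
```

Remember: even an r of 0.99 only tells us the two variables move together, not that one causes the other.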

Quick Review: Experiments look for causes, while correlations look for links.

Part 2: Scientific Processes (Designing your study)

Aims and Hypotheses

Aim: A general statement of what the researcher intends to study.
"I want to see if coffee affects concentration."

Hypothesis: A clear, testable prediction. It can be:
1. Directional: Predicts the direction of the effect (e.g., "Coffee will increase concentration").
2. Non-directional: Predicts an effect but not which direction it will go (e.g., "Coffee will affect concentration").

Sampling (Who do we test?)

You can't test everyone in the world! The Population is the whole group you are interested in. The Sample is the small group you actually test.

Random Sampling: Everyone in the population has an equal chance of being chosen (like pulling names out of a hat). It is the fairest and least biased method; see the sketch after this list.
Opportunity Sampling: You simply ask whoever is available at the time (like people in the library). It's easy, but often biased, because the people who happen to be around may not represent everyone.
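
If you have a full list of your population, you can do random sampling with a computer instead of a hat. A minimal Python sketch (the names are invented):

```python
import random

# Invented population list: everyone we are interested in
population = ["Amir", "Bella", "Chen", "Dana", "Eli", "Farah", "Gus", "Hana"]

# random.sample gives every member an equal chance of selection
sample = random.sample(population, k=3)
print(sample)  # e.g. ['Dana', 'Amir', 'Hana'] (different each run)
```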

Experimental Designs

This is how you organize your participants into groups:

Independent Groups: Different people in each group (Group A does the test with coffee, Group B does it without).
Repeated Measures: The same people do both conditions.
Watch out! People might get better the second time because they have practiced, or worse because they are tired (these are called order effects). We use Counterbalancing (splitting the group so half do coffee first and half do no-coffee first) to balance this out; see the sketch after this list.
Matched Pairs: You find two people who are very similar (such as twins, or two people with the same IQ score) and put one in Group A and one in Group B.
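
Here is one way counterbalancing might be set up, as a Python sketch (participant labels and condition names are invented): shuffle the participants, then give half of them the conditions in one order and the other half the reverse order, so any order effect is spread evenly across both conditions.

```python
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
random.shuffle(participants)  # randomly decide who goes in which half

half = len(participants) // 2
orders = {}
for p in participants[:half]:
    orders[p] = ["coffee", "no coffee"]  # this half does coffee first
for p in participants[half:]:
    orders[p] = ["no coffee", "coffee"]  # this half does no-coffee first

for p, order in sorted(orders.items()):
    print(p, "->", order)
```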

Variables and Controls

Independent Variable (IV): What you change (The "Cause").
Dependent Variable (DV): What you measure (The "Effect").
Extraneous Variables: "Nuisance" variables that might mess up the results (like a loud noise outside during a memory test). We must control these.
Operationalisation: Making a variable measurable. Instead of saying "I'm measuring 'smartness'," you say "I'm measuring the 'score on a 20-question IQ test'."

Common Mistake to Avoid: Don't confuse the IV and DV! Remember: I change the Independent variable.

Ethics: The Rules of Conduct

Psychologists must treat their participants fairly and responsibly. The key rules are:
1. Informed Consent: Participants must agree to take part after knowing what will happen.
2. Deception: You shouldn't lie to people unless absolutely necessary.
3. Protection from Harm: Participants should leave the study in the same state in which they arrived.
4. Confidentiality: Keep their data private.
5. Right to Withdraw: They can quit at any time.
6. Debrief: Tell them the true aim of the study at the end.

Key Takeaway: A good study is valid (truthful), reliable (consistent), and ethical (fair).

Part 3: Data Handling and Analysis

Quantitative vs. Qualitative Data

Quantitative: Data in numbers. Easy to put into graphs and compare.
Qualitative: Data in words (descriptions, feelings). It is much deeper and more detailed, but harder to summarize.

Primary and Secondary Data

Primary: Data you collected yourself for your own study.
Secondary: Data that already exists (like using government statistics or another researcher's work).
Meta-analysis: A "study of studies." A researcher looks at the results of many previous studies on the same topic to get one big conclusion.

Descriptive Statistics

These help us summarize our numbers; a short worked example follows each list:

Measures of Central Tendency (The "Averages"):
1. Mean: Add all the scores and divide by the number of scores. It is the most sensitive measure because it uses every score, but it can be "thrown off" by one very high or low number.
2. Median: The middle score when you put them in order.
3. Mode: The most common score.
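
Python's built-in statistics module can calculate all three. A quick sketch with invented scores; note how one extreme score (20) drags the mean but not the median:

```python
import statistics

scores = [4, 5, 5, 6, 7, 9, 20]  # one very high score: 20

print(statistics.mean(scores))    # 8, pulled up by the 20
print(statistics.median(scores))  # 6, the middle score, unaffected
print(statistics.mode(scores))    # 5, the most common score
```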

Measures of Dispersion (The "Spread"):
1. Range: The highest score minus the lowest score.
2. Standard Deviation: A more complex calculation that tells us how much scores "spread out" around the mean. A low standard deviation means everyone scored similarly. A high one means the scores were all over the place!
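
The same module handles the spread. A sketch using the same invented scores:

```python
import statistics

scores = [4, 5, 5, 6, 7, 9, 20]

print(max(scores) - min(scores))  # Range: 20 - 4 = 16
print(statistics.pstdev(scores))  # Standard deviation, approx. 5.13
# (pstdev treats the scores as the whole population;
#  statistics.stdev gives the sample version, approx. 5.54)
```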

Presenting Data

Bar Charts: Used for categories (e.g., Boys vs. Girls). The bars don't touch.
Line Graphs: Show how something changes over time.
Scattergrams: Used for Correlations. Each dot represents a person's score on two different things.
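
As a sketch, here is how a scattergram could be drawn with the matplotlib plotting library (assumed to be installed; the data points are invented):

```python
import matplotlib.pyplot as plt

# Each pair of values is one person's scores on the two co-variables
revision_hours = [2, 4, 6, 8, 10]
exam_scores = [50, 55, 65, 70, 80]

plt.scatter(revision_hours, exam_scores)  # one dot per person
plt.xlabel("Revision time (hours)")
plt.ylabel("Exam score")
plt.title("Scattergram: revision time vs. exam score")
plt.show()
```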

Distributions

Normal Distribution: The "Bell Curve." Most people are in the middle, with very few at the extreme ends (like height or IQ).
Skewed Distributions: When the data is pushed to one side.
- Positive Skew: Most scores are low (the "tail" points to the right toward the higher numbers).
- Negative Skew: Most scores are high (the "tail" points to the left toward the lower numbers).

Memory Aid for Skew: Look at the "tail" of the graph. If the tail is pointing to the right (positive numbers), it's a Positive Skew!
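
You can also spot a skew in the averages themselves: the mean gets dragged toward the tail, while the median stays with the bulk of the scores. A quick sketch with invented, positively skewed data:

```python
import statistics

# Most scores are low, with a long tail of high scores (positive skew)
scores = [2, 3, 3, 4, 4, 5, 15, 20]

print(statistics.mean(scores))    # 7, dragged right toward the tail
print(statistics.median(scores))  # 4.0, stays with the bulk of the scores
```

So in a positive skew the mean sits above the median; in a negative skew it sits below.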

Key Takeaway: Statistics help us turn a messy pile of numbers into a clear story about human behavior.