Welcome to Research Methods 2!
Hello there! You’ve already mastered the basics in Research Methods 1. Now, it’s time to level up. Think of Research Methods 2 as the "Pro Version" of the course. We are going to look at how psychologists check that their findings can actually be trusted, how they analyze complex data, and how they share their work with the world.
Don’t worry if some of the math or technical terms look scary at first—we’re going to break them down into bite-sized pieces with plenty of easy examples!
1. Reliability and Validity
In psychology, we need to know if our "ruler" (our test or experiment) is actually working. We do this through reliability and validity.
Reliability: Consistency
Reliability is all about consistency. If you step on a scale and it says you weigh 60kg, then step off and back on again and it says 75kg, the scale is unreliable. In psychology, a study is reliable if we get the same results every time we repeat it.
How to check for it:
- Test-retest reliability: Giving the same person the same test on two different occasions. If the scores are similar (a strong positive correlation between the two sets), the test is reliable.
- Inter-observer reliability: Having two different researchers watch the same behavior. If they both record the same thing (e.g., they both agree the child was "aggressive"), the study is reliable.
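Both checks boil down to a simple calculation. Here's a minimal Python sketch with made-up questionnaire scores and observer codes (as a rough rule of thumb, a correlation of about +0.8 or higher is usually treated as acceptably reliable):

```python
# Minimal sketch of the two reliability checks (all data invented).

def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Test-retest: the same six people take the same questionnaire twice.
week1 = [12, 18, 9, 22, 15, 11]
week3 = [13, 17, 10, 21, 16, 12]
print(f"Test-retest r = {pearson_r(week1, week3):.2f}")  # close to +1 => reliable

# Inter-observer: proportion of observations two raters coded the same way.
rater_a = ["aggressive", "calm", "aggressive", "calm", "calm"]
rater_b = ["aggressive", "calm", "calm", "calm", "calm"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Inter-observer agreement = {agreement:.0%}")  # 80%
```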
Validity: Accuracy
Validity is about truth or accuracy. Does the test actually measure what it says it measures? If I try to measure your intelligence by measuring the length of your thumb, my "test" is reliable (your thumb stays the same length), but it isn't valid (thumb size has nothing to do with IQ!).
Types of Validity:
- Internal Validity: Did the Independent Variable really cause the change in the Dependent Variable? Or did something else (an extraneous variable) mess it up?
- External Validity: Can we generalize the findings beyond the study itself? Two important types:
  - Ecological Validity: Does it apply to real life? (Or was the lab setting too artificial?)
  - Temporal Validity: Do the findings from 1950 still apply today?
Quick Review: Think of a dartboard. If all your darts hit the same spot but it’s not the bullseye, you are reliable but not valid. If they all hit the bullseye, you are both!
Key Takeaway: Reliability = Consistency. Validity = Accuracy/Truth.
2. Case Studies and Content Analysis
Sometimes, an experiment isn't the best way to learn. Sometimes we need to look deeper.
Case Studies
A case study is an in-depth investigation of a single person, group, or event. Psychologists love these for rare cases.
Example: Studying a patient with a specific type of brain damage (like HM) to learn how memory works.
The Good: Provides rich, qualitative data and can study things that would be unethical to "trigger" in an experiment.
The Bad: You can't really generalize from one person to the whole world.
Content Analysis
This is a way of turning qualitative data (words, images) into quantitative data (numbers).
Example: Counting how many times men vs. women are shown in leadership roles in TV commercials.
Step-by-Step Process:
1. Decide on your sample (e.g., 50 TV ads).
2. Create coding categories (e.g., "Giving an order," "Cleaning," "Cooking").
3. Watch the ads and tally every time a category appears.
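If you want to see the tallying concretely, here's a tiny Python sketch with invented codings; the `Counter` is just an automated tally chart:

```python
# Content analysis tally (all codings invented for illustration).
from collections import Counter

# Each entry is one coded moment from our hypothetical sample of ads.
codes = [
    ("man", "giving an order"),   ("woman", "cleaning"),
    ("woman", "cooking"),         ("man", "giving an order"),
    ("woman", "giving an order"), ("man", "cooking"),
]

tally = Counter(codes)  # counts how often each (person, category) pair occurs
for (who, category), count in sorted(tally.items()):
    print(f"{who:>5} | {category:<15} | {count}")
```

The qualitative footage has now become quantitative data you can put in a table or run a test on.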
Key Takeaway: Case studies are deep dives into individuals; content analysis turns media/text into countable data.
3. Features of Science
Psychology wants to be treated as a "hard science" (like Physics or Biology). To do that, it must follow certain rules:
- Objectivity: Researchers shouldn't let personal feelings or biases influence the data. Just the facts!
- Empiricism: Knowledge should come from direct observation and experience, not just "thinking about it."
- Replicability: Other scientists should be able to repeat your study and get the same result. This is why we write clear "Method" sections.
- Falsifiability: A theory isn't scientific unless it's possible to prove it wrong.
Analogy: If I say "There is a tiny invisible unicorn in my room that disappears whenever anyone tries to detect it," that is not falsifiable because you can't prove me wrong. Therefore, it's not science!
- Paradigms: A shared set of assumptions or methods. (e.g., the "Cognitive Revolution" changed the paradigm of how we study the mind).
Key Takeaway: Science is about being objective, repeatable, and able to be proven wrong.
4. Inferential Testing: The Math Part
Don't panic! You don't usually have to do massive calculations by hand, but you do need to understand why we use these tests.
Why use a test?
We use inferential tests to see if our results happened because of our experiment, or if they were just down to chance (luck). We compare our result against a significance level, usually written as \(p \leq 0.05\).
What does \(p \leq 0.05\) mean? It means there is a 5% (or less) probability that our results happened by chance alone. If \(p\) is at or below 0.05, we say our results are statistically significant!
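The decision rule itself is just a comparison. Here's a minimal sketch (the p-value is invented; in a real study it comes out of your inferential test):

```python
# The significance decision in code (p_value is a made-up example).
alpha = 0.05     # the conventional significance level
p_value = 0.03   # hypothetical result from an inferential test

if p_value <= alpha:
    print("Significant: unlikely to be a fluke.")
else:
    print("Not significant: could easily be chance.")
```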
Levels of Measurement
Before picking a test, you need to know what kind of data you have. Think of the NOIR mnemonic:
- Nominal: Data in categories (e.g., Yes/No, Blue eyes/Brown eyes).
- Ordinal: Data that can be put in order, but the gaps between them aren't equal (e.g., finishing 1st, 2nd, and 3rd in a race).
- Interval: Data measured on a scale with equal units (e.g., Temperature in Celsius).
- Ratio: Same as interval, but with a true zero (e.g., Height or Weight).
Choosing the Right Test
To choose a test, you ask:
1. Am I looking for a difference or a correlation?
2. What experimental design did I use (Repeated Measures or Independent Groups)?
3. What level of measurement is my data (Nominal, Ordinal, or Interval)?
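Those three questions define a simple lookup. Here's a sketch of the test-selection table as it is typically taught, encoded in Python (the key names are just illustrative labels):

```python
# Test-selection table as a lookup: (goal, design, level) -> test name.
# For correlations the design question doesn't apply, hence None.
TEST_TABLE = {
    ("difference", "independent groups", "nominal"):  "Chi-squared",
    ("difference", "independent groups", "ordinal"):  "Mann-Whitney U",
    ("difference", "independent groups", "interval"): "Unrelated t-test",
    ("difference", "repeated measures",  "nominal"):  "Sign test",
    ("difference", "repeated measures",  "ordinal"):  "Wilcoxon",
    ("difference", "repeated measures",  "interval"): "Related t-test",
    ("correlation", None, "nominal"):  "Chi-squared",
    ("correlation", None, "ordinal"):  "Spearman's rho",
    ("correlation", None, "interval"): "Pearson's r",
}

print(TEST_TABLE[("difference", "repeated measures", "nominal")])  # Sign test
```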
The Sign Test: This is the specific calculation you need to know for the AS level. It is used when you are looking for a difference, with nominal data and a repeated measures design. Convert each participant's pair of scores into a plus (+) or a minus (-) depending on the direction of change (ignore ties), then count the signs: the "S" value is the number of times the less frequent sign occurs. Finally, compare S to the critical value in a table; S must be equal to or less than the critical value for the result to be significant.
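Here's a worked sketch with invented before/after mood ratings. It runs the whole procedure and also computes the exact probability that the critical-value tables are based on (each sign behaving like a 50/50 coin flip if nothing real is going on):

```python
# Sign test on invented before/after ratings for the same ten people.
from math import comb

before = [5, 3, 6, 2, 4, 5, 3, 4, 6, 2]
after  = [7, 4, 5, 5, 6, 7, 5, 6, 6, 4]

# A plus means the score went up, a minus means it went down; ties are dropped.
signs = [("+" if b < a else "-") for b, a in zip(before, after) if b != a]
n = len(signs)
s = min(signs.count("+"), signs.count("-"))  # S = count of the less frequent sign
print(f"n = {n}, S = {s}")  # n = 9, S = 1

# Exact two-tailed p-value: how likely a 50/50 split this lopsided would be.
p = min(1.0, 2 * sum(comb(n, k) for k in range(s + 1)) / 2 ** n)
print(f"p = {p:.3f}", "=> significant" if p <= 0.05 else "=> not significant")
```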
Type I and Type II Errors
Sometimes, scientists get it wrong.
- Type I Error: The "False Positive." You think you found a significant result, but it was actually just luck (accepting the experimental hypothesis when you should have rejected it). This is more likely if your significance level is too lenient (e.g., \(p \leq 0.10\)).
- Type II Error: The "False Negative." You think nothing happened, but there actually was an effect (rejecting the experimental hypothesis when you should have accepted it). This is more likely if your significance level is too strict (e.g., \(p \leq 0.01\)).
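One way to make Type I errors feel real is to simulate them. The sketch below invents 10,000 "experiments" in which there is genuinely no effect, runs a sign test on each, and counts how often we would wrongly declare significance; the rate comes out just under the 5% ceiling, exactly as \(p \leq 0.05\) promises:

```python
# Simulating the Type I error rate of the sign test when nothing is going on.
import random

random.seed(42)
trials, false_positives, n = 10_000, 0, 9

for _ in range(trials):
    # Each of n participants randomly "improves" or "worsens" (no real effect).
    pluses = sum(random.random() < 0.5 for _ in range(n))
    s = min(pluses, n - pluses)
    if s <= 1:  # critical S for n = 9 at p <= 0.05 (two-tailed)
        false_positives += 1

print(f"Type I error rate ≈ {false_positives / trials:.1%}")  # about 4%
```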
Key Takeaway: We use stats to show that our results are unlikely to be just luck. Aim for \(p \leq 0.05\)!
5. Reporting Psychological Investigations
When a psychologist finishes a study, they write it up in a specific format so others can read it. It’s like a recipe book for science.
- Abstract: A short summary of the whole study (150-200 words).
- Introduction: Why they did it and what previous research says.
- Method: How they did it (must be detailed enough to replicate). This includes Design, Participants, Apparatus, and Procedure.
- Results: The numbers, tables, and graphs.
- Discussion: What the results mean and how they link back to the intro.
- References: Giving credit to other researchers.
Common Mistake to Avoid: Don't confuse the "Results" and "Discussion" sections. Results are just the facts/numbers; Discussion is where you explain what those numbers actually mean for humans.
Key Takeaway: A scientific report is a standardized way to share findings clearly and professionally.
Final Encouragement
You've made it through the hardest part of Research Methods! Remember, these tools—reliability, validity, and statistics—are what turn a simple observation into a powerful scientific discovery. Keep practicing identifying "Levels of Measurement" (NOIR), and the rest will fall into place. You've got this!