Welcome to the World of Methodological Issues!

Ever wondered why some psychology studies are world-famous while others get ignored? It usually comes down to Methodological Issues. Think of this chapter as your "Psychology BS Detector." You’re going to learn how to look at a piece of research and ask: "Is this actually true? Does it apply to me? Was it done fairly?"

Don’t worry if some of these terms look like a different language at first. By the end of these notes, you'll be evaluating research like a pro!


1. Who are we talking about? Representativeness and Generalisability

Imagine you want to know the favourite food of UK teenagers. If you only ask five people standing outside a pizza shop, your results won't tell you much about teenagers in general. This brings us to our first two big ideas.

Representativeness

This is about whether your sample (the people in the study) reflects your target population (the group you are interested in).
Example: If you study "human memory" but only test 19-year-old university students, your sample isn't very representative of the whole human race!

Generalisability

This is the "So what?" factor. If a study is generalisable, it means we can safely apply the findings to people who weren't actually in the study.
Quick Tip: If a sample is representative, it usually means the results are generalisable. They go hand-in-hand!

Key Takeaway: If a study has sampling bias (e.g., only using one gender or one culture), we cannot easily generalise the results to everyone else.
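If you like seeing ideas as numbers, here is a tiny Python sketch of sampling bias. All the figures are invented for illustration: a made-up population of ages, a random (representative) sample, and a "students only" sample like the one in the memory example above.

```python
import random

random.seed(42)

# Made-up population: ages of 10,000 people in the target population
population = [random.randint(16, 80) for _ in range(10_000)]

# Representative sample: 100 people drawn at random from everyone
random_sample = random.sample(population, 100)

# Biased sample: 100 people, but only those aged 18-21
# (like testing "human memory" on university students alone)
students_only = [age for age in population if 18 <= age <= 21]
biased_sample = random.sample(students_only, 100)

def mean(xs):
    return sum(xs) / len(xs)

print(f"Population mean age:    {mean(population):.1f}")
print(f"Random sample mean age: {mean(random_sample):.1f}")
print(f"Biased sample mean age: {mean(biased_sample):.1f}")
```

The random sample's mean lands close to the population's; the biased sample's mean is wildly off, which is exactly why we cannot generalise from it.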

2. Can we trust it? Reliability

Reliability is all about consistency. If you stepped on a bathroom scale and it said 70kg, then you stepped off and back on again and it said 85kg, that scale is unreliable. It’s not consistent.

Types of Reliability:

Internal Reliability: Do all the different parts of a test or questionnaire measure the same thing?
Example: If a "Happiness Quiz" has 10 questions, but question #5 asks about your height, that question is ruining the internal reliability because it doesn't fit with the others.

External Reliability: Does the study produce the same results if you do it again at a different time?

Inter-rater Reliability: The extent to which two or more researchers watching the same behaviour record it in the same way. If they both record the same thing, inter-rater reliability is high.
Analogy: Think of two referees in a football match. If they both agree it was a foul, their "inter-rater reliability" is good!
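A simple way to put a number on inter-rater reliability is the proportion of observations the two observers agree on. A minimal sketch with invented ratings:

```python
# Two observers' records of the same 10 behaviour samples
# (1 = aggressive act seen, 0 = not seen) -- invented data
observer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
agreement_rate = agreements / len(observer_a)

print(f"Inter-rater agreement: {agreement_rate:.0%}")  # 80% here
```

(Real studies often use a more sophisticated statistic such as Cohen's kappa, which also corrects for agreement that happens by chance.)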

Test-retest: A way to check external reliability by testing the same participants twice with the same task, at different times, to see if the scores are similar.

Split-half: A way to check internal reliability by splitting a test in half and seeing if participants get similar scores on both halves.

Quick Review: Reliability = Consistency. If you see the word "consistent" in an exam question, think Reliability!

3. Is it "The Truth"? Validity

Validity is about accuracy. Does the test actually measure what it claims to measure?

Internal Validity

Is the researcher actually measuring the Independent Variable (the thing they changed), or is something else (an extraneous variable) messing up the results?
Example: You want to see if coffee makes people run faster. If the group drinking coffee also happens to be professional athletes, your study lacks internal validity because you don't know if it's the coffee or the skill making them fast!

Other ways to check Validity:

Face Validity: Does the test look like it measures what it's supposed to at first glance?

Construct Validity: Does the test measure the actual "theory" or idea? (e.g., does an IQ test really measure "intelligence"?)

Concurrent Validity: Does the result of this new test match the results of an older, established test?

Predictive Validity: How well does the test predict future behaviour? (Together, concurrent and predictive validity are known as criterion validity.)

External Validity

Can the results be applied outside of the specific study setting?
1. Population Validity: Can we apply the results to other groups of people?
2. Ecological Validity: Can we apply the results to real-life settings?
Example: A memory test in a quiet, boring laboratory might have low ecological validity because real life is noisy and full of distractions.

Key Takeaway: A study can be reliable (consistent) but not valid (accurate). A scale that is always 5kg too heavy is reliable because it's always 5kg off, but it's not valid because it's the wrong weight!
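The broken-scale idea can be put into numbers. In this sketch (the readings are invented), the scale's readings barely vary, so it is reliable, yet they all sit about 5kg above the true weight, so it is not valid:

```python
true_weight = 70.0  # kg, what the person actually weighs

# A scale that always reads about 5 kg too heavy -- invented readings
readings = [75.1, 74.9, 75.0, 75.2, 74.8]

mean_reading = sum(readings) / len(readings)
spread = max(readings) - min(readings)

print(f"Spread of readings: {spread:.1f} kg  (small -> reliable)")
print(f"Error vs. truth:    {mean_reading - true_weight:+.1f} kg (large -> not valid)")
```

Low spread = consistency (reliability); the 5kg error = inaccuracy (lack of validity). The two are separate questions.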

4. Things that "mess up" research

Psychology is tricky because people know they are being studied, and they change their behaviour!

Demand Characteristics

This is when participants guess the aim of the study and change their behaviour, either to "help" the researcher (the "please-you" effect) or to "ruin" the study (the "screw-you" effect).

Social Desirability

Participants often act in a way that makes them look "good" or "normal."
Example: If a researcher asks "Do you wash your hands every time you use the bathroom?", most people say "Yes" even if they don't, because it's socially desirable.

Researcher/Observer Bias

Researchers are human! Sometimes they see what they want to see to prove their theory right.

Researcher/Observer Effect(s)

The mere presence of a researcher can change how a participant acts. Imagine how you act when a teacher is standing right behind you vs. when they are out of the room!


5. Ethics: Doing the "Right Thing"

In the UK, the British Psychological Society (BPS) has a Code of Ethics. You can remember the four main pillars with the mnemonic R.C.R.I.: Respect, Competence, Responsibility, Integrity.

1. Respect

Informed Consent: Participants should know what they are signing up for.
Right to Withdraw: Participants can leave at any time and take their data with them.
Confidentiality: Keep names and personal details private.

2. Competence

Psychologists should only conduct research they are qualified to do.

3. Responsibility

Protection of Participants: Participants must not be harmed physically or mentally.
Debrief: After the study, tell the participants the true aim and make sure they are okay.

4. Integrity

Deception: Researchers should avoid lying to participants unless it's absolutely necessary for the study (and even then, they must debrief them afterwards!).

Quick Review Box:
Ethical Principles:
  • Respect: Consent, Withdraw, Privacy.
  • Competence: Knowing what you're doing.
  • Responsibility: Protection from harm, Debriefing.
  • Integrity: Honesty/Avoiding Deception.

6. Ethnocentrism: A common trap

Ethnocentrism is when we look at the world only from the point of view of our own culture.
Did you know? A lot of early psychology was based on Western, white, middle-class participants. If we assume that their behaviour is "the norm" for the whole world, we are being ethnocentric.

Common Mistake to Avoid: Don't confuse Ethnocentrism with Sampling Bias. Sampling bias is who is in your study; Ethnocentrism is the mindset that your culture is the standard for everyone else.


Final Summary of Methodological Issues

When you are evaluating a study in your exam, ask yourself these "Big Four" questions:

  1. Reliability: Is the measurement consistent?
  2. Validity: Is it measuring what it says it is?
  3. Ethics: Were the participants treated fairly?
  4. Generalisability: Can we apply this to the real world and other people?

Master these, and you are well on your way to an A!