Welcome to Psychological Skills!

Welcome to the engine room of Psychology! You might be wondering, "Why do I need to learn about math and methods when I want to learn about people?" The answer is simple: Psychology is a science. To understand why people think and act the way they do, we need reliable tools to measure behavior.

In this section, you’ll learn how psychologists design studies, how they handle data, and how they stay ethical. Think of these skills as your "Psychology Toolkit." Once you master these, you’ll be able to look at any study and figure out if it’s solid science or just a lucky guess. Don't worry if it seems a bit "maths-heavy" at first—we will break it down step-by-step!

1. The Research Toolbox: How We Gather Data

Psychologists use different methods depending on what they want to find out. Here are the main ones you need to know from your syllabus:

A. Self-Report: Questionnaires and Interviews

This is simply asking people about themselves.
- Questionnaires: Written questions. They can use closed questions (like "Yes/No" or a 1-5 scale) which give us quantitative data (numbers), or open questions which allow for detailed descriptions, giving us qualitative data (words).
- Interviews: These can be structured (fixed questions), unstructured (like a conversation), or semi-structured (a mix of both).

B. Experiments

The "Gold Standard" for finding cause and effect.
- Laboratory Experiments: Done in a controlled environment. Great for control, but can feel "fake" (low ecological validity).
- Field Experiments: Done in the real world. More natural, but harder to control "outside" factors.

C. Observations

Watching what people do.
- Naturalistic: Watching people in their own environment.
- Controlled: Watching people in a set-up lab.
- Participant: The researcher joins in!
- Overt vs. Covert: Overt means they know you're watching; Covert means you’re "undercover."

D. Correlations

This looks for a relationship between two co-variables (e.g., Is there a link between age and memory?).
- Positive Correlation: Both go up together.
- Negative Correlation: As one goes up, the other goes down.
- Zero Correlation: No link at all.

Quick Review: Remember, a correlation does NOT mean one thing caused the other. It just shows they are linked. For example, ice cream sales and sunburns are correlated, but ice cream doesn't cause sunburns—the hot sun causes both!
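If you like seeing the numbers in action, here is a small Python sketch of how a correlation coefficient is actually calculated. The revision-hours data is invented purely for illustration; the coefficient runs from +1 (perfect positive) through 0 (no link) to -1 (perfect negative).

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient:
    +1 = perfect positive, 0 = no linear link, -1 = perfect negative."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # How the two co-variables move together...
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # ...scaled by how much each one varies on its own.
    spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

# Invented example data: test scores rise with revision hours.
hours_revised = [1, 2, 3, 4, 5]
test_score = [50, 55, 65, 70, 80]
print(round(pearson_r(hours_revised, test_score), 2))  # 0.99 — strong positive correlation
```

Note that the function only reports how strongly the two variables are linked; as the Quick Review above says, it can never tell you that one caused the other.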

Key Takeaway: Choose your method based on your goal. Want numbers from 1,000 people? Use a questionnaire. Want to find the cause of a behavior? Use an experiment.

2. Planning Your Study: The Blueprint

Before you start, you need a plan. Here are the building blocks of a psychological study.

Hypotheses (The Prediction)

A hypothesis is a clear, testable statement.
- Null Hypothesis: Predicts NO effect or NO difference (e.g., "Age will not affect memory score").
- Alternative Hypothesis: Predicts there WILL be an effect.
- Directional (One-tailed): Predicts exactly which way the result will go (e.g., "Older people will score lower").
- Non-directional (Two-tailed): Just says there will be a difference, but doesn't say which way (e.g., "There will be a difference in memory scores between young and old").

Variables (The Moving Parts)

- Independent Variable (IV): The thing you change/manipulate.
- Dependent Variable (DV): The thing you measure.
- Extraneous Variables: "Nuisance" variables that might mess up your results if you don't control them (like noise during a memory test).

Sampling (Picking Your Participants)

How do you pick people from the target population?
- Random Sampling: Everyone has an equal chance (like pulling names from a hat).
- Opportunity Sampling: Using whoever is available at the time (easy, but biased).
- Volunteer Sampling: People sign themselves up (often attracts a certain "type" of person).
- Stratified Sampling: Making sure your sample reflects the sub-groups of the target population in the same proportions (e.g., if the population is 50% male and 50% female, so is your sample).


Memory Aid: Think of sampling like soup. To know if the whole pot of soup is good, you only need one spoonful. But that spoonful must have a bit of everything in it (vegetables, broth, salt) to be a "representative" sample!
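To make the "spoonful of soup" idea concrete, here is a short Python sketch contrasting random and stratified sampling. The population, names, and group sizes are all invented for illustration.

```python
import random

# An invented target population: 60 males and 40 females.
population = [("person%d" % i, "male" if i < 60 else "female")
              for i in range(100)]

# Random sampling: every member has an equal chance of being picked.
random_sample = random.sample(population, 10)

def stratified_sample(pop, size):
    """Sample each sub-group in proportion to its share of the population."""
    groups = {}
    for person in pop:
        groups.setdefault(person[1], []).append(person)
    sample = []
    for members in groups.values():
        quota = round(size * len(members) / len(pop))  # proportional quota
        sample.extend(random.sample(members, quota))
    return sample

strat = stratified_sample(population, 10)
print(sum(1 for _, sex in strat if sex == "male"))  # 6 — matches the 60% in the population
```

Notice that the random sample might, by bad luck, contain 9 males and 1 female, while the stratified sample is guaranteed to mirror the population's proportions.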

Key Takeaway: A good study has a clear hypothesis, controlled variables, and a representative sample so the results can be generalized to others.

3. Ethics: The Rules of the Game

Psychologists must follow the British Psychological Society (BPS) guidelines to protect participants.
- Informed Consent: Participants must agree to take part and know what they are getting into.
- Deception: Avoid lying to participants unless absolutely necessary.
- Right to Withdraw: They can leave at ANY time.
- Protection from Harm: They shouldn't be more stressed than they would be in everyday life.
- Confidentiality: Keep their data private and anonymous.

Did you know? In the past, studies like Milgram’s (the electric shock study) were very stressful. Today’s ethical rules are much stricter to make sure everyone stays safe!

Key Takeaway: Ethical research respects the dignity and safety of participants. If a study is unethical, it's usually considered "bad science."

4. Working with Data: The "Mathsy" Bit

Once you have your results, you need to describe them using Descriptive Statistics.

Measures of Central Tendency (The Average)

- Mean: Add all scores and divide by the number of scores. (The "mathematical average").
- Median: The middle score when they are in order.
- Mode: The most common score.
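The three averages above can be checked with a few lines of Python. The memory-test scores here are made up; notice how one extreme score drags the mean upwards while leaving the median and mode untouched.

```python
from statistics import mean, median, mode

# Invented memory-test scores; the 18 is an outlier.
scores = [3, 5, 5, 9, 18]

print(mean(scores))    # (3+5+5+9+18) / 5 = 8 — pulled up by the outlier
print(median(scores))  # middle score when ordered = 5
print(mode(scores))    # most common score = 5
```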

Measures of Dispersion (The Spread)

- Range: The difference between the highest and lowest score.
- Standard Deviation: A more precise way to see how much scores "spread out" around the mean. A low standard deviation means most people scored very similarly.
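Here is a quick Python sketch of why standard deviation is more informative than the mean alone. The two invented groups below have exactly the same mean (10), but very different spreads.

```python
from statistics import stdev

group_a = [9, 10, 10, 10, 11]   # scores clustered tightly around the mean
group_b = [2, 6, 10, 14, 18]    # same mean of 10, but much more spread out

print(max(group_a) - min(group_a))  # range = 2
print(max(group_b) - min(group_b))  # range = 16
print(round(stdev(group_a), 2))     # 0.71 — low SD: everyone scored similarly
print(round(stdev(group_b), 2))     # 6.32 — high SD: scores vary a lot
```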

Inferential Statistics (The Decision)

This helps us decide if our results happened by chance or if they are significant. You will come across tests like the Wilcoxon, Spearman’s Rank, and Chi-Squared.

The magic number is usually \(p \leq 0.05\). This means there is a 5% (or less) probability that results like ours would turn up by chance alone (i.e., if the null hypothesis were true). If \(p\) is that small, we reject the null hypothesis and conclude that the IV probably did affect the DV!
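The decision rule itself is very simple, as this little Python sketch shows (the p-values fed in are invented examples, not from a real statistical test):

```python
ALPHA = 0.05  # the conventional cut-off for significance

def decide(p_value, alpha=ALPHA):
    """Reject the null hypothesis when p is at or below the cut-off."""
    if p_value <= alpha:
        return "significant: reject the null hypothesis"
    return "not significant: retain the null hypothesis"

print(decide(0.03))  # significant: reject the null hypothesis
print(decide(0.20))  # not significant: retain the null hypothesis
```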

Common Mistake: Students often think "significant" means "important." In psychology, it just means "statistically unlikely to be a fluke."

Key Takeaway: Descriptive stats summarize the data; Inferential stats tell us whether the data actually supports our hypothesis.

5. Evaluating Research: Is it any good?

When you evaluate a study, you are looking for two things: Reliability and Validity.

Reliability = Consistency

If you did the study again, would you get the same result?
- Example: If a bathroom scale shows 70kg, then 80kg, then 60kg in two minutes, it is not reliable.

Validity = Truth/Accuracy

Are you measuring what you claim to be measuring?
- Internal Validity: Was the study well-controlled? Did the IV cause the DV?
- Ecological Validity: Does this reflect real life?
- Objectivity vs. Subjectivity: Objectivity is based on facts and measurements (good!). Subjectivity is based on personal feelings or opinions (risky!).

Quick Comparison:
- Reliability is like a clock that is always 5 minutes fast. It's consistent (reliable), but it's not telling the truth (not valid).
- Validity is like a clock that shows the exact right time.

Key Takeaway: For a study to be excellent, it needs to be both reliable (consistent) and valid (truthful).

Summary Checklist for Success:

- Can I name the IV and DV in a study?
- Do I know the difference between a directional and non-directional hypothesis?
- Can I list at least 3 BPS ethical guidelines?
- Do I understand that \(p \leq 0.05\) means the results are likely not due to chance?
- Can I explain the difference between reliability and validity?

Don't worry if this seems tricky at first! These skills will come up again and again in every topic you study, and you'll become an expert in no time.