Chapter 1 (c)
The Need for Psychological Science
As
we will see, our thinking, memory, and attitudes operate on two levels—conscious and
unconscious—with the larger part operating automatically, off-screen. Like jumbo jets,
we fly mostly on autopilot.
So, are we smart to listen to the whispers of our inner wisdom, to simply trust “the force
within”? Or should we more often be subjecting our intuitive hunches to skeptical scrutiny?
This much seems certain. We often underestimate intuition’s perils. My geographical
intuition tells me that Reno is east of Los Angeles, that Rome is south of New York, that
Atlanta is east of Detroit. But I am wrong, wrong, and wrong.
Chapters to come will show that experiments have found people greatly overestimating their lie detection accuracy, their eyewitness recollections, their interviewee assessments, their risk predictions, and their stock-picking talents. “The first principle,” said
Richard Feynman (1997), “is that you must not fool yourself—and you are the easiest
person to fool.”
Indeed, observed novelist Madeleine L’Engle, “The naked intellect is an extraordinarily inaccurate instrument” (1973). Three phenomena—hindsight bias, judgmental
overconfidence, and our tendency to perceive patterns in random events—illustrate why we
cannot rely solely on intuition and common sense.
WHY We Need Psychological Research:
We need psychological research because, as humans, we are susceptible to the following three errors in thinking:
1) Hindsight Bias
2) Overconfidence
3) Perceiving order in random events
1) Hindsight Bias: Hindsight bias (also known as the I-knew-it-all-along phenomenon) is easy to
demonstrate: Give half the members of a group some purported psychological finding,
and give the other half an opposite result. Tell the first group, “Psychologists have found
that separation weakens romantic attraction. As the saying goes, ‘Out of sight, out of
mind.’” Ask them to imagine why this might be true. Most people can, and nearly all will
then view this true finding as unsurprising.
Tell the second group the opposite, “Psychologists have found that separation strengthens romantic attraction. As the saying goes, ‘Absence makes the heart grow fonder.’”
People given this untrue result can also easily imagine it, and most will also see it as unsurprising. When two opposite findings both
seem like common sense, there is a problem.
Such errors in our recollections and explanations show why we need psychological research. Just asking people how and why they felt or acted as they did can sometimes be misleading—not because common sense is usually wrong, but because common sense more easily describes what has happened than what will happen. As physicist Niels Bohr reportedly said, “Prediction is very difficult, especially about the future.”
Some 100 studies have observed hindsight bias in various countries and among both children and adults (Blank et al., 2007). Nevertheless, Grandma’s intuition is often right. But sometimes Grandma’s intuition, informed by countless casual observations, has it wrong.
2) Overconfidence: We humans tend to think we know more than we do. Asked how sure we are of our answers to factual questions (Is Boston north or south of Paris?), we tend to be more confident than correct.
Or consider these three anagrams, which Richard Goranson (1978)
asked people to unscramble:
WREAT → WATER
ETRYN → ENTRY
GRABE → BARGE
About how many seconds do you think it would have taken you to unscramble each
of these? Did hindsight influence you? Knowing the answers tends to make us overconfident—surely the solution would take only 10 seconds or so, when in reality the average problem solver spends 3 minutes, as you also might, given a similar
anagram without the solution: OCHSA??
We also suffer from overconfidence when predicting social behavior. University of Pennsylvania
psychologist Philip Tetlock (1998, 2005) collected more than 27,000 expert predictions of world events, such as the future of South Africa or whether Quebec would separate from Canada. His repeated finding: These predictions, which experts made with 80 percent confidence on average, were right less than 40 percent of the time. Nevertheless, even those who erred maintained their confidence by noting they were “almost right.” “The Québécois separatists almost won the secessionist referendum.”
3) Perceiving order in random events:
In our natural eagerness to make sense of our world—what poet Wallace Stevens called
our “rage for order”—we are prone to perceive patterns. People see a face on the moon,
hear Satanic messages in music, perceive the Virgin Mary’s image on a grilled cheese
sandwich. Even in random data we often find order, because—here’s a curious fact of
life—random sequences often don’t look random (Falk et al., 2009; Nickerson, 2002, 2005).
In actual random sequences, patterns and streaks (such as repeating digits) occur more
often than people expect (Oskarsson et al., 2009).
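To make this concrete, here is a minimal simulation sketch (in Python; the 20-flip sequence length, 10,000 trials, and streak threshold of five are arbitrary illustrative choices, not values from the text). It estimates how often a fair coin produces a streak of five or more identical outcomes in 20 flips, an event that tends to occur more often than intuition suggests.

```python
import random

def longest_streak(flips):
    """Return the length of the longest run of identical outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

def streak_frequency(n_flips=20, n_trials=10_000, threshold=5):
    """Estimate how often a fair-coin sequence of n_flips contains a streak
    of `threshold` or more identical outcomes (heads or tails) in a row."""
    hits = 0
    for _ in range(n_trials):
        flips = [random.choice("HT") for _ in range(n_flips)]
        if longest_streak(flips) >= threshold:
            hits += 1
    return hits / n_trials

if __name__ == "__main__":
    print(f"Estimated probability of a 5+ streak in 20 flips: {streak_frequency():.2f}")
```

Typical runs show such streaks appearing in a sizable fraction of sequences, which is why streaks in genuinely random data so easily invite pattern-hunting.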
The Scientific Attitude:
1) Curious
2) Skeptical
3) Humble
Underlying all science is, first, a hard-headed curiosity, a passion to explore and understand without misleading or being misled.
No matter how sensible-seeming or wild an idea, the smart thinker asks: Does it work?
When put to the test, can its predictions be confirmed? Subjected to such scrutiny,
crazy-sounding ideas sometimes find support. During the 1700s, scientists scoffed at the
notion that meteorites had extraterrestrial origins. When two Yale scientists challenged
the conventional opinion, Thomas Jefferson jeered, “Gentlemen, I would rather believe
that those two Yankee professors would lie than to believe that stones fell from Heaven.”
Sometimes scientific inquiry turns jeers into cheers.
More often, science becomes society’s garbage disposal, sending crazy-sounding ideas
to the waste heap, atop previous claims of perpetual motion machines, miracle cancer
cures, and out-of-body travels into centuries past. To sift reality from fantasy, sense from
nonsense, therefore requires a scientific attitude: being skeptical but not cynical, open
but not gullible.
“To believe with certainty,” says a Polish proverb, “we must begin by doubting.” As
scientists, psychologists approach the world of behavior with a curious skepticism, persistently asking two questions: What do you mean? How do you know?
When ideas compete, skeptical testing can reveal which ones best match the facts. Do
parental behaviors determine children’s sexual orientation? Can astrologers predict your
future based on the position of the planets at your birth? Is electroconvulsive therapy
(delivering an electric shock to the brain) an effective treatment for severe depression?
Putting a scientific attitude into practice requires not only curiosity and skepticism
but also humility—an awareness of our own vulnerability to error and an openness to surprises and new perspectives. In the last analysis, what matters is not my opinion or yours,
but the truths nature reveals in response to our questioning. If people or other animals
don’t behave as our ideas predict, then so much the worse for our ideas. This humble
attitude was expressed in one of psychology’s early mottos: “The rat is always right.”
CRITICAL THINKING:
The scientific attitude prepares us to think smarter. Smart thinking, called critical
thinking, examines assumptions, discerns hidden values, evaluates evidence, and
assesses conclusions. Whether reading a news report or listening to a conversation,
critical thinkers ask questions. Like scientists, they wonder, How do they know that?
What is this person’s agenda? Is the conclusion based on anecdote and gut feelings,
or on evidence? Does the evidence justify a cause-effect conclusion?
Scientific Method in Psychology:
In everyday conversation, we often use theory to mean “mere hunch.” In science, a theory
explains with principles that organize observations and predict behaviors or events. By organizing isolated facts, a theory simplifies. By linking facts with deeper principles, a theory
offers a useful summary. As we connect the observed dots, a coherent picture emerges.
A good theory about the effects of sleep deprivation on memory, for example, helps us
organize countless sleep-related observations into a short list of principles. Imagine that
we observe over and over that people with poor sleep habits cannot answer questions in
class, and they do poorly at test time. We might therefore theorize that sleep improves
memory. So far so good: Our sleep-retention principle neatly summarizes a list of facts
about the effects of sleep loss.
WHAT a successful theory looks like:
Yet no matter how reasonable a theory may sound—and it does seem reasonable to suggest that sleep loss could affect memory—we must put it to the test. A good theory produces testable predictions, called hypotheses. By enabling us to test and to reject or revise our theory, such predictions direct research. They specify what results would support the theory and what results would disconfirm it. To test our theory about the effects of sleep on memory, we might assess people’s retention of course materials after a good night’s sleep, or a shortened night’s sleep.
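To show what “testable” means in practice, the sketch below uses hypothetical recall scores (the numbers and group names are invented for illustration, not data from the text) and a simple permutation test to ask whether a well-rested group really remembers more than a sleep-deprived group, or whether a difference of that size could plausibly arise by chance.

```python
import random
from statistics import mean

# Hypothetical recall scores (percent correct) for two groups of students.
# These values are illustrative only; they are not data from the text.
rested   = [82, 75, 90, 68, 88, 79, 85, 73, 91, 77]
deprived = [64, 70, 58, 72, 61, 66, 55, 69, 63, 60]

observed_diff = mean(rested) - mean(deprived)

def permutation_p_value(a, b, n_permutations=10_000):
    """Estimate how often a difference at least as large as the observed one
    would arise if group labels were assigned at random, i.e., if sleep had
    no effect on retention."""
    combined = a + b
    count = 0
    for _ in range(n_permutations):
        random.shuffle(combined)
        diff = mean(combined[:len(a)]) - mean(combined[len(a):])
        if diff >= observed_diff:
            count += 1
    return count / n_permutations

if __name__ == "__main__":
    print(f"Observed difference in mean recall: {observed_diff:.1f} points")
    print(f"Permutation p-value: {permutation_p_value(rested, deprived):.3f}")
```

If the estimated p-value is small, the data support the sleep-retention prediction; if not, this particular test fails to confirm it, which is exactly the kind of decisive outcome a good hypothesis makes possible.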