
Low Reading Skills Linked to Conspiracy Beliefs? 5 Key Findings

    Academic ability, personality, and reading comprehension

    People with low reading comprehension beliefs — that is, those who trust their ability to understand text without truly questioning it — may be far more vulnerable to wise-sounding nonsense than they realize. Have you ever scrolled through social media, stumbled upon a phrase like “The universe harmonizes with your inner consciousness,” and felt a quiet sense of depth, even though you couldn’t quite explain why? You are not alone. Research suggests this experience is surprisingly common, and it reveals something important about how the human mind processes language.

    A landmark study published in the academic journal Judgment and Decision Making explored exactly this phenomenon. Researchers at the University of Waterloo in Canada examined how roughly 800 participants responded to sentences that sounded profound but were, in fact, constructed by randomly combining impressive-sounding words. The findings challenge the comfortable assumption that being a competent reader automatically protects you from being misled by empty language. In this article, we unpack what the research revealed — and what it means for the way you consume information every day.

    Once again, personality researcher Tokiwa (@etokiwa999), author of the Villain Encyclopedia, provides the explanation.

    What Is “Pseudo-Profound Bullshit” and Why Does It Feel So Deep?

    Pseudo-profound bullshit is language that sounds meaningful but contains no verifiable or specific content. The term was coined by researchers to describe a very particular type of deceptive communication — one that is distinct from lying. Understanding that distinction is the first step to recognizing it in the wild.

    When someone tells a lie, they know the truth and deliberately say the opposite. Pseudo-profound bullshit is different: the speaker (or poster, or influencer) is simply indifferent to whether the statement is true or false. The goal is to create an impression — of wisdom, spiritual authority, or deep insight — without being pinned down to any testable claim. The sentences are grammatically correct, which is key. Because the structure feels familiar and orderly, our brains tend to assume the content must also be meaningful.

    Classic examples from the study included phrases like “Wholeness quiets infinite phenomena” or “Consciousness harmonizes with the universe.” Notice what these sentences share:

    • Abstract vocabulary — words like “wholeness,” “consciousness,” and “phenomena” sound important but are vague by nature.
    • Correct grammar — the sentences follow standard grammatical rules, triggering an assumption of coherence.
    • No falsifiable claim — there is no way to say these statements are wrong, because they don’t assert anything specific enough to be tested.
    • Resistance to counter-argument — because nothing specific is being said, it is very hard to push back.

    Research suggests that when all 4 of these features combine, a significant portion of readers will rate the sentence as at least somewhat profound — not because they are unintelligent, but because the brain’s default mode is to accept before it questions. Understanding this mechanism is what separates people who can detect this kind of language from those who absorb it uncritically.

    What the Experiment Actually Found: Numbers That Should Surprise You

    When shown grammatically correct but meaningless sentences, the majority of participants rated them as at least slightly profound — and about 27% rated them as genuinely deep. These are not fringe numbers. They represent a systematic, measurable bias in how humans process vague but well-formed language.

    In the first phase of the study, approximately 280 university students were shown 10 randomly constructed sentences. Each sentence was rated on a scale of 1 (“not at all profound”) to 5 (“very profound”). The average rating across all participants was 2.6 — sitting between “slightly” and “moderately” profound. More strikingly, only about 18% of participants gave an average score below 2, meaning roughly 82% found the meaningless sentences at least a little deep.

    A second phase compared these randomly generated sentences against real quotations attributed to a well-known new-age figure. The real quotations averaged approximately 2.77 — only marginally higher than the fabricated ones. But here is the critical detail: the correlation between how a person rated the fake sentences and how they rated the real ones was an extremely high 0.88. In other words, if you were inclined to find the nonsense deep, you were almost equally inclined to find the actual quotations deep — and vice versa.

    The researchers also included a control group of mundane, clearly factual sentences such as “Newborn babies require a lot of attention.” These averaged just 1.4, with more than 80% of participants assigning the lowest possible score. This contrast confirmed that the effect was not just a general tendency to rate everything highly — it was specifically triggered by the combination of abstraction, vagueness, and grammatical correctness.

    Key takeaways from the data:

    • Average profundity rating for fabricated sentences: 2.6 out of 5, suggesting a default bias toward mild acceptance.
    • Approximately 27% of participants rated fabricated sentences 3 or above, meaning more than 1 in 4 considered them genuinely meaningful.
    • Correlation of 0.88 between ratings of fake and real quotations, showing the same cognitive pattern drives both judgments.
    • Mundane sentences averaged only 1.4, confirming the effect is specific to abstract, vague language rather than a general acquiescence bias.

    These findings indicate that the human tendency to perceive depth in vague language is not random — it is structured, predictable, and tied to specific characteristics of both the text and the reader.
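    To make the 0.88 figure concrete: it means that a participant’s ratings of the fabricated sentences almost perfectly predicted their ratings of the real quotations. The sketch below computes Pearson’s r from scratch on hypothetical per-participant averages (illustrative numbers, not the study’s data):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of
    the two standard deviations (computed here without normalizing,
    since the shared factors cancel)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant average ratings (NOT the study's data):
# each index is one participant's mean rating of fabricated vs. real quotations.
fake_avgs = [1.2, 2.0, 2.6, 3.1, 4.0]
real_avgs = [1.5, 2.2, 2.8, 3.0, 4.2]
print(round(pearson_r(fake_avgs, real_avgs), 2))  # a high r, in the spirit of the study's 0.88
```

    A correlation this strong tells you the two ratings move together across people: whoever finds one kind of sentence deep tends to find the other deep too.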

    Low Reading Comprehension Beliefs, Intuitive Thinking, and Susceptibility to Misinformation

    People who rely more heavily on intuition — and those with lower scores on analytical thinking assessments — tend to rate pseudo-profound sentences significantly higher. This is one of the most practically important findings in the entire study, because it links a cognitive style to a real-world vulnerability.

    Participants completed the Cognitive Reflection Test (CRT), a short set of problems specifically designed to have an obvious but incorrect intuitive answer. For example, a classic item goes: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?” The intuitive answer is 10 cents, but the correct answer is 5 cents. People who slow down and reconsider tend to get these right; people who trust their first instinct tend to get them wrong.
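    The bat-and-ball arithmetic is easy to verify once it is written down as an equation. If the ball costs x, the bat costs x + 1.00, and together they must total 1.10, so x = (1.10 − 1.00) / 2 = 0.05. A minimal check (the function name and parameters are ours, for illustration):

```python
def solve_bat_and_ball(total=1.10, difference=1.00):
    """Return (ball, bat) prices given the total and the price difference.

    From ball + (ball + difference) == total, the ball costs
    (total - difference) / 2.
    """
    ball = (total - difference) / 2
    bat = ball + difference
    return round(ball, 2), round(bat, 2)

ball, bat = solve_bat_and_ball()
print(ball, bat)  # 0.05 1.05

# The intuitive answer fails the constraint: a 10-cent ball would make
# the bat $1.10, for a total of $1.20 rather than $1.10.
```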

    The study found a correlation of approximately −0.33 between CRT scores and pseudo-profound bullshit receptivity. While not enormous, this is a meaningful relationship: higher analytical ability correlates with lower susceptibility to vague, impressive-sounding nonsense. Participants who scored well on the CRT were more likely to give low profundity ratings to fabricated sentences while still giving appropriate high ratings to genuinely meaningful content — demonstrating not blanket skepticism, but discernment.

    Participants who reported a strong preference for intuitive thinking showed the opposite pattern. They rated fabricated sentences higher, were more likely to endorse belief in the paranormal, and showed greater receptivity to conspiracy theory psychology — the tendency to see hidden, meaningful patterns in random or ambiguous information. This suggests that low reading comprehension beliefs alone are not the full picture; the cognitive style a person habitually uses when processing information plays an equally important role.

    The Connection Between Pseudo-Profound Bullshit Receptivity, Religious Belief, and Conspiracy Theory Psychology

    Research indicates that susceptibility to pseudo-profound language tends to cluster with several other belief patterns, including stronger endorsement of conspiratorial thinking, paranormal belief, and certain forms of religious or spiritual conviction. This is not an attack on any of these belief systems — it is an observation about the cognitive pathways that make them feel compelling.

    The study found that participants who rated fabricated sentences as profound were also significantly more likely to:

    • Report belief in the paranormal — such as psychic abilities, astrology, or supernatural phenomena.
    • Endorse conspiratorial thinking — the sense that powerful hidden forces secretly shape world events.
    • Score lower on measures of reflective thinking — indicating a preference for fast, intuition-based conclusions over slow, evidence-based reasoning.
    • Show higher openness to alternative medicine and unverified health claims — which often use similarly vague, impressive-sounding language.

    It is important to note what this research does not say. It does not claim that religious belief is equivalent to believing nonsense, nor that all spiritual language is pseudo-profound. What it does suggest is that a general tendency toward pattern-seeking in ambiguous information — sometimes called apophenia — may make certain types of vague language feel more resonant across multiple domains simultaneously. In other words, the same cognitive style that makes someone susceptible to one type of unfounded claim tends to make them susceptible to others as well.

    The implications for misinformation susceptibility are significant. In an era of social media, where short, abstract, emotionally resonant sentences travel faster than nuanced arguments, understanding this clustering of beliefs helps explain why debunking one false claim often does little to reduce a person’s overall susceptibility to misinformation. The underlying cognitive style remains unchanged.

    Why the Social Media Era Makes This Worse — and What Low Reading Comprehension Beliefs Cost You

    The structural features of modern social media — short character limits, emotional triggers, and rapid scrolling — create an almost ideal environment for pseudo-profound content to thrive. Understanding why this matters, and what the real cost of low reading comprehension beliefs can be in daily life, is essential for anyone trying to navigate information responsibly.

    Consider how a typical social media post is consumed. A reader spends perhaps 2 to 3 seconds on a sentence before deciding whether to pause or scroll on. In that window, the brain is not conducting a careful analysis of logical structure or evidence quality. It is making a rapid, heuristic judgment: does this feel true? Does this feel important? Does this match something I already believe? Abstract, vague language that uses impressive vocabulary is perfectly calibrated to trigger a “yes” on all 3 counts — without actually saying anything that could be verified or falsified.

    The research found that participants who were more susceptible to pseudo-profound language were not simply less intelligent. Many were functioning university students. The vulnerability is not primarily about raw intelligence — it is about whether a person habitually pauses to question first impressions. This habit of pausing — of asking “what does this actually mean, and how would I know if it were true?” — is precisely what critical thinking skills training aims to build.

    The practical costs of low reading comprehension beliefs can include:

    • Financial decisions — being persuaded by vague promises in investment pitches, wellness products, or self-help programs that use the language of transformation without specific claims.
    • Health choices — accepting unverified alternative therapies described in abstract but confident language.
    • Political and social views — being drawn into conspiratorial narratives that feel profound because they claim to explain everything, even though they explain nothing specifically.
    • Personal relationships — being unduly influenced by charismatic figures who speak in vague, elevated terms that feel wise but commit to nothing.

    Recognizing this landscape is not about becoming permanently cynical. It is about developing the mental reflex to pause before accepting — especially when something sounds impressively deep but you cannot quite explain why.

    How to Strengthen Your Critical Thinking Skills Against Vague Language

    Building resistance to pseudo-profound language is a learnable skill, and research on cognitive reflection suggests that even small interventions can shift a person’s default processing style. The goal is not to become suspicious of everything, but to develop a more calibrated response to the specific features that make vague language seductive.

    Here are 5 evidence-informed strategies, each with a clear rationale and a practical way to begin using it today:

    1. Practice the “Plain Language Test”

    When you encounter a sentence that feels profound, try to restate it in completely plain, specific language. If no specific restatement is possible, that is a strong signal the original sentence may not have had any meaning to begin with. For example, “Consciousness harmonizes with the universe” cannot be restated specifically, which tells you something important. Why it works: This forces your brain out of impression-based processing and into meaning-based processing, the same shift that higher CRT scorers appear to make naturally.

    2. Ask “What Would Make This Wrong?”

    A hallmark of pseudo-profound bullshit is that it cannot be falsified: you cannot imagine any evidence that would prove it incorrect. Train yourself to routinely ask: “What evidence, if I saw it, would show this claim to be false?” If no answer comes to mind, the statement is likely making no real claim at all. Why it works: This is a core element of scientific thinking, and practicing it even informally has been shown to improve resistance to misinformation over time.

    3. Slow Down Deliberately on Social Media

    Because pseudo-profound content is optimized for fast, intuitive processing, the simplest countermeasure is to artificially slow down. Before sharing or strongly reacting to a post, give yourself a rule: read it twice, and on the second reading, focus only on what specific, verifiable claim — if any — is being made. Why it works: Research on dual-process thinking consistently shows that slowing down activates more analytical processing, reducing the influence of impressions and emotional resonance.

    4. Study the Vocabulary of Vagueness

    Certain words appear repeatedly in pseudo-profound language: “energy,” “vibration,” “consciousness,” “wholeness,” “infinity,” “quantum” (used non-technically), and “universe” (used as a moral agent). None of these words are inherently problematic, but their presence in a sentence — especially in combination — should trigger a more careful read. Why it works: Pattern recognition is how experts in any domain quickly flag anomalies. Training yourself to notice these vocabulary clusters is building a form of expertise in detecting vague language.
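    This kind of vocabulary-cluster noticing can be sketched as a toy heuristic. The word list and the threshold below are illustrative assumptions, not a validated measure of pseudo-profundity; the point is only that flagging *combinations* of vague words is mechanically simple:

```python
import re

# Illustrative vocabulary of vagueness (an assumption, not an exhaustive
# or validated list).
VAGUE_WORDS = {
    "energy", "vibration", "consciousness", "wholeness",
    "infinity", "infinite", "quantum", "universe",
}

def flag_vague(sentence, threshold=2):
    """Return the vague words found if the sentence contains at least
    `threshold` of them, otherwise an empty set."""
    tokens = set(re.findall(r"[a-z]+", sentence.lower()))
    hits = tokens & VAGUE_WORDS
    return hits if len(hits) >= threshold else set()

print(flag_vague("Consciousness harmonizes with the universe"))
# flags 'consciousness' and 'universe'
print(flag_vague("Newborn babies require a lot of attention"))
# flags nothing
```

    A real detector would need context sensitivity (legitimate physics writing uses “quantum” and “energy” constantly), which is exactly why the human skill being trained here is noticing clusters and then reading more carefully, not auto-rejecting.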

    5. Balance Intuition with Structured Reflection

    The research does not suggest that intuition is bad — it suggests that relying on intuition exclusively for evaluating the meaning and truth of language is risky. A healthy approach treats the initial feeling of profundity as a prompt to look more carefully, not as a conclusion. Journaling, discussing ideas with skeptical friends, or simply sleeping on a strong impression before acting on it can all serve this function. Why it works: These practices introduce deliberate analytical processing after the initial intuitive response, mimicking the cognitive style of those who naturally score higher on reflective thinking measures.

    Frequently Asked Questions

    What exactly are “low reading comprehension beliefs” and why do they matter?

    Low reading comprehension beliefs refer to a tendency to feel confident that you have understood a piece of text even when its meaning is genuinely unclear or absent. They matter because research suggests this overconfidence in comprehension is linked to higher susceptibility to pseudo-profound language, conspiracy narratives, and other forms of misinformation. People with these beliefs may accept vague, impressive-sounding sentences as meaningful without pausing to verify whether any real content exists.

    What is the Cognitive Reflection Test, and what does it measure?

    The Cognitive Reflection Test (CRT) is a short set of problems designed so that each question has an obvious but incorrect intuitive answer. Successfully answering them requires the participant to override their first instinct and think more carefully. Research uses CRT scores as a proxy for analytical thinking style — a tendency to pause and reconsider rather than act on first impressions. In the bullshit receptivity study, higher CRT scores correlated with lower ratings of pseudo-profound sentences.

    Does this research mean that religious or spiritual people are more gullible?

    Not exactly. The study found statistical correlations between certain types of spiritual or paranormal belief and higher receptivity to vague language — but correlation is not the same as causation, and neither implies gullibility. Many forms of religious belief involve rich, centuries-old interpretive traditions that are anything but intellectually careless. The research identifies a specific cognitive style — preference for fast, intuitive processing — as the key variable, and this style exists across all belief systems and none.

    Why does grammatically correct language feel more meaningful, even when it isn’t?

    Psychological research suggests humans have a deep-seated tendency to associate grammatical structure with meaning. When a sentence “sounds right” syntactically, our brains tend to assume there is content to match. This is a useful shortcut in everyday communication — most well-formed sentences do have meaning — but it becomes a vulnerability when someone deliberately constructs grammatically correct sentences with no real content. The form of the sentence triggers acceptance before the content can be evaluated.

    How does social media specifically increase susceptibility to pseudo-profound content?

    Social media platforms reward content that triggers fast emotional responses — surprise, awe, a sense of hidden truth. Short-form content with character limits naturally reduces context and explanation, which increases abstraction and vagueness. Rapid scrolling behavior further limits the time available for analytical processing. All 3 of these features — brevity, abstraction, and speed — align almost perfectly with the conditions under which pseudo-profound language has the most impact on readers with less developed critical thinking habits.

    Can you actually train yourself to become more resistant to misinformation?

    Research indicates that yes, targeted practice can meaningfully shift how people evaluate ambiguous claims. Studies on “inoculation theory” suggest that briefly exposing people to the techniques used in misinformation — and explaining those techniques — builds resistance to future encounters. Similarly, regularly practicing the habit of asking “what specific claim is being made here?” appears to strengthen analytical processing over time, reducing the automatic acceptance of vague but impressive-sounding language.

    Is susceptibility to pseudo-profound bullshit related to lower intelligence?

    Not in a simple, direct way. The study’s participants were mostly university students — a group with above-average educational attainment. What distinguished higher from lower receptivity was not raw intelligence but cognitive style: the degree to which a person habitually slows down to question first impressions. Someone with very high general intelligence can still be highly susceptible if they rely predominantly on fast, intuition-driven processing. Conversely, developing deliberate analytical habits can substantially reduce susceptibility regardless of baseline intelligence.

    The research discussed in this article offers a genuinely useful lens for understanding why smart, educated people sometimes embrace empty ideas — and why low reading comprehension beliefs represent a subtle but real risk in a world saturated with persuasive, abstract language. The takeaway is not that you should distrust everything you read, but that genuine comprehension requires active effort: pausing, questioning, and demanding that impressive-sounding words actually point to something real. As you scroll through your feeds today, consider applying the Plain Language Test to the next sentence that gives you that quiet thrill of profundity — you may be surprised by what disappears when you ask it to be specific.