Does AI Really ‘Know’ What It’s Doing? New Study Says: Think Again

If you’ve ever felt a twinge of unease when a headline says an AI “knows,” “thinks,” or “decides,” you’re not alone. Those words hit a very human nerve. But here’s a twist you might not expect: a new analysis of more than 10,000 news stories suggests professional journalism is actually more restrained than our everyday chatter when it comes to talking about AI minds. And that restraint matters—for how we understand AI, how we regulate it, and how we keep hype in check.

In a study highlighted by ScienceDaily on April 19, 2026, researchers found that mental-state verbs like “knows” or “thinks” are relatively rare in news coverage. When such language appears, it sits on a spectrum—from purely functional descriptions (what the system does) to human-like attributions (what the system supposedly feels or intends). The authors argue that precision in how we talk about AI should become a common standard: it reduces public confusion, cools overblown expectations, and better informs policy and safety debates.

That’s a quiet revolution in media framing—and it has ripple effects far beyond the newsroom.

Source: ScienceDaily coverage

The study at a glance: restrained anthropomorphism, real-world stakes

Here’s what the research surfaced:

  • The team analyzed 10,000+ news articles to see how often reporting used mental-state verbs for AI.
  • They found these verbs were relatively rare in journalism compared with casual, everyday speech about AI.
  • When such terms did appear, they spanned a spectrum—from strictly functional (“the model classifies X”) to human-like (“the system knows Y”).
  • The authors warn that the human-like end of the spectrum can mislead readers, nudging the public toward overestimating AI’s inner life or intentions.
  • They urge more precise terminology to counter hype and improve AI safety discourse.
  • The findings highlight the press’s outsized role in how society perceives AI capabilities and limits.
  • For regulation, the implication is clear: communication standards should be explicit about language that risks anthropomorphism.

If you care about trustworthy AI, you should care about wording. The way we describe systems shapes what people believe—and how they behave—in high-stakes contexts like healthcare, finance, and elections.

Why words like “think” and “know” mislead when we talk about AI

Humans are natural anthropomorphizers. We project minds into thermostats, cars, and chatbots because it helps us predict behavior. But with modern AI—especially large language models—the risk is that our brains see agency where there’s none.

  • “Thinks” and “knows” imply internal representation, self-awareness, and intent.
  • Most current AI systems are statistical pattern machines. They optimize against a training objective and generate outputs without conscious understanding.
  • Misleading language creates false expectations: users over-trust outputs, regulators under- or over-react, and organizations deploy systems in contexts they shouldn’t.

This point echoes a broader critique in AI research. In 2021, a widely cited paper from the field—“On the Dangers of Stochastic Parrots”—warned against treating fluent outputs as proof of understanding, underscoring the gap between text generation and comprehension. You can read more here: On the Dangers of Stochastic Parrots (ACM).

Bottom line: when we say AI “thinks,” we often launder uncertainty into false certainty.

The language spectrum: from functional to human-like

The new study’s core idea—that AI language sits on a spectrum—offers a practical way to self-audit your own writing or reporting. Think of it as a “precision ladder.”

Functional, mechanistic language (safest)

These verbs describe observable system behavior:

  • “The model outputs a summary.”
  • “The system classifies images of skin lesions.”
  • “The assistant retrieves documents and ranks results.”
  • “The tool detects anomalies in network traffic.”
  • “The model predicts next tokens based on training data.”

This style communicates capability without claiming inner life.

Ambiguous or metaphorical verbs (use with care)

These can be accurate in narrow, technical senses but often read as human-like:

  • “The model learns patterns in the data.” (True in a machine-learning sense, but the everyday meaning of “learns” implies understanding.)
  • “The system understands user intent.” (Better: “infers” or “estimates.”)
  • “The AI decides which ad to show.” (Better: “selects” based on a scoring function.)
  • “The chatbot remembers previous messages.” (Better: “stores chat history” or “maintains context.”)

When you use these, anchor them with specifics: what objective is optimized, what signals are used, what constraints apply.

Human-like mental-state verbs (avoid or qualify)

These load the conversation with agency and intent:

  • “The AI knows,” “thinks,” “believes,” “wants,” “intends,” “feels,” “is curious,” “is creative.”

If you must use them (e.g., for brevity in a headline), hedge in the next clause: “figuratively speaking,” “as-if,” or “operationally, this means…”.

Headlines vs. ledes: how framing drives over-trust

Headlines punch above their weight. A snappy “AI that knows your style” will travel further on social feeds than a precise but dry “Recommendation model that estimates preferences.” The study’s finding that journalism is largely restrained likely reflects editorial checks in body copy. But we all know how headline framing can still bend perception.

Try these before-and-after rewrites:

  • “This AI knows who will default on a loan” → “This model estimates default risk from applicant data”
  • “The chatbot understands your emotions” → “The chatbot classifies sentiment signals from text”
  • “Our agent decides like a seasoned trader” → “Our agent selects trades based on learned patterns and risk rules”

Notice the effect: same capability, zero mind-reading. You’re not underselling; you’re underspeculating.

Why professional news may be more careful than social media

It’s easy to dunk on the press, but newsroom routines can be a safety net:

  • Editorial standards and legal review discourage unfounded claims.
  • Style guides nudge toward clarity and away from hype.
  • Reporters quote independent experts who temper marketing spin.

For instance, the Associated Press has codified guidance on AI usage and disclosure in journalism workflows: AP issues standards for use of generative AI. While that policy is about newsroom use rather than how to describe AI agents, the same culture of precision applies: be transparent, be specific, avoid misleading the public.

The takeaway: structure and accountability produce better language. That’s a cue for startups and PR teams too.

Why precise language helps AI safety and policy

Hype is not just a branding problem; it’s a risk management problem.

  • Over-trust: Users take outputs at face value (health advice, legal tips), ignoring uncertainty and failure modes.
  • Under-trust: Policymakers or the public react to sensational claims (AI “decides” to lie), prompting blunt, misaligned rules.
  • Misallocation: Organizations invest in the wrong controls—e.g., chasing “deception detection” instead of data governance and evaluation.

Regulatory frameworks increasingly stress clarity and risk-oriented language. A few useful references:

  • NIST AI Risk Management Framework (RMF): a practical playbook for evaluating and communicating AI risks across the lifecycle. See: NIST AI RMF.
  • EU AI Act: focuses on risk tiers, transparency, and documentation, encouraging precise claims about capability and limits. Overview: EU AI Act (European Commission).
  • FTC guidance on AI marketing: claims must be truthful, backed by evidence, and not overstate what a system can do. Read: FTC: Keep your AI claims in check.

A shared language standard—especially one that reins in anthropomorphic metaphors—makes it easier to align product documentation, audits, and public communication.

A practical style guide for AI coverage and comms

You don’t need a PhD in linguistics to write about AI responsibly. Try this fast, repeatable checklist.

For journalists and editors

  • Replace mental-state verbs with mechanistic alternatives (decides → selects; understands → parses/infers; knows → has access to/stores); see the code sketch after this list.
  • Anchor general claims in specific mechanisms: “trained on [types of data] to optimize [objective] under [constraints].”
  • Disclose limits and failure modes: accuracy in context, bias risks, adversarial behavior, uncertainty.
  • Attribute quotes with care: if a source says “our model understands,” add a clarifying clause or counterpoint from an independent expert.
  • Watch images and captions: robot hands and glowing brains are visual anthropomorphism. Pick neutral, task-relevant imagery.
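
As promised above, here is a minimal sketch in Python of that verb swap as a reusable glossary. The mappings and names below are illustrative examples, not a canonical list, and naive substring matching is only a starting point for a real editorial tool:

    # A minimal sketch: a reusable glossary of mental-state verbs and
    # mechanistic alternatives. The mappings are illustrative, not canonical.
    MENTAL_TO_MECHANISTIC = {
        "decides": "selects",
        "understands": "parses",
        "knows": "has access to",
        "thinks": "estimates",
        "remembers": "stores",
    }

    def suggest_rewrites(sentence: str) -> list[str]:
        """Return rewrite suggestions for mental-state verbs found in a sentence."""
        lowered = sentence.lower()
        return [
            f'consider replacing "{verb}" with "{alt}"'
            for verb, alt in MENTAL_TO_MECHANISTIC.items()
            if verb in lowered
        ]

    print(suggest_rewrites("The AI knows your style and decides what to show."))
    # ['consider replacing "decides" with "selects"',
    #  'consider replacing "knows" with "has access to"']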

For founders and PR teams

  • Match claims to evidence: benchmarks, ablations, user studies, and real-world metrics (not cherry-picked demos).
  • Avoid human-intent language in decks and press releases. If you use it, define it operationally in the next line.
  • Publish model cards or system cards that spell out data sources, intended use, and guardrails.
  • Pre-brief reporters with nuance: what the system can’t do is as valuable as what it can.

For policymakers and standards bodies

  • Encourage plain-language summaries in regulatory filings and risk disclosures.
  • Require documentation that ties claims to evaluations: task, data distribution, metrics, and known gaps.
  • Promote communication standards (e.g., taxonomy of capability verbs) across sectors.
  • Support media literacy and public education on AI limitations.

For everyday readers and teams adopting AI

  • Ask four questions: What data trained it? What task is it optimized for? How was it evaluated? Where does it fail?
  • Treat fluency as a reason for caution, not proof of understanding.
  • Look for uncertainty estimates, not just single-point predictions.
  • Test on your own edge cases before rolling out.

For a broader ethical framing, UNESCO’s global recommendation on AI ethics is a helpful north star: UNESCO: Recommendation on the Ethics of AI.

Mini case studies: precise rewrites that change everything

Let’s run through some common comms scenarios and tighten the language.

Scenario 1: Startup press release

  • Before: “Our AI understands financial markets and decides in real time where to allocate capital.”
  • After: “Our model ranks assets in real time using historical and live market signals and executes trades based on predefined risk constraints.”

Scenario 2: Product landing page

  • Before: “The assistant knows your brand voice.”
  • After: “The assistant adapts to your brand voice by matching style patterns it extracts from your approved content samples.”

Scenario 3: B2B sales deck

  • Before: “The system thinks like a top-tier analyst.”
  • After: “The system synthesizes reports and data into summaries that mirror analyst workflows, using retrieval and template-driven evaluations.”

Scenario 4: Health app copy

  • Before: “Our AI understands your symptoms.”
  • After: “Our model compares your reported symptoms to patterns in clinical literature and guidelines to generate possible explanations—this is not a diagnosis.”

Notice a theme: we’re not making the product smaller; we’re making it safer to adopt and easier to regulate.

How to audit your language: a simple mental-verb check

Want a quick pass that pays dividends? Try this three-step audit on any draft:

  1. Highlight mental-state verbs and metaphors. Circle: think, know, believe, want, decide, understand, feel, intend, guess, remember.
  2. Replace or qualify. Swap in mechanistic verbs (select, rank, classify, retrieve, generate, estimate, optimize), or qualify: “as-if,” “from the model’s learned patterns,” “operationally equivalent to.”
  3. Back claims with mechanisms. Add a line on data, objective, constraints, and known failure modes.

If you’re writing a lot about AI, formalize this into a team style guide. You’ll prevent hype creep.
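
To automate step 1, a short script can flag drafts before human review. Here is a minimal sketch in Python; the verb list mirrors the checklist above and is deliberately not exhaustive, and the simple suffix pattern will miss irregular forms like “thought”:

    import re

    # Step 1 of the audit, automated: flag lines of a draft that contain
    # mental-state verbs. The verb list mirrors the checklist above and is
    # deliberately short; extend it in your own style guide.
    MENTAL_STATE_VERBS = [
        "think", "know", "believe", "want", "decide",
        "understand", "feel", "intend", "guess", "remember",
    ]
    # Word-boundary pattern that also catches simple inflections
    # (thinks, knowing, guesses); irregular forms like "thought" slip by.
    PATTERN = re.compile(
        r"\b(" + "|".join(MENTAL_STATE_VERBS) + r")(s|es|ing)?\b",
        re.IGNORECASE,
    )

    def audit(draft: str) -> list[tuple[int, str]]:
        """Return (line number, line) pairs for lines with mental-state verbs."""
        return [
            (lineno, line.strip())
            for lineno, line in enumerate(draft.splitlines(), start=1)
            if PATTERN.search(line)
        ]

    draft = "The model knows your preferences.\nIt ranks items by a learned score."
    for lineno, line in audit(draft):
        print(f"line {lineno}: {line}")  # prints: line 1: The model knows your preferences.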

What this study does not—and cannot—settle

This is a big step forward for evidence-based language, but let’s keep perspective:

  • It focuses on news articles, not opinion pieces, marketing, or social media posts where anthropomorphism is likely stronger.
  • It captures a snapshot in time. AI and media norms evolve quickly; year-over-year drift may change patterns.
  • It analyzes text, not the visual rhetoric of thumbnails, stock photos, or videos that can smuggle in human-like cues.
  • It doesn’t adjudicate philosophy-of-mind questions. The takeaway is practical: whatever your stance, precise wording reduces confusion and harm.

There’s room for more research—on how readers interpret different phrasings, how visuals shape beliefs, and how these effects vary by audience and domain.

Where this fits in the bigger AI conversation

The call for precision dovetails with a growing movement to bring plain language and rigorous evaluation into AI discourse:

  • Safety and reliability demand transparency about goals, data, and limits.
  • Governance requires standardized terms for capability, risk, and evidence.
  • Public trust grows when media, companies, and regulators align on clear communication.

For more context on trends, benchmarks, and adoption patterns, the Stanford AI Index is a strong annual reference: AI Index Report (Stanford HAI). And if you work with synthetic media, see the industry-backed framework on disclosures and best practices: Partnership on AI: Synthetic Media Framework.

The upshot: better language isn’t academic nitpicking; it’s infrastructure for responsible AI.

Frequently asked questions

Q: Does AI “understand” anything?
A: In a technical sense, models encode statistical relationships and can perform tasks that look like understanding. But they do not possess human-like comprehension, self-awareness, or intent. It’s clearer to say they “infer,” “estimate,” or “generalize from training data.”

Q: Is it always wrong to say an AI “decides”?
A: Not always, but it’s usually imprecise. If you keep it, define the decision process: “The system selects action X based on a scoring function learned from data and subject to constraints Y.”

Q: Should we ban mental-state verbs for AI entirely?
A: No—metaphors can be useful for intuition. The key is qualification. If you say “the model knows,” follow with “meaning it has stored and can retrieve information about…”

Q: Why does anthropomorphic language matter so much?
A: It changes behavior. People over-trust outputs, misjudge risks, and misinterpret responsibility. Precise wording supports safer use, clearer accountability, and better regulation.

Q: How can journalists quickly improve AI coverage?
A: Swap mental verbs for mechanistic ones, specify data/objectives/constraints, cite independent evaluations, and disclose uncertainty and failure modes. Avoid stock imagery that implies robot minds.

Q: What’s one rule of thumb for marketing teams?
A: If a claim wouldn’t satisfy a regulator or a skeptical enterprise buyer—because it’s vague, anthropomorphic, or unsubstantiated—rewrite it until it would.

Q: Are agentic AI systems a special case?
A: Even with agent-like behaviors (planning, tool use), it’s best to describe capabilities operationally: how planning is implemented, what tools are available, what safeguards govern actions.

Q: How do we describe chatbots that remember context?
A: “Maintains context across turns by storing conversation history” or “uses retrieved summaries to condition responses” is more precise than “remembers like a human.”

Q: What about creativity—can AI be “creative”?
A: It can produce novel combinations of learned patterns. If you use “creative,” pair it with a definition: “generative recombination of patterns that appears novel to users.”

Q: How does this relate to regulation?
A: Regulators expect truthful, substantiated claims. Precision in language harmonizes with risk frameworks like the NIST AI RMF and legal expectations (see the FTC’s guidance), and aligns with transparency aims of the EU AI Act.

The clear takeaway

Words steer the AI world. This new analysis shows good news: professional journalism tends to avoid the most egregious anthropomorphism. But the spectrum remains, and even subtle shifts toward human-like phrasing can distort public understanding and policy. The fix isn’t complicated—swap metaphors for mechanisms, back claims with evidence, and define your terms.

Precision is not pedantry; it’s protection. It protects readers from over-trusting, buyers from over-spending, and society from over-reacting. If we want AI that’s safer, fairer, and easier to govern, we can start with something deceptively simple: say exactly what the system does—no more, no less.

Discover more at InnoVirtuoso.com

I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on any platform that is convenient for you.

For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring!

Thank you all—wishing you an amazing day ahead!

Read more related articles at InnoVirtuoso.