Communicative AI: A Critical Guide to Large Language Models, Language, and Meaning
What if a machine that “only predicts the next word” could force us to rethink consciousness, authorship, and what counts as truth? That’s the riddle at the heart of Communicative AI—the fast-growing field centered on large language models (LLMs) like ChatGPT and LaMDA. These systems are unbelievably useful and deeply controversial. They write essays and code, summarize research, brainstorm ideas, even simulate different voices and styles. But they also raise unsettling questions. Do they understand? Are they creative? Are they the end of writing as we know it?
Communicative AI: A Critical Introduction to Large Language Models, by Mark Coeckelbergh and David J. Gunkel, takes these questions head-on. The book doesn’t just describe how LLMs work—it uses them as a mirror to reflect on what language, communication, and intelligence mean for humans. If you’re in the humanities or social sciences, or you’re just a curious mind trying to make sense of the moment, this guide will help you navigate the technology and the philosophy behind it.
In this article, I’ll introduce the core ideas behind Communicative AI, connect them to classic debates in philosophy and communication theory, and offer simple, concrete ways to use LLMs well. Along the way, we’ll probe the big questions—consciousness, authorship, truth—and why they matter for your work, your writing, and your everyday life.
Here's what you'll walk away with:
- A clear explanation of what LLMs are and how they work.
- A critical lens for thinking about language, meaning, and truth in the age of AI.
- Practical guidelines for responsible, effective use.
- Answers to the most Googled questions about LLMs.
Let’s start with the basics, then zoom out to the bigger picture.
What Are Large Language Models? (LLM Basics)
At their core, LLMs are giant pattern learners. They are trained on vast amounts of text to predict the next word in a sequence. That’s it. But with billions of parameters and a powerful architecture called a transformer, this simple goal unlocks complex behavior—translation, question answering, summarization, reasoning-like outputs, and creative writing.
A useful metaphor: an LLM is autocomplete on steroids. It does not think like a human. It does not “know” facts the way you do. It maps statistical relationships between words and phrases across many contexts. Yet because language is how we package knowledge, those patterns let it simulate understanding in startling ways.
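To see what "predict the next word" means at its most basic, here is a minimal sketch of a word-level predictor built from raw counts over a tiny invented corpus. Real LLMs operate on subword tokens with billions of learned parameters rather than a lookup table, but the objective, assigning probabilities to the next token, is the same in spirit.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word tends to follow which word in a
# tiny, invented corpus, then report the most likely continuations.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word):
    """Return candidate next words with estimated probabilities."""
    counts = next_counts[word]
    total = sum(counts.values())
    return [(w, round(c / total, 2)) for w, c in counts.most_common()]

# After "the", the corpus has seen "cat" and "dog" most often, so those win.
print(predict_next("the"))
```

Scale that idea up enormously, swap counting for learned neural network weights, and condition on the whole preceding context instead of a single word, and you have the gist of an LLM's training objective.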
Here’s why that matters: if you assume an LLM is a search engine, you’ll misuse it. If you view it as a conversational pattern engine with strengths and weaknesses, you’ll get better results and avoid the biggest pitfalls.
A Quick Primer: Transformers, Tokens, and Training
- Tokens: Models don't see words. They see tokens, small units such as "cat," "ing," or " York." A common word may be a single token, while a rarer word is split into several. The model assigns probabilities to which token should come next.
- Transformers: The architecture that made LLMs leap forward. Transformers use "attention" to decide which parts of the input matter most for predicting the next token (a toy numerical sketch of attention follows below). The landmark paper is aptly titled "Attention Is All You Need."
- Pretraining: The model reads enormous amounts of text and learns general language patterns.
- Fine-tuning: Developers adapt the model for specific tasks or guardrails (e.g., safety or style).
For a visual and friendly walkthrough of transformers, this resource is beloved: The Illustrated Transformer.
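To make "attention" a little less abstract, here is a minimal numpy sketch of the scaled dot-product attention step at the heart of a transformer. The sequence length, embedding size, and projection matrices are invented (random) purely for illustration; in a real model they are learned during pretraining.

```python
import numpy as np

# Scaled dot-product attention in miniature: each token asks (query) how
# relevant every other token is (keys), then takes a weighted mix of their
# information (values). All numbers here are random, for illustration only.
rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # 4 tokens, 8-dimensional embeddings

X = rng.normal(size=(seq_len, d_model))    # token embeddings
W_q = rng.normal(size=(d_model, d_model))  # projection matrices (learned in a real model)
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d_model)        # how strongly each token attends to each other token

# Softmax turns scores into attention weights that sum to 1 per token.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

output = weights @ V                       # context-aware representation of each token
print(weights.round(2))                    # each row is one token's attention distribution
```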
Alignment and RLHF: Why LLMs Sound So Helpful
Modern LLMs are often trained with human feedback to be more helpful, honest, and harmless. This process—reinforcement learning from human feedback (RLHF)—teaches the model to prefer responses people like. That’s why many LLMs sound conversational and considerate. For a deeper dive, see: Learning from Human Feedback.
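If you are curious what "teaching the model to prefer responses people like" looks like mechanically, here is a minimal sketch of the pairwise comparison loss commonly used to train the reward model in RLHF. The reward scores are invented numbers, and the rest of the pipeline (fine-tuning the language model against that learned reward) is omitted.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise (Bradley-Terry style) loss: small when the human-preferred
    response scores higher than the rejected one, large otherwise."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Invented reward scores for two candidate answers to the same prompt.
print(preference_loss(2.0, -1.0))  # ~0.05: the reward model already agrees with the human ranking
print(preference_loss(-1.0, 2.0))  # ~3.05: the reward model ranks them the wrong way around
```

Training nudges the reward model toward the first case, and the language model is then tuned to produce responses that this reward model rates highly, which is one reason outputs skew polite and helpful.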
But keep this in mind: being agreeable is not the same as being correct. Which brings us to hallucinations.
Why “Hallucinations” Happen
LLMs don’t have built-in access to reality. They generate plausible text, not verified truth. When the training data is sparse, or your prompt is unclear, they may “hallucinate” details—confidently making things up. This isn’t lying; it’s probability in action. It’s one reason researchers warn against “stochastic parrots”—systems that remix patterns without understanding context or consequences. If you’re curious about that critique, start here: On the Dangers of Stochastic Parrots.
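A small sketch of "probability in action": generation is sampling from a distribution over tokens, and the model will emit a fluent continuation whether or not any grounding exists. The prompt, candidate tokens, and probabilities below are all invented for illustration.

```python
import random

# Imaginary next-token distribution for the prompt "The capital of Freedonia is".
# Freedonia is fictional, so nothing in the data can make the answer true, yet
# the model must still emit something, and whatever it emits will sound confident.
candidates = {"Paris": 0.35, "Fredville": 0.30, "unknown": 0.20, "Berlin": 0.15}

def sample(dist, temperature=1.0):
    """Sample a token; lower temperature concentrates probability on the top choice."""
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist.keys()), weights=weights, k=1)[0]

for _ in range(3):
    print("The capital of Freedonia is", sample(candidates, temperature=0.7))
```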
The takeaway: LLMs are powerful communicative tools, not oracles. Use them like a sharp instrument, not a crystal ball.
“Communicative AI”: What This Book Is Really About
Coeckelbergh and Gunkel’s Communicative AI makes a crucial shift. Instead of asking “Are LLMs intelligent like humans?” it asks “What happens when we treat LLMs as communicative partners?” That’s a subtle but important move.
Communication is not just transmitting information. It’s doing things with words—promising, asking, framing, persuading. It’s context, purpose, and audience. When we chat with an LLM, we’re not just retrieving facts. We’re engaging in a kind of language game where meaning and value arise through interaction.
This matters because:
- It reframes LLMs as participants in discourse, not just databases.
- It highlights ethics, accountability, and power in how these systems mediate conversations.
- It draws on philosophy, linguistics, and communication theory to ask better questions about truth, authorship, and meaning.
Let’s unpack those questions.
Language, Meaning, and Truth: What Do LLMs “Mean”?
Philosophers have long argued that meaning is not simply inside words. Meaning comes from use, context, and shared practices. Ludwig Wittgenstein called these “language games.” If that piques your curiosity, see the Stanford Encyclopedia of Philosophy on Wittgenstein.
LLMs operate inside statistical space, not human life. But they’re incredibly good at mimicking the patterns we use in those language games. Consider:
- You ask, “Is it okay to split an infinitive?”
- The model replies with examples and a nuanced take.
- Did it “understand” grammar? Or did it model plausible expert speech acts in that situation?
In pragmatic terms, both can be true in different senses. The model doesn’t have human intentions or experiences. But it can perform the social function of explaining, advising, and persuading. J. L. Austin called these “speech acts.” More on that here: Speech Acts. And on conversational norms (why LLMs try to be clear and helpful), see Grice’s implicature.
So where does truth come in?
- Propositional truth: Whether a statement matches the world.
- Communicative truthfulness: Whether a speaker acts sincerely and responsibly in a conversation.
LLMs complicate both. They can generate statements that sound true but aren’t. And they can’t be “sincere” in the human sense, because sincerity presumes a mind. We, as users and designers, must supply the missing norms. That means verification, citations, and guardrails.
For a classic backdrop on information and communication theory, see Claude Shannon’s foundational work: A Mathematical Theory of Communication.
Are LLMs Conscious?
Short answer: there’s no evidence that today’s LLMs are conscious.
Longer answer: Consciousness is a thorny concept. It includes subjective experience (what it feels like to be you), self-awareness, and the capacity for first-person perspective. LLMs can talk about feelings, but that’s a performance based on patterns, not proof of inner life.
For an accessible, scholarly overview, try the Stanford Encyclopedia of Philosophy on Consciousness.
A famous thought experiment, John Searle’s Chinese Room, is key here. It suggests a system could manipulate symbols to pass a language test without understanding meaning. That thought experiment maps neatly onto LLMs. Learn more: The Chinese Room Argument.
That said, there’s a practical question we can’t dodge: How should we treat systems that behave like they understand us? Even if there’s no “ghost in the machine,” the social effects are real. People bond with chatbots. They attribute intentions. They change behavior in response. Those are communication ethics questions as much as technical ones.
Are LLMs Authors?
This is where Communicative AI gets provocative. If authorship is about an intentional creative agent, LLMs fall short. They lack intentions and accountability. But if authorship is about a function in a discourse—how texts are attributed, valued, and circulated—then the model, the dataset, the prompt, and the human editor all play a role.
The idea that “the author” is not a singular person but a social construct has a long history. Roland Barthes famously declared the “death of the author,” shifting attention to readers and texts. Quick intro here: Britannica on “The Death of the Author”. Michel Foucault asked “What is an Author?” as a way to examine how we assign authority and responsibility; see Foucault overview.
So where does that leave us?
- Legal perspective: In many jurisdictions, works generated solely by AI are not copyrightable. Human creative input matters. See the U.S. guidance: Copyright and AI.
- Ethical perspective: Attribution should reflect real contribution—prompt design, editing, curation, and framing. Ghostwriting by AI without disclosure misleads audiences and undermines trust.
- Practical perspective: Treat LLMs as co-authoring tools. You supply the intent, structure, and judgment. The model supplies drafts, options, and inspiration. You stay accountable.
Is This the End of Writing?
No. It’s a shift in how writing happens.
Writing has always had tools—pens, typewriters, spellcheck, Google. LLMs are more capable and more controversial, but the core act remains: choosing what to say and why. As with calculators in math, the challenge is to integrate the tool without surrendering the skill.
Here's how to keep your voice while using AI well:
- Start with your outline and thesis. Don't let the model set your agenda.
- Use the model for idea generation and rephrasing, not final arguments.
- Ask for multiple angles and counterpoints. Then choose.
- Insert personal experience and original analysis. That's what readers want.
- Verify facts and cite sources. Always.
- Edit like a hawk. If it reads generic, it probably is.
In other words, write like a human with a great assistant, not like a bot with a human rubber stamp.
LLMs as Communication Technology: Beyond Content
It’s tempting to treat LLMs as content machines. But their real power is communicative—they shape how we talk, learn, and make decisions together.
Think about Jürgen Habermas’s idea of communicative action: people coordinating action through reasoned dialogue. With LLMs in the loop, what happens to public discourse? Who sets the conversational norms? What biases get amplified? Reference: Habermas and Communicative Action.
We also need to revisit “information” itself. Shannon’s theory shows how to transmit signals efficiently, but not how to secure meaning or truth. LLMs can maximize fluency (low “noise”) while still being wrong. That’s why social context—institutions, editorial standards, and verification—matters more than ever.
Put plainly: more words isn’t more wisdom. Communicative AI raises the stakes for critical reading, media literacy, and norms of evidence.
Risks, Ethics, and Responsible Use
LLMs can help us work faster and think better. They can also scale our mistakes and bias. Here are the key risks and how to manage them.
- Hallucinations and misinformation
  - Risk: Confidently wrong answers that spread fast.
  - What to do: Cross-check with reputable sources. Ask for citations. Prefer grounded models with retrieval (see the sketch after this list).
- Bias and fairness
  - Risk: Training data reflects societal bias. Outputs can stereotype or discriminate.
  - What to do: Use inclusive prompts. Audit outputs. Implement bias-mitigation workflows, especially in hiring, lending, or policy contexts.
- Privacy and security
  - Risk: Sensitive data leaking into prompts or logs.
  - What to do: Avoid entering confidential information. Use enterprise controls, data retention settings, and redaction tools.
- Intellectual property
  - Risk: Unclear provenance. Tainted training data. Accidental plagiarism.
  - What to do: Cite your sources. Use models and datasets with clear licensing. Add human creativity and transformation.
- Overreliance and skill erosion
  - Risk: Losing the ability to write, reason, or check facts.
  - What to do: Keep a "human-in-the-loop." Practice analog skills. Treat AI as augmentation, not replacement.
- Environmental impact
  - Risk: Training and inference consume energy.
  - What to do: Prefer efficient models for routine tasks. Batch requests. Evaluate sustainability claims.
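For the hallucination item above, here is a minimal sketch of what "grounding with retrieval" means: fetch relevant snippets from a trusted source first, then instruct the model to answer only from them. The documents and the keyword-overlap ranking are toy placeholders, not a production retrieval pipeline.

```python
# Toy retrieval-grounding ("RAG") sketch: rank trusted snippets by keyword
# overlap with the question, then build a prompt that confines the model to them.
documents = {
    "policy.md": "Employees may work remotely up to three days per week.",
    "handbook.md": "Expense reports are due by the fifth business day of each month.",
}

def retrieve(question, docs, k=1):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs.values(), key=lambda text: -len(q_words & set(text.lower().split())))
    return ranked[:k]

def build_grounded_prompt(question):
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using ONLY the context below. If the answer is not there, say you don't know.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("How many days per week can employees work remotely?"))
```

Real systems use embeddings and vector search rather than keyword overlap, but the principle is the same: constrain the model to verifiable sources and keep those sources citable.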
For governance frameworks and practical checklists, see:
- NIST AI Risk Management Framework
- UNESCO Recommendation on the Ethics of AI
- EU's evolving regulatory landscape: EU AI Act overview
Practical Examples: How to Use LLMs Well
Let’s bring this down to earth with scenarios and tips.
- Research sprint
  - Use LLMs to map a topic, generate reading lists, and identify debates.
  - Ask for summaries with pros, cons, and key citations to verify.
  - Then read the sources yourself and synthesize.
- Learning companion
  - Ask for step-by-step explanations in different styles (analogies, diagrams, examples).
  - Have the model quiz you or generate practice problems with solutions.
  - Request common misconceptions so you know what to avoid.
- Writing partner
  - Brainstorm angles and headlines.
  - Draft intros and transitions to overcome blank-page syndrome.
  - Ask for counterarguments to strengthen your piece.
- Coding assistant
  - Generate boilerplate, tests, and documentation.
  - Request explanations of code snippets in plain language.
  - Be vigilant about security and correctness.
- Policy and operations
  - Turn long reports into executive briefs.
  - Create standard operating procedures from scattered notes.
  - Red-team policies: "Find ambiguities and potential loopholes."
- Creativity boost
  - Combine unusual styles or domains. "Explain quantum tunneling like a travel guide."
  - Generate visual prompts, scene outlines, or character profiles.
  - Remix to discover fresh metaphors and narratives.
Prompts that work better tend to:
- Set a clear role and goal: "Act as a policy analyst. Draft a one-page brief…"
- Include constraints: audience, tone, length, format.
- Provide examples: "Match this style: [paste paragraph]."
- Ask for structure: bullet points first, then expand.
- Invite reflection: "List assumptions you're making."
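To make those habits concrete, here is a minimal sketch of a prompt builder that bundles role, goal, constraints, an example to imitate, and a reflection request into one string. The field names are our own invention, so adapt them to whichever tool you actually use.

```python
def build_prompt(role, goal, constraints, example="", reflect=True):
    """Assemble a structured prompt: role, goal, constraints, style example, reflection."""
    parts = [
        f"Act as {role}.",
        f"Goal: {goal}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Answer as bullet points first, then expand the most important ones.",
    ]
    if example:
        parts.append(f"Match the style of this example:\n{example}")
    if reflect:
        parts.append("Before answering, list the assumptions you are making.")
    return "\n\n".join(parts)

print(build_prompt(
    role="a policy analyst",
    goal="Draft a one-page brief on municipal broadband options.",
    constraints=["Audience: city council", "Tone: neutral", "Length: about 400 words"],
))
```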
Above all, review and revise. The model is a copilot. You’re the pilot.
How to Read “Communicative AI” (and Get the Most Out of It)
You don’t need a PhD to enjoy Coeckelbergh and Gunkel’s book, but it helps to bring questions. Here’s a simple plan:
- Before you start
  - Jot down your biggest questions: authorship, truth, teaching, ethics.
  - Skim a primer on LLMs and transformers. Two solid links above: Attention Is All You Need and The Illustrated Transformer.
- While reading
  - Keep a two-column notebook: "Claims about AI" vs. "Claims about humans."
  - Note where the authors shift from describing to evaluating.
- Afterward
  - Pair the book with a classic critique (e.g., Chinese Room) and a pragmatic perspective (e.g., Speech Acts).
  - Try an experiment: Use an LLM to rewrite a paragraph you love. Then ask what changed in meaning, voice, and intent. That hands-on contrast will make the theory click.
Key Takeaways
- LLMs are powerful pattern machines. They simulate understanding by modeling language structure at scale.
- Meaning and truth in AI require human context. Verification, ethics, and discourse norms remain our responsibility.
- Authorship is changing. Think of AI as a tool within a social process—not a solitary genius.
- Consciousness claims are premature. Treat behaviors as functional, not evidence of inner experience.
- Writing isn’t ending—it’s evolving. Keep human intent, originality, and accountability at the center.
- Responsible use is possible. Combine policy, practice, and good habits to capture value and reduce harm.
If you remember one thing, let it be this: Communicative AI is less about machines replacing humans and more about how language technologies reshape our conversations, our institutions, and our sense of what counts as knowledge.
FAQ: People Also Ask
What is a large language model in simple terms?
A large language model is a system trained on vast text data to predict the next word in a sentence. With enough data and parameters, it can answer questions, write text, and mimic different styles. It doesn’t think like a human; it models patterns.
Do LLMs understand language?
They model usage very well but lack human grounding—no bodies, senses, or lived experience. They can perform the functions of explaining or advising, yet that performance is statistical, not experiential.
Are LLMs conscious or self-aware?
No current evidence supports that. They generate text based on patterns. For background, see Consciousness and the Chinese Room.
Are LLMs the end of writing?
No. They change workflows. Humans still set goals, interpret context, bring experience, and take responsibility. Use LLMs as assistants—outline, draft, rephrase—then revise with your voice.
Are LLMs authors? Who owns the output?
Legally, many jurisdictions require human creativity for copyright protection. Purely AI-generated works may not be protected. See Copyright and AI. Ethically, disclose significant AI assistance and attribute real contributions.
Why do LLMs “hallucinate” false facts?
They generate likely text, not verified knowledge. When data is thin or prompts are ambiguous, they produce plausible but false statements. Reduce risk with clear prompts, requests for sources, and independent verification.
What’s the difference between ChatGPT and LaMDA?
Both are transformer-based LLMs, trained by different organizations with different data, safety tuning, and product goals. Their conversational styles and capabilities differ because of those choices in architecture, training, and guardrails.
How can I use LLMs responsibly in school or research?
Follow your institution’s policy. Disclose use. Verify claims with primary sources. Keep notes on prompts and versions. Treat the model as a tool, not a source of record.
How do LLMs impact truth and misinformation online?
They can amplify both. Fluency makes falsehoods travel fast. Counter this with verification habits, transparent sourcing, and editorial oversight. Institutions should add governance and audits.
Where can I learn more about the philosophy of language and AI?
- Wittgenstein on language games
- Grice and conversational implicature
- Speech Acts
- Information theory
- Habermas and communicative action
—
Final thought: The promise of Communicative AI isn’t that machines will become like us. It’s that, by studying and using them wisely, we’ll better understand ourselves—how we speak, think, and build meaning together. If this resonated, stay curious. Explore the sources above, read the book, and subscribe for more deep dives on AI, language, and the future of human creativity.
Discover more at InnoVirtuoso.com
I would love some feedback on my writing, so if you have any, please don't hesitate to leave a comment here or on any platform that is convenient for you.
For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring!
Thank you all—wishing you an amazing day ahead!