Why AI Isn't Equipped to Replace Human Writers: A Pro Writer's Take
I've been a professional writer since 2010, so I've written a lot. Once ChatGPT took the world by storm, I suddenly found my head on a swivel. Sick to my stomach, I pondered, "Am I going to be replaced?" I quickly learned that while AI wasn't replacing writers, the writers who knew AI would leave their AI-ignorant peers in the dust. Um, bye, Felicia.
So, what did I do next? Desperate to remain relevant, I buckled down and rapidly got educated on AI. Then I began playing with as many platforms as I possibly could. I found myself saying "please" and "thank you" to ChatGPT, Gemini, Copilot, Grok, Perplexity, NotebookLM and Claude. I know AI is supposedly not a sentient being, but I couldn't help wondering, "If I'm rude to them now and they someday take over the world, will they remember how I treated them?" It sounds corny now that I'm writing it, but there's always this tiny voice in the back of my mind.
Now let's talk about reliability. As a seasoned journalist, research is my jam. You can imagine how excited I was to play with ChatGPT's and Gemini's deep research features. But when I asked these platforms to write content, I was frustrated to discover that, all too often, they'd cite sources with no connection to the AI-generated content at all.
When this occurred, I'd dig, and whatever they were referring to would not be in the article or report they cited. I quickly learned that I always, 100% of the time, have to fact-check their sources because large language models are not foolproof, not even close.
Why AI Hallucinates—and Why That Matters
Understanding why AI gets things wrong helps explain why human oversight is non-negotiable. Large language models don't actually "look up" information the way a journalist does. They predict text—generating the most statistically likely next word based on patterns in their training data. They're not retrieving facts; they're constructing plausible-sounding sentences. That's why they can confidently produce a citation that looks real but doesn't exist. It's not lying—it's pattern-matching gone wrong.
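To make the "predicting the next word" idea concrete, here is a deliberately tiny sketch of my own (a toy word-pair counter, nothing like a real large language model): it tallies which word follows which in a miniature "training corpus," then always emits the statistically most likely follower. The corpus and every name in it are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- invented for illustration only.
corpus = (
    "the study found the results were real "
    "the study found the source was fake"
).split()

# Count which word follows which (a crude stand-in for "patterns
# in training data").
next_words = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def generate(word, length=6):
    """Greedily emit the most frequent next word, over and over."""
    out = [word]
    for _ in range(length):
        if word not in next_words:
            break
        # Pick the statistically likeliest follower -- plausible,
        # never verified against any source.
        word = next_words[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # → the study found the study found the
```

The output reads like a sentence fragment about "a study" finding something, yet it asserts nothing true: the model is only replaying frequent word pairs. Real LLMs are vastly more sophisticated, but the fluent-without-verified grounding failure mode is the same, which is why a confident-looking citation can point nowhere.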
I can't tell you how many times I've argued with ChatGPT or Gemini, telling them, "No, you're wrong, that is not in that source" or "You are hallucinating again!" You know that feeling when you're yelling at a customer service chatbot because you want a real person? That's exactly how it feels. My takeaway: if you're writing anything that relies on data and research, you must fact-check every output. You can't afford to roll the dice.
A Media Literacy Crisis in the Making
I pity the younger generation who may be taking AI outputs at face value. They don't yet have the experience—or the instincts—for finding real, verifiable sources. When you've spent years tracking down primary documents, interviewing sources and getting burned by bad information, you develop a finely tuned radar for what doesn't add up. AI can erode that radar before it's ever built.
This isn't just a writing problem. It's a media literacy crisis. If the next generation of communicators—journalists, marketers, educators, content creators—learns to trust AI outputs without verification, the downstream effects on public information could be significant. The responsibility falls on those of us who know better to model rigorous sourcing and teach it loudly.
Why Human Writers Set the Bar
After spending the past year and a half deep-diving into AI, I can't ignore its glaring weaknesses—weaknesses that may easily be lost on someone with little professional writing experience. Here's what hundreds of hours of working with AI have taught me: AI is not alive. Although it may "seem" human, it's still missing key traits that only human writers possess—the ability to share opinions, feelings, beliefs, personal experiences and original thoughts.
Sure, AI can speak in metaphors and discuss possibilities based on what's been written before, but it lacks the distinct ability to express its own perspective or draw on lived experience. As a journalist, I've conducted hundreds of interviews over the course of my career. With each new one, I pull from my own memory bank—combining previous experience with research grounded in credible sources. Large language models simply cannot replicate that.
The Human Quotient
AI may scrape data off the internet, but it lacks what's called the human quotient, or HQ—the innate ability to use ethics, reasoning and life experience to solve problems and connect with readers on a human level. HQ is what makes a piece of writing land emotionally. It's the instinct that tells you when a source is trustworthy, when a story angle is tone-deaf, or when a single anecdote will resonate more than a paragraph of statistics. It's accumulated wisdom—and it can't be downloaded.
This gap is especially pronounced in disciplines beyond journalism. Fiction writers draw on empathy, memory and imagination to create characters readers fall in love with. Screenwriters understand the emotional rhythm of a scene. Songwriters channel grief, joy and longing into three minutes of music. Marketers who've lived through a product launch understand consumer psychology in ways no model can approximate. In every creative field, the human experience embedded in the work is precisely what makes it worth consuming.
The Business Reality Writers Need to Face
There's an economic side to this conversation that we can't ignore. Clients are already expecting faster turnarounds at lower costs, and some are using AI to justify slashing rates or replacing entry-level writing work entirely. That pressure is real and growing. The writers who will weather it aren't the ones who refuse to engage with AI—they're the ones who use it strategically, positioning themselves as the essential human layer that AI cannot replace: the editor, the strategist, the voice.
I'm not hating on AI—I love it. It's an incredible productivity tool and excellent for low-lift writing like website copy, social media posts, email marketing and newsletters—but only as a starting point. The writers who treat AI as a co-pilot rather than a ghostwriter will find themselves faster, sharper and more competitive, not replaced.
The Bottom Line
Whether it's writing, fine art, video, acting, music or songwriting, the human will always have the upper hand over AI. But we are in a new era, and there is no going back. The question is no longer man vs. machine—it's those who embrace AI thoughtfully vs. those who don't engage at all or engage uncritically.
If you're a writer, start learning these tools now. Understand what they can and can't do. Build the habit of verification. Protect your voice. And never stop developing the one thing AI will never have—your human quotient. That's what will keep you relevant, and it's what will always set your work apart.