What Every Writer Should Know About Using AI Tools in 2025
When I started my writing career in 2010, if someone told me artificial intelligence would soon infiltrate newsrooms, I probably would have laughed. But here we are—summer 2025—and AI is no longer science fiction. It's being used to write headlines, transcribe interviews, crunch data, and even help writers structure stories.
When ChatGPT took the world by storm in 2023, many writers and journalists wondered: Am I going to be replaced? I know I sure did. In fact, that fear from a couple of years ago is the basis for this article; I know I wasn't the only one wondering how AI would impact the livelihoods of journalists everywhere.
Now that some time has passed since the initial scare, the reality has become clear: AI is not replacing journalism. It's reflecting it, highlighting our industry's values, priorities, and pressure points in a fragmented attention economy.
So where does the human end and the machine begin?
The Modern Newsroom—Where AI Fits In
What’s Actually Happening
87% of newsroom leaders say their organizations have been “fully or somewhat transformed” by generative AI (Reuters Institute, 2025).
Otter.ai is now standard for transcribing interviews and briefings, saving many journalists (myself included) countless hours of manual transcription each month. (A code sketch of what automated transcription looks like follows this list.)
Reuters' Tracer and the Associated Press's proprietary tools handle content tagging, translation, and early alerts.
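To make the transcription step concrete, here's a minimal sketch using the open-source openai-whisper package. This is an illustration of the general technique only, not Otter.ai's proprietary pipeline, and the model size and audio filename are my own assumptions.

```python
# A minimal sketch of automated interview transcription using the
# open-source openai-whisper package (pip install openai-whisper;
# requires ffmpeg). Illustrative only: Otter.ai's pipeline is
# proprietary, and "interview.mp3" is a hypothetical file.
import whisper

model = whisper.load_model("base")          # small model; larger ones are more accurate
result = model.transcribe("interview.mp3")  # runs speech-to-text locally
print(result["text"])                       # the full transcript as plain text
```

Even a local script like this turns an hour of tape into searchable text in minutes, which is exactly the kind of tedium these tools are absorbing.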
These AI tools aren't replacing journalists. They’re making workflows faster, cleaner, and more efficient—allowing us to focus on analysis, nuance, and storytelling.
Why Human Oversight Matters More Than Ever
Across every major publication, one rule reigns: AI must not publish without human review.
Here’s why:
AI can hallucinate facts—presenting plausible, but false, information.
It lacks ethical judgment—especially around defamation, privacy, and context.
It can’t interpret tone or emotional nuance.
Major outlets are responding with:
Clear policies on what AI can (and cannot) do.
Mandatory editorial sign-offs.
Transparent disclosures (“AI-assisted draft by…”).
Ongoing human verification, even for automated reports.
What the Law (Actually) Says About Using AI in Journalism
The legal landscape around AI and content creation in the U.S. is murky at best—and that's a problem for writers. While AI regulation has gained momentum, most laws focus on consumer protection, biometric privacy, or automated decision-making—not journalism specifically.
There Are Few U.S. Laws Focused on Writing
As of mid-2025, every U.S. state, plus Puerto Rico, D.C., and the U.S. Virgin Islands, has introduced AI-related legislation. But only 28 states and one territory have actually enacted legislation, amounting to more than 75 AI-specific measures, mostly dealing with deepfakes, transparency, and bot disclosures (NCSL).
If you're a journalist, there's almost nothing written for you, which means the onus is on editors and publishers to define responsible use. There are no federal requirements for labeling AI-generated content in journalism or advertising (yet).
Utah: The Test Case to Watch
Utah is leading the charge. In March 2025, Governor Spencer Cox signed Senate Bill 226, known as the Artificial Intelligence Consumer Protection Amendments. Here's what writers should know:
If a consumer asks, businesses must disclose whether AI is being used.
If you're giving "high-risk" advice—like medical, legal, or financial—you must disclose AI use upfront.
Fines can hit $2,500 per violation.
If you're transparent from the start, you qualify for safe harbor.
Then came SB 332, which extended the original law through July 2027 and refined definitions. HB 452 further mandates visible AI disclosures for mental health bots, especially those collecting or interpreting sensitive data (Davis Polk).
While these laws don't specifically mention journalism, writers and content creators should take them seriously—especially those in health, finance, or legal verticals.
What About Federal Law?
In July 2025, the U.S. Senate voted 99–1 to remove a proposed 10-year federal AI moratorium that would’ve stopped states from enacting their own rules (The Verge).
Meanwhile, Executive Order 14179, signed earlier in the year, reduced consumer protections to favor innovation.
Bottom line: you're mostly on your own when it comes to AI and content law in the U.S.
Meanwhile, in Europe…
The EU AI Act, passed in March 2024, is far stricter and already impacting global content practices:
Requires labeling of AI-generated content and synthetic media.
Bans certain types of manipulative or deceptive AI.
Classifies AI tools by risk. Journalism-related tools generally fall under the “limited risk” category—meaning they’re allowed with strict transparency requirements. For example, if a newsroom uses generative AI to draft content or engage with the public, they would have to disclose that AI was involved.
If you're publishing internationally—or freelancing for a global outlet—expect to comply with Europe's standards even if you're based in the U.S.
How U.S. Publishers Are Using (and Controlling) AI
Associated Press
Partnered with OpenAI in 2023.
Pioneering the use of automation in journalism, the AP uses Wordsmith, a platform developed by Automated Insights, to automatically generate routine stories such as company earnings reports and sports summaries from structured data. This frees reporters to focus on deeper investigative work and storytelling. The AP also leverages proprietary algorithms to pull insights and surface leads from government databases, helping reporters identify and act on relevant news faster.
The AP does not allow AI to draft or rewrite publishable news stories. AI-generated material is limited to backend functions like metadata tagging, transcription, and templated data population. As clarified in its policy: “The tool cannot be used to create publishable content and images for the news service” (AP News).
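For readers curious what "templated data population" actually means, here's a small, hypothetical sketch of the general technique behind tools like Wordsmith. The template wording and the earnings figures are invented for illustration; this is not AP's actual system.

```python
# A hypothetical sketch of templated data population: a pre-written
# template filled in with structured earnings data. Not AP's actual
# Wordsmith code; company and figures below are invented.

EARNINGS_TEMPLATE = (
    "{company} reported {direction} earnings of ${eps:.2f} per share "
    "for Q{quarter}, {comparison} analyst expectations of ${expected_eps:.2f}."
)

def earnings_story(data: dict) -> str:
    """Populate the template from a structured earnings record."""
    direction = "higher" if data["eps"] >= data["prior_eps"] else "lower"
    comparison = "beating" if data["eps"] > data["expected_eps"] else "missing"
    return EARNINGS_TEMPLATE.format(
        company=data["company"],
        direction=direction,
        eps=data["eps"],
        quarter=data["quarter"],
        comparison=comparison,
        expected_eps=data["expected_eps"],
    )

print(earnings_story({
    "company": "ExampleCorp",  # hypothetical data, not a real filing
    "eps": 1.42,
    "prior_eps": 1.17,
    "expected_eps": 1.35,
    "quarter": 2,
}))
```

Note that this approach is deterministic: because the system only fills slots in human-written templates, it can't hallucinate the way a generative model can, which is part of why it's been trusted for routine earnings and sports copy.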
The New York Times
Greenlit tools like GitHub Copilot, Google Vertex AI, and the OpenAI API.
Launched Echo, an internal AI assistant for tagging, SEO, and summaries.
Enforces strict editorial rules: no full article drafting, no publishing without human oversight (Nieman Lab, The New York Times).
Hearst
Uses AI for lifestyle/real estate coverage—only in early draft or outline stages.
Editors must verify, rewrite, and sign off before publishing (CT Insider).
Best Practices Every Writer Should Follow
Whether you're freelance or staff, here's your AI checklist:
Learn prompt engineering and tool limitations.
Always fact-check AI output—double sources when needed.
Label AI use when appropriate (and when in doubt).
Get editorial sign-off, especially for sensitive topics (a sketch of such a review gate follows this list).
Understand your state’s laws if writing from or about regulated sectors.
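To tie the checklist together, here's a small, hypothetical sketch of what a human-in-the-loop publishing gate could look like in code. The Draft fields and publish() function are invented for illustration; they are not any real CMS's API.

```python
# A hypothetical sketch of a human-in-the-loop publishing gate:
# AI-assisted copy can't go out without editorial sign-off, and a
# disclosure is attached when AI was involved. Invented API, not a
# real CMS.
from dataclasses import dataclass

@dataclass
class Draft:
    body: str
    ai_assisted: bool
    editor_approved: bool = False

def publish(draft: Draft) -> str:
    """Block unreviewed AI-assisted drafts and attach a disclosure."""
    if draft.ai_assisted and not draft.editor_approved:
        raise PermissionError("AI-assisted drafts require editorial sign-off.")
    disclosure = "\n\n[Disclosure: AI-assisted draft, reviewed by a human editor.]"
    return draft.body + (disclosure if draft.ai_assisted else "")

draft = Draft(body="Story text goes here...", ai_assisted=True)
draft.editor_approved = True  # the editor's explicit sign-off
print(publish(draft))
```

The point isn't the code itself; it's that sign-off and disclosure should be enforced by process, not left to memory.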
Why Human Judgment Still Leads the Story
AI can transcribe, summarize, and even suggest structure—but it can’t replace the human ability to ask follow-up questions, read between the lines, or capture the emotional truth of a moment.
The truth is that in 2025, the most competitive journalists aren’t just using AI—they’re shaping it. They know how to wield these tools to streamline the tedious parts of their jobs without ever surrendering their editorial instincts.
In crafting this very story, I turned to a trio of AI tools (Gemini, NotebookLM, and ChatGPT) to act as research aides and creative sounding boards. These tools helped me explore angles, refine my structure, and organize a sea of data. But the heavy lifting? That was all human. I read the legislation, tracked down and verified the sources, and shaped the narrative using my own editorial judgment.
Because no matter how powerful these tools become, the heartbeat of a good story will always come from the journalist who digs, questions, and crafts with intention.