
Why Journalists Are Flagging Your AI Press Release (And What to Do About It)
If you work in PR or communications, you have probably used ChatGPT to draft a press release. You are not alone. Journalists know it too — and many of them are now actively checking.
Here is what nobody warns you about: a press release that reads like AI is not just annoying to reporters. It actively kills your chances of getting coverage. Some journalists delete it without reading past the first paragraph. Others quietly flag your whole firm. And a growing number are running every submission through a detection tool before they decide if the story is even worth pursuing.
Why Are Journalists Suddenly Checking for AI?
Journalists verify AI-generated press releases because newsrooms are drowning in AI-written content — and a lot of it is low-quality, factually thin, and impossible to trust. Think of it this way: imagine getting 200 emails a day and discovering that 80% of them were written by the same robot. You would start pattern-matching fast.
Reporters worry that AI-generated quotes from executives are fabricated, that statistics are hallucinated, and that the whole pitch is just fluff with no real story underneath. Trust is the real issue. Once a journalist suspects AI, the entire press release becomes suspect.
How Journalists Spot AI Without Any Tools
Experienced reporters can often detect AI writing on instinct alone. Here are the patterns that give it away:
- The hollow executive quote: AI loves lines like "We are thrilled to announce this innovative solution that will transform the industry." No real person talks like that on the record.
- No tension or nuance: Human PR writers acknowledge context. AI generates pure positive spin, every time.
- Adjectives instead of data: "Significant growth" instead of "47% increase in Q1." AI fills space with vague language when it does not have real numbers to work with.
- Flat sentence rhythm: AI text flows at a steady, uniform pace. Human writing speeds up, slows down, backtracks. It has texture.
- AI fingerprint phrases: "It is worth noting," "in today's dynamic environment," "seamless experience" — journalists recognize these instantly now.
What Tools Do Journalists Actually Use to Detect AI Press Releases?
Journalists use several AI detection tools to verify press releases, with GPTZero, Originality.ai, Copyleaks, and ZeroGPT being the most common. GPTZero is probably the most widely mentioned by name — it was built specifically for professional and academic contexts, and reporters at major publications have cited it publicly.
Originality.ai is popular with digital editors because it combines plagiarism detection with AI scoring, which is useful when a PR firm appears to be recycling templated content across multiple clients.
To understand why these tools flag certain writing, it helps to know how AI detectors work. In short: they measure things like how predictable each word choice is and how much sentence length varies. Human writing naturally scores well on both. Unedited AI output, by default, does not.
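The sentence-length half of that measurement is easy to demonstrate. The sketch below is a toy illustration, not any detector's actual algorithm — real tools also score per-word predictability with a language model, which this does not attempt — but it shows why flat, uniform prose stands out:

```python
import statistics

def burstiness_score(text: str) -> float:
    """Rough 'burstiness' proxy: standard deviation of sentence
    lengths in words. Higher values suggest more human-like
    variation; steady, uniform lengths read as machine-generated."""
    # Naive sentence split on terminal punctuation (toy example only)
    for mark in "!?":
        text = text.replace(mark, ".")
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

flat = ("We announce a product. It is innovative. "
        "It will help users. It launches soon.")
varied = ("We shipped it. After three years of rebuilding the pipeline "
          "twice and scrapping one launch, the tool finally works the "
          "way customers asked.")
# The varied passage scores far higher than the flat one.
```

Run on the two samples above, the flat passage scores under 1 while the varied one scores over 10 — the kind of gap a reporter's tool is quantifying.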
Want to know your score before you hit send? Run your press release through our free AI detector and see exactly what a journalist's tool would report.
What Actually Happens When a Press Release Gets Flagged?
At best, it goes straight to the trash. At worst, the journalist mentions your agency in a piece about AI spam, or shares the example in one of the private Slack groups and Discord servers where reporters discuss bad PR practices. Blocklists are real. Reputations move fast in those communities.
This mirrors what happens in academic settings — a detection flag triggers consequences that are hard to walk back, even when the tool is not 100% accurate. If you have ever read about AI detection false positives, you know that even well-intentioned human writing can sometimes get flagged. The difference is that journalists are not required to give you a second chance the way a professor might.
How to Fix an AI Press Release Before Sending It
The goal is not to hide that you used AI. The goal is to make the final document read like a skilled communicator produced it — because that is what gets picked up.
- Replace every generic executive quote with something the actual person said. A two-minute call is enough.
- Add at least two specific data points. Real numbers signal real reporting was done.
- Rewrite the opening paragraph from scratch. AI almost always writes a weak lede.
- Vary sentence length on purpose. Short. Then a longer one that adds context or contrast. Short again.
- Delete any sentence containing "it is worth noting," "seamless," or "in today's world."
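That last step is mechanical enough to automate as a pre-send check. The phrase list and function name below are illustrative assumptions, not any tool's real API — extend the list with whatever fingerprints you keep seeing in your own drafts:

```python
import re

# Common AI fingerprint phrases (illustrative list, not exhaustive)
FINGERPRINTS = [
    "it is worth noting",
    "seamless",
    "in today's world",
    "in today's dynamic environment",
]

def flag_sentences(text: str) -> list[str]:
    """Return sentences that contain a fingerprint phrase,
    so they can be rewritten or deleted before sending."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(p in s.lower() for p in FINGERPRINTS)]

draft = ("It is worth noting that our platform delivers a seamless "
         "experience. Revenue grew 47% in Q1.")
flagged = flag_sentences(draft)
# Only the first sentence is flagged; the data-backed one passes.
```

A check like this catches the obvious tells, but it is no substitute for the manual edits above — a release can avoid every phrase on the list and still read hollow.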
After editing manually, run the release through a humanizer. WriteMask is built for exactly this — it restructures AI-generated text to read naturally while keeping your original meaning intact. It has a 93% pass rate across the major detectors journalists are using right now.
Is Using AI for Press Releases Actually Wrong?
Not at all. AI is a drafting tool. The problem is when the draft gets sent without human judgment applied to it. A journalist does not care how you wrote the first version. They care whether the story is real, the quotes are genuine, and the writing respects their time.
Draft with AI. Edit with human judgment. Verify with a detection check. That is the workflow that actually gets coverage.