Is This News Real? A Guide to Verifying News in 2026

Ann Marie Vanderveen
February 17th, 2026

As AI-generated content takes over our social media feeds and internet searches, it is also infiltrating our newspapers. Studies using Pangram’s AI detector found that nearly 60,000 AI-generated news articles are published every day. Some media executives fear the death of journalism as AI slop floods the sites where we typically seek information. With nearly a tenth of all news coverage estimated to contain AI-generated text, readers can no longer assume that a byline implies a human author. This guide will explain the state of the industry and how to use tools like the Pangram Chrome Extension to verify what you read.

Doom and Gloom: Why Media Executives are Worried

Recent research by the Reuters Institute for the Study of Journalism found that only 38% of editors, CEOs, and digital executives were confident about the future prospects for journalism, 22% lower than reported in their survey from four years ago. Respondents cited significant declines in traffic to online news sites as a concern for the year ahead. Publishers anticipate a substantial reduction in traffic from search engines as AI overviews draw attention away from their sites. Additionally, AI content farms continue to pump out content at rates that traditional news outlets could never match.

The rise in AI-generated content online is threatening reader trust in new ways. Unlike rigorously fact-checked human reporting, AI-generated content and summaries are rife with hallucinations. When the two types of content are hard to distinguish, trust in all media erodes.

The Paradox: Journalists See the Threat but Use the Tools

Yes, media executives are shaking in their boots, but many journalists still willingly and knowingly use artificial intelligence in their content creation process. The New York Times openly uses artificial intelligence to sift through massive amounts of data for investigations and to help generate headlines and translations of stories. Many journalists use AI tools to quickly generate transcripts of interviews. So where is the line? When does artificial intelligence turn from assistant to author?

The concept of “human-in-the-loop” asserts that any work done by artificial intelligence should be monitored and checked by real people. When generating transcripts and headlines, good journalists verify for themselves that the information is correct. While artificial intelligence has a greater capacity to sort through and interpret large amounts of data, it also has a greater capacity to misinterpret or hallucinate it. Giving AI tools a long leash, let alone a hand in authorship, greatly harms reader trust.

How to Detect AI News with the Pangram Chrome Extension

So the lines are blurred, and readers are unsure whether they can even trust that the information they’re reading was written by a person. Now what? The fastest way to check whether an article was generated by artificial intelligence is to use the Pangram Chrome Extension. With this tool, you simply highlight the text you’d like to assess, right-click, and select “Check for AI”. You’ll immediately see the likelihood that the article was machine-generated. With Pangram’s high accuracy, it’s a result you can trust. So before you send an interesting article to your friends or your grandmother, you can be sure you’re not risking sending them an AI-generated piece containing potential hallucinations.

Suspicions Arise: Cluing In on AI Writing

There are certain key words and sentence structures favored by AI chatbots in their writing. Even without a tool, there are clues you can look out for that tell you when to be suspicious that an article is AI-written. Vague, monotonous, and repetitive phrasing can be a sign of AI-generated writing, as can the overuse of summary phrases like “In conclusion” or “It is important to note” – phrasing that is very uncommon in journalistic writing. Without any personal experience or on-the-ground information gathering, AI journalism lacks the distinct voice and clear cultural context that any human reporter would bring. And if your worries are growing, check the date: hallucinations are more likely to occur when chatbots attempt to bridge the gap between their training data and real-time events.
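The phrase-spotting heuristic above can be sketched as a toy script. To be clear, this is not Pangram’s detector (which uses trained models, not keyword matching); the phrase list and threshold here are invented purely for illustration:

```python
# Toy heuristic flagger for AI-sounding prose.
# NOT Pangram's detector -- the phrase list and threshold are invented examples.

AI_TELLTALES = [
    "in conclusion",
    "it is important to note",
    "delve into",
    "in today's fast-paced world",
]

def suspicion_score(text: str) -> int:
    """Count occurrences of telltale phrases in the text."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in AI_TELLTALES)

def looks_suspicious(text: str, threshold: int = 2) -> bool:
    """Flag text that uses several telltale phrases at once."""
    return suspicion_score(text) >= threshold

sample = ("In conclusion, it is important to note that we must "
          "delve into the implications of these findings.")
print(looks_suspicious(sample))  # three telltales -> True
```

Keyword counting like this is brittle, of course: it misses AI text that avoids the clichés and flags humans who happen to use them, which is exactly why a trained detector is the more reliable check.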

All that to say, the journalism landscape is changing, bringing about both threats and opportunities. As readers, we have limited power to restrict the amount of AI content that floods our trusted news sites, but we do have the power to verify. With tools like Pangram, readers can regain their confidence and trust in the journalism they read and make sure to support authentic human reporting.

Don’t let a bot try to give you the morning news report. Verify any article instantly with our browser extension.
