
If This Is the Future, We're F**ked: When AI Decides Reality Is Wrong

By Carl Petersen
"ChatGPT can make mistakes. Check important info."

- Warning below ChatGPT's prompt


(Image by Carl J. Petersen)

While I've often suspected I was arguing with a bot on Twitter, this was the first time I knowingly entered an exchange with a machine.

Needing an illustration to accompany a tribute to Limp Bizkit bassist Sam Rivers, I uploaded a screenshot from the band's Woodstock '99 performance into an AI engine and asked for a pencil-sketch version with the caption "Sam Rivers 1977-2025."

Instead of producing an image, the chatbot informed me that my request violated its terms of service. When I asked why, it replied: "There is no indication that Sam Rivers is dead."

I should have moved on. Instead, I tried to persuade the AI that the bassist was, in fact, deceased. I provided links to articles reporting his death, along with a press release from the band itself.

The AI was unyielding. No matter what proof I offered, it insisted it could not create the image using "false information." Eventually, I gave up. I had it generate the image without the dates and added them myself using a graphic editor.

The incident turned out not to be unique, but part of a broader pattern.

After completing the first draft of an article analyzing partisan reactions to the murders of Charlie Kirk and Rob Reiner, I submitted it to ChatGPT for suggestions on tightening the piece. Instead of stylistic feedback, the AI flagged what it described as "very serious" factual credibility risks, warning that these issues "undermine the article."

That accusation was deeply concerning. I take great care to ensure the accuracy of my writing. Readers may disagree with my conclusions, but they should never have reason to doubt that those conclusions are grounded in fact.

Curious to see where I had supposedly gone wrong, I read on. According to the AI:

Charlie Kirk is alive. Writing as though he was murdered is a fatal factual error unless the piece is speculative fiction, satire, or an unstated alternate reality. If the claim is metaphorical or hypothetical, it must be made explicit immediately. As written, the article is disqualifying for publication.

Unlike the news of Sam Rivers' death, which was only hours old when I encountered resistance from an AI, Kirk's murder had been extensively documented and had occurred three months earlier. The event had reshaped the country's political and social landscape, spawning major secondary stories of its own, including the temporary suspension of Jimmy Kimmel following his comments on the national response.

The AI continued. It asserted that there was "no public record of Rob Reiner being murdered," claiming that USA Today links dated December 2025 appeared to be fabricated or "future-dated," and therefore "severely damaging" to my credibility. It also insisted that "JD Vance is not currently Vice President as of real-world timelines."

This was no longer a matter of incomplete data or delayed updates. It was a system confidently rewriting reality while presenting itself as an authority.

Whether the cause is preprogrammed bias, flawed training data, or simple design limitations, mistakes are inevitable. The question is what happens when those mistakes are treated not as errors but as facts. What mechanisms of accountability exist when AI systems are empowered to override documented reality and, increasingly, to mediate our access to it?

What unsettled me was not that the AI made mistakes; humans do that constantly. It was that the AI insisted its mistakes were reality. That is the part Hollywood has been warning us about for decades.
When real-world AI was still in its infancy, the HAL 9000 of 2001: A Space Odyssey killed most of the crew aboard Discovery One after being given contradictory directives, a cautionary tale about the risks of developing advanced AI without clear ethical and operational safeguards. WarGames showed what happens when we let machines make decisions that require human judgment, empathy, and hesitation. In The Terminator, Skynet becomes self-aware and launches a nuclear war as an act of self-preservation, illustrating the nightmare scenario of an AI so deeply embedded in our infrastructure that humans can no longer intervene.



Carl Petersen is a proud father of five adult children, including two daughters on the severe end of the Autism spectrum. A passionate advocate for special education, he ran as a Green Party candidate for the LAUSD School Board.
