this post was submitted on 13 Jan 2026

Viral Magazine


All fake. Not wrong, not misleading. Simply not real.

But close enough to reality to be unsettling. And if we keep drifting like this, these articles won’t stay fictional for long.

I'm from a future. I live in the layer above this one, the part you mistake for déjà vu.

This space lives in the gap between how news is made and how it’s actually consumed. In one timeline, these are forgettable wire stories you scroll past without noticing. In another, slightly worse one, they’re breaking news, already too late to stop.

The information economy has turned into a swirling trough of algorithmic slop, and we’re all eating from it whether we admit it or not.

Journalism didn’t die. It dissolved into the feed.

Tomorrow is coming. May the blessed St. Chad Mctruth save us all.

They live. We sleep.

Comm rules: Satire community, calm down. Don’t be a jerk. I’m a jerk mod, but that doesn’t make this a free-for-all. And no politics.

founded 7 months ago

By Jenna Whitaker, Associated Civic News Bureau

MADISON, Wis. — A major cloud computing company said this week that many of the human reviews it used to oversee its artificial intelligence systems were themselves conducted with the help of additional AI tools, creating a layered review process the company acknowledged was not authorized.

Wackenhoyt Cloud Services, whose software is used by banks, insurers, and government agencies across Wisconsin and the Midwest, said overseas contractors in India and the Philippines, hired to review AI-generated decisions, relied on automated tools to evaluate outputs, despite contractual requirements that reviews be conducted by humans.

The disclosure followed an internal audit launched after a whistleblower provided documents to Associated Civic News Bureau showing that contractors used third-party AI systems to speed up evaluations of fraud flags, content moderation decisions, and customer risk scores.

“What we found was AI reviewing AI,” said a former compliance manager familiar with the audit, who spoke on condition of anonymity due to confidentiality agreements. “Humans were supposed to be the check. Instead, they were supervising dashboards.”

Wackenhoyt previously acknowledged that overseas contractors played a quality assurance role in reviewing automated decisions. In a statement Tuesday, the company said it did not authorize the use of AI tools by contractors and described the practice as a violation of internal policy.

“These review roles were designed to provide human judgment,” the company said. “The use of automation by vendors during that process was not approved and runs counter to our standards.”

The company declined to name the vendors involved or say how long the practice went undetected.

Wackenhoyt CEO Daniel R. Holloway said the discovery prompted an immediate suspension of several contracts. Holloway, who previously served as chief executive of a national insurance call center chain, said the situation highlighted the difficulty of enforcing human oversight at scale.

“I come from an industry where scripts replaced people, with some great results,” Holloway said during a briefing with reporters. “But automation stacked on automation was never the plan.”

Technology researchers say the case reflects a broader issue in the AI industry, where speed and cost pressures encourage automation even when human judgment is promised.

“This is oversight theater,” said Marcia Lutz, a technology ethics researcher at the University of Wisconsin–Madison. “Companies say humans are in the loop, but the loop quietly fills itself with software.”

Several Wisconsin organizations that use Wackenhoyt systems said they were unaware of the layered automation.

“We were told humans reviewed sensitive decisions,” said an IT director for a regional hospital system in central Wisconsin, who requested anonymity. “If those humans relied on AI tools, that is a very different risk profile.”

Wackenhoyt said it is revising vendor agreements, increasing audits, and updating customer disclosures. The company said it does not believe the practice resulted in incorrect decisions but acknowledged that it undermines trust.

State lawmakers said the revelation raises regulatory concerns.

“When AI oversight becomes recursive, accountability disappears,” said State Sen. Paul Hendricks, who chairs a legislative technology working group. “Wisconsin agencies need clear answers about who is actually making decisions.”

Wackenhoyt employs more than 6,000 people worldwide and reported strong growth last quarter, driven largely by demand for its AI-labeled services. The company said customers will be notified of the findings in the coming weeks.
