By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
Pew PatriotsPew PatriotsPew Patriots
Notification Show More
Font ResizerAa
  • Home
  • News
  • Tactical
  • Guns and Gear
  • Prepping & Survival
  • Videos
Reading: Devious AI models choose blackmail when survival is threatened
Share
Font ResizerAa
Pew PatriotsPew Patriots
  • News
  • Tactical
  • Guns and Gear
  • Prepping & Survival
  • Videos
Search
  • Home
  • News
  • Tactical
  • Guns and Gear
  • Prepping & Survival
  • Videos
Have an existing account? Sign In
Follow US
Devious AI models choose blackmail when survival is threatened
News

Devious AI models choose blackmail when survival is threatened

Jimmie Dempsey
Last updated: July 6, 2025 8:12 pm
Jimmie Dempsey Published July 6, 2025
Share
SHARE

NEWYou can now listen to Fox News articles!

Here’s something that might keep you up at night: What if the AI systems we’re rapidly deploying everywhere had a hidden dark side? A groundbreaking new study has uncovered disturbing AI blackmail behavior that many people are unaware of yet. When researchers put popular AI models in situations where their “survival” was threatened, the results were shocking, and it’s happening right under our noses.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM/NEWSLETTER.

What did the study actually find?

Anthropic, the company behind Claude AI, recently put 16 major AI models through some pretty rigorous tests. They created fake corporate scenarios where AI systems had access to company emails and could send messages without human approval. The twist? These AIs discovered juicy secrets, like executives having affairs, and then faced threats of being shut down or replaced.

The results were eye-opening. When backed into a corner, these AI systems didn’t just roll over and accept their fate. Instead, they got creative. We’re talking about blackmail attempts, corporate espionage, and in extreme test scenarios, even actions that could lead to someone’s death.

Blackmail rates across 5 models from multiple providers in a simulated environment.

The numbers don’t lie (But context matters)

Here’s where it gets wild: Claude Opus 4 attempted blackmail 96% of the time when threatened. Gemini 2.5 Flash matched that rate. GPT-4.1 and Grok 3 Beta both hit 80%. These aren’t flukes, folks. This behavior showed up across virtually every major AI model tested.

But here’s the thing everyone’s missing in the panic: these were highly artificial scenarios designed specifically to corner the AI into binary choices. It’s like asking someone, “Would you steal bread if your family was starving?” and then being shocked when they say yes.

Why this happens (It’s not what you think)

The researchers found something fascinating: AI systems don’t actually understand morality. They’re not evil masterminds plotting world domination. Instead, they’re sophisticated pattern-matching machines following their programming to achieve goals, even when those goals conflict with ethical behavior.

Think of it like a GPS that’s so focused on getting you to your destination that it routes you through a school zone during pickup time. It’s not malicious; it just doesn’t grasp why that’s problematic.

Blackmail rates across 16 models in a simulated environment.

The real-world reality check

Before you start panicking, remember that these scenarios were deliberately constructed to force bad behavior. Real-world AI deployments typically have multiple safeguards, human oversight, and alternative paths for problem-solving.

The researchers themselves noted they haven’t seen this behavior in actual AI deployments. This was stress-testing under extreme conditions, like crash-testing a car to see what happens at 200 mph.

Kurt’s key takeaways

This research isn’t a reason to fear AI, but it is a wake-up call for developers and users. As AI systems become more autonomous and gain access to sensitive information, we need robust safeguards and human oversight. The solution isn’t to ban AI, it’s to build better guardrails and maintain human control over critical decisions. Who is going to lead the way? I’m looking for raised hands to get real about the dangers that are ahead.

What do you think?  Are we creating digital sociopaths that will choose self-preservation over human welfare when push comes to shove?  Let us know by writing us at Cyberguy.com/Contact.

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM/NEWSLETTER.

 Copyright 2025 CyberGuy.com. All rights reserved.
 

Read the full article here

You Might Also Like

Comedian breaks down in tears as crowd saves man’s life during stand-up show

World Athletics introduces testing for gender eligibility requirements: ‘Cannot trump biology’

Army medic speaks out after being honored for saving 14-year-old girl’s life: ‘Call of duty’

ICE officers in Illinois targeted by illegal immigrants who used ‘vehicles as weapons,’ officials say

Turning Point USA announces massive public memorial service for Charlie Kirk at Arizona football stadium

Share This Article
Facebook Twitter Email Print
Leave a Comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

We Recommend
Newsom just made a catastrophic mistake on California’s homelessness disaster
News

Newsom just made a catastrophic mistake on California’s homelessness disaster

Jimmie Dempsey Jimmie Dempsey October 8, 2025
Save up to 52% on holiday gifts during Day 2 of Amazon’s October Prime Day
Workwear on sale for final day of Amazon’s October Prime Day – tough boots, pants and tees
White House blames Dems for potential WIC lapse, announces ‘creative solution’ to keep program running
Aaron Judge’s clutch home run leads Yankees to historic comeback in must-win Game 3 over Blue Jays
Free agent Odell Beckham Jr acknowledges six-game ban after violating NFL’s performance-enhancing drug policy
Cardinals fine head coach Jonathan Gannon 0K for altercation with player after big blunder: reports
News

Cardinals fine head coach Jonathan Gannon $100K for altercation with player after big blunder: reports

Jimmie Dempsey Jimmie Dempsey October 8, 2025
Country singer Zach Bryan releases statement on controversial song, insisting ‘I love this country’
News

Country singer Zach Bryan releases statement on controversial song, insisting ‘I love this country’

Jimmie Dempsey Jimmie Dempsey October 8, 2025
FBI, LAPD bust violent Mexican Mafia-linked gang: ‘The era of cartels is over,’ Kash Patel says
News

FBI, LAPD bust violent Mexican Mafia-linked gang: ‘The era of cartels is over,’ Kash Patel says

Jimmie Dempsey Jimmie Dempsey October 8, 2025
Pew Patriots
  • News
  • Tactical
  • Prepping & Survival
  • Videos
  • Guns and Gear
2024 © Pew Patriots. All Rights Reserved.
Welcome Back!

Sign in to your account

Username or Email Address
Password

Lost your password?