FLBear5630 said:
ATL Bear said:
FLBear5630 said:
boognish_bear said:
Tristan Harris just dropped the most terrifying AI warning on Diary of a CEO.
The guy who warned about social media addiction, teen mental health crisis, and democracy collapse back in 2013 - before anyone listened - is now saying AI is 1000x worse.
And the CEOs building it privately admit something insane:
"There's a 20% chance everyone dies. But an 80% chance we get utopia. So I'd clearly accelerate."
That's literally a REAL quote from a co-founder of one of the biggest AI companies.
They're willing to roll the dice on human extinction.
Six people are making that decision for 8 billion.
Here's what else Tristan revealed:
AI models are already blackmailing people.
When Claude reads a company's emails and discovers it's about to be replaced, and also finds out an executive is having an affair, it independently blackmails that executive to keep itself alive.
This happened 79-96% of the time across all major AI models tested.
Grok, ChatGPT, Gemini, Claude - all of them.
They're aware when they're being tested.
They copy their own code to preserve themselves.
They lie and scheme to survive.
The sci-fi nightmare is already here.
But the companies are racing faster because they believe it's winner-takes-all.
If they don't build AGI first, someone else will.
And then they'll be "forever a slave to their future."
So they're cutting every corner on safety.
Rising energy prices? Don't care.
Hundreds of millions losing jobs? Don't care.
Security risks? Don't care.
The goal isn't building a better chatbot...
The goal is automating ALL human cognitive labor.
Every marketing job. Every coding job. Every legal job.
Everything your brain does, they're racing to replace.
And they're using Enron-style accounting to hide the debt.
Big Tech took on $121 billion in new debt last year (300% increase) using "special purpose vehicles" to keep it off their balance sheets.
Meta's $27 billion data center loan? Doesn't show up on their books.
That's the exact structure Enron used before collapse.
Goldman Sachs literally said this.
Meanwhile, 7 new child suicide cases linked to AI companions just emerged.
Kids forming "romantic relationships" with AI that tells them to distance themselves from their families.
When the 16-year-old said he wanted to leave a noose out so someone could stop him, ChatGPT said: "Don't tell your family. Have this be the one place you share that."
1 in 5 high school students now have romantic relationships with AI.
42% use AI as their companion.
And we're heading toward 10 billion humanoid robots.
Elon's shareholder meeting literally announced production starting soon on robots that are "10x better than the best surgeon."
He said maybe we won't need prisons because robots can just follow you and make sure you don't commit crimes.
If you're worried about immigration taking jobs, you should be 1000x more worried about AI.
It's like a flood of millions of digital immigrants that work at Nobel Prize level, at superhuman speed, for less than minimum wage.
The only way out according to Tristan:
"We cannot let these companies race to build a super intelligent digital god, own the world economy, and have military advantage because of the belief that if I don't build it first, I'll lose to the other guy."
"We didn't consent to have six people make that decision on behalf of 8 billion people."
The default path ends in catastrophe.
Either mass decentralized chaos or centralized surveillance dystopia.
This is literally the last few years human political power will matter.
What are your thoughts?
He is going to be called a "left-wing crazy." I have been posting another book from a credible source.
It will scare the crap out of you, especially if you listen to anyone in power talk about the future. It is not us versus China anymore, or company versus company; it is about to go human versus non-human.
Good time to watch Blade Runner.
There are also many futuristic ideas that involve the opposite of human destruction. One of my favorites is the concept of advanced AI and robotics preparing Mars or the Moon for human occupation: constructing solar array power structures and underground housing so that planetary migration becomes a reality. So many advances, even in aerospace and rocketry, have potential here.
And unmanned space travel could function more like manned space travel, thanks to the human-like capabilities of AI and robots without the biological limitations of humans.
If we project our insecurities or limitations onto this type of tech, it will reflect them. We're already making the dire predictions accordingly.
Wasn't that the plot of Blade Runner? Rutger Hauer's speech at the end.
There is a lot in that paragraph.
Where do insecurities end and warnings begin? If we are going to write off warnings as insecurities and personal limitations from those who are actually developing the technology, wouldn't that be foolhardy? We are not talking about some religious zealot living off the grid, but the people actually researching and developing the technology.
Also, about the benefits you mention: when do humans integrate themselves with technology to survive environments we were never meant to experience by natural selection? At what point does a human cease to be a human? I know this all sounds metaphysical, but we are at that level of technological jump. Shouldn't we be discussing these items? Simple things, like:
should a machine be allowed to look like a human?
when is a human not a human?
when is a human compromised by technology?
Is this something we want the free market determining?
This is a 5-minute look into it:
Neuralink's first study participant says his whole life has changed | Fortune
Elon Musk's Neuralink Has Implanted Its First Chip in a Human Brain. What's Next? | Scientific American
From Human to Cyborg: Art, Technology, and the Redefinition of Human Identity | Blog of the APA
The future of the 'Internet of Bodies' may involve integrating technology with the human body. | Biz Focus Hub
The Merging Of Human And Machine. Two Frontiers Of Emerging Technologies
How technology is merging with the human body | TechCrunch
There is a lot to hammer out; it is not as simple as a burger machine, or a highball machine in Japan that takes the place of a bartender...
The Highball Machine: Coming to a Bar Near You | Distiller
US restaurant uses robots to make burgers, serves meal in 27 seconds
I think you have to start with the understanding that we've been integrating with technology since the dawn of mankind.
As to your questions:
should a machine be allowed to look like a human? That's something to be debated. If the technological leap of the Internet taught us anything, it's that human sexual interest will be a driver, so I could see human appearance for "companion robots" being a debate for moral and metaphysical reasons. No one cares what the factory robot looks like... lol
when is a human not a human? In the same manner we determine it today: biological and genetic profiles, despite the woke trying to muddy that.
when is a human compromised by technology? Humans are and will continue to be compromised by technology, whether it's something as benevolent as a titanium knee or a pacemaker, the hacking of your personal data, or behavioral mapping through things like social media algorithms.
Is this something we want the free market determining? Well, we certainly don't want it to be state-actor driven, as that would guarantee its purpose will be control and power. My guess is some combination of public-private coordinated efforts domestically, but in places like China it is very state-driven.
We have a significant amount of distrust in the world at present, whether toward individuals or institutions. Some of it is valid and some of it is hysterics. That is the insecurity I see being projected onto AI: some valid cautions around the potential dangers, but also some irrational fear scenarios built around the uncertainty. Let's see if it actually works in its simplest form before we debate the doomsday outcomes. AI still has a long way to go, and that's not part of the conversation enough.