TruthArchive.ai - Tweets Saved By @_Investinq

Saved - March 2, 2026 at 8:21 PM
reSee.it AI Summary
I watched Larry Ellison describe a world where every police body cam, doorbell, dash cam, and drone feeds to Oracle's AI in real time. No humans watching; AI flags incidents and notifies police chiefs in seconds. Drones could replace patrol cars, pursuing vehicles autonomously. Oracle, a private firm with ties to the CIA and a backbone of global data, would control the system. He also pitches AI Database 26ai, which lets models reason over private data without it leaving the vault, positioning private information as the new AI moat.

@_Investinq - StockMarket.News

The second richest man on Earth just told a room full of investors exactly how he plans to WATCH you. Every minute of every day of your life. Larry Ellison, co-founder of Oracle, a $320 billion government contractor, stood in front of financial analysts and said the quiet part out loud. "Citizens will be on their best behavior because we're constantly recording and reporting everything that's going on." Read that again slowly. Here's the system he described: every police body camera in America streaming 24/7 to Oracle's cloud, and officers can't turn them off. The camera is always recording. But here's the twist: it's not humans watching the footage, it's AI. Oracle's artificial intelligence monitors every feed in real time. If something happens, a shooting, an altercation, excessive force, AI flags it instantly. An alarm goes off and the chief of police is notified in seconds. "Every police officer is going to be supervised at all times." But he didn't stop there. He said the same system watches citizens too. Doorbell cameras, dash cams, security cameras, traffic cameras, drones overhead. All feeding into one AI-powered network, all analyzed in real time on Oracle's servers. And about those drones: Ellison says high-speed police chases should be eliminated. No more patrol cars, just a drone that locks onto your vehicle and follows you. "It's very simple," he said, "in the age of autonomous drones." So who controls this system? Not you or your local government. Oracle. A private corporation with deep ties to the CIA, the Pentagon, and intelligence agencies around the world. A company whose founder built his first database for the Central Intelligence Agency. The same company now pitching itself as the infrastructure backbone of total surveillance. Critics are calling it real-life 1984; the ACLU has flagged it. But Larry Ellison isn't worried about any of that. He's worried about closing the deal.
"There are so many opportunities to exploit AI," he told the room. This isn't a debate about whether AI can help policing; it can. The question is what happens when a private company builds a system designed to watch 330 million people and calls it a product. No vote was taken, no law was passed, and no citizen was consulted. Just one billionaire, one investor meeting, and one vision for a world where everyone is watched and told to behave.

Video Transcript AI Summary
Speaker 0 says police will be on their best behavior because everything is constantly recorded, and citizens will be on their best behavior for the same reason; the record is unimpeachable. It's not people watching the camera feeds, it's AI: if an altercation like the one in Memphis occurred, an alarm would go off and the chief of police would be notified immediately, so every officer is supervised at all times and any problem is reported to the sheriff, the chief, or whoever needs to take control. He adds that drones reach an incident far faster than a police car, and that high-speed chases between cars are unnecessary: with the new generation of autonomous drones, you simply have a drone follow the car.
Full Transcript
Speaker 0: The police will be on their best behavior because we record we're we're constantly recording, watching, and recording everything that's going on. Citizens will be on their best behavior because we're constantly recording and reporting everything that's going on. And it's unimpeachable. The cars have cameras on them. I think we have a squad car here someplace. But those kind of applications using AI, if we can use AI, and we're using AI to monitor the video. So if that altercation had occurred, that occurred in Memphis, the chief of police would be immediately notified. It's not people that are looking at those cameras, it's AI that's looking at the camera. No. No. No. You can't do this. It would be like a shooting. That's gonna be immediately that's gonna be an an event that's immediately rip an alarm's gonna go off. It's gonna be and we're gonna we're gonna have supervision. In other words, every police officer is gonna be supervised at all times. And and the supervision will, and and if there's a problem, AI will report the problem and report it to the appropriate for person, whether it's the sheriff or the chief or whom whomever we need to take control of the situation. We have you know, same thing. We have drones. We just if there's something going on in a shopping and and I'll stop. A drone goes out there. I get there way faster than a police car. There's no reason for, by the way, high speed chases. You shouldn't have high speed chases between cars. You just have a drone follow the car. I mean, it's very, very simple. And then new generation generation of autonomous drones.

@_Investinq - StockMarket.News

Oracle just told every AI company on earth the same thing: your models are worthless. Not the technology, the talent, or the billions spent training them, but the data they were trained on. Larry Ellison, the man who built Oracle into the backbone of global enterprise, just dropped a bombshell. He said ChatGPT, Gemini, Grok, and Llama are all training on the exact same data: the entire public internet, every Wikipedia page, every Reddit thread, every news article. That means they're all converging, essentially becoming the same product with different logos. Ellison's word for it is commodities. But here's where it gets dangerous. He says the real gold isn't public data. It's private data. The medical records in hospital systems, the financial data in bank vaults, the supply chain secrets of every Fortune 500. And guess where most of that data already lives. Not Google, Amazon, or Microsoft, but inside Oracle. Oracle databases hold most of the world's high-value private enterprise data. So Oracle just launched something called AI Database 26ai. It lets the top AI models, ChatGPT, Gemini, Grok, Llama, reason directly over a company's private data without that data ever leaving the vault. They're using a technique called RAG, Retrieval-Augmented Generation. The AI doesn't train on your data; it searches it in real time. Think about what that means. A bank could ask AI to analyze every loan it's ever made without exposing a single customer record. A hospital could have AI diagnose patients using its full medical history without violating HIPAA. A defense contractor could let AI reason across classified operations without data leaving a secure environment. Ellison is betting this is bigger than the training market. Bigger than the GPU boom. Bigger than the data center buildout. He called it the largest and fastest growing market in history. The numbers back the ambition. Oracle's remaining performance obligations just hit $523 billion.
That's contracted revenue not yet delivered, and $300 billion of it comes from OpenAI alone. Cloud revenue hit $8 billion in a single quarter, OCI grew 66 percent, and GPU revenue surged 177 percent. But here's the part nobody's talking about. If private data becomes the real AI moat, then whoever controls the database controls the future of AI. And that's a level of power that should make everyone uncomfortable.
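The RAG pattern the tweet describes can be sketched in a few lines. This is a toy illustration, not Oracle's implementation: real systems rank records with vector embeddings, while this sketch uses simple word overlap so it runs with no dependencies. The function names (`retrieve`, `build_prompt`) and the sample records are invented for the example.

```python
import re

# Minimal sketch of Retrieval-Augmented Generation (RAG):
# private records stay in place; at query time we retrieve only the
# most relevant ones and pass those to the model as context.

def tokenize(text):
    """Lowercase and split into word-ish tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9%.]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query.
    Real systems use embedding similarity; overlap keeps this runnable."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Assemble what the LLM would receive: only the retrieved snippets
    leave the 'vault', never the full dataset, and no training occurs."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

records = [
    "Loan 1041: approved, 5.2% rate, commercial real estate",
    "Loan 2230: denied, insufficient collateral",
    "Patient visit notes: routine checkup, no issues",
]
prompt = build_prompt("Which loan was denied and why?", records)
print(prompt)
```

The design point is visible in `build_prompt`: only the top-ranked snippets are copied into the model's context window, so the record store never leaves its original location and the model's weights are never updated with it.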

@_Investinq - StockMarket.News

@PostTenebras_ I may have used a bit of help

Saved - March 1, 2026 at 2:48 PM

@_Investinq - StockMarket.News

The Pentagon just blacklisted one of America's most valuable AI companies. For refusing to build surveillance tools aimed at American citizens. Hours later, its biggest rival, OpenAI, quietly signed the deal of the decade. Here's what just happened and why it changes everything. This week, the US Department of War gave Anthropic an ultimatum: drop your safety restrictions and let us use your AI for anything we want. The deadline was 5:01 PM today, and Anthropic said no. Their CEO, Dario Amodei, drew two red lines. No mass surveillance of Americans. No fully autonomous weapons without a human pulling the trigger. The Pentagon called this "woke AI." Anthropic called it a conscience. The Pentagon's response was swift and brutal. Defense Secretary Pete Hegseth branded Anthropic a "supply chain risk," a designation normally reserved for Chinese and Russian companies. President Trump ordered every federal agency to stop using Anthropic immediately. But here's where the story turns. That same night, Sam Altman, CEO of OpenAI, Anthropic's biggest competitor, posted a message: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network." The twist? OpenAI's deal includes the exact same red lines Anthropic was just destroyed for demanding. No mass surveillance, no autonomous weapons, and human control over the use of force. The Pentagon punished one company for demanding protections it then gave to another company the same day. Altman even defended Anthropic on live television hours earlier: "For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety." Then he signed the deal Anthropic couldn't get. Anthropic's was the first, and only, AI model deployed on the Pentagon's classified networks. Replacing it will take months. OpenAI just positioned itself to fill the most powerful AI vacancy in the U.S. military. The stakes are staggering.
Anthropic just raised $30 billion and was preparing for an IPO. Now over 300,000 enterprise clients may be forced to cut ties. Not because the technology failed, but because the company refused to remove a guardrail that said "don't spy on Americans." But here's the real question no one's asking: if the Pentagon never intended to use AI for mass surveillance, as they claim, why was this the hill they chose to die on? Why blacklist a $380 billion American company over a clause the government says doesn't even matter? Sam Altman called for de-escalation. He asked the Pentagon to offer these same terms to every AI company, including Anthropic. The world just watched a company get punished for saying "no" to surveillance and a competitor rewarded for saying "yes, but with the same conditions." Bookmark and share this.

Video Transcript AI Summary
Speaker 0 says the Pentagon should not be threatening DPA against these companies. Despite differences with Anthropic, they mostly trust them as a company and believe they really do care about safety. They’ve been happy that Anthropic has been supporting their warfighters.
Full Transcript
Speaker 0: I don't personally think the Pentagon should be threatening DPA against, these companies. For all the differences I have with Anthropic, I, mostly trust them as a company, and I think they really do care about safety. And I've been happy that they've been supporting our warfighters.

@sama - Sam Altman

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are at the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models, and to ensure their safety we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, terms which in our opinion everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serving all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
