TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Israel uses a system called Lavender to decide who to kill, assigning scores to Palestinians and drone striking those above a certain threshold. Palantir creates these "murder lists" by scraping data from Facebook, satellite imagery, and other surveillance sources, compiling personal information to assign weighted scores and identify targets. Palantir, founded by individuals with ties to the Israeli government and the CIA, built this surveillance platform in Israel to target Palestinians. Palantir also maintains an "enemies list" of 1 to 2 million US citizens for the CIA and federal law enforcement, classifying them as potential political dissidents. This database uses surveillance and AI to identify Americans deemed threats to the government, including those with anti-government views or potential involvement in domestic extremism.

Video Saved From X

reSee.it Video Transcript AI Summary
In 2021, Israeli intelligence developed an AI program called Lavender to target individuals in war. The system designated 37,000 Palestinians as targets, resulting in civilian casualties. The IDF used mass surveillance to assess the likelihood of each person being a militant and targeted them accordingly. The Lavender system tracked individuals with patterns similar to Hamas, leading to the deaths of many innocent civilians. This AI targeting system has similarities to surveillance systems used in the US.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker argues that if you care about not being surveilled illegally, about adequate treatment for people who enter the country illegally, and about lives in Gaza, Ukraine, and everywhere else Palantir is used, then you should want the best software in the world, because it is the only way to reduce and more precisely target the people involved, to justify doing so, and to say that a specific person did a specific thing and deserves to go.

Video Saved From X

reSee.it Video Transcript AI Summary
Israel uses a system like Lavender to decide who to kill, assigning scores to Palestinians and drone striking those above a certain threshold. Palantir creates these "murder lists" by scraping data from Facebook, satellite imagery, and other sources to build databases with personal information, geolocation, bank information, healthcare information, and relationships. The company assigns weighted scores using algorithms and AI to advise the military on drone strikes. Palantir, founded by individuals with ties to the Israeli government and the CIA, allegedly built this platform in Israel to target Palestinians. Palantir also maintains an "enemies list" of one to two million US citizens for the CIA and federal law enforcement, classifying them as potential political dissidents. This database uses surveillance, AI, and weighted scores to identify Americans deemed a threat to the government, including those with anti-government views or those who might be a concern in a martial law or civil war scenario.

Video Saved From X

reSee.it Video Transcript AI Summary
First Speaker argues that Microsoft provided services and access to data, including Palestinian data, which allowed Israel to set up systems to mass target and mass kill Palestinians. They mention an application called "Where is Daddy?" that allows the army to randomly track people and reach them when they are with their families in order to inflict the most harm, describing it as brutal. They emphasize the importance of understanding that this represents the end of humanity and of the civilization people have pretended to belong to. They claim Israel has the most sophisticated military in the region and has known exactly what it is doing for two years. They assert that many soldiers are breaking down and suicide rates are increasing among young Israelis who have served in the army, noting they are older than teenagers and have been turned by indoctrination into willing executioners of a genocide. They call for intervention by people who love Israel to save what remains of it. First Speaker contends that the biggest harm is being done by those outside of Israel who defend the regime. They describe the regime as having imposed a military dictatorship for decades on Palestinians in the West Bank and Jerusalem, and until 2005 in Gaza, and claim this regime also extends to some Israelis who are part of the system. They argue that brutality toward others undermines one's own humanity. Second Speaker agrees and seeks clarification, asking if there is an app, possibly by an American company, called "Where's Daddy" that allows the Israeli government to murder men in front of their children. They reference the prior statements and want confirmation of that claim. First Speaker responds that Israel has developed not just a system but an automated system to decide targets through computing, and that the data has been provided by technology companies. They reiterate that this is part of a broader system of targeting.

Video Saved From X

reSee.it Video Transcript AI Summary
Israel uses a system called Lavender to decide who to kill, assigning scores to Palestinians and drone striking those above a certain threshold. Palantir creates these "murder lists" by scraping data from Facebook, satellite imagery, and other surveillance sources, compiling personal information to assign weighted scores and identify targets. Palantir, founded by individuals with ties to the Israeli government and the CIA, also maintains an "enemies list" of 1 to 2 million US citizens for the CIA and federal law enforcement. This list classifies Americans as potential political dissidents based on surveillance data and AI, assessing their threat level to the government, extremist views, and potential for anti-government activity in scenarios like martial law or civil war.

Video Saved From X

reSee.it Video Transcript AI Summary
Hello, everyone. We're discussing fusion centers, which compile extensive data on individuals in America, similar to a comprehensive dossier. The integration of AI amplifies this issue by incorporating public records, surveillance data, and other sources, creating a scenario reminiscent of "Minority Report." This technology can be misused to target individuals labeled as "deplorables," as suggested by figures like Harari. Elon Musk aims to develop an AI that seeks truth rather than perpetuating biases against certain groups. My background in high-tech reveals how this technology has been exploited in cases like the Portland Christmas Tree bomber. Raising awareness about these issues is crucial, especially as we seek reforms to ensure that government technology serves the citizens rather than opposes them.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker states that artificial intelligence is being used to carry out mass assassinations, blurring the lines between assassination and warfare. They claim that many targets in Gaza are bombed as a result of AI targeting. The speaker emphasizes the connection between AI and surveillance, asserting that AI needs information to generate targets, ideas, or propaganda. Surveillance data from telephones and the internet is key to training the algorithms used to conduct these mass assassinations.

Video Saved From X

reSee.it Video Transcript AI Summary
Natalie asks about the AI piece, expressing cynicism that there may be a push for a "war bot" to circumvent consumer AI limits that block starting wars with WMDs, and wonders if there is a benevolent reason. Matthew responds that it's worse than that: Hegseth described a platform to run on military desktops worldwide—secure, like ChatGPT or Claude but for the Pentagon and military services—that "doesn't allow information to get out." The core issue, he says, is who controls the AI, and he poses two key questions about the future of war with AI: who ultimately owns these AI platforms, and who informs them—who gives them the algorithm, the programming, and essentially the orders on how to answer questions. He notes increasing concerns about the reliability of information, including how ChatGPT handles questions about trustworthy news sources, and mentions that ChatGPT defers to institutional structures rather than historical accuracy. The risk, he says, is that military AI programs may not provide honest, candid, objective information to military personnel, but rather information based on narratives the Pentagon or manufacturers want. A common belief is that technology makes war more precise and reduces civilian harm, but Matthew contends this is a myth. He explains that precision-guided munitions were not about preventing civilian casualties but about increasing efficiency—"the purpose was to make the weapons more efficient, so we had to drop less bombs to, say, blow up a bridge." He cites the small diameter bomb as evidence that the aim is not to limit civilian casualties but to allow more bombs to be delivered from aircraft. He highlights real-world examples of AI in warfare, referencing Israeli systems in Gaza. He explains that three AI programs—Lavender, Gospel, and Where's Daddy?—play roles in targeting and timing strikes. Lavender scans the Internet and databases to identify targets (e.g., labeling someone as a Hamas supporter based on past online activity), and Where's Daddy? coordinates that information to ensure bombs hit resistance fighters "when they are with their families," not away from them. He notes reporting from Israeli media and +972 Magazine about these programs and urges viewers to examine that reporting; Tucker Carlson's coverage is mentioned as an example. Matthew argues this demonstrates the dystopian potential of AI in war and cautions against assuming American AI would be more benevolent. He mentions commentators' attempts to justify or excuse such actions, including a remark attributed to Mike Huckabee that "Israel did not attack Qatar. They just sent a missile into their country aimed at one person," noting that people nearby were injured or killed. He ends with a reminder of Orwell's reflections on war and the idea that those who cheer for war might be less enthusiastic if they experienced its costs, suggesting a broader aim to make the costs of war felt among the ruling elites who benefit from it.

Video Saved From X

reSee.it Video Transcript AI Summary
AI can be used to oppress people, as highlighted in an exposé by +972 Magazine. The article discusses how Israel employed AI to identify suspects, but this technology resulted in the deaths of many civilians who were not the intended targets.

Video Saved From X

reSee.it Video Transcript AI Summary
Where's Daddy? According to this reporting in September, again based on what Israeli military officials told these journalists, it's easier to kill somebody when they're home than when they're out in the street. So this AI will monitor the location of your cell phone, your smartphone, and when it arrives at the location that's been determined to be your residence, there will be a ping to the guy who can target your house and bring it down. I guess whoever came up with the name Where's Daddy? thought that many of them would be fathers, and that's why it's called Where's Daddy? When they reach their homes, daddy's home, and then the entire house and everybody in it could be blown up.

Video Saved From X

reSee.it Video Transcript AI Summary
An AI system marked 37,000 Palestinians in Gaza as suspected militants based on various factors. Despite knowing it made errors in 10% of cases, the IDF used this system to target individuals in their homes with unguided missiles, resulting in civilian casualties.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker explains that hacking millions of people only requires access to their data, allowing others to know individuals better than they know themselves. This poses a threat to democracy and free markets, as it enables manipulation and prediction of people's actions. Total surveillance regimes, like those seen in Xinjiang and the Israeli-occupied territories, are emerging, where a small number of soldiers can control millions of people with the help of data.

Video Saved From X

reSee.it Video Transcript AI Summary
An AI system called Lavender marked 37,000 Palestinians in Gaza as suspected militants based on small signs like phone usage. The Israeli military used this information to target and bomb these individuals, even though the system made errors in 10% of cases. This led to civilian casualties when unguided missiles were used on family homes, killing up to 20 civilians per suspected militant.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 1 and Speaker 0 discuss the implications of AI in military use. They consider whether consumer AI is being bypassed with a secure, military-specific platform that would be sealed—essentially one-way in and no information out—for the Pentagon and military services. The key questions raised are: who controls the AI, who informs its algorithms, and who gives it its orders on how to answer questions, highlighting concerns about privatization and outsourcing of war. Speaker 1 argues that the future of war with AI hinges on two issues: ownership of AI platforms and the sources of their programming. They note that AI can deflect or defer to institutional structures rather than empirical accuracy, raising concerns about the reliability of information provided to military personnel. They also reference the myth that advancing technology automatically reduces civilian harm, citing that precision-guided munitions were designed for efficiency, not necessarily to prevent civilian casualties, noting that the intent was to reduce the number of bombs needed to achieve targets. The conversation shifts to the concept of precision in weapons. Speaker 1 points out that laser- and GPS-guided bombs were not primarily invented to minimize civilian casualties but to increase efficiency. They mention the small diameter bomb as an example, explaining that its use increases the number of bombs that can be deployed rather than primarily limiting collateral damage. The discussion then moves to real-world AI systems used in conflict zones. Speaker 1 cites Israeli programs—Lavender, Gospel, and Where's Daddy?—as examples of nefarious and insidious AI in war. Lavender supposedly scans the Internet and other databases to identify targets, for example flagging someone as a Hamas supporter based on years of activity. Where's Daddy? allegedly guides Israeli drones to strike fighters when they are with their families, not away from them. This reporting is linked to coverage from Israeli media and +972 Magazine, and Speaker 2 references Tucker Carlson's coverage of these issues. Speaker 2 amplifies the point by noting the emotional impact of such capabilities, arguing that targeting men when they are with their children is particularly disturbing. They also discuss broader political reactions, including a remark attributed to Ambassador Huckabee about Israel not attacking Qatar but "sending a missile there" that injured nearby people. Speaker 1 concludes by invoking Orwell's reflection on the Spanish Civil War, suggesting that those who cheer for war may be confronted by the consequences when modern aircraft enable distant bombing. They emphasize the need to make the costs of war felt by the ruling classes who benefit from it, not just the people on the ground.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 describes a 2021 call by the commander of Israeli intelligence to design a machine to resolve the human bottleneck in locating and approving targets in war. A recent investigation by +972 Magazine and Local Call reveals that the Israeli army developed an AI-based system called Lavender to designate targets and direct airstrikes. During the initial weeks of the Lavender operation, the system designated about 37,000 Palestinians as targets and directed airstrikes on their homes. The system reportedly had an error rate of about 10%, and there was no requirement to verify the machine's data. The Israeli army systematically attacked targeted individuals at night in their homes while their whole family was present. An automated component, known as "Where's Daddy," tracked targeted individuals and carried out bombings when they entered their family residences. The result, according to the report, was that thousands of women and children were killed by Israeli airstrikes. Israeli intelligence officers allegedly stated that the IDF bombed homes as a first option, and in several cases entire families were murdered when the actual target was not inside. In one instance, four buildings were destroyed along with everyone inside because a single target was in one of them. For targets marked as low level by Lavender, cheaper bombs were used, destroying entire buildings and killing mostly civilians and entire families. It was alleged that the IDF did not want to waste expensive bombs on "unimportant people," and that for every low-level Hamas operative Lavender marked, it was deemed permissible to kill up to 15 or 20 civilians; for a senior Hamas official, more than 100 civilians could be killed. Most AI targets were never tracked before the war. Lavender analyzed information collected on the 2,300,000 residents of the Gaza Strip through mass surveillance, assessing the likelihood of each person being a militant and giving a rating from 1 to 100. If the rating was high enough, the person and their entire family were killed. Lavender flagged individuals with patterns similar to Hamas, including police, civil defense workers, relatives, and residents with similar names or nicknames. The report notes that this kind of tracking system has existed in the US for years. Speaker 1 presents a counterpoint: a "fine gentleman of the secret service" claims to provide a list of every threat made about the president since February 3 and profiles of every threat maker, implying that targets could be identified through broad data collection including emails, chats, and SMS. The passage suggests a tool akin to a Google search but including private communications. Speaker 0 adds that although some claim Israel controls the US, Joe Biden says Israel serves US interests. Speaker 2 asserts, "There's no apology to be made. None. It is the best $3,000,000,000 investment we make," and claims that without Israel the United States would have to invent an Israel to protect its regional interests. Speaker 0 closes reporting for Infowars, credited to Greg Reese.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker addresses the IDF's use of artificial intelligence, or Lavender, in identifying Hamas targets. They state they are not on top of all the details of what is happening in Israel and that their bias is to defer to Israel. They say it is not for them to second-guess everything. They conclude that, broadly, the IDF gets to decide what it wants to do and that it is broadly in the right.

Video Saved From X

reSee.it Video Transcript AI Summary
You're getting wealthy off of killing Palestinians... kills Palestinians with their AI and technology. Killing my family in Palestine. The primary source of death in Palestine is the fact that Hamas has realized that there are millions and millions of useful idiots that will excuse... you are killing my people and you're justifying it because of Hamas or anyone else... kill Palestinians. Mostly terrorists. That's true. I believe she believes I'm evil. I believe she's an unwitting product of an evil force... that she unwittingly is part of their strategy, that she is a product. And the most important thing, arguably, in the book—or what Palantir, or what you could learn—is: do not become a product of an ideology that sounds sensible...

Video Saved From X

reSee.it Video Transcript AI Summary
The segment centers on a US-led Civil-Military Coordination Center in southern Israel, established in October 2025 to monitor the Gaza ceasefire. It showcases a map of the Strip, footage of trucks, and a Dataminr report. Dataminr is a private US tech company that uses artificial intelligence to mine social media in real time and issue warnings of critical situations, highlighting the growing relationship between private AI firms and militaries and signaling a structural shift in how warfare is conducted, who controls it, who profits, and how accountability works. Heidy Khlaaf, chief AI scientist at the AI Now Institute, explains that militaries rely too heavily on commercial technologies and are not investing in their own traceable, explainable models, instead using a "black box." Gaza provides the first confirmation that commercial AI models are being directly used in warfare, justified by speed at the cost of accuracy. The report asserts that Israel's war in Gaza was not driven solely by soldiers but also by data prediction, location tracking, drone feeds, and AI models built by private tech firms. Palantir is described as a key player, with reports claiming it supplied AI tools to help identify and accelerate the targeting of individuals in Gaza, though Palantir has denied these claims. Amazon and Google are said to have provided Israel with the cloud infrastructure needed for military AI systems; both companies maintain their services are commercial, not military. These tools are said to have shifted the war from human intelligence to a data industry. While defense contracting is not new, earlier conflicts such as the 2003 US invasion of Iraq relied more on informants and interrogations; AI then involved a human in the loop, with clearer military applications. Now, the line between commercial and military use of AI is blurred, and corporations play a larger role. A key question raised is what it means when a private AI company, rather than the state, controls the infrastructure the military depends on. Khlaaf notes that militaries are ceding control and state obligations to faulty technology developed by private companies with different incentives, which can lead to AI being used to evade accountability for mass civilian casualties due to model inaccuracy. The analysis concludes that war is no longer just a battlefield—it is also about who builds and controls the software governing mass civilian data.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Welcome back to Jake GTV news. Did you see ICE shooting American citizens? Speaker 1: I thought they were supposed to get rid of the illegals, though. Speaker 0: Me too. Let's go to Ching Chong on the murder scene. Speaker 1: Chloe and Michael, good morning. We're here in Minneapolis where ICE agents trained by Israel are causing chaos. We go to John for more. Speaker 0: Thanks, Ching Chong. Thought it was only Libtards who opposed this, but they are literally murdering Americans. Back to you in the studio. Speaker 2: Stand back. Speaker 1: Please don't hurt me, sir Ed. I'm here to get rid of the illegals, grandma. Speaker 0: Wow. Thanks, John. Check this out here. It's from the protest. Here we see an agent assault a woman for simply being at the protest. Speaker 3: Then Alex steps in to help her Speaker 0: get back on her feet, and Speaker 4: the agents pepper spray him and proceed to assault him. Speaker 0: They then proceed to remove his legally owned firearm and shoot him in the back roughly 10 times, not even kidding. Holy shit. Speaker 1: Please tell me they're gonna jail. Speaker 0: Nope. They're on administrative leave while the FBI pretends to care. Dude, what? Let's see what Trump's team has to say. Speaker 5: Very, very unfortunate incident. I don't like that he had a gun. I don't like the fact that he was carrying a gun. Speaker 6: You know, you can't have guns. You can't walk in with guns. You just can't. And you can't listen. You can't walk in with guns. You can't do that, but it's it's a very unfortunate incident. Speaker 7: Do you Speaker 1: agree with Trump, Steen? Speaker 6: Oh, hell yeah. Guns are bad now. Didn't you get the memo? Speaker 1: What about the second amendment? Speaker 6: It's all four d chess, honey. Trust the plan. Speaker 1: Sup, bro? How do you feel about ICE? Speaker 0: This country needs more Indians than blacks. Check your privilege. Speaker 1: Dude, when did everybody get so retarded? Was it the vaccines or something? We go to the investigation team to learn more. Speaker 8: Thanks, Ching Chung. So basically, we uncovered that not only is ICE Embassy located in Tel Aviv, but they're using the same technology they used to genocide the Palestinians. Speaker 0: It's a freaking Jewish spyware by Paragon Solutions called Graphite, and check this out. Tell me why Alex Pretty was googled a month prior to the shooting and, again, five minutes before his death. Make of that what you will. Back to you guys. Wow. Wasn't the Homeland Security's own Twitter page being run from Israel? Speaker 1: Yeah. Same with ICE's embassy, Tel Aviv to be exact. Speaker 0: Freaking Jews, man. Speaker 9: Shut it down. He was an unhinged lefty who thought our Chobus Goy Trumpstein was a dictator. He kicked the taillight the week prior, so he deserved to be gunned down like a dog. Speaker 1: Air that. Jeez, Producer Berg, chill. Speaker 0: Gosh, he's so Talmudic. Speaker 1: Right. Always victim. Speaker 0: Anyways, here's their emotional justification for cold blooded murder. Speaker 1: That was a pretty good leg kick. Speaker 0: Right? Let's get Shapiro Steen's take on this whole thing. Speaker 10: Just because we didn't arrest anyone for the Epstein files, genocide, or our poisonous mRNA doesn't mean we won't also get away with murdering Boyum. After all, he kicked a taillight. Speaker 0: Yeah. I guess you're right, Shapiro Steen. Israel is our greatest ally. Speaker 1: You're not getting a raise. Speaker 0: Discount on your only freaks? Speaker 1: Not a chance. 
Ching chong, take it away. Gosh, dude. You're such a weak little simp. She's a literal succubus. Speaker 0: Anyways, let's take a tour with the IDF, I mean ice. Whoops. What was your training like? We were supposed to be trained for this? Speaker 0: Yeah. We've got an antiseptic on the next block. Get ready to murder. Stop resisting. Did you see me shoot that senior citizen? Yeah. Definitely not an immigrant, he sure had it coming. Let's see what Diego's up to. Speaker 2: I will tell you this, brother. What? You know? I will tell you this. You raise your voice? I raise your voice. Speaker 1: Wow. Isn't that like against the law? Speaker 0: You'd think so but they'll end up getting paid administrative leave and mental health support. Speaker 1: Seriously? Speaker 0: Dead ass. If I Speaker 11: raise my voice, you'll erase Speaker 2: my Exactly. Yeah. Yeah. Speaker 11: Are you serious? You said, if I raise my voice, you'll erase my voice? Speaker 1: Yes. Mhmm. Mhmm. Ice. You guys are saving this country. Speaker 0: Didn't they kill that American woman last week? Renee Good or something? Speaker 1: That non chosen person? She was lesbian leftist Karen. Who cares? Speaker 0: Whatever you say, Daisy. No. Speaker 7: No. Shit. Shit. Oh my fucking god. What the fuck? What What the the fuck? Fuck? Speaker 0: You might be wondering, why Minneapolis? Tim Waltz ushered in a defund the police initiative, which created a perfect opportunity for Trump's team to bring about the first AI surveillance state. You know what they say, create the problem, usher in the solution. Tom, back to you. Exactly. Speaker 0: So Peter Thiel, a close advisor to J. D. Vance, founded Palantir, the company that built the AI surveillance system used to target sand people. That same technology was sold to ICE and rebranded as Immigration OS, creating a satanic surveillance network to monitor Americans. Speaker 9: Shut it down, Tom. That's not for the normies to understand. Keep it up and I'll turn you into a lampshade like I did with Jackie. Back to the Goyslop or you're canceled. Speaker 12: Goyslop Junior's Goyslop Filet is back, and it's got more seed oils than ever. Speaker 0: I hate myself. Goyslop Junior. Speaker 7: Go on. Speaker 6: Enjoy cancer. Speaker 1: Gosh, that looks good. Speaker 0: Producer Verk said if we stop talking about Palantir, Goyslap Junior will cater to the Super Bowl party. Speaker 1: Alright. Speaker 0: Zipped. Let's just have Eric Warsaw break it down for us. Speaker 12: Palantir. The same company that is run by the hardline Zionist Alex Karp who works closely with Israeli military, will now be in charge of America's civilian data collection. We built Foundry, which was just was used to distribute the COVID vaccine and saved millions of lives globally. Palantir is here to disrupt and make our the institutions we partner with the very best in the world, and when it's necessary to scare enemies and on occasion kill them. Speaker 12: And also, the target selections for the US military, police forces, and even target selections for ICE officers. Speaker 1: That's right, Eric. We're giving our data to the Israeli Jew whose AI targeted over fifty percent of the civilian deaths in Gaza. Here he is. Speaker 7: Your AI and your technology from Palestine to kill Palestinians. Speaker 13: Mostly terrorists. Speaker 1: And by terrorists, he means anyone who opposes their families being genocided, including women and children. This guy. Speaker 9: Shut it the heck down. 
Say goodbye to your Goyslav junior catering. Remember what happened to Charlie? You're next. Run the freaking commercials. Speaker 0: Want to express yourself? Well, now you can. I always wonder how dumb this going sometimes can be. Speaker 7: TikTok, Speaker 0: Now owned by the Jews at BlackRock. Speaker 7: We're watching that. Speaker 0: Wow. I thought China owning our data was bad. Now you can't even say Zionist without getting flagged. Speaker 1: Straight up. It's like, give it back to China at this point. Speaker 0: Anything's better than Jews at this point. Speaker 1: Right? It's like take a freaking joke, let alone facts. Speaker 0: That's based. We go to John for some breaking news. Thanks, guys. Couldn't have said it better. And this just in, we're taking over Greenland because it was promised to us by Lucifer himself. So take it away, Satan. Speaker 14: By the way, what are we doing with Greenland? We gotta do something with Greenland. Where's my advance team? Go to Greenland. They must have some satellite needs or something that we could do there. But we are coloring the world blue. Speaker 0: So satanic. Speaker 1: Right? Isn't Greenland the central hub for the undersea data cables connecting North America, Europe, and Asia? Speaker 0: Bingo. Speaker 0: Ching Chong joins us live from Greenland. Speaker 1: We're here in Greenland, and not only is it located on a gold mine of rare earth minerals, but its freezing temperatures are the perfect natural coolant for the AI supercomputers needed to power the new world order that will enslave humanity. Eric Morsaw, break it down for us. Speaker 12: If you thought George Orwell's 1984 was a bad surveillance state, wait until you see what Israel's Palantir can do with AI technology or America. It's gonna make the movie The Matrix look mild. Speaker 1: Thanks, Eric. But to truly understand the endgame, you need to understand their ultimate prize, Jerusalem's Golden Dome. The satanic cabal believes controlling this one holy site lets them hijack God's story for billions and install the Antichrist. Let's hear what Trump's theme has to say about it. Speaker 5: We will have all everything we want. We're getting everything we want at no cost. Speaker 10: So the so the Golden Dome will be on Greenland? Speaker 5: A piece of it, yes. And it's a very important part because it's everything comes over Greenland. If the bad guys start shooting, it comes over Greenland. Speaker 1: So what he means by that is the satanic cabal is taking a piece of God's throne and putting it on their AI brain in Greenland to legitimize the antichrist. Speaker 6: Is that some sort of question? Speaker 1: How does that make you feel? Speaker 6: Get the out of our country. Speaker 10: So what are we talking about? An acquisition of Greenland? Are you going to pay for it? Speaker 5: I mean We're talking about it's really being negotiated now, the details of it, but essentially it's total access. It's there's no end. Speaker 0: We're making Iran great again, Venezuela, and now Greenland. How exciting. Speaker 1: Why can't we just fix this country? Speaker 0: Because Israel is our greatest ally. Speaker 1: Right, Shapiro Steen? Speaker 0: Well. I'm so sick of pretending we're Israel first. Speaker 10: I heard that. Just because you stupid goyim think you can expose our satanic agenda doesn't mean you won't fall for our next tie up. Dennis, shut this episode down or you're all fired. Speaker 0: Thanks, Shapiro Steen. Suck on this. 
Anyways, if you're still not following Jake GTV, you're either brainwashed or legally retarded. Speaker 15: I think I figured out where our data's going. Just let me hack into Homeland Security real quick, and we're in. Speaker 0: And time to get rid of their lice For antiseptic purposes, of course. Did you hear we gave Jake GTV a strike on his YouTube? Speaker 9: Oh, someone's hacked into our system. Another pizza cost. Speaker 1: Look who it is, my base fucking noticer. If you wanna stop wondering what's going on and know, check out my new book on jakegtv.com. Otherwise, just hit the like, comment, and subscribe, and I'll see you on the next one. Speaker 9: Did you hit him with a YouTube strike? Speaker 0: Sir, we did, but he's not stopping. Speaker 9: Shadow ban his accounts. We must shut him down before the red Speaker 7: heifer Speaker 0: is sacrificed.

Video Saved From X

reSee.it Video Transcript AI Summary
Israel is using artificial intelligence (AI) to target and assassinate individuals in Gaza, even if it means killing Palestinian civilians. The Israeli military has a division called the targets division, which uses AI algorithms and automated software to accelerate target creation. The goal is to create a shock effect and continue the war, as previous operations have run out of targets. The AI tools have created 12,000 targets in this war alone, twice as many as in the entire 2014 war. The military has loosened restrictions on harming civilians, knowingly striking targets that may result in civilian casualties. This war policy, aided by AI, has led to civilian devastation and potential war crimes.

Video Saved From X

reSee.it Video Transcript AI Summary
- The speakers claim that American financial institutions and tech companies are deeply involved in the Gaza killings. They name banks, pension funds, Amazon, Google, and Microsoft as having provided services and access to Palestinian data that enabled Israel to set up systems to mass target and kill Palestinians.
- They describe an application called Where is Daddy, asserting it allows the army to randomly track people and reach them even when they are with their families, facilitating harm.
- The discussion characterizes Israel as possessing the most sophisticated military in the region, knowing precisely what it has been doing for two years, and notes that many Israeli soldiers are breaking down, with rising suicidality among young soldiers who have served.
- They argue that soldiers have been indoctrinated into becoming executioners of genocide, and that intervention is necessary to prevent further brutality.
- The speakers contend that much of this action is driven by people outside Israel who defend the regime, which they describe as having imposed a military dictatorship on Palestinians in the West Bank, Jerusalem, and Gaza (the latter until 2005), and as also affecting Israelis who are part of the system. They state that brutalizing others compromises one's own humanity.
- Speaker 0 presses for clarification about the existence of the Where is Daddy app, asking whether it was a dream or a claim already stated.
- Speaker 1 clarifies that Israel has developed an automated system to determine targets through computing, with data supplied by tech companies. He mentions Palantir as one company that publicly supports Israel. He references a public debate in which a Polish person protests that he is killing families—"you are killing civilians in Gaza"—to which the other person replies that the targets are "most probably terrorists."

Video Saved From X

reSee.it Video Transcript AI Summary
The discussion centers on Palantir Technologies and a proposed March 2025 executive order that would require federal agencies to share and centralize data, aiming to consolidate government data using Palantir's Foundry platform. It is claimed that Palantir has already deployed Foundry in at least four agencies, including the Department of Homeland Security and Health and Human Services, and that the company has received over $113 million in federal contracts since Trump took office, along with a recent $795 million Department of Defense contract. The speakers allege that the initiative could enable a comprehensive database on all Americans—"light years beyond Real ID, the Patriot Act, and PRISM"—and that those who control it seek "complete power over you and everyone else." They warn of mass surveillance and privacy violations, lack of oversight, and potential political abuse. Key concerns include the breadth of data that Palantir's system could merge, such as bank accounts, medical records, driving records, student debt, disability status, political affiliation, credit card expenditures, online purchases, tax filings, and travel and phone records, creating "detailed profiles on every single American." The speakers argue this centralization would enable unchecked monitoring with "zero oversight," increasing data security risks and the potential for breaches, leaks, or mismanagement. They emphasize a history of opaqueness in Palantir's operations and tie the company's AI tools to predictive policing and military applications lacking public accountability. They cite Palantir's CEO Alex Karp as having controversial views and describe the firm as aligned with a profit-driven push for technomilitarism. The talk links Palantir to broader power dynamics, including ties to Elon Musk's and Peter Thiel's spheres, and suggests a technocratic oligarchy could emerge that prioritizes corporate and political agendas over the public interest. While acknowledging stated goals like fraud detection and national security, the speakers assert a lack of checks and balances and fear that the surveillance infrastructure would be embedded and expanded by future governments. The "kill chain" terminology is discussed in both military and cyber contexts, with Palantir's Gotham platform described as designed to shorten the kill chain by fusing large datasets into actionable intelligence, enabling faster targeting decisions. They provide examples such as the use of Palantir to improve the accuracy and speed of Ukraine's artillery strikes and the Israel Defense Forces' publicly acknowledged use of Palantir for striking targets in Gaza. The segment also mentions Palantir's use in predictive policing, including tools used by the Los Angeles Police Department, and argues that Palantir aims to track "everybody, not just immigrants." The speakers conclude that this centralized system is "light years beyond Real ID, the Patriot Act, or PRISM" and advocate resisting it and "thinking of ways we can break the links in the kill chain."

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the potential dangers of phone surveillance and the Pegasus software. They mention that the phone could be a portal to the CIA and criticize the lack of oversight and safeguards imposed by Congress. The speaker also highlights Israel's role in developing surveillance and AI technology. They mention instances where the Pegasus software has been used to target human rights activists and journalists. The speaker expresses concern about the tracking of digital information by foreign governments and emphasizes that the US government is equally sinister in tracking digital footprints without oversight. They caution listeners to be mindful of their online activities.

ColdFusion

AI is Now Being Used in War
reSee.it Podcast Summary
The episode surveys the deployment of AI in military operations, focusing on reports that the Pentagon used Anthropic's Claude in targeting and a real-time system that helped prioritize and execute strikes across multiple theaters. It explains how the military uses customized AI models on dedicated hardware, contrasting this with consumer AI and highlighting concerns about reliability and human oversight in high-stakes decisions. The host traces the fallout between Anthropic and the U.S. government, including contractual demands for mass surveillance and autonomous weapons, and the consequent shift in the government's relationship with OpenAI as the private sector pivots toward national-security deals. It also recounts public reactions, such as boycotts of ChatGPT and debates over safeguards, while noting that military-integrated AI can accelerate planning and execution beyond civilian capabilities. The discussion broadens to surveillance risks, the legal ambiguities around data, and potential policy responses aimed at limiting or reshaping state use of AI for war and mass monitoring.