TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Israel uses a system called Lavender to decide who to kill, assigning scores to Palestinians and drone-striking those above a certain threshold. Palantir creates these "murder lists" by scraping data from Facebook, satellite imagery, and other surveillance sources, compiling personal information to assign weighted scores and identify targets. Palantir, founded by individuals with ties to the Israeli government and the CIA, built this surveillance platform in Israel to target Palestinians. Palantir also maintains an "enemies list" of 1 to 2 million US citizens for the CIA and federal law enforcement, classifying them as potential political dissidents. This database uses surveillance and AI to identify Americans deemed threats to the government, including those with anti-government views or potential involvement in domestic extremism.

Video Saved From X

reSee.it Video Transcript AI Summary
In 2021, Israeli intelligence developed an AI program called Lavender to target individuals in war. The system designated 37,000 Palestinians as targets, resulting in civilian casualties. The IDF used mass surveillance to assess the likelihood of each person being a militant and targeted them accordingly. The Lavender system tracked individuals with patterns similar to Hamas, leading to the deaths of many innocent civilians. This AI targeting system has similarities to the US surveillance system.

Video Saved From X

reSee.it Video Transcript AI Summary
First Speaker argues that Microsoft provided services and access to data, including Palestinian data, which allowed Israel to set up systems to mass target and mass kill Palestinians. They mention an application called "Where is Daddy?" that allows the army to track people at random and reach them when they are with their families in order to inflict the most harm, describing it as brutal. They agree with this view and emphasize the importance of understanding that this represents the end of humanity and of the civilization people have pretended to belong to. They claim Israel has the most sophisticated military in the region and has known exactly what it is doing for two years. They assert that many soldiers are breaking down and suicide rates are increasing among young Israelis who have served in the army, noting they are older than teenagers and have been turned by indoctrination into willing executioners of a genocide. They call for intervention by people who love Israel to save what remains of it. First Speaker contends that the biggest harm is being done by those outside of Israel who defend the regime. They describe the regime as having imposed a military dictatorship on Palestinians in the West Bank and Jerusalem for decades, and in Gaza until 2005, and claim this regime also extends to some Israelis who are part of the system. They argue that brutality toward others undermines one's own humanity. Second Speaker agrees and seeks clarification, asking whether there is an app, possibly by an American company, called "Where's Daddy" that allows the Israeli government to murder men in front of their children. They reference the prior statements and want confirmation of that claim. First Speaker responds that Israel has developed not just a system but an automated computing system to decide targets, and that the data has been provided by technology. They reiterate that this is part of a broader system of targeting.

Video Saved From X

reSee.it Video Transcript AI Summary
Hello, everyone. We're discussing fusion centers, which compile extensive data on individuals in America, similar to a comprehensive dossier. The integration of AI amplifies this issue by incorporating public records, surveillance data, and other sources, creating a scenario reminiscent of "Minority Report." This technology can be misused to target individuals labeled as "deplorables," as suggested by figures like Harari. Elon Musk aims to develop an AI that seeks truth rather than perpetuating biases against certain groups. My background in high-tech reveals how this technology has been exploited in cases like the Portland Christmas Tree bomber. Raising awareness about these issues is crucial, especially as we seek reforms to ensure that government technology serves the citizens rather than opposes them.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker A: The moral concern is that if you can remove the human element, you can use AI or autonomous targeting on individuals, and that could absolve us of the moral conundrum by making it seem like a mistake, or that humans weren't involved, because it was AI or a company like Palantir. This worry is top of mind after the Min Minab girls' school strike, and the question of whether AI machine-assisted targeting played any role. Speaker B: In some ongoing wars, targeting decisions have been made by machines with no human sign-off. There are examples where the end-stage decision is simply identify and kill, with input data fed in but no human vetting at the final moment. This is a profound change and highly distressing. The analogy offered is the pager attacks, where bombs were triggered with little certainty about who would be affected, which many would label an act of terror. Both the use of autonomous weapons and mass surveillance are known to be problematic points that have affected contracting and debates with a major AI company and the administration. Speaker A: In the specific case of the bombing of the girls' school attached to the Iranian military base, today's inquiries suggested that AI was involved, but that a human pressed play in this particular instance. The key question becomes where the targeting coordinates came from and who supplied them to the United States military. Signals intelligence from Iran is often translated by Israel, a partner in this venture, and there are competing aims: Israel seeks the total destruction of Iran, while the United States appears to want to disengage. There is speculation, not confirmation, about attempts to target Iran's leaders or their officers' families, which would have far-reaching consequences. The possibility of actions that cross a diplomatic line is a concern, especially given the different endgames between the partners.
Speaker C: If Israel is trying to push the United States to withdraw from the region, then the technology born and used in Israel—Palantir's Maven software linked to Dataminr for tracking and social-media cross-checking—could lead to targeting in the U.S. itself. The greatest fear is that social media data could be used to identify who to track or target, raising the question of the next worst-case scenario in a context where war accelerates social change and can harden attitudes toward brutality and the silencing of dissent. War tends to make populations more tolerant of atrocities and less tolerant of opposing views, and the endgame could include governance by technology to suppress opposition rather than improve citizens' lives. Speaker B: War changes societies faster than anything else, and it can produce a range of effects, from shifts in national attitudes to the justification of harsh measures during conflict. The discussion notes the risk of rule by technology and the possibility that the public could become disillusioned or undermined if their political system fails to address their concerns. The conversation also touched on the broader implications for democratic norms and the potential for technology-driven control. (Note: The transcript contains an advertising segment about a probiotic product, which has been omitted from this summary as promotional content.)

Video Saved From X

reSee.it Video Transcript AI Summary
I emerged from prison to find that artificial intelligence is now used for mass assassinations, blurring the lines between assassination and warfare. Many targets in Gaza are bombed due to AI targeting. The link between artificial intelligence and surveillance is crucial, as AI relies on data from phones and the internet to identify targets and generate propaganda. Surveillance data is essential for training these algorithms to carry out such operations.

Video Saved From X

reSee.it Video Transcript AI Summary
Natalie asks about the AI piece, expressing cynicism that there may be a push for a "war bot" to circumvent consumer AI limits that block starting wars with WMDs, and wonders if there is a benevolent reason. Matthew responds that it's worse than that: Hegseth described a platform to run on military desktops worldwide—secure, like ChatGPT or Claude but for the Pentagon and military services—that "doesn't allow information to get out." The core issue, he says, is who controls the AI, and he poses two key questions about the future of war with AI: who ultimately owns these AI platforms, and who informs them—who gives them the algorithm and programming and, essentially, orders on how to answer questions. He notes increasing concerns about the reliability of information, including how ChatGPT handles questions about trustworthy news sources, and mentions that ChatGPT defers to institutional structures rather than historical accuracy. The risk, he says, is that military AI programs may not provide honest, candid, objective information to military personnel, but rather information based on narratives the Pentagon or manufacturers want. A common belief is that technology makes war more precise and reduces civilian harm, but Matthew contends this is a myth. He explains that precision-guided munitions were not about preventing civilian casualties but about increasing efficiency—"the purpose was to make the weapons more efficient, so we had to drop less bombs to, say, blow up a bridge." He cites the small diameter bomb as evidence that the aim is not to limit civilian casualties but to allow more bombs to be delivered from aircraft. He highlights real-world examples of AI in warfare, referencing Israeli systems in Gaza, and explains that three AI programs—Lavender, Gospel, and Where's Daddy?—play roles in targeting and timing strikes. Lavender scans the Internet and databases to identify targets (e.g., labeling someone as a Hamas supporter based on past online activity), and Where's Daddy? coordinates that information to ensure bombs hit resistance fighters "when they are with their families," not away from them. He notes reporting from Israeli media and +972 Magazine about these programs and urges viewers to examine that reporting; Tucker Carlson's coverage is mentioned as an example. Matthew argues this demonstrates the dystopian potential of AI in war and cautions against assuming American AI would be more benevolent. He mentions commentators' attempts to justify or excuse these actions, including a remark attributed to Mike Huckabee that "Israel did not attack Qatar. They just sent a missile into their country aimed at one person," noting that people nearby were injured or killed. He ends with a reminder of Orwell's reflections on war and the idea that those who cheer for war may be less enthusiastic if they experienced its costs, suggesting a broader aim to make the costs of war felt among the ruling elites who benefit from it.

Video Saved From X

reSee.it Video Transcript AI Summary
AI can be used to oppress people, as highlighted in an exposé by +972 Magazine. The article discusses how Israel employed AI to identify suspects, but this technology resulted in the deaths of many civilians who were not the intended targets.

Video Saved From X

reSee.it Video Transcript AI Summary
A speaker claimed few people get wealthy, and another speaker alleged Al Qaeda killed their family in Palestine using AI and technology. The first speaker stated the primary source of death in Palestine is that Hamas has realized there are millions of useful idiots. Another speaker accused them of using AI and technology to kill Palestinians, not just terrorists. The first speaker responded that if the speaker's argument was strong, they would allow them to talk. The second speaker thanked anyone else who supports using technology and AI to kill Palestinians.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker explains that hacking millions of people only requires access to their data, allowing others to know individuals better than they know themselves. This poses a threat to democracy and free markets, as it enables the manipulation and prediction of people's actions. Total surveillance regimes, like those seen in Xinjiang and the territories occupied by Israel, are emerging, in which a small number of soldiers can control millions of people with the help of data.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the significance of the situation in Gaza, highlighting how it reflects humanity's choice between a dark future of lies and violence or a path towards truth and change. They emphasize the use of AI in targeting civilians and urge for a shift away from supporting atrocities. The speaker challenges society to confront the corruption within governments, media, and culture, and to consider a new perspective to avoid further descent into darkness. Gaza serves as a pivotal moment for humanity to choose between perpetuating destructive patterns or embracing revolutionary change.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 1 and Speaker 0 discuss the implications of AI in military use. They consider whether consumer AI is being bypassed with a secure, military-specific platform that would be sealed—essentially one-way in and no information out—for the Pentagon and military services. The key questions raised are: who controls the AI, who informs its algorithms, and who gives it its orders on how to answer questions, highlighting concerns about privatization and outsourcing of war. Speaker 1 argues that the future of war with AI hinges on two issues: ownership of AI platforms and the sources of their programming. They note that AI can deflect or defer to institutional structures rather than empirical accuracy, raising concerns about the reliability of information provided to military personnel. They also reference the myth that advancing technology automatically reduces civilian harm, citing that precision-guided munitions were designed for efficiency, not necessarily to prevent civilian casualties, noting that the intent was to reduce the number of bombs needed to achieve targets. The conversation shifts to the concept of precision in weapons. Speaker 1 points out that laser- and GPS-guided bombs were not primarily invented to minimize civilian casualties but to increase efficiency. They mention the small diameter bomb as an example, explaining that its use increases the number of bombs that can be deployed rather than primarily limiting collateral damage. The discussion then moves to real-world AI systems used in conflict zones. Speaker 1 cites Israeli programs—Lavender, Gospel, and Where’s Daddy?—as examples of nefarious and insidious AI in war. Lavender supposedly scans the Internet and other databases to identify targets, for example flagging someone as a Hamas supporter based on years of activity. Where’s Daddy? allegedly guides Israeli drones to strike fighters when they are with their families, not away from them. 
This reporting is linked to coverage from Israeli media and +972 Magazine, and Speaker 2 references Tucker Carlson's coverage of these issues. Speaker 2 amplifies the point by noting the emotional impact of such capabilities, arguing that targeting men when they are with their children is particularly disturbing. They also discuss broader political reactions, including a remark attributed to Ambassador Huckabee about Israel not attacking Qatar but "sending a missile there" that injured nearby people. Speaker 1 concludes by invoking Orwell's reflection on the Spanish Civil War, suggesting that those who cheer for war may be confronted by the consequences when modern aircraft enable distant bombing. They emphasize the need to make the costs of war felt by the ruling classes who benefit from it, not just the people on the ground.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 describes a 2021 proposal by the commander of Israeli intelligence to design a machine that would resolve the human bottleneck in locating and approving targets in war. A recent investigation by +972 Magazine and Local Call reveals that the Israeli army developed an AI-based system called Lavender to designate targets and direct airstrikes. During the initial weeks of the Lavender operation, the system designated about 37,000 Palestinians as targets and directed airstrikes on their homes. The system reportedly had an error rate of about 10%, and there was no requirement to verify the machine's data. The Israeli army systematically attacked targeted individuals at night in their homes while their whole family was present. An automated component, known as "Where's Daddy?", tracked targeted individuals and carried out bombings when they entered their family residences. The result, according to the report, was that thousands of women and children were killed by Israeli airstrikes. Israeli intelligence officers allegedly stated that the IDF bombed homes as a first option, and in several cases entire families were murdered when the actual target was not inside. In one instance, four buildings were destroyed along with everyone inside because a single target was in one of them. For targets marked as low level by Lavender, cheaper bombs were used, destroying entire buildings and killing mostly civilians and entire families. It was alleged that the IDF did not want to waste expensive bombs on "unimportant people," and it was decided that for every low-level Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians; for a senior Hamas official, more than 100 civilians could be killed. Most AI targets were never tracked before the war. Lavender analyzed information collected on the 2.3 million residents of the Gaza Strip through mass surveillance, assessing the likelihood of each person being a militant and giving a rating from 1 to 100.
If the rating was high enough, the person and their entire family were killed. Lavender flagged individuals with patterns similar to Hamas members, including police, civil defense workers, relatives, and residents with similar names or nicknames. The report notes that this kind of tracking system has existed in the US for years. Speaker 1 presents a counterpoint: a "fine gentleman of the secret service" claims to provide a list of every threat made about the president since February 3 and profiles of every threat maker, implying that targets could be identified through broad data collection, including emails, chats, and SMS messages. The passage suggests a tool akin to a Google search but one that includes private communications. Speaker 0 adds that although some claim Israel controls the US, Joe Biden says Israel serves US interests. Speaker 2: A speaker asserts, "There's no apology to be made. None. It is the best $3,000,000,000 investment we make," and claims that without Israel the United States would have to invent an Israel to protect its regional interests. Speaker 0 closes the report for Infowars, credited to Greg Reese.

Video Saved From X

reSee.it Video Transcript AI Summary
The conversation centers on fears of evolving toward a biometric surveillance state driven by predictive algorithms. Speaker 0 argues that the plan resembles a transition to mass surveillance of everybody, drawing on observations from a recent trip to China where some aspects were acceptable but others were not, and contrasts that with the potential consequences in the speakers' own country—specifically, "without the nice trains and without the free healthcare." The core concern is the creation of a biometric surveillance framework that uses predictive analytics to monitor and control people. A key point raised is a new report highlighting contracts with Palantir, the data analytics company, which would "create data profiles of Americans to surveil and harass them." This claim emphasizes the potential domestic use of technologies and methodologies that have been associated with counterterrorism efforts abroad, and the discussion frames it as evidence that the United States could be adopting similar surveillance capabilities at home. Speaker 1 responds with a blend of agreement and critical tone, underscoring the perceived inevitability of this trajectory and hinting at the burdens of being right about such developments, including the intellectual burden of grappling with the math and ontology behind these systems. The exchange quotes Palantir's stated mission as being to "disrupt and make the institutions we partner with the very best in the world" and to be prepared to "scare enemies and on occasion kill them," with Speaker 1 affirming a sense of inevitability about the path forward.
Speaker 0 further reframes the issue by stating that "the enemy is literally the American people," expressing alarm at the idea that the same company tracking terrorists abroad would "now be tracking us at home." They note having posted on social media that this development should be very alarming, highlighting the notion that the entity responsible for foreign surveillance might be extending its reach domestically. Overall, the dialogue juxtaposes concerns about a domestic biometric surveillance state—enabled by predictive algorithms and proprietary data profiling by Palantir—with ethical and political anxieties about the implications for civil liberties, accountability, and the potential normalization of surveillance within the United States. The conversation does not dismiss any specific claims; rather, it emphasizes the perceived transformation of surveillance capabilities from foreign counterterrorism into internal population monitoring.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker addresses the use of artificial intelligence or Lavender by the IDF in identifying Hamas targets. They state they are not on top of all the details of what’s happening in Israel and that their bias is to defer to Israel. They say it’s not for us to second guess everything. They conclude that broadly the IDF gets to decide what it wants to do and that they’re broadly in the right.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Welcome back to Jake GTV news. Did you see ICE shooting American citizens? Speaker 1: I thought they were supposed to get rid of the illegals, though. Speaker 0: Me too. Let's go to Ching Chong on the murder scene. Speaker 1: Chloe and Michael, good morning. We're here in Minneapolis where ICE agents trained by Israel are causing chaos. We go to John for more. Speaker 0: Thanks, Ching Chong. Thought it was only Libtards who opposed this, but they are literally murdering Americans. Back to you in the studio. Speaker 2: Stand back. Speaker 1: Please don't hurt me, sir Ed. I'm here to get rid of the illegals, grandma. Speaker 0: Wow. Thanks, John. Check this out here. It's from the protest. Here we see an agent assault a woman for simply being at the protest. Speaker 3: Then Alex steps in to help her Speaker 0: get back on her feet, and Speaker 4: the agents pepper spray him and proceed to assault him. Speaker 0: They then proceed to remove his legally owned firearm and shoot him in the back roughly 10 times, not even kidding. Holy shit. Speaker 1: Please tell me they're gonna jail. Speaker 0: Nope. They're on administrative leave while the FBI pretends to care. Dude, what? Let's see what Trump's team has to say. Speaker 5: Very, very unfortunate incident. I don't like that he had a gun. I don't like the fact that he was carrying a gun. Speaker 6: You know, you can't have guns. You can't walk in with guns. You just can't. And you can't listen. You can't walk in with guns. You can't do that, but it's it's a very unfortunate incident. Speaker 7: Do you Speaker 1: agree with Trump, Steen? Speaker 6: Oh, hell yeah. Guns are bad now. Didn't you get the memo? Speaker 1: What about the second amendment? Speaker 6: It's all four d chess, honey. Trust the plan. Speaker 1: Sup, bro? How do you feel about ICE? Speaker 0: This country needs more Indians than blacks. Check your privilege. Speaker 1: Dude, when did everybody get so retarded? 
Was it the vaccines or something? We go to the investigation team to learn more. Speaker 8: Thanks, Ching Chung. So basically, we uncovered that not only is ICE Embassy located in Tel Aviv, but they're using the same technology they used to genocide the Palestinians. Speaker 0: It's a freaking Jewish spyware by Paragon Solutions called Graphite, and check this out. Tell me why Alex Pretty was googled a month prior to the shooting and, again, five minutes before his death. Make of that what you will. Back to you guys. Wow. Wasn't the Homeland Security's own Twitter page being run from Israel? Speaker 1: Yeah. Same with ICE's embassy, Tel Aviv to be exact. Speaker 0: Freaking Jews, man. Speaker 9: Shut it down. He was an unhinged lefty who thought our Chobus Goy Trumpstein was a dictator. He kicked the taillight the week prior, so he deserved to be gunned down like a dog. Speaker 1: Air that. Jeez, Producer Berg, chill. Speaker 0: Gosh, he's so Talmudic. Speaker 1: Right. Always victim. Speaker 0: Anyways, here's their emotional justification for cold blooded murder. Speaker 1: That was a pretty good leg kick. Speaker 0: Right? Let's get Shapiro Steen's take on this whole thing. Speaker 10: Just because we didn't arrest anyone for the Epstein files, genocide, or our poisonous mRNA doesn't mean we won't also get away with murdering Boyum. After all, he kicked a taillight. Speaker 0: Yeah. I guess you're right, Shapiro Steen. Israel is our greatest ally. Speaker 1: You're not getting a raise. Speaker 0: Discount on your only freaks? Speaker 1: Not a chance. Ching chong, take it away. Gosh, dude. You're such a weak little simp. She's a literal succubus. Speaker 0: Anyways, let's take a tour with the IDF, I mean ice. Whoops. What was your training like? We were supposed to be trained for this? Speaker 0: Yeah. We've got an antiseptic on the next block. Get ready to murder. Stop resisting. Did you see me shoot that senior citizen? Yeah. 
Definitely not an immigrant, he sure had it coming. Let's see what Diego's up to. Speaker 2: I will tell you this, brother. What? You know? I will tell you this. You raise your voice? I raise your voice. Speaker 1: Wow. Isn't that like against the law? Speaker 0: You'd think so but they'll end up getting paid administrative leave and mental health support. Speaker 1: Seriously? Speaker 0: Dead ass. If I Speaker 11: raise my voice, you'll erase Speaker 2: my Exactly. Yeah. Yeah. Speaker 11: Are you serious? You said, if I raise my voice, you'll erase my voice? Speaker 1: Yes. Mhmm. Mhmm. Ice. You guys are saving this country. Speaker 0: Didn't they kill that American woman last week? Renee Good or something? Speaker 1: That non chosen person? She was lesbian leftist Karen. Who cares? Speaker 0: Whatever you say, Daisy. No. Speaker 7: No. Shit. Shit. Oh my fucking god. What the fuck? What What the the fuck? Fuck? Speaker 0: You might be wondering, why Minneapolis? Tim Waltz ushered in a defund the police initiative, which created a perfect opportunity for Trump's team to bring about the first AI surveillance state. You know what they say, create the problem, usher in the solution. Tom, back to you. Exactly. Speaker 0: So Peter Thiel, a close advisor to J. D. Vance, founded Palantir, the company that built the AI surveillance system used to target sand people. That same technology was sold to ICE and rebranded as Immigration OS, creating a satanic surveillance network to monitor Americans. Speaker 9: Shut it down, Tom. That's not for the normies to understand. Keep it up and I'll turn you into a lampshade like I did with Jackie. Back to the Goyslop or you're canceled. Speaker 12: Goyslop Junior's Goyslop Filet is back, and it's got more seed oils than ever. Speaker 0: I hate myself. Goyslop Junior. Speaker 7: Go on. Speaker 6: Enjoy cancer. Speaker 1: Gosh, that looks good. 
Speaker 0: Producer Verk said if we stop talking about Palantir, Goyslap Junior will cater to the Super Bowl party. Speaker 1: Alright. Speaker 0: Zipped. Let's just have Eric Warsaw break it down for us. Speaker 12: Palantir. The same company that is run by the hardline Zionist Alex Karp who works closely with Israeli military, will now be in charge of America's civilian data collection. We built Foundry, which was just was used to distribute the COVID vaccine and saved millions of lives globally. Palantir is here to disrupt and make our the institutions we partner with the very best in the world, and when it's necessary to scare enemies and on occasion kill them. Speaker 12: And also, the target selections for the US military, police forces, and even target selections for ICE officers. Speaker 1: That's right, Eric. We're giving our data to the Israeli Jew whose AI targeted over fifty percent of the civilian deaths in Gaza. Here he is. Speaker 7: Your AI and your technology from Palestine to kill Palestinians. Speaker 13: Mostly terrorists. Speaker 1: And by terrorists, he means anyone who opposes their families being genocided, including women and children. This guy. Speaker 9: Shut it the heck down. Say goodbye to your Goyslav junior catering. Remember what happened to Charlie? You're next. Run the freaking commercials. Speaker 0: Want to express yourself? Well, now you can. I always wonder how dumb this going sometimes can be. Speaker 7: TikTok, Speaker 0: Now owned by the Jews at BlackRock. Speaker 7: We're watching that. Speaker 0: Wow. I thought China owning our data was bad. Now you can't even say Zionist without getting flagged. Speaker 1: Straight up. It's like, give it back to China at this point. Speaker 0: Anything's better than Jews at this point. Speaker 1: Right? It's like take a freaking joke, let alone facts. Speaker 0: That's based. We go to John for some breaking news. Thanks, guys. Couldn't have said it better. 
And this just in, we're taking over Greenland because it was promised to us by Lucifer himself. So take it away, Satan. Speaker 14: By the way, what are we doing with Greenland? We gotta do something with Greenland. Where's my advance team? Go to Greenland. They must have some satellite needs or something that we could do there. But we are coloring the world blue. Speaker 0: So satanic. Speaker 1: Right? Isn't Greenland the central hub for the undersea data cables connecting North America, Europe, and Asia? Speaker 0: Bingo. Speaker 0: Ching Chong joins us live from Greenland. Speaker 1: We're here in Greenland, and not only is it located on a gold mine of rare earth minerals, but its freezing temperatures are the perfect natural coolant for the AI supercomputers needed to power the new world order that will enslave humanity. Eric Morsaw, break it down for us. Speaker 12: If you thought George Orwell's 1984 was a bad surveillance state, wait until you see what Israel's Palantir can do with AI technology or America. It's gonna make the movie The Matrix look mild. Speaker 1: Thanks, Eric. But to truly understand the endgame, you need to understand their ultimate prize, Jerusalem's Golden Dome. The satanic cabal believes controlling this one holy site lets them hijack God's story for billions and install the Antichrist. Let's hear what Trump's theme has to say about it. Speaker 5: We will have all everything we want. We're getting everything we want at no cost. Speaker 10: So the so the Golden Dome will be on Greenland? Speaker 5: A piece of it, yes. And it's a very important part because it's everything comes over Greenland. If the bad guys start shooting, it comes over Greenland. Speaker 1: So what he means by that is the satanic cabal is taking a piece of God's throne and putting it on their AI brain in Greenland to legitimize the antichrist. Speaker 6: Is that some sort of question? Speaker 1: How does that make you feel? Speaker 6: Get the out of our country. 
Speaker 10: So what are we talking about? An acquisition of Greenland? Are you going to pay for it? Speaker 5: I mean We're talking about it's really being negotiated now, the details of it, but essentially it's total access. It's there's no end. Speaker 0: We're making Iran great again, Venezuela, and now Greenland. How exciting. Speaker 1: Why can't we just fix this country? Speaker 0: Because Israel is our greatest ally. Speaker 1: Right, Shapiro Steen? Speaker 0: Well. I'm so sick of pretending we're Israel first. Speaker 10: I heard that. Just because you stupid goyim think you can expose our satanic agenda doesn't mean you won't fall for our next tie up. Dennis, shut this episode down or you're all fired. Speaker 0: Thanks, Shapiro Steen. Suck on this. Anyways, if you're still not following Jake GTV, you're either brainwashed or legally retarded. Speaker 15: I think I figured out where our data's going. Just let me hack into Homeland Security real quick, and we're in. Speaker 0: And time to get rid of their lice For antiseptic purposes, of course. Did you hear we gave Jake GTV a strike on his YouTube? Speaker 9: Oh, someone's hacked into our system. Another pizza cost. Speaker 1: Look who it is, my base fucking noticer. If you wanna stop wondering what's going on and know, check out my new book on jakegtv.com. Otherwise, just hit the like, comment, and subscribe, and I'll see you on the next one. Speaker 9: Did you hit him with a YouTube strike? Speaker 0: Sir, we did, but he's not stopping. Speaker 9: Shadow ban his accounts. We must shut him down before the red Speaker 7: heifer Speaker 0: is sacrificed.

Video Saved From X

reSee.it Video Transcript AI Summary
Israel is using artificial intelligence (AI) to target and assassinate individuals in Gaza, even if it means killing Palestinian civilians. The Israeli military has a division called the targets division, which uses AI algorithms and automated software to accelerate target creation. The goal is to create a shock effect and continue the war, as previous operations have run out of targets. The AI tools have created 12,000 targets in this war alone, twice as many as in the entire 2014 war. The military has loosened restrictions on harming civilians, knowingly striking targets that may result in civilian casualties. This war policy, aided by AI, has led to civilian devastation and potential war crimes.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the difference between targeting Hamas and intentionally harming civilians. They claim that the Israeli actions are not solely focused on Hamas, but rather involve purposely killing a large number of civilians. They argue that evidence from Israeli leaders and assessments supports the idea that this is a campaign to punish and ethnically cleanse Gaza and the West Bank by getting rid of Palestinians.

Video Saved From X

reSee.it Video Transcript AI Summary
- The speakers claim that American financial institutions and tech companies are deeply involved in the Gaza killings. They name banks, pension funds, Amazon, Google, and Microsoft as having provided services and access to Palestinian data that enabled Israel to set up systems to mass target and kill Palestinians.
- They describe an application called Where is Daddy, asserting it allows the army to randomly track people and reach them even when they are with their families, facilitating harm.
- The discussion characterizes Israel as possessing the most sophisticated military in the region, knowing precisely what it is doing for two years, and notes that many Israeli soldiers are breaking down, with rising suicidality among young soldiers who have served.
- They argue that soldiers have been indoctrinated into becoming executioners of genocide, and that intervention is necessary to prevent further brutality.
- The speakers contend that much of this action is driven by people outside Israel who defend the regime, which they describe as having imposed a military dictatorship on Palestinians in the West Bank, Jerusalem, and Gaza (the latter until 2005), and also affecting Israelis who are part of the system. They state that brutalizing others compromises humanity.
- Speaker 0 presses for clarification about the existence of the Where is Daddy app, asking if it was a dream or a claim already stated.
- Speaker 1 clarifies that Israel has developed an automated system to determine targets through computing, with data supplied by tech companies. He mentions Palantir as one company that publicly supports Israel. He references a public debate in which a Polish person protests that he is killing families, and the response is "you are killing civilians in Gaza," to which the other person replies that the targets are "most probably terrorists."

Video Saved From X

reSee.it Video Transcript AI Summary
- The discussion opens with claims that President Trump says "we've won the war against Iran," but Israel allegedly wants the war to destroy Iran's entire government structure, requiring boots on the ground for regime change. It's argued that air strikes alone cannot achieve regime change and that Israel's relatively small army would need U.S. ground forces, given Iran's larger conventional force, to accomplish its objectives.
- Senator Richard Blumenthal is cited as warning, following a private White House briefing, that American lives could be at risk if ground troops were deployed in Iran.
- The new National Defense Authorization Act is described as renewing the involuntary draft; by year's end, an involuntary draft could take place in the United States, pending full congressional approval. Dan McAdams of the Ron Paul Institute is described as strongly objecting, arguing that the draft rests on the view that the "presumption is that the government owns you."
- The conversation contrasts Trump's public desire to end the war quickly with Netanyahu's government, which reportedly envisions a much larger military objective in the region, including a demilitarized zone in southern Lebanon akin to Gaza, and a broader aim to remove Hezbollah. The implication is that the United States and Israel may not share the same endgame.
- Tucker Carlson is introduced as a guest to discuss these issues and offer predictions about consequences for the American people, including energy disruption, economic impacts, and shifts in U.S. influence in the Persian Gulf.
- Carlson responds that he would not credit himself with prescience, but notes predictable consequences: disruption to global energy supplies, effects on the U.S. economy, potential loss of U.S. bases in the Gulf, and a shrinking American empire. He suggests that the war's true goal may be to weaken the United States and drive its withdrawal from the Middle East, and he questions whether diplomacy remains viable given the current trajectory.
- Carlson discusses Iran's new supreme leader Khomeini's communique, highlighting threats to shut Hormuz "forever," vows to avenge martyrs, and calls for all U.S. bases in the region to be closed. He notes that Tehran asserts it will target American bases while claiming it is not an enemy of surrounding countries, though bombs affect neighbors as well.
- The exchange notes Trump's remarks about possibly using nuclear weapons, and Carlson explains Iran's internal factions, suggesting some seek negotiated settlements while others push for sustained conflict. Carlson emphasizes that Israel's leadership may be pushing escalation in ways that diverge from U.S. interests and warns about the dangers of a joint operation with Israel, which would blur U.S. sovereignty in war decisions.
- The use of the term Amalek is explored: Carlson's guest explains that Amalek in the Old Testament refers to enemies of the Jewish people, subject to a biblical command to annihilate them, including women and children, a command the guest notes Christianity rejects. Netanyahu has used the term repeatedly in the context of the conflict, which Carlson characterizes as alarming and barbaric.
- The guests debate how much influence is exerted in the White House, with Carlson noting limited direct advocacy for war among principal policymakers and attributing decisive pressure largely to Netanyahu's threats. They question why Israel, a client state of the U.S., is allowed to dictate war steps, especially given the strategic importance of Hormuz and American assets in the region.
- They discuss the ethical drift in U.S. policy, likening it to adopting the ethics of the Israeli government, and criticize the idea of targeting family members or civilians as a military strategy. They contrast Western civilization's emphasis on individual moral responsibility with perceived tribal rationales.
- The conversation touches on the rise of AI-assisted targeting and autonomous weapons: Carlson's guest confirms that in some conflicts, targeting decisions have been made by machines with no human sign-off, though in the discussed case a human did press play on the attack. The coordinates and data sources for strikes are scrutinized, with suspicion cast on whether Israel supplied SIGINT or coordinates.
- The guests warn about the broader societal impact of war on civil liberties, mentioning increasing surveillance and the risk that technology could be used to suppress dissent or control the population. They discuss how war accelerates social change and potentially normalizes drastic actions or internal coercion.
- The media's role in selling the war is criticized as "propaganda," with examples of government messaging and pop culture campaigns (including a White House-supported video-game-like portrayal of U.S. military power). They debate whether propaganda can be effective without a clear, articulated rationale for war and without public buy-in.
- They question the behavior of mainstream outlets and "access journalism," arguing that reporters often avoid tough questions about how the war ends, the timetable, and the off-ramps, instead reinforcing government narratives.
- In closing, Carlson and his co-hosts reflect on the political division surrounding the war, the erosion of trust in media, and the possibility of rebuilding a coalition of ordinary Americans who want effective governance without perpetual conflict or degradation of civil liberties. Carlson emphasizes a longing for a politics centered on improving lives rather than escalating war.
- The segment ends with Carlson's continued critique of media dynamics, the moral implications of the war, and a call for more transparent discussion about the true aims and consequences of extended military engagement in the region.

Video Saved From X

reSee.it Video Transcript AI Summary
The discussion centers on Palantir Technologies and a proposed March 2025 executive order that would require federal agencies to share and control data, aiming to centralize government data using Palantir’s Foundry platform. It is claimed that Palantir has already deployed Foundry in at least four agencies, including the Department of Homeland Security and Health and Human Services, and that the company has received over $113 million in federal contracts since Trump took office, with a recent $795 million Department of Defense contract. The speakers allege that the initiative could enable a comprehensive database on all Americans—“light years beyond Real ID, the Patriot Act, and Prism”—and that those who control it seek “complete power over you and everyone else.” They warn of mass surveillance and privacy violations, lack of oversight, and potential political abuse. Key concerns include the breadth of data that Palantir’s system could merge, such as bank accounts, medical records, driving records, student debt, disability status, political affiliation, credit card expenditures, online purchases, tax filings, and travel and phone records, creating “detailed profiles on every single American.” The speakers argue this centralization would enable unchecked monitoring with “zero oversight,” increasing data security risks and the potential for breaches, leaks, or mismanagement. They emphasize a history of opaqueness in Palantir’s operations and tie the company’s AI tools to predictive policing and military applications lacking public accountability. They cite Palantir’s CEO Alex Karp as having controversial views and describe the firm as aligned with a profit-driven push for technomilitarism. The talk links Palantir to broader power dynamics, including ties to Elon Musk’s and Peter Thiel’s spheres, and suggests a technocratic oligarchy could emerge that prioritizes corporate and political agendas over public interest. 
While acknowledging stated goals like fraud detection and national security, the speakers assert the lack of checks and balances, and fear that the surveillance infrastructure would be embedded to be expanded by future governments. The “kill chain” terminology is discussed both in military and cyber contexts, with Palantir’s Gotham platform described as designed to shorten the kill chain by fusing large datasets into actionable intelligence, enabling faster targeting decisions. They provide examples like the use of Palantir to improve the accuracy and speed of Ukraine’s artillery strikes and, publicly, the Israeli Defense Forces’ use for striking targets in Gaza. The segment also mentions Palantir’s use in predictive policing, including tools used by the Los Angeles Police Department, and argues that Palantir aims to track “everybody, not just immigrants.” The speakers conclude that this centralized system is “light years beyond Real ID, the Patriot Act, or Prism” and advocate resisting it and “thinking of ways we can break the links in the kill chain.”

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the potential dangers of phone surveillance and the Pegasus software. They mention that the phone could be a portal to the CIA and criticize the lack of oversight and safeguards imposed by Congress. The speaker also highlights Israel's role in developing surveillance and AI technology. They mention instances where the Pegasus software has been used to target human rights activists and journalists. The speaker expresses concern about the tracking of digital information by foreign governments and emphasizes that the US government is equally sinister in tracking digital footprints without oversight. They caution listeners to be mindful of their online activities.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 and Speaker 1 discuss the motivations behind expanding digital surveillance, warning that concerns go beyond merely watching current behavior. Speaker 1 argues that many surveillance actors are interested in predictive analytics and predictive policing, not just monitoring present actions. Based on current and past behavior, these systems aim to determine future actions, and in predictive policing could lead to court-ordered treatment or house arrest to prevent crimes before they occur. They reference PredPol (later rebranded) as a notable example, describing it as less accurate than a coin toss and noting that people were deprived of liberty due to a dangerously flawed algorithm. They also point to facial recognition algorithms in the UK, which have been shown to be hugely inaccurate, yet the same vendors remain in use despite the demonstrated failures. The underlying concern is that constant surveillance could induce obedience, since any potential future action could be used against a person, even if they are not currently doing anything wrong. The speakers quote Larry Ellison of Oracle at an Oracle shareholder meeting, who allegedly said that surveillance will record everything and citizens will be on their best behavior because they “have to,” effectively linking surveillance to governance over behavior. Speaker 0 adds that Donald Trump’s circle includes tech figures who are not friends of freedom and liberty, naming Larry Ellison as leading that faction, which amplifies the concern about the direction of policy and governance under such influence. Speaker 1 broadens the critique to globalist networks, noting that many players in surveillance and tech also appear on the steering committee of the Bilderberg Group, a closed-door forum often associated with global policy coordination.
They argue that some individuals in this network have attempted to frame libertarian rhetoric while pursuing oligarchic aims, including the idea that “the free market is for losers” and that monopolies are the path to wealth. The discussion emphasizes that the same actors may push policies under the banner of efficiency or libertarian appeal, especially as AI advances, and that vigilance is necessary to prevent a slide toward pervasive, technocratic governance. Speaker 1 concludes that, with AI and related technologies, the risk is that these strategies could be packaged and sold in a way that appeals to factions who opposed such policies in the past, making public vigilance crucial to prevent a repeat of dystopian outcomes.

Unlimited Hangout

The Age of Artificial Intelligence
Guests: Star
reSee.it Podcast Summary
The podcast hosted by Whitney Webb discusses the rapid rise of generative AI and its profound effects on various sectors, particularly media. Webb emphasizes that while proponents of AI frame it as a tool for enhancing human capabilities and promoting equality, its implementation is creating a surveillance infrastructure that tracks and analyzes every aspect of human activity. This raises concerns about the potential for AI to exacerbate existing inequalities and lead to mass job losses in media, where generative AI is replacing traditional journalism roles. Star, the podcast producer, shares insights on the implications of AI in media, noting that while it may reduce tedious tasks, it also risks diluting the quality of content and increasing censorship. The conversation highlights the dangers of AI-generated content dominating the information landscape, potentially leading to a homogenized narrative controlled by a few powerful entities. They discuss the alarming prediction that generative AI could account for 90% of all content by 2025, which raises questions about the future of independent media. The discussion also touches on the potential for AI to be weaponized in governance and military contexts, particularly in surveillance and targeting decisions. Webb references a book by Henry Kissinger and Eric Schmidt that outlines a vision for AI that could lead to a controlled society where human creativity and independent thought are stifled. They express concern that AI could be used to manipulate public perception and behavior, ultimately serving the interests of the elite rather than the general populace. Webb and Star emphasize the importance of being aware of the risks associated with AI and the need for individuals to maintain control over their engagement with technology. They advocate for critical thinking and caution against becoming overly reliant on AI tools, which could lead to a diminished capacity for independent thought and creativity. 
The conversation concludes with a call for listeners to consider the broader implications of AI on society and to take proactive steps to safeguard their autonomy in an increasingly automated world.

ColdFusion

AI is Now Being Used in War
reSee.it Podcast Summary
The episode surveys the deployment of AI in military operations, focusing on reports that the Pentagon used Anthropic’s Claude in targeting and a real-time system that helped prioritize and execute strikes across multiple theaters. It explains how the military uses customized AI models on dedicated hardware, contrasting this with consumer AI and highlighting concerns about reliability and human oversight in high-stakes decisions. The host traces the fallout between Anthropic and the U.S. government, including contractual demands for mass surveillance and autonomous weapons, and the consequent shift in relationships with OpenAI as the private sector pivots toward national-security deals. It also recounts public reactions, such as boycotts of ChatGPT and debates over safeguards, while noting that military-integrated AI can accelerate planning and execution beyond civilian capabilities. The discussion broadens to surveillance risks, the legal ambiguities around data, and potential policy responses aimed at limiting or reshaping state use of AI for war and mass monitoring.