TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
In 2021, Israeli intelligence developed an AI program called Lavender to target individuals in war. The system designated 37,000 Palestinians as targets, resulting in civilian casualties. The IDF used mass surveillance to assess the likelihood of each person being a militant and targeted them accordingly. The Lavender system flagged individuals whose patterns resembled those of Hamas members, leading to the deaths of many innocent civilians. The speaker notes that this AI targeting system has similarities to US surveillance systems.

Video Saved From X

reSee.it Video Transcript AI Summary
The speakers discuss a view of a connected “control grid” and the role of financier networks in enabling wider geopolitical and technological infrastructures. They claim that Epstein was financing and networking across multiple parts of this system, with money allegedly laundered to support various components of what they describe as the control grid. They assert that Epstein steadily financed and networked the entire digital control grid, including infrastructure and software such as Palantir as well as crypto and programmable money, needed to operate in both Gaza and America. They further argue that the control exercised by New York Fed member banks—whether as depository for the government, managing the Exchange Stabilization Fund, or handling money transfers into and out of the country—means that entities like Mossad in Israel cannot act independently without the cooperation of the New York Fed member banks and, therefore, the CIA. In their view, the infrastructure is governed by the “US empire,” implying that independent operations by Mossad are not possible without alignment with these American financial and intelligence institutions. Speaker 1 adds that Gaza increasingly appears to be a beta rollout for technology, including killing technology, being developed by Silicon Valley. This framing ties the deployment of deadly technologies to a broader trajectory of Silicon Valley innovation, as interpreted by the speakers.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker A: The moral concern is that if you can remove the human element, you can use AI or autonomous targeting on individuals, and that could absolve us of the moral conundrum by making it seem like a mistake or that humans weren’t involved because it was AI or a company like Palantir. This worry is top of mind after the Min Minab girls’ school strike, and whether AI machine-assisted targeting played any role. Speaker B: In some ongoing wars, targeting decisions have been made by machines with no human sign-off. There are examples where the end-stage decision is simply identify and kill, with input data fed in but no human vetting at the final moment. This is a profound change and highly distressing. The analogy is to pager attacks, where bombs are triggered with little certainty about who is affected, which many would label an act of terror. Both the use of autonomous weapons and mass surveillance are known problem points that have affected contracting and debates with a major AI company and the administration. Speaker A: In the specific case of the bombing of the girls’ school attached to the Iranian military base, today’s inquiries suggested that AI was involved, but a human pressed play in this particular instance. The key question becomes where the targeting coordinates came from and who supplied them to the United States military. Signals intelligence from Iran is often translated by Israel, a partner in this venture, and there are competing aims: Israel seeks total destruction of Iran, while the United States appears to want to disengage. There is speculation, not confirmation, about attempts to target Iran’s leaders or their officers’ families, which would have far-reaching consequences. The possibility of actions that cross a diplomatic line is a concern, especially given the different endgames between the partners.
Speaker C: If Israel is trying to push the United States to withdraw from the region, then the technology born and used in Israel—Palantir’s Maven software linked to Dataminr for tracking and social-media cross-checking—could lead to targeting in the U.S. itself. The greatest fear is that social media data could be used to identify who to track or target, raising the question of the next worst-case scenario in a context where war accelerates social change and can harden attitudes toward brutality and the silencing of dissent. War tends to make populations more tolerant of atrocities and less tolerant of opposing views, and the endgame could include governance by technology to suppress opposition rather than improve citizens’ lives. Speaker B: War changes societies faster than anything else, and it can produce a range of effects, from shifts in national attitudes to the justification of harsh measures during conflict. The discussion notes the risk of rule by technology and the possibility that the public could become disillusioned or undermined if their political system fails to address their concerns. The conversation also touched on the broader implications for democratic norms and the potential for technology-driven control. (Note: The transcript contains an advertising segment about a probiotic product, which has been omitted from this summary as promotional content.)

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker states that artificial intelligence is being used to create mass assassinations, blurring the lines between assassination and warfare. They claim that many targets in Gaza are bombed as a result of AI targeting. The speaker emphasizes the connection between AI and surveillance, asserting that AI needs information to generate targets, ideas, or propaganda. Surveillance data from telephones and the internet is key to training the algorithms used to conduct these mass assassinations.

Video Saved From X

reSee.it Video Transcript AI Summary
The discussion centers on the kill chain concept and Palantir’s role within it. One speaker explains that the system they call the kill chain was created privately, while publicly lawyers frame it as something like “tech for the amelioration of unwanted blah blah blah.” The term kill chain sounds good to him, though it was not originally Palantir’s; it’s a general military sequence from identifying a target to taking a life. Palantir’s contract added its software and artificial intelligence to the kill chain, making it quicker and, in his view, “better and more violent.” He notes that stepping back to examine the actual application of these technologies can be destabilizing. Another speaker discusses a personal trajectory: Juan didn’t leave Palantir entirely for ethical reasons, only taking another job, but his motivation to speak out against Palantir grew after observing the Israeli invasion of Gaza following the October 7 attacks. Palantir has contracts with the Israel Defense Forces, with the exact nature intentionally opaque, yet evidence suggests Palantir’s AI tech was used for target selection in Gaza. Palantir CEO Alex Karp is described as embracing controversy as part of marketing, stating Palantir is comfortable being unpopular. The speaker adds that Palantir works with health insurance companies to build AI for denials management to protect revenue, raising the question of whether Palantir’s AI should decide what care is covered for individuals. A third speaker explains the technical approach: they use what legal scholars call predicate-based search to identify indicators of potential bad behavior in a person’s life. In essence, Palantir makes software that helps customers collect and analyze data and then act on the analysis. By 2013, a decade after founding, Palantir’s client list included the FBI, the CIA, the NSA, the Marines, the Air Force, Special Operations Command, and more.
Palantir already had contracts with the IRS to analyze taxpayer data to guide auditors to easier audits, handling financial information for many. They also had multiple contracts with the Department of Health and Human Services, whose core responsibility is Medicare and Medicaid, controlling millions of Americans’ health records and access to health care. A final speaker warns that as we increasingly live in a simulated world, we move toward governance by algorithm, governed by those influencing these AI systems to advance profit- or control-seeking objectives.

Video Saved From X

reSee.it Video Transcript AI Summary
I emerged from prison to find that artificial intelligence is now used for mass assassinations, blurring the lines between assassination and warfare. Many targets in Gaza are bombed due to AI targeting. The link between artificial intelligence and surveillance is crucial, as AI relies on data from phones and the internet to identify targets and generate propaganda. Surveillance data is essential for training these algorithms to carry out such operations.

Video Saved From X

reSee.it Video Transcript AI Summary
Natalie asks about the AI piece, expressing cynicism that there may be a push for a “war bot” to circumvent consumer AI limits that block starting wars with WMDs, and wonders if there is a benevolent reason. Matthew responds that it’s worse than that: Hegseth described a platform to run on military desktops worldwide—secure, like ChatGPT or Claude but for the Pentagon and military services—that “doesn’t allow information to get out.” The core issue, he says, is who controls the AI, and there are two key questions about the future of war with AI: who ultimately owns these AI platforms, and who informs them—who gives them the algorithm and programming and, essentially, the orders on how to answer questions. He notes increasing concerns about the reliability of information, including how ChatGPT handles questions about trustworthy news sources. He mentions that ChatGPT defers to institutional structures rather than historical accuracy. The risk, he says, is that military AI programs may not provide honest, candid, objective information to military personnel, but rather information based on narratives the Pentagon or manufacturers want. A common belief is that technology makes war more precise and reduces civilian harm, but Matthew contends this is a myth. He explains that precision-guided munitions were not about preventing civilian casualties but about increasing efficiency—“the purpose was to make the weapons more efficient, so we had to drop less bombs to, say, blow up a bridge.” He cites the small diameter bomb as evidence that the aim is not to limit civilian casualties but to allow more bombs to be delivered from aircraft. He highlights real-world examples of AI in warfare, referencing Israeli systems in Gaza. He explains that three AI programs—Lavender, Gospel, and Where’s Daddy?—play roles in targeting and timing strikes. Lavender scans the Internet and databases to identify targets (e.g., labeling someone as a Hamas supporter based on past online activity), and Where’s Daddy? 
coordinates that information to ensure bombs hit resistance fighters “when they are with their families,” not away from them. He notes reporting from Israeli media and +972 Magazine about these programs and urges viewers to examine that reporting; Tucker Carlson’s coverage is mentioned as an example. Matthew argues this demonstrates the dystopian potential of AI in war and cautions against assuming American AI would be more benevolent. He mentions commentators’ remarks used to justify or excuse actions, including one attributed to Mike Huckabee that “Israel did not attack Qatar. They just sent a missile into their country aimed at one person,” noting that people nearby were injured or killed. He ends with a reminder of Orwell’s reflections on war and the idea that those who cheer for war may be less enthusiastic if they experience its costs, suggesting a broader aim to make the costs of war felt among the ruling elites who benefit from it.

Video Saved From X

reSee.it Video Transcript AI Summary
Google, Palantir, Microsoft, Apple, Meta, X, Oracle, Amazon, what do all these companies have in common? Well, they're making a bag off the Palestinian genocide. Four of these companies, Google, Oracle, Amazon, and Microsoft, run data centers for the Israeli military, storing the massive amount of surveillance data they use to track Palestinians. It's the infrastructure for the genocide. They feed that data, store it in these data centers, providing the compute for Palantir's AI killing systems, the algorithms Palantir has entered into a massive warfare deal with Israel for. It's allowed the systematic destruction of Palestinian civilization in Gaza. That's Palantir. They've netted hundreds of billions since entering this deal. They provide the means. Who provides the weapons? Well, of course, everyone knows this answer. It's the United States taxpayer. It's our war company. So add them in there too. On top of this, you have Google, X, and Meta all taking money from Israel, all taking money from Netanyahu's propaganda arm to push propaganda to Americans, denying the genocide, denying war crimes, denying mass starvation in Gaza. Most recently, Drop Site News exposed Google and X, Google taking 45 mil, X taking 2 mil, from Netanyahu's office to deny mass starvation in Gaza. This started just days after Israel began cutting off all aid to the Gaza Strip for over eighty straight days. This is complicity at the top level. I didn't forget about Apple. Of course, they run their largest R&D center in Israel, complicit in apartheid and occupation. They also match employee donations to the IDF and groups linked to the IDF. They're funding war criminals. Let's take our attention away from Israel and look back home. Every single one of these companies is run by Zionists. Every single one of them. They've all donated to Trump because if they get in his good graces, Trump will let them do whatever they want. Trump loves letting billionaires do whatever they want. 
In fact, that's why only five companies control 90% of the US media market. It's true. That doesn't include social media, but we'll get to that in a second. Five companies, and none of the US outlets are willing to call it a genocide or call out Israel's crimes. That's because their editorial boards are controlled by Zionists. This is not some conspiracy theory. It's factual. You cannot go to CNN, let alone any conservative site. But CNN, Reuters, Washington Post, New York Times, none of them are calling it genocide. They're all covering it up. They're all playing the propaganda game for Israel. Now let's turn our attention to social media because that's not included in the 90%. We have the top three, the big three: Meta, X, and TikTok. And TikTok's an interesting case. But first of all, Meta blacklists pro-Palestinian activists. I'm blacklisted on Meta for my free speech criticizing Israel. They've also hired hundreds of ex-IDF soldiers from the intelligence unit, Unit 8200, to run their moderation team. That's why this is all happening. They put Zionists in charge of their free speech policy. That's similar to what TikTok has done. They got lobbied by the ADL and were pressured by the US government, surely, by this bipartisan bill to ban TikTok for hosting anti-Israel content like mine and many others', to hire an ex-IDF soldier, Erica Mendel, and put her in charge of their hate speech policy, which she changed, and those changes went into effect on September 13. Since then, every single one of my videos criticizing Israel, making connections, talking about literally everything I'm talking about in this video, that's what has been getting pulled off the For You page or just getting banned outright. Now the censorship is super ramped up, and they're trying to sell TikTok US to Larry Ellison of Oracle, the one who runs the data centers. You know, Larry Ellison once offered Netanyahu a seat on the board of Oracle. Yeah. 
Yeah. That's right. They're buds. They're like that. He's a Zionist MAGA billionaire. Right? Whatever. And he's going to censor everyone once he owns it. So we got two things for TikTok. Meta we covered. Next, X: Elon Musk banned his own chatbot, Grok, when it started telling people there was genocide in Gaza. He's disgusting, and, you know, obviously, he took the money from Israel to run propaganda ads. So that's just great. That covers about everything. Welcome to the United States of Israel, guys. Welcome to the United States of Israel. We haven't even touched on what our government has done under Biden. They arrested 3,200 student activists and professors who protested the genocide peacefully on college campuses, calling on their schools to divest their massive endowment funds worth billions from all these complicit companies I'm telling you about right now. But the schools wouldn't do it. They wouldn't do the right thing and stand with humanity. And, you know, there's a lot you can talk about under Biden. He conducted most of this genocide, oversaw the destruction of every single one of Gaza's hospitals, amongst other things, while lying about a ceasefire even though he never pressured Israel for a ceasefire once. Now Trump, you know, he ramps it up even more, and they're talking about taking away US citizens' passports if you criticize the state of Israel. They are deporting student activists like Mahmoud Khalil, who, you know, is pro-Palestinian. He stands in solidarity with them. They're pulling funding from schools that allow peaceful anti-Israel or pro-Palestinian protests. They're pulling funding. Both parties were involved in passing that bill to ban TikTok, which has led us to where we are now with Larry Ellison. So, you know, we can blame both parties for that. Let's look at the parties a little more closely, though, and who funds them. AIPAC. AIPAC spent over $100,000,000. 
They're just one part of the massive pro-Israel lobby. They spent $100,000,000 and elected into power a supermajority of Zionists in Congress. They supported a supermajority of Zionists in 2024 and got them elected. Almost every single one except like two, I believe. That's why no one in government is gonna say no to giving Israel as many weapons as they ask for. That's why they're gonna do everything they can to suppress criticism of Israel, do all these things that I just described, and enable these companies to make a bag off the Palestinian genocide. Why do only 20 of our 435 Congress members say it's genocide when half of American voters are saying it's genocide? And just look at the Democrat party, what is it? 77% of their base says it's genocide? 92% of their base wants to stop sending weapons to Israel? And yet they can't even criticize Israel. The party won't stop weapons to Israel. They won't even vote on that policy in their platform. This is a pretty good picture, like, the wide view of what is happening. You have all the billionaires aligned with Israel, whether they be Christian or Jewish Zionists. You have every single big tech company supporting Israel in some way, fueling the systems they are using to commit genocide, taking money for propaganda. You have all our media institutions doing the same thing and running cover for them. And you have all of our politicians as well. This isn't some conspiracy theory. This is real life. You have to admit that there might be a problem when you have four and a half percent of Congress saying genocide while 50-plus percent of American voters are saying it's genocide. What's going on? We've lost our sovereignty. It's not a joke. It's not hyperbole. We go to war against Iran on Israel's behalf. We change our laws to prevent criticism of Israel. We're trying to ban boycotting Israel and throw American citizens in jail. 
The DOJ has been given the power to denaturalize anyone they see fit, explicitly including those who are critical of the state of Israel. This is great. Trump also threatened to not do a deal with Canada because they're recognizing Palestine. We're sanctioning the international courts. We're threatening to pull out of the UN if they kick Israel out. Both the left and the right need to unite and rid America of Zionist influence. This is insane. They don't care about what we think. They don't care about our interests or our human morals or the fact that it's our tax dollars funding all of this. Every single one of these genocide profiteers must be held accountable: our politicians, the companies, and the billionaires. You can support my work by clicking the link in my bio, which will let you subscribe to my Substack. Thank you, and free Palestine.

Video Saved From X

reSee.it Video Transcript AI Summary
An AI system marked 37,000 Palestinians in Gaza as suspected militants based on various factors. Despite knowing it made errors in 10% of cases, the IDF used this system to target individuals in their homes with unguided missiles, resulting in civilian casualties.

Video Saved From X

reSee.it Video Transcript AI Summary
Patrick Sarval is introduced as an author and expert on conspiracies, system architecture, geopolitics, and software systems. Ab Gieterink asks who Patrick Sarval is and what his expertise entails. Sarval describes himself as an IT architect, often a freelance contractor working with various control and cybernetics-oriented systems, with earlier experience including a Bitcoin startup in 2011, photography work for events, and involvement in topics around conspiracy thinking. He notes his books, including Complotcatalogus and Spiegelpaleis, and mentions Seprouter and Niburu in relation to conspiratorial topics. Gieterink references a prior interview about Complotcatalogus and another of Sarval’s books, and sets the stage to discuss Palantir, surveillance, and the internet. The conversation then shifts to explaining Palantir and its significance. Sarval emphasizes Palantir as a key element in a broader trend rather than focusing solely on the company itself. He uses science-fiction analogies to describe how data processing and artificial intelligence are evolving. In particular, he introduces the concept of a “brein” (brain) or “legion” that integrates disparate data streams, builds an ontology, and enables predictive analytics and tactical decision-making. Palantir is described as the intelligence brain that aggregates data from multiple sources to produce meaningful insights. Sarval explains that a rudimentary prototype of such a system operates under the name Lavender in Gaza, where metadata from sources like Meta (Facebook, WhatsApp, Instagram), cell towers, satellites, and other sensors are fed into Palantir. The system performs threat analysis, ranks threats from high to low, and then a military operator—still human—must approve the action, with about 20–25 seconds to decide whether to fire a weapon. 
The claim is that Palantir-like software functions as the brain behind this process, orchestrating data integration, ontology creation, data fusion, digital twins, profiling, predictions, and tactical dissemination. The discussion covers how Palantir integrates data from medical records, parking fines, phone data, WhatsApp contacts, and more, then applies an overarching data model and digital twin to simulate and project outcomes. This enables targeted marketing alongside military uses, illustrating the broad reach of the platform. Sarval notes there are two divisions within Palantir: Gotham (military and government) and Foundry (business), which he mentions to illustrate the dual-use nature of the technology. He warns that the system is designed to close feedback loops, allowing it to learn and refine its outputs over time, similar to how a thermostat adjusts heating based on sensor inputs. A central concern is the risk to the rule of law and human agency. The discussion highlights the potential erosion of the presumption of innocence and due process when decisions increasingly rely on predictive models and AI. The panel considers the possibility that in a high-stress battlefield scenario, soldiers or commanders might defer to the Palantir-presented “world view,” making it harder to refuse an order. There is also concern about the shift toward autonomous weapons and the removal of human oversight in critical decisions, raising fears about the ethics and accountability of such systems. The conversation moves to the political and ideological backdrop surrounding Palantir’s leadership. Peter Thiel, Elon Musk, and a close circle with ties to PayPal and other tech-industry figures are discussed. Sarval characterizes Palantir’s leadership as ideologically defined, with statements about Zionism and a political worldview influencing how the technology is developed and deployed. 
The dialogue touches on perceived connections to broader geopolitical influence, including the role of influence campaigns, media shaping, and the involvement of powerful networks in technology development and national security. As the discussion progresses, the speakers explore the implications of advanced AI and the “new generative AI” era. They consider the nature of AI and the potential for it to act not just as a data processor but as a decision-maker with emergent properties that challenge human control. The concept of pre-crime—predicting and acting on potential future threats before they materialize—is discussed as a troubling possibility, especially when a machine’s probability-based judgments guide life-and-death actions. Towards the end, the conversation contemplates what a fully dominated surveillance state might look like, including cognitive warfare and personalized influence through media, ads, and social networks. The dialogue returns to questions about how far Palantir and similar systems have penetrated international security programs, with speculation about Gaza, NATO adoption, and commercial uses beyond military applications. The speakers acknowledge the possibility of multiple trajectories and emphasize the need for checks and balances, transparency, and critical reflection on the power such systems confer upon a relatively small group of technologists and influencers. They conclude with a nod to the transformative and potentially dystopian future of AI-enabled surveillance and decision-making, cautioning against unbridled expansion and urging vigilance.

Video Saved From X

reSee.it Video Transcript AI Summary
An AI system called Lavender marked 37,000 Palestinians in Gaza as suspected militants based on small signs like phone usage. The Israeli military used this information to target and bomb these individuals, even though the system made errors in 10% of cases. This led to civilian casualties when unguided missiles were used on family homes, killing up to 20 civilians per suspected militant.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker claims that Lavender is software developed by Palantir for the IDF. Following the events of October 7th, when the IDF entered Gaza, they utilized Lavender for targeting during bombings. This software structure allows IDF personnel to evade legal accountability for actions that may violate international law. It also alleviates moral responsibility, facilitating the process of warfare. A notable incident involved Peter Thiel, who became visibly distressed when questioned about the implications of this technology, realizing that while he might avoid legal repercussions, public opinion could still hold him accountable. This situation exemplifies how advancements in automation and weaponry are reshaping modern warfare in ways previously unimaginable.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 1 and Speaker 0 discuss the implications of AI in military use. They consider whether consumer AI is being bypassed with a secure, military-specific platform that would be sealed—essentially one-way in and no information out—for the Pentagon and military services. The key questions raised are: who controls the AI, who informs its algorithms, and who gives it its orders on how to answer questions, highlighting concerns about privatization and outsourcing of war. Speaker 1 argues that the future of war with AI hinges on two issues: ownership of AI platforms and the sources of their programming. They note that AI can deflect or defer to institutional structures rather than empirical accuracy, raising concerns about the reliability of information provided to military personnel. They also reference the myth that advancing technology automatically reduces civilian harm, citing that precision-guided munitions were designed for efficiency, not necessarily to prevent civilian casualties, noting that the intent was to reduce the number of bombs needed to achieve targets. The conversation shifts to the concept of precision in weapons. Speaker 1 points out that laser- and GPS-guided bombs were not primarily invented to minimize civilian casualties but to increase efficiency. They mention the small diameter bomb as an example, explaining that its use increases the number of bombs that can be deployed rather than primarily limiting collateral damage. The discussion then moves to real-world AI systems used in conflict zones. Speaker 1 cites Israeli programs—Lavender, Gospel, and Where’s Daddy?—as examples of nefarious and insidious AI in war. Lavender supposedly scans the Internet and other databases to identify targets, for example flagging someone as a Hamas supporter based on years of activity. Where’s Daddy? allegedly guides Israeli drones to strike fighters when they are with their families, not away from them. 
This reporting is linked to coverage from Israeli media and +972 Magazine, and Speaker 2 references Tucker Carlson’s coverage of these issues. Speaker 2 amplifies the point by noting the emotional impact of such capabilities, arguing that targeting men when they are with their children is particularly disturbing. They also discuss broader political reactions, including a remark attributed to Ambassador Huckabee about Israel not attacking Qatar but “sending a missile there” that injured people nearby. Speaker 1 concludes by invoking Orwell’s reflection on the Spanish Civil War, suggesting that those who cheer for war may be confronted by the consequences when modern aircraft enable distant bombing. They emphasize the need to make the costs of war felt by the ruling classes who benefit from it, not just the people on the ground.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 describes a 2021 proposal by the commander of Israeli intelligence to design a machine that would resolve the human bottleneck in locating and approving targets in war. A recent investigation by +972 Magazine and Local Call reveals that the Israeli army developed an AI-based system called Lavender to designate targets and direct airstrikes. During the initial weeks of the Lavender operation, the system designated about 37,000 Palestinians as targets and directed airstrikes on their homes. The system reportedly had an error rate of about 10%, and there was no requirement to verify the machine’s data. The Israeli army systematically attacked targeted individuals at night in their homes while their whole family was present. An automated component, known as “Where’s Daddy?”, tracked targeted individuals and carried out bombings when they entered their family residences. The result, according to the report, was that thousands of women and children were killed by Israeli airstrikes. Israeli intelligence officers allegedly stated that the IDF bombed homes as a first option, and in several cases entire families were murdered when the actual target was not inside. In one instance, four buildings were destroyed along with everyone inside because a single target was in one of them. For targets marked as low level by Lavender, cheaper bombs were used, destroying entire buildings and killing mostly civilians and entire families. It was alleged that the IDF did not want to waste expensive bombs on “unimportant people,” and it was decided that for every low-level Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians; for a senior Hamas official, more than 100 civilians could be killed. Most AI targets were never tracked before the war. Lavender analyzed information collected on the 2,300,000 residents of the Gaza Strip through mass surveillance, assessing the likelihood of each person being a militant and giving a rating from 1 to 100. 
If the rating was high enough, the person and their entire family were killed. Lavender flagged individuals with patterns similar to Hamas, including police, civil defense, relatives, and residents with similar names or nicknames. The report notes that this kind of tracking system has existed in the US for years. Speaker 1 presents a counterpoint: a “fine gentleman of the secret service” claims to provide a list of every threat made about the president since February 3 and profiles of every threat maker, implying that targets could be identified through broad data collection including emails, chats, SMS. The passage suggests a tool akin to a Google search but including private communications. Speaker 0 adds that although some claim Israel controls the US, Joe Biden says Israel serves US interests. Speaker 2: A speaker asserts, “There’s no apology to be made. None. It is the best $3,000,000,000 investment we make,” and claims that without Israel the United States would have to invent an Israel to protect its regional interests. Speaker 0 closes reporting for Infowars, credited to Greg Reese.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Welcome back to Jake GTV news. Did you see ICE shooting American citizens? Speaker 1: I thought they were supposed to get rid of the illegals, though. Speaker 0: Me too. Let's go to Ching Chong on the murder scene. Speaker 1: Chloe and Michael, good morning. We're here in Minneapolis where ICE agents trained by Israel are causing chaos. We go to John for more. Speaker 0: Thanks, Ching Chong. Thought it was only Libtards who opposed this, but they are literally murdering Americans. Back to you in the studio. Speaker 2: Stand back. Speaker 1: Please don't hurt me, sir Ed. I'm here to get rid of the illegals, grandma. Speaker 0: Wow. Thanks, John. Check this out here. It's from the protest. Here we see an agent assault a woman for simply being at the protest. Speaker 3: Then Alex steps in to help her Speaker 0: get back on her feet, and Speaker 4: the agents pepper spray him and proceed to assault him. Speaker 0: They then proceed to remove his legally owned firearm and shoot him in the back roughly 10 times, not even kidding. Holy shit. Speaker 1: Please tell me they're gonna jail. Speaker 0: Nope. They're on administrative leave while the FBI pretends to care. Dude, what? Let's see what Trump's team has to say. Speaker 5: Very, very unfortunate incident. I don't like that he had a gun. I don't like the fact that he was carrying a gun. Speaker 6: You know, you can't have guns. You can't walk in with guns. You just can't. And you can't listen. You can't walk in with guns. You can't do that, but it's it's a very unfortunate incident. Speaker 7: Do you Speaker 1: agree with Trump, Steen? Speaker 6: Oh, hell yeah. Guns are bad now. Didn't you get the memo? Speaker 1: What about the second amendment? Speaker 6: It's all four d chess, honey. Trust the plan. Speaker 1: Sup, bro? How do you feel about ICE? Speaker 0: This country needs more Indians than blacks. Check your privilege. Speaker 1: Dude, when did everybody get so retarded? 
Was it the vaccines or something? We go to the investigation team to learn more. Speaker 8: Thanks, Ching Chung. So basically, we uncovered that not only is ICE Embassy located in Tel Aviv, but they're using the same technology they used to genocide the Palestinians. Speaker 0: It's a freaking Jewish spyware by Paragon Solutions called Graphite, and check this out. Tell me why Alex Pretty was googled a month prior to the shooting and, again, five minutes before his death. Make of that what you will. Back to you guys. Wow. Wasn't the Homeland Security's own Twitter page being run from Israel? Speaker 1: Yeah. Same with ICE's embassy, Tel Aviv to be exact. Speaker 0: Freaking Jews, man. Speaker 9: Shut it down. He was an unhinged lefty who thought our Chobus Goy Trumpstein was a dictator. He kicked the taillight the week prior, so he deserved to be gunned down like a dog. Speaker 1: Air that. Jeez, Producer Berg, chill. Speaker 0: Gosh, he's so Talmudic. Speaker 1: Right. Always victim. Speaker 0: Anyways, here's their emotional justification for cold blooded murder. Speaker 1: That was a pretty good leg kick. Speaker 0: Right? Let's get Shapiro Steen's take on this whole thing. Speaker 10: Just because we didn't arrest anyone for the Epstein files, genocide, or our poisonous mRNA doesn't mean we won't also get away with murdering Boyum. After all, he kicked a taillight. Speaker 0: Yeah. I guess you're right, Shapiro Steen. Israel is our greatest ally. Speaker 1: You're not getting a raise. Speaker 0: Discount on your only freaks? Speaker 1: Not a chance. Ching chong, take it away. Gosh, dude. You're such a weak little simp. She's a literal succubus. Speaker 0: Anyways, let's take a tour with the IDF, I mean ice. Whoops. What was your training like? We were supposed to be trained for this? Speaker 0: Yeah. We've got an antiseptic on the next block. Get ready to murder. Stop resisting. Did you see me shoot that senior citizen? Yeah. 
Definitely not an immigrant, he sure had it coming. Let's see what Diego's up to. Speaker 2: I will tell you this, brother. What? You know? I will tell you this. You raise your voice? I raise your voice. Speaker 1: Wow. Isn't that like against the law? Speaker 0: You'd think so but they'll end up getting paid administrative leave and mental health support. Speaker 1: Seriously? Speaker 0: Dead ass. If I Speaker 11: raise my voice, you'll erase Speaker 2: my Exactly. Yeah. Yeah. Speaker 11: Are you serious? You said, if I raise my voice, you'll erase my voice? Speaker 1: Yes. Mhmm. Mhmm. Ice. You guys are saving this country. Speaker 0: Didn't they kill that American woman last week? Renee Good or something? Speaker 1: That non chosen person? She was lesbian leftist Karen. Who cares? Speaker 0: Whatever you say, Daisy. No. Speaker 7: No. Shit. Shit. Oh my fucking god. What the fuck? What What the the fuck? Fuck? Speaker 0: You might be wondering, why Minneapolis? Tim Waltz ushered in a defund the police initiative, which created a perfect opportunity for Trump's team to bring about the first AI surveillance state. You know what they say, create the problem, usher in the solution. Tom, back to you. Exactly. Speaker 0: So Peter Thiel, a close advisor to J. D. Vance, founded Palantir, the company that built the AI surveillance system used to target sand people. That same technology was sold to ICE and rebranded as Immigration OS, creating a satanic surveillance network to monitor Americans. Speaker 9: Shut it down, Tom. That's not for the normies to understand. Keep it up and I'll turn you into a lampshade like I did with Jackie. Back to the Goyslop or you're canceled. Speaker 12: Goyslop Junior's Goyslop Filet is back, and it's got more seed oils than ever. Speaker 0: I hate myself. Goyslop Junior. Speaker 7: Go on. Speaker 6: Enjoy cancer. Speaker 1: Gosh, that looks good. 
Speaker 0: Producer Verk said if we stop talking about Palantir, Goyslap Junior will cater to the Super Bowl party. Speaker 1: Alright. Speaker 0: Zipped. Let's just have Eric Warsaw break it down for us. Speaker 12: Palantir. The same company that is run by the hardline Zionist Alex Karp who works closely with Israeli military, will now be in charge of America's civilian data collection. We built Foundry, which was just was used to distribute the COVID vaccine and saved millions of lives globally. Palantir is here to disrupt and make our the institutions we partner with the very best in the world, and when it's necessary to scare enemies and on occasion kill them. Speaker 12: And also, the target selections for the US military, police forces, and even target selections for ICE officers. Speaker 1: That's right, Eric. We're giving our data to the Israeli Jew whose AI targeted over fifty percent of the civilian deaths in Gaza. Here he is. Speaker 7: Your AI and your technology from Palestine to kill Palestinians. Speaker 13: Mostly terrorists. Speaker 1: And by terrorists, he means anyone who opposes their families being genocided, including women and children. This guy. Speaker 9: Shut it the heck down. Say goodbye to your Goyslav junior catering. Remember what happened to Charlie? You're next. Run the freaking commercials. Speaker 0: Want to express yourself? Well, now you can. I always wonder how dumb this going sometimes can be. Speaker 7: TikTok, Speaker 0: Now owned by the Jews at BlackRock. Speaker 7: We're watching that. Speaker 0: Wow. I thought China owning our data was bad. Now you can't even say Zionist without getting flagged. Speaker 1: Straight up. It's like, give it back to China at this point. Speaker 0: Anything's better than Jews at this point. Speaker 1: Right? It's like take a freaking joke, let alone facts. Speaker 0: That's based. We go to John for some breaking news. Thanks, guys. Couldn't have said it better. 
And this just in, we're taking over Greenland because it was promised to us by Lucifer himself. So take it away, Satan. Speaker 14: By the way, what are we doing with Greenland? We gotta do something with Greenland. Where's my advance team? Go to Greenland. They must have some satellite needs or something that we could do there. But we are coloring the world blue. Speaker 0: So satanic. Speaker 1: Right? Isn't Greenland the central hub for the undersea data cables connecting North America, Europe, and Asia? Speaker 0: Bingo. Speaker 0: Ching Chong joins us live from Greenland. Speaker 1: We're here in Greenland, and not only is it located on a gold mine of rare earth minerals, but its freezing temperatures are the perfect natural coolant for the AI supercomputers needed to power the new world order that will enslave humanity. Eric Morsaw, break it down for us. Speaker 12: If you thought George Orwell's 1984 was a bad surveillance state, wait until you see what Israel's Palantir can do with AI technology or America. It's gonna make the movie The Matrix look mild. Speaker 1: Thanks, Eric. But to truly understand the endgame, you need to understand their ultimate prize, Jerusalem's Golden Dome. The satanic cabal believes controlling this one holy site lets them hijack God's story for billions and install the Antichrist. Let's hear what Trump's theme has to say about it. Speaker 5: We will have all everything we want. We're getting everything we want at no cost. Speaker 10: So the so the Golden Dome will be on Greenland? Speaker 5: A piece of it, yes. And it's a very important part because it's everything comes over Greenland. If the bad guys start shooting, it comes over Greenland. Speaker 1: So what he means by that is the satanic cabal is taking a piece of God's throne and putting it on their AI brain in Greenland to legitimize the antichrist. Speaker 6: Is that some sort of question? Speaker 1: How does that make you feel? Speaker 6: Get the out of our country. 
Speaker 10: So what are we talking about? An acquisition of Greenland? Are you going to pay for it? Speaker 5: I mean We're talking about it's really being negotiated now, the details of it, but essentially it's total access. It's there's no end. Speaker 0: We're making Iran great again, Venezuela, and now Greenland. How exciting. Speaker 1: Why can't we just fix this country? Speaker 0: Because Israel is our greatest ally. Speaker 1: Right, Shapiro Steen? Speaker 0: Well. I'm so sick of pretending we're Israel first. Speaker 10: I heard that. Just because you stupid goyim think you can expose our satanic agenda doesn't mean you won't fall for our next tie up. Dennis, shut this episode down or you're all fired. Speaker 0: Thanks, Shapiro Steen. Suck on this. Anyways, if you're still not following Jake GTV, you're either brainwashed or legally retarded. Speaker 15: I think I figured out where our data's going. Just let me hack into Homeland Security real quick, and we're in. Speaker 0: And time to get rid of their lice For antiseptic purposes, of course. Did you hear we gave Jake GTV a strike on his YouTube? Speaker 9: Oh, someone's hacked into our system. Another pizza cost. Speaker 1: Look who it is, my base fucking noticer. If you wanna stop wondering what's going on and know, check out my new book on jakegtv.com. Otherwise, just hit the like, comment, and subscribe, and I'll see you on the next one. Speaker 9: Did you hit him with a YouTube strike? Speaker 0: Sir, we did, but he's not stopping. Speaker 9: Shadow ban his accounts. We must shut him down before the red Speaker 7: heifer Speaker 0: is sacrificed.

Video Saved From X

reSee.it Video Transcript AI Summary
Israel is using artificial intelligence (AI) to target and assassinate individuals in Gaza, even if it means killing Palestinian civilians. The Israeli military has a unit known as the targets division, which uses AI algorithms and automated software to accelerate target creation. The stated goal is to create a shock effect and sustain the war, since previous operations ran out of targets. The AI tools have created 12,000 targets in this war alone, twice as many as in the entire 2014 war. The military has loosened restrictions on harming civilians, knowingly striking targets that may result in civilian casualties. This war policy, aided by AI, has led to civilian devastation and potential war crimes.

Video Saved From X

reSee.it Video Transcript AI Summary
The discussion centers on the ongoing tensions with Iran, the potential for American military involvement, and the role of media and ideology in shaping public perception. The speakers express a critical view of how the situation is being managed and portrayed.

Key points about the Iran situation:
- President Trump publicly claimed “we’ve won the war against Iran,” but the panel notes Israel’s public interest in a broader outcome, specifically regime change in Iran, which would require boots on the ground rather than air strikes.
- It is argued that air strikes alone cannot achieve regime change; the Israeli military, even with about 170,000 active-duty soldiers plus reservists, would need American boots on the ground to accomplish such aims against a larger Iranian army.
- Senators, including Richard Blumenthal, warned about the risk to American lives in potentially deploying ground troops in Iran, citing a path toward American ground forces.
- A renewal of the National Defense Authorization Act could lead to an involuntary draft by year’s end, a concern raised by Dan McAdams of the Ron Paul Institute, who argues it treats citizens as owned by the government.
- There is tension between Trump’s public push for a quick end to the conflict and Netanyahu’s government talking about a larger, more prolonged objective in the region, including a potential demilitarized zone in southern Lebanon akin to Gaza’s situation.
- Iran’s new supreme leader Khomeini issued a televised statement threatening to shut the Strait of Hormuz until the United States begs and vowing vengeance for martyrs, signaling that the conflict could continue or escalate beyond initial claims of victory.
- The panel highlights potential escalation, including the possibility of nuclear weapons discussion by Trump and concerns about who controls the war, given factions within Iran and differing US-Israeli goals.
Tucker Carlson’s analysis and warnings:
- Carlson is presented as having warned that a war with Iran would be hard due to Iran’s ballistic missile arsenal aimed at US bases and allies’ infrastructure, and that it would push Iran closer to China and Russia, potentially undermining the US.
- Carlson emphasizes the lack of a clear, publicly articulated endgame or exit strategy for the war, arguing that diplomacy has deteriorated and that the US appears discredited in its ability to negotiate peace.
- He discusses the governance of Israel and the idea that some Israeli leaders advocate for extreme measures, referencing “Amalek” language used by Netanyahu to describe enemies, which Carlson characterizes as dangerous and incompatible with Western civilization’s values.
- Carlson argues that American interests and Israeli strategic aims diverge, and questions why Israel is the partner with decision-making authority in such a conflict. He notes the US’s reliance on Israel for intelligence (with Israel translating SIGINT) and suggests that Israel’s endgame may be to erode American influence in the region.
- He also suggests the war is being used to advance a broader political and ideological project, including America’s pivot away from foreign entanglements; he asserts that certain power centers in the US and in media and defense circles benefit from perpetual conflict.
- Carlson discusses the moral framework around targeting and civilian casualties, asserting that there is concern over the ethical implications of autonomous targeting and the potential for AI to play a role in warfare decisions.
- He notes the possibility that AI involvement in targeting decisions exists in other conflicts, though in the Iran situation, he mentions that a human pressed play in the specific case of an attack (the school near an Iranian base), while coordinates may have come from other sources, possibly shared by Israel.
- Carlson discusses media dynamics, describing mainstream outlets as “embedded” with the defense establishment and questioning why there isn’t a robust public discussion about the war’s endgame, exit ramps, or the true costs of war.

Media, propaganda, and public discourse:
- The panel critiques media coverage as lacking skepticism, with anchors and outlets seemingly aligned with the administration’s war narratives, raising concerns about “access journalism” and the absence of tough questions about goals, timelines, and consequences.
- Carlson and participants discuss the use of propaganda—historically, Disney and the Treasury Department in World War II as examples—arguing that today’s propaganda around Iran relies on pop culture and entertainment to normalize or justify intervention without clear justification to the public.
- They argue that contemporary media often fails to examine the ethics and consequences of war or to question the necessity and legitimacy of continuing conflict, suggesting a broader risk of technology-enabled control over public opinion and civil discourse.

White House dynamics and internal debate:
- The guests discuss the possibility of internal disagreement within the White House, noting that while some senior figures had reservations, external pressure, particularly from Netanyahu, may have pushed the administration toward action.
- They touch on the strategic ambiguity surrounding US forces in the region, noting that while large-scale ground invasion is unlikely, special forces and other assets may be deployed, with civilian and military costs disproportionately affecting American families.
- The conversation also explores concerns about civil liberties, surveillance, and the potential for centralized control of information and warfare technologies to influence domestic politics and social cohesion.
Overall, the dialogue presents a multifaceted critique of the handling and propulsion of a potential Iran conflict, emphasizing the risk of escalatory dynamics, the clash of strategic goals between the US and Israel, concerns about democratic consent and media accountability, and the ethical implications of modern warfare technology.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 Summary: Peter Thiel, a co-founder of PayPal and Palantir and an early investor in Facebook, is described as now worth about $8,000,000,000. He has focused a large portion of his fortune on building up JD Vance. Thiel and Vance met in 2011 at Yale Law School after Thiel gave a talk; Thiel became Vance’s mentor, employer, and financier, funding Vance’s venture firm and writing the blurb on Vance’s book. In 2022, Thiel donated $15,000,000 to Vance’s Senate campaign, described as the largest individual donation to a single Senate race in American history. He personally escorted Vance into Mar-a-Lago and introduced him to Donald Trump, despite Vance having previously called Trump “Hitler.” The transcript notes Thiel has stated publicly, and it is claimed here as a quote, that “I no longer believe that freedom and democracy are compatible.”

Epstein files and connections: Thiel’s name allegedly appears over 2,200 times across Epstein’s email schedules and documents. The transcript says Thiel and Epstein lunched together in November 2017, nine years after Epstein’s conviction for soliciting prostitution from a minor. Epstein invested $40,000,000 into funds co-managed by Thiel, and Epstein reportedly brokered introductions between Thiel and Israeli officials, including arranging a 2014 dinner. Thiel denies wrongdoing, though the calendar entries cited do not express opinions either way.

Palantir and government ties: Palantir, Thiel’s company, signed a strategic partnership with Israel’s Ministry of Defense in 2024. Palantir’s CEO has publicly stated pride in supporting Israel “in every way we can,” and has acknowledged that their product is used, on occasion, to kill people. The transcript emphasizes Thiel as “the man who built your vice president,” asserting he is “the company in the bloodstream of your government.” It concludes with the line, “You didn’t vote for Peter Thiel, but Peter Thiel is governing you anyway. That’s not democracy. That’s a purchase.”

Video Saved From X

reSee.it Video Transcript AI Summary
The transcript surveys Palantir’s rise as a powerful data analytics company intertwined with government and military aims, emphasizing how fear, surveillance, and control have shaped its growth and public image. It frames Palantir as aiming to become “the ultimate military contractor and the ultimate arbiter of all of our data,” with its software described as enabling governments and major institutions to collect, analyze, and act on vast datasets, including in war zones.

Key points include:
- Palantir’s positioning and clients: The company claims it can revolutionize government systems with AI-powered data analysis and has been hired by the Department of Defense, the FBI, local police, the IRS, and other entities, including non-government customers like Wendy’s. Its business model is described as taking the information those organizations collect, collecting even more information, and using that data to draw conclusions.
- The kill chain concept and AI: Palantir’s tech is linked to the “kill chain,” a military term for the series of decisions leading to targeting and potentially taking life. Palantir’s contract adds AI to this chain, making it “quicker and better and safer and more violent.”
- Founding story and rhetoric: Palantir traces its origins to a PayPal-connected network (the “PayPal mafia”) and to Alex Karp, who studied neoclassical social theory; the company is named after Tolkien’s palantír. Middle-earth imagery is used to juxtapose potential good against dangerous power.
- Data, surveillance, and ontology: The software is described as capable of reconfiguring an organization’s ontology—what systems matter, what information matters, how processes are structured, and what biases are introduced.
- Inside views and ethics: A former Palantir employee, Juan, explains his departure and later criticisms after observing the Israeli invasion of Gaza; Palantir’s involvement with the Israel Defense Forces is noted, though contract details are opaque. The claim is that Palantir’s AI may have been used for target selection.
- Revenue and focus on government: In 2024 Palantir earned nearly $2.9 billion, 55% of it from government sources, most of that American. Palantir’s CTO Shyam Sankar is cited for “Defense Reformation” rhetoric that aligns with the Defense Innovation Board’s push to fund emerging tech, suggesting a fusion of defense spending and Palantir’s growth.
- Dominance and market strategy: Palantir is depicted as striving to be the “US government’s central operating system,” with Doge (an internal effort) aimed at unifying data across agencies like the IRS and Health and Human Services, potentially giving one contractor broad access to Americans’ data and health records.
- Corporate culture and risk: The company is described as comfortable being unpopular, with leaders like Peter Thiel investing heavily and playing a role in politics; Karp frames civil liberties in terms of lawful use of government data and its potential misapplication.
- Ethical tension and viewpoint: The piece notes that Palantir’s reach could enable governance by algorithm and automated decision-making, potentially reshaping personal lives, battlefields, and governance. The founders’ ownership structure preserves control through class voting shares.
- Final reflections: The speakers argue that criticizing the system is fraught because watching and fear can silence dissent, and warn against replacing a broken system with an even more broken one, urging vigilance over who wields powerful data and AI.

Video Saved From X

reSee.it Video Transcript AI Summary
- The speakers claim that American financial institutions and tech companies are deeply involved in the Gaza killings. They name banks, pension funds, Amazon, Google, and Microsoft as having provided services and access to Palestinian data that enabled Israel to set up systems to mass-target and kill Palestinians.
- They describe an application called “Where is Daddy,” asserting it allows the army to randomly track people and reach them even when they are with their families, facilitating harm.
- The discussion characterizes Israel as possessing the most sophisticated military in the region, knowing precisely what it is doing for two years, and notes that many Israeli soldiers are breaking down, with rising suicidality among young soldiers who have served.
- They argue that soldiers have been indoctrinated into becoming executioners of genocide, and that intervention is necessary to prevent further brutality.
- The speakers contend that much of this action is driven by people outside Israel who defend the regime, which they describe as having imposed a military dictatorship on Palestinians in the West Bank, Jerusalem, and Gaza (the latter until 2005), and also affecting Israelis who are part of the system. They state that brutalizing others compromises humanity.
- Speaker 0 presses for clarification about the existence of the Where is Daddy app, asking if it was a dream or a claim already stated.
- Speaker 1 clarifies that Israel has developed an automated system to determine targets through computing, with data supplied by tech companies. He mentions Palantir as one company that publicly supports Israel. He references a public debate in which a Polish person protests that he is killing families, and the response is “you are killing civilians in Gaza,” to which the other person replies that the targets are “most probably terrorists.”

Video Saved From X

reSee.it Video Transcript AI Summary
- The discussion opens with claims that President Trump says “we’ve won the war against Iran,” but Israel allegedly wants the war to destroy Iran’s entire government structure, requiring boots on the ground for regime change. It’s argued that air strikes cannot achieve regime change and that Israel’s relatively small army would need U.S. ground forces, given Iran’s larger conventional force, to accomplish its objectives.
- Senator Richard Blumenthal is cited as warning, following a private White House briefing, that American lives could be at risk from deploying ground troops in Iran.
- The new National Defense Authorization Act is described as renewing the involuntary draft; by year’s end, an involuntary draft could take place in the United States, pending full congressional approval. Dan McAdams of the Ron Paul Institute is described as expressing strong concern, arguing the draft would treat citizens’ bodies as owned by the government, a stance he summarizes as “the presumption is that the government owns you.”
- The conversation contrasts Trump’s public desire to end the war quickly with Netanyahu’s government, which reportedly envisions a much larger military objective in the region, including a demilitarized zone in southern Lebanon akin to Gaza, and a broader aim to remove Hezbollah. The implication is that the United States and Israel may not share the same endgame.
- Tucker Carlson is introduced as a guest to discuss these issues and offer predictions about consequences for the American people, including energy disruption, economic impacts, and shifts in U.S. influence in the Persian Gulf.
- Carlson responds that he would not credit himself with prescience, but notes predictable consequences: disruption to global energy supplies, effects on the U.S. economy, potential loss of U.S. bases in the Gulf, and a shrinking American empire. He suggests that the war’s true goal may be to weaken the United States and withdraw from the Middle East; he questions whether diplomacy remains viable given the current trajectory.
- Carlson discusses Iran’s new supreme leader Khomeini’s communique, highlighting threats to shut Hormuz “forever,” vows to avenge martyrs, and calls for all U.S. bases in the region to be closed. He notes that Tehran asserts it will target American bases while claiming it is not an enemy of surrounding countries, though bombs affect neighbors as well.
- The exchange notes Trump’s remarks about possibly using nuclear weapons, and Carlson explains Iran’s internal factions, suggesting some seek negotiated settlements while others push for sustained conflict. Carlson emphasizes that Israel’s leadership may be pushing escalation in ways that diverge from U.S. interests and warns about the dangers of a joint operation with Israel, which would blur U.S. sovereignty in war decisions.
- The use of the term “Amalek” is explored: Carlson’s guest explains Amalek from the Old Testament as enemies of the Jewish people, with a biblical command to annihilate Amalek, including women and children, which the guest notes Christianity rejects; Netanyahu has used the term repeatedly in the conflict context, which Carlson characterizes as alarming and barbaric.
- The guests debate how much influence is exerted in the White House, with Carlson noting limited direct advocacy for war among principal policymakers and attributing decisive pressure largely to Netanyahu’s threats. They question why Israel, a client state of the U.S., is allowed to dictate war steps, especially given the strategic importance of Hormuz and American assets in the region.
- They discuss the ethical drift in U.S. policy, likening it to adopting the ethics of the Israeli government, and criticize the idea of targeting family members or civilians as a military strategy. They contrast Western civilization’s emphasis on individual moral responsibility with perceived tribal rationales.
- The conversation touches on the potential rise of AI-assisted targeting or autonomous weapons: Carlson’s guest confirms that in some conflicts, targeting decisions have been made by machines with no human sign-off, though in the discussed case a human did press play on the attack. The coordinates and data sources for strikes are scrutinized, with suspicion cast on whether Israel supplied SIGINT or coordinates.
- The guests warn about the broader societal impact of war on civil liberties, mentioning increasing surveillance and the risk that technology could be used to suppress dissent or control the population. They discuss how war accelerates social change and potentially normalizes drastic actions or internal coercion.
- The media’s role in selling the war is criticized as “propaganda,” with examples of government messaging and pop-culture campaigns (including a White House-supported, video game-like portrayal of U.S. military power). They debate whether propaganda can be effective without a clear, articulated rationale for war and without public buy-in.
- They question the behavior of mainstream outlets and “access journalism,” arguing that reporters often avoid tough questions about how the war ends, the timetable, and the off-ramps, instead reinforcing government narratives.
- In closing, Carlson and his co-hosts reflect on the political division surrounding the war, the erosion of trust in media, and the possibility of rebuilding a coalition of ordinary Americans who want effective governance without perpetual conflict or degradation of civil liberties. Carlson emphasizes a longing for a politics centered on improving lives rather than escalating war.
- The segment ends with Carlson’s continued critique of media dynamics, the moral implications of the war, and a call for more transparent discussion about the true aims and consequences of extended military engagement in the region.

Video Saved From X

reSee.it Video Transcript AI Summary
The discussion centers on Palantir Technologies and a proposed March 2025 executive order that would require federal agencies to share and control data, aiming to centralize government data on Palantir's Foundry platform. It is claimed that Palantir has already deployed Foundry in at least four agencies, including the Department of Homeland Security and the Department of Health and Human Services, and that the company has received more than $113 million in federal contracts since Trump took office, including a recent $795 million Department of Defense contract. The speakers allege that the initiative could enable a comprehensive database on all Americans, one "light years beyond Real ID, the Patriot Act, and Prism," and that those who control it seek "complete power over you and everyone else." They warn of mass surveillance, privacy violations, lack of oversight, and potential political abuse. Key concerns include the breadth of data the system could merge: bank accounts, medical records, driving records, student debt, disability status, political affiliation, credit card expenditures, online purchases, tax filings, and travel and phone records, creating "detailed profiles on every single American." The speakers argue this centralization would enable unchecked monitoring with "zero oversight," while increasing the risk of breaches, leaks, and mismanagement. They point to a history of opacity in Palantir's operations and tie the company's AI tools to predictive policing and military applications that lack public accountability. They cite Palantir CEO Alex Karp's controversial views and describe the firm as part of a profit-driven push toward technomilitarism. The talk links Palantir to broader power dynamics, including ties to Elon Musk's and Peter Thiel's spheres, and suggests a technocratic oligarchy could emerge that prioritizes corporate and political agendas over the public interest.
While acknowledging stated goals such as fraud detection and national security, the speakers assert that checks and balances are lacking and fear the surveillance infrastructure, once embedded, could be expanded by future governments. "Kill chain" terminology is discussed in both military and cyber contexts, with Palantir's Gotham platform described as designed to shorten the kill chain by fusing large datasets into actionable intelligence, enabling faster targeting decisions. They cite examples such as Palantir's reported role in improving the accuracy and speed of Ukrainian artillery strikes and the Israel Defense Forces' publicly reported use of such tools for striking targets in Gaza. The segment also mentions Palantir's use in predictive policing, including tools used by the Los Angeles Police Department, and argues that Palantir aims to track "everybody, not just immigrants." The speakers conclude that this centralized system would be "light years beyond Real ID, the Patriot Act, or Prism" and advocate resisting it and "thinking of ways we can break the links in the kill chain."

Breaking Points

Anthropic CEO: Claude Might Be CONSCIOUS. Pentagon Already Using for WAR
reSee.it Podcast Summary
The episode centers on the evolving debate over whether Anthropic's Claude may be conscious and what that would imply for how AI should be treated. Interview fragments with Dario Amodei and Ross Douthat explore questions of consciousness, responsibility, and the safeguards companies should build into advanced models. The hosts discuss the broader social and economic impacts of powerful AI, arguing that a purely free-market approach risks massive wealth concentration and widespread disruption to white-collar and blue-collar work alike. They emphasize the need for deliberate regulation, safeguards, and public input to guide deployment in ways that preserve freedom and democratic norms while addressing potential harms. The episode then shifts to a concrete battleground: the Pentagon's use of Claude under a Palantir contract and the resulting clash with Anthropic over military applications. The conversation flags concerns about weaponization, the exportability of AI technology, and the risk of global proliferation of capable tools. It also notes advances suggesting AI can contribute novel scientific insights, underscoring both transformative potential and peril as the technology moves from regurgitating human input to pushing frontiers, all under intense geopolitical scrutiny.

Breaking Points

Trump Voters REVOLT Over Admin's AI Scheme
reSee.it Podcast Summary
The hosts discuss a mounting backlash to AI data centers, framing it as a cross-partisan concern about community impact, energy use, and job disruption. They recount a town meeting in Indiana where opposition to a new data center led to a lengthy public hearing and ultimately a decision not to proceed, highlighting how residents connect AI development to local quality of life and rising costs. They contrast this with broader national debate, citing a Financial Times piece on Trump’s AI push fueling revolt in MAGA heartlands, where voters express unease about surveillance, resource demand, and the social consequences of automation. The conversation shifts to strategic tensions between private AI firms and government power, noting that defense interests push for rapid deployment and that moral red lines struggle to constrain state use. They warn that wartime, nationalization, and production authorities could redefine ownership and control of AI technologies, often beyond private oversight.

ColdFusion

AI is Now Being Used in War
reSee.it Podcast Summary
The episode surveys the deployment of AI in military operations, focusing on reports that the Pentagon used Anthropic's Claude for targeting and in a real-time system that helped prioritize and execute strikes across multiple theaters. It explains how the military runs customized AI models on dedicated hardware, contrasting this with consumer AI and highlighting concerns about reliability and human oversight in high-stakes decisions. The host traces the fallout between Anthropic and the U.S. government, including disputes over contractual demands involving mass surveillance and autonomous weapons, and the consequent shift in the government's relationship toward OpenAI as the private sector pivots to national-security deals. It also recounts public reactions, such as boycotts of ChatGPT and debates over safeguards, while noting that military-integrated AI can accelerate planning and execution beyond civilian capabilities. The discussion then broadens to surveillance risks, legal ambiguities around data, and potential policy responses aimed at limiting or reshaping state use of AI for war and mass monitoring.