TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
- The situation on X is severe. - Rise of bots and fake accounts: automated and AI-powered bots are flooding the app, and they are getting smarter. - In one study, a botnet of over 1,000 fake accounts was caught promoting crypto scams. - During a political debate, over a thousand bots pushed coordinated false claims, with some accounts tweeting every two minutes. - By February 2024, 37% of all Internet traffic came from malicious bots. - These bots now use advanced AI models like ChatGPT to generate human-like responses and interact with each other, making them nearly impossible to detect. - The platform's ad-driven business model thrives on outrage and engagement. - Emotional, polarizing content gets more clicks, and bots are perfect for spreading it. - Real-world impact: bots distort conversations, amplify falsehoods, and manipulate public opinion. - Conclusion: How bad is it? Very bad.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks. Anonymous networks, flagging behavior, predicting danger. We don't get a second chance. Let's not miss the next one. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round, partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 asserts that another revolution is coming, aiming to achieve a broader peace, describing Israel’s conflict as an eight-front war—Jews against Rome, with the United States as the new Rome—and stating that Rome and Jerusalem clashed over values, a tragedy the Jews lost but must win next time. Speaker 1 adds that Jews against Rome have shifted from defense to offense. Speaker 2 notes that weapons evolve and swords do not work today, implying the need for new tools; Speaker 1 emphasizes that the battle requires the genius that created Apollo and the pagers and penetrated Hezbollah to prepare for this fight. Speaker 2 argues the most important battlefields are social media, with the next war to be decided online as much as offline. Speaker 0 designates this as the eighth front: the disinformation campaign. Speaker 3 and Speaker 0 discuss the scale of online manipulation, claiming billions of dollars are invested in the information battlefield by NGOs and governments, and asserting that money drives the effort. Speaker 6 and Speaker 7 describe policies to prohibit harmful stereotypes about Jews and to deplatform those who propagate them; they claim to monitor online spaces, including social media, messaging apps, video games, and cryptocurrency, and to share intelligence with the FBI. Speaker 7 and others reference a spectrum of platforms and formats—podcasts, short-form video, Wikipedia, LLMs—and condemn antisemitism online, including “Hitler admirers, Stalin admirers, Jew haters,” while insisting on countermeasures. Speaker 8 and Speaker 9 discuss TikTok as a focal point, asserting that for every thirty minutes spent on TikTok, users become 17% more antisemitic, with carnage imagery from Gaza influencing perceptions; there is a stated problem with TikTok shaping youth attitudes. Speaker 10 and Speaker 6 describe redefining terms like Zionist as a proxy for Jews and Israelis, framing such language as hate speech; Speaker 11 indicates a desire for counterintelligence and critiques current curriculum, while Speaker 1 notes co-authoring Sunday school curricula with the ADL. Speaker 11 and Speaker 6 discuss developing technology to train LLMs and to combat antisemitism, with collaboration announced with OpenAI, Alphabet, Anthropic, Meta, and Microsoft; Speaker 10 notes a network of two dozen Jewish organizations feeding intelligence. Speaker 1 outlines a program to measure, monitor, and disrupt extremist content, with a full-time team of 40 analysts; Speaker 12 mentions monitoring campuses, digital networks, activist groups, and public officials, and that PhDs and academics support the effort. Speaker 13 and Speaker 14 discuss unifying data into a single platform, investing in intelligence, and mobilizing organizations to share information and fight common enemies; Speaker 12 emphasizes constant recording and reporting, aiming to mobilize allies. Speaker 15 and Speaker 9 describe harsh strategies against antisemitism, including deportation and criminal measures, while Speaker 9 notes threats against those who push antisemitic conspiracy theories. Speakers 16 and 17 recount legal actions and lawsuits over antisemitic rhetoric; Speaker 18 describes the J7 diaspora network meeting to share information and best practices; Speakers 19 and 20 advocate reform of education and even limiting the First Amendment to protect it, arguing for control over speech.
Speaker 3 and Speaker 20 discuss enforcement and punishment for anti-Israel or antisemitic speech; Speaker 1 highlights training 20,000 officers annually on extremism and hate via partnerships with law enforcement going back to the FBI’s origins. Speaker 29 calls opponents “a small bunch of wannabe Nazis” and asserts intent to pursue justice; Speaker 0 closes by proclaiming that history remembers action, not denial of hatred, and that we are on the cusp of a new age where technology’s powerful benefits can drive positive outcomes in agriculture, health, transportation, and other fields, enabling Israel to become a primary power rather than a secondary one.

Video Saved From X

reSee.it Video Transcript AI Summary
Tom Alexandrovich, head of the Technological Defense Department at the National Cyber Unit, discusses the scale of official handling since October 7. The state prosecution received almost 30,000 inquiries, handling about 26,000 cases, with about 40,000 inquiries related to inciting content, content that leads to demoralization, and other malicious content. This represents a huge dataset, with 90% of the material connected to Meta platforms, including Instagram and Facebook. There are organized campaigns on this issue, and the prosecution has identified and removed a significant volume of posts—several hundred posts associated with these concerns. The problem is described as coming not only from adversaries but also from activists and anti-Israeli activity organizing against Israel; Alexandrovich notes that the effort is broader than just external threats. He also encourages people to report content so that others can benefit from the information.

Video Saved From X

reSee.it Video Transcript AI Summary
The system covers the entire Internet, including social networks like Facebook and Twitter. It identifies 200,000 suspect posts and tweets related to antisemitism daily, using artificial intelligence and machine learning. Approximately 10,000 antisemitic posts are identified each day. This information will now be made public, serving as a deterrent to antisemitism. We will be able to determine which city has the highest antisemitic internet activity and identify the top 10 antisemitic tweets and Twitter users. By understanding the causes behind spikes in antisemitism, we can take action. The command center in Tel Aviv is already operational, analyzing and sharing information with local authorities and municipalities to address antisemitic activities. This marks the official launch of the system.
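
The summary does not say how the system works internally; a minimal sketch of the kind of two-stage pipeline it implies—a broad filter that yields the ~200,000 daily "suspect" posts, then a trained classifier that flags roughly 10,000 of them and aggregates counts by city and account—might look like the following Python. The keyword list, threshold, scoring interface, and field names are illustrative assumptions, not details from the video.

from collections import Counter

SUSPECT_KEYWORDS = {"placeholder_term_1", "placeholder_term_2"}  # assumed broad filter terms
FLAG_THRESHOLD = 0.9                                             # assumed decision threshold

def stage_one_filter(posts):
    # Cheap keyword pass that produces the daily "suspect" pool.
    return [p for p in posts
            if any(k in p["text"].lower() for k in SUSPECT_KEYWORDS)]

def stage_two_classify(suspects, model):
    # Score each suspect post with a trained classifier; keep only high-confidence hits.
    flagged = []
    for post in suspects:
        score = model.score(post["text"])  # hypothetical model returning a probability in [0, 1]
        if score >= FLAG_THRESHOLD:
            flagged.append({**post, "score": score})
    return flagged

def daily_report(flagged):
    # Aggregate the way the video describes: counts by city, plus top accounts.
    by_city = Counter(p["city"] for p in flagged)
    top_accounts = Counter(p["author"] for p in flagged).most_common(10)
    return {"total_flagged": len(flagged), "by_city": by_city, "top_accounts": top_accounts}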

Video Saved From X

reSee.it Video Transcript AI Summary
Natalie asks about the AI piece, expressing cynicism that there may be a push for a “war bot” to circumvent consumer AI limits that block starting wars with WMDs, and wonders if there is a benevolent reason. Matthew responds that it’s worse than that: Hegseth described a platform to run on military desktops worldwide—secure, like ChatGPT or Claude but for the Pentagon and military services—that “doesn’t allow information to get out.” The core issue, he says, is who controls the AI, and two key questions about the future of war with AI: who ultimately owns these AI platforms, and who informs them—who gives them their algorithms and programming and, essentially, their orders on how to answer questions. He notes increasing concerns about the reliability of information, including how ChatGPT handles questions about trustworthy news sources. He mentions that ChatGPT defers to institutional structures rather than historical accuracy. The risk, he says, is that military AI programs may not provide honest, candid, objective information to military personnel, but rather information based on narratives the Pentagon or manufacturers want. A common belief is that technology makes war more precise and reduces civilian harm, but Matthew contends this is a myth. He explains that precision-guided munitions were not about preventing civilian casualties but about increasing efficiency—“the purpose was to make the weapons more efficient, so we had to drop less bombs to, say, blow up a bridge.” He cites the small diameter bomb as evidence that the aim is not to limit civilian casualties but to allow more bombs to be delivered from aircraft. He highlights real-world examples of AI in warfare, referencing Israeli systems in Gaza. He explains that three AI programs—Lavender, Gospel, and Where’s Daddy?—play roles in targeting and timing strikes. Lavender scans the Internet and databases to identify targets (e.g., labeling someone as a Hamas supporter based on past online activity), and Where’s Daddy? coordinates that information to ensure bombs hit resistance fighters “when they are with their families,” not away from them. He notes reporting from Israeli media and +972 Magazine about these programs and urges viewers to examine that reporting; Tucker Carlson’s coverage is mentioned as an example. Matthew argues this demonstrates the dystopian potential of AI in war and cautions against assuming American AI would be more benevolent. He mentions commentators’ attempts to justify or excuse such actions, including a remark attributed to Mike Huckabee that “Israel did not attack Qatar. They just sent a missile into their country aimed at one person,” noting the nearby injuries or deaths. He ends with a reminder of Orwell’s reflections on war and the idea that those who cheer for war may be less enthusiastic if they experience its costs, suggesting a broader aim to make the costs of war felt among ruling elites who benefit from it.

Video Saved From X

reSee.it Video Transcript AI Summary
In this video, we explore a world where presentations and artificial intelligence come together. To use this technology, simply input the topic or title of your presentation and let Decktopus do the thinking. You can also choose your goal for the presentation to optimize the suggested content. With this tool, you'll have a first draft to start working with.

Video Saved From X

reSee.it Video Transcript AI Summary
We launched an initiative to improve research on how automated processes curate online experiences. Understanding misinformation and disinformation is crucial. Ignoring this problem threatens the values we hold dear. Disinformation can perpetuate wars, hinder climate change efforts, and violate human rights. We must prevent these weapons of war from becoming normalized. Though we face many battles, there is cause for optimism. For every new weapon, there is a new tool to overcome it. We have the means, we just need the collective will.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. And if you're a VC, I'm about to open my seed round, partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
Welcome to Act.IL, the first pro-Israel grassroots online movement. Our app provides the tools to counter efforts to delegitimize Israel online, right in your hand. On the homepage, you can scroll through missions or choose them by platform. Each mission includes a description and a step-by-step video explanation. In one example, after only forty-eight hours and over 2,000 reports, Facebook took the reported content down. This is the power of the masses. If you want to learn more about Israel, click on our new feature, the Fact Library. You can also join communities based on location, organization, or skills. Completing missions unlocks more missions and earns points for prizes, which you can track on the leaderboards. For help, check the FAQ tab for video tutorials, or contact us directly.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 opens by noting the Trump administration recently launched a cyber strategy amid the war with Iran and expresses concern that war often serves as a Trojan horse for expanding government power and eroding civil rights. He examines parts of the plan that give him heartburn, focusing on aims to “unveil and embarrass online espionage, destructive propaganda and influence operations, and cultural subversion,” and questions whether the government should police propaganda or cultural subversion, arguing that propaganda is legal and that individuals should be free to express themselves. Speaker 1, Ben Swann, counters by acknowledging that governments are major purveyors of propaganda, but suggests some of the language in the plan could be positive. He says the administration’s phrasing—“unveil and embarrass”—is not about prosecution or imprisonment but about exposing inauthentic campaigns funded by outside groups or foreign governments. He views this as potentially beneficial if it is limited to exposing campaigns that are not authentic grassroots concerns and does not expand censorship. He argues that this approach could roll back some of the censorship apparatus built in previous years. Speaker 2 raises concerns about blurry lines between satire, low-cost AI, and authentic grassroots content, questioning whether the government should determine what is and isn’t authentic. Speaker 1 agrees that it should not be the government’s job to adjudicate authenticity and suggests community notes or crowd-sourced verification as a better mechanism. He gives an example involving Candace Owens’ exposé on Erika Kirk and a cohort of right-wing influencers proclaiming she is demonic, labeling such efforts as propaganda under the plan’s framework. He expresses doubt that the administration would pursue those individuals, though he cannot be sure. The conversation shifts to broader implications of a new cyber task force: Speaker 1 cautions that bureaucracy tends to justify its own existence by policing propaganda or bad actors, citing the Russia-focused crackdown era as a precedent. He worries that the language’s vagueness could enable future administrations to expand control, regardless of party. The lack of specifics in “securing emerging technologies” worries both speakers, who interpret it as potentially broad overreach beyond protecting infrastructure, possibly extending into controlling information or AI outputs. Speaker 0 emphasizes that the biggest headaches for war hawks include platforms like TikTok and X, and perhaps certain AIs like Grok. He argues the idea of “securing emerging technologies” could imply controlling truth-telling AI outputs or preventing adverse revelations about Iran. Speaker 1 reiterates that there is no clear smoking gun in the document; the general language makes it hard to assess intent, and the real danger is the ongoing growth and persistence of bureaucracies that can outlast specific administrations. Toward the end, Speaker 1 notes Grok’s ability to verify videos amid widespread wartime misinformation, illustrating how AI verification could counter claims of fake footage, while also acknowledging the broader risk of information manipulation and the government’s expanding role. The discussion closes with a wary reflection on the disinformation governance era and the balance between safeguarding free speech and preventing government overreach.

Video Saved From X

reSee.it Video Transcript AI Summary
Welcome to the Movers platform, where you can advocate for Israel and help remove false and anti-Israeli content. Simply log in with your email to track your progress. You can support Israel by copying and posting provided content on Instagram, reporting anti-Israeli content, engaging in groups, or submitting positive/negative content about Israel. It's a user-friendly platform, and your efforts are greatly appreciated.

Video Saved From X

reSee.it Video Transcript AI Summary
"But October 7 in the Hamas raid in Southern Israel changed minds on this app. Explain how." "over 60% of the content that is pro Hamas, pro Palestine content, it's actually generated in Bangladesh, Malaysia, Egypt, Saudi Arabia, Pakistan, and then it is actually amplified in TikTok users' feeds in The United States." "the majority of the anti Israel content, it's actually generated and created overseas, and then the algorithm is tailored to push that content here in America." "it's not actually generated here in The United States. It's not a reflection of the sentiment here in The United States." "But think about the fact that in Israel, they have TikTok, and in Israel, they have manipulated the algorithm to show 90% of the sentiment is for pro Hamas in Israel." "Do you really think that Israelis after October 7 feel that that is the case?"

Video Saved From X

reSee.it Video Transcript AI Summary
This is Amani Brahim from DeepTrust, introducing CapOrNot. It's a bot I built using the DeepTrust speech alpha model to detect deepfake voices on Twitter. To use it, tag the bot in a video you want to fact-check. It will respond with a speech analysis output, including an average score and a heat map showing where it detects deepfake content. In an example, the bot correctly identifies a silent portion of the video. It's a cool tool.

Video Saved From X

reSee.it Video Transcript AI Summary
Over the past decade, anti-Semitism has shifted online, making it easier to generate and spread hateful content. To address this, the Ministry of Diaspora Affairs developed a system that monitors anti-Semitism on the entire internet, focusing on Facebook and Twitter. Using artificial intelligence, the system identifies around 10,000 anti-Semitic posts daily out of 200,000 suspect posts. By making this information public, it aims to shame individuals and deter anti-Semitism. Additionally, a command center in Tel Aviv analyzes the data and takes action, such as notifying law enforcement or city officials about specific instances. The speaker urges Facebook and Twitter to take responsibility and not allow anti-Semitism under the guise of freedom of speech.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 1 and Speaker 0 discuss the implications of AI in military use. They consider whether consumer AI is being bypassed with a secure, military-specific platform that would be sealed—essentially one-way in and no information out—for the Pentagon and military services. The key questions raised are: who controls the AI, who informs its algorithms, and who gives it its orders on how to answer questions, highlighting concerns about privatization and outsourcing of war. Speaker 1 argues that the future of war with AI hinges on two issues: ownership of AI platforms and the sources of their programming. They note that AI can deflect or defer to institutional structures rather than empirical accuracy, raising concerns about the reliability of information provided to military personnel. They also reference the myth that advancing technology automatically reduces civilian harm, citing that precision-guided munitions were designed for efficiency, not necessarily to prevent civilian casualties, noting that the intent was to reduce the number of bombs needed to destroy a given target. The conversation shifts to the concept of precision in weapons. Speaker 1 points out that laser- and GPS-guided bombs were not primarily invented to minimize civilian casualties but to increase efficiency. They mention the small diameter bomb as an example, explaining that its use increases the number of bombs that can be deployed rather than primarily limiting collateral damage. The discussion then moves to real-world AI systems used in conflict zones. Speaker 1 cites Israeli programs—Lavender, Gospel, and Where’s Daddy?—as examples of nefarious and insidious AI in war. Lavender supposedly scans the Internet and other databases to identify targets, for example flagging someone as a Hamas supporter based on years of activity. Where’s Daddy? allegedly guides Israeli drones to strike fighters when they are with their families, not away from them. This reporting is linked to coverage from Israeli media and +972 Magazine, and Speaker 2 references Tucker Carlson’s coverage of these issues. Speaker 2 amplifies the point by noting the emotional impact of such capabilities, arguing that targeting men when they are with their children is particularly disturbing. They also discuss broader political reactions, including a remark attributed to Ambassador Huckabee about Israel not attacking Qatar but “sending a missile there” that injured nearby people. Speaker 1 concludes by invoking Orwell’s reflection on the Spanish Civil War, suggesting that those who cheer for war may be confronted by the consequences when modern aircraft enable distant bombing. They emphasize the need to make the costs of war felt by the ruling classes who benefit from it, not just the people on the ground.

Video Saved From X

reSee.it Video Transcript AI Summary
The segment centers on a US-led Civil-Military Coordination Center in southern Israel, established in October 2025 to monitor the Gaza ceasefire. It showcases a map of the Strip, footage of trucks, and a Dataminr report. Dataminr is a private US tech company that uses artificial intelligence to mine social media in real time and issue warnings of critical situations; the segment uses this to highlight the growing relationship between private AI firms and militaries and to signal a structural shift in how warfare is conducted, who controls it, who profits, and how accountability works. Heidy Khlaaf, chief AI scientist at the AI Now Institute, explains that militaries rely too heavily on commercial technologies and are not investing in their own traceable, explainable models, instead using a “black box.” Gaza provides the first confirmation that commercial AI models are being directly used in warfare, justified by speed at the cost of accuracy. The report asserts that Israel’s war in Gaza was not driven solely by soldiers but also by data prediction, location tracking, drone feeds, and AI models built by private tech firms. Palantir is described as a key player, with reports claiming they supplied AI tools to help identify and accelerate targeting of individuals in Gaza, though Palantir has denied these claims. Amazon and Google are said to have provided Israel with cloud infrastructure needed for military AI systems; both companies maintain their services are commercial, not military. These tools are said to have shifted the war from human intelligence to a data industry. While defense contracting is not new, earlier conflicts such as the 2003 US invasion of Iraq relied more on informants and interrogations; AI then involved a human in the loop, with clearer military applications. Now, the line between commercial and military use of AI is blurred, and corporations play a larger role. A key question raised is what it means when a private AI company, rather than the state, controls the infrastructure the military depends on. Khlaaf notes that militaries are ceding control and state obligations to faulty technology developed by private companies with different incentives, which can lead to AI being used to evade accountability for mass civilian casualties due to model inaccuracy. The analysis concludes that war is no longer just a battlefield—it is also about who builds and controls the software governing mass civilian data.

Video Saved From X

reSee.it Video Transcript AI Summary
This is Amani Brahim from DeepTrust, introducing CapOrNot, a bot that uses the DeepTrust speech alpha model to detect deepfake voices on Twitter. To check a video, simply tag the bot and it will respond with a speech analysis output. It provides an average score and a heat map showing where it detects deepfake content throughout the video timeline. For example, in a video where a voice clone is present, the bot accurately detects the deepfake content by showing silence at certain points. CapOrNot is a useful tool for verifying the authenticity of videos on Twitter.
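
Neither entry explains how the bot computes its outputs; a rough sketch of the aggregation its described output implies—per-window scores averaged into an overall score and rendered as a timeline heat map—could look like the Python below. The windowing, the scoring call, and the heat-map rendering are assumptions for illustration, not details of DeepTrust's actual model.

def score_window(audio_window, model):
    # Hypothetical per-window call: `model` returns a synthetic-voice probability in [0, 1].
    return model.score(audio_window)

def analyze(audio_windows, model):
    scores = [score_window(w, model) for w in audio_windows]
    average = sum(scores) / len(scores) if scores else 0.0
    # Coarse heat map over the video timeline: one character per window, low to high suspicion.
    buckets = " .:-=+*#"
    heatmap = "".join(buckets[min(int(s * len(buckets)), len(buckets) - 1)] for s in scores)
    return average, heatmap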

Video Saved From X

reSee.it Video Transcript AI Summary
AI is being misused to create and spread false and hateful information at scale. AI-generated content, including fake videos and photos, is easily produced and often indistinguishable from real content. The barriers to creating such content are low, while financial and strategic gains incentivize its creation. AI content can be created cheaply with minimal human intervention. Deepfake images, audio, and video are being deployed in war zones like Ukraine, Gaza, and Sudan, triggering diplomatic crises, inciting unrest, and creating confusion. This also undermines the work of UN agencies as false information spreads about their intentions and work.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses a use case involving a government agency and a network analysis tool. They explain how the tool can identify coordinated attacks and misinformation by analyzing events, such as the sharing of an image on social media. The tool can build a network of accounts involved in spreading the information and identify patterns. In this case, the tool discovered a network spreading Russian propaganda and misinformation about the Nord Stream pipeline. The speaker demonstrates how the tool can counteract the narrative by generating tweets in Arabic that provide a different perspective. They also mention the potential for the tool to create knowledge across different networks and incorporate multimodal content like images and videos.
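The summary describes the network analysis only at a high level; one common way such coordinated sharing is detected—linking accounts that posted the same image or link within a short time window and looking for unusually dense clusters—is sketched below with networkx. The five-minute window, the input fields, and the use of connected components are assumptions for illustration, not the vendor's actual method.

from itertools import combinations
import networkx as nx

WINDOW_SECONDS = 300  # assumed coordination window

def build_coordination_graph(shares):
    # shares: dicts with "account", "item" (image hash or URL), and "ts" (unix seconds).
    graph = nx.Graph()
    by_item = {}
    for s in shares:
        by_item.setdefault(s["item"], []).append(s)
    for posts in by_item.values():
        posts.sort(key=lambda p: p["ts"])
        for a, b in combinations(posts, 2):
            if a["account"] != b["account"] and abs(a["ts"] - b["ts"]) <= WINDOW_SECONDS:
                weight = graph.get_edge_data(a["account"], b["account"], {}).get("weight", 0)
                graph.add_edge(a["account"], b["account"], weight=weight + 1)
    return graph

def suspicious_clusters(graph, min_size=5):
    # Connected groups of accounts large enough to look coordinated rather than organic.
    return [c for c in nx.connected_components(graph) if len(c) >= min_size]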

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks. Anonymous networks, flagging behavior, predicting danger. We don't get a second chance. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round, partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
TikTok is being targeted for being pro-Palestinian, according to clips from the CEO of the Anti-Defamation League and Senator Ted Cruz. They claim the app spreads anti-Israel sentiment. Cruz is funded by pro-Israel lobbies, leading to quick action against TikTok. Despite users advocating for peace and free speech, politicians are influenced by these lobbies. Many rely on TikTok for education, entertainment, and livelihoods, and fear losing this platform due to corruption.

Possible Podcast

Taiwan's Fight for Democracy | Fmr Digital Minister of Taiwan Audrey Tang & Divya Siddarth
Guests: Audrey Tang, Divya Siddarth
reSee.it Podcast Summary
When AI meets citizen deliberation, democracy may learn to listen at scale. Audrey Tang and Divya Siddarth describe a civic experiment in Taiwan that used collective intelligence to shape upcoming laws, not just public opinion. The Alignment Assemblies bring thousands of voices into AI-assisted deliberation, yielding proposals with broad support that lawmakers later codified. The goal is to move from a future of general AI to a future of augmented collective intelligence guiding policy. Taiwan’s process began with random SMS invitations to 200,000 residents and culminated in 450 civic jurors who deliberated online in small rooms of ten. AI summarized discussions in real time, and jurors refined ideas such as tying online ads to verifiable signatures, holding platforms liable for large scams, and gradually throttling services that refused to operate locally. Within months, the most supported ideas across age, gender, and party lines became law, an information-integrity milestone. Divya frames democracy as three core needs: buy-in and legitimacy, agency, and good decisions. Participation should be broad but governed by expert layers to ensure practical outcomes. Deliberative processes scale to tens of thousands, balancing speed with inclusion. A notable lever is the use of AI to revise documents such as model constitutions; in collaboration with Anthropic, the public rewrote a constitution that then shaped the training of Claude. The resulting production models reflect those public principles. Audrey describes Polis as a counter to online enragement: remove the reply and retweet buttons, show each person’s view, and surface bridging statements that can unite different camps. During COVID and in other debates, depolarizing memes and pre-bunking helped shift conversation toward uncommon ground. She emphasizes transparency—sharing distribution data publicly so legislators can co-create fair policies. When debated, model weights matter less than process transparency; tailor-made, local-language models often perform best in government settings. Looking globally, pilots like Engaged California and Tokyo’s crowdsourced platforms demonstrate possible scale, yet critics warn against technosolutionism. The speakers argue for human agency, local adaptation, and gradual adoption—developing digital twins to negotiate values and using multiple institutions in dialogue rather than a single center. The message is hopeful: augmented collective intelligence can widen participation and improve policy, provided trust in institutions remains the foundation and careful design keeps human judgment central.
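
The summary mentions Polis surfacing bridging statements without saying how; a common approach in Polis-style tools is to cluster participants by their agree/disagree votes and rank each statement by its worst agreement rate across clusters, so only statements that do well in every camp rise to the top. The sketch below assumes a simple vote matrix and k-means clustering; it illustrates the idea rather than Polis's actual implementation.

import numpy as np
from sklearn.cluster import KMeans

def bridging_scores(votes, n_groups=2):
    # votes: (participants x statements) array with 1 = agree, 0 = pass, -1 = disagree.
    groups = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(votes)
    scores = []
    for s in range(votes.shape[1]):
        per_group = [(votes[groups == g, s] == 1).mean() for g in range(n_groups)]
        scores.append(min(per_group))  # a bridging statement must do well in every group
    return np.array(scores)

# Usage sketch: surface the five statements with the highest bridging scores.
# top = np.argsort(-bridging_scores(vote_matrix))[:5]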

Breaking Points

Bibi BRAGS About Social Media TAKEOVER To Influencers
reSee.it Podcast Summary
Influence and power collide as Prime Minister Netanyahu briefs U.S. social media creators in Washington about Israel’s propaganda war. He casts social platforms as battlefield tools, calling TikTok the most important purchase and urging talks with Elon Musk about X to secure a pro-Israel foothold. The briefing, part of a broader push to direct online narratives, stresses fighting anti-Semitism with the new media playbook and highlights TikTok’s reach among everyday users who aren’t engaged in elite discourse. Panel chatter then shifts to censorship, algorithmic influence, and the vulnerability of public opinion to paid messaging. Some speakers push for bans or tighter controls, even on Twitter and TikTok, while others concede the platforms’ power is overwhelming and hard to contest. The discussion enters lobbying and media ownership terrain, noting AIPAC, Hollywood consolidation, and Paramount’s evolving leverage, with HBO and other studios looming as potential power centers. The episode ends by noting that the battle for influence is increasingly conducted through platforms rather than traditional diplomacy.

Breaking Points

Pro Israel CRACKDOWN On Social Media
reSee.it Podcast Summary
A quiet policy overhaul on TikTok could silence Palestine coverage, as a September 13 shift, driven by a new hate speech czar hired after ADL lobbying, reshapes how users discuss Israel and Palestine. Erikica Commandel, described as an IDF instructor and State Department contractor, was installed to supervise changes, which were announced via a post notification when the app opened. The updated guidelines tighten references to violence and public-interest discussions, and require denouncing all designated terrorist organizations when they appear in neutral reporting, a rule the guest says targets Hamas coverage and related reporting. He notes before September 13 his channel had two video removals in six months, but since the change the count rose to eleven, with some cases lacking any option to appeal. Video removals come with restricted visibility, with some posts barred from the for-you feed and others stuck in limbo or shadow-banned, forcing creators to navigate monetization risk as 'soft violations' threaten payments.