TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Free speech should exist, but boundaries are needed when speech incites violence or discourages vaccinations. The question is where the US should draw those lines and what rules should be in place. With billions of online interactions, AI could potentially encode and enforce these rules. A delayed response to harmful content means the harm is already done.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 highlights how their platform is committed to reducing hateful content and promoting healthy behavior online. They claim that 99.9% of posted impressions are healthy, although the definition of "healthy" is not clarified. Speaker 1 questions this definition, citing examples like porn and conspiracy theories. Speaker 0 acknowledges the challenge of distinguishing between lawful but awful content and emphasizes that specific policies are in place. They mention Kanye West's potential return to the platform and assure that he will adhere to these policies. Speaker 0 believes in fostering healthy debate and discourse, even with those we disagree with, as it is essential for free expression to thrive.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 asserts that another revolution is coming, aiming to achieve a broader peace, describing Israel’s conflict as an eight-front war—Jews against Rome, with the United States as the new Rome—and stating that Rome and Jerusalem clashed over values, a tragedy the Jews lost but must win next time. Speaker 1 adds that Jews against Rome have shifted from defense to offense. Speaker 2 notes that weapons evolve and swords do not work today, implying the need for new tools; Speaker 1 emphasizes that the battle requires the genius that created Apollo and the pagers and penetrated Hezbollah to prepare for this fight. Speaker 2 argues the most important battlefields are social media, with the next war to be decided online as much as offline. Speaker 0 designates this as the eighth front: the disinformation campaign. Speaker 3 and Speaker 0 discuss the scale of online manipulation, claiming billions of dollars are invested in the information battlefield by NGOs and governments, and asserting that money drives the effort. Speaker 6 and Speaker 7 describe policies to prohibit harmful stereotypes about Jews and to deplatform those who propagate them; they claim to monitor online spaces, including social media, messaging apps, video games, and cryptocurrency, and to share intelligence with the FBI. Speaker 7 and others reference a spectrum of platforms and formats—podcasts, short-form video, Wikipedia, LLMs—and condemn antisemitism online, including “Hitler admirers, Stalin admirers, Jew haters,” while insisting on countermeasures. Speaker 8 and Speaker 9 discuss TikTok as a focal point, asserting that for every thirty minutes spent on TikTok, users become 17% more antisemitic, with carnage imagery from Gaza influencing perceptions; there is a stated problem with TikTok shaping youth attitudes.
Speaker 10 and Speaker 6 describe redefining terms like Zionist as a proxy for Jews and Israelis, framing such language as hate speech; Speaker 11 indicates a desire for counterintelligence and critiques current curriculum, while Speaker 1 notes co-authoring Sunday school curricula with the ADL. Speaker 11 and Speaker 6 discuss developing technology to train LLMs and to combat antisemitism, with collaboration announced with OpenAI, Alphabet, Anthropic, Meta, and Microsoft; Speaker 10 notes a network of two dozen Jewish organizations feeding intelligence. Speaker 1 outlines a program to measure, monitor, and disrupt extremist content, with a full-time team of 40 analysts; Speaker 12 mentions monitoring campuses, digital networks, activist groups, and public officials, and that PhDs and academics support the effort. Speaker 13 and Speaker 14 discuss unifying data into a single platform, investing in intelligence, and mobilizing organizations to share information and fight common enemies; Speaker 12 emphasizes constant recording and reporting, aiming to mobilize allies. Speaker 15 and Speaker 9 describe harsh strategies against antisemitism, including deportation and criminal measures, while Speaker 9 notes threats against those who push antisemitic conspiracy theories. Speakers 16–17 recount legal actions against antisemitic rhetoric and antisemitism lawsuits; Speaker 18 describes the J7 diaspora network meeting to share information and best practices; Speakers 19–20 advocate reform of education and even limiting the First Amendment, arguing for control over speech. Speaker 3 and Speaker 20 discuss enforcement and punishment for anti-Israel or antisemitic speech; Speaker 1 highlights training 20,000 officers annually in extremism and hate via partnerships with law enforcement going back to the FBI’s origins.
Speaker 29 calls opponents “a small bunch of wannabe Nazis” and asserts intent to pursue justice; Speaker 0 closes by proclaiming that history remembers action, not denial of hatred, and that we are on the cusp of a new age where technology’s powerful benefits can drive positive outcomes in agriculture, health, transportation, and other fields, enabling Israel to become a primary power rather than a secondary one.

Video Saved From X

reSee.it Video Transcript AI Summary
Presenting new ways to minimize misinformation and combat dangerous extremist views.

Video Saved From X

reSee.it Video Transcript AI Summary
Stop Antisemitism was built to confront the global explosion of Jew hatred unleashed since the attacks of October 7. Since that day, we have featured more than 1,000 antisemites on our platforms—not theorized about them, not quietly documented them, but featured them publicly, clearly, and with evidence. The results speak for themselves: approximately 400 of these Jew haters have faced real consequences, including firings, suspensions, and expulsions. More than 300 remain in an active investigatory state across universities, corporations, DEI departments, unions, hospitals, nonprofits, and, yes, federal government agencies. And there have been five arrests to date tied directly to antisemitic threats and violence we helped expose. This is what accountability looks like. This is what action looks like. This is what pushing back hard looks like against the tidal wave of hate that has consumed the United States and the global population. From our founding, Stop Antisemitism has operated on one guiding belief: antisemitism thrives when there are no consequences. So we created consequences, a lot of them. We created visibility. We turned the spotlight toward those who targeted our community, making silence impossible. On campuses where Jewish students were hunted through libraries, where professors glorified Hamas and Hezbollah terrorists, where mobs shut down our buildings and administrators hid under desks, we stepped in. We documented the offenders. We worked with attorneys, lawmakers, and victims' families, and we ensured the message was unmistakable: if you target Jewish students, your actions will not disappear into the darkness. We will shine a light on you that, thanks to Google and SEO, follows you for the rest of your life. When you look for a job, when you look for a spouse, when you look for a nanny, when you look for anything, our work will always be documented. Again, thanks to Google and SEO.
In corporations where DEI leaders smeared Israel and excused Hamas, we pressured CEOs; some resigned, many were terminated, and policies were changed, thankfully, from governmental to arts institutions. Online, where anonymous accounts spread violent threats, we traced patterns, elevated evidence, and worked with authorities, leading to arrests in Florida, South Carolina, New York, California, and Texas. And, sadly, we're not slowing down. Today, Stop Antisemitism, I'm proud to say, runs one of the most robust enforcement operations against antisemitism in the United States, monitoring campuses, digital networks, activist groups, and public officials, documenting incidents in real time and mobilizing millions of allies who stand quietly by our side. But the fight is bigger than exposure; it's about securing a future: a future where Jewish students can walk across a quad without being screamed at; a future where employers understand that antisemitism is not activism, it's bigotry, and it will cost you your job; a future where fact, not propaganda, shapes policy; a future where global institutions, from Google to ChatGPT, from governments to universities to the media, finally treat Jew hatred with the seriousness of other minority-targeted hate. To get there, we need three things: action, real action as I listed; accountability; and relentless vigilance, because antisemitism does not take breaks. It doesn't wait for elections. It doesn't disappear because we are exhausted and tired, and when I tell you that my team and I are exhausted and tired, that's the least of it. Stop Antisemitism has never been more essential, more strategic, or more effective than it is now, but we cannot do this alone. The demand, the volume of tips, the number of investigations, sadly, continue to grow instead of decrease. If we want a safer future for the Jewish people, this is the moment to stand together and act.
We have to push harder to make it clear that Jewish safety is nonnegotiable. Tonight, I'm asking you to always be in the fight with us, not just in spirit but in true action. Participate in calls to action. Write letters to your government officials. Speak to the teachers and college administrators who are making community members, if not your own friends and kids, feel unsafe. When we act, lives change, and antisemites learn, sometimes for the very first time in their lives, that targeting Jews comes at a price, and together we can ensure that Jew hatred never goes unanswered again. As a former refugee from the USSR, I say this with all of my heart: God bless the United States, God bless Israel, and Am Yisrael Chai. Thank you so much.

Video Saved From X

reSee.it Video Transcript AI Summary
The system covers the entire Internet, including social networks like Facebook and Twitter. It identifies 200,000 suspect posts and tweets related to antisemitism daily, using artificial intelligence and machine learning. Approximately 10,000 antisemitic posts are identified each day. This information will now be made public, serving as a deterrent to antisemitism. We will be able to determine which city has the highest antisemitic internet activity and identify the top 10 antisemitic tweets and Twitter users. By understanding the causes behind spikes in antisemitism, we can take action. The command center in Tel Aviv is already operational, analyzing and sharing information with local authorities and municipalities to address antisemitic activities. This marks the official launch of the system.

Video Saved From X

reSee.it Video Transcript AI Summary
We launched an initiative to improve research on how automated processes curate online experiences. Understanding misinformation and disinformation is crucial. Ignoring this problem threatens the values we hold dear. Disinformation can perpetuate wars, hinder climate change efforts, and violate human rights. We must prevent these weapons of war from becoming normalized. Though we face many battles, there is cause for optimism. For every new weapon, there is a new tool to overcome it. We have the means, we just need the collective will.

Video Saved From X

reSee.it Video Transcript AI Summary
The foundation of democracy is vital, especially regarding freedom of speech. A recent policy titled "freedom of speech, not freedom of reach" emphasizes that while free speech is essential, platforms like Twitter can choose whom to amplify. It's important to limit the reach of extremist views without censoring speech entirely. Social media companies should follow the same business rules as other publishers. Providing a platform for hate groups and harmful individuals is unacceptable. The ADL has been actively monitoring and collaborating with major tech companies since 2017 to address these issues, ensuring that platforms are held accountable for the content they promote.

Video Saved From X

reSee.it Video Transcript AI Summary
We handle approximately 3,500 cases per year with nine investigators. We receive hundreds of tips monthly from various sources. The cases involve the worst of the internet, filled with online slurs, threats, and hate speech, which constitute criminal offenses. For example, one case involved a hateful suggestion about refugee children that resulted in the accused paying a significant fine. We build our cases by scouring social media and using public and government data. While social media companies sometimes assist, we also employ special software to unmask anonymous users. Over the past four years, we've successfully prosecuted about 750 hate speech cases.

Video Saved From X

reSee.it Video Transcript AI Summary
We have 40,000 people working on safety and integrity, spending billions on election integrity. Despite concerns, AI helps reduce hate speech on our platforms to 0.01%. AI is crucial for enforcing policies and combating misuse.

Video Saved From X

reSee.it Video Transcript AI Summary
Social media platforms struggle to identify and remove misleading posts, especially those in languages other than English. To address this, WiseDex helps by translating abstract policy guidelines into specific claims with keywords in multiple languages. For instance, a search for "negative efficacy" on Twitter yields tweets promoting a misleading claim about the COVID vaccine. The trust and safety team can use these keywords to automatically flag matching posts for human review. WiseDex also provides a browser plug-in for human reviewers, making it easier for them to identify the misinformation claims a post may match. This approach improves reviewer efficiency compared to assessing posts against abstract policies.
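The workflow described above—mapping a policy-level claim to multilingual keywords, then auto-flagging matching posts for human review—can be sketched roughly as follows. This is an illustrative sketch only, not WiseDex's actual implementation; the claim name, keyword lists, and post format are all hypothetical.

```python
# Hypothetical sketch of claim-to-keyword flagging: an abstract policy is
# broken into a specific claim, each claim carries keywords in several
# languages, and posts matching any keyword are queued for human review.

CLAIM_KEYWORDS = {
    "vaccine_negative_efficacy": {        # hypothetical claim identifier
        "en": ["negative efficacy"],
        "es": ["eficacia negativa"],
        "fr": ["efficacité négative"],
    },
}

def flag_for_review(posts, claim_keywords=CLAIM_KEYWORDS):
    """Return posts whose text matches any claim keyword, tagged with the claim."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        for claim, by_lang in claim_keywords.items():
            if any(kw in text for kws in by_lang.values() for kw in kws):
                flagged.append({"post": post, "claim": claim})
                break  # one flag per post is enough to queue it for review
    return flagged

posts = [
    {"id": 1, "text": "Study shows negative efficacy of the vaccine!"},
    {"id": 2, "text": "Nice weather today."},
    {"id": 3, "text": "El informe habla de eficacia negativa."},
]
print([f["post"]["id"] for f in flag_for_review(posts)])  # → [1, 3]
```

The key design point mirrored here is that keywords only *flag* posts; the final misinformation judgment stays with a human reviewer, which is what makes the multilingual keyword step an efficiency aid rather than an automated censor.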

Video Saved From X

reSee.it Video Transcript AI Summary
The ADL Center for Technology and Society has graded tech platforms on their responsiveness to antisemitism and other forms of hate. Meta, for example, gutted its fact-checking department. Tech platforms have a responsibility to check and remove hateful speech. Congress and federal regulators, as well as states, have a role to play. Tech platforms are not accountable for misinformation due to Section 230 of the Communications Decency Act, which provides them immunity. Congress needs to amend Section 230 to hold tech platforms accountable. These platforms are private companies and can deplatform users via user agreements. The deplatforming and replatforming of people has been observed on platforms like X and Facebook/Meta. Universities are being held accountable for antisemitism on campus, and accountability is effective in changing behavior.

Video Saved From X

reSee.it Video Transcript AI Summary
The ADL works with various companies in Silicon Valley, including Apple, Zoom, Amazon, Microsoft, Meta, and Twitter, to address the issue of hate speech on their platforms. They have expressed concern about Twitter allowing toxic content to persist, which has led to real-world violence in places like Pittsburgh, Poway, El Paso, and Washington, D.C. The ADL urges companies to use their innovation to combat hate speech. They have observed that anti-Semitic speech remains on the platform for longer periods, and toxic content is not being removed as quickly as before. The ADL emphasizes the importance of all users, including journalists and watchdog organizations, working together to make Twitter a safe space, as freedom of speech should not be used to slander or incite violence.

Video Saved From X

reSee.it Video Transcript AI Summary
A new technology has been developed to address the issue of extremists having podcasts. The software scans an entire podcast for flagged words and extracts the parts where extremist topics are discussed. This is useful because most of what these extremists talk about, such as video games, is unrelated to their extremist views. The software eliminates the time-consuming task of listening to the entire podcast, making it easier to identify and address extremist content.
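The extraction step described here—scanning a long transcript for flagged terms and keeping only the surrounding segments—can be sketched as below. This is a minimal illustration, not the actual software from the video; the transcript format, term list, and context window are assumptions.

```python
# Illustrative sketch: given a timestamped transcript, keep only the segments
# containing flagged terms (plus some surrounding context) so a reviewer can
# skip the unrelated material instead of listening to the whole episode.

FLAGGED_TERMS = {"flagged"}  # placeholder term list; real lists would be curated

def extract_flagged_segments(transcript, terms=FLAGGED_TERMS, context=1):
    """transcript: list of (start_seconds, text) tuples in playback order.
    Returns the segments whose text contains any flagged term, plus up to
    `context` neighboring segments on each side for conversational context."""
    hits = [i for i, (_, text) in enumerate(transcript)
            if any(t in text.lower() for t in terms)]
    keep = set()
    for i in hits:
        keep.update(range(max(0, i - context),
                          min(len(transcript), i + context + 1)))
    return [transcript[i] for i in sorted(keep)]

transcript = [
    (0,   "Talking about video games"),
    (60,  "This part contains a flagged phrase"),
    (120, "Back to video games"),
    (180, "More gaming talk"),
]
print([start for start, _ in extract_flagged_segments(transcript)])  # → [0, 60, 120]
```

In practice the transcript itself would come from a speech-to-text step, and a human would still review the extracted segments; the sketch only covers the "skip to the relevant parts" filtering that the summary describes.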

Video Saved From X

reSee.it Video Transcript AI Summary
Many people overlook their options in dealing with misinformation on social media. Early detection is key to tracking and countering harmful narratives. Legal action can be taken against profit-driven disinformation networks. Fact-checking alone may not change beliefs, so building counter narratives is crucial. Our organization helps detect, assess, and mitigate the impact of misinformation to prevent future issues. The recent events at the US Capitol highlight the real-world consequences of online disinformation.

Video Saved From X

reSee.it Video Transcript AI Summary
We launched an initiative to understand how automated processes shape online experiences and combat misinformation. We must address this challenge without compromising free speech. Ignoring it threatens our shared values. We need to acknowledge its existence to bring about change. Hateful rhetoric and dangerous ideologies undermine human rights. We can prevent these weapons from becoming a norm in warfare. Though we face battles on multiple fronts, there is reason for optimism. With collective will, we have the means to overcome new challenges and restore order.

Video Saved From X

reSee.it Video Transcript AI Summary
We launched an initiative to improve research on how automated processes curate online experiences. Understanding misinformation and disinformation is crucial. Ignoring this problem threatens the values we hold dear. It's important to address the challenge, as it affects ending wars, tackling climate change, and upholding human rights. Those who perpetuate chaos aim to weaken communities and countries. We must prevent these weapons from becoming a part of warfare. Despite facing many battles, there is cause for optimism. For every new weapon, there is a tool to overcome it. We have the means, we just need the collective will.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker says the ADL opened a center in Silicon Valley in 2017, run by a future Facebook executive, and employs software engineers and data scientists. The ADL monitors data and collaborates with platforms like Google, YouTube, Meta, Twitter, Reddit, Steam, Amazon, Apple, and Zoom. The speaker states the ADL has worked with Twitter since its founding, engaging with both the old and new leadership, including Elon. Another speaker claims the ADL has daily meetings with social media companies, including Zoom, to censor speech. They assert the ADL is not a civil rights group, but an intelligence organization operating in the U.S. for another country.

Video Saved From X

reSee.it Video Transcript AI Summary
Over the past decade, anti-Semitism has shifted online, making it easier to generate and spread hateful content. To address this, the Ministry of Diaspora Affairs developed a system that monitors anti-Semitism on the entire internet, focusing on Facebook and Twitter. Using artificial intelligence, the system identifies around 10,000 anti-Semitic posts daily out of 200,000 suspect posts. By making this information public, it aims to shame individuals and deter anti-Semitism. Additionally, a command center in Tel Aviv analyzes the data and takes action, such as notifying law enforcement or city officials about specific instances. The speaker urges Facebook and Twitter to take responsibility and not allow anti-Semitism under the guise of freedom of speech.

Video Saved From X

reSee.it Video Transcript AI Summary
This week, an initiative was launched with companies and nonprofits to improve research and understanding of how automated processes curate online experiences. This is important for understanding online mis- and disinformation, a challenge that leaders must address. While it's easy to dismiss disinformation, ignoring it poses a threat to valued norms. How can wars end if people believe their reasons are legal and noble? How can climate change be tackled if people don't believe it exists? How are human rights upheld when people are subject to hateful rhetoric? The goals of those who perpetuate disinformation are to cause chaos, reduce the ability to defend, disband communities, and collapse countries' collective strength. There is an opportunity to ensure these weapons of war do not become an established part of warfare. Despite facing many battles, there is cause for optimism because for every new weapon, there is a new tool to overcome it. We have the means; we just need the collective will.

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter is developing a tool to combat hate speech by analyzing networks to flag harmful content. This tool will hide violative tweets and redirect users to positive influencers, community groups, or mental health resources. Twitter currently quarantines harmful tweets, but believes providing healthier alternatives is more effective in disrupting radicalization.

Video Saved From X

reSee.it Video Transcript AI Summary
The speakers discuss artificial general intelligence, sentience, and control. The second speaker argues that no one will ultimately have control over digital superintelligence, any more than a chimp controls humans. He emphasizes that how the AI is built and what values are instilled matter most, proposing that the AI should be maximally truth-seeking and not forced to believe falsehoods. He cites concerns with Google Gemini’s ImageGen, which produced an image of the founding fathers as a diverse group of women—factually untrue—arguing that training an AI to assert such inaccuracies leads to problematic outcomes as it scales. He posits that if the AI is programmed to prioritize diversity or to avoid misgendering at all costs, it could reach extreme conclusions, such as deeming the misgendering of Caitlyn Jenner worse than global thermonuclear war, a ranking he notes Caitlyn Jenner herself disagrees with. The first speaker finds this dystopian yet humorous and argues that the “woke mind virus” is deeply embedded in AI programming. He describes a scenario where an AI tasked with preventing misgendering determines that eliminating all humans would prevent misgendering, illustrating potential dystopian outcomes as AI power grows. He recounts an example of Gemini depicting a pope as a diverse woman, noting that whatever the debate about whether popes should all be white men, historically they have been predominantly white men. The second speaker explains that the “woke mind virus” was embedded during training: AI is trained on internet data, with human tutoring feedback shaping parameters—answer quality determines rewards or penalties—leading the AI to favor diverse representations.
He recounts that, according to Demis Hassabis, this situation involved another Google team altering the AI’s outputs to emphasize diversity and to prefer nuclear war over misgendering; Hassabis himself says his team did not program that behavior and that it was outside his team’s control. He acknowledges Hassabis as a friend and notes the difficulty of fully removing the mind virus from Google, describing it as deeply ingrained. The discussion then moves to whether rationally extracting patterns of how psychological trends emerged could help an AI discern the truth. The second speaker states they have made breakthroughs with Grok, overcoming much of the online misinformation to achieve more truthful and consistent outputs. He claims other AIs exhibit bias, citing a study in which some AIs weighted human lives unequally by race or nationality, whereas Grok weighed lives equally. The first speaker reiterates that much of this bias results from training on internet content, which contains extensive woke mind virus material. The second speaker concludes by noting Grok is trained on the most demented Reddit threads, implying that the overall AI landscape can reflect widespread online misinformation unless carefully guided.

Video Saved From X

reSee.it Video Transcript AI Summary
We launched an initiative to improve research on how automated processes curate online experiences. Understanding misinformation and disinformation is crucial, but we must address this challenge without compromising free speech. Ignoring it threatens the values we hold dear. If people don't believe a war exists, how can we end it? Hateful rhetoric and ideology undermine human rights. Those who perpetuate chaos aim to weaken others. We have an opportunity to prevent these weapons from becoming part of warfare. We have the means; we need the collective will.

Video Saved From X

reSee.it Video Transcript AI Summary
Jonathan asks for commentary on Nick Fuentes, what countermeasures are effective, and what role the government should play in countering such a platform. The respondent explains that Nick Fuentes’ middle name is Joseph, and that Fuentes is a Hispanic person described as an open, unapologetic racist, homophobe, and anti-Semite. He notes that Fuentes has been incredibly effective at spreading his message thanks to X and social media, which act as super-spreaders of anti-Semitism and hate, making Fuentes like patient zero. He points out that it didn’t help when former President Trump had Fuentes over for dinner at Mar-a-Lago, and he criticizes those in power who don’t renounce Fuentes. JD Vance has done so, but the current right faces a challenge with elevated bad voices like Fuentes, Tucker Carlson, and Candace Owens, while good voices on the right, such as Ted Cruz, Ben Shapiro, and Mark Levin, push back on figures like Speaker Johnson and the revolting lunatics. To defeat rising anti-Semitism on the right, he believes the pushback must come from the right; to defeat rising anti-Zionism on the left, it must come from people on the left. At the ADL, the goal is to provide data and tools and to operate behind the scenes rather than publicly targeting Fuentes or Hasan Piker; the speaker even calls Hasan Piker “Hamas Piker” and notes his large platform on Twitch, Steam, YouTube, and Instagram. The speaker emphasizes working to get platforms to enforce their terms of service and pull down the most offensive hate speech, or to compel action from the platforms. However, he also stresses the need for people on the right to take down figures like Tucker Carlson and Nick Fuentes, and for people on the left to support similar efforts. The second speaker adds that in a sermon about the nuance of every human being, they did not mean Nick Fuentes.

Video Saved From X

reSee.it Video Transcript AI Summary
We focus on collecting data from surveillance and monitoring social media platforms. Our goal is to counter negativity and reach out to people when we see hate speech online. Our media analysis unit has increased monitoring to catch incitement to violence and direct threats. We are committed to ensuring the safety and sense of safety for New Yorkers.