TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Doxing, which includes revealing the real identity behind someone's pseudonym, will result in temporary suspensions. Permanent suspensions are rare. It doesn't matter who you are, doxing is not acceptable. Revealing identities can have serious consequences, inhibiting public dialogue. Professors have been suspended for simply liking a post on social media. This shows the need for anonymous posting to allow people to freely express themselves, especially if speaking openly means risking their jobs.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker introduces Web, a tool built to allow natural-language conversations with an entire document set (specifically mentioning the Epstein files and expanding to other datasets, including items like the dancing Israeli files and Israeli art students files). Web enables users to ask normal questions, for example: "show me examples of his foundations, charities, and businesses interacting with Israelis or organizations based in Israel." The tool analyzes the documents based on the user's natural-language prompt and returns results with sources cited.

Key features demonstrated:
- When a query is run, Web pulls back all relevant documents, which can be clicked to turn red and opened as primary sources. Users can see the work the tool is doing, including entities such as Ehud Barak and the network of Ehud Barak, Wexner, and Epstein, as it compiles the research.
- The response is written in natural language for easy understanding, with sources cited. The primary sources remain accessible on the left in their original organizational structure, allowing users to read documents in their original form.
- The tool will not browse the internet or conduct external research to answer questions; it references only the files in the user's document set and provides citations that can be checked.

The current usage experience:
- It's possible to ask follow-up questions and expand the chat, using suggested questions or generating new ones.
- The user interface shows both the generated explanation and its sources (with links to the documents).

Operational and access details:
- The speaker endorses Web as "the absolute shit" and encourages people to try it. It's now offered, without a password gate, as an open beta to anyone who wants to try it.
- The speaker has personally funded the tokens for the beta so users can access it for free during this phase; beta testers aren't required to pay.
- He notes that running AI tools costs money due to compute resources; after the open beta, Web will transition to a subscription model with access to additional datasets.
- Plans include open-sourcing the project later, allowing people to download and run it themselves and examine the code (with a caveat: selling it would not be allowed).
- The stated goal is broad accessibility, so that "any old person can understand these documents," and to show clearly who Epstein worked for and what was in the files, with all content retained even if the DOJ deletes files from the public domain: "we've already got them all and they're not being deleted from our database."
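The retrieval behavior described (answering only from the user's own document set and citing sources, never browsing the internet) matches a standard retrieval-with-citations pattern. The following is a minimal illustrative sketch, not Web's actual implementation; the file names, scoring scheme, and function names are all assumptions:

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a crude stand-in for embedding-based retrieval.
    return re.findall(r"[a-z']+", text.lower())

def score(query_tokens, doc_tokens):
    # Term-overlap score: how many query terms appear in the document.
    counts = Counter(doc_tokens)
    return sum(counts[t] for t in set(query_tokens))

def answer_with_citations(query, documents, top_k=2):
    """Return the top-k matching documents as (source, excerpt) citations.

    `documents` maps a source name to its full text; only these files are
    consulted -- nothing is fetched from the internet.
    """
    q = tokenize(query)
    ranked = sorted(
        documents.items(),
        key=lambda item: score(q, tokenize(item[1])),
        reverse=True,
    )
    return [
        (name, text[:80])
        for name, text in ranked[:top_k]
        if score(q, tokenize(text)) > 0
    ]

# Two toy "files" standing in for a document set (contents invented).
docs = {
    "deposition_01.txt": "The foundation met with organizations based in Israel.",
    "flight_log_03.txt": "Passenger manifest for the March flight.",
}
citations = answer_with_citations("foundations interacting with Israel", docs)
```

A production system would replace the term-overlap scorer with embedding search and feed the retrieved excerpts to a language model, but the citation discipline, returning only what exists in the local corpus, is the same.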

Video Saved From X

reSee.it Video Transcript AI Summary
Online platforms, particularly X, often serve as a breeding ground for hatred. There is a lack of effective regulation to combat online hate, including Islamophobia and racism, which can be found in numerous posts daily. Social media platforms are not doing enough to address these issues, and the spread of fake news often exacerbates the problem.

Video Saved From X

reSee.it Video Transcript AI Summary
If you're not paying for the product, congratulations. You are the product. Social media tracks you like a hawk. Search engines, they're not just answering your questions. They're selling you. Those free apps you love? Excavation. They're not tools. They're data vacuums sucking up every bit of information they can find. Every like, every scroll, every pause, that's value being extracted from you. You thought you were the user. Right? But guess what? You're the asset, the metric, the line item on a balance sheet. You're not just scrolling through your feed. You're monetized, packaged, and sold to the highest bidder. You're not just a participant. You're the product on the shelf waiting to be picked up and exploited. So next time you think you're getting something for free, remember, nothing is free. You're the one paying the price.

Video Saved From X

reSee.it Video Transcript AI Summary
The Internet doesn't delete. It archives. Every click, every typo, every late night search you hoped no one saw. It's all logged by your apps, your ISP, your phone, even your smart fridge if it's nosy enough. You think you've wiped the slate clean, but it's all still there, tucked away in the shadows. Excavation. It's stored where you see it. It's stored where they can sell it. Because forgetting has no profit. But remembering, that's where the money is. Your data has a memory and it's not yours anymore. Those innocent searches, those fleeting moments of curiosity, they're commodities now packaged and sold to the highest bidder. Every detail, every secret you thought was yours is out there waiting to be exploited. So go ahead. Keep scrolling. Keep searching. Just remember, the Internet never forgets.

Video Saved From X

reSee.it Video Transcript AI Summary
Stop Antisemitism was built to confront the global explosion of Jew hatred unleashed since the attacks of October 7. Since that day, we have featured more than 1,000 antisemites on our platforms: not theorized about them, not quietly documented them, but featured them publicly, clearly, and with evidence. The results speak for themselves: approximately 400 of these Jew haters have faced real consequences, including firings, suspensions, and expulsions. More than 300 remain under active investigation across universities, corporations, DEI departments, unions, hospitals, nonprofits, and, yes, federal government agencies. And there have been five arrests to date tied directly to threats and violent antisemitic conduct we helped expose. This is what accountability looks like. This is what action looks like. This is what pushing back hard looks like against the tidal wave of hate that has consumed the United States and the global population. From our founding, Stop Antisemitism has operated on one guiding belief: antisemitism thrives when there are no consequences. So we created consequences, a lot of them. We created visibility. We turned the spotlight toward those who targeted our community, making silence impossible. On campuses where Jewish students were hunted through libraries, where professors glorified Hamas and Hezbollah terrorists, where mobs shut down our buildings and administrators hid under desks, we stepped in. We documented the offenders. We worked with attorneys, lawmakers, and victims' families, and we ensured the message was unmistakable: if you target Jewish students, your actions will not disappear into the darkness. We will shine a light on you that, thanks to Google and SEO, will follow you for the rest of your life. When you look for a job, when you look for a spouse, when you look for a nanny, when you look for anything, our work will always be documented. Again, thanks to Google and SEO.
In corporations where DEI leaders smeared Israel and excused Hamas, we pressured CEOs; some resigned, many were terminated, and policies were changed, thankfully, from governmental to arts institutions. Online, where anonymous accounts spread violent threats, we traced patterns, elevated evidence, and worked with authorities, leading to arrests in Florida, South Carolina, New York, California, and Texas. And, sadly, we're not slowing down. Today, I'm proud to say, Stop Antisemitism runs one of the most robust operations against antisemitism in the United States, monitoring campuses, digital networks, activist groups, and public officials, documenting incidents in real time, and mobilizing millions of allies who are quietly by our side. But the fight is bigger than exposure; it's about securing a future. A future where Jewish students can walk across a quad without being screamed at. A future where employers understand that antisemitism is not activism; it's bigotry, and it will cost you your job. A future where fact, not propaganda, shapes policy. A future where global institutions, from Google to ChatGPT, from governments to universities to media, finally treat Jew hatred with the seriousness given to other minority-targeted hate. To get there, we need three things: real action, as I listed; accountability; and relentless vigilance, because antisemitism does not take breaks. It doesn't wait for elections. It doesn't disappear because we are exhausted and tired, and when I tell you that my team and I are exhausted and tired, that's the least of it. Stop Antisemitism has never been more essential, more strategic, or more effective than it is now, but we cannot do this alone. The demand, the volume of tips, the number of investigations: sadly, it all continues to grow instead of decrease. If we want a safer future for the Jewish people, this is the moment to stand together and act.
We have to push harder to make it clear that Jewish safety is nonnegotiable. Tonight, I'm asking you to always be in the fight with us, not just in spirit but in true action. Participate in calls to action. Write letters to your government officials. Speak to the teachers and college administrators who are making students, if not your own friends and kids, then other community members, feel unsafe. When we act, lives change, and antisemites learn, sometimes for the very first time in their lives, that targeting Jews comes at a price; together we can ensure that Jew hatred never goes unanswered again. As a former refugee from the USSR, I say this with all of my heart: God bless the United States, God bless Israel, and Am Yisrael Chai. Thank you so much.

Video Saved From X

reSee.it Video Transcript AI Summary
We propose linking digital identities like France Identité or La Poste's digital identity to Facebook accounts. This would confirm that there is a real person behind the account and provide an encrypted code that only authorities can decipher in specific cases of illegal activity. The idea is to know who you are, even if you use a pseudonym and a cat photo on Facebook. Anonymity is not the goal; instead, we want to associate your account with a digital identity to ensure you are not anonymous in the end.

Video Saved From X

reSee.it Video Transcript AI Summary
The system covers the entire Internet, including social networks like Facebook and Twitter. Using artificial intelligence and machine learning, it surfaces 200,000 suspect posts and tweets related to antisemitism daily; of these, approximately 10,000 are identified as antisemitic each day. This information will now be made public, serving as a deterrent to antisemitism. We will be able to determine which city has the highest antisemitic internet activity and identify the top 10 antisemitic tweets and Twitter users. By understanding the causes behind spikes in antisemitism, we can take action. The command center in Tel Aviv is already operational, analyzing and sharing information with local authorities and municipalities to address antisemitic activities. This marks the official launch of the system.
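The reporting described (which city has the most activity, the top 10 accounts) reduces to counting flagged posts by metadata field. A hedged sketch of that aggregation step, with the record layout (`city`, `user` fields) assumed purely for illustration:

```python
from collections import Counter

def activity_report(flagged_posts, top_n=10):
    """Aggregate flagged posts into a top city and a top-N user list.

    `flagged_posts` is a list of dicts with hypothetical `city` and `user`
    fields; a real system would derive these from post metadata.
    """
    by_city = Counter(p["city"] for p in flagged_posts)
    by_user = Counter(p["user"] for p in flagged_posts)
    return {
        "top_city": by_city.most_common(1)[0][0],
        "top_users": [user for user, _ in by_user.most_common(top_n)],
    }

# Toy flagged-post records (entirely made up).
posts = [
    {"city": "A", "user": "u1"},
    {"city": "A", "user": "u2"},
    {"city": "B", "user": "u1"},
]
report = activity_report(posts)
```

`Counter.most_common` does the ranking; at scale the same counts would come from a database `GROUP BY`, but the logic is identical.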

Video Saved From X

reSee.it Video Transcript AI Summary
We handle approximately 3,500 cases per year with nine investigators. We receive hundreds of tips monthly from various sources. The cases involve the worst of the internet, filled with online slurs, threats, and hate speech, which constitute criminal offenses. For example, one case involved a hateful suggestion about refugee children that resulted in the accused paying a significant fine. We build our cases by scouring social media and using public and government data. While social media companies sometimes assist, we also employ special software to unmask anonymous users. Over the past four years, we've successfully prosecuted about 750 hate speech cases.

Video Saved From X

reSee.it Video Transcript AI Summary
Following mass shootings, social media platforms like X, Facebook, and YouTube often delete the shooter's social media presence during active inquiries. This purging of social media data makes it difficult for researchers and the public to understand the events and the shooter's mindset. The speaker is archiving what appears to be the social media account of a recent shooter but is concerned about potential future legal implications of possessing this data. The speaker questions why this information is immediately purged, asking if law enforcement requests it, if an archive version is retained, or if it's due to terms of service violations. The speaker seeks transparency from X leadership regarding this practice, as the Wayback Machine doesn't fully capture all posts.

Video Saved From X

reSee.it Video Transcript AI Summary
They express that recognition by Microsoft or the UN means little in the face of ongoing genocide, emphasizing that only ending the genocide will earn their respect ("that's when you will have our respect") and that words from politicians or organizations do not solve the problem. Shadow banning is described as a process where big tech restricts content reach for users, aligning with policy or regulators to support the propaganda they serve. Content labeling before model training could be biased (e.g., sourced from the IDF), leading to content being flagged and pro-Palestinian users banned. Meta later calls such issues "bugs," but they are viewed as deliberate actions to suppress certain content. They claim Larry Ellison, the owner of Oracle, is the biggest contributor to the Friends of the IDF charity, with a most recent contribution of around $16,000,000. They assert that if a person who is friends with Benjamin Netanyahu and Israel owns 80% of TikTok, and Netanyahu promotes using TikTok and X to spread their narrative, it demonstrates the danger of social media in shaping global views and the propaganda machine. They accuse these entities of trying to control social media to brainwash younger generations, potentially restricting pro-Palestinian speech. Lobbying is described as highly structured, with knowledge of where to go, who to speak with, and organizations that move money toward actions aligned with those goals. They urge each person to contribute their own skills toward a free Palestine, noting strengths in tech, music, journalism, etc., and to create alternatives and support one another to change the dynamic. They argue that Zionists became powerful through mutual support, while others are weaker due to lack of unity, asserting that unity would strengthen their movement. Hejazi introduces himself as the founder of Upscroll. He is Palestinian, born in Jordan, currently living in Australia, with seventeen years of experience in big tech.
The genocide's ongoing impact changed his life, leading him to feel complicit through his work in big tech and to witness the shadow banning of friends, family, and others posting about Gaza. He mentions that 60 of his relatives were killed in Gaza. He quit his successful professional career to devote himself to building Upscroll, an independent social media platform to counter the influence of Meta, X, and TikTok. Upscroll launched a couple of months ago and is similar to Instagram and X, with TikTok-style features coming soon; tens of thousands of users are joining monthly. On launch, uptake was rapid: hundreds, then thousands, then tens of thousands of users sought an alternative to shadow bans, wanting their content to reach others. The platform is presented as a response to the pain of posting without reach and the desire to become independent from the dominant platforms.

Video Saved From X

reSee.it Video Transcript AI Summary
Social media platforms struggle to identify and remove misleading posts, especially those in languages other than English. To address this, WiseDex helps by translating abstract policy guidelines into specific claims with keywords in multiple languages. For instance, a search for "negative efficacy" on Twitter yields tweets promoting the misleading claim that the COVID vaccine has negative efficacy. The trust and safety team can use these keywords to automatically flag matching posts for human review. WiseDex also provides a browser plug-in for human reviewers, making it easier for them to identify which misinformation claims may match a post. This approach improves reviewer efficiency compared to assessing posts against abstract policies.
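The flagging step described, matching posts against policy-derived keywords in several languages and queuing matches for human review, can be sketched as a claim-to-keywords lookup. The claim table and keywords below are illustrative, not WiseDex's actual data:

```python
def flag_for_review(posts, claim_keywords):
    """Return posts that match any keyword for any misinformation claim.

    `claim_keywords` maps a claim label to keyword lists per language.
    Matched posts are queued for a human reviewer, not auto-removed.
    """
    flagged = []
    for post in posts:
        text = post.lower()
        matches = [
            claim
            for claim, by_lang in claim_keywords.items()
            for words in by_lang.values()
            if any(word in text for word in words)
        ]
        if matches:
            flagged.append((post, sorted(set(matches))))
    return flagged

# Hypothetical claim -> {language: keywords} table.
claims = {
    "negative-efficacy": {
        "en": ["negative efficacy"],
        "de": ["negative wirksamkeit"],
    },
}
queue = flag_for_review(
    ["Study shows negative efficacy!", "Nice weather today."], claims
)
```

Keeping the keywords per-language is what lets one abstract policy catch the same claim in non-English posts, the gap the summary says platforms struggle with.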

Video Saved From X

reSee.it Video Transcript AI Summary
If you're not paying for the product, congratulations. You are the product. Social media tracks you like a hawk. Search engines, they're not just answering your questions. They're selling you. Those free apps you love? Excavation. They're not tools. They're data vacuums sucking up every bit of information they can find. Every like, every scroll, every pause, that's value being extracted from you. You thought you were the user. Right? But guess what? You're the asset, the metric, the line item on a balance sheet. You're not just scrolling through your feed. You're being monetized, packaged, and sold to the highest bidder. Welcome to the Internet economy, folks. You're not just a participant. You're the product on the shelf waiting to be picked up and exploited. So next time you think you're getting something for free, remember, nothing is free. You're the one paying the price.

Video Saved From X

reSee.it Video Transcript AI Summary
Think you deleted your data? That's cute. The Internet doesn't delete. It archives. Every click, every typo, every late night search you hoped no one saw. It's all logged by your apps, your ISP, your phone, even your smart fridge if it's nosy enough. You think you've wiped the slate clean, but it's all still there, tucked away in the shadows. It's stored where you see it. It's stored where they can sell it. Because forgetting has no profit. But remembering, that's where the money is. Your data has a memory and it's not yours anymore. Those innocent searches, those fleeting moments of curiosity, they're commodities now packaged and sold to the highest bidder. Every detail, every secret you thought was yours is out there waiting to be exploited. Just remember, the Internet never forgets.

Video Saved From X

reSee.it Video Transcript AI Summary
Palantir collects data from social media sites and uses sentiment analysis. They analyze followers on Twitter and Facebook to create a database. They have a signal detector to identify future followers and push posts to fake versions of individuals. This is part of their strategy to counter a potential class war. They create fake anti-establishment characters in The Matrix. The speaker urges viewers to join a bottom-up movement by volunteering at sheepherforpresident.com.

Video Saved From X

reSee.it Video Transcript AI Summary
Over the past decade, anti-Semitism has shifted online, making it easier to generate and spread hateful content. To address this, the Ministry of Diaspora Affairs developed a system that monitors anti-Semitism on the entire internet, focusing on Facebook and Twitter. Using artificial intelligence, the system identifies around 10,000 anti-Semitic posts daily out of 200,000 suspect posts. By making this information public, it aims to shame individuals and deter anti-Semitism. Additionally, a command center in Tel Aviv analyzes the data and takes action, such as notifying law enforcement or city officials about specific instances. The speaker urges Facebook and Twitter to take responsibility and not allow anti-Semitism under the guise of freedom of speech.

Video Saved From X

reSee.it Video Transcript AI Summary
Think you deleted your data? That's cute. The Internet doesn't delete. It archives. Every click, every typo, every late night search you hoped no one saw. It's all logged by your apps, your ISP, your phone, even your smart fridge if it's nosy enough. You think you've wiped the slate clean, but it's all still there, tucked away in the shadows. Excavation. It's stored where you see it. It's stored where they can sell it. Because forgetting has no profit. But remembering, that's where the money is. Your data has a memory and it's not yours anymore. Those innocent searches, those fleeting moments of curiosity, they're commodities now packaged and sold to the highest bidder. Every detail, every secret you thought was yours is out there waiting to be exploited. So go ahead. Keep scrolling. Keep searching. Just remember, the Internet never forgets.

Video Saved From X

reSee.it Video Transcript AI Summary
X is committed to encouraging healthy behavior online, claiming 99.9% of all post impressions are healthy. When asked to define "healthy," it was stated that lawful-but-awful content is made harder to see under the principle of freedom of speech, not freedom of reach. Although Kanye West, who has not yet rejoined the platform but plans to do so, has millions of followers and is considered "lawful but awful," he will operate within specific, accessible policies. An extraordinary team oversees content to maintain the 99.9% healthy-impression rate. Free expression, at its core, will only survive when someone you don't agree with can say something you don't agree with, allowing for healthy, constructive discourse.

Video Saved From X

reSee.it Video Transcript AI Summary
- "ADL and the University of California at Berkeley's D Lab have been working to develop a new approach to tackle online hate using the latest methods." - "The goal of the online hate index is to help tech platforms better understand the growing amount of hate on social media and to use that information to address the problem." - "By combining artificial intelligence and machine learning with social science, the online hate index will ultimately uncover and identify trends and patterns in hate speech across different platforms." - "We've just completed our first phase of research and we found that the machine learning model identified hate speech accurately between seventy eight and eighty five percent of the time." - "We'll examine content on multiple social media sites and we'll identify strategies to deploy the model more broadly."

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter is developing a tool to combat hate speech by analyzing networks to flag harmful content. This tool will hide violative tweets and redirect users to positive influencers, community groups, or mental health resources. Twitter currently quarantines harmful tweets, but believes providing healthier alternatives is more effective in disrupting radicalization.

Video Saved From X

reSee.it Video Transcript AI Summary
If social media platforms like Facebook, X, Instagram, or TikTok don't moderate and monitor content, we lose total control. This loss of control extends beyond social and psychological effects to include real harm.

Video Saved From X

reSee.it Video Transcript AI Summary
I used to be on Twitter, but it has become toxic and not worth my time. I'm trying to find an alternative to it. Social media needs a code of conduct to address issues like spreading false news and racism. The power of social media platforms should be reflected upon by society. The policy of the owner of X is also problematic. This is a problem that future society needs to address, focusing on ethics in social media.

The Rubin Report

Candace Owens & Blaire White Debate Social Autopsy and Much More | POLITICS | Rubin Report
Guests: Candace Owens, Blaire White
reSee.it Podcast Summary
A long-form discussion unfolds around a controversial online project about public shaming and the responsibilities of creators in the era of mass online discourse. The host frames the conversation as a rare face-to-face encounter between three adults with deep disagreements who nonetheless agree to attempt a constructive exchange about a project intended to address the harms of online bullying. One guest recounts the origins of the project, describing a high-school experience with threats and harassment that influenced her belief in using technology to help manage online behavior. She explains that the idea was to archive public remarks and use that archive as a preventive tool for youth, proposing school involvement and time-bound consequences rather than criminal punishment. The other guest questions the project's methods, particularly the line between archiving public information and doxxing, and raises concerns about privacy, safety, and the potential for real-world harm. The moderator guides the discussion toward clarifying the technical status of the project, the developers' terminology, and what was planned versus what was actually built. The exchange frequently returns to how intent can be misunderstood or misrepresented in online debates, and how miscommunications about jargon (such as the meaning of a splash page versus a functional database) fed a public controversy. Throughout, both guests acknowledge that even well-meaning initiatives can be exploited or misused by others, turning a cautionary idea into a flashpoint for political rhetoric and personal attack. The conversation shifts between personal history, online culture wars, and questions about accountability, asking whether the core idea was misguided or simply poorly executed, and whether the resulting public discourse did more harm than good.
The episode concludes with a reflective note on the climate of digital politics, the difficulty of fully reconciling competing perspectives, and an openness to future dialogue or reconciliation, even if the path forward remains unsettled for many listeners.