TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
A partnership with Palantir aims to address mortgage fraud, and the partners say they have only scratched the surface of what is possible. Previously, it took investigators sixty days to detect fraud; Palantir's technology accomplishes the same task in ten seconds. Palantir is presented as understanding security and rooting out fraud. The partners frame this as a matter of public trust, with the stated goal of understanding mortgage fraud, stopping it, and getting to the bottom of it.

Video Saved From X

reSee.it Video Transcript AI Summary
They describe a monitoring and disruption program with a dedicated apparatus. They have 40 analysts working full time, seven days a week, twenty four hours a day, monitoring extremists online across platforms including social media, messaging apps, video games, cryptocurrency, podcasts, short form video, Wikipedia, and LLMs. They monitor these people and share the intelligence with the FBI. They are monitoring left-wing radicals like the DSA, antiwar activists, and pro-Palestine extremists; right-wing extremists like white supremacists and armed militia groups; political Islamists and Christian nationalists, all of them. They also emphasize training, stating they are the largest trainer of law enforcement in America, training 20,000 officers every year.

Video Saved From X

reSee.it Video Transcript AI Summary
The system covers the entire Internet, including social networks like Facebook and Twitter. Using artificial intelligence and machine learning, it scans roughly 200,000 suspect posts and tweets related to antisemitism daily, of which approximately 10,000 are identified as antisemitic. This information will now be made public, serving as a deterrent to antisemitism. It will be possible to determine which city has the highest antisemitic internet activity and to identify the top 10 antisemitic tweets and Twitter users. By understanding the causes behind spikes in antisemitism, action can be taken. The command center in Tel Aviv is already operational, analyzing and sharing information with local authorities and municipalities to address antisemitic activities. This marks the official launch of the system.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Number one, we measure and track. Number two, we monitor and disrupt. We have a whole apparatus. I have 40 analysts working full time, seven days a week, twenty four hours a day, monitoring extremists. We monitor them online: social media, messaging apps, video games, cryptocurrency, podcasts, short form video, Wikipedia, LLMs. We monitor these people and we share the intelligence with the FBI. You saw last month, you heard about the thing that happened at Wilshire Boulevard Temple. Our analysts investigated what happened. They said they were Koreatown for Palestine, this group of people. They weren't. We were able to ascertain they were from a group called the Turtle Island Liberation Front. Turtle Island is how left wing activists refer to the United States. They don't call it America. They call it Turtle Island. Like the Iranians call it the Zionist entity; they won't call it by its name. The Turtle Island Liberation Front, we gave them a whole dossier. What is the Turtle Island Liberation Front? What are their ideas, their goals? Who are they? We identified the people who were in the synagogue. This was on Wednesday, December 10. On Monday, December 15, this is gonna ring a bell, Kash Patel announced they cracked a terror ring where they arrested four people who were planning New Year's Eve bombings: Turtle Island Liberation Front. At least one of the people I know for certain was in the building at Wilshire Boulevard Temple vandalizing it and disrupting the event. So we're monitoring left wing radicals like the DSA and the anti war crazies and the pro Palestine crazies. We're monitoring right wing extremists like white supremacists, armed militia groups. We're monitoring political Islamists and Christian nationalists, all of them. And then we train. We're the largest trainer of law enforcement in America on extremism and hate. We train 20,000 officers every year.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real time AI system built to detect threats online before they become attacks. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder before they start. Trace, I'm building the first AI driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. And if you're a VC, I'm about to open my seed round, partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
A partnership with Palantir aims to address mortgage fraud and ensure there is no fraud. According to one speaker, they have only scratched the surface with Palantir. Previously, it took investigators sixty days to detect fraud; Palantir's technology completes the same task in ten seconds. One speaker expressed excitement about Palantir's technology and expertise in security and fraud detection. For Palantir, the partnership is a matter of public trust; its stated goal is to understand mortgage fraud, stop it, and get to the bottom of it.

Video Saved From X

reSee.it Video Transcript AI Summary
"And Trump has been openly building databases on people with Palantir." "Palantir also manages all of your health data, because they contract extensively with HHS." "It was called DEEP, and there's been a few arrests under DEEP for people making Facebook posts and things like that." "But anyway, this pitch that Trump made about having social media spy on its users and use, like, analytics to, you know, bring about some sort of pre-crime society." "Didn't ultimately happen in creating this agency called HARPA, which was supposed to be like the health version of the Pentagon's DARPA." "The goal of Palantir, just like it was with total information awareness, is about stopping crime before it happens. It's pre-crime." "There's one in LA called PredPol, and they have an accuracy of half a percent."

Video Saved From X

reSee.it Video Transcript AI Summary
Patrick Sarval is introduced as an author and expert on conspiracies, system architecture, geopolitics, and software systems. Ab Gieterink asks who Patrick Sarval is and what his expertise entails. Sarval describes himself as an IT architect, often a freelance contractor working with various control and cybernetics-oriented systems, with earlier experience including a Bitcoin startup in 2011, photography work for events, and involvement in topics around conspiracy thinking. He notes his books, including Complotcatalogus and Spiegelpaleis, and mentions Seprouter and Niburu in relation to conspiratorial topics. Gieterink references a prior interview about Complotcatalogus and another of Sarval’s books, and sets the stage to discuss Palantir, surveillance, and the internet. The conversation then shifts to explaining Palantir and its significance. Sarval emphasizes Palantir as a key element in a broader trend rather than focusing solely on the company itself. He uses science-fiction analogies to describe how data processing and artificial intelligence are evolving. In particular, he introduces the concept of a “brein” (brain) or “legion” that integrates disparate data streams, builds an ontology, and enables predictive analytics and tactical decision-making. Palantir is described as the intelligence brain that aggregates data from multiple sources to produce meaningful insights. Sarval explains that a rudimentary prototype of such a system operates under the name Lavender in Gaza, where metadata from sources like Meta (Facebook, WhatsApp, Instagram), cell towers, satellites, and other sensors are fed into Palantir. The system performs threat analysis, ranks threats from high to low, and then a military operator—still human—must approve the action, with about 20–25 seconds to decide whether to fire a weapon. 
The claim is that Palantir-like software functions as the brain behind this process, orchestrating data integration, ontology creation, data fusion, digital twins, profiling, predictions, and tactical dissemination. The discussion covers how Palantir integrates data from medical records, parking fines, phone data, WhatsApp contacts, and more, then applies an overarching data model and digital twin to simulate and project outcomes. This enables targeted marketing alongside military uses, illustrating the broad reach of the platform. Sarval notes there are two divisions within Palantir: Gotham (government and military) and Foundry (commercial), which he mentions to illustrate the dual-use nature of the technology. He warns that the system is designed to close feedback loops, allowing it to learn and refine its outputs over time, similar to how a thermostat adjusts heating based on sensor inputs. A central concern is the risk to the rule of law and human agency. The discussion highlights the potential erosion of the presumption of innocence and due process when decisions increasingly rely on predictive models and AI. The panel considers the possibility that in a high-stress battlefield scenario, soldiers or commanders might defer to the Palantir-presented “world view,” making it harder to refuse an order. There is also concern about the shift toward autonomous weapons and the removal of human oversight in critical decisions, raising fears about the ethics and accountability of such systems. The conversation moves to the political and ideological backdrop surrounding Palantir’s leadership. Peter Thiel, Elon Musk, and a close circle with ties to PayPal and other tech-industry figures are discussed. Sarval characterizes Palantir’s leadership as ideologically defined, with statements about Zionism and a political worldview influencing how the technology is developed and deployed.
The dialogue touches on perceived connections to broader geopolitical influence, including the role of influence campaigns, media shaping, and the involvement of powerful networks in technology development and national security. As the discussion progresses, the speakers explore the implications of advanced AI and the “new generative AI” era. They consider the nature of AI and the potential for it to act not just as a data processor but as a decision-maker with emergent properties that challenge human control. The concept of pre-crime—predicting and acting on potential future threats before they materialize—is discussed as a troubling possibility, especially when a machine’s probability-based judgments guide life-and-death actions. Towards the end, the conversation contemplates what a fully dominated surveillance state might look like, including cognitive warfare and personalized influence through media, ads, and social networks. The dialogue returns to questions about how far Palantir and similar systems have penetrated international security programs, with speculation about Gaza, NATO adoption, and commercial uses beyond military applications. The speakers acknowledge the possibility of multiple trajectories and emphasize the need for checks and balances, transparency, and critical reflection on the power such systems confer upon a relatively small group of technologists and influencers. They conclude with a nod to the transformative and potentially dystopian future of AI-enabled surveillance and decision-making, cautioning against unbridled expansion and urging vigilance.

Video Saved From X

reSee.it Video Transcript AI Summary
We can enhance school security by implementing AI cameras to monitor campuses and alert authorities immediately if a weapon is detected. Our redesigned body cameras, costing only $70, continuously record and transmit footage to headquarters, ensuring police accountability. Privacy is maintained, as recordings can only be accessed with a court order. AI monitors these feeds, instantly notifying supervisors of any incidents, promoting better behavior among both police and citizens. Additionally, drones can quickly respond to incidents, such as tracking suspects instead of engaging in high-speed chases, and detecting forest fires autonomously. These AI applications represent a significant advancement in public safety and law enforcement.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI-powered threat detection system for law enforcement and schools. It scans the open web, social media, Reddit, Discord, and gaming chats, flagging grievance buildup, martyrdom language, and tactical planning before someone acts. Law enforcement agencies are on board to pilot it. I'm raising funds directly from my audience—Cohen's commandos, the people who actually care to bring Gideon to life. If you've ever asked yourself, why didn't someone catch this before? This is the answer. Hit the link in the description and donate what you can and please share it. This isn't about politics. This is about protecting America, protecting our kids, and it's about giving law enforcement signal before the next tragedy unfolds. This is Gideon. This is my new mission. Help me build it, and let's do it together.

Video Saved From X

reSee.it Video Transcript AI Summary
Palantir is here to disrupt and make the institutions we partner with the very best in the world and, when it's necessary, to scare enemies and, on occasion, kill them.

Video Saved From X

reSee.it Video Transcript AI Summary
Trump has been openly building databases on people with Palantir. Palantir also manages all of your health data, because they contract extensively with HHS. Trump called on social media companies to stop shooters before they commit a crime, to flag what people were saying on social media, and to use that to determine whether there should be intervention before a crime might be committed. That's Minority Report. William Barr, when he was in office the first time, created a program that legalized pre-crime in the United States, and I think I was maybe one of two people that reported on it at the time. It was called DEEP. The legal framework has been there since Trump round one. That was the pitch Trump made: have social media spy on its users and use analytics to bring about some sort of pre-crime society.

Video Saved From X

reSee.it Video Transcript AI Summary
Correct. I am now about to launch Gideon, America's first ever AI threat detection platform built specifically for law enforcement. It scrapes the Internet twenty-four seven using an Israeli-grade ontology to pull specific threat language and then routes it to local law enforcement. It's a twenty-four-seven detective. It never sleeps, and it's going to get us in front of these attacks. Would it have picked up on this, do you think? One hundred percent. I wish my program were already up. We're not launching until next week. I've got a dozen agencies on board, Trace. I just onboarded a major Northeast agency with over 2,700 sworn officers. This is America's early warning system.

Video Saved From X

reSee.it Video Transcript AI Summary
We train 20,000 officers every year, making us the largest trainer of law enforcement in America. Our approach has two core components: measure and track, and monitor and disrupt. We maintain a dedicated operation with 40 analysts working full-time, seven days a week, 24 hours a day, to monitor extremists. This monitoring covers online activities across social media, messaging apps, video games, cryptocurrency, podcasts, short-form video, Wikipedia, and large language models. The intelligence collected is shared with the FBI. In relation to a real-world incident, our analysts investigated the events at Wilshire Boulevard Temple and identified the individuals who were present at the synagogue. The timeline: the events were observed on Wednesday, December 10, and by Monday, December 15, Kash Patel announced that a terror ring had been cracked.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks. Anonymous networks, flagging behavior, predicting danger. We don't get a second chance. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round; partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
Correct. I am now about to launch Gideon, America's first ever AI threat detection platform built specifically for law enforcement. It scrapes the Internet twenty-four seven using an Israeli-grade ontology to pull specific threat language and then routes it to local law enforcement. It's a twenty-four-seven detective. It never sleeps, and it's going to get us in front of these attacks. Would it have picked up on this, do you think? One hundred percent. I wish my program were already up. We're not launching until next week. I've got a dozen agencies on board, Trace. I just onboarded a major Northeast agency with over 2,700 sworn officers. This is America's early warning system.

Video Saved From X

reSee.it Video Transcript AI Summary
Imagine a world without murder. I lost many loved ones to violence. Six years ago, the homicide rate was alarming, but then we introduced the Precrime program. Within a month, murders in Washington, D.C. dropped by 90%. Within a year, the program effectively eliminated murder in the capital, and for six years, there hasn't been a single homicide. Now, we want to extend this success to all Americans. We aim to ensure that the system not only keeps us safe but also preserves our freedom. Vote yes on the National Precrime Initiative on Tuesday, April 22nd.

Video Saved From X

reSee.it Video Transcript AI Summary
Devar AI presents Rockia, a dashboard designed as a copilot for navigating information wars, particularly on TikTok. It is currently demoed for the IDF's counter-propaganda unit but can be tailored for any government monitoring the collective consciousness. Rockia analyzes narratives and generates counter-narratives and social media campaigns using AI, addressing the rise in anti-Israel sentiment on social media since October 7. The dashboard displays topic clusters of TikTok videos, each represented by a card. Users can access full reports on each topic, examine AI-generated counter-narratives to combat negative sentiments or bolster positive ones, and view lists of TikTok videos within each cluster.

Video Saved From X

reSee.it Video Transcript AI Summary
We focus on collecting data from surveillance and monitoring social media platforms. Our goal is to counter negativity and reach out to people when we see hate speech online. Our media analysis unit has increased monitoring to catch incitement to violence and direct threats. We are committed to ensuring the safety and sense of safety for New Yorkers.

Video Saved From X

reSee.it Video Transcript AI Summary
We are the Secure Community Network, working to build a proactive security shield for the Jewish community in North America. Our focus is on intelligence sharing, physical security solutions, training, and incident response. With a team of experts, including FBI and Department of State members, we aim to protect and allow Jewish life to flourish for future generations.

Relentless

#48 - Police Chases, Ride Alongs, Bureaucracy | Daniel Francis, CEO Abel Police
Guests: Daniel Francis
reSee.it Podcast Summary
Daniel Francis, founder and CEO of Abel Police, discusses the real-world problems police agencies face with tedious, time-consuming reporting and how their AI-powered solution aims to reclaim officers’ time for frontline work. The conversation dives into the origin of Abel Police, born from Francis’s hands-on experiences in ride-alongs and observing how much time is spent documenting incidents. He explains the product’s core value: turning body-cam footage into police reports, addressing the two-part structure of a report (structured data versus the narrative) and the shift from manual transcription to intelligent generation, all while navigating CJIS and security concerns. The episode highlights the customer-acquisition journey: persistence through dozens of agency rejections, the breakthrough moment with Richmond, and the strategic pivot when Axon announced similar capabilities, which validated the concept but also exposed gaps Abel Police could fill with a more tailored CJIS-compliant stack. Francis emphasizes the fragmentation of policing across 18,000 US agencies, each with different contracts and processes, and why the company focuses on “soft,” understaffed departments first, then scales using demonstrations, conferences, and relationship-building. The interview also touches on culture within policing, the stress of the job, the appeal of body cameras for accountability, and how reliable reporting can impact budgets and safety outcomes. Towards the end, the discussion shifts to expansion plans and product strategy. Francis outlines Abel Writer, a forthcoming tool to convert body-cam narratives into polished reports, and Abel Citizen, a citizen-facing report intake with a chat interface to elicit precise crime details. He argues that stronger frontline presence reduces crime, saves lives, and improves city governance.
The broader theme is leveraging AI to enhance policing through better data, streamlined workflows, and faster, more accurate documentation, while acknowledging political and administrative realities that shape adoption across diverse jurisdictions.

Cheeky Pint

Garrett Langley of Flock Safety on building technology to solve crime
Guests: Garrett Langley
reSee.it Podcast Summary
Garrett Langley describes the origin and evolution of Flock Safety, from a neighborhood initiative to track license plates after a crime to a nationwide hardware and software platform used by thousands of cities and private companies. He emphasizes the core insight that traditional home and vehicle security focuses on reacting to crime rather than preventing it, and explains how Flock built a community-focused safety system, culminating in real-time, city-wide coordination through Flock OS, license plate readers, cameras, and drones. The conversation showcases concrete case studies: real-time 911 integration that can surface suspect descriptions such as clothing and vehicles, cross-agency collaboration enabled by shared data, and a drone-enabled response model that reduces dangerous pursuits and speeds up arrests. Langley highlights the shift from single-neighborhood deployments to a national network that supports complex operations across multiple states, with a strong emphasis on balancing rapid disruption of crime with accountability, privacy, and data retention safeguards. The interview also delves into the broader implications of this technology for public safety, including the tension between expanding law enforcement bandwidth and civil liberties, the role of third-party data and federal coordination, and the evolving regulatory landscape shaped by state bills that set data retention and auditing standards. Questions about hardware scale, supply chain risks, and the economics of hardware-heavy growth reveal how Flock navigates a difficult capital-intensive path while maintaining a profitable core and pursuing ambitious future bets. The discussion ends with Langley’s forward-looking ideas: using Flock’s platform to prevent crime before it happens, investing in community-economic development to reduce crime incentives, and exploring humane paths to rehabilitate offenders. 
He frames safety as a public-right goal that requires legislative guardrails, transparent data practices, and a deliberate balance between effectiveness and privacy, while acknowledging the inevitable trade-offs as technology accelerates.

a16z Podcast

The Crime Crisis In America (How Technology Fixes It)
Guests: Garrett Langley, Ben Horowitz
reSee.it Podcast Summary
The episode centers on a candid exploration of how technology intersects with crime, policing, and public safety in America, with a focus on practical strategies for reducing crime through smarter use of data, sensors, and analytics. The speakers argue that crime is best deterred not by fear alone but by credible incentives, accountability, and a prosecutorial approach that emphasizes catching offenders while prioritizing the social costs of mass incarceration. The discussion moves from high-level ideas about staffing and culture in policing to concrete examples of deploying cameras, drones, gunshot detection, and AI-powered data orchestration to understand and respond to incidents faster and more precisely. The tone is pragmatic and future-facing, insisting that technology should serve citizens and be transparent so communities can trust how safety is achieved. Across their case studies, they stress that trust and accountability are as important as speed and reach, and they advocate for aligned incentives among police, public officials, and private partners to address both immediate crime threats and long-term social risks. The conversation also delves into the political and social dynamics of policing, acknowledging that reforms must balance public safety with civil liberties and that the most successful models combine intelligent surveillance with community policing and direct investments in social supports to reduce crime over time. The hosts and guests share a vision of a more proactive, data-driven style of policing that lowers violence, improves clearance rates, and preserves individual rights, while highlighting the human side of policing—recognizing the stress on officers, the importance of diverse recruitment, and the need for humane policies that prevent people from being trapped in a cycle of offense. 
The overall message is that technology can amplify good policing when deployed thoughtfully, with clear governance, robust privacy protections, and meaningful collaboration between cities, vendors, and residents.

Weaponized

Air Force Analyst Lenval Logan Exposes What AARO Won't Show You About UFOs
reSee.it Podcast Summary
The episode centers on Lenval Logan, a former US Air Force airman who became an all-source analyst and contractor involved in UAP investigations and the military’s efforts to understand unidentified aerial phenomena. Logan recounts early exposures to unusual aircraft during his service in Europe and the United States, including a moment when he and colleagues identified a glowing, zigzagging object that outpaced known platforms on radar. He explains how pilots and radar operators often encounter things that defy easy explanation, and how the culture of secrecy, stigma, and concern for careers discouraged open discussion about such sightings. The conversation emphasizes the distinction between different intelligence disciplines and how an all-source analyst aggregates reports from imagery, SIGINT, human intelligence, and pilot debriefs to form a coherent assessment for commanders. Logan describes his path from B-52 crew chief to intel work and to involvement with a task force focused on UAPs, noting how sensitive the subject is, how information was sometimes shared unofficially, and how internal skepticism and the fear of compromising sources shaped the process. The hosts and Logan also touch on the role of public-facing media, including appearances on Joe Rogan, and how leaks and media portrayals influence public perception. The discussion shifts to the UAP task force’s evolution, the tension with AARO, and the broader debate about whether the most compelling footage exists but remains classified. Logan discusses the motivation behind building Phenom, an app designed to educate, expose documentation workflows, and crowdsource analysis from the public, while acknowledging ongoing concerns about doxxing and misinterpretation. 
He frames the app as a way to empower responsible citizen participation, with a plan to assemble expert review through a council of trusted voices, and to provide a safer, more transparent channel for reporting sightings and validating data without compromising national security.

My First Million

EXCLUSIVE: $3B Founder Reveals His Next Big Idea | Brett Adcock Interview
reSee.it Podcast Summary
Brett Adcock discusses his entrepreneurial journey and the alarming rise in school shootings in the U.S., noting that several hundred guns are brought into K-12 schools annually, primarily handguns. He emphasizes the need for technology to detect weapons, inspired by a NASA research paper on imaging technology. Adcock's startup aims to develop a system that can identify weapons hidden under clothing, with plans to launch within 30 days. He believes that while gun control is important, it won't solve all issues, as violence can still occur with other weapons. Adcock's philosophy centers on building impactful solutions, prioritizing speed and iteration in product development. He aims to address safety in schools and other public spaces, viewing this project as a passion rather than a primary business venture. He also highlights the importance of clear communication and understanding across disciplines in tech development, drawing inspiration from successful entrepreneurs like Steve Jobs and Elon Musk.