TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real time AI system built to detect threats online before they become attacks. Anonymous networks, flagging behavior, predicting danger. We don't get a second chance. Let's not miss the next one. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round; partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
A partnership with Palantir aims to address mortgage fraud. The partnership has only scratched the surface of what is possible. Previously, it took investigators sixty days to detect fraud; Palantir's technology accomplishes the same task in ten seconds. Palantir understands security and rooting out fraud. The partnership considers this a matter of public trust. The goal is to understand the fraud and stop it. The partnership intends to get to the bottom of mortgage fraud.

Video Saved From X

reSee.it Video Transcript AI Summary
They describe a monitoring and disruption program with a dedicated apparatus. They have 40 analysts working full time, seven days a week, twenty four hours a day, monitoring extremists online across platforms including social media, messaging apps, video games, cryptocurrency, podcasts, short form video, Wikipedia, and LLMs. They monitor these people and share the intelligence with the FBI. They are monitoring left-wing radicals like the DSA, antiwar activists, and pro-Palestine extremists; right-wing extremists like white supremacists and armed militia groups; political Islamists and Christian nationalists, all of them. They also emphasize training, stating they are the largest trainer of law enforcement in America, training 20,000 officers every year.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 discusses Palantir and expanded government use. Key points:
- Palantir is openly building databases on people, used with ICE and announced for broader government use; Palantir also manages all health data through extensive contracts with HHS.
- Trump's first term included a push to have social media companies flag statements to prevent shootings, using analytics to determine intervention before a crime, a concept described as "Minority Report."
- William Barr, during the first Trump administration, created DEEP, a program that legalized precrime in the United States; there were a few arrests under DEEP for Facebook posts, but not many, with the legal framework in place since Trump's first term.
- The pitch for a precrime system included HARPA, a health-focused version of DARPA, and a program called Safe Homes intended to analyze Americans' social media posts for early warning signs of neuropsychiatric violence. Based on that analysis, individuals could be sent to a court-ordered psychologist or physician or placed under house arrest without having committed any crime.
- With Palantir's increased government integration, especially through the DOGE agency led by Elon Musk, Palantir has embedded itself further in government, including the IRS and mortgage-related entities like Fannie Mae; this involves access to data from the Department of the Treasury and the IRS, forming a master database aimed at stopping crime before it happens.
- Palantir's precrime activities included piloting predictive policing programs in police departments, initially in New Orleans, targeting primarily low-income minority neighborhoods.
- Other companies besides Palantir, such as PredPol in Los Angeles, claim to provide predictive policing, with a reported accuracy of 0.5%; contracts with PredPol have not been terminated.
- The overarching concept traces to the Panopticon idea: constant surveillance leads people to police and censor themselves, implying control through perpetual observation rather than purely improved efficiency in policing. The speaker characterizes this as the foundational form of control.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: The police will be on their best behavior because we're constantly recording, watching, and recording everything that's going on. Citizens will be on their best behavior because we're constantly recording and reporting everything that's going on. And it's unimpeachable. The cars have cameras on them. I think we have a squad car here someplace. But those kinds of applications use AI, and we're using AI to monitor the video. So if that altercation that occurred in Memphis had occurred here, the chief of police would be immediately notified. It's not people that are looking at those cameras, it's AI that's looking at the cameras. No. No. No. You can't do this. Take a shooting: that's gonna be an event where an alarm immediately goes off, and we're gonna have supervision. In other words, every police officer is gonna be supervised at all times. And if there's a problem, AI will report the problem to the appropriate person, whether it's the sheriff or the chief or whomever we need to take control of the situation. Same thing with drones. If there's something going on at a shopping center, a drone goes out there. It gets there way faster than a police car. There's no reason, by the way, for high-speed chases. You shouldn't have high-speed chases between cars. You just have a drone follow the car. It's very, very simple. And then a new generation of autonomous drones.

Video Saved From X

reSee.it Video Transcript AI Summary
On October 1, there were over 9,000 911 calls in just one minute, highlighting the challenges of emergency response. Garrett Langley shared a powerful story about how Flock Safety's technology helped locate a kidnapped baby in Atlanta, showcasing the impact of public safety technology. Sheriff Kevin McMahill discussed innovations in law enforcement, including the use of drones and gun detection technology, which have significantly improved safety and crime resolution rates in Las Vegas. Flock Safety operates in over 4,000 cities, solving about 22,100 crimes daily. The conversation emphasized the importance of community engagement and transparency in law enforcement, as well as the future potential of technology to enhance public safety and reduce crime.

Video Saved From X

reSee.it Video Transcript AI Summary
Stop Antisemitism was built to confront the global explosion of Jew hatred unleashed since the attacks of October 7. Since that day, we have featured more than 1,000 antisemites on our platforms: not theorized about them, not quietly documented them, but featured them publicly, clearly, and with evidence. The results speak for themselves: approximately 400 of these Jew haters have faced real consequences, including firings, suspensions, and expulsions. More than 300 remain under active investigation across universities, corporations, DEI departments, unions, hospitals, nonprofits, and, yes, federal government agencies. And there have been five arrests to date tied directly to threats and violence of antisemitic conduct we helped expose. This is what accountability looks like. This is what action looks like. This is what pushing back hard looks like against the tidal wave of hate that has consumed the United States and the global population. From our founding, Stop Antisemitism has operated on one guiding belief: antisemitism thrives when there are no consequences. So we created consequences, a lot of them. We created visibility. We turned the spotlight toward those who targeted our community, making silence impossible. On campuses where Jewish students were hunted through libraries, where professors glorified Hamas and Hezbollah terrorists, where mobs shut down our buildings and administrators hid under desks, we stepped in. We documented the offenders. We worked with attorneys, lawmakers, and victims' families, and we ensured the message was unmistakable: if you target Jewish students, your actions will not disappear into the darkness. We will shine a light on you that, thanks to Google and SEO, follows you for the rest of your life. When you look for a job, when you look for a spouse, when you look for a nanny, when you look for anything, our work will always be documented. Again, thanks to Google and SEO.
In corporations where DEI leaders smeared Israel and excused Hamas, we pressured CEOs; some resigned, many were terminated, and policies were changed, thankfully, from governmental to art institutions. Online, where anonymous accounts spread violent threats, we traced patterns, elevated evidence, and worked with authorities, leading to arrests in Florida, South Carolina, New York, California, and Texas. And, sadly, we're not slowing down. Today, Stop Antisemitism, I'm proud to say, runs one of the most robust operations against antisemitism in the United States, monitoring campuses, digital networks, activist groups, and public officials, documenting incidents in real time and mobilizing millions of allies who are quietly by our side. But the fight is bigger than the exposure; it's about securing a future. A future where Jewish students can walk across a quad without being screamed at. A future where employers understand that antisemitism is not activism; it's bigotry, and it will cause you to lose your job. A future where fact, not propaganda, shapes policy. A future where global institutions, from Google to ChatGPT, from governments to universities to media, finally treat Jew hatred with the seriousness of other minority-targeted hate. To get there, we need three things: action, real action as I listed; accountability; and relentless vigilance, because antisemitism does not take breaks. It doesn't wait for elections. It doesn't disappear because we are exhausted and tired, and when I tell you my team and I are exhausted and tired, that's the least of it. Stop Antisemitism has never been more essential, more strategic, or more effective than it is now, but we cannot do this alone. The demand, the volume of tips, the number of investigations: sadly, it continues to grow instead of decrease. If we want a safer future for the Jewish people, this is the moment to stand together and act.
We have to push harder to make it clear that Jewish safety is nonnegotiable. Tonight, I'm asking you to always be in the fight with us, not just in spirit, but in true action. Participate in calls to action. Write letters to your government officials. Speak to the teachers and the college administrators who are making community members, if not your own friends and kids, feel unsafe. When we act, lives change, and antisemites learn, sometimes for the very first time in their lives, that targeting Jews comes at a price. Together we can ensure that Jew hatred never goes unanswered again. As a former refugee from the USSR, I say this with all of my heart: God bless the United States, God bless Israel, and Am Yisrael Chai. Thank you so much.

Video Saved From X

reSee.it Video Transcript AI Summary
The system covers the entire Internet, including social networks like Facebook and Twitter. It identifies 200,000 suspect posts and tweets related to antisemitism daily, using artificial intelligence and machine learning. Approximately 10,000 antisemitic posts are identified each day. This information will now be made public, serving as a deterrent to antisemitism. We will be able to determine which city has the highest antisemitic internet activity and identify the top 10 antisemitic tweets and Twitter users. By understanding the causes behind spikes in antisemitism, we can take action. The command center in Tel Aviv is already operational, analyzing and sharing information with local authorities and municipalities to address antisemitic activities. This marks the official launch of the system.

Video Saved From X

reSee.it Video Transcript AI Summary
Natalie asks about the AI piece, expressing cynicism that there may be a push for a "war bot" to circumvent consumer AI limits that block starting wars with WMDs, and wonders if there is a benevolent reason. Matthew responds that it's worse than that: Hegseth described a platform to run on military desktops worldwide, secure like ChatGPT or Claude but for the Pentagon and military services, that "doesn't allow information to get out." The core issue, he says, is who controls the AI, and there are two key questions about the future of war with AI: who ultimately owns these AI platforms, and who informs them, that is, who gives them the algorithm, the programming, and essentially the orders on how to answer questions. He notes increasing concerns about the reliability of information, including how ChatGPT handles questions about trustworthy news sources. He mentions that ChatGPT defers to institutional structures rather than historical accuracy. The risk, he says, is that military AI programs may not provide honest, candid, objective information to military personnel, but rather information based on narratives the Pentagon or the manufacturers want. A common belief is that technology makes war more precise and reduces civilian harm, but Matthew contends this is a myth. He explains that precision-guided munitions were not about preventing civilian casualties but about increasing efficiency: "the purpose was to make the weapons more efficient, so we had to drop less bombs to, say, blow up a bridge." He cites the small-diameter bomb as evidence that the aim is not to limit civilian casualties but to allow more bombs to be delivered from aircraft. He highlights real-world examples of AI in warfare, referencing Israeli systems in Gaza. He explains that three AI programs, Lavender, Gospel, and Where's Daddy?, play roles in targeting and timing strikes. Lavender scans the Internet and databases to identify targets (e.g., labeling someone as a Hamas supporter based on past online activity), and Where's Daddy? coordinates that information to ensure bombs hit resistance fighters "when they are with their families," not away from them. He notes reporting from Israeli media and +972 Magazine about these programs and urges viewers to examine that reporting; Tucker Carlson's coverage is mentioned as an example. Matthew argues this demonstrates the dystopian potential of AI in war and cautions against assuming American AI would be more benevolent. He mentions commentators' remarks used to justify or excuse actions, including one attributed to Mike Huckabee that "Israel did not attack Qatar. They just sent a missile into their country aimed at one person," noting the nearby injuries or deaths. He ends with a reminder of Orwell's reflections on war and the idea that those who cheer for war may be less enthusiastic if they experienced its costs, suggesting a broader aim to make the costs of war felt among the ruling elites who benefit from it.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Number one, we measure and track. Number two, we monitor and disrupt. We have a whole apparatus. I have 40 analysts working full time, seven days a week, twenty-four hours a day, monitoring extremists. We monitor them online: social media, messaging apps, video games, cryptocurrency, podcasts, short-form video, Wikipedia, LLMs. We monitor these people and we share the intelligence with the FBI. You saw last month, you heard about the thing that happened at Wilshire Boulevard Temple. Our analysts investigated what happened. They said they were Koreatown for Palestine, this group of people. They weren't. We were able to ascertain they were from a group called the Turtle Island Liberation Front. Turtle Island is how left-wing activists refer to the United States. They don't call it America. They call it Turtle Island, like how the Iranians call Israel the Zionist entity and never call it by its name. The Turtle Island Liberation Front: we gave them a whole dossier. What is the Turtle Island Liberation Front? What are their ideas, their goals? Who are they? We identified the people who were in the synagogue. This was on Wednesday, December 10. On Monday, December 15, and this is gonna ring a bell, Kash Patel announced they cracked a terror ring where they arrested four people who were planning New Year's Eve bombings: Turtle Island Liberation Front. At least one of the people, I know for certain, was in the building at Wilshire Boulevard Temple vandalizing it and disrupting the event. So we're monitoring left-wing radicals like the DSA and the anti-war crazies and the pro-Palestine crazies. We're monitoring right-wing extremists like white supremacists and armed militia groups. We're monitoring political Islamists and Christian nationalists, all of them. And then we train. We're the largest trainer of law enforcement in America on extremism and hate. We train 20,000 officers every year.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real time AI system built to detect threats online before they become attacks. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder before they start. Trace, I'm building the first AI driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. And if you're a VC, I'm about to open my seed round, partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
A partnership with Palantir aims to address mortgage fraud. The partnership intends to ensure there is no fraud. According to one speaker, they have only scratched the surface with Palantir. Previously, it took investigators sixty days to detect fraud; Palantir's technology completes the same task in ten seconds. One speaker expressed excitement about Palantir's technology and expertise in security and fraud detection. For Palantir, this partnership is a matter of public trust. The partnership aims to understand mortgage fraud and stop it. The goal is to get to the bottom of mortgage fraud.

Video Saved From X

reSee.it Video Transcript AI Summary
"And Trump has been openly building databases on people with Palantir." "Palantir also manages all of your health data because they contract extensively with HHS." "It was called DEEP and there's been a few arrests under DEEP for people making Facebook posts and things like that." "But anyway, this pitch that Trump made about having social media spy on its users and use analytics to, you know, bring about some sort of pre-crime society." "It didn't ultimately happen in creating this agency called HARPA, which was supposed to be like the health version of the Pentagon's DARPA." "The goal of Palantir, just like it was with Total Information Awareness, is about stopping crime before it happens. It's pre-crime." "There's one in LA called PredPol, and they have an accuracy of half a percent."

Video Saved From X

reSee.it Video Transcript AI Summary
Patrick Sarval is introduced as an author and expert on conspiracies, system architecture, geopolitics, and software systems. Ab Gieterink asks who Patrick Sarval is and what his expertise entails. Sarval describes himself as an IT architect, often a freelance contractor working with various control and cybernetics-oriented systems, with earlier experience including a Bitcoin startup in 2011, photography work for events, and involvement in topics around conspiracy thinking. He notes his books, including Complotcatalogus and Spiegelpaleis, and mentions Seprouter and Niburu in relation to conspiratorial topics. Gieterink references a prior interview about Complotcatalogus and another of Sarval’s books, and sets the stage to discuss Palantir, surveillance, and the internet. The conversation then shifts to explaining Palantir and its significance. Sarval emphasizes Palantir as a key element in a broader trend rather than focusing solely on the company itself. He uses science-fiction analogies to describe how data processing and artificial intelligence are evolving. In particular, he introduces the concept of a “brein” (brain) or “legion” that integrates disparate data streams, builds an ontology, and enables predictive analytics and tactical decision-making. Palantir is described as the intelligence brain that aggregates data from multiple sources to produce meaningful insights. Sarval explains that a rudimentary prototype of such a system operates under the name Lavender in Gaza, where metadata from sources like Meta (Facebook, WhatsApp, Instagram), cell towers, satellites, and other sensors are fed into Palantir. The system performs threat analysis, ranks threats from high to low, and then a military operator—still human—must approve the action, with about 20–25 seconds to decide whether to fire a weapon. 
The claim is that Palantir-like software functions as the brain behind this process, orchestrating data integration, ontology creation, data fusion, digital twins, profiling, predictions, and tactical dissemination. The discussion covers how Palantir integrates data from medical records, parking fines, phone data, WhatsApp contacts, and more, then applies an overarching data model and digital twin to simulate and project outcomes. This enables targeted marketing alongside military uses, illustrating the broad reach of the platform. Sarval notes there are two divisions within Palantir: Gotham (military) and Foundry (business models), which he mentions to illustrate the dual-use nature of the technology. He warns that the system is designed to close feedback loops, allowing it to learn and refine its outputs over time, similar to how a thermostat adjusts heating based on sensor inputs. A central concern is the risk to the rule of law and human agency. The discussion highlights the potential erosion of the presumption of innocence and due process when decisions increasingly rely on predictive models and AI. The panel considers the possibility that in a high-stress battlefield scenario, soldiers or commanders might defer to the Palantir-presented "world view," making it harder to refuse an order. There is also concern about the shift toward autonomous weapons and the removal of human oversight in critical decisions, raising fears about the ethics and accountability of such systems. The conversation moves to the political and ideological backdrop surrounding Palantir's leadership. Peter Thiel, Elon Musk, and a close circle with ties to PayPal and other tech-industry figures are discussed. Sarval characterizes Palantir's leadership as ideologically defined, with statements about Zionism and a political worldview influencing how the technology is developed and deployed.
The dialogue touches on perceived connections to broader geopolitical influence, including the role of influence campaigns, media shaping, and the involvement of powerful networks in technology development and national security. As the discussion progresses, the speakers explore the implications of advanced AI and the “new generative AI” era. They consider the nature of AI and the potential for it to act not just as a data processor but as a decision-maker with emergent properties that challenge human control. The concept of pre-crime—predicting and acting on potential future threats before they materialize—is discussed as a troubling possibility, especially when a machine’s probability-based judgments guide life-and-death actions. Towards the end, the conversation contemplates what a fully dominated surveillance state might look like, including cognitive warfare and personalized influence through media, ads, and social networks. The dialogue returns to questions about how far Palantir and similar systems have penetrated international security programs, with speculation about Gaza, NATO adoption, and commercial uses beyond military applications. The speakers acknowledge the possibility of multiple trajectories and emphasize the need for checks and balances, transparency, and critical reflection on the power such systems confer upon a relatively small group of technologists and influencers. They conclude with a nod to the transformative and potentially dystopian future of AI-enabled surveillance and decision-making, cautioning against unbridled expansion and urging vigilance.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 describes a 2021 claim by the commander of Israeli intelligence to design a machine to resolve a human bottleneck in locating and approving targets in war. A recent investigation by +972 Magazine and Local Call reveals that the Israeli army developed an AI-based Lavender system to designate targets and direct airstrikes. During the initial weeks of the Lavender operation, the system designated about 37,000 Palestinians as targets and directed airstrikes on their homes. The system reportedly had an error rate of about 10%, and there was no requirement to verify the machine's data. The Israeli army systematically attacked targeted individuals at night in their homes while their whole family was present. An automated component, known as "Where's Daddy?," tracked targeted individuals and carried out bombings when they entered their family residences. The result, according to the report, was that thousands of women and children were killed by Israeli airstrikes. Israeli intelligence officers allegedly stated that the IDF bombed homes as a first option, and in several cases entire families were murdered when the actual target was not inside. In one instance, four buildings were destroyed along with everyone inside because a single target was in one of them. For targets marked as low level by Lavender, cheaper bombs were used, destroying entire buildings and killing mostly civilians and entire families. It was alleged that the IDF did not want to waste expensive bombs on "unimportant people," and it was decided that for every low-level Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians; for a senior Hamas official, more than 100 civilians could be killed. Most AI targets were never tracked before the war. Lavender analyzed information collected on the 2,300,000 residents of the Gaza Strip through mass surveillance, assessing the likelihood of each person being a militant and giving a rating from 1 to 100.
If the rating was high enough, the person and their entire family were killed. Lavender flagged individuals with patterns similar to Hamas, including police, civil defense, relatives, and residents with similar names or nicknames. The report notes that this kind of tracking system has existed in the US for years. Speaker 1 presents a counterpoint: a “fine gentleman of the secret service” claims to provide a list of every threat made about the president since February 3 and profiles of every threat maker, implying that targets could be identified through broad data collection including emails, chats, SMS. The passage suggests a tool akin to a Google search but including private communications. Speaker 0 adds that although some claim Israel controls the US, Joe Biden says Israel serves US interests. Speaker 2: A speaker asserts, “There’s no apology to be made. None. It is the best $3,000,000,000 investment we make,” and claims that without Israel the United States would have to invent an Israel to protect its regional interests. Speaker 0 closes reporting for Infowars, credited to Greg Reese.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI-powered threat detection system for law enforcement and schools. It scans the open web, social media, Reddit, Discord, and gaming chats, flagging grievance buildup, martyrdom language, and tactical planning before someone acts. Law enforcement agencies are on board to pilot it. I'm raising funds directly from my audience—Cohen's commandos, the people who actually care to bring Gideon to life. If you've ever asked yourself, why didn't someone catch this before? This is the answer. Hit the link in the description and donate what you can and please share it. This isn't about politics. This is about protecting America, protecting our kids, and it's about giving law enforcement signal before the next tragedy unfolds. This is Gideon. This is my new mission. Help me build it, and let's do it together.

Video Saved From X

reSee.it Video Transcript AI Summary
We train 20,000 officers every year, making us the largest trainer of law enforcement in America. Our approach has two core components: measure and track, and monitor and disrupt. We maintain a dedicated operation with 40 analysts working full-time, seven days a week, 24 hours a day, to monitor extremists. Their monitoring covers online activities across social media, messaging apps, video games, cryptocurrency, podcasts, short-form video, Wikipedia, and large language models. The intelligence collected is shared with the FBI. In relation to a real-world incident, our analysts investigated the events at Wilshire Boulevard Temple. They identified the individuals who were present at the synagogue. This investigation occurred in December, with the timeline noting that on Wednesday, December 10, the events were observed, and by Monday, December 15, Kash Patel announced that they had cracked a terror ring.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real time AI system built to detect threats online before they become attacks. Anonymous networks, flagging behavior, predicting danger. We don't get a second chance. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round; partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
Correct. I am now about to launch Gideon, America's first-ever AI threat detection platform built specifically for law enforcement. It scrapes the Internet twenty-four seven using an Israeli-grade ontology to pull specific threat language and then routes it to local law enforcement. It's a twenty-four-seven detective. It never sleeps, and it's going to get us in front of these attacks. Would it have picked up on this, do you think? One hundred percent. I wish my program were already up. We're not launching until next week. I've got a dozen agencies on board, Trace. I just onboarded a major Northeast agency with over 2,700 sworn officers. This is America's early warning system.

Video Saved From X

reSee.it Video Transcript AI Summary
So there was a program called HHS Protect that started during Operation Warp Speed. This HHS Protect program is really interesting because it used two different Palantir programs. The AMA, HHS, and the CDC specifically all partnered with Palantir, and then Palantir developed a program for Operation Warp Speed. That program assigned people a threat risk score; that was a program called Tiberius. They could also determine, down to the ZIP code, where you were and how compliant areas were. And then Gotham is the AI kill-chain program created by Palantir. The Gotham program takes the threat risk score from Tiberius and then runs an AI decision-making process that decides when, how, and where to deploy the countermeasures, which was your vaccine, your remdesivir, and your ventilator.

My First Million

MFM #160: How to Build a Paid Community Making $20M a Year
reSee.it Podcast Summary
The discussion begins with a humorous exchange about appearances and voice stereotypes between hosts Sam Parr and Shaan Puri. They then transition into a conversation about paid communities, specifically focusing on Tiger 21, a peer group for wealthy individuals with at least $10 million in investable assets. Membership costs $30,000 annually, and the group facilitates portfolio defenses and networking among its members, which include high-profile individuals. Shaan shares insights from a friend in Tiger 21, highlighting the group's focus on investing and wealth preservation, along with exclusive events. They explore the potential for creating similar paid communities, suggesting a model for their podcast audience, particularly targeting individuals with a million dollars in investable assets. They discuss the rapid growth potential of paid communities, the low startup costs, and the profitability of such ventures. However, they also address challenges like high churn rates and scalability issues, particularly in communities that rely on close-knit interactions. They mention successful examples like Evanta, a professional community for executives, and SoleSavy, a sneakerhead community that provides insider tips on shoe drops. The hosts emphasize the importance of passion and financial return for community members, suggesting that communities centered around making money tend to thrive. They propose ideas for new communities, including those for nurses, Google Sheets enthusiasts, and senior engineers discussing advanced concepts. They conclude that many job sectors could benefit from paid communities, especially those where members feel undervalued or lack resources. In the latter part of the conversation, they discuss the importance of breaking inertia in personal and professional life, advocating for drastic changes to spur growth and decision-making. 
They share personal anecdotes about overcoming inertia and emphasize the need for individuals to actively choose their paths rather than passively follow existing patterns. The episode wraps up with a guest interview featuring David Selinger, founder of Deep Sentinel, a security company that employs a human-in-the-loop model to monitor properties using AI and security guards. He explains how the service works, the market potential, and the unique advantages of their approach over traditional security systems. The conversation highlights the importance of innovation in security and the potential for significant disruption in the industry.

Relentless

#48 - Police Chases, Ride Alongs, Bureaucracy | Daniel Francis, CEO Abel Police
Guests: Daniel Francis
reSee.it Podcast Summary
Daniel Francis, founder and CEO of Abel Police, discusses the real-world problems police agencies face with tedious, time-consuming reporting and how their AI-powered solution aims to reclaim officers' time for frontline work. The conversation dives into the origin of Abel Police, born from Francis's hands-on experiences in ride-alongs and observing how much time is spent documenting incidents. He explains the product's core value: turning body-cam footage into police reports, addressing the two-part structure of a report, structured data versus the narrative, and the shift from manual transcription to intelligent generation, all while navigating CJIS and security concerns. The episode highlights the acquisition journey: persistence through dozens of agency rejections, the breakthrough moment with Richmond, and the strategic pivot when Axon announced similar capabilities, which validated the concept but also exposed gaps Abel Police could fill with a more tailored CJIS-compliant stack. Francis emphasizes the fragmentation of policing across 18,000 US agencies, each with different contracts and processes, and why the company focuses on "soft," understaffed departments first, then scales using demonstrations, conferences, and relationship-building. The interview also touches on culture within policing, the stress of the job, the appeal of body cameras for accountability, and how reliable reporting can impact budgets and safety outcomes. Towards the end, the discussion shifts to expansion plans and product strategy. Francis outlines Abel Writer, a forthcoming tool to convert body-cam narratives into polished reports, and Abel Citizen, a citizen-facing report intake with a chat interface to elicit precise crime details. He argues that a stronger frontline presence reduces crime, saves lives, and improves city governance. 
The broader theme is leveraging AI to enhance policing through better data, streamlined workflows, and faster, more accurate documentation, while acknowledging the political and administrative realities that shape adoption across diverse jurisdictions.

Cheeky Pint

Garrett Langley of Flock Safety on building technology to solve crime
Guests: Garrett Langley
reSee.it Podcast Summary
Garrett Langley describes the origin and evolution of Flock Safety, from a neighborhood initiative to track license plates after a crime to a nationwide hardware and software platform used by thousands of cities and private companies. He emphasizes the core insight that traditional home and vehicle security focuses on reacting to crime rather than preventing it, and explains how Flock built a community-focused safety system, culminating in real-time, city-wide coordination through Flock OS, license plate readers, cameras, and drones. The conversation showcases concrete case studies: real-time 911 integration that can surface suspect descriptions such as clothing and vehicles, cross-agency collaboration enabled by shared data, and a drone-enabled response model that reduces dangerous pursuits and speeds up arrests. Langley highlights the shift from single-neighborhood deployments to a national network that supports complex operations across multiple states, with a strong emphasis on balancing rapid disruption of crime with accountability, privacy, and data retention safeguards. The interview also delves into the broader implications of this technology for public safety, including the tension between expanding law enforcement bandwidth and civil liberties, the role of third-party data and federal coordination, and the evolving regulatory landscape shaped by state bills that set data retention and auditing standards. Questions about hardware scale, supply chain risks, and the economics of hardware-heavy growth reveal how Flock navigates a difficult capital-intensive path while maintaining a profitable core and pursuing ambitious future bets. The discussion ends with Langley’s forward-looking ideas: using Flock’s platform to prevent crime before it happens, investing in community-economic development to reduce crime incentives, and exploring humane paths to rehabilitate offenders. 
He frames safety as a public right that requires legislative guardrails, transparent data practices, and a deliberate balance between effectiveness and privacy, while acknowledging the inevitable trade-offs as the technology accelerates.

a16z Podcast

a16z Podcast | The Fundamentals of Security and the Story of Tanium’s Growth
Guests: Orion Hindawi
reSee.it Podcast Summary
In the a16z podcast, Orion Hindawi, co-founder of Tanium, discusses enterprise security, emphasizing the importance of basic practices over complex solutions. He critiques traditional hub-and-spoke models, which struggle to manage the scale of modern enterprise environments, and highlights Tanium's innovative approach that allows for rapid management of hundreds of thousands of endpoints. Hindawi notes that many companies are realizing their existing security measures are inadequate, leading to increased interest in Tanium's solutions. He explains that Tanium's dual focus on security and operations provides tangible ROI, making it attractive to large enterprises. Hindawi also addresses the misconception that perimeter security is sufficient, stating that attackers often exploit vulnerabilities within networks. He argues that effective security requires visibility into endpoints and the ability to respond quickly to threats. Tanium's platform is designed to be easily deployed, allowing organizations to identify and eliminate inefficiencies, ultimately enhancing their security posture while reducing costs.

Possible Podcast

Devshi Mehrotra on AI, justice, and public defense
Guests: Devshi Mehrotra
reSee.it Podcast Summary
Devshi Mehrotra's arc spans from a Beijing lab to a courtroom technology startup that aims to change how justice is practiced. Her first exposure to AI came in 2016 during a Beijing internship where she built a cancer cell image analysis prototype, learning gradient descent and neural networks while feeling overwhelmed yet hooked by the idea that math could drive real-world tasks. She later joined Google Brain, Microsoft Research, and DeepMind, contributing to NLP, computer vision, and robotics. Those experiences laid the foundation for JusticeText, which she co-founded with Leslie after meeting in the University of Chicago's computer science program and sharing a commitment to social justice. JusticeText emerged from a direct request: public defenders overwhelmed by video, transcripts, and jail calls needed tools to sift through footage and extract evidence. The platform automates transcription, offers searchable summaries, flags key moments such as Miranda warnings or arrests, and lets attorneys assemble video exhibits for court. A Northern California case involving a Spanish-speaking client showed how a clip could reveal rights violations and help dismiss a charge. Mehrotra emphasizes that JusticeText is funded through customer relationships with government bodies, not charity, with durable, scalable adoption through procurement. Today, JusticeText serves around 60 public defender agencies, including statewide systems in Tennessee and Massachusetts, and major cities like Portland and Houston, with a delivery model that combines training, office hours, and in-person visits to fit varied county structures. Mehrotra describes a future of expanded partnerships, additional statewide deployments, and features such as Miranda AI, which summarizes large discovery folders and lets lawyers query the data with natural-language questions, cross-referencing answers to exact files and timestamps. 
She notes that governments are increasingly scrutinizing AI use, demanding data safeguards and interoperable APIs, and foresees growth into adjacent defense contexts and private criminal defense. She cites the Indian film Queen as a source of optimism about bold, independent paths.