reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks. Anonymous networks, flagging behavior, predicting danger. We don't get a second chance. Let's not miss the next one. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round; partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
Data centers under construction in the United States show how quickly AI infrastructure is expanding. Texas has 135, Virginia 134, Georgia 51, Ohio 45, Arizona 35, Nevada 29, Indiana 21, Mississippi 21, Illinois 19, Iowa 16, Oregon 12, South Carolina 12, Wisconsin 11, Maryland 11, North Carolina 11, Pennsylvania 11, Utah 10, Missouri 8, Alabama 7, New York 7, Tennessee 7, Florida 7, and Wyoming 2. Australia, the UK, and Canada have smaller numbers. In Australia, Sydney has 10 to 15 distinct sites or campuses actively under construction; Melbourne has 8 to 12 sites; nationally, 20 to 30 sites are actively under construction, plus 48 upcoming facilities overall. In the UK, London has 7; other regions show slow growth, with two to four in some areas. Northeast England and Wales have one to two; Greater Manchester, Yorkshire, and Scotland have one to three; national totals are approximately 20 to 30 distinct sites or facilities actively under construction, with 29 projects expected to begin or continue construction in 2026. In Canada, Toronto (Greater Toronto Area) has four to six; Montreal (Quebec metro area) five to eight; Quebec City two to four; Vancouver one to three; Calgary/Alberta five to ten. Other regions such as Ottawa, Waterloo, and Halifax have one to three being planned.

Flock Safety is a US-based technology company, Flock Group Inc., founded in 2017 and headquartered in Atlanta, Georgia, that develops and operates a public safety platform focused on surveillance tools to help prevent and solve crime. It produces automated license plate recognition (ALPR, or LPR) cameras: solar-powered fixed cameras capturing images of vehicles on public roads, often focusing on rear plates, bumper stickers, and other details. These use AI and machine learning to read plates, identify unique vehicle features (a "vehicle fingerprint"), and provide real-time alerts for vehicles on hot lists, such as stolen cars or wanted suspects.
Additional devices include video surveillance cameras, gunfire detection (ShotSpotter-like audio sensors), and drones for first response. The integrated platform, FlockOS, feeds data from these devices into a cloud-based system hosted on AWS, where law enforcement can search nationwide, get alerts, review footage and clips, and use natural-language AI searches (for example, specific vehicle descriptions). Data is typically retained for thirty days unless flagged. Flock data can be integrated into platforms like Palantir for law enforcement use. The company claims that more than 6,000 communities trust Flock to help keep their communities safer and describes its solution as hassle-free, scalable, and customizable, expediting positive outcomes. It notes, with an asterisk, that 15% of reported crimes in the US are solved with help from Flock. Despite the perceived positive impact, the transcript acknowledges disasters and secrecy surrounding Flock.
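The alert workflow summarized above, a plate read, a hot-list lookup, a real-time alert on a hit, and roughly thirty-day retention unless a detection is flagged, can be sketched as a toy pipeline. All names here (`AlertPipeline`, `Detection`, `prune_expired`) are hypothetical illustrations, not Flock's actual API or internals.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # the summary says data is kept ~30 days unless flagged

@dataclass
class Detection:
    plate: str
    seen_at: datetime
    flagged: bool = False  # flagged detections survive retention pruning

class AlertPipeline:
    """Toy model of an ALPR hot-list workflow (illustrative, not Flock's system)."""

    def __init__(self, hot_list: set[str]):
        self.hot_list = hot_list           # e.g. stolen or wanted plates
        self.store: list[Detection] = []

    def ingest(self, plate: str, seen_at: datetime) -> bool:
        """Record a plate read; return True if it triggers a real-time alert."""
        hit = plate in self.hot_list
        self.store.append(Detection(plate, seen_at, flagged=hit))
        return hit

    def prune_expired(self, now: datetime) -> None:
        """Drop unflagged detections older than the retention window."""
        self.store = [d for d in self.store
                      if d.flagged or now - d.seen_at <= RETENTION]
```

A hot-list hit is both alerted on and exempted from pruning, which mirrors the "retained unless flagged" retention rule the summary describes.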

Video Saved From X

reSee.it Video Transcript AI Summary
They describe a monitoring and disruption program with a dedicated apparatus. They have 40 analysts working full time, twenty-four hours a day, seven days a week, monitoring extremists online across platforms including social media, messaging apps, video games, cryptocurrency, podcasts, short-form video, Wikipedia, and LLMs. They monitor these people and share the intelligence with the FBI. They are monitoring left-wing radicals like the DSA, antiwar activists, and pro-Palestine extremists; right-wing extremists like white supremacists and armed militia groups; and political Islamists and Christian nationalists, all of them. They also emphasize training, stating they are the largest trainer of law enforcement in America, training 20,000 officers every year.

Video Saved From X

reSee.it Video Transcript AI Summary
The narrator hired a new cybersecurity company run by the former CIO of Northrop Grumman, who has “thirty five years in counter espionage specifically for the navy, specifically for advanced propulsion.” The narrator promised the CIO a series of “really crazy ass stories” about electronic and physical surveillance, stalking, and “weird ass death and sexual threats,” and asked him to reserve judgment until the end before saying whether they were insane or not. The CIO told her, “I wish I could tell you that you were insane, but you're totally not.” He apologized, saying he was trying to let her down gently, and that the stories he’d heard were not unique: “the stories you've told me, I've heard from other young women in technology positions.” They do it “on purpose” to make you feel you can’t tell anyone, and they “pose as, like, agents on their own government.” He urged the narrator to protect herself and to tell others, but she waits until she has five or six “smoking guns” before speaking up.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: It has come to my attention that there are several Flock cameras installed around our town. My resources count over 30 of them, and I have graphics showing where they are; I’d like them to be passed around to the guests here tonight so they can see where these cameras are. These cameras utilize AI to track you and your family when you’re out in public. They are run by a company, Palantir. This company claims that they just record movement of vehicles and that they will reduce the crime rate to zero. However, people much more educated than I on these cameras have proven this to be false when speaking to their city councils. They do not only monitor where you drive; they also monitor where you walk, what you do, what you say, and what’s on your phone when you walk by, and they spy on you all the time. Today, I walked around and I noticed the one down by the bridge was pointed towards the courtyard and the field, not towards any roads. So why would it be pointed towards the river, not towards the streets, if it’s just to monitor vehicles? Also, in order to bring the crime rate down to zero, they would need to be able to predict crime before it happens, and I think that that is a slippery slope. Some cities are discussing adding this AI to police body cameras, which would be constantly monitored by an AI, which would make a judgment call about releasing drones also controlled by this AI. Again, I see it as a very slippery slope, along with the military drones that we’ve seen used over in Iran and in Ukraine. That is not my biggest problem with these, though. The owner of Palantir, Peter Thiel, is a man mentioned in the Epstein files over 2,200 times, making him the fourth most mentioned individual in the files. He accepted $40,000,000 that we know about from Epstein.
The victims of Epstein and Ghislaine Maxwell, who were sex trafficked, reported that the membership consisted almost entirely of high-profile and ultra-wealthy individuals, and that they witnessed murders, ritual sacrifice, and cannibalism of infants, that being the consumption of human flesh and blood. They used code words for their victims like pizza, jerky, and grape soda. I have a hard time believing that any human being could do something so evil. This is something that I would be told in a story about vampires. And I don’t know about you, but I think that vampires are meant for campfires. They are supposed to be a mythological being; they’re not supposed to be real, and they definitely should not be in charge of the security and safety of our city. I believe that any decent person would say no to giving up their safety and security to someone with such little value of a human life, let alone a potential ultra-wealthy pedophilic vampire in the Epstein files. So the gazebo is right here. Right? So I’m trying to capture this area where we have people hanging out.

Video Saved From X

reSee.it Video Transcript AI Summary
On October 1, there were over 9,000 911 calls in just one minute, highlighting the challenges of emergency response. Garrett Langley shared a powerful story about how Flock Safety's technology helped locate a kidnapped baby in Atlanta, showcasing the impact of public safety technology. Sheriff Kevin McMahill discussed innovations in law enforcement, including the use of drones and gun detection technology, which have significantly improved safety and crime resolution rates in Las Vegas. Flock Safety operates in over 4,000 cities, solving about 22,100 crimes daily. The conversation emphasized the importance of community engagement and transparency in law enforcement, as well as the future potential of technology to enhance public safety and reduce crime.

Video Saved From X

reSee.it Video Transcript AI Summary
Stop Antisemitism was built for confronting the global explosion of Jew hatred unleashed since the attacks of October 7. Since that day, we have featured more than 1,000 antisemites on our platforms, not theorized about them, not quietly documented them, but featured them publicly, clearly, and with evidence. The results speak for themselves: approximately 400 of these Jew haters have faced real consequences, including firings, suspensions, and expulsions. More than 300 remain in an active investigatory state across universities, corporations, DEI departments, unions, hospitals, nonprofits, and, yes, federal government agencies. And there have been five arrests to date tied directly to the threats and violence of antisemitic conduct we helped expose. This is what accountability looks like. This is what action looks like. This is what pushing back hard looks like against the tidal wave of hate that has consumed the United States and the global population. From our founding, Stop Antisemitism has operated on one guiding belief: antisemitism thrives when there are no consequences. So we created consequences, a lot of them. We created visibility. We turned the spotlight towards those who targeted our community, making silence impossible. On campuses where Jewish students were hunted through libraries, where professors glorified Hamas and Hezbollah terrorists, where mobs shut down our buildings and administrators hid under desks, we stepped in. We documented the offenders. We worked with attorneys, lawmakers, and victims' families, and we ensured the message was unmistakable: if you target Jewish students, your actions will not disappear into the darkness. We will shine a light on you that, thanks to Google and SEO, will follow you for the rest of your life. When you look for a job, when you look for a spouse, when you look for a nanny, when you look for anything, our work will always be documented. Again, thanks to Google and SEO.
In corporations where DEI leaders smeared Israel and excused Hamas, we pressured CEOs; some resigned, many were terminated, and policies were changed, thankfully, from governmental to arts institutions. Online, where anonymous accounts spread violent threats, we traced patterns, elevated evidence, and worked with authorities, leading to arrests in Florida, South Carolina, New York, California, and Texas. And we're not slowing down, sadly. Today, Stop Antisemitism, I'm proud to say, runs one of the most robust operations against antisemitism in the United States, monitoring campuses, digital networks, activist groups, and public officials, documenting incidents in real time and mobilizing millions of allies quietly by our side. But the fight is bigger than the exposure; it's about securing a future. A future where Jewish students can walk across a quad without being screamed at. A future where employers understand that antisemitism is not activism but bigotry, and it will cause you to lose your job. A future where fact, not propaganda, shapes policy. A future where global institutions, from Google to ChatGPT, from governments to universities to media, finally treat Jew hatred with the seriousness of other minority-targeted hate. To get there, we need three things: real action, as I listed; accountability; and relentless vigilance, because antisemitism does not take breaks. It doesn't wait for elections. It doesn't disappear because we are exhausted and tired, and when I tell you myself and my team are exhausted and tired, that's the least of it. Stop Antisemitism has never been more essential, more strategic, or more effective than it is now, but we cannot do this alone. The demand, the volume of tips, the number of investigations: sadly, it continues to grow instead of decrease. If we want a safer future for the Jewish people, this is the moment to stand together and act.
We have to push harder to make it clear that Jewish safety is nonnegotiable. Tonight, I'm asking you to always be in the fight with us, not just in spirit but in true action. Participate in calls to action. Write letters to your government officials. Speak to the teachers and college administrators who are making community members feel unsafe, if not your own friends and kids. When we act, lives change, and antisemites learn, sometimes for the very first time, that targeting Jews will come at a price, and together we can ensure that Jew hatred never goes unanswered again. As a former refugee from the USSR, I say this with all of my heart: God bless the United States, God bless Israel, and Am Yisrael Chai. Thank you so much.

Video Saved From X

reSee.it Video Transcript AI Summary
The system covers the entire Internet, including social networks like Facebook and Twitter. It identifies 200,000 suspect posts and tweets related to antisemitism daily, using artificial intelligence and machine learning. Approximately 10,000 antisemitic posts are identified each day. This information will now be made public, serving as a deterrent to antisemitism. We will be able to determine which city has the highest antisemitic internet activity and identify the top 10 antisemitic tweets and Twitter users. By understanding the causes behind spikes in antisemitism, we can take action. The command center in Tel Aviv is already operational, analyzing and sharing information with local authorities and municipalities to address antisemitic activities. This marks the official launch of the system.
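The pipeline this summary describes, a broad AI pass flagging roughly 200,000 suspect posts a day, a stricter pass confirming about 10,000, then aggregation by city and user to rank hotspots, can be illustrated as a minimal two-stage filter. The classifier functions and thresholds below are hypothetical stand-ins, not the system's actual components.

```python
from collections import Counter

def flag_suspects(posts, looks_suspect):
    """Stage 1: cheap broad filter (e.g. keyword or embedding match)."""
    return [p for p in posts if looks_suspect(p["text"])]

def confirm(posts, score, threshold=0.9):
    """Stage 2: stricter model score; only high-confidence posts survive."""
    return [p for p in posts if score(p["text"]) >= threshold]

def top_cities(confirmed, n=10):
    """Aggregate confirmed posts by city to rank activity hotspots."""
    return Counter(p["city"] for p in confirmed).most_common(n)
```

The two stages explain the 200,000-to-10,000 funnel in the summary: the first pass trades precision for recall, and the second pass trades recall for precision before anything is published or ranked.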

Video Saved From X

reSee.it Video Transcript AI Summary
Natalie asks about the AI piece, expressing cynicism that there may be a push for a “war bot” to circumvent consumer AI limits that block starting wars with WMDs, and wonders if there is a benevolent reason. Matthew responds that it’s worse than that: Hegseth described a platform to run on military desktops worldwide—secure, like ChatGPT or Claude but for the Pentagon and military services—that “doesn’t allow information to get out.” The core issue, he says, is who controls the AI, and two key questions about the future of war with AI: who ultimately owns these AI platforms, and who informs them—who gives them the algorithm and programming and, essentially, orders on how to answer questions. He notes increasing concerns about the reliability of information, including how ChatGPT handles questions about trustworthy news sources. He mentions that ChatGPT defers to institutional structures rather than historical accuracy. The risk, he says, is that military AI programs may not provide honest, candid, objective information to military personnel, but rather information based on narratives the Pentagon or manufacturers want. A common belief is that technology makes war more precise and reduces civilian harm, but Matthew contends this is a myth. He explains that precision-guided munitions were not about preventing civilian casualties but about increasing efficiency—“the purpose was to make the weapons more efficient, so we had to drop less bombs to, say, blow up a bridge.” He cites the Small Diameter Bomb as evidence that the aim is not to limit civilian casualties but to allow more bombs to be delivered from aircraft. He highlights real-world examples of AI in warfare, referencing Israeli systems in Gaza. He explains that three AI programs—Lavender, Gospel, and Where’s Daddy?—play roles in targeting and timing strikes. Lavender scans the Internet and databases to identify targets (e.g., labeling someone as a Hamas supporter based on past online activity), and Where’s Daddy?
coordinates that information to ensure bombs hit resistance fighters “when they are with their families,” not away from them. He notes reporting from Israeli media and +972 Magazine about these programs and urges viewers to examine that reporting; Tucker Carlson’s coverage is mentioned as an example. Matthew argues this demonstrates the dystopian potential of AI in war and cautions against assuming American AI would be more benevolent. He mentions commentators' references used to justify or excuse actions, including a remark attributed to Mike Huckabee that “Israel did not attack Qatar. They just sent a missile into their country aimed at one person,” noting the nearby injuries or deaths. He ends with a reminder of Orwell’s reflections on war and the idea that those who cheer for war may be less enthusiastic if they experienced its costs, suggesting a broader aim to make the costs of war felt among the ruling elites who benefit from it.

Video Saved From X

reSee.it Video Transcript AI Summary
Within these pillars, protect, advocate, educate, we are innovating. We are trying new ways to have an impact in the fight against antisemitism. So let's talk about what that looks like. Within the first pillar, protect, not only has the team at the Center on Extremism expanded its ranks with data scientists and software engineers, but we use cutting-edge AI tools to analyze the endless online chatter. And then, when we identify salient trends or material threats, we route them to whomever may need them: it could be journalists, policymakers, and very often law enforcement. Now, I can't talk about all the times we do that. I can't divulge what's happened with every piece of intelligence shared, but just know there are real dangers that have been averted, serious bad guys that have been put behind bars all over the world. And the information generated by Oren Segal and the entire team at the Center on Extremism makes our communities safer.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Number one, we measure and track. Number two, we monitor and disrupt. We have a whole apparatus. I have 40 analysts working full time, twenty-four hours a day, seven days a week, monitoring extremists. We monitor them online: social media, messaging apps, video games, cryptocurrency, podcasts, short-form video, Wikipedia, LLMs. We monitor these people and we share the intelligence with the FBI. You saw last month, you heard about the thing that happened at Wilshire Boulevard Temple. Our analysts investigated what happened. This group of people said they were Koreatown for Palestine. They weren't. We were able to ascertain they were from a group called the Turtle Island Liberation Front. Turtle Island is how left-wing activists refer to the United States. They don't call it America. They call it Turtle Island, like the Iranians call Israel the Zionist entity rather than calling it by its name. The Turtle Island Liberation Front: we gave them a whole dossier. Who, or what, is the Turtle Island Liberation Front? What are their ideas, their goals? Who are they? We identified the people who were in the synagogue. This was on Wednesday, December 10. On Monday, December 15, this is gonna ring a bell, Kash Patel announced they cracked a terror ring where they arrested four people who were planning New Year's Eve bombings: Turtle Island Liberation Front. At least one of the people, I know for certain, was in the building at Wilshire Boulevard Temple vandalizing it and disrupting the event. So we're monitoring left-wing radicals like the DSA and the antiwar crazies and the pro-Palestine crazies. We're monitoring right-wing extremists like white supremacists and armed militia groups. We're monitoring political Islamists and Christian nationalists, all of them. And then we train. We're the largest trainer of law enforcement in America on extremism and hate. We train 20,000 officers every year.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: There are several Flock cameras around our town; resources count over 30, with graphics showing their locations to be passed around for guests to see. These cameras utilize AI to track you and your family in public. They are run by a company, Palantir. This company claims they just record movement of vehicles and will reduce crime to zero, but people more educated than I on these cameras have proven this false when speaking to city councils. They do not monitor only where you drive, but also where you walk, what you do, what you say, and what’s on your phone when you walk by, and they spy on you all the time. Today, I walked around and noticed the one down by the bridge was pointed toward the courtyard and the field, not toward roads, so why would it be pointed toward the river, not toward the streets, if it’s just to monitor vehicles? In order to bring the crime rate down to zero, they would need to predict crime before it happens, and I think that is a slippery slope. Some cities are discussing adding this AI to police body cameras, which would be constantly monitored by an AI, making a judgment call about releasing drones also controlled by this AI. Again, I see it as a very slippery slope, along with the military drones that we’ve seen used over in Iran and in Ukraine. That is not my biggest problem with these, though. The owner of Palantir, Peter Thiel, is a man mentioned in the Epstein files over 2,200 times, making him the fourth most mentioned individual in the files. He accepted $40,000,000 that we know about from Epstein. The victims of Epstein and Ghislaine Maxwell, who were sex trafficked, reported that the membership consisted almost entirely of high-profile and ultra-wealthy individuals, and that they witnessed murders, ritual sacrifice, and cannibalism of infants. That being the consumption of human flesh and blood. They used code words for their victims like pizza, jerky, and grape soda. I have a hard time believing that any human being could do something so evil.
This is something that I would be told in a story about vampires. And I don’t know about you, but I think vampires are meant for campfires. They’re supposed to be a mythological being, not real, and they definitely should not be in charge of the security and safety of our city. I believe that any decent person would say no to giving up their safety and security to someone with such little value of a human life, let alone a potential ultra-wealthy pedophilic vampire in the Epstein files. So the gazebo is right here, right? So I’m trying to capture this area where we have people hanging out.

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. And if you're a VC, I'm about to open my seed round; partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
A partnership with Palantir aims to address mortgage fraud and ensure there is no fraud. According to one speaker, they have only scratched the surface with Palantir. Previously, it took investigators sixty days to detect fraud; Palantir's technology completes the same task in ten seconds. One speaker expressed excitement about Palantir's technology and its expertise in security and fraud detection. For Palantir, this partnership is a matter of public trust. The goal is to understand mortgage fraud, stop it, and get to the bottom of it.

Video Saved From X

reSee.it Video Transcript AI Summary
"And Trump has been openly building databases on people with Palantir." "Palantir also manages all of your health data Because they contract extensively with HHS." "It was called DEEP and there's been a few arrests under DEEP for people making Facebook posts and things like that." "But anyway, this pitch to that Trump made about having social media spy on its users and use like analytics to, you know, bring about some sort of pre crime society." "didn't ultimately happen in creating this agency called HARPA, which was supposed to be like the health version of the Pentagon's DARPA." "the goal of Palantir, just like it was with total information awareness, is about stopping crime before it happens. It's pre crime." "There's one in LA called Predpol, and they have an accuracy of half a percent."

Video Saved From X

reSee.it Video Transcript AI Summary
Patrick Sarval is introduced as an author and expert on conspiracies, system architecture, geopolitics, and software systems. Ab Gieterink asks who Patrick Sarval is and what his expertise entails. Sarval describes himself as an IT architect, often a freelance contractor working with various control and cybernetics-oriented systems, with earlier experience including a Bitcoin startup in 2011, photography work for events, and involvement in topics around conspiracy thinking. He notes his books, including Complotcatalogus and Spiegelpaleis, and mentions Seprouter and Niburu in relation to conspiratorial topics. Gieterink references a prior interview about Complotcatalogus and another of Sarval’s books, and sets the stage to discuss Palantir, surveillance, and the internet. The conversation then shifts to explaining Palantir and its significance. Sarval emphasizes Palantir as a key element in a broader trend rather than focusing solely on the company itself. He uses science-fiction analogies to describe how data processing and artificial intelligence are evolving. In particular, he introduces the concept of a “brein” (brain) or “legion” that integrates disparate data streams, builds an ontology, and enables predictive analytics and tactical decision-making. Palantir is described as the intelligence brain that aggregates data from multiple sources to produce meaningful insights. Sarval explains that a rudimentary prototype of such a system operates under the name Lavender in Gaza, where metadata from sources like Meta (Facebook, WhatsApp, Instagram), cell towers, satellites, and other sensors are fed into Palantir. The system performs threat analysis, ranks threats from high to low, and then a military operator—still human—must approve the action, with about 20–25 seconds to decide whether to fire a weapon. 
The claim is that Palantir-like software functions as the brain behind this process, orchestrating data integration, ontology creation, data fusion, digital twins, profiling, predictions, and tactical dissemination. The discussion covers how Palantir integrates data from medical records, parking fines, phone data, WhatsApp contacts, and more, then applies an overarching data model and digital twin to simulate and project outcomes. This enables targeted marketing alongside military uses, illustrating the broad reach of the platform. Sarval notes there are two divisions within Palantir: Gotham (military and government) and Foundry (commercial), which he mentions to illustrate the dual-use nature of the technology. He warns that the system is designed to close feedback loops, allowing it to learn and refine its outputs over time, similar to how a thermostat adjusts heating based on sensor inputs. A central concern is the risk to the rule of law and human agency. The discussion highlights the potential erosion of the presumption of innocence and due process when decisions increasingly rely on predictive models and AI. The panel considers the possibility that in a high-stress battlefield scenario, soldiers or commanders might defer to the Palantir-presented “world view,” making it harder to refuse an order. There is also concern about the shift toward autonomous weapons and the removal of human oversight in critical decisions, raising fears about the ethics and accountability of such systems. The conversation moves to the political and ideological backdrop surrounding Palantir’s leadership. Peter Thiel, Elon Musk, and a close circle with ties to PayPal and other tech-industry figures are discussed. Sarval characterizes Palantir’s leadership as ideologically defined, with statements about Zionism and a political worldview influencing how the technology is developed and deployed.
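Sarval's thermostat analogy for closing the feedback loop, sense, compare against a target, act, observe the effect, repeat, can be written as a minimal control loop. This is a generic sketch of closed-loop control with hysteresis, not anything from Palantir's software.

```python
def thermostat_step(reading: float, setpoint: float, heater_on: bool,
                    hysteresis: float = 0.5) -> bool:
    """One tick of a closed feedback loop: the sensor reading feeds back
    into the next action, the way a fused data picture would feed back
    into the next prediction in the system described above."""
    if reading < setpoint - hysteresis:
        return True            # too cold: switch the heater on
    if reading > setpoint + hysteresis:
        return False           # too warm: switch the heater off
    return heater_on           # inside the dead band: keep the current state

def run_loop(readings, setpoint):
    """Apply the step function over a stream of sensor readings."""
    state, states = False, []
    for r in readings:
        state = thermostat_step(r, setpoint, state)
        states.append(state)
    return states
```

The hysteresis dead band prevents rapid on/off oscillation around the setpoint, the same stability concern any learning loop that acts on its own outputs has to manage.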
The dialogue touches on perceived connections to broader geopolitical influence, including the role of influence campaigns, media shaping, and the involvement of powerful networks in technology development and national security. As the discussion progresses, the speakers explore the implications of advanced AI and the “new generative AI” era. They consider the nature of AI and the potential for it to act not just as a data processor but as a decision-maker with emergent properties that challenge human control. The concept of pre-crime—predicting and acting on potential future threats before they materialize—is discussed as a troubling possibility, especially when a machine’s probability-based judgments guide life-and-death actions. Towards the end, the conversation contemplates what a fully dominated surveillance state might look like, including cognitive warfare and personalized influence through media, ads, and social networks. The dialogue returns to questions about how far Palantir and similar systems have penetrated international security programs, with speculation about Gaza, NATO adoption, and commercial uses beyond military applications. The speakers acknowledge the possibility of multiple trajectories and emphasize the need for checks and balances, transparency, and critical reflection on the power such systems confer upon a relatively small group of technologists and influencers. They conclude with a nod to the transformative and potentially dystopian future of AI-enabled surveillance and decision-making, cautioning against unbridled expansion and urging vigilance.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 describes a 2021 claim by a commander of Israeli intelligence about designing a machine to resolve a human bottleneck in locating and approving targets in war. A recent investigation by +972 Magazine and Local Call reveals that the Israeli army developed an AI-based system, Lavender, to designate targets and direct airstrikes. During the initial weeks of the Lavender operation, the system designated about 37,000 Palestinians as targets and directed airstrikes on their homes. The system reportedly had an error rate of about 10%, and there was no requirement to verify the machine’s data. The Israeli army systematically attacked targeted individuals at night in their homes while their whole family was present. An automated component, known as “Where’s Daddy?,” tracked targeted individuals and carried out bombings when they entered their family residences. The result, according to the report, was that thousands of women and children were killed by Israeli airstrikes. Israeli intelligence officers allegedly stated that the IDF bombed homes as a first option, and in several cases entire families were murdered when the actual target was not inside. In one instance, four buildings were destroyed along with everyone inside because a single target was in one of them. For targets marked as low level by Lavender, cheaper bombs were used, destroying entire buildings and killing mostly civilians and entire families. It was alleged that the IDF did not want to waste expensive bombs on “unimportant people,” and it was decided that for every low-level Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians; for a senior Hamas official, more than 100 civilians could be killed. Most AI targets were never tracked before the war. Lavender analyzed information collected on the 2,300,000 residents of the Gaza Strip through mass surveillance, assessing the likelihood of each person being a militant and giving a rating from 1 to 100.
If the rating was high enough, the person and their entire family were killed. Lavender flagged individuals with patterns similar to Hamas operatives, including police, civil defense workers, relatives, and residents with similar names or nicknames. The report notes that this kind of tracking system has existed in the US for years. Speaker 1 presents a counterpoint: a "fine gentleman of the Secret Service" claims to provide a list of every threat made against the president since February 3 and profiles of every threat-maker, implying that targets could be identified through broad data collection including emails, chats, and SMS messages. The passage suggests a tool akin to a Google search, but one that includes private communications. Speaker 0 adds that although some claim Israel controls the US, Joe Biden says Israel serves US interests. Speaker 2 asserts, "There's no apology to be made. None. It is the best $3,000,000,000 investment we make," and claims that without Israel the United States would have to invent one to protect its regional interests. Speaker 0 closes, reporting for Infowars; the segment is credited to Greg Reese.
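A quick back-of-envelope check, using only the two figures stated in the report (about 37,000 people flagged and an error rate of roughly 10%), shows the scale of misidentification those claims imply:

```python
# Back-of-envelope arithmetic using only the figures the report states:
# ~37,000 people flagged, ~10% of flags wrong. Both numbers are the report's
# claims, not independently verified data.
flagged = 37_000
error_rate = 0.10  # reported share of flags that were erroneous

misidentified = round(flagged * error_rate)
print(f"Implied misidentifications: {misidentified:,}")
# prints "Implied misidentifications: 3,700"
```

That is, the report's own numbers imply on the order of thousands of people wrongly designated.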

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI-powered threat detection system for law enforcement and schools. It scans the open web, social media, Reddit, Discord, and gaming chats, flagging grievance buildup, martyrdom language, and tactical planning before someone acts. Law enforcement agencies are on board to pilot it. I'm raising funds directly from my audience—Cohen's commandos, the people who actually care to bring Gideon to life. If you've ever asked yourself, why didn't someone catch this before? This is the answer. Hit the link in the description and donate what you can and please share it. This isn't about politics. This is about protecting America, protecting our kids, and it's about giving law enforcement signal before the next tragedy unfolds. This is Gideon. This is my new mission. Help me build it, and let's do it together.

Video Saved From X

reSee.it Video Transcript AI Summary
Correct. I am now about to launch Gideon, America's first-ever AI threat detection platform built specifically for law enforcement. It scrapes the Internet twenty-four seven using an Israeli-grade ontology to pull specific threat language and then routes it to local law enforcement. It's a twenty-four-seven detective. It never sleeps, and it's going to get us in front of these attacks. Would it have picked up on this, do you think? One hundred percent. I wish my program were already up. We're not launching until next week. I've got a dozen agencies on board, Trace. I just onboarded a major Northeast agency with over 2,700 sworn officers. This is America's early warning system.
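Neither Gideon's ontology nor its routing logic is public, so nothing is known about how it actually works. As a rough, hypothetical illustration of the general pattern the pitch describes (scan text, match it against a phrase ontology, surface a categorized hit), a toy sketch might look like this. Every category and phrase below is an invented placeholder, not anything from the actual product:

```python
# Toy phrase-ontology matcher. Categories and phrases are invented placeholders
# for illustration only; they do not reflect Gideon's actual ontology or design.
THREAT_ONTOLOGY = {
    "grievance": ["they ruined my life", "they will pay"],
    "tactical_planning": ["floor plan", "entry point"],
}

def flag_post(text: str) -> list[tuple[str, str]]:
    """Return (category, phrase) pairs found in a post, case-insensitively."""
    lowered = text.lower()
    return [
        (category, phrase)
        for category, phrases in THREAT_ONTOLOGY.items()
        for phrase in phrases
        if phrase in lowered
    ]

print(flag_post("Found the entry point. They will pay."))
# → [('grievance', 'they will pay'), ('tactical_planning', 'entry point')]
```

A real system would sit behind scrapers and a routing layer; the sketch only shows the matching step.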

Video Saved From X

reSee.it Video Transcript AI Summary
We train 20,000 officers every year, making us the largest trainer of law enforcement in America. Our approach has two core components: measure and track, and monitor and disrupt. We maintain a dedicated operation with 40 analysts working full-time, seven days a week, 24 hours a day, to monitor extremists. Their monitoring covers online activities across social media, messaging apps, video games, cryptocurrency, podcasts, short-form video, Wikipedia, and large language models. The intelligence collected is shared with the FBI. In relation to a real-world incident, our analysts investigated the events at Wilshire Boulevard Temple. They identified the individuals who were present at the synagogue. This investigation occurred in December, with the timeline noting that on Wednesday, December 10, the events were observed, and by Monday, December 15, Kash Patel announced that they had cracked a terror ring.

Video Saved From X

reSee.it Video Transcript AI Summary
So there was a program called HHS Protect that started during Operation Warp Speed. This HHS Protect program is really interesting because it used two different Palantir programs. The AMA, HHS, and specifically the CDC all partnered with Palantir, and Palantir developed a program for Operation Warp Speed. That program assigned people a threat risk score; it was called Tiberius. They could also determine, down to the ZIP code, where you were and how compliant areas were. And then Gotham is the AI kill-chain program created by Palantir. The Gotham program takes the threat risk score from Tiberius and runs an AI decision-making process that decides when, how, and where to deploy the countermeasures, which were your vaccine, your remdesivir, and your ventilator.

Relentless

#48 - Police Chases, Ride Alongs, Bureaucracy | Daniel Francis, CEO Abel Police
Guests: Daniel Francis
reSee.it Podcast Summary
Daniel Francis, founder and CEO of Abel Police, discusses the real-world problems police agencies face with tedious, time-consuming reporting and how their AI-powered solution aims to reclaim officers’ time for frontline work. The conversation dives into the origin of Abel Police, born from Francis’s hands-on experiences in ride-alongs and observing how much time is spent documenting incidents. He explains the product’s core value: turning body-cam footage into police reports, addressing the two-part structure of a report—structured data versus the narrative—and the shift from manual transcription to intelligent generation, all while navigating CJIS and security concerns. The episode highlights the acquisition journey: persistence through dozens of agency rejections, the breakthrough moment with Richmond, and the strategic pivot when Axon announced similar capabilities, which validated the concept but also exposed gaps Abel Police could fill with a more tailored CJIS-compliant stack. Francis emphasizes the fragmentation of policing across 18,000 US agencies, each with different contracts and processes, and why the company focuses on “soft,” understaffed departments first, then scales using demonstrations, conferences, and relationship-building. The interview also touches on culture within policing, the stress of the job, the appeal of body cameras for accountability, and how reliable reporting can impact budgets and safety outcomes. Towards the end, the discussion shifts to expansion plans and product strategy. Francis outlines Abel Writer, a forthcoming tool to convert body-cam narratives into polished reports, and Abel Citizen, a citizen-facing report intake with a chat interface to elicit precise crime details. He argues that stronger frontline presence reduces crime, saves lives, and improves city governance. The broader theme is leveraging AI to enhance policing through better data, streamlined workflows, and faster, more accurate documentation, while acknowledging political and administrative realities that shape adoption across diverse jurisdictions.
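The two-part report structure Francis describes (structured fields plus a narrative) can be sketched in miniature. This is a hypothetical illustration, not Abel's implementation: the field names and regex patterns are invented, and the real product generates the narrative portion as well.

```python
# Hypothetical sketch of pulling structured fields out of a body-cam transcript.
# Field names and patterns are illustrative assumptions, not Abel's schema.
import re

def extract_fields(transcript: str) -> dict[str, str]:
    """Extract simple structured fields (time, plate) from transcript text."""
    fields: dict[str, str] = {}
    if m := re.search(r"\b(\d{1,2}:\d{2} ?[AP]M)\b", transcript):
        fields["time"] = m.group(1)
    if m := re.search(r"\bplate ([A-Z0-9]{5,8})\b", transcript):
        fields["plate"] = m.group(1)
    return fields

print(extract_fields("Stop initiated at 10:42 PM, plate ABC1234, driver cooperative."))
# → {'time': '10:42 PM', 'plate': 'ABC1234'}
```

In practice this extraction step would sit downstream of speech-to-text on the footage, with the narrative generated separately.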

a16z Podcast

a16z Podcast | The Fundamentals of Security and the Story of Tanium’s Growth
Guests: Orion Hindawi
reSee.it Podcast Summary
In the a16z podcast, Orion Hindawi, co-founder of Tanium, discusses enterprise security, emphasizing the importance of basic practices over complex solutions. He critiques traditional hub-and-spoke models, which struggle to manage the scale of modern enterprise environments, and highlights Tanium's innovative approach that allows for rapid management of hundreds of thousands of endpoints. Hindawi notes that many companies are realizing their existing security measures are inadequate, leading to increased interest in Tanium's solutions. He explains that Tanium's dual focus on security and operations provides tangible ROI, making it attractive to large enterprises. Hindawi also addresses the misconception that perimeter security is sufficient, stating that attackers often exploit vulnerabilities within networks. He argues that effective security requires visibility into endpoints and the ability to respond quickly to threats. Tanium's platform is designed to be easily deployed, allowing organizations to identify and eliminate inefficiencies, ultimately enhancing their security posture while reducing costs.
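Tanium's publicly described alternative to hub-and-spoke is a peer-to-peer "linear chain," in which endpoints relay and aggregate query results among themselves so the server only talks to the ends of each chain. A toy message-count comparison makes the scaling difference concrete; the chain size and endpoint count below are illustrative assumptions, not Tanium's actual parameters.

```python
# Toy comparison of server-side message load: hub-and-spoke vs. linear chain.
# Chain size and endpoint count are illustrative, not Tanium's real numbers.
def server_messages_hub_and_spoke(endpoints: int) -> int:
    """Server exchanges one round-trip with every endpoint."""
    return endpoints

def server_messages_linear_chain(endpoints: int, chain_size: int = 10_000) -> int:
    """Server injects a query at one end of each chain and collects at the other."""
    chains = -(-endpoints // chain_size)  # ceiling division
    return 2 * chains

n = 500_000
print(server_messages_hub_and_spoke(n))  # 500000
print(server_messages_linear_chain(n))   # 100
```

The point of the sketch: the server's load grows with the number of chains, not the number of endpoints, which is why a single server can query hundreds of thousands of machines quickly.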

Possible Podcast

Devshi Mehrotra on AI, justice, and public defense
Guests: Devshi Mehrotra
reSee.it Podcast Summary
Devshi Mehrotra's arc spans from a Beijing lab to a courtroom technology startup that aims to change how justice is practiced. Her first exposure to AI came in 2016 during a Beijing internship where she built a cancer cell image analysis prototype, learning gradient descent and neural networks while feeling overwhelmed yet hooked by the idea that math could drive real-world tasks. She later joined Google Brain, Microsoft Research, and DeepMind, contributing to NLP, computer vision, and robotics. Those experiences laid the foundation for JusticeText, which she co-founded with Leslie after meeting in the University of Chicago's computer science program and sharing a commitment to social justice. JusticeText emerged from a direct request: public defenders overwhelmed by video, transcripts, and jail calls needed tools to sift through footage and extract evidence. The platform automates transcription, offers searchable summaries, flags key moments such as Miranda warnings or arrests, and lets attorneys assemble video exhibits for court. A Northern California case involving a Spanish-speaking client showed how a clip could reveal rights violations and help dismiss a charge. Mehrotra emphasizes that JusticeText is funded through customer relationships with government bodies, not charity, with durable, scalable adoption through procurement. Today, JusticeText serves around 60 public defender agencies, including statewide systems in Tennessee and Massachusetts, and major cities like Portland and Houston, with a delivery model that combines training, office hours, and in-person visits to fit varied county structures. Mehrotra describes a future of expanded partnerships, additional statewide deployments, and features such as Miranda AI, which summarizes large discovery folders and lets lawyers query the data with natural-language questions, cross-referencing answers to exact files and timestamps. She notes governments are increasingly scrutinizing AI use, demanding data safeguards and interoperable APIs, and she foresees growth into adjacent defense contexts and private criminal defense. She cites the Indian film Queen as a source of optimism about bold, independent paths.
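The key-moment flagging described above (for example, detecting a Miranda warning in a timestamped transcript) can be sketched minimally. The phrases, segments, and function below are invented examples, not the product's actual detection logic:

```python
# Toy key-moment flagger over a timestamped transcript. Phrases and data are
# invented illustrations, not the product's actual detection approach.
KEY_PHRASES = ["you have the right to remain silent", "you're under arrest"]

def flag_moments(segments: list[tuple[float, str]]) -> list[tuple[float, str]]:
    """Return (timestamp_seconds, phrase) pairs where a key phrase is spoken."""
    hits = []
    for ts, text in segments:
        lowered = text.lower()
        for phrase in KEY_PHRASES:
            if phrase in lowered:
                hits.append((ts, phrase))
    return hits

segments = [
    (12.0, "Step out of the vehicle please."),
    (95.5, "You're under arrest. You have the right to remain silent."),
]
print(flag_moments(segments))
```

A real pipeline would run speech-to-text first and likely use fuzzier matching than exact substrings; the sketch only shows how flagged phrases map back to timestamps.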