reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Gideon is the first real-time AI system built to detect threats online before they become attacks: anonymous networks, flagging behavior, predicting danger. We don't get a second chance. Let's not miss the next one. Fifteen seconds, Aaron. You're talking about stopping mass shootings, attacks in Boulder, before they start. Trace, I'm building the first AI-driven threat prediction platform for law enforcement. They're flying blind right now. I've got an elite team of engineers from Palantir. I've got law enforcement agencies lined up. 76% of these mass attackers posted some type of grievance online. This is America's early warning detection system. If you're a chief out there, reach out to me and get on my pilot. If you're a VC, I'm about to open my seed round; partner with me, and let's make America safe. They're gonna get cops the tools they need.

Video Saved From X

reSee.it Video Transcript AI Summary
In an AI-driven world, cybersecurity is crucial to prevent unauthorized access. Watermarking ensures authenticity, while transparency and quality of information are essential. A significant amount of important work is being done in these areas.

Video Saved From X

reSee.it Video Transcript AI Summary
Data centers under construction in the United States show how quickly AI infrastructure is expanding. Texas has 135, Virginia 134, Georgia 51, Ohio 45, Arizona 35, Nevada 29, Indiana 21, Mississippi 21, Illinois 19, Iowa 16, Oregon 12, South Carolina 12, Wisconsin 11, Maryland 11, North Carolina 11, Pennsylvania 11, Utah 10, Missouri 8, Alabama 7, New York 7, Tennessee 7, Florida 7, and Wyoming 2.

Australia, the UK, and Canada have smaller numbers. In Australia, Sydney has 10 to 15 distinct sites or campuses actively under construction and Melbourne has 8 to 12; nationally, 20 to 30 sites are actively under construction, plus 48 upcoming facilities overall. In the UK, London has 7, and other regions show slower growth, with two to four in some areas: Northeast England and Wales have one to two; Greater Manchester, Yorkshire, and Scotland have one to three; national totals are approximately 20 to 30 distinct sites or facilities actively under construction, with 29 projects expected to begin or continue construction in 2026. In Canada, the Greater Toronto Area has four to six, the Montreal (Quebec) metro area five to eight, Quebec City two to four, Vancouver one to three, and Calgary/Alberta five to ten. Other regions such as Ottawa, Waterloo, and Halifax have one to three being planned.

Flock Safety (Flock Group Inc) is a US-based technology company, founded in 2017 and headquartered in Atlanta, Georgia, that develops and operates a public safety platform focused on surveillance tools to help prevent and solve crime. They produce automated license plate recognition (ALPR, or LPR) cameras: solar-powered fixed cameras that capture images of vehicles on public roads, often focusing on rear plates, bumper stickers, and other details. They use AI and machine learning to read plates, identify unique vehicle features (a "vehicle fingerprint"), and provide real-time alerts for vehicles on hot lists, such as stolen cars or wanted suspects.
Additional devices include video surveillance cameras, gunfire detection (ShotSpotter-like audio sensors), and drones for first response. The integrated FlockOS platform feeds data from these devices into a cloud-based system hosted on AWS, where law enforcement can search nationwide, get alerts, review footage and clips, and use natural-language AI searches (for example, specific vehicle descriptions). Data is typically retained for thirty days unless flagged, and Flock data can be integrated into platforms like Palantir for law enforcement use. They claim that more than 6,000 communities trust Flock to help keep their communities safer and describe their solution as hassle-free, scalable, and customizable, expediting positive outcomes. They note, with an asterisk, that 15% of reported crimes in the US are solved with help from Flock. Despite the perceived positive impact, the transcript acknowledges disasters and secrecy surrounding Flock.
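The hot-list flow described above (capture a plate read, check it against a watch list, alert on hits, and expire unflagged data once the retention window passes) can be sketched as a toy example. This is an illustrative sketch only, not Flock's actual implementation; the class and field names are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # typical retention window described above

@dataclass
class PlateRead:
    plate: str
    captured_at: datetime
    flagged: bool = False  # hot-list hits are flagged and kept past retention

class HotListMatcher:
    """Toy matcher: ingest plate reads, alert on hot-list hits,
    and purge unflagged reads older than the retention window."""

    def __init__(self, hot_list):
        self.hot_list = set(hot_list)
        self.reads: list[PlateRead] = []

    def ingest(self, plate: str, captured_at: datetime) -> bool:
        """Store a read; return True (alert) if it matches the hot list."""
        hit = plate in self.hot_list
        self.reads.append(PlateRead(plate, captured_at, flagged=hit))
        return hit

    def purge_expired(self, now: datetime) -> None:
        """Drop unflagged reads that have aged past the retention window."""
        self.reads = [r for r in self.reads
                      if r.flagged or now - r.captured_at < RETENTION]
```

A flagged read survives purging indefinitely in this sketch, mirroring the "retained for thirty days unless flagged" policy; a production system would add audit logging and access controls on top.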

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 says body cams ensure behavior because "we're constantly recording and reporting everything that's going on." He argues the first AI step for government is to "unify all of their data so it can be consumed and used by the AI model," bringing health data, EHRs, and genomic data into a single platform; the UAE has rich data, while NHS data is fragmented. He insists "data centers ... need to be in our countries" for privacy and security, likening them to airports and ports. He forecasts "the last year you will ever log on to an Oracle system with a password" and "biometric logins" that use voice recognition and even an "index finger on the return key." He cites ransomware cases in which the FBI's advice was to "Just pay them because there's nothing we can do about it." Speaker 1 adds: "there's an amazing opportunity to reimagine the state, the way that government functions, and the service that it can provide for its citizens."

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: The police will be on their best behavior because we're constantly recording, watching, and recording everything that's going on. Citizens will be on their best behavior because we're constantly recording and reporting everything that's going on. And it's unimpeachable. The cars have cameras on them. I think we have a squad car here someplace. Those kinds of applications use AI, and we're using AI to monitor the video. So if that altercation that occurred in Memphis had occurred, the chief of police would be immediately notified. It's not people looking at those cameras, it's AI looking at the cameras. No, no, no. You can't do this. The same with a shooting: that's an event that immediately trips an alarm, and we're going to have supervision. In other words, every police officer is going to be supervised at all times. And if there's a problem, AI will report the problem to the appropriate person, whether it's the sheriff or the chief or whomever we need to take control of the situation. Same thing with drones. If there's something going on in a shopping center, a drone goes out there; it gets there way faster than a police car. There's no reason, by the way, for high-speed chases. You shouldn't have high-speed chases between cars; you just have a drone follow the car. It's very, very simple. And then a new generation of autonomous drones.

Video Saved From X

reSee.it Video Transcript AI Summary
On October 1, there were over 9,000 911 calls in just one minute, highlighting the challenges of emergency response. Garrett Langley shared a powerful story about how Flock Safety's technology helped locate a kidnapped baby in Atlanta, showcasing the impact of public safety technology. Sheriff Kevin McMahill discussed innovations in law enforcement, including the use of drones and gun detection technology, which have significantly improved safety and crime resolution rates in Las Vegas. Flock Safety operates in over 4,000 cities, solving about 22,100 crimes daily. The conversation emphasized the importance of community engagement and transparency in law enforcement, as well as the future potential of technology to enhance public safety and reduce crime.

Video Saved From X

reSee.it Video Transcript AI Summary
AI will be in heavy-duty applications, not in laptops or phones. It requires powerful computers in service centers, which are easily identifiable by their heat signature from space. While not advocating for their destruction, it may be prudent to have contingency plans involving governments.

Video Saved From X

reSee.it Video Transcript AI Summary
We can enhance school security by implementing measures to prevent unauthorized individuals from entering campuses. Utilizing AI cameras can help quickly identify threats, such as someone brandishing a weapon, and immediately alert authorities. The goal is to ensure that only those who belong on campus are present, significantly reducing the risk of incidents.

Video Saved From X

reSee.it Video Transcript AI Summary
The K5 and similar devices can act as crime deterrents and help us learn how to use technology effectively in the future.

Video Saved From X

reSee.it Video Transcript AI Summary
The benefits are clear. Digital ID will make our interactions with each other and with the state faster, cheaper and more reliable. It will allow us to judge who has a right to be in our country and who doesn't, and so solves one of the major challenges of immigration. Facial recognition can now spot suspects in real time from live video, tracking organised criminals at borders, in public spaces, even helping find missing people. In London, live facial recognition led to 360 arrests by the Met Police between January and October 2024, just in a pilot project. It boosts response times and helps identify suspects quickly in busy places like train stations and events. Live video from body cams and CCTVs can be used to provide real time advice to officers from a command centre or deploy resources to where they're most needed.

Video Saved From X

reSee.it Video Transcript AI Summary
Our next generation police car, which is Elon Musk's favorite, is about to be released. It's incredibly safe and fast, with a stainless steel body. We don't need to add cameras because we utilize the existing ones in Tesla vehicles for our application. This technology is already being used in Stanislaus County, California, for both police and fire departments. The county, located near Yosemite Valley, is prone to brush fires, and we are concerned about the increasing dryness in California summers.

Video Saved From X

reSee.it Video Transcript AI Summary
The AI drone flies itself and reacts faster than a human. It uses stochastic motion as an anti-sniper feature. Like mobile devices, it has cameras and sensors, and performs facial recognition. It contains three grams of shaped explosive. The small explosion can penetrate the skull and destroy the contents.

Lenny's Podcast

The most successful AI company you’ve never heard of | Qasar Younis
Guests: Qasar Younis
reSee.it Podcast Summary
The episode centers on a conversation about the near-term and long-term impact of AI in physical industries, with a focus on how autonomous systems could reshape sectors like farming, mining, construction, and transportation. The guest argues that solving complex problems such as cancer will be accelerated by AI, and he offers an optimistic view that net human suffering could decrease as technology spreads access and capabilities, drawing a contrast with the industrial revolution where benefits eventually outweighed early downsides. A core theme is the pragmatic, rather than sensational, adoption of AI: by understanding the technology and applying it for good, individuals and organizations can mitigate fears about job displacement and safety concerns. The discussion emphasizes autonomy in existing heavy machinery and vehicles as the initial, high-impact application, noting that many sectors already rely on mature engineering and could gain substantial productivity when infused with AI, while the public debate often centers on misunderstood risks and the speed of change. The guest also reflects on the psychology of fear, acknowledging anxiety around automation while urging people to study the technology’s edges to see both limits and opportunities, such as the relative safety improvements offered by self-driving systems compared with human drivers. A recurring thread is the reality that markets and investors may overreact in the face of rapid AI development, mispricing risk due to simplified narratives about “vibe coding” or overnight disruption, and thus the importance of founder discipline, customer focus, and speed paired with safety. Throughout, the interview explores leadership lessons learned from building Applied Intuition: the value of staying quiet to focus on the product, cultivating “radical pragmatism,” maintaining transparent decision-making, and fostering a culture where the best idea wins and where inputs from all levels are actively solicited. 
The host and guest also debate China’s role in global tech, the limits of comparisons between OpenAI and Chinese firms, and the necessity of maintaining open markets to support broad innovation, while recognizing geopolitical nuances. The conversation closes with practical guidance for founders on reading widely, maintaining craft, and balancing visibility with product excellence.

Possible Podcast

Reflections on AI and S2 of Possible
reSee.it Podcast Summary
Artificial intelligence is framed here as a partner for humanity, not a replacement, as Possible closes season two with a meditation on medicine, democracy, and the future of work. The wrap-up features Mustafa on Pi, Maya on socially assistive robots for autism, and Anne-Marie Slaughter on the care economy, all framed by a belief that advances in AI should be pro-human and guided by leadership and care. A live episode is teased for season three, and the hosts stress that democracy and governance will shape how technology evolves in 2024 and beyond. One of the season's through-lines is a renaissance in medicine, not merely in tools for clinical assistants but in drug discovery, genetic mapping, and protein design, with references to AlphaFold and related work. Aria recounts a striking neuroscience case, mapping a patient's brain to allow speech after 20 years of immobility, illustrating AI's potential to extend human capacity and reframe medical progress as a collective human endeavor rather than a distant frontier. Aria also notes her fascination with Oliver Sacks' The Man Who Mistook His Wife for a Hat and George R.R. Martin's Game of Thrones. Open questions about governance surface as Reid Hoffman reflects on his board experience at OpenAI, noting that a lack of board competence and poor decision-making were central issues, while Aria defends the nonprofit OpenAI model as a public-benefit approach compatible with investment. The discussion also covers the tension between nonprofit structure and the need for capital, and how strategic governance impacts safety, trust, and deployment timelines. Copyright and data use dominate a thread on training data, fair use, and licensing, including questions about whether training on copyrighted material is permissible reading and how publishers might be compensated. The NYT's lawsuit and the broader debate about training data for models come up, with references to ideas for nominal data fees and beneficial licensing models.
The hosts suggest Wikipedia-style licensing as a viable template and discuss how partnerships could extend access while protecting creators. Beyond policy, Possible maps a future in which AI serves education, government services, and everyday life by expanding access to personal assistants, forms-filling, and support for low-income families, while stressing the need for lifelong learning and retraining. The conversation touches AI in national security as a tool for peace and the risk of deepfakes and misinformation in elections, with calls for truth-telling AI and responsible deployment to sustain democracy and human well-being.

Relentless

#48 - Police Chases, Ride Alongs, Bureaucracy | Daniel Francis, CEO Abel Police
Guests: Daniel Francis
reSee.it Podcast Summary
Daniel Francis, founder and CEO of Abel Police, discusses the real-world problems police agencies face with tedious, time-consuming reporting and how their AI-powered solution aims to reclaim officers' time for frontline work. The conversation dives into the origin of Abel Police, born from Francis's hands-on experiences in ride-alongs and observing how much time is spent documenting incidents. He explains the product's core value: turning body-cam footage into police reports, addressing the two-part structure of a report (structured data versus the narrative) and the shift from manual transcription to intelligent generation, all while navigating CJIS and security concerns. The episode highlights the acquisition journey: persistence through dozens of agency rejections, the breakthrough moment with Richmond, and the strategic pivot when Axon announced similar capabilities, which validated the concept but also exposed gaps Abel Police could fill with a more tailored CJIS-compliant stack. Francis emphasizes the fragmentation of policing across 18,000 US agencies, each with different contracts and processes, and why the company focuses on "soft," understaffed departments first, then scales using demonstrations, conferences, and relationship-building. The interview also touches on culture within policing, the stress of the job, the appeal of body cameras for accountability, and how reliable reporting can impact budgets and safety outcomes. Towards the end, the discussion shifts to expansion plans and product strategy. Francis outlines Abel Writer, a forthcoming tool to convert body-cam narratives into polished reports, and Abel Citizen, a citizen-facing report intake with a chat interface to elicit precise crime details. He argues that stronger frontline presence reduces crime, saves lives, and improves city governance.
The broader theme is leveraging AI to enhance policing through better data, streamlined workflows, and faster, more accurate documentation, while acknowledging political and administrative realities that shape adoption across diverse jurisdictions.
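The two-part report structure described in the episode (structured fields the records system requires, plus a free-text narrative drafted from the body-cam transcript) can be illustrated with a minimal sketch. The types and the `draft_report` helper here are hypothetical stand-ins for illustration, not Abel's actual product or API; a real system would use a speech-to-text pipeline and a language model for the narrative.

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    # Structured fields an agency's records system typically requires
    incident_type: str
    location: str
    officer_id: str
    # Free-text narrative drafted from the body-cam transcript
    narrative: str

def draft_report(transcript: str, incident_type: str,
                 location: str, officer_id: str) -> IncidentReport:
    """Toy stand-in for the generation step: frame the transcript
    as a report-style narrative alongside the structured fields."""
    narrative = (f"On scene at {location}, the recorded interaction "
                 f"proceeded as follows: {transcript}")
    return IncidentReport(incident_type, location, officer_id, narrative)
```

Separating the structured fields from the generated narrative is what lets a tool like this slot into existing records systems: the fields can be validated and indexed independently of how the narrative text is produced.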

Cheeky Pint

Garrett Langley of Flock Safety on building technology to solve crime
Guests: Garrett Langley
reSee.it Podcast Summary
Garrett Langley describes the origin and evolution of Flock Safety, from a neighborhood initiative to track license plates after a crime to a nationwide hardware and software platform used by thousands of cities and private companies. He emphasizes the core insight that traditional home and vehicle security focuses on reacting to crime rather than preventing it, and explains how Flock built a community-focused safety system, culminating in real-time, city-wide coordination through Flock OS, license plate readers, cameras, and drones. The conversation showcases concrete case studies: real-time 911 integration that can surface suspect descriptions such as clothing and vehicles, cross-agency collaboration enabled by shared data, and a drone-enabled response model that reduces dangerous pursuits and speeds up arrests. Langley highlights the shift from single-neighborhood deployments to a national network that supports complex operations across multiple states, with a strong emphasis on balancing rapid disruption of crime with accountability, privacy, and data retention safeguards. The interview also delves into the broader implications of this technology for public safety, including the tension between expanding law enforcement bandwidth and civil liberties, the role of third-party data and federal coordination, and the evolving regulatory landscape shaped by state bills that set data retention and auditing standards. Questions about hardware scale, supply chain risks, and the economics of hardware-heavy growth reveal how Flock navigates a difficult capital-intensive path while maintaining a profitable core and pursuing ambitious future bets. The discussion ends with Langley’s forward-looking ideas: using Flock’s platform to prevent crime before it happens, investing in community-economic development to reduce crime incentives, and exploring humane paths to rehabilitate offenders. 
He frames safety as a public right that requires legislative guardrails, transparent data practices, and a deliberate balance between effectiveness and privacy, while acknowledging the inevitable trade-offs as technology accelerates.

a16z Podcast

a16z Podcast | The Self-Flying Camera
Guests: Adam Bry, Chris Dixon, Hanne Tidnam
reSee.it Podcast Summary
In this a16z podcast, Adam Bry, co-founder and CEO of Skydio, and Chris Dixon discuss the evolution and future of autonomous drones, specifically self-flying cameras. They highlight the transition from manually operated drones to autonomous systems, emphasizing the importance of autonomy in enhancing user experience and expanding applications. Current drones require skilled pilots, but autonomy allows for safer, more efficient operations, enabling users to focus on tasks rather than piloting. Bry explains that Skydio's technology utilizes cameras and advanced algorithms for navigation and obstacle avoidance, contrasting it with self-driving cars, which rely on road structures. The drones are designed as flying computers, integrating various sensors and powerful computing capabilities to process visual information and make real-time decisions. The conversation also touches on the potential for drones in commercial applications, such as infrastructure inspection and data collection, which can reduce risks and improve efficiency. As drones become more autonomous, the role of humans will shift towards higher-level decision-making rather than manual operation. The discussion concludes with the idea that advancements in AI and drone technology will democratize creative expression, enabling more people to capture and share their experiences like never before.

a16z Podcast

The Crime Crisis In America (How Technology Fixes It)
Guests: Garrett Langley, Ben Horowitz
reSee.it Podcast Summary
The episode centers on a candid exploration of how technology intersects with crime, policing, and public safety in America, with a focus on practical strategies for reducing crime through smarter use of data, sensors, and analytics. The speakers argue that crime is best deterred not by fear alone but by credible incentives, accountability, and a prosecutorial approach that emphasizes catching offenders while prioritizing the social costs of mass incarceration. The discussion moves from high-level ideas about staffing and culture in policing to concrete examples of deploying cameras, drones, gunshot detection, and AI-powered data orchestration to understand and respond to incidents faster and more precisely. The tone is pragmatic and future-facing, insisting that technology should serve citizens and be transparent so communities can trust how safety is achieved. Across their case studies, they stress that trust and accountability are as important as speed and reach, and they advocate for aligned incentives among police, public officials, and private partners to address both immediate crime threats and long-term social risks. The conversation also delves into the political and social dynamics of policing, acknowledging that reforms must balance public safety with civil liberties and that the most successful models combine intelligent surveillance with community policing and direct investments in social supports to reduce crime over time. The hosts and guests share a vision of a more proactive, data-driven style of policing that lowers violence, improves clearance rates, and preserves individual rights, while highlighting the human side of policing—recognizing the stress on officers, the importance of diverse recruitment, and the need for humane policies that prevent people from being trapped in a cycle of offense. 
The overall message is that technology can amplify good policing when deployed thoughtfully, with clear governance, robust privacy protections, and meaningful collaboration between cities, vendors, and residents.

The Megyn Kelly Show

Redditor Helps Solve Brown U. Case, Tapper Trump Health Sham, Leftist Bullying, w/ Sexton and McNabb
Guests: Sexton, McNabb
reSee.it Podcast Summary
The episode offers a rapid-fire examination of a violent tragedy involving Brown University and MIT, political reactions, and a broader conversation about how institutions respond to crises in real time. The host and guests trace the shooter's path from Boston-area connections to the eventual suicide, highlighting how social media posts, citizen tips, and open-source sleuthing converged with traditional police work. They question the speed and tone of official briefings, arguing that celebratory press conferences can feel misplaced when the public remains grieving and when questions about the investigation's timeline, methods, and lessons learned linger. The discussion expands to the implications of surveillance technologies, facial recognition, and data from cameras, alongside the growing reality that ordinary people wield powerful investigative tools online. The conversation shifts toward the public's role in aiding law enforcement, the reliability of tips, and the potential for crowdsourced information to outpace formal investigations, all while acknowledging the risks of misattribution and misinformation. As the panel moves into policy and culture, the dialogue touches on how campus security and interagency coordination are shaped by politics, media narratives, and evolving technologies that empower individuals to scrutinize ongoing events. The episode further broadens to address media scrutiny of political figures and institutions, including criticism of management decisions, the optics of leadership during emergencies, and the adversarial tendencies of contemporary journalism. In closing, the hosts reflect on the holiday season's media landscape, contrasting sensationalism with accountability, and they emphasize the tension between free speech, public safety, and responsible discourse in a media ecosystem driven by rapid, decentralized information flow.
The discussion also travels through competing demands of accuracy and speed in storytelling, the ethics of public commentary during crises, and how private individuals using open networks can shape public perception and investigation outcomes. The guests balance urgent questions about what happened with broader concerns about privacy, civil liberties, and the ethical responsibilities of bystanders, authorities, and media alike when a mass incident tests community trust and investigative rigor. The dialogue underscores a culture-war frame—criticism of political leadership, appeals for greater transparency, and a call for pragmatic reforms in policing, campus security, and media accountability—while preserving space for civil debate about preventing future tragedies and ensuring that truth, rather than noise, guides public understanding. In a broader arc, the episode intertwines a crises narrative with a critique of online culture: the speed of Reddit tips, the power and peril of crowdsourcing, and the need for reliable verification in a world where any user can influence an official investigation. The result is a layered exploration of how truth emerges amid social platforms, sensational headlines, and polarized political climates.

a16z Podcast

Big Ideas 2024: New Applications for Computer Vision and Video Intelligence with Kimberly Tan
Guests: Kimberly Tan
reSee.it Podcast Summary
In 2024, significant advancements in computer vision and video intelligence are expected, particularly in industries lacking modern video systems. Companies are adopting a hardware and software model to enhance safety and efficiency, as seen with Flock Safety. The proliferation of cameras, cost-efficient tech, and innovative models are driving this change. Applications could emerge in transportation, industrials, and agriculture, improving processes like compliance and monitoring. Privacy concerns must be addressed through regulations and data rights. The convergence of these factors suggests a tipping point for widespread adoption of these technologies this year.

PBD Podcast

“We Hunt Them Down” - Sheriff Grady Judd on Crime, Drugs & Justice | PBD #774
Guests: Sheriff Grady Judd
reSee.it Podcast Summary
Sheriff Grady Judd discusses his long tenure in Polk County, detailing a policing philosophy that prioritizes public safety, accountability, and community trust. He describes a proactive approach to crime reduction, emphasizing strong sentencing policies, detective work, and aggressive undercover operations. The conversation covers how his agency uses real-time intelligence, collaboration with federal partners, and visible public accountability—including publicizing arrests and disciplinary actions—to deter crime and reassure residents. He explains how Florida’s sentencing structure has shaped outcomes locally, noting crime reductions over multi-decade horizons and arguing that targeted enforcement paired with rehabilitative programs can sustain safety while still offering second chances to non-violent offenders and veterans. The host presses on controversial topics, including the Epstein matter, debates about immigration enforcement, and the balance between civil liberties and safety, to which Judd responds by outlining a principled stance: prioritize the safety of law-abiding citizens, support strong border and enforcement measures, and avoid politicizing everyday policing. A significant portion of the discussion is devoted to the use of technology in policing. Judd describes the creation of a sheriff’s artificial intelligence laboratory (SAIL) in partnership with a regional polytechnic, highlighting projects that improve public safety while mitigating bias. He envisions an AI hub that coordinates law enforcement applications, predicts risk, and optimizes responses, including drone-based search-and-rescue and incident management. The dialogue also touches privacy concerns, acknowledging limits on surveillance and arguing that technology should enhance safety without infringing on private space. 
The interview moves through operational challenges, such as drug and human-trafficking interdiction, child protection efforts, and the legal framework that classifies victims versus criminals, underscoring a systemic approach that connects prevention, prosecution, and social services. Toward the end, Judd reflects on leadership, succession planning, and community engagement. He explains how he handles internal discipline with equal standards for civilians and officers, shares anecdotes about high-profile encounters, and reiterates a commitment to mentoring the next generation of law enforcement professionals. The conversation closes with a reaffirmation of Florida’s crime trends, a call for accountability at all levels of government, and an emphasis on safeguarding families as the core mission of policing, tempered by realistic and humane strategies for rehabilitation and public trust.

Generative Now

Scott Belsky: Content Creators, Creativity, and Marketing in the AI Landscape
Guests: Scott Belsky
reSee.it Podcast Summary
Generative AI is not merely a tool for tweaking images or drafting copy; Scott Belsky explains how it reshapes creativity, marketing, and the very economics of content. In a conversation recorded after the Robin Hood AI Summit, he and the host unpack how AI shifts who can create, what counts as originality, and whether the flood of automated output will drown or elevate human ideas. The discussion repeatedly returns to tensions between democratization and rising expectations. Creatives find that novelty often leads to utility: they use AI for mood boards, then discover commercial possibilities. Belsky argues that the real challenge is whether AI democratizes or commoditizes creativity, and how the surface area of exploration shapes outcomes. As brands flood social feeds with automatically generated variants, the demand for authentic, emotionally resonant work rises, making the creator's ability to tell a distinctive story more valuable than ever. On platforms and governance, the conversation shifts to regulation, licensing, and the provenance of models. Adobe argues that outputs should carry credentials indicating training data sources, and that brands will prefer models trained on licensed content for commercial work. The company points to Adobe Stock as an example of licensed training, and suggests a future where assets carry verifiable model-origin metadata to enable trust and compliance. Beyond compliance, the dialogue explores personal agents and the next wave of AI helpers. On-device, privacy-preserving agents could manage communications, shopping, and routines while surfacing safer choices and warnings. The vision extends to small businesses benefiting from AI-assisted decision making, allowing a five-person team to reach revenue levels once reserved for larger firms. The optimism rests on human ingenuity unlocking higher-order work as lower-order tasks become automated.