TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Palantir has multiple health care management programs called dashboards. They function as central intelligence for the hospital, showing real-time census, staffing levels, drug inventories, and more. They are centralized tools that let users change staffing, pharmacy orders, vaccine orders, ventilator counts, and other resources at the touch of a button. Hospitals already use these Palantir programs to manage their business, so adopting them was not a big change. Under Operation Warp Speed, the HHS Protect program was part of the effort, and Palantir created a program called Tiberius. Tiberius used data collected from HHS Protect and from the COVID-19 registry. It could predict behavior and included data such as ethnicity, location, behavioral data, and medical record information. It knew whether someone was vaccinated even though there was no publicly trackable code for vaccination status; Palantir, the CDC, and HHS had ways to determine it, while hospitals did not have access and were blinded. Tiberius assigned a risk score to people, and also to hospitals. This risk scoring was used to determine how resources were allocated—where to send vaccines, where to send ventilators, and where to send remdesivir.

Video Saved From X

reSee.it Video Transcript AI Summary
Gizmet aims to improve healthcare accessibility by reducing administrative burdens. The primary goal is to build capacity for participants who spend most of their time on administrative tasks. A significant portion of this administration involves locating available and suitable healthcare providers who can meet the specific care needs of each participant.

Video Saved From X

reSee.it Video Transcript AI Summary
Using the tools Sam and Masa are providing, the team is pursuing a cancer vaccine. All cancers shed tumor cells and fragments that float in your blood, enabling early cancer detection via a blood test. AI analysis of the blood test can identify cancers that are seriously threatening. After gene-sequencing the cancer tumor, you could vaccinate the person with a personalized vaccine, designed for each individual to target that cancer, and produce it robotically as an mRNA vaccine in about forty-eight hours. This could enable early cancer detection and a vaccine for your specific cancer within forty-eight hours. This is the promise of AI and the future.

Video Saved From X

reSee.it Video Transcript AI Summary
HHS is initiating an AI revolution, attracting experts from Silicon Valley to improve government systems. Changes include improving or supplementing the VAERS system using AI. The FDA is using AI to accelerate drug approvals, potentially eliminating the need for primate or animal models. CMS is implementing AI to detect waste, abuse, and fraud. The CDC and other departments will use AI to analyze large-scale data for better decision-making regarding interventions. AI can assess the effectiveness and side effects of drugs like diabetes medications, statins, and SSRIs across the population. This use of AI has the potential to revolutionize medicine.

Video Saved From X

reSee.it Video Transcript AI Summary
A program manager in the Biological Technologies Office at DARPA, an active-duty Army infectious diseases physician specialized in addressing biological threats, whether engineered or naturally occurring, discusses one of the technologies he actively manages: a company called Profusa, which aims to achieve tissue-level continuous health monitoring. Through the SBIR program, they funded Profusa to solve a difficult technical challenge that no one else had previously been able to solve. The key innovation starts from a question: why can't we make a chemical substance so similar to what's underneath the skin, the subcutaneous tissue, that the body doesn't mount a foreign-body response? It just incorporates itself into the tissue. There are now many examples where a sensor placed right underneath the skin can sense things like oxygen and other chemicals that are very important to our metabolism, and not just for a day, a week, or even a month; the team imagines that sensing these parameters can go on for a period of years. One of the most important applications for DARPA is improving the health of the worldwide deployed military force. There is a strong sense of obligation: if we're going to ask somebody to deploy and carry out their mission, we want to keep them healthy. This technology would give a way to monitor whether someone is getting sick, and they imagine being able to sense that very early, thereby preventing illness and its complications and allowing service members to stay healthy and continue to carry out their mission. In addition, if the technology translates into general health benefits, they are very excited about that. In other words, DARPA funds the national security applications, while the company pursues private sector partnerships.

Video Saved From X

reSee.it Video Transcript AI Summary
So there was a program called HHS Protect that started during Operation Warp Speed. This HHS Protect program is really interesting because it used two different Palantir programs. The AMA, HHS, and the CDC specifically all partnered with Palantir, and then Palantir developed a program for Operation Warp Speed. That program assigned people a threat risk score, and that was a program called Tiberius. They also could determine down to the ZIP code where you were and how compliant areas were. And then Gotham is the AI kill chain program created by Palantir. The Gotham program takes the threat risk score from Tiberius and then runs an AI decision-making process that decides when, how, and where to deploy the countermeasures, which were your vaccine, your remdesivir, and your ventilator.

Possible Podcast

Daphne Koller on drug discovery and AI
Guests: Daphne Koller
reSee.it Podcast Summary
Artificial intelligence is returning to medicine not as a curiosity but as a driver of drug discovery and development. The typical pipeline begins with a biological insight and a therapeutic hypothesis about how modulating a target could help a patient, then moves to creating the right chemical matter, and finally to clinical development in people. The farther you go, the more expensive it gets, with clinical development being the costliest and most failure-prone stage. Depending on the estimate you trust, only about 5 to 10 percent of molecules entering the final clinical phase emerge with regulatory approval. The industry’s costs have spiraled, with fully loaded programs now running north of 2.6 billion dollars. Advances in AI are accelerating the middle piece of this journey: turning a target into a drug by designing effective molecules and screening vast libraries. The protein space benefits especially because advances like AlphaFold give structural context that makes it easier to predict how a molecule will interact with a protein. In addition, the explosion of multi-modal biological data—from cells and tissues to single-cell profiling and imaging—creates raw material for AI to interrogate biology at scale. Yet there is a gap: AI can rapidly generate hypotheses and designs, but turning new biological insights into disease-modifying therapies remains the harder, slower part of the journey. The strongest potential lies in redefining disease biology itself and identifying precise subtypes that respond to specific interventions. Data and incentives shape what is possible. A transformation in health care data collection and sharing is needed: richer, harmonized data from patients, with appropriate anonymization and safeguards. The talk notes that incentives in the United States often do not align with comprehensive diagnostics and data-driven treatment choices, and that centralized health data repositories could unlock breakthroughs much faster.
Collaboration between academia and industry is essential, balancing deep theoretical thinking with product-like execution. The optimism rests on an exponential trajectory across AI, biology, and medicines, with the pace of change accelerating as measurement improves and integration tightens, ultimately enabling more precise, effective therapies.

The Ben & Marc Show

Ben Horowitz & Marc Andreessen: Why Silicon Valley Turned Against Defense (And How We're Fixing It)
reSee.it Podcast Summary
The episode examines why Silicon Valley’s traditional stance on defense needs a fundamental rethink, arguing that America’s dynamism—its blend of innovation, flexible execution, and a willingness to leverage private sector strengths—remains essential to global security and prosperity. The hosts trace a history of closer ties between tech and defense, describe a decades-long drift toward hostility, and propose a pragmatic path back to collaboration, modernization, and a shared national mission anchored in American values. A core theme is the shift from centralized five-year planning toward rapid iteration and decentralized creativity. The speakers critique entrenched procurement models and five-year cycles, arguing that today’s battlefield and technology landscape demand speed, adaptability, and close alignment between Silicon Valley founders and government customers. They emphasize how the Ukraine conflict and near-peer competition have underscored the need for modern, attritable systems, not grand but fragile, exquisitely engineered platforms. The conversation highlights the emergence of American Dynamism as a cross-cutting investment thesis. Hardware paired with software, commodity components scaled by advanced AI and autonomy, and a shift toward domestic manufacturing and critical minerals are presented as the route to resilience. Energy, space, and aerospace are discussed as interdependent pillars, with investments in nuclear power, energy storage, satellite infrastructure, and modular space systems illustrating how a diversified portfolio can sustain national security alongside economic growth. Katherine, Ben, Marc, and the guests describe a cultural reorientation in the Valley—toward embracing defense, national service, and the realities of hardware-driven, physical-world problems.
The dialogue affirms the importance of founders who understand government customers, have authentic security clearances, or come from backgrounds that connect deeply with the needs of the user. The overarching aim is a modern, American-led ecosystem capable of competing with China while strengthening allied markets through shared technology and procurement reform. The episode concludes on a forward-looking note: manufacturing will be reimagined through automation and high-skill jobs, not mere nostalgia for old plants. The group predicts increased collaboration with legacy primes and a wave of new startups solving “dumb parts” and sophisticated systems alike. They see robotics, AI-enabled hardware, and offensive space as fertile grounds, with international partnerships expanding the market for American dynamism and keeping the United States at the center of global technological leadership.

Possible Podcast

Possible Ep 94 | AMD’s Comeback and Vision for Chipmaking w/ CEO Lisa Su
Guests: Lisa Su
reSee.it Podcast Summary
AMD’s comeback isn’t a single product launch so much as a disciplined reimagining of what the company should be best at. Su describes a founding conviction: we could own high-performance computing, and we chose to bet the road map on chiplets. Raised in Taiwan, educated at MIT, she recalls hands-on curiosity—tinkering with a remote-control car and discovering cause and effect in hardware. At IBM, she moved from device physics to product leadership and business strategy, learning that engineering matters most when it creates real products. Those bets, especially on chiplets, reshaped AMD’s trajectory and prepared the ground for today’s AI-driven era. Today Su frames AMD as more than a hardware vendor: a provider of end-to-end compute for AI and cloud services. AMD touches billions daily—driving servers in the cloud, powering consumer devices, and supporting games consoles—because the right compute at the right time is core to every application. Chiplets remain central, enabling scalability and rapid evolution as AI workloads grow. The company also leans into software and tooling to help customers translate code to AMD and to accelerate development with AI across design, test, and manufacturing. Beyond products, Su argues for national manufacturing resilience, US-based fabs, and a diversified supply chain amid global volatility. Her answers touch on a longer horizon: AI’s decade-plus supercycle, the disruption of how we build and use hardware, and the crucial role of talent. She emphasizes taking risks on people who volunteer for hard problems, fostering cross-disciplinary thinkers who can stitch hardware and software together. She notes that AI’s adoption is accelerating—from code writing to kernel optimizations—while the pace of hardware and software evolution demands continuous learning. She also links health care and medicine to AI’s potential, envisioning second opinions and drug discovery accelerations that could improve lives. 
The first step, she says, is broader, more aggressive use of AI across health and industry.

The Koerner Office

This Little AI Device Will Print Millionaires
reSee.it Podcast Summary
The Koerner Office episode examines a bold vision: use a wearable AI pendant to automate knowledge work, particularly in healthcare and sales, and to create scalable services that help other businesses operate more efficiently. The hosts argue that entrepreneurship paired with artificial intelligence represents a powerful opportunity for younger generations, especially when you position yourself as a modern technician who can implement AI tools, automate workflows, and charge premium fees for ongoing management and optimization. The conversation moves from high-level trends to concrete use cases, exploring how a wearable device could capture conversations, summarize meetings, and feed structured data into patient records, sales coaching, or CRM systems. A central thread is the insistence that AI adoption is not a one-time fix but an ongoing evolution requiring human operators, automation know-how, and careful consideration of privacy and compliance. The hosts imagine a tiered service model: getting clients to wear the pendant, setting up the backend automations, and then offering optimization coaching or ongoing data-driven interventions. They discuss real-world constraints—such as HIPAA, data storage, and the variability of existing healthcare IT setups—and stress that the real value lies in speeding note-taking, improving accuracy, and delivering timely summaries that can inform decisions, billing, and patient care. Throughout the dialogue, practical implementation questions loom large: who is already doing similar AI agency work, what pricing models exist, and how to validate a replicable framework? The hosts propose tangible paths—pilot patients, side ventures in clinics, or sales teams—while weighing the trade-offs between onshore staff, virtual assistants, and automated workflows. 
The overarching message is provocative: leverage voice-driven AI interoperability to elevate performance across industries, then capture value by monetizing improved outcomes and faster decision-making.

Possible Podcast

The Godmother of AI on what AGI means for humanity
reSee.it Podcast Summary
Humans stand at the edge of a spatial AI revolution that links the 3D world we perceive with the digital realms we build. Fei-Fei Li traces ImageNet to 2006, noting that data quality and diversity drive learning as much as model complexity. Moving from WordNet to ImageNet, she pursued large, diverse visual data as a foundation for understanding a world beyond flat pixels. Years later, she defines spatial intelligence—the ability to perceive, reason about, and act in 3D space, in physical and virtual environments. ImageNet labeled 2D projections of a 3D world; World Labs aims to unify 3D grounding across real and digital realms. She contrasts large language models with world models whose units are pixels or voxels, arguing that language is a human abstraction of the 3D world, while perception and action in 3D space point to AI’s future. She envisions a future where AI augments human agency, not replaces it, grounded in two principles: respect for human agency and respect for people. The Human-Centered AI Institute promotes science-based governance with guardrails focused on medicine, finance, and education. She highlights AI4ALL, expanding AI for diverse K-12 students. In healthcare, she describes noninvasive smart cameras to help caregivers monitor patients and improve safety. She discusses AGI as a debated term, noting the aim of machines that can think and perform a range of tasks. Governance should focus on where harm occurs, updating frameworks, and building a public-private ecosystem to educate, innovate, and democratize benefits. She ends with optimism about energy innovation and a 15-year vision of knowledge, wellbeing, and shared prosperity.

a16z Podcast

America's Autism Crisis and How AI Can Fix Science with NIH Director Jay Bhattacharya
Guests: Jay Bhattacharya, Erik Torenberg, Vineeta Agarwala, Jorge Conde
reSee.it Podcast Summary
A bold mission to fix science from the inside out unfolds as NIH director Bhattacharya lays out a Silicon Valley–inspired portfolio. Six months in, he launches a $50 million autism data-science initiative, with 250 teams applying and 13 receiving grants to pursue data-driven answers for families. He cites the CDC’s estimate of autism at 1 in 31 and argues for therapies that actually work and clearer causes to guide prevention. One funded effort centers on folinic acid treatment delivering brain folate, improving outcomes for some children with deficient folate processing, including speech in a subset. Not all benefit, but wider access could help. A second thread urges caution with prenatal acetaminophen use, noting evidence of autism risk and signaling guideline changes. He also highlights a cross-agency push on pre-term birth to narrow the US–Europe gap in prenatal care. The dialogue then shifts to the replication crisis in science, born from volume and conservative peer review. Bhattacharya, a longtime grant-panelist, argues that ideas stall because reviewers cling to familiar methods and fear novelty. He describes NIH reforms modeled on venture capital: centralized grant reviews, empowering institute directors to curate portfolios, and rewarding success at the portfolio level rather than individual wins. He emphasizes funding early-career investigators to bring fresh ideas while evaluating mentorship of the next generation. The aim is a sustainable pipeline that balances risk and reward, mirrors scientific opportunity, and aligns with the institutes’ strategic plans. He calls for a broader, transparent conversation with Congress and the public about funding and progress toward healthier lives. He ties trust to gold-standard science—replication and open communication—and notes how HIV/AIDS-era public pressure redirected NIH priorities. The Silicon Valley analogy endures: a portfolio of bets, most fail, a few breakthroughs transform health. 
AI can accelerate discovery, streamline radiology, and optimize care, but should augment rather than replace scientists; safeguards must protect privacy while expanding open access and academic freedom. The long-term aim is to reduce chronic disease and improve life expectancy. He closes with Max Perutz’s persistence as a blueprint for patient science. He envisions an NIH that protects academic freedom, expands open publishing, and uses AI to augment, curating a diverse portfolio balanced by evidence and bold bets to lift health outcomes for all Americans.

Sourcery

Meme Lord CEO Raises $22.5M Series A for Hello Patient
Guests: Alex Cohen
reSee.it Podcast Summary
The episode centers on Alex Cohen, the founder behind Hello Patient, and his unconventional path through Silicon Valley, including a string of high-profile firings and a recent Series A of 22.5 million. The conversation tracks the origin of Hello Patient, a healthcare-focused conversational AI platform designed to handle patient-facing communications across multiple channels, from phone calls to text messages and website chat. The host and guest discuss the rationale for targeting healthcare specifically, highlighting the friction in patient engagement and the potential for AI to fill staffing gaps, improve appointment scheduling, and reduce missed calls. Cohen describes the practical steps behind building their product, including the emphasis on HIPAA compliance, BAAs, data minimization, and regulatory considerations around telephone communications. They also detail the sales process—from initial discovery calls to a live demo and a proof-of-concept deployment that simulates real patient interactions and workflows. The interview delves into the current regulatory and risk landscape, noting that while AI in healthcare promises efficiencies, the real risk lies in misdiagnosis or misrouting urgent cases, which motivates a preference for administrative automation rather than clinical decision-making. The discussion also covers the evolution of healthcare software, the entrenched position of incumbents like Epic and ModMed, and the competitive edge gained by early adoption of AI-powered patient engagement. Throughout, Cohen emphasizes that the company’s model focuses on improving inbound accessibility and outbound engagement, measuring impact via improved call answer rates, appointment bookings linked to campaigns, and overall patient satisfaction, while noting the challenges of integrating with diverse practice management systems. 
The episode closes with a candid look at the team’s growth plans, hiring priorities, and the ongoing exploration of peptides and longevity, including personal experimentation and considerations around hormone optimization and sleep strategies, framed within a broader conversation about health optimization and the limits of current science.

a16z Podcast

Mark Zuckerberg & Priscilla Chan: How AI Will Cure All Disease
Guests: Priscilla Chan, Mark Zuckerberg
reSee.it Podcast Summary
Priscilla Chan and Mark Zuckerberg discuss the Chan Zuckerberg Initiative's (CZI) ambitious mission to cure, prevent, and manage all disease by the end of the century. Priscilla, a pediatrician, realized the limitations of current medical knowledge, especially for rare diseases, highlighting the critical need for advancements in basic science. Mark clarifies that their strategy isn't to directly cure diseases but to accelerate the pace of scientific discovery by building foundational tools, a niche often overlooked by traditional government funding which favors shorter-term projects. CZI focuses on long-term, expensive tool development, such as those costing hundreds of millions to a billion dollars over 10-15 years. The core of CZI's scientific philanthropy is the Biohub, which uniquely integrates frontier biology with advanced AI. A key example is the Cell by Gene atlas, initially an annotation tool for single-cell data that evolved into a widely adopted, community-driven open-source resource due to its standardized format. The current major focus is on developing 'virtual cell models' using AI, including large language models and early reasoning models. These models aim to simulate complex biological processes, from proteins to entire immune systems, allowing scientists to test riskier hypotheses computationally (in silico) before committing to costly and time-consuming wet lab experiments. CZI's organizational approach emphasizes interdisciplinary collaboration, bringing biologists, engineers, and AI experts together in Biohubs located near leading universities. They also provide large-scale compute resources (GPU clusters) to the broader scientific community, fostering external collaborations. This model encourages a shift towards precision medicine, where treatments are tailored to individual biology rather than broad classifications. 
The founders express that while CZI initially explored various philanthropic areas, science research consistently yielded the greatest impact, leading them to double down on the Biohub. They believe that with the rapid advancements in AI, their ambitious goal of accelerating disease understanding and prevention can be achieved significantly sooner, empowering a new wave of scientific innovation and drug discovery.

Genius Life

The Hidden Crisis in Women’s Health & The Blind Spots AI Might Fix - Dr. Erin Nance
Guests: Dr. Erin Nance
reSee.it Podcast Summary
The episode centers on Dr. Erin Nance’s exploration of why women’s health often goes misdiagnosed and how medical knowledge historically reflected male presentations more than female symptoms. The conversation starts with heart attacks, highlighting how women’s symptoms can diverge from the classic male picture, and explains that research underrepresentation of women has led to pattern-recognition biases that persist in clinical training. This bias is not about lazy clinicians but about systemic gaps in data collection and research that shape medical education, leaving women more likely to be misdiagnosed. The discussion then broadens to ADHD, illustrating that girls and women show different symptoms than boys, which further compounds misdiagnosis when research focuses predominantly on male presentations. The host and guest affirm that AI offers optimism for diagnostics, not as a replacement for clinicians, but as a tool to complement expertise, especially as healthcare becomes more data-driven. They discuss how AI can help with rapid literature review, data synthesis, and targeted differential diagnoses, while emphasizing that final decisions still hinge on thoughtful clinician judgment and patient-provider collaboration. The talk moves into practical patient engagement: tracking symptoms, journaling, and using credible social-media resources to understand patterns while avoiding misinformation. Dr. Nance describes Feel Better, a platform intended to curate vetted medical information and support informed conversations between patients and Northstar providers—trusted clinicians who coordinate care and bring colleagues into the diagnostic process when needed. Personal stories—ranging from a misdiagnosed toe pain to the broader theme of “rare” conditions that are often underrecognized—underscore how social media can widen access to information and connect patients with experts who can help identify root causes, not just descriptive diagnoses. 
The discussion also touches on systemic issues in medicine, such as overprescription, insurance landscapes, and the evolving role of precision medicine. Throughout, the episode champions a patient-centered, data-informed approach: use AI to expand capabilities and access, but maintain human-centered care that respects patient experiences and seeks ambitious, scientifically grounded solutions for complex, sometimes rare, health problems. The guest closes with a hopeful note about science’s iterative nature—questions and continuous improvement are essential—and a call to empower individuals to advocate for themselves while recognizing the need for robust research and better systemic processes to reduce misdiagnoses and improve outcomes for women and all patients.

Shawn Ryan Show

Dr. David Fajgenbaum - Doctor Finds a Cure for His Own Castleman’s Disease | SRS #240
Guests: David Fajgenbaum
reSee.it Podcast Summary
A man faces a terminal illness with a radical idea: medicines we already have can cure what we lack. David Fajgenbaum’s journey begins with his mother’s brain cancer, a losing battle that fuels his vow to change medicine. As a medical student, he nearly dies from idiopathic multicentric Castleman disease, enduring dialysis, brief blindness, and last rites. He survives after intensive chemotherapy and vows to find treatments for others, while starting a grief support group named AMF, later Actively Moving Forward. Before long, chemotherapy isn’t enough. He discovers the drugs saving his life were not designed for Castleman’s, and asks: could there be an eighth drug repurposed from another disease? He researches globally, stores blood and tissue, and asks doctors to try drugs used elsewhere. In this crucible, he identifies a key insight: a drug to prevent organ rejection can suppress a harmful immune signal driving Castleman’s. He begins sirolimus and, after relapses, reaches durable remission, marrying Caitlyn in 2014 as his hair regrows. AI becomes his partner. He and Grant Mitchell co-found Every Cure to scan all 4,000 FDA-approved drugs against 18,000 diseases using a biomedical knowledge graph. The goal is to reveal which medicines might treat which conditions. In the first phase, 75 million matches are scored; the team of about 50 has reviewed the top 6,000, deep-dived into roughly 60–70, and advanced about 15 toward plans. Nine programs are active, including lidocaine for recurrence and a JAK inhibitor for Castleman’s. Nonprofit funding plus ARPA-H supports scale. The human side continues. They share successes: Michael with metastatic angiosarcoma responded to pembrolizumab; Kyla, a Castleman’s patient, improved after a JAK inhibitor; Joey, a child at CHOP, showed rapid lab improvements. Caitlyn’s unwavering support culminates in their wedding day.
They discuss dissemination: UpToDate is imperfect, and knowledge must reach doctors worldwide, not just scholars. They envision a future where AI-guided matches are tested in labs and moved into trials, expanding access and reducing suffering for thousands. The mission: unlock hidden cures in existing drugs and spread them widely.
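The scale of Every Cure's matching step follows from simple arithmetic: scoring every approved drug against every catalogued disease is a full Cartesian product, and 4,000 × 18,000 is about 72 million pairs, consistent with the "75 million matches" figure in the summary. The sketch below is a toy illustration only; the function names and example scores are invented, not Every Cure's actual pipeline or data.

```python
# Toy illustration of exhaustive drug-disease matching as described in the
# episode: every approved drug is scored against every catalogued disease.
N_DRUGS, N_DISEASES = 4_000, 18_000  # approximate counts from the episode

def n_candidate_pairs(n_drugs: int, n_diseases: int) -> int:
    # The search space is the full Cartesian product of drugs and diseases.
    return n_drugs * n_diseases

def top_k(scores: dict, k: int) -> list:
    # Rank drug-disease pairs by score and keep the top k for human review.
    return sorted(scores, key=scores.get, reverse=True)[:k]

# ~72 million pairs, the same order of magnitude as "75 million matches".
print(n_candidate_pairs(N_DRUGS, N_DISEASES))  # 72000000

# Tiny made-up example of the ranking step (scores are illustrative):
toy_scores = {
    ("sirolimus", "Castleman disease"): 0.92,
    ("pembrolizumab", "angiosarcoma"): 0.81,
    ("drug_x", "disease_y"): 0.10,
}
print(top_k(toy_scores, 2))
```

In the real effort the scores come from a biomedical knowledge graph rather than a flat table, and the ranked list feeds the human review described above (top 6,000 reviewed, roughly 60–70 deep dives).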

Genius Life

The Real Reason Healthcare Is Failing & The Death Of “Sick Care” - Dr. Nasim Afsar
Guests: Dr. Nasim Afsar
reSee.it Podcast Summary
The discussion centers on how artificial intelligence is reshaping healthcare without replacing the human element of care. The guest argues that AI, as of 2026, is a powerful tool that can augment clinicians and patients, but only if data is managed with privacy and security in mind. Afsar emphasizes that the issue in healthcare is not a lack of technology but a misalignment of systems and incentives, urging a shift from siloed stakeholder focus to a consumer-centric view of health and care. She reflects on past transitions, like electronic health records, and cautions against simply layering AI onto dysfunctional workflows. Instead, the conversation concentrates on redefining processes around the consumer’s needs and outcomes, with AI augmenting decision-making, predictive insights, and everyday health management beyond clinic visits. The notion of intelligent health is introduced as a framework that integrates clinical data with lifestyle, environment, genetics, and lived experience to craft personalized pathways. This approach seeks to reduce cognitive load for individuals while providing clinicians with actionable signals to prevent illness and optimize well-being. The dialogue also explores practical uses of AI in daily life, such as personalized meal planning, data-driven motivation, and the interpretation of biometric trends, while acknowledging that evidence-based guidance and careful framing of questions are essential to avoid misinformation or harm. The discussion does not shy away from challenges: data sharing risks, environmental costs of AI infrastructure, and potential gaps in model training that may overlook underrepresented populations. Yet the overarching message is one of balance—leveraging AI to empower people, not to replace human judgment, and building systems that reward health-promoting choices rather than sick-care interventions. 
The host reflects on personal experiences with habit formation and the role of tools like digital scales and appetite-aware feedback. By the end, intelligent health is presented as a consumer-owned, goal-driven ecosystem where technology reduces friction, clarifies options, and supports individualized health journeys while maintaining ethical and sustainable standards.

Possible Podcast

Peter Lee on the future of health and medicine
Guests: Peter Lee
reSee.it Podcast Summary
Healthcare’s future began to reveal itself through a string of chance assignments that followed a speeding ticket and a two-page memo. After the 2008 election, I wrote two-page policy papers for DARPA at Tom Kalil’s request, left Carnegie Mellon to join DARPA, and found myself briefing the Secretary of Defense. Crowdsourcing, network effects, and machine learning, I learned, can shift deployment and impact. Later at Microsoft, I worked in an internal healthcare incubator, and in 2016 Satya Nadella asked me to focus on healthcare instead of returning to research. Today the conversation centers on healthcare and AI, including personal use of GPT-4. I use it to interpret lab results, explain benefits, and decipher CPT codes that insurance notices present. Even executives struggle with these documents, and AI can clarify what an elevated LDL means and what costs are owed. I describe curbside consultations: GPT-4 can critique a clinician’s differential diagnosis, suggest tests like an angiogram or BNP, and, as a co-pilot, help prepare questions for a brief call with a specialist. This technology empowers families and clinicians while highlighting risks and limits. On the governance side, regulation remains unsettled and globally uneven. The medical community must help shape a practical code of conduct and ensure humans stay in the loop to finalize decisions, with transparency about AI assistance to patients. I compare this evolution to copper wire and light bulbs, emphasizing education, testing, and gradual adoption. Partnerships with Mercy, Epic, Nuance, and others illustrate how AI can reduce clerical burden and improve patient communication, including draft notes that patients find more human. The dream is real-world evidence that every encounter contributes to medical knowledge and broad access within the next decade.

The Rich Roll Podcast

How A.I. and Big Tech Are Shaping The Future of Healthcare | Dr. Lloyd Minor X Rich Roll Podcast
Guests: Dr. Lloyd Minor
reSee.it Podcast Summary
The episode surveys how artificial intelligence is reshaping medicine, from diagnostics to drug discovery and patient care. Dr. Lloyd Minor, dean of Stanford Medical School, frames AI as medicine’s most consequential moment, enabling models trained on vast datasets to complement human expertise, reduce errors, and expand access, particularly in under-resourced settings. The conversation traces the evolution from electronic prescribing and basic clinical decision support to modern large language models and transformer-based systems that can sift through billions of data points to identify patterns, predict disease, and tailor therapies. A key theme is that AI will not replace clinicians but redefine roles: radiologists and pathologists, for example, may work more efficiently with AI, while retaining critical judgment and patient interaction. The discussion emphasizes safety, transparency, and public engagement in deploying AI, arguing for governance that includes patient privacy and ongoing evaluation of model performance to avoid bias. The guest offers concrete examples of AI’s impact on healthcare delivery, such as computer-assisted skin cancer evaluation that can triage cases in rural areas, and AI-assisted imaging that highlights overlooked findings for radiologists. In pathology, AI can aggregate data across health systems to improve diagnostic accuracy for rare tumors, leveraging volumes of data that exceed what any individual expert could review. AI also enhances drug discovery by mapping protein structures from sequences and enabling the design of new therapeutics or refined clinical trials, ushering in a broader vision of Precision Health that seeks to anticipate and prevent disease rather than react after onset. 
Wearable devices and consumer health data are presented as catalysts for real-time monitoring, with the Apple Heart Study highlighted as proof of feasibility for detecting atrial fibrillation, and glucose, blood pressure, and other metrics poised to become more routine in daily life. The transcript delves into medical education’s transformation, predicting diminished emphasis on memorization and greater focus on data literacy, critical skepticism about AI outputs, and training that uses AI as a tool for inquiry. Virtual reality and simulation are described as supplements to cadaver work and surgical planning, while nutrition and behavioral science gain traction as essential components of a preventive paradigm. The guest also addresses ethical concerns—privacy, data bias, and preserving patient–provider relationships—calling for responsible regulation and public transparency. Finally, while acknowledging systemic healthcare challenges, the talk remains optimistic about incremental, practical changes that improve detection, prevention, and patient engagement in the near to mid-term future.

20VC

Alex Lebrun: Why the EU's AI Regulation is a Disaster; How Zuck Prepares for Meetings | E1027
Guests: Alex Lebrun
reSee.it Podcast Summary
Will AI replace doctors? AI will not replace doctors, but doctors who use AI will replace doctors who don't. The conversation focuses on practical deployment, not hype, with Nabla's Alex Lebrun arguing AI augments clinicians rather than replaces them. The founder fell in love with chatbots 22 years ago. 'I fell in love with a chatbot. Her name was Sibel,' he recalls, launching VirtuOz, an early customer-service bot company. Nabla grew from lessons learned across two prior startups and Facebook AI research. Charting the arc of AI progress, the guest says the last 18 months look like a step function to the public but a long continuum to insiders. GPT-4 builds on GPT-3; transformers arrived in 2017; progress is continuous, even if perception feels discontinuous. Nabla aims to be an AI assistant to clinicians, bridging ambient listening with EHR data through interoperable APIs. The prototype targets emergency call centers and hospital workflows, emphasizing physician productivity and patient empathy. The business model matters: Nabla started in the US with a bottom-up approach, since hospitals may resist unless incentives align, and European regulation can hinder deployment. Regulation timing is key: too early blocks progress, too late invites unaddressed risks. Europe’s rules may push startups to relocate or retrain models abroad; he suggests learning from the field first and involving users before formal regulation. Bias, data quality, and open versus closed models are discussed. He notes biases exist in models and that diverse teams improve outcomes; on open versus closed models, open infrastructure can win, but openness is not a panacea. Hallucinations are acknowledged as a design issue.

Uncapped

The Future of AI Software Security | Ep. 39
reSee.it Podcast Summary
The episode examines how the rise of AI dramatically changes the security landscape for software, insisting that traditional defenses must evolve to handle both the scale and sophistication of AI-driven threats. The guest shares a hands-on view of how a security-focused startup aims to turn AI into an active defender, deploying an AI-powered security engineer concept that can map complex codebases, review configurations, and continuously surface vulnerabilities. The conversation emphasizes that attacks are becoming more frequent and that effective defense requires deep context, disciplined experimentation, and a willingness to rethink prioritization so security does not undercut developer velocity. Thinking through real-world examples from prior work, the host and guest discuss how a culture of rapid iteration, strong data discipline, and clear ownership can unlock meaningful security improvements without crippling productivity. A recurring theme is the tension between ambitious protection and practical delivery, and how modern tooling can harmonize safety with speed by treating security as an integrated capability rather than a gating mechanism. The discussion also explores how security leadership can attract talent and shape company strategy in a fast-moving AI era, highlighting the human elements of leadership, risk assessment, and long-term alignment with a company’s mission.

Doom Debates

Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?
Guests: Andrew Critch
reSee.it Podcast Summary
In this episode of Doom Debates, host Liron Shapira interviews Dr. Andrew Critch, a research scientist at UC Berkeley and co-founder of Healthcare Agents. They discuss the urgent topic of AI safety and the potential risks associated with advanced AI systems. Critch emphasizes that the challenge is not merely a coordination problem but a fundamental disagreement in values among humans, with some individuals indifferent to the survival of humanity. Critch shares his journey into the AI safety field, beginning with his skepticism of deep learning claims made by Andrew Ng in 2010. This skepticism led him to the Machine Intelligence Research Institute (MIRI), where he found a community concerned about the risks of AI. He expresses concerns about the future, estimating an 85% chance that humanity will not survive past 2050 due to AI-related risks, with a significant portion of that risk stemming from the potential loss of control over AI systems. The conversation touches on the importance of understanding the dynamics of AI development, including the potential for multi-polar competition among AI systems, which could lead to negative outcomes for humanity. Critch argues that the focus should be on ensuring that AI systems are aligned with human values and that there is a need for a robust regulatory framework to prevent catastrophic scenarios. Critch also discusses his work at Healthcare Agents, where he aims to integrate AI into healthcare in a way that respects human autonomy and welfare. He believes that the healthcare industry can serve as a model for how AI can positively interact with humans, setting a precedent for future AI development. The discussion concludes with Critch advocating for greater empathy and understanding in the discourse around AI risks, emphasizing the need for a collective effort to ensure a future where humanity thrives alongside advanced AI systems. 
He calls for individuals to engage in discussions about AI safety and to take action to prevent potential existential risks.

Lenny's Podcast

The coming AI security crisis (and what to do about it) | Sander Schulhoff
Guests: Sander Schulhoff
reSee.it Podcast Summary
The episode presents a hard-edged critique of current AI safety approaches, arguing that guardrails and automated red-teaming tools, as they exist today, are fundamentally insufficient to prevent harmful outputs or misuses as AI systems gain more power and autonomy. The guest explains that attempts to classify and block dangerous prompts often fall short against the sheer scale of potential attacks, describing an almost infinite prompt landscape and the unrealistic promises of catching “everything.” Through concrete demonstrations and historical examples, the conversation emphasizes that real-world AI can be manipulated to reveal secrets, exfiltrate data, or orchestrate harmful actions, which underscores the urgency of rethinking how we deploy and govern these systems as they become more agentic and capable. The discussion moves from problem diagnosis to practical implications, connecting the dots between cybersecurity principles and AI-specific risks. The guest argues that the traditional patch-and-fix mindset from software security does not translate to intelligent systems with evolving capabilities. Instead, teams should adopt a mindset that treats deployed AIs as potentially hostile actors that require strict permissioning, containment, and governance. Real-world scenarios, from chatbot misbehavior to autonomous agents executing actions across data, email, and web services, illustrate how even well-intentioned systems can be coerced into harmful workflows, highlighting a need for organizational changes, specialized expertise, and cross-disciplinary collaboration between AI researchers and classical security professionals. A forward-looking arc closes the talk with a pragmatic roadmap: educate leadership, invest in high-skill AI security expertise, and explore architectural safeguards like restricted permissions and containment frameworks.
The guest stresses that no silver bullet exists, but several concrete steps—hierarchical permissioning, human-in-the-loop when appropriate, and framework-like approaches for controlling agent capabilities—can reduce risk in the near term. They also urge humility about current capabilities, reframing the problem as a frontier of security where ongoing research, governance, and careful product design are essential to prevent the kind of real-world harm that could accompany increasingly capable AI agents. Ultimately, the episode leaves listeners with a call to rethink deployment practices, cultivate interdisciplinary security talent, and pursue education and dialogue as the core tools for safer AI innovation.
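To make the hierarchical-permissioning idea above concrete, here is a minimal sketch of gating an agent's tool calls behind operator-granted privilege levels. This is an illustration only, not code from the episode or any product: the `Tool` and `Agent` classes, the privilege numbers, and the example tools (`read_docs`, `send_email`) are all hypothetical.

```python
# Sketch: an agent may only invoke tools at or below its operator-granted
# privilege level; anything above that is refused before execution.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    required_level: int            # minimum privilege needed to invoke
    fn: Callable[[str], str]

@dataclass
class Agent:
    privilege_level: int           # set by the operator, never by the model
    tools: Dict[str, Tool] = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def call(self, name: str, arg: str) -> str:
        tool = self.tools[name]
        if self.privilege_level < tool.required_level:
            # The check lives outside the model: a coerced prompt cannot
            # raise the agent's own privilege.
            raise PermissionError(f"agent lacks privilege for {name!r}")
        return tool.fn(arg)

agent = Agent(privilege_level=1)
agent.register(Tool("read_docs", required_level=1, fn=lambda q: f"docs for {q}"))
agent.register(Tool("send_email", required_level=3, fn=lambda body: "sent"))

print(agent.call("read_docs", "billing"))   # allowed: level 1 >= 1
try:
    agent.call("send_email", "hello")       # denied: level 1 < 3
except PermissionError as exc:
    print("blocked:", exc)
```

The point of the sketch is the placement of the check: the permission gate runs in ordinary code before the tool executes, so no prompt injection can talk the agent past it — escalation requires the human operator to construct a new agent with a higher level.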

Generative Now

Anu Atluru: Will AI Change How We Use Social Utilities?
Guests: Anu Atluru
reSee.it Podcast Summary
Generative Now explores how AI redefines social tools. Atluru argues that the next wave should be seen as social utilities—tools you use with people without building a full graph of connections. She contrasts this with social networks, which depend on existing relationships, and with single-player utilities that work alone. Partiful is cited as a leading example: an invite-and-engagement flow with RSVPs and comments that doesn’t require a prebuilt network. Slang, her product studio, builds these social utilities rather than traditional broadcast platforms. The conversation covers how such tools can evolve into networks over time, as data and mutuals emerge from events. They discuss choosing a wedge for a startup—staying a utility, becoming a network, or occupying a middle ground where competition grows. AI is positioned as a lever, not a magic wand. The discussion covers how AI boosts productivity, enabling lean teams to build social utilities quickly, and how this affects subscription economics, pricing, and payments. They debate centralized versus decentralized models, noting privacy and creator ownership as arguments for decentralization, but consumer demand for convenience often favors centralized platforms. They touch on TikTok pressures, Substack’s paid content, and the appeal of ad-free experiences. They also examine taste in an era of abundant AI-generated content, arguing that taste is discernment, not mere preference, and cautioning against letting the term become a buzzword. They reflect on AI powering utilities in the background while preserving human judgment at the design core. Turning to health, Atluru describes AI aiding medicine—especially for rare diseases and upstream research—while clinicians maintain relational care and integration with the system. Scribes and workflow automation could reclaim clinician time, but broad adoption requires trusted products and governance.
They discuss a future where many software creators are nonprofessionals, driven by AI to build end-to-end, publish-and-distribute ecosystems, and where competition centers on who can enable high-quality, scalable tools. They consider the possibility of an Instagram-for-software but argue that consumption may look more like discovery-plus-downloading, with code embedded in experiences rather than standalone apps. The conversation closes with anticipation for a new era of software authorship and questions about value, work, and the human touch.

a16z Podcast

Health Tech Founders: The Future of Care Is Personalized, Proactive—and AI-Powered
Guests: Jonathan Swerdlin, Daniel Cahn
reSee.it Podcast Summary
Most AI experiences focus on digital activity, but there's a need to integrate biology into the human experience. Over 54% of individuals with mental illness lack care. AI can enhance healthcare by providing clearer signals during doctor visits, allowing professionals to focus on their expertise. Founders Jonathan Swerdlin and Daniel Cahn emphasize the importance of consumer control over health. Swerdlin's journey began with personal health challenges, while Cahn highlights the mental health crisis among college students. Both companies aim to improve access to care through AI, with Swerdlin's Function Health focusing on health data and Cahn's Ash providing mental health support. They stress the importance of trust and user experience, aiming for meaningful engagement rather than daily use, and recognize AI's potential to augment rather than replace human connection in therapy.