TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
I asked about AI, and he mentioned that the public only sees a fraction of its capabilities. Most of the powerful technology is kept under wraps, which is concerning. For instance, BlackRock uses an AI called Aladdin for forecasting, developed over several years. This model outperforms all other software and human predictions.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 relays an audience question about whether the ADL has considered hiring people to counter-march, particularly people of diverse ethnicities, so that marches do not go unopposed on social media and in publicity. Speaker 1 responds: It’s important to “go where the puck is going” and not just to where it is. Since October 7, resources have been redirected toward LLMs and generative AI. He asks how many in the room used ChatGPT in the last week, noting that ChatGPT has over a billion users and, having existed for only about two and a half years, is already ground truth for vast numbers of people. While marching in the streets is one approach, he emphasizes building technology to train LLMs more effectively and working with leading AI companies. He cites collaborations with OpenAI, Alphabet, Anthropic, Meta, and Microsoft, and says they are in conversations with Alibaba to train its LLM, noting that Chinese AI models are profound, potent, cost-effective, and spreading. He reiterates that marching in the streets is only one option; the focus is on going where the puck is going by investing in Wikipedia and LLMs, and changing the game before it changes us.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker reframes computers as AI factories that produce tokens. These AI factories should be used for three fundamental things, the first being to train the next frontier model so you can build the best AI and get to market first; the goal is to train it as fast as possible. On performance, Rubin is described as a 4x leap over Blackwell, meaning a training run that would have taken four months can be completed in one.

Video Saved From X

reSee.it Video Transcript AI Summary
So if you were to ask what's the one most important AI technology to pay attention to, I would say it's agentic AI. The term AI agents has become so widely used by technical and non-technical people that it has become a bit of a hype term. The way most of us use large language models today is with what's sometimes called zero-shot prompting. Here's what an agentic workflow looks like: 'To generate an essay, ask an AI to first write an essay outline, then ask it: do you need to do some web research? If so, let's download some webpages and put them into the context of the large language model.' 'Then let's write the first draft, then read the first draft and critique it, and revise the draft, and so on.' Going around this loop over and over takes longer, but it produces a much better work output.
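A minimal sketch of the agentic loop described above (outline, optional web research, draft, then critique and revise), assuming hypothetical `llm()` and `web_search()` helpers that stand in for whatever chat-completion API and retrieval tool are in use; it illustrates the pattern rather than any particular product's implementation.

```python
# Minimal sketch of the agentic essay workflow: outline -> optional research ->
# draft -> critique/revise loop. `llm` and `web_search` are hypothetical
# placeholders for a chat-completion API and a retrieval tool.

def llm(prompt: str) -> str:
    """Placeholder: call your chat-completion API and return its text reply."""
    raise NotImplementedError

def web_search(query: str) -> str:
    """Placeholder: fetch and return text from relevant webpages."""
    raise NotImplementedError

def write_essay(topic: str, rounds: int = 3) -> str:
    outline = llm(f"Write an essay outline on: {topic}")

    # Let the model decide whether it needs research before drafting.
    decision = llm(
        f"Outline:\n{outline}\n\nDo you need web research to write this essay? Answer yes or no."
    )
    context = web_search(topic) if decision.strip().lower().startswith("yes") else ""

    draft = llm(
        f"Context:\n{context}\n\nOutline:\n{outline}\n\nWrite a first draft of the essay."
    )

    # Critique-and-revise loop: slower than zero-shot prompting, but better output.
    for _ in range(rounds):
        critique = llm(f"Critique this draft and list concrete improvements:\n{draft}")
        draft = llm(f"Draft:\n{draft}\n\nCritique:\n{critique}\n\nRevise the draft accordingly.")
    return draft
```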

20VC

David Luan: Why Nvidia Will Enter the Model Space & Models Will Enter the Chip Space | E1169
Guests: David Luan
reSee.it Podcast Summary
OpenAI realized, before basically everybody but DeepMind, that the next phase of AI after the Transformer would focus on solving a major unsolved scientific problem rather than writing papers. The second path to boosting model performance is only starting to be tapped and will demand vast compute; because of that, he is not worried about diminishing returns to compute: 'Every tier one cloud provider existentially needs to win here.' Harry describes Google Brain’s era (2012–2018), when bottom-up research produced the Transformer, diffusion models, and other breakthroughs. Transformers became a universal model, replacing task-specific architectures. GPT-2 showed early capabilities; GPT-3 with instruction tuning accelerated adoption, but consumer virality required packaging for non-developers. OpenAI then built teams around solving real-world problems, not just publishing papers. On scaling, the view shifts from base-model size to data, tooling, and environments. There are two scaling levers: enlarging the base model with more data and GPUs, and enabling smarter behavior via interactive environments that allow experimentation. Memory remains a challenge; Gemini-like context lengths are huge, but long-term memory requires end-to-end product design. Business-wise, the race hinges on who controls the model layer and the chips. Nvidia, Google TPUs, and in-house accelerators shape costs; Apple may dominate edge-running privacy tasks. The shift to agents over traditional RPA challenges incumbents’ value chains, with a co-pilot model likely to become the dominant work tool. Regulation and data access remain contentious, but consolidation among frontier-model players is likely.

Moonshots With Peter Diamandis

Why We Need New AI Benchmarks, Which Industries Survive AI, and Recursive Learning Timelines | #218
reSee.it Podcast Summary
In this Moonshots episode, the host and guest imagine a future where artificial intelligence is not a peripheral upgrade but a core operating system for every business. They argue that companies should pursue targeted, rapid AI experiments rather than waiting for perfect, organization-wide implementations. The dialogue underscores that AI will transform some functions far faster than others, with strong implications for knowledge work, documentation, and decision support. A central theme is data readiness: clean, well-structured data forms the foundation, while fragmented or low-fidelity data can doom initiatives before they start. The guests present a practical playbook for boards and executives: identify two to three high-impact use cases, pursue fast prototyping with rigorous validation, and measure outcomes against real operational KPIs. They caution against “thousand flowers bloom” strategies that lack governance, recommending instead a focused, edge-driven approach led by operational leaders who own the metrics. The conversation also tackles organizational design, arguing that AI initiatives should reside outside the traditional IT function and be steered by proven operators with explicit performance targets, to avoid turning projects into science fairs. They examine the evolving role of human judgment in AI deployments, noting that while automation will handle many repetitive tasks, human input remains essential for complex decisions, nuanced contexts, and domains with limited precedent data. Real-world use cases span optimizing healthcare workflows, supporting underwriting and legal processes with calibrated baselines, and enabling advanced analytics for sports, logistics, and defense-related applications. A recurring thread is the tension between generic models and enterprise-specific benchmarks: the panel predicts a boom in narrow, task-specific evaluations tailored to each organization, arguing these bespoke benchmarks will drive trust and measurable performance. The episode closes with a forward-looking view: as models grow more capable, enterprises will increasingly rely on multi-agent systems, multimodal interfaces, and simulated environments to pilot and scale AI, while protecting sensitive, proprietary data and maintaining essential human oversight where needed. The discussion also highlights how AI-native startups and AI-enabled incumbents will compete for distribution and execution parity. Success will hinge less on grand plans and more on disciplined execution: early pilots with clear success criteria, willingness to rent or partner when needed, and a relentless focus on data quality and governance. As the timeline accelerates toward 2026 and beyond, they foresee organizations using specialized agents for discrete tasks, coordinating them with larger language models, and relying on digital twins and RL-enabled environments to test and refine strategies before production rollouts. This pragmatic, experiment-first mindset aims to reduce time-to-value, shrink risk, and accelerate adoption across industries.

20VC

Jeff Seibert: Why OpenAI Will Become an Infrastructure Play | E1085
Guests: Jeff Seibert
reSee.it Podcast Summary
Harry Stebbings and Jeff Seibert map an AI landscape where OpenAI may become infrastructure like AWS and Apple could enable on-device models with custom silicon. They warn that Google appears most vulnerable if AI undercuts its core search business, while noting Apple's silicon advantage could yield extraordinary performance. The Digits founder story and its pivots frame their ethos: from Crashlytics’ real-time analytics to a bookkeeping-oriented product, then to real-time AI-driven accounting. They emphasize data quality, the need for a full year of runway, and making a decisive pivot—often after testing 2–3 experiments and aligning the leadership team. Leadership discussions cover speed, decision discipline, and the realities of management: most managers aren’t intentional, so founders must be convinced and move fast. They cite weekly sprints, anchors and breezes, and the value of celebrating small wins while avoiding complacency. Disagree-and-commit is essential when steering through pivots. On the tech front they discuss the AI ecosystem: base models, fine-tuning, data quality over data volume, and the risk of startups being ‘sherlocked’. They debate enterprise data control, on-premise vs cloud, the pace of adoption, and how Apple, Google, and OpenAI are positioned. They foresee commoditization of AI and potential edge-device breakthroughs. Ventures and markets frame a closing arc: most startups fail or end up worth a fraction of their peak; secondaries balance liquidity; runway governs pivots; and the future of AI will be about meaningful, problem-driven platforms rather than thin wrappers. They advocate customer focus, rapid experimentation, and disciplined, decisive leadership.

20VC

Aidan Gomez: What No One Understands About Foundation Models | E1191
Guests: Aidan Gomez
reSee.it Podcast Summary
The reality of the matter is there's no market for last year's model. If you throw more compute at the model, if you make the model bigger, it'll get better. There will be multiple models—verticalized and horizontal—and consolidation is coming. It's dangerous when you make yourself a subsidiary of your cloud provider. I grew up in rural Ontario. We couldn't get internet; dial-up lasted for years after high-speed arrived. That early hardship fueled a fascination with tech, coding, and gaming that taught resilience. On the scaling question, 'the single biggest rate limiter that we have today' is not just more compute but smarter data and algorithms. There will be both large general models and smaller focused ones. The pattern is to 'grab, you know, an expensive big model, prototype with it, prove that it can be done, and then distill that into an efficient, focused model at the specific thing they care about.' 'The major gains that we've seen in the open-source space have come from data improvements'—higher-quality data and synthetic data. We need to 'let them think and work through problems' and even 'let them fail.' 'Private deployments, like inside their VPC or on-prem,' are essential because data stays on their hardware. Enterprises are sprinting toward production, focusing on employee augmentation and productivity. The hype around 'agents' is justified; they could transform workflows, but the value will come from human–machine collaboration. Robotics is seen as entering 'the era of big breakthroughs' once costs fall. Beyond models, the goal is 'driving productivity for the world and making humans more effective,' pushing growth over displacement.
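A minimal sketch, under stated assumptions, of the prototype-then-distill pattern quoted above: use a big frontier model to prove the task and generate labeled examples, then fine-tune a smaller, focused model on those outputs. The `big_model` and `finetune_small_model` helpers and the classification task are hypothetical placeholders, not any lab's actual pipeline.

```python
# Sketch of the prototype-then-distill pattern: prove the task with a large
# model, then train a cheap, focused model on its outputs. All helpers and the
# example task are hypothetical placeholders.

def big_model(prompt: str) -> str:
    """Placeholder: call the large, expensive frontier model."""
    raise NotImplementedError

def finetune_small_model(examples: list[tuple[str, str]]):
    """Placeholder: fine-tune a small model on (input, target) pairs."""
    raise NotImplementedError

def prototype_then_distill(task_inputs: list[str]):
    # 1. Prototype: show the big model can handle the focused task,
    #    and capture its outputs as synthetic training data.
    examples = [
        (text, big_model(f"Classify this support ticket by urgency: {text}"))
        for text in task_inputs
    ]
    # 2. Distill: train an efficient, task-specific model on those examples.
    return finetune_small_model(examples)
```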

Possible Podcast

Notion’s Founder Deleted 3 Years of Work. Here’s Why.
reSee.it Podcast Summary
The episode centers on Notion founder Ivan Zhao’s experience rebuilding Notion from scratch after scrapping three years of work, and what that decision reveals about designing organizations in an era of AI. Zhao describes a radical approach to software as a material to be mastered, drawing a throughline from historical computing to present-day AI, and argues that the real opportunity lies in designing teams and tools that operate as a cohesive system rather than as isolated products. The conversation emphasizes Notion’s evolution into an AI-first company, not by replacing humans but by creating building blocks—text, databases, and interfaces—that can be composed and extended with language models. Zhao discusses how the GPT-4 class of models changed the timing and expectations around AI adoption, recounting how his team hurried to launch a Notion prototype just before the ChatGPT moment and later re-architected their core AI layer multiple times to keep pace with rapidly evolving capabilities. The dialogue shifts to a broader question: how do organizations scale when AI capabilities scale even faster? Zhao likens language models to steel beams for organizations, enabling throughput growth without linear headcount increases, and he envisions AI-driven coordination as the new skeleton that lets large groups align information, make decisions, and act with greater speed. A recurring theme is the human element—how to maintain human scale, culture, and agency in a world of endless computational mind power. The discussion revisits multimodality, the shift from traditional writing to spoken and AI-assisted documentation, and the idea that the most valuable strategic work will increasingly involve taste, judgment, and direction rather than merely execution. Finally, the conversation explores social and ethical dimensions: how AI affirms or reshapes human agency, the importance of walkable, human-centered cities for future work, and the responsibility of leaders to keep human values at the core as technology densifies networks and reorganizes work. The episode ends on an optimistic note about bootstrapping an intellectual industrial revolution, while urging careful attention to what should endure as our tools become more capable.

20VC

a16z, Anish Acharya: Is SaaS Dead in a World of AI? | Who Wins the Dev Market: Cursor or Claude Code
Guests: Anish Acharya
reSee.it Podcast Summary
The episode centers on a conversation about how artificial intelligence and large-scale software ecosystems are reshaping enterprise software, venture investing, and the competitive landscape for apps built on AI models. The guests challenge the conventional wisdom that software and SaaS will simply scale by replacing human processes, arguing that many enterprise workloads will not be rewritten from scratch, and that the value will accrue in layers that manage multiple models and workflows. They discuss the shifting dynamics of switching costs in large ERP and CRM environments, the role of coding agents and apps that orchestrate multiple models, and how incumbents can still win by deepening distribution and adding niche, high-surface-area features rather than purely copying incumbents’ capabilities. The dialogue also delves into where durable competitive advantages will come from in an AI-enabled world, including data networks, product moats, and the importance of real, live data. The speakers touch on market structure, pricing power after the ChatGPT wave, and the idea that power users and selective adoption may drive new growth and profitability. Finally, the conversation turns to investing strategy in this period of rapid AI-driven transformation, emphasizing founder realism, the need to try products today, and how to assess markets, teams, and funding tactics in a landscape where traditional margins and timelines are evolving. The host and guest also reflect on personal experiences, including lessons from past investments, how to structure partnerships with founders, and the balance between speed, risk, and long-term value creation as they navigate a frontier where models, apps, and human guidance intersect in new, sometimes surprising ways.

Sourcery

Thomas Laffont, Coatue - Anthropic, Citrini Paper, AI Volatility & Next Mag 7
Guests: Thomas Laffont
reSee.it Podcast Summary
In this episode, Thomas Laffont discusses the rapid adoption and impact of AI tools across large and late-stage private companies, emphasizing that board-level visibility into AI spend has surged and that executives expect AI-forward strategies to outpace the market. He notes that Claude and other AI platforms are shifting where value is created, and that the breadth of innovation from this group of firms could lead to public listings within the next couple of years. The conversation also covers how public and private market dynamics interact as the Magnificent Seven era evolves, highlighting that investors are weighing growth, valuation multiples, and the potential for AI to rewrite core business models. Laffont argues that volatility surrounding AI developments should be seen as a healthy signal, prompting governments, regulators, and companies to engage with the technology and plan for various scenarios, including shifting margins and terminal value concerns. Throughout the talk, he reflects on risk management as a core pillar of generational investing, recounting his firm’s long tenure and the importance of discipline in navigating big bets on AI, hardware, and software ecosystems. He also describes how his team blends creative, big-idea investing with rigorous risk controls, drawing on experiences with Nvidia, Anthropic, and other transformative assets. The discussion touches on tooling, productivity, and the evolving role of engineers as AI augments human work, with a focus on how teams can deploy AI to expand TAM and create new business models without reducing headcount unnecessarily.

The Koerner Office

How To Start a $10K/Month AI Automation Agency (No Code)
reSee.it Podcast Summary
The episode centers on Lindy, a no‑code platform that lets users build AI agents to run conversations, automate tasks, and manage personal and business workflows. Flo from Lindy explains that AI agents are already practical and profitable, citing a creator who’s hitting around $10,000 a month with a Lindy‑powered agency. The discussion distinguishes AI agents from simple automations: agents have memory, context, and the ability to handle open‑ended decisions, especially in conversations, whereas automations are more linear and task‑oriented. The host and Flo walk through practical use cases from sales and customer support to personal assistants, showing how agents can work across channels like email, SMS, WhatsApp, and phone calls. The conversation delves into how Lindy operates: an agent is fundamentally an LLM at the core, with a memory and context management that allow it to recall past interactions and adapt to evolving instructions. They explain how context windows currently constrain all LLMs, yet modern models and retrieval augmentation mitigate limits by pulling in external knowledge bases, emails, calendars, and CRM data. The pair explores how to deploy agents in real‑world scenarios—from lead generation and lead enrichment to scheduling, meeting preparation, and post‑meeting follow‑ups—demonstrating the depth and reliability of automated executive assistance. A substantial portion is devoted to the advantages and potential challenges of AI voice agents, including the reality that some interactions still benefit from a human touch in complex, high‑value conversations. They discuss when to disclose that an interaction is AI, the value of speed versus personalization, and industry suitability, noting that on‑the‑go professionals (plumbers, field reps, busy restaurateurs) often benefit most from voice agents. The episode also showcases “deep research” workflows, where agents summarize and compare multiple interviews or sources, offering a scalable way to distill insights for podcasts, recruiting, or corporate strategy. The show ends with practical tips for building an agency on Lindy, emphasizing templates and flows, and highlighting how an entrepreneur used content and outreach to attract clients. They touch on privacy considerations, account scalability, and future features like team collaboration and desktop integration. The underlying message is clear: AI agents are not a distant future—they’re being used today to save time, generate revenue, and transform how teams communicate, sell, and operate.
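A minimal sketch of the retrieval-augmentation idea described above, in which an agent keeps knowledge (emails, calendars, CRM records, past conversations) outside the prompt and pulls in only the most relevant snippets to work around context-window limits. The `embed` and `llm` helpers are hypothetical stand-ins for whatever embedding and chat APIs a platform like Lindy uses.

```python
# Sketch of retrieval augmentation: store knowledge externally, embed it, and
# pull only the top-k relevant snippets into the model's context window.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: call an embedding API and return a vector."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Placeholder: call a chat-completion API and return its text reply."""
    raise NotImplementedError

class KnowledgeBase:
    def __init__(self) -> None:
        self.items: list[tuple[str, np.ndarray]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def search(self, query: str, k: int = 5) -> list[str]:
        q = embed(query)
        # Rank stored snippets by cosine similarity to the query.
        scored = sorted(
            self.items,
            key=lambda item: float(
                np.dot(item[1], q) / (np.linalg.norm(item[1]) * np.linalg.norm(q))
            ),
            reverse=True,
        )
        return [text for text, _ in scored[:k]]

def answer(kb: KnowledgeBase, question: str) -> str:
    # Only the retrieved snippets enter the context window, not the whole store.
    context = "\n".join(kb.search(question))
    return llm(f"Context:\n{context}\n\nQuestion: {question}")
```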

Possible Podcast

The SECRET to scaling your business
reSee.it Podcast Summary
AI agents listening in on every professional meeting may seem like science fiction, but it is becoming practical. In a live session, Reid Hoffman asks founders to explain how they misread scaling in an era of rapid AI leverage. The first question focuses on misconceptions about growing a company quickly, and the answer emphasizes scale product-market fit instead of simply hiring more people. Scaling is not merely adding fuel; it requires proving the fit while expanding, and deciding how the business model will evolve. Blitzscaling is risky when the probability of scale product-market fit is uncertain, and Hoffman names Uber, Airbnb, and the early days of Facebook as examples. The discussion then turns to how AI changes scale decisions, including whether model size truly matters, the rise of open-source models, and how multimodal options create competition among large providers. Teams must stay nimble, adjusting licenses and strategies as models evolve, while balancing network effects that can slow or speed adoption. The talk returns to concrete loops where AI can serve front-line customer interactions, sales, and enterprise workflows, all while monitoring the human factors that drive deployment. Large-scale adoption will depend on clear value.

20VC

Noam Shazeer: How We Spent $2M to Train a Single AI Model and Grew Character.ai to 20M Users | E1055
Guests: Noam Shazeer
reSee.it Podcast Summary
Noam Shazeer, co-founder and CEO of Character.ai, calls it a full-stack AI computing platform giving people access to their own flexible super intelligence. The mission is 'a billion users inventing a billion use cases,' with examples like 'I'm talking to a video game character who's now my new therapist, and this makes me feel better.' He contrasts a direct-to-consumer approach with a traditional B2B path, citing Google's lesson that general tech should launch to billions. He explains language modeling as 'guess what the next word' with scalable neural models. The biggest challenge is making a system that is both very general and usable: 'make it very general, and make it usable.' Privacy matters: 'we are careful to not compromise anyone's privacy,' and user data helps improve the product. He also notes an ecosystem of open and closed approaches and that startups often move faster than giants.

The Koerner Office

The Most Useful AI Skill You Can Learn Today
reSee.it Podcast Summary
The episode centers on practical, non-technical paths to profit with AI tools. The host emphasizes hands-on experimentation with no-code automation platforms like Gumloop, n8n, and AnyMake, urging listeners to build at least one automation and to leverage free trials, because learning by doing is more valuable than theory. He stresses that you don’t need to be a coder to create useful workflows, and suggests starting with small, personal automations that improve daily life while gradually understanding how back-end data, databases, and front-end interfaces connect. A recurring theme is choosing the right tools for the task and being flexible about AI models. OpenAI tends to be favored for user-facing, personality-rich applications, while Claude excels at coding, agents, and more complex tasks, with Gemini offering cost-effective versatility. The conversation also covers evaluating models, keeping an eye on evolving capabilities, and recognizing that the landscape shifts quickly as new models and features launch. The speaker stresses industry immersion as a gold mine for opportunities: identify real pain points in your own field, collaborate with domain experts, and build wrappers or automations that address those problems rather than chasing hype in the tech bubble. Another major thread is the importance of product design and user experience as a moat. Beautiful UX and thoughtful interfaces can differentiate products much more than mere functionality, and the host uses analogies like Jira versus Linear to illustrate how design quality drives adoption. He also advocates for “vibe coding” and pairing non-technical founders with developers to translate ideas into usable apps. Finally, the discussion touches the value of community, ongoing learning, and the mindset shift required to thrive in an AI-enabled future, encouraging listeners to keep experimenting, iterate, and have fun while they learn.

All In Podcast

Debt Spiral or NEW Golden Age? Super Bowl Insider Trading, Booming Token Budgets, Ferrari's New EV
reSee.it Podcast Summary
The episode centers on a rapid evolution in AI as a driver of work, value creation, and enterprise strategy. The hosts discuss a Harvard Business Review study showing that AI tools increase throughput and scope at work, raising productivity while also elevating stress and burnout. The conversation emphasizes a shift from task-based to purpose-based work, with early adopters of AI—“AI natives”—likely to demonstrate outsized value to employers, cutting timelines from days to hours and turning AI-assisted tasks into high-value outcomes. They explore how bottom-up adoption of consumerized AI within organizations can outpace traditional top-down transformation efforts, potentially accelerating enterprise-wide AI deployment through replicants, agents, and orchestration platforms. The group also probes the practical constraints of using AI in business, including data security and confidentiality, the potential need for on-prem solutions versus public-cloud usage, and the economic trade-offs of private provisioned networks as AI-driven efficiency pressures rise. Across these points, the discussion contends that the current wave is less about replacing knowledge workers and more about augmenting them, and it examines how token budgets, cost per task, and the productivity delta will shape compensation, hiring, and organizational design in the near term. The conversation then broadens to prediction markets and real-world use at the Super Bowl, debating insider information, regulation, and societal impact as such platforms scale, while balancing the public-interest value of faster truth with the risk of manipulation. The hosts pivot to macroeconomics, evaluating the Congressional Budget Office’s debt trajectory, debt-to-GDP concerns, and the potential consequences of higher interest costs and entitlements funding. They underscore the possibility of a “golden age” scenario driven by AI-related capital expenditure, innovation, and a booming tech economy, while acknowledging the structural risks of rising deficits if growth does not accelerate. The episode closes with a digest of consumer tech and automotive trends, including Ferrari’s forthcoming all-electric hypercar and broader shifts in mobility and autonomy, which sit against a backdrop of a larger productivity boom that could reshape labor markets and consumer behavior for years to come.

The Koerner Office

Build Your Next Business With This Viral AI Tool
reSee.it Podcast Summary
The episode centers on Gumloop, an automation platform described as AI-first, drag-and-drop tooling that lets non-engineers build powerful AI workflows. Max Brodeur-Urbas explains how Gumloop enables users to create multistep automations for tasks like lead enrichment, customer support analysis, and outbound outreach, effectively replacing large chunks of manual work with scalable “flows.” He positions Gumloop as the next Zapier for the AI era, emphasizing that it expands what is possible with automation rather than just replacing existing tools. A core theme is the distinction between traditional automation (Zapier-style) and AI-powered workflows. Gumloop’s strength lies in combining AI reasoning with programmable blocks to perform complex, data-rich tasks—such as researching a lead, drafting personalized emails, summarizing thousands of chat messages, and generating research reports—without requiring engineering resources. The co-founder notes the product’s philosophy of measured agent capabilities, focusing on reliable, auditable steps rather than fully autonomous agents. The conversation delves into practical use cases and pricing dynamics, highlighting a diverse customer base from large enterprises like Instacart to small businesses. Common patterns include lead scoring, content generation, CRM enrichment, and programmatic SEO. The show explores how Gumloop is used to build agencies or “experts” who construct custom workflows for clients, and discusses the upcoming co-pilot feature intended to lower the learning curve and let users go from idea to running workflow in minutes. Towards the end, Max discusses the future roadmap and business strategy, emphasizing his belief that AI will catalyze productivity at scale. He mentions an upcoming marketplace for expert flows, privacy considerations around sharing credentials, and the potential for white-labeling Gumloop. The dialogue closes with reflections on model selection for different tasks and the value of treating AI like a capable employee who operates within clearly defined steps.

a16z Podcast

Why NOW is the Golden Era to build AI apps.
reSee.it Podcast Summary
The episode traces how product cycles have shaped software growth, arguing that we are in a lasting AI era built on prior infrastructure like semiconductors, the internet, cloud, and mobile devices. The speaker emphasizes that AI is not starting from scratch but amplifying what already exists, with smartphones and broadband enabling billions of users to access increasingly capable AI tools. A core observation is that most new revenue in software is now coming from AI at both the application and infrastructure layers, and the pace of progress has accelerated dramatically in a short window. The discussion then delineates three broad themes for AI-enabled investing: first, traditional software going AI native, where incumbents and startups alike rewrite existing categories to embed AI; second, software that essentially replaces labor, a larger potential market where the value is delivered through automation rather than through new products alone; and third, the rise of walled gardens—systems of record powered by proprietary data models and data moats that create defensible advantages. Examples across these themes include wallets and banking platforms that gained strength during the AI shift, ERP and payroll ecosystems that could be enhanced by AI-driven processes, and niche sectors like debt collection and legal services where endpoint workflows become the moat. The guests discuss how defensibility lives in end-to-end workflows and data advantage, not merely in novelty features like voice agents. They compare incumbents’ responses with greenfield opportunities and caution that brownfield moves—simply adding AI to an old product—are harder to scale into durable leadership. The conversation also touches on consumer AI, noting that aggregators of models can outperform single-lab solutions in many markets, and highlights examples where proprietary data, AI-scribe workflows, and domain-specific training deliver premium products. Throughout, the emphasis remains on the strategic value of data, the need for moats, and a conviction that AI will augment rather than annihilate labor, enabling firms to be lazier and richer while driving significant cost savings and revenue growth.

Generative Now

Chris Pedregal + Sam Stephenson: Making Meetings More Effective with Granola
Guests: Chris Pedregal, Sam Stephenson
reSee.it Podcast Summary
Granola co-founders Chris Pedregal and Sam Stephenson built a note-taking AI tool after witnessing how meetings generate tedious follow-up work. The duo met through a shared conviction that AI could reshape tools for thinking, inspired by GPT-3’s instruct version and a fascination with tools for thought. They described three years of exploration, from leaving Google to chase a startup in London to identifying a painful, universal problem: turning meeting conversations into usable, actionable notes and tasks rather than menial aftercare. They designed Granola to sit inside meetings and become a habit. They stressed the app layer versus frontier models: it’s more valuable to build a polished product that leverages the best models than to train one from scratch. They discussed examples like real-time transcription, multi-language support, and retrieval-augmented generation to manage long meeting histories beyond the model’s context window. They described a design philosophy they call the lizard brain approach: keep the interface simple because users operate under stress during back-to-back meetings. The goal is an experience that surfaces what matters from a single meeting and across teams. On business and growth, they described a capital-intensive, long-horizon bet. Revenue comes from enterprise adoption and network effects through shared Granola workspaces, not just AI credits. They acknowledged expensive compute today but expect costs to fall over time, enabling broader use. They contrasted London's talent pool with Silicon Valley's, framing Granola as a Silicon Valley-style startup in a European hub. They emphasized product quality and taste, screening for product thinking in engineers, and balancing rapid iteration with preserving a simple, elegant user experience. Looking ahead, they envision Granola becoming a jetpack for the mind, a workspace for people whose work is conversation, with meeting transcripts, emails, and documents interwoven into a coherent knowledge base. They imagined use cases for venture memos, sales calls, and company reorgs, all powered by context-rich AI. Privacy discussions emerged as they noted Granola does not store audio and users control access to transcripts, signaling norms that will shape adoption. The conversation closed with a reminder that the era of AI-enabled tools is accelerating, and Granola aims to lead with usefulness.

The Koerner Office

99% of Companies Have No Idea How to Use AI (Here's How to Profit)
reSee.it Podcast Summary
The episode centers on the practical, sometimes gritty realities of adopting AI in large organizations, emphasizing that most companies lack even basic tools to leverage AI effectively. The speakers argue that many corporate teams struggle with fundamental tasks like searching the web or applying AI to real workflows, and they challenge listeners to rethink what it means to turn AI into tangible value. A key theme is the idea that AI isn’t just a fad or a toy; it requires disciplined experimentation, rapid prototyping, and a clear plan for how AI can replace or augment specific job tasks. The conversation moves from high-level hype to concrete tactics, illustrating how AI agents can act as rapid testing machines, enabling quick validation of ideas, demand, and pricing. The hosts discuss building knowledge graphs (“KGs”) of data and tools to support ongoing AI work, including locally hosted models to reduce costs and dependencies on third-party inference. They recount hands-on experiments with Claude, Gemini, and Opus models, comparing performance, cost, and practicality, and they stress that the best early leverage is in designing workflows that save executives and teams time—such as automating data gathering, summarizing meetings, and drafting communications. A large portion of the episode is dedicated to a template for creating value: record and transcribe meetings, extract structured insights, and build an archival, queryable system that surfaces actionable follow-ups. The speakers share a candid view of their own ventures, highlighting the importance of clean data, careful data organization, and a taxonomy that makes information retrievable for AI agents. They also discuss go-to-market ideas, from executive education and roundtables to fractional AI leadership, and stress that success comes from understanding clients’ pain points and delivering high-leverage tools rather than flashy, one-off projects. Overall, the episode blends practical engineering detail with strategic business thinking, illustrating how to move from “AI as a toy” to “AI as a disciplined, revenue-generating capability.”
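A minimal sketch of the meeting-archive template described above (record and transcribe, extract structured insights, store them in a queryable archive). The `llm` helper, the JSON schema, and the SQLite layout are illustrative assumptions rather than the hosts' actual stack.

```python
# Sketch of the meeting-value template: transcript -> structured insights ->
# queryable archive. The LLM call and schema are illustrative assumptions.
import json
import sqlite3

def llm(prompt: str) -> str:
    """Placeholder: call a chat-completion API and return its text reply."""
    raise NotImplementedError

def extract_insights(transcript: str) -> dict:
    # Ask the model for structured output so the results stay machine-readable.
    raw = llm(
        "Return JSON with keys 'summary', 'decisions', and 'action_items' "
        f"for this meeting transcript:\n{transcript}"
    )
    return json.loads(raw)

def archive_meeting(db: sqlite3.Connection, meeting_id: str, transcript: str) -> None:
    db.execute(
        "CREATE TABLE IF NOT EXISTS meetings (id TEXT PRIMARY KEY, transcript TEXT, insights TEXT)"
    )
    db.execute(
        "INSERT OR REPLACE INTO meetings VALUES (?, ?, ?)",
        (meeting_id, transcript, json.dumps(extract_insights(transcript))),
    )
    db.commit()

def open_action_items(db: sqlite3.Connection) -> list[str]:
    # Surface actionable follow-ups across every archived meeting.
    items: list[str] = []
    for (insights_json,) in db.execute("SELECT insights FROM meetings"):
        items.extend(json.loads(insights_json).get("action_items", []))
    return items
```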

Uncapped

Bret Taylor on AI and the Future of Software | Ep. 42
Guests: Bret Taylor
reSee.it Podcast Summary
In this episode of Uncapped, the host and Bret Taylor explore how artificial intelligence is reshaping software strategy, incentives, and the core architecture of modern enterprises. They discuss the idea that the traditional “systems of record”—databases and the associated workflows—will coexist with AI agents, but the relative value may shift from the database itself to the agents that operate on top of it. The conversation traces how early software platforms built defensibility through network effects, ecosystems, and high switching costs, and then asks what happens when AI agents can perform many tasks that used to require manual interaction with ERP, CRM, or IT service management systems. Taylor argues that the strength of incumbents may erode as agents become capable of handling onboarding, lead generation, quoting, and other familiar processes, while incumbents still hold some advantages in scale, integration, and existing ecosystems. A central question is whether the role of a system of record will diminish if AI agents handle most tasks invisibly, and how to balance the gravity of the database with the gravity of autonomous agents operating around it. The dialogue suggests that the market will favor platforms and ecosystems that can assemble robust agent networks and offer industrial-grade reliability, especially in regulated industries like healthcare and banking, where compliance and risk management matter deeply. The discussion then moves to pricing models, with a strong emphasis on outcomes-based pricing over token- or input-based schemes. Taylor explains why tying value to measurable business outcomes—such as successful sales conversions or satisfactory customer support—offers a clearer alignment with customer needs than charging by token usage. They also reflect on the practical realities of making AI work at scale, including edge cases in voice and multilingual support, and the need for teams committed to rapid, reliable deployment that can still navigate complex change management. The interview ends on reflections about the future of work in AI-centric software, the potential for smaller, intense teams to win in certain markets, and the importance of combining deep domain knowledge with AI fluency to deliver durable customer value. Throughout, the emphasis remains on building products and partnerships that can move quickly, but with a maturity that matches the demands of large organizations and regulated industries.

Lenny's Podcast

Head of Claude Code: What happens after coding is solved | Boris Cherny
Guests: Boris Cherny
reSee.it Podcast Summary
Boris Cherny discusses a transformative shift in software development driven by Claude Code and the broader AI tooling at Anthropic. He describes a world where code is largely authored by AI, with humans focusing on higher-level design, strategy, and safety—shifting the craft from writing lines of code to shaping problem-solving approaches and tool usage. The conversation covers the launch trajectory of Claude Code, its rapid adoption across organizations, and how it has redefined productivity per engineer. Cherny notes that Claude Code not only writes code but also uses tools, reviews pull requests, and assists in project management, illustrating a broader move toward agentic AI capable of acting within real-world workflows. He emphasizes the importance of latent demand, where user feedback and real-world use reveal new product directions, such as Co-Work and terminal-based interfaces. He explains how early releases and fast feedback loops were essential to discovering and validating latent use cases beyond traditional coding tasks, including automation of mundane administrative work and cross-functional collaboration. The discussion also explores the safety and governance layers that accompany these advances, including observation of model reasoning, evals, sandboxing, and the open-source efforts that aim to balance rapid innovation with responsible deployment. Cherny reflects on personal perspectives, recounting his own background, the inspiration drawn from long time scales and miso making, and the aspirational view that a future where anyone can program is possible, albeit with significant societal and workforce disruption to navigate. The episode closes with practical guidance for builders: embrace generalist thinking, grant engineers broad access to tokens, avoid over-constraining models, race toward general models, and design products around the model’s evolving capabilities rather than forcing the model into rigid workflows. Throughout, the thread remains: incremental experimentation with AI can unlock extraordinary capabilities, while maintaining a strong focus on safety, human oversight, and alignment to responsible outcomes.

Possible Podcast

Does AI really save time?
reSee.it Podcast Summary
The conversation centers on whether AI actually saves time in knowledge work, or simply raises expectations and increases throughput. The hosts discuss a recent Harvard Business Review argument that AI accelerates work pace and volume rather than delivering a straightforward time-saver, noting that more drafts, reviews, and risk checks can follow AI-assisted outputs. They acknowledge the potential for higher quality results and faster turnarounds, but emphasize that the real impact depends on context, task type, and how teams configure AI into their processes. The discussion moves to practical implications: even with faster analysis and decision support, expensive activities like due diligence, contracting, and strategic coordination will still require human judgment and thorough review. They explore scenarios where AI reduces the time for repetitive, high-volume tasks but does not eliminate the need for critical oversight, risk management, and cross-functional alignment. The speakers highlight a core tension between speed and quality, and how competitive dynamics shape how organizations adopt AI—sometimes trading longer, more thorough processes for quicker terms or faster market responses. They also reflect on the broader organizational consequences: meetings and bureaucratic routines persist, but AI can trim unproductive engagement while revealing new forms of collaboration and governance that require ongoing human input. The overall message is that AI acts as a powerful accelerant; its value lies in how individuals and teams recalibrate workflows, incentives, and decision-making in a changing landscape.

Possible Podcast

Possible 109 ParthPt2 NoIntro V3
reSee.it Podcast Summary
The conversation centers on how large organizations are deploying AI, focusing on the gap between declared AI strategies and real-world execution. The speakers describe a “first inning” phase where proposals exist in committees and pilot projects, but actual integration into daily workflows remains limited. They emphasize that the most immediate value from AI comes from language-model–driven tasks that touch everyday communication and coordination, such as meeting transcription, action-item tracking, and surfacing relevant information from business intelligence in real time. They argue that AI’s impact will compound as it moves from isolated pilots to bottom-up changes in how people work, enabling employees to reimagine processes rather than merely automate old ones. They illustrate this with examples from software migrations, translation workflows, and the creation of dashboards from raw data, suggesting that AI can dramatically shorten what used to take weeks into minutes by augmenting human judgment rather than replacing it. The dialogue also explores the role of agents and “coding agents” in accelerating analysis, orchestrating tasks across multiple projects, and enabling new forms of collaboration where a single executive can guide numerous parallel explorations. The participants discuss how to design environments that reward experimentation, share wins, and reduce resistance by normalizing rapid prototyping. They highlight concerns about secrecy around productivity gains and contrast individual acceleration with organizational learning, arguing that scalable adoption hinges on creating common tools, knowledge graphs, and ambient AI that supports decision-making across teams. Throughout, the emphasis is on practical steps—transcribe meetings, automate routine actions, and empower non-technical leaders by partnering with technically adept colleagues to build internal tools that unlock faster, broader problem-solving across the company.

20VC

Turing CEO Jonathan Siddharth: Who Wins in Data Labelling & Why 99% of Knowledge Work Will Disappear
Guests: Jonathan Siddharth
reSee.it Podcast Summary
Jonathan Siddharth argues that the era of simple data labeling is ending and that the real battleground is building scalable research accelerators that generate the right data, for the right workflows, at the right scale. He outlines a shift from training models to take tests to training models to do real work, from chatbots to agentic systems that execute complex, multi-step tasks across organizations, and from generic data to highly processed, domain-specific data. In this frame, Turing positions itself as a data-centric research partner for frontier labs and large enterprises, producing synthetic RL environments across industries, functions, and roles to train agents that can operate in the real world. Siddharth emphasizes that data is now the primary bottleneck: as models get smarter, the need for real-world, high-fidelity data—able to simulate tools, workflows, and private enterprise contexts—becomes essential for building robust agents. The conversation delves into how Turing creates four-dimensional RL matrices spanning industries, functions, roles, and workflows, enabling a huge expansion of knowledge work through agentic AI. He argues that the work is still in the first inning, with a slow, steady takeoff toward AGI rather than a rapid, disruptive jump, and notes that 30 trillion dollars of digital knowledge work is at stake, driving demand from labs and enterprises alike. The host and Siddharth discuss the economics of the space, questioning the role of revenue versus gross merchandise value, the need for on-premise, fine-tuned models for sensitive domains like insurance underwriting, and the importance of data security and secrecy in enterprise deployments. They also explore the future of work: the potential for 100x productivity, a broader entrepreneurial landscape enabled by AI copilots, and the societal implications of widespread access to superintelligence. Throughout, Siddharth stresses the necessity of hands-on leadership, close customer collaboration, and a data-driven feedback loop to close the gap between model capability and real-world performance, with a pragmatic view of regulatory sovereignty and the evolving architecture of AI platforms. Key topics in this episode include: solving data acquisition challenges for agentic AI; RL environments for domain workflows; enterprise adoption with on-prem fine-tuning; data privacy and firewalling; the shift from SaaS to research accelerators; incremental vs rapid AI progress; AI for front-office vs back-office tasks; governance, sovereignty, and government use cases; the future of work and entrepreneurship under AGI; the role of multimodality, tool use, and coding in AGI strategy; market structure with a few winners and the importance of research depth; the balance between proprietary and open models; the impact of corporate culture and leadership on AI initiatives; and practical deployment challenges like first-mile and last-mile “schlep.”