TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
When something becomes a common platform, it becomes open source. This applies to the internet's software infrastructure and has led to faster progress and increased safety. The rapid advancement of AI in the past decade is a result of open research and sharing of code. Open sourcing allows for collaboration and reuse, with common platforms like PyTorch benefiting the entire field. If open source is legislated out of existence due to fears, progress will be significantly slowed down.

20VC

Sam Altman's Masterplan or a Gift to Anthropic? Palantir & Shopify Crush Earnings
reSee.it Podcast Summary
"My big aha is it's like dealing with a deranged madman trying to estimate what the street will do. I spend no time on this. Utterly unknowable. You don't need half your company, and Palantir and Shopify are proving it. Let's look at Shopify for a minute. From peak employee was 2022, 11,600 employees at Shopify. Since then, revenue has grown 91%, pretty impressive for a company at 11 billion revenue. And employees have gone down from 11,600 to 8100, gone down while revenue is up 91%. He's ruthless. Zuck's ruthless. Karp's ruthless. And if you think you're going to win in B2B, if you're not ruthless, you're going to lose. Ready to go." "GPT5 is the top story of the week. Consensus is it's slightly underwhelming. The first experience was underwhelming when it said we had the greatest market crash since the tulip era. If Aaron Levy is running this through Box and saying redline and document comparison and term extraction is materially better, maybe that doesn't make those of us who are using it for therapy excited. If it's materially better at coding and competes with Anthropic, you know, that's six billion of revenue that they lost. So, but I get it. It does feel like it's a worse therapist at the moment, doesn't it?" "Underwhelming is great. We’re now in the grind it out, make it better, build a business stage of life, which I think is a more normalized world. And so there's two things in it. What implicit in that is the statement I don't buy any of this. You know, they're going to keep on getting better exponential takeoff, all that AGI rubbish. I've always assumed it's rubbish. Maybe I'm wrong, but at least right now the evidence shifted a little more in favor of, perhaps not nearly as quickly as you think." "OpenAI going at a big ass pile of revenue that Entropic has. And maybe Entropic overplayed their hand a little bit by kind of bullying Windurf. ... 
the big ass guy in the block is now trying to com, you know, is now another vendor of tokens, significantly cheaper. I'm going to push the hell out of this. That's a really big business comment. It's not as sexy as AGI stuff, but if you're trying to build a business and your Cursor, this is the best damn thing that ever happened, right?" "They shipped the open source products earlier this week. ... moving away from all those models to the single model selector. ... it's time to get business savvy, not just AI is coming savvy."

Conversations (Stripe)

Arthur Mensch (Mistral AI) and John Collison (Stripe) fireside chat | Stripe AI Day—Paris
Guests: Arthur Mensch
reSee.it Podcast Summary
Arthur Mensch explains Mistral's open-core approach: release model weights (an open-source family plus proprietary hosting) to differentiate from closed US players. They see Meta's Llama 2 as an opportunity, since access enables retraining and community improvements, and they expect synergy with open-source progress. A small but high-quality model release is expected within a couple of days. On safety, open weights enable safer moderation: censorship behind APIs hinders control, and strong safety comes from giving end users control over policies via the weights. Hallucinations are addressed by long training, retrieval augmentation, and soon a non-embedding model; the architecture aims for retrievability. France's AI renaissance stems from its math/CS education and tech ecosystem; what is needed is boldness and balanced European regulation focused on auditable documentation rather than fixed thresholds. They do not chase AGI; the aim is to empower enterprises and shorten time-to-value. They train from scratch on a decoder architecture, target on-device inference for small models, and plan multimodal work later, with emphasis on open models solving cost and hallucination.

20VC

Jonathan Ross: DeepSeek Special - How Should OpenAI and the US Government Respond | E1253
reSee.it Podcast Summary
DeepSeek is described as Sputnik 2.0; they spent about $6 million on training, and more on distilling or scraping the OpenAI model. The guests discuss distillation, reinforcement learning from OpenAI data, and a claim that better data quality lets you train with fewer tokens, citing AlphaGo Zero's self-play. They describe an automated reward-model approach and a 'box' method to evaluate output without human gating. Data sovereignty and geopolitics dominate: concerns about CCP access to US data, possible export-law issues, IP blocking of China, and the CCP's incentives. DeepSeek emphasizes an option with 'we store nothing' (no hard drives) so data is 'not going to the CCP'. They discuss sanctions and the idea that open source could complicate efforts by OpenAI and others. The conversation touches on Europe's risk-averse stance and the need for a global response. Business and strategy themes run throughout: the Seven Powers framework, the brand power of OpenAI vs. open source, inference vs. training economics, and Nvidia's role. They predict commoditization of models, a focus on infrastructure, and the likely rise of mixture-of-experts (MoE) architectures and larger parameter counts with sparse compute. They discuss open sourcing as a competitive move, Europe's Station F approach, and the likelihood of continued disruption in the AI arms race, including national-security implications.
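The mixture-of-experts idea mentioned above reduces to a simple routing scheme: a gating network scores every expert, but only the top-k experts actually run, which is where the "sparse compute" saving comes from. A minimal sketch in plain Python, with toy scalar "experts" and a linear gate; all names, weights, and values here are illustrative and not DeepSeek's actual architecture:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by renormalized gate probabilities.
    Only k of the experts are evaluated: the sparse-compute idea."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Toy example: four scalar-output "experts" over a 2-d input.
experts = [lambda x, s=s: s * sum(x) for s in (1.0, 2.0, 3.0, 4.0)]
gate = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [-1.0, -1.0]]
y = moe_forward([1.0, 2.0], experts, gate, k=2)
```

With k=2, the model pays the cost of two experts per input regardless of how many experts exist, which is why parameter counts can grow far faster than per-token compute.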

20VC

Aidan Gomez: What No One Understands About Foundation Models | E1191
Guests: Aidan Gomez
reSee.it Podcast Summary
The reality of the matter is there's no market for last year's model. If you throw more compute at the model, if you make the model bigger, it'll get better. There will be multiple models—verticalized and horizontal—and consolidation is coming. It's dangerous when you make yourself a subsidiary of your cloud provider. I grew up in rural Ontario. We couldn't get internet; dial-up lasted for years after high-speed came. That early hardship fueled a fascination with tech, coding, and gaming that taught resilience. On the scaling question, 'the single biggest rate limiter that we have today' is not just more compute but smarter data and algorithms. There will be both large general models and smaller focused ones. The pattern is to 'grab, you know, an expensive big model, prototype with it, prove that it can be done, and then distill that into an efficient focused model at the specific thing they care about.' 'The major gains that we've seen in the open-source space have come from data improvements'—higher-quality data and synthetic data. We need to 'let them think and work through problems' and even 'let them fail.' 'Private deployments, like inside their VPC, on-prem' are essential as data stays on their hardware. Enterprises are sprinting toward production, focusing on employee augmentation and productivity. The hype around 'agents' is justified; they could transform workflows, but the value will come from human–machine collaboration. Robotics is viewed as 'the era of big breakthroughs' once costs fall. Beyond models, the drive is 'driving productivity for the world and making humans more effective', favoring growth over displacement.
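The prototype-then-distill pattern Gomez describes usually comes down to training the small model against the large model's temperature-softened output distribution rather than against hard labels. A minimal sketch of that distillation objective in plain Python; the logits and temperature value are made up for illustration, not taken from any Cohere system:

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the
    # teacher's relative confidence across near-miss classes.
    m = max(logits)
    exps = [math.exp((l - m) / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy between the teacher's softened distribution and
    the student's: the core objective when compressing an expensive
    'prototype' model into a small focused one."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]    # big model's logits on one input
aligned = [3.8, 1.1, 0.4]    # student that mimics the teacher
diverged = [0.5, 4.0, 1.0]   # student that disagrees

loss_good = distill_loss(teacher, aligned)
loss_bad = distill_loss(teacher, diverged)
```

A student that tracks the teacher's distribution gets a lower loss than one that disagrees, so gradient descent on this objective pulls the small model toward the big model's behavior on the narrow task that matters.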

a16z Podcast

Aaron Levie and Steven Sinofsky on the AI-Worker Future
Guests: Aaron Levie, Steven Sinofsky
reSee.it Podcast Summary
An evolving vision of AI emerges: not a chatty helper, but autonomous agents that run in the background, executing real work for you with minimal intervention. They produce outputs that loop back into themselves, creating a feedback loop that can extend a task far beyond a single prompt. The speakers compare this to the ampersand (&) in Linux shells, a background process that seems like the worst intern yet keeps getting better. The more work these agents perform without human handholding, the more agentic they become, reshaping what we mean by an AI assistant. The core question shifts from form factor to capability: how independently can an agent operate? The conversation notes long-running inference, where outputs are fed back as inputs, and discusses practical limits of containment. A key insight is that real progress will likely come from a system of many specialized agents rather than a single monolithic intelligence. Some agents go deep on a task; others handle orchestration. In this view, work is subdivided into smaller modules, echoing Unix tools and the idea that distributed components can collaborate without one giant brain. Enterprise adoption centers on balancing productivity gains with risk and governance. Hallucinations have declined as models improve, and organizations are learning to verify outputs, especially in coding and writing tasks. Prompting remains essential, with longer, more detailed prompts delivering better results than one-shot commands. A trend toward subagents tied to microservices emerges, with each agent owning a specific component of a codebase or workflow. People start to manage portfolios of agents, turning engineers into managers of agents and rethinking how work flows through teams. Beyond coding, the discussion anticipates a platform shift that could spawn hundreds of specialized agents across verticals. 
The fear that large models will swallow entire domains fades as experts build and orchestrate domain-specific agents, sometimes offered by third parties. The payoff is new efficiencies, new roles, and fresh startup opportunities, as workflows are redesigned around agent-enabled productivity. As in past platform shifts, the move may redefine what professionals produce and how they organize their work, promising exponential gains in enterprise productivity over time.
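The background-agent loop the speakers describe, where each output is fed back in as the next input until the work passes a check, can be sketched in a few lines. Here `step` and `is_done` are hypothetical stand-ins for a model call and a verifier; nothing about them comes from any real agent framework:

```python
def run_agent(task, step, is_done, max_turns=10):
    """Feed each output back in as the next input (the '&'-style
    background loop), stopping when a verifier accepts the result
    or the turn budget runs out."""
    state = task
    history = [state]
    for _ in range(max_turns):
        if is_done(state):
            break
        state = step(state)      # the output becomes the next input
        history.append(state)
    return state, history

# Toy stand-ins: "refine" a draft by appending review passes.
step = lambda s: s + " +pass"
is_done = lambda s: s.count("+pass") >= 3

final, history = run_agent("draft", step, is_done)
```

The `max_turns` budget is the containment knob the conversation alludes to: without it, a loop whose verifier never fires would run indefinitely, which is exactly the governance concern enterprises raise.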

a16z Podcast

Chasing Silicon: The Race for GPUs
Guests: Guido Appenzeller
reSee.it Podcast Summary
Finding compute capacity for applications is a significant challenge for companies, especially with the exponential growth of AI. Founders should consider their hardware needs and explore various providers, as demand for AI hardware currently outstrips supply by a factor of 10. The bottlenecks in chip manufacturing and the complexity of building new fabs hinder rapid production increases. Companies often need to pre-reserve capacity, leading to negotiations with cloud providers for exclusive access. While renting cloud services is generally more feasible for early-stage founders, owning infrastructure may be necessary for larger-scale operations. Differentiated data access can serve as a competitive moat, especially in specialized fields. Open-source models are emerging, but most still lag behind larger proprietary models in performance. As compute costs rise, the trend may shift towards local inference on devices, reducing reliance on cloud services. The AI landscape is evolving, presenting opportunities for new companies and technologies as the ecosystem adapts to these changes. Future discussions will focus on the costs associated with AI compute and its sustainability.

20VC

Aravind Srinivas: Will Foundation Models Commoditise & Diminishing Returns in Model Performance | E1161
Guests: Aravind Srinivas
reSee.it Podcast Summary
Today’s models are just giving you the output. Tomorrow’s models will start with an output, reason, elicit feedback from the world, go back, and improve the reasoning. That is the beginning of a real reasoning era. The biggest beneficiaries of the commoditization of foundation models are the application-layer companies. Aravind describes his accidental entry into AI via an undergrad ML contest, exploring scikit-learn and reinforcement learning. He notes diminishing returns and the central role of data curation in scaling. What makes these models magical is not domain-specific data but general-purpose emergent capabilities. They are trained to predict the next token, yet they show reasoning-like flexibility. 'The magic in these models' emerges from vast, diverse data; the debate about verticalization is not settled—some argue domain specialization helps, others doubt it. Memory and long context remain challenges; some see a Gmail-like storage approach as practical, while infinite context remains elusive. The path forward may depend on how we orchestrate data, prompts, and tools. On the business side, the conversation centers on commoditization, funding, and monetization. 'The second tier models' will be commoditized; OpenAI, Anthropic, and others are valued more for the people who build the models than for the models themselves. Perplexity pursues a mix of advertising, subscriptions, APIs, and enterprise offerings, aiming to scale with a strong product and user base. They view advertising as potentially dominant if they crack the relevance code, while enterprise remains a separate, longer-term path. The 2034 vision is Perplexity as the go-to assistant for facts and knowledge.

20VC

Sarah Tavel: Will Foundation Models Be Commoditised? | E1149
Guests: Sarah Tavel
reSee.it Podcast Summary
Sarah explains that frontier AI models are likely to stay closed-source for now, pushing value to the application layer where startups can capture it. Progress is compute-constrained, making models more expensive and fostering an oligopoly. Benchmark's approach emphasizes partnering with founders and supporting their growth rather than scaling via recruiters. She highlights the importance of the 'why now' in fundraising—a strong catalyst such as AI can create a powerful current that accelerates a company's momentum, while a weak 'why now' leaves founders paddling uphill. On AI's economics, she argues AI is a sustaining technology for incumbents when used as APIs to augment existing workflows, while startups can disrupt by selling the work product rather than per-seat software. The first wave of AI startups has faced distribution challenges; incumbents can bundle improvements, whereas new entrants must own more of the workflow and create workflows hard to replicate. She discusses the open vs. closed model debate, predicting frontier models will be closed-source for now, with open options evolving later. This frame supports the conclusion that incumbents win on integration while startups win on comprehensive end-to-end outcomes. Benchmark's differentiated model centers on equal partnership and deep founder alignment, not a large recruiting machine. They recruit by leveraging founders' success and focus on one or two investments yearly, aiming for durable, independent companies with network effects or moats. They value cohort engagement and early usage signals, evaluating whether a 'why now' is enduring. They confront dilution and capital intensity by arguing that big, capital-intensive AI bets can yield outsized, long-run moats if the founders escape competition. The firm's board approach prioritizes hands-on value creation and critical questions.

20VC

Sam Altman: What Startups Will be Steamrolled by OpenAI & Where is Opportunity | E1223
Guests: Sam Altman
reSee.it Podcast Summary
We believe that we are on a quite steep trajectory of improvement and that the current shortcomings of today's models will just be taken care of by future generations, and I encourage people to be aligned with that. If you are building a business that patches some current small shortcomings, and we do our job right, then that will not be as important in the future. There will be many trillions of dollars of market cap created by using AI to build products and services that were either impossible or quite impractical before. It'll get there for sure. There's clearly a really important place in the ecosystem for open-source models. Reasoning is our current most important area of focus. I think this is what unlocks the next massive leap forward in value created. We will do multimodal work and other features in the models that we think are super important to the ways that people want to use these things.

a16z Podcast

Marc Andreessen's 2026 Outlook: AI Timelines, US vs. China, and The Price of AI
Guests: Marc Andreessen
reSee.it Podcast Summary
Marc Andreessen’s long view on AI paints a landscape of explosive product and revenue growth, yet with a caveat: the current wave is just the opening act of a multi-decade transformation. He argues the shift is bigger than previous revolutions like the internet or microprocessors, driven by affordable, widely accessible AI tools that democratize capabilities and unlock new business models. The conversation focuses on two market realities: rapidly increasing demand and the corresponding push to manage costs, pricing, and capital intensity. He emphasizes a portfolio-based venture approach that bets on multiple strategies in parallel, from big-model to small-model deployments, open-source to proprietary, consumer, and enterprise. The underlying message is that we’re at the dawn of a period where price per unit of intelligence falls precipitously, enabling widespread adoption while sustaining aggressive innovation across a global ecosystem. The discussion then turns to policy, geopolitics, and the competitive chessboard with China. Andreessen stresses that AI is increasingly a geopolitical as well as economic contest, with China closing the AI gap through open-source breakthroughs, state-backed projects, and rapid hardware development. He notes a shift in Washington toward a managed, collaborative stance that recognizes the need for federal leadership to avoid a messy, state-by-state regulatory patchwork that could hobble progress. The guest highlights the risk and opportunity of “two-horse” competition, where the US and China push one another forward, while other nations contribute through diverse models, chips, and ecosystems. The panel also roasts regulatory experiments (and missteps) in various states, contrasts EU regulation with the realities of US innovation, and defends a pragmatic path toward national coherence and protection of startups’ freedom to innovate. 
The final portion situates venture strategy within this macro context, arguing that incumbents and startups will both win in different ways as AI matures. Andreessen describes a future in which a few “god models” sit at the top of a hierarchy, complemented by a cascade of smaller, embedded models that enable ubiquitous deployment. He cites the accelerating cycle of model improvements (for both big and small models) and the growing importance of pricing strategy, suggesting usage-based or value-based models that align incentives with real productivity gains. The conversation also celebrates the vitality of open source as a learning tool and a driver of broad participation, while acknowledging the ongoing push from closed models for continuous, rapid improvement. Overall, the episode is a blueprint for navigating an era of unprecedented AI-enabled opportunity and risk, underscored by a belief that thoughtful policy, resilient capital allocation, and relentless innovation will determine who leads the next wave.

20VC

Tom Hulme: Lessons from a 24x Angel Track Record, 275x on Robinhood & Making Billions on Uber |E1150
Guests: Tom Hulme
reSee.it Podcast Summary
There are basically three types of investors: smart ones who will add value, passive ones who won't get in the way, and those who think they're smart but interfere. I sold a company and decided to try angel investing as an experiment to answer what kind of investor I'd be, whether I'd enjoy it, and whether I'd be any good at it. Growing up in North London and enduring school bullying taught me empathy and gave me a chip on my shoulder. The best founders listen to feedback and market signals, while many founders actually resist feedback. Priced rounds are less common because valuations remain contentious, so convertible notes are often used. VC reporting on TVPI tied to recent rounds can misstate value when public comps are down, and this can delay honest pricing, potentially masking inaccuracy in a portfolio's value. As for returns, the biggest winners tend to be those who tackle hard problems and stay the course for a decade, not momentum plays. Execution matters most; you can't invest on an idea alone—you must know how you'll execute if the idea has true value. We invested $100 million into Stripe in 2020 and extended the Series G round. The discussion shifts to fear versus FOMO, noting fear as a major driver that can wreck industry returns. Clock speed, rapid iteration, and the data you collect matter as much as the concept itself, and fear should not paralyze investment decisions. Cloud providers are likely to become cash cows; incumbents and cloud-scale players will acquire foundational-model companies, extract value in the cloud, and offer some foundation models at little or no cost to drive adoption. The conversation then turns to the future of foundation models, the application layer, and why the right partners matter. It ends by reiterating Stripe's trajectory and the belief that speed and disciplined execution—not novelty alone—drive enduring value.

a16z Podcast

How OpenAI Builds for 800 Million Weekly Users: Model Specialization and Fine-Tuning
Guests: Sherwin Wu
reSee.it Podcast Summary
The episode centers on Sherwin Wu's deep dive into how OpenAI builds for a broad, growing user base while balancing the API as a developer platform with the company's own first-party products. Sherwin emphasizes that the era of a single all-powerful model is giving way to a proliferation of specialized models and tuned variants, driven by the sheer scale of data that companies possess and the potential of reinforcement learning fine-tuning. The discussion delves into why OpenAI has embraced multiple interfaces—the API for developers, ChatGPT as a first-party consumer app, and broader verticals like Codex and Sora—arguing that this diversity is both an operational necessity and a strategic opportunity. The guests unpack how the company's open-source initiatives and openness to other models through Ethos and gpt-oss-type offerings fit into a broader ecosystem strategy intended to accelerate adoption, foster collaboration, and offset disintermediation concerns while ensuring safety and governance across platforms. The conversation also surveys the evolution of product thinking around agents and automation, revealing that agents are viewed not as a single product but as an umbrella concept that can manifest across APIs, CLI tools, coding assistants, and first-party apps. A recurring theme is the tension between enabling broad reach and protecting customers' needs, with a nuanced exploration of how context engineering, tooling, and data access contribute to performance, reliability, and user trust. Throughout, Sherwin reflects on the challenges of building for scale—managing pricing models, infrastructure, and usage at hundreds of millions of users while maintaining developer appeal through robust tooling and predictable economics. 
The interview ends with a forward-looking take on model specialization, the continued role of fine-tuning and RL-based customization, and the importance of a healthy, multi-model ecosystem that supports a wide range of use cases from enterprise workflows to consumer-facing experiences. Topics: OpenAI model proliferation and specialization; fine-tuning and reinforcement learning; first-party apps vs. API and developer platform; open source in AI strategy and ecosystem; agents as a modality and product strategy; pricing and monetization in AI APIs; vertical vs. horizontal AI product layering; RAG, context engineering, and tool integration; world-building and inference infrastructure across multiple modalities; OpenAI governance, safety, and data usage policies; impact of large-scale AI on startups and developers.

20VC

Alex Wang: Why Data Not Compute is the Bottleneck to Foundation Model Performance | E1164
Guests: Alex Wang
reSee.it Podcast Summary
Alex and the host discuss AI's potential as a military asset, arguing AGI could outpace traditional weapons and empower aggressors. The conversation notes the CCP’s ability to drive centralized industrial policy and questions whether a future where China or Russia possesses AGI would allow them to conquer. They explore model performance, noting GPT-4’s era and a current data/compute arms race with NVIDIA’s revenue surging since GPT-4, while large models have not produced a jaw-dropping leap. Three pillars—compute, data, algorithms—shape progress, with a data wall limiting gains. To move beyond emulating the internet, they advocate Frontier data: complex reasoning chains, tool use, and agent-based workflows. The strategy combines enterprise data mining (e.g., 150 PB in JP Morgan vs sub-petabyte internet training) with forward data production and human-guided synthetic data. They discuss roles like AI trainers and the need for data abundance, including longitudinal workplace data and consumer data, to train powerful agents. They describe a hybrid process: autonomous generation of data by AI, guided by human experts to correct factuality and improve coverage across scenarios. On business models and deployment, they argue data strategy can create durable advantages; data access and exclusive data sources may outpace compute or algorithms over time. Enterprises may favor on-prem or closed systems to protect data, while open models remain viable for broader value. Regulation remains a tension, with calls for data pooling in some sectors and careful anonymization in healthcare. They foresee a future where open-source or on-prem solutions coexist with hyperscalers, and where value accrues above the model in apps, services, and data networks. The discussion ends with hiring, leadership, and a pragmatic, 'Navy Seals' approach to building elite teams.

20VC

Sam Altman, Arthur Mensch and more discuss: Which Startups Are Threatened vs Enabled by OpenAI? | E1156
Guests: Sam Altman, Arthur Mensch
reSee.it Podcast Summary
"There will be a small number of providers, just a dozen or something like that, doing models at big scale, and it'll be extremely complex, extremely expensive. The long-term differentiation will not be the base model. Intelligence is just some emergent property of matter or something. The long-term differentiation will be the model that's most personalized to you, that has your whole life context, that plugs into everything else you want to do, that's well integrated into your life. But for now, the curve is just so steep that the right thing for us to focus on is just making that base model better and better." "The technology is commoditizing incredibly quickly." "It's a sustaining innovation." "There's liquidity in the market; you could sell that for a 5x now." "Llama 3, released last week, is already incredible." "The best teacher I ever had was Clay Christensen." "He wrote the Innovator's Solution." "But if you're investing in fundamentals, it's very difficult to invest in something that actually is going to commoditize that quickly."

Generative Now

Semil Shah: Venture Capital Trends in the Age of AI
Guests: Semil Shah
reSee.it Podcast Summary
AI investment today feels like a seismic moment where capital acts as both compass and weapon. Round sizes in AI are growing, with competitive AI Series A rounds often starting around fifty million, while market dynamics bifurcate between early, capital-efficient bets and bigger, infrastructure-heavy bets. Semil Shah argues that the most effective firms use capital to create edges, not just fund ideas, citing Nat Friedman, Daniel Gross, and Elad Gil as founders and investors who were ahead of the curve by backing people and tracking transformer research. He cautions that history rhymes more than it repeats, and that predicting AI's trajectory remains uncertain even as the opportunity looks immense. The Reddit IPO and data-training conversations highlight a possible inflection point for platforms monetizing user-generated content alongside ads. Haystack's decision process centers on the belief that the product of a VC firm is a high-conviction investment decision, guided by a trusted network and founder access rather than flashy accelerators. Shah explains they sometimes make exceptions to rules for AI, defending the idea of larger, direct-series bets and open-minded bets on defense or hardware when the founder and problem align. He stresses that incumbents have distribution and packaging advantages, but startups can outflank them by targeting problems incumbents overlook and by creating leaner, more capital-efficient models. Regulation looms as a factor—FTC/DOJ scrutiny could shape how and when deals happen, and the path to value is uncertain across cycles and political regimes. Looking ahead, Shah envisions thousands of models, with a few becoming dominant in niches while many others remain specialized. He imagines open-source and closed-source forces coexisting, and sees infrastructure costs eventually easing as the field matures. 
The conversation touches on Reddit’s training-data dynamics and the broader question of data as a revenue stream, as well as the ongoing influence of incumbents and talent networks. The takeaway is not a simple forecast but a view of a dynamic, feedback-rich market where capital, founders, and developers sculpt the AI frontier, one relational move at a time.

20VC

The Ultimate AI Roundtable: What Happens Now in AI, Why Google are Vulnerable | E1085
reSee.it Podcast Summary
Foundational models and commoditization dominate the chat. Emad of Stability forecasts only five or six foundation-model companies in 3–5 years: Stability, Nvidia, Google, Microsoft, OpenAI, Meta, with Apple likely among them. Des of Intercom notes that testing OpenAI against competitors shows differences in conversation quality, hallucination resistance, and confidence inference, so commoditization isn't complete yet. Jeff of Digits leans toward strong open-source momentum. Model size and longevity spark debate. Emad says no models used today will still be in use in a year as parameter counts shrink; Yan argues smaller models can run on laptops and still perform well; Chris says models aren't the moat; open-source momentum grows while OpenAI leads. Pricing and business models dominate the rest. Miles envisions selling work with SLA-based outcomes instead of software; Des expects consumption pricing for AI services; some argue co-pilots are an incumbents' tactic while startups should aim for orthogonal approaches; Apple's edge AI and on-device LLMs showcase a future where devices run the AI.

20VC

Christian Kleinerman: Do OpenAI and Anthropic Have a Sustaining Moat? Who Wins the AI Wars? | E1063
reSee.it Podcast Summary
It's not a mass firing happening next week. It's more incremental productivity boosts happening over the next 6, 12, 24 months, and then over time you decide whether you take those productivity gains and turn them into fewer employees versus more productively deployed employees. I would say that for sure there is hype. There is fundamental innovation there. I think it's comparable. I think it's on the scale of the internet. This is a real shot in the arm to the creative business. There is an opportunity to democratize data dramatically beyond where we are today. The vast majority goes to data; the goal is to bring Gen AI and LLMs to the data. There will be new models and new refinements on an ongoing basis. The best companies will be able to transition between models with ease, and those that can will win. There are platforms where it's easy to bring LLMs to the data, as opposed to sending large data volumes to where the LLMs are. Microsoft will stand by customers from a copyright perspective. It's not just a line; it's a truth that we strongly believe in. We want to bring Gen AI and LLMs to the data.

20VC

Aaron Levie: How the Business Model of SaaS Changes Forever & Startups vs Incumbents:Who Wins?|E1155
Guests: Aaron Levie
reSee.it Podcast Summary
AI is entering a moment of both breakthrough technology and breakthrough application, a period that will be as much about incumbents as startups. It will demand nonstop focus and execution, with a window of opportunity to build platform-scale, franchise-like companies. This window is fleeting, and the lines between technology advances and practical use cases will define who survives. Foundational models will exist, but the scale of impact will come from application-layer companies. The trend is that billion-dollar bets to commoditize the model layer by leaders like Zuckerberg push differentiation toward specialized applications. Pure-play horizontal LLMs may be subsumed by incumbents, leaving room for a handful of independent players in niche areas while the rest get absorbed. AI agents represent a shift from chat-based UX to autonomous task execution. After the initial ChatGPT wave, the next breakthrough is agents that complete tasks instead of merely returning information. This echoes RPA but with more general intelligence, turning software into AI labor that can act as autopilots for outbound sales, product testing, and customer support, changing how organizations structure work and processes. Regulation has become more surgical than pausing progress. While some bills raise concerns, practical conversations about copyrights, data training, and IP are progressing. Pricing and go-to-market models for AI services are still evolving, with debates over consumption-based versus seat-based models. Leaders expect AI labor to drive growth across functions, prompting changes in org design, budgets, and the need for change management as AI becomes embedded in everyday operations.

Generative Now

Soumith Chintala: Meta’s AI Strategy, PyTorch, and Llama
Guests: Soumith Chintala
reSee.it Podcast Summary
Meta’s open source stance, PyTorch, and its rapid adoption form a surprising origin story for today’s AI tooling. Soumith Chintala, co-creator of PyTorch, explains how Torch inspired him in academic research and evolved into a library that developers worldwide embraced. A community arose to share models, solve problems, and amplify standout work, turning a niche tool into shared infrastructure used by OpenAI, Meta apps, Tesla, NASA, and many others. The ecosystem’s strength came from listening to users, resolving real challenges, and making neural networks easy to build and scale. Inside Meta, Llama followed a natural path: open sourcing what can advance the world, with safety baked in. Chintala says releasing Llama was obvious and strategic, aligned with Meta’s FAIR philosophy of accelerating AI progress through open research. The conversation emphasizes that value comes from how models are deployed, personalized, and integrated with tools, retrieval, and memory. Cost and practicality matter; a larger model may be smarter but not always cost-effective to serve. Beyond tooling, the discussion turns to governance, regulation, and social implications of AI breakthroughs. The Johansson likeness case and OpenAI’s equity clawback highlight tensions between individual rights, intellectual property, and the pace of innovation. The group frames energy and data as real bottlenecks in a capital-intensive race that may split across market segments and open versus closed ecosystems. They acknowledge debates about architectures and tool use, and they note PyTorch’s continued relevance alongside approaches that combine neural networks with retrieval, memory, and external systems.

20VC

a16z GP, Martin Casado: Anthropic vs OpenAI & Why Open Source is a National Security Risk with China
Guests: Martin Casado
reSee.it Podcast Summary
There's only been one sin, and that one sin is zero-sum thinking. Has every layer gotten value? The answer has been unilaterally yes: every layer has winners, because these markets are so large and growing so fast. Martin outlines two futures for coding models: "In one future, you've got Anthropic as a monopoly, and in another future you have, let's call it an oligopoly, or maybe even a bit more of a market of these coding models." He notes that "historically, models don't really keep much of an advantage because they're so easy to distill." This implies that success will hinge on a separate consumption layer that serves non-technical users and Python coders alike, creating a healthy, distributed value layer even as models compete. Episodic launches mean competitive advantage is not guaranteed; leaders may emerge and fade. Brand effects are taking hold in this phase of model scaling as leaders gain trust and scale; the frontier keeps expanding, and adoption is easier with a household name. Slowing growth will increase dispersion and favor regional strategies; geographic biases are showing up in AI, and balkanized regulatory environments will produce regional players. He also cautions that many approaches to scaling don't generalize, and argues open source is most dangerous because China is better at it than we are. On safety, he argues for funding academia and national labs and embracing a mix of open and closed approaches to maintain innovation while addressing national security concerns. The only sin in investing is missing the winner. There is no one-size-fits-all strategy: you invest in leaders, manage ownership, and navigate pivots with founder-market fit as a core filter. The conversation covers conflicts, multi-stage funding, and the reality that markets evolve, sometimes dramatically.
A brief personal thread references Zorba the Greek when discussing resilience and grounding under pressure, and ends on a note that the firm will keep adapting through the next decade.

The Ben & Marc Show

Build Your Startup With AI
reSee.it Podcast Summary
The discussion centers on the current state of AI and its implications for startups and larger companies. The hosts express concerns about the greed of major tech companies like Google and Microsoft, which they believe prioritize profit over safety and push for government intervention to restrict open AI development. They address listener questions about what founders should focus on in AI, emphasizing the need to consider how advancements in foundation models could impact their startups. Sam Altman's advice is highlighted: founders should anticipate significant improvements in AI models and assess whether those advancements will benefit or threaten their businesses. The hosts discuss the competitive landscape, noting that while large companies have advantages, startups can succeed by focusing on niche applications or by creating distilled versions of existing models. The conversation also touches on the potential for AI models to improve dramatically through advances in training techniques and data utilization. They explore the idea that while proprietary data is often touted as a competitive advantage, the vast amount of publicly available data may overshadow it; companies should focus on leveraging their data effectively rather than merely selling it. They compare the current AI boom to the early days of the internet, suggesting that AI represents a new kind of computing rather than a network, and predict a diverse ecosystem of AI models, akin to the evolution of computers from mainframes to personal devices. They caution about the speculative nature of technology investments, acknowledging that while many startups will fail, the process is essential for innovation and growth. Finally, they argue that speculation in technology fosters a spirit of invention and entrepreneurship, which is crucial for progress.
The conversation concludes with a call for further exploration of these themes in future discussions.

Generative Now

Mikey Shulman: Suno and the Sound of AI Music
Guests: Mikey Shulman
reSee.it Podcast Summary
Music meets machine learning in a way that reframes both art and tech. Suno's Mikey Shulman describes a path from Harvard physics to leading a music AI startup, through Kensho, where an early experiment transcribing earnings calls sparked a realization: audio is beautiful but dramatically under-explored by AI. After Bark, an open-source text-to-speech project with surprising traction, the team decided to build music tools that are future-proof and player-friendly. In just over six months they moved from the Kensho halls to Suno, founded by four musicians who wanted to let everyone make music, not just engineers. At the core is a foundation-model approach, chosen to unlock flexible, long-lasting capabilities rather than one-off audio tricks. The founders point to a lack of large, searchable audio corpora and the difficulty of inspecting and curating audio data, which makes audio models inherently data-hungry and brittle when trained in isolation. They emphasize self-supervised learning on vast, unlabeled audio as the path forward, and they purposefully avoid mimicking specific artists to respect rights. Their early Bark project revealed strong community interest in music, reinforcing the shift from speech to music as the next frontier. They describe a product philosophy that centers on intuitive workflows and aesthetics, not professional-only studios. Suno's first moves were a Discord community and a web app, then a Microsoft Copilot integration that places a free tier of songs into a major productivity suite. The team talks about 'soundtracking your life,' where prompts or even non-text cues can inspire music, and about controllability, curiosity, and the joy of arriving at a result the user loves. They also stress ethical licensing: you won't prompt for a Taylor Swift song, and the model is designed to avoid direct impersonation while still enabling personal, original music.

20VC

Clem Delangue: The Ultimate Guide to Investing in AI; Elon's Threat to Sue OpenAI | E1013
Guests: Clem Delangue
reSee.it Podcast Summary
Hugging Face began as a joke about listing publicly with an emoji and pivoted from a Tamagotchi AI to an open AI platform. The founders pursued a challenging, entertaining AI project before the pivot. They center open science and open source as the engine of progress, with a team across Paris, New York, and SF, prioritizing the joy of building over milestones. On models, Hugging Face contrasts "one model to rule them all" with open-source models: a single dominant model concentrates builders, while multiple models let firms tailor use cases and train their own. API-first can be faster at the start, but differentiation and cost control favor internal models. Enterprises may prefer bundled solutions; AI-native startups push bespoke architectures. Regulation and openness are central. Clem argues regulation is necessary, with clearer fair-use rules for training data. He celebrates openness, citing content access, opt-out data initiatives, and the Musk/OpenAI dispute as part of the conversation, and says openness and transparency help society and the field, while warning against fear-driven bans and doom narratives. Pricing varies; adoption and usage drive value. Hiring is the biggest bottleneck, as top ML engineers are scarce and expensive, and AI-native startups may outpace incumbents in differentiation, demanding strategic focus and speed.

Breaking Points

EXPERT: AI Bubble Is REAL — But Here’s How We Fix It
reSee.it Podcast Summary
AI investment is booming, but the guests warn that the surge may be a bubble built on unsustainable funding rather than lasting value. The discussion weighs the benefits of rapid innovation against risks of secrecy, monopoly, and misaligned incentives as OpenAI, Anthropic, and others push proprietary systems while open-source rivals push for transparency and broader participation. Data sovereignty emerges as a core concern: who controls citizens’ information once models are trained on it, and what power do governments retain? Travis Oliphant argues that open-source AI should be the norm, not an afterthought. He outlines risks of closed systems, stresses the need for distributed decision-making, and proposes that if a model trains on government data, the government should own it. He also frames four alternative funding mechanisms for sustainable open-source ecosystems and cautions against overreliance on centralized data centers and hype from investors. Open Teams and the Open-Source AI Foundation aim to influence policy and build sovereign AI tools for organizations and governments. The interview leans toward practical steps, such as policy rules that retain data with the public sector, and toward cultivating an ecosystem where open models compete with commercial platforms. The bottom line: the long arc of AI’s benefits may hinge on distributed ownership and accountable, transparent development.