TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
- xAI is two and a half years old and has achieved rapid progress across multiple domains, outperforming many competitors that are five to twenty years older with larger teams. The company claims to be number one in voice, image, and video generation, and to be leading in forecasting with Grok 4.20. Grok is integrated into apps like Imagine and Grokipedia, with Grokipedia positioned to become Encyclopedia Galactica: much more comprehensive and accurate than Wikipedia, including video and image data not present on Wikipedia.
- xAI operates a 100,000-GPU training cluster and is approaching 1,000,000 GPU-equivalents for training. The company emphasizes velocity and acceleration as the key drivers of leadership in technology.
- The company outlines four product areas plus the infrastructure layers: Grok Main and Voice (the main Grok model), Grok Code (a coding-focused model), Imagine (an image and video model), and MacroHard (digital emulation of entire companies).
- Grok Main and Voice will be merged into one team. OpenAI released a voice product in September 2024, but xAI states it started later and, within six months, developed an in-house model surpassing OpenAI's, with Grok in over 2,000,000 Teslas and a Grok voice agent API. The aim is to move beyond question answering toward building and deploying broader capabilities, such as handling legal questions, generating slide decks, or solving puzzles.
- The product vision stresses that Grok Main should be genuinely useful across engineering, law, and medicine, valuable in the wide range of areas necessary to understand the universe and make things useful.
- MacroHard is the effort to digitally emulate entire companies, enabling end-to-end digital output and the emulation of human workers across functions (rocket design, AI chips, physics, customer service, etc.). MacroHard is presented as potentially the most important project, with the roof of the training cluster bearing the MacroHard name. The team emphasizes that the most valuable companies produce digital output, and that MacroHard could replicate the outputs of companies like Apple, Nvidia, Microsoft, and Google, among others, across multiple domains.
- Imagine focuses on image and video generation; six months into the project, Imagine released v1 and topped leaderboards across several metrics. The team highlights rapid iteration, with multiple product updates daily and model updates every other week. Users are generating close to 50,000,000 videos per day and generated 6,000,000,000 images in the last 30 days, which the company claims surpasses all other providers combined. The goal is to turn anything you can imagine into reality.
- Hakan discusses longer-form video capabilities, predicting end-of-year support for generating 10-to-20-minute videos in one shot, with real-time rendering and interaction in imagined worlds. The expectation is that most AI compute will eventually go to real-time video understanding and generation, with xAI leading this trajectory and continuing to improve Grok Code toward state-of-the-art performance within two to three months.
- MacroHard details: the team envisions a fully capable digital human emulator able to perform any computer-based task, including using advanced tools in engineering and medicine, such as rocket engines designed by AI. The project is framed as a response to the remaining gap between AI and human capability in this domain, making it a high-priority area for recruiting top talent.
- XChat and X Money are described as major products in development. XChat is planned as a standalone messaging app with full features (encrypted messaging, audio and video calls, screen sharing, etc.), with no advertising or hooks in Grok chat. X Money is in closed beta within the company, moving toward external beta and then worldwide rollout, intended to be the central hub for all monetary transactions, including mortgages, business loans, lines of credit, stock ownership, and crypto.
- The presentation also emphasizes the synergy between xAI and SpaceX, noting that SpaceX has acquired xAI and that orbital AI data centers are being pursued to dramatically increase available AI training compute. FCC filings indicate plans to launch a million AI satellites for training and inference, with launches potentially adding 200–300 gigawatts of compute per year; longer-term goals include moon-based factories and satellites, and a mass driver to launch AI satellites into orbit. The lunar mass driver is described as a path to exponentially greater compute, potentially reaching gigawatts or terawatts per year, with the broader ambition of enabling a self-sustaining lunar city and interplanetary expansion.
- The overall message stresses extraordinary progress, a relentless push toward greater compute and capability, and aggressive growth in user adoption and product scope. The company frames its trajectory as a fundamental shift toward real-time, scalable AI that can transform work, communication, and the management of digital assets across the globe and beyond Earth.

Video Saved From X

reSee.it Video Transcript AI Summary
The gist is that a battery just works: understanding the exact mechanics of "super agents" isn't necessary, only what they can do when deployed. The speaker emphasizes speed and immediacy, and avoids extensive debates about large versus small language models. Their company uses data and AI to hedge equity books, executing 6,000 money movements in split seconds, which requires confined data and smaller AI models, not LLMs. The speaker advises against ignoring AI and says the company's goal is to be the best at it.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 relays a question from the audience about whether the ADL has considered hiring people to counter-march, particularly with diverse ethnicities, so that marches do not go unopposed on social media and in publicity. Speaker 1 responds that it's important to "go where the puck is going," not just where it is. Since October 7, resources have been redistributed toward LLMs and generative AI. He asks how many in the audience used ChatGPT in the last week, noting that ChatGPT has over a billion users and serves as ground truth for vast numbers of people after existing for only about two and a half years. While marching in the streets is one approach, he emphasizes building technology to train LLMs more effectively and working with leading AI companies. He cites collaborations with OpenAI, Alphabet, Anthropic, Meta, and Microsoft, and says they are in conversations with Alibaba to train its LLM, highlighting that Chinese AI models are profound, potent, cost-effective, and spreading. He reiterates that marching in the streets is only one option; the focus is on going where the puck is going by investing in Wikipedia and LLMs, and changing the game before it changes us.

Video Saved From X

reSee.it Video Transcript AI Summary
In 2014, the speaker's company hired Manuela Veloso from Carnegie Mellon to run machine learning. They have a 200-person AI research group and spend approximately $2 billion on AI, with about 600 end use cases. This number of use cases is expected to double or triple next year. The company moved AI and data out of the technology department because it was deemed too important. The head of AI and data now reports to the speaker and the president. The company focuses on accelerating AI development and tests extensively, collaborating with many people. AI will change everything.

a16z Podcast

The Top 100 Consumer AI Apps | The a16z Show
Guests: Olivia Moore
reSee.it Podcast Summary
The episode surveys the release of the top 100 consumer AI apps and what has changed since the project began three years ago. The host notes that despite ongoing growth, the space remains early, with ChatGPT still the dominant global product by wide margins on both web and mobile. A key theme is the expanding consumer and prosumer focus, as non-AI-native products embrace AI features and integrate AI into more surfaces, from browsers to desktop apps. The conversation covers how app stores are catalyzing differentiation, with ChatGPT pursuing broad consumer reach and monetization through ads and transactions, while Claude leans into premium data sources and professional tools. Gemini is described as carving out a creative corner, and the speakers discuss how the three platforms' user bases and paid adoption track with major product releases, such as AI-enhanced Gmail, Sheets, and Calendar, and the emergence of 200+ app ecosystems in each store with only modest overlap. The dialogue then shifts to the idea of compounding advantages: lock-in from memory, authentication layers, and cross-product utility, suggesting that a user's AI identity could travel with them and amplify the value of AI across tools and services. The discussion also touches on how enterprise contracts may shape memory and privacy decisions, and whether this will slow or accelerate personal adoption as users decide how to segment work versus personal data. The second half explores global trends, regional adoption, and cultural attitudes toward AI. The hosts highlight Russia and China as distinctive markets with strong local ecosystems and restrictions that shape usage, while places like Singapore, Hong Kong, and the UAE exhibit high per-capita activity tied to tech-forward workforces. The conversation delves into the evolution of creative tools, noting shifts away from standalone image generators toward integrated model ecosystems, and the rising importance of music, voice, and video tools. The discussion closes with reflections on agents and desktop ambient AI, the rapid emergence of OpenClaw and Manus in the consumer-oriented space, the idea that agents will become a standard feature across tech companies, and the potential for memory to become a core differentiator for AI products in the near future.

Generative Now

Reinventing Wall Street: Rogo’s AI Revolution with Gabriel Stengel
Guests: Gabriel Stengel
reSee.it Podcast Summary
A finance startup is quietly threatening to reshape Wall Street's workflow by turning minutes of research into seconds of insight. Gabriel Stengel, a Lazard alumnus turned founder of Rogo, explains how the idea began as a Princeton senior project pairing computer science with econometrics, then evolved into a product that now helps banks and hedge funds analyze earnings, benchmark peers, and build decks in moments. He describes his path from banking to data science at Lazard, and why a focused AI tool mattered more than novelty. Rogo's edge rests on three pillars. First, it combines licensed data from providers like S&P Global Capital IQ with company-internal data, giving the model access to both external and proprietary sources. Second, it uses the tooling that matters on finance teams, such as Excel, filings, and precedent transactions, so outputs are auditable and actionable. Third, it relies on post-training and reinforcement fine-tuning to teach the model how to use these tools and to follow Wall Street workflows, not merely generate plausible text. Market entry hinged on a shift from structured data to structured and unstructured data, and on reframing the pitch around ROI. A large private equity firm's pricing question, quoted loosely as two million dollars a year, became a turning point, signaling that users valued the automation and speed. The team also wrestled with enterprise-grade security, multi-cloud deployment, and governance, treating security as a core feature. They still face the challenge of enterprise sales, preferring top-down deals while exploring a possible product-led growth path. Looking ahead, Stengel envisions Rogo becoming the most effective analyst on Wall Street within five to ten years, enabling firms to win transactions faster and extend sophisticated financial services to smaller players. He sees consulting firms and corporate development teams as early adopters, with banks potentially co-building tools rather than defending against them. The future, he says, is a mix of automation and human judgment: AI handling routine diligence while bankers focus on strategic, relationship-driven work, with specialization delivering competitive advantage.
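
Rogo's tool-use pillar follows a general pattern: the model emits a structured tool call, the host executes it, and the raw result is preserved so the output stays auditable. A minimal sketch of that pattern in Python, with hypothetical tool names and stubbed data (nothing here reflects Rogo's actual implementation):

```python
# Minimal tool-dispatch loop: the model emits a structured "tool call",
# the host executes it, and the call + raw result are kept together so
# the final output stays auditable. Tool names and data are hypothetical.
import json

def lookup_filing_metric(ticker: str, metric: str) -> dict:
    # Stand-in for a real filings/Capital IQ lookup; returns a traceable source.
    fake_db = {("ACME", "revenue"): {"value": 1.2e9, "source": "ACME 10-K p.44"}}
    return fake_db.get((ticker, metric), {"value": None, "source": "not found"})

TOOLS = {"lookup_filing_metric": lookup_filing_metric}

def run(tool_call_json: str) -> dict:
    call = json.loads(tool_call_json)        # e.g. produced by the model
    result = TOOLS[call["name"]](**call["arguments"])
    return {"call": call, "result": result}  # keep call + result for audit

# A model asked "What was ACME's revenue?" might emit:
print(run('{"name": "lookup_filing_metric", '
          '"arguments": {"ticker": "ACME", "metric": "revenue"}}'))
```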

20VC

Aidan Gomez: What No One Understands About Foundation Models | E1191
Guests: Aidan Gomez
reSee.it Podcast Summary
The reality of the matter is there's no market for last year's model. If you throw more compute at the model, if you make the model bigger, it'll get better. There will be multiple models, verticalized and horizontal, and consolidation is coming. It's dangerous when you make yourself a subsidiary of your cloud provider. Gomez grew up in rural Ontario, where they couldn't get internet and dial-up lasted for years after high-speed arrived elsewhere; that early scarcity fueled a fascination with tech, coding, and gaming that taught resilience. On the scaling question, "the single biggest rate limiter that we have today" is not just more compute but smarter data and algorithms. There will be both large general models and smaller focused ones. The pattern is to "grab an expensive big model, prototype with it, prove that it can be done, and then distill that into an efficient focused model at the specific thing they care about." "The major gains that we've seen in the open-source space have come from data improvements": higher-quality data and synthetic data. We need to "let them think and work through problems" and even "let them fail." "Private deployments, like inside their VPC or on-prem," are essential because data stays on the customer's hardware. Enterprises are sprinting toward production, focusing on employee augmentation and productivity. The hype around "agents" is justified; they could transform workflows, but the value will come from human-machine collaboration. Robotics is viewed as the next "era of big breakthroughs" once costs fall. Beyond models, the drive is "driving productivity for the world and making humans more effective," pushing growth over displacement.
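
The prototype-then-distill pattern Gomez describes typically means training a small student model to match a large teacher's output distribution. A toy NumPy sketch of the core step, the temperature-softened KL objective, under the assumption of simple logit vectors (real distillation runs over full datasets and model logits):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions: the student
    # learns the teacher's relative preferences, not just its argmax labels.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([4.0, 2.0, 0.5])  # big model's logits over 3 tokens
student = np.array([1.0, 1.0, 1.0])  # untrained student starts uniform
print(distillation_loss(student, teacher))  # positive; shrinks as student matches
```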

Uncapped

Agents in the Enterprise | Aaron Levie, CEO of Box
Guests: Aaron Levie
reSee.it Podcast Summary
AI is the big unlock for data, Levie argues, because Box has spent nearly two decades storing and managing critical assets, including financial documents, contracts, marketing assets, and employee records, and most of that data sits idle after early use. Box serves about 115,000 customers and is in roughly two-thirds of the Fortune 500; yet the real value lies in the data's potential to reveal product opportunities, boost sales, and speed onboarding. AI, he says, lets the company reimagine itself as if it started in 2025, grappling with how to organize a data-rich platform from the ground up while staying fast and secure. The ambition is to plug AI at the core of everything Box does, not treat it as a bolt-on. Levie envisions millions of AI agents focused on content-driven workflows. In Box AI Studio, customers can create agents or rely on automatically created ones to review contracts for risky clauses, process invoices, extract asset data for marketing campaigns, and automate related tasks. An agent could research dozens of financial documents, assemble a trends report, and even reach across outside systems via a tool-use framework. The vision extends beyond Box: agents will thread data from Salesforce, ServiceNow, Slack, Workday, and other platforms to build a complete picture or drive a workflow. In practice, this means background agents that execute tasks, free up human time, and accelerate decision-making. An important thread is Box’s architecture and neutrality. Levie notes Box’s cloud-native, multi-tenant design allowed new AI capabilities to plug in without version fragmentation. Acquisitions must feed into a common platform rather than operate in silos. He argues the future of work is not confined to Box but spans Salesforce, ServiceNow, and dozens of other platforms, with agents conversing across systems. This openness is framed by business logic: AI’s economics may initially track labor costs, but over time software margins should prevail as agents scale beyond headcount limits. He invokes Seven Powers, arguing that cornered resources will determine who wins in this AI era.

Conversations (Stripe)

Stanislas Polu (Dust) and Roxanne Varza (Station F) fireside chat | Stripe AI Day—Paris
Guests: Stanislas Polu, Roxanne Varza
reSee.it Podcast Summary
Stan described Dust's evolution from a developer-focused platform to a production-ready AI stack. A year ago, they saw that models were powerful and available through an API, which framed both a product and a research challenge. The initial Dust iteration focused on building LLM apps by chaining model calls and external APIs; while developers showed interest, they doubted the long-term value, especially alongside LangChain. They learned that production often requires removing the framework and calling models directly. They opted to base the company in France, drawn by a deep talent pool in Paris and the ease of building a French topco; remote work at OpenAI didn't change the strategic choice. The ecosystem's strengths include strong French mathematical training and a large Paris AI research community fostered by programs like CIFRE and the GAFAs' local labs. Remaining gaps include GPU incentives and funding access. Dust targets tech companies for internal-data productivity, with plans to extend to external data and to evolve interfaces beyond conversational chat for growth.
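
The lesson that production often means dropping the framework and calling models directly can be illustrated by a bare two-step chain: each step is a prompt, a model call, and plain Python glue. `call_model` here is a stand-in for whichever provider SDK or HTTP call you actually use:

```python
# A bare two-step chain with no framework: just prompts and plain glue code.
def call_model(prompt: str) -> str:
    # Placeholder: returns a canned answer so the sketch runs offline.
    # Swap in a real provider client in production.
    return f"[model output for: {prompt[:40]}...]"

def summarize_then_draft(document: str, audience: str) -> str:
    summary = call_model(f"Summarize for internal use:\n{document}")
    return call_model(f"Draft a note for {audience} based on:\n{summary}")

print(summarize_then_draft("Q3 metrics: revenue up 12%...", "the sales team"))
```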

Generative Now

Arvind Jain: Why Now Is the Time to Solve Enterprise Search
Guests: Arvind Jain
reSee.it Podcast Summary
Imagine an enterprise where every piece of knowledge lives in Confluence, Jira, Google Drive, and a dozen other systems, yet no one can find what they need fast enough. That challenge fueled Arvind Jain's move from Rubrik to founding Glean in 2019, years before the current AI boom. At Rubrik, rapid growth created silos and a drop in productivity as knowledge sprawled across teams. Jain, a search veteran, set out to build a powerful, secure enterprise search that unifies data and people. From day one, Glean blended traditional retrieval with transformer-based understanding. They built integrations to connect enterprise data sources via published APIs, then layered security to ensure permissions aren't breached. The product used BERT-era ideas and later a hybrid approach: retrieve relevant fragments from internal data, then pass them to a model for reasoning. They also train small enterprise-specific encoders on each company's corpus to improve semantic matching, while relying on multiple model providers for reasoning. Market fit arrived slowly. Initially, many saw search as a vitamin, not a painkiller. After about 30 tech customers scaled usage, momentum grew through word of mouth. The ChatGPT moment amplified demand: enterprises imagined a personalized, internal ChatGPT that knows their data. Glean crossed $100 million in ARR and tripled last year, helped by the belief that AI should be accessible to every employee. Jain emphasizes education inside organizations so workers become AI-first and adoption becomes practical, not theoretical. Strategically, Glean positions itself as a horizontal AI platform atop diverse models and data sources, rather than a single-model vendor. They partner with model providers and hyperscalers, and offer an agent-building layer that lets business users define multi-step workflows in natural language. Competition is welcomed: it accelerates R&D and expands the ecosystem. Jain's experience at Google and Rubrik informs a focus on recruiting top engineers and maintaining fast, ambitious execution as the company scales.
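
The hybrid retrieval described here, traditional ranking combined with semantic matching, can be sketched as a weighted blend of keyword overlap and embedding similarity. A toy version with made-up scoring and two-dimensional "embeddings" (Glean's actual rankers are not public):

```python
import math

def keyword_score(query: str, doc: str) -> float:
    # Lexical signal: fraction of query terms that appear in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def cosine(a, b):
    # Semantic signal: cosine similarity between embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    # docs: list of (text, embedding); alpha balances lexical vs semantic signal.
    scored = [(alpha * keyword_score(query, text)
               + (1 - alpha) * cosine(query_vec, vec), text)
              for text, vec in docs]
    return sorted(scored, reverse=True)

docs = [("vacation policy for engineers", [0.9, 0.1]),
        ("Q3 sales deck", [0.1, 0.9])]
print(hybrid_rank("engineer vacation days", [0.8, 0.2], docs))
```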

20VC

Aravind Srinivas:Will Foundation Models Commoditise & Diminishing Returns in Model Performance|E1161
Guests: Aravind Srinivas
reSee.it Podcast Summary
Today's models just give you an output. Tomorrow's models will start with an output, reason, elicit feedback from the world, go back, and improve the reasoning; that is the beginning of a real reasoning era. The biggest beneficiaries of the commoditization of foundation models are the application-layer companies ready to go. Aravind describes his accidental entry into AI via an undergrad ML contest, exploring scikit-learn and reinforcement learning. He notes diminishing returns and the central role of data curation in scaling. What makes these models magical is not domain-specific data but general-purpose emergent capabilities: they are trained to predict the next token, yet they show reasoning-like flexibility. "The magic in these models" emerges from vast, diverse data; the debate about verticalization is not settled, with some arguing domain specialization helps and others doubting it. Memory and long context remain challenges; some see a Gmail-like storage approach as practical, while infinite context remains elusive. The path forward may depend on how we orchestrate data, prompts, and tools. On the business side, the conversation centers on commoditization, funding, and monetization. "The second-tier models" will be commoditized; OpenAI, Anthropic, and others are valued more for the people who build the models than for the models themselves. Perplexity pursues a mix of advertising, subscriptions, APIs, and enterprise offerings, aiming to scale with a strong product and user base. They view advertising as potentially dominant if they crack the relevance code, while enterprise remains a separate, longer-term path. The vision for 2034 is Perplexity as the go-to assistant for facts and knowledge.
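
The "start with an output, reason, elicit feedback, go back, and improve" loop is essentially generate-critique-refine. A schematic Python version, with `draft`, `critique`, and `revise` as stand-ins for real model, tool, or user-feedback calls:

```python
# Generate -> get feedback -> refine, repeated until the critique passes.
# draft(), critique(), and revise() are stand-ins for model or tool calls.
def draft(question):           return f"draft answer to: {question}"
def critique(answer):          return None if "revised" in answer else "missing sources"
def revise(answer, feedback):  return f"revised ({feedback} fixed): {answer}"

def answer_with_refinement(question: str, max_rounds: int = 3) -> str:
    answer = draft(question)
    for _ in range(max_rounds):
        feedback = critique(answer)  # could come from a verifier, tool, or user
        if feedback is None:         # no objections: stop refining
            return answer
        answer = revise(answer, feedback)
    return answer

print(answer_with_refinement("Why did margins fall in Q2?"))
```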

Possible Podcast

The SECRET to scaling your business
reSee.it Podcast Summary
AI agents listening in on every professional meeting may seem like science fiction, but it is becoming practical. In a live session, Reid Hoffman asks founders to explain how they misread scaling in an era of rapid AI leverage. The first question focuses on misconceptions about growing a company quickly, and the answer emphasizes scale product-market fit instead of simply hiring more people. Scaling is not merely adding fuel; it requires proving the fit while expanding, and deciding how the business model will evolve. Blitzscaling is risky when the probability of scale product-market fit is uncertain, and Hoffman names Uber, Airbnb, and the early days of Facebook as examples. The discussion then turns to how AI changes scale decisions, including whether model size truly matters, the rise of open-source models, and how multimodal options create competition among large providers. Teams must stay nimble, adjusting licenses and strategies as models evolve, while balancing network effects that can slow or speed adoption. The talk returns to concrete loops where AI can serve front-line customer interactions, sales, and enterprise workflows, all while monitoring the human factors that drive deployment. Large-scale adoption will depend on clear value.

20VC

Noam Shazeer: How We Spent $2M to Train a Single AI Model and Grew Character.ai to 20M Users | E1055
Guests: Noam Shazeer
reSee.it Podcast Summary
Noam Shazeer, co-founder and CEO of Character.ai, calls it a full-stack AI computing platform giving people access to their own flexible superintelligence. The mission is "a billion users inventing a billion use cases," with examples like "I'm talking to a video game character who's now my new therapist, and this makes me feel better." He contrasts a direct-to-consumer approach with a traditional B2B path, citing Google's lesson that general tech should launch to billions. He explains language modeling as guessing what the next word will be, using scalable neural models. The biggest challenge is building a system that is both very general and usable: "make it very general, and make it usable." Privacy matters: "we are careful to not compromise anyone's privacy," and user data helps improve the product. He also notes an ecosystem of open and closed approaches and that startups often move faster than giants.
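
Shazeer's "guess the next word" framing is literal: a language model assigns probabilities to candidate next tokens given context. A toy bigram version of the same idea, where a count table plays the role that a large neural network plays in practice:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count next-word frequencies for each word: a bigram "language model".
counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def next_word_probs(word):
    # Turn raw counts into a probability distribution over next words.
    total = sum(counts[word].values())
    return {nxt: c / total for nxt, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.666..., 'mat': 0.333...}
```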

20VC

Alex Wang: Why Data Not Compute is the Bottleneck to Foundation Model Performance | E1164
Guests: Alex Wang
reSee.it Podcast Summary
Alex and the host discuss AI's potential as a military asset, arguing AGI could outpace traditional weapons and empower aggressors. The conversation notes the CCP's ability to drive centralized industrial policy and asks whether a future in which China or Russia possesses AGI would allow them to conquer other nations. They explore model performance, noting that since GPT-4 there has been a data/compute arms race, with NVIDIA's revenue surging, while large models have not produced a jaw-dropping leap. Three pillars, compute, data, and algorithms, shape progress, with a data wall limiting gains. To move beyond emulating the internet, they advocate frontier data: complex reasoning chains, tool use, and agent-based workflows. The strategy combines enterprise data mining (e.g., roughly 150 PB inside JPMorgan versus the sub-petabyte text corpora used for internet training) with forward data production and human-guided synthetic data. They discuss roles like AI trainers and the need for data abundance, including longitudinal workplace data and consumer data, to train powerful agents. They describe a hybrid process: autonomous generation of data by AI, guided by human experts who correct factuality and improve coverage across scenarios. On business models and deployment, they argue data strategy can create durable advantages; data access and exclusive data sources may outpace compute or algorithms over time. Enterprises may favor on-prem or closed systems to protect data, while open models remain viable for broader value. Regulation remains a tension, with calls for data pooling in some sectors and careful anonymization in healthcare. They foresee a future where open-source or on-prem solutions coexist with hyperscalers, and where value accrues above the model in apps, services, and data networks. The discussion ends with hiring, leadership, and a pragmatic "Navy SEALs" approach to building elite teams.
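
The hybrid data process described, AI generating candidates with human experts correcting them, can be sketched as a pipeline in which every generated example passes a review step before entering the training set. All function bodies below are illustrative stand-ins:

```python
# Synthetic-data loop: a generator model proposes examples, and a human
# (or human-built checker) corrects or rejects them before they are kept.
def generate_candidate(topic):
    # Stand-in for a model call producing a draft training example.
    return {"prompt": f"Question about {topic}", "answer": "model-written answer"}

def expert_review(example):
    # Stand-in for a human pass: fix factual errors, or return None to reject.
    example["answer"] += " (expert-verified)"
    return example

def build_dataset(topics):
    dataset = []
    for topic in topics:
        candidate = generate_candidate(topic)
        reviewed = expert_review(candidate)
        if reviewed is not None:   # only reviewed examples enter the set
            dataset.append(reviewed)
    return dataset

print(build_dataset(["tool use", "multi-step reasoning"]))
```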

Lenny's Podcast

Why LinkedIn is turning PMs into AI-powered "full stack builders” | Tomer Cohen (LinkedIn CPO)
Guests: Tomer Cohen, Michael Truell, Varun Mohan, Anton Osika
reSee.it Podcast Summary
The episode dives into LinkedIn’s ambitious experiment with AI-augmented product building, where the traditional product development lifecycle is being reimagined through a full stack builder model. Tomer Cohen, LinkedIn’s CPO, explains how the time constants of change now outpace organizational response, forcing a rethink of who builds what and how. Instead of a multi-team, handoff-driven process that expands research, design, and validation into a lengthy gauntlet, LinkedIn is pushing builders to own end-to-end experiences that blend human judgment with AI capabilities. The conversation emphasizes that the key traits for builders—vision, empathy, communication, creativity, and especially judgment—must be sharpened, while automation absorbs everything that can be quantified or standardized. The goal is not to replace talent but to enable skilled builders to move faster, adapt to shifting contexts, and operate with greater resilience by composing a human-AI product team that can pivot as needed. Cohen makes clear that this shift requires more than new tools; it demands cultural change, incentives, and a clear pathway for career progression as the organization flattens hierarchies into flexible pods of full stack builders who can ideate, prototype, test, and launch with velocity. The discussion details the three pillars of LinkedIn’s approach: a re-architected platform that AI can reason over, bespoke internal tools and agents built to work with their own stack, and a culture that rewards rapid experimentation and sharing of successful practices. A standout theme is how much effort has gone into data curation, context creation, and the design of governance and trust mechanisms to guard against misuse. The guests walk through concrete examples—a trust agent that flags vulnerabilities in a spec, a growth agent that critiques ideas, a research agent that leverages LinkedIn’s corpus to assess market insights, and an analyst agent that navigates the graph—to illustrate how a suite of purpose-built agents can augment human capabilities without sacrificing accountability. The interview also covers practical timelines, the internal pilot structure, staff incentives, and the balance between specialization and full-stack fluency, underscoring that the road to scale is iterative, expensive upfront, and demands persistent leadership and clear communication about progress and outcomes. The episode culminates in reflections on talent, management, and career pathways, including the Associate Product Builder program as a future-facing replacement for traditional APM tracks, the need for inclusive mentorship, and the imperative to celebrate wins to sustain momentum. Throughout, the speakers stress that change management—through visibility, early wins, cross-functional collaboration, and a culture of experimentation—is as crucial as the technology itself. They acknowledge the friction and challenges of converging tools, the risks of over-reliance on external solutions, and the reality that not everyone will want to become a full stack builder, making the shift as much about culture and incentives as about capabilities. The overall message is one of ambitious but patient transformation, with a clear eye toward continuous progress rather than a final state.

Generative Now

Arvind Jain: Why Now Is the Time to Solve Enterprise Search (Encore)
Guests: Arvind Jain
reSee.it Podcast Summary
Generative Now begins with a bold premise: a company's knowledge is powerful, but if employees can't find it, it's barely useful. Arvind Jain, founder and CEO of Glean, explains that the problem came into focus while scaling Rubric, a data-security startup. As Rubric grew beyond a thousand people, frustrations rose: information lived across Confluence, Jira, Google Drive, SharePoint, and emails, and nobody could quickly locate experts or documents. Jain's background in Google search inspired the idea of an enterprise search that connects disparate systems and understands context, not just keywords. The 2018 spark: transformer-based models hinted at semantic search improvements, setting the stage for what would become Glean. From day one, Glean fused retrieval with generation. The team built integrations to Confluence, Jira, Google Drive, SharePoint using published APIs, and created a secure, permission-aware search experience for privileged enterprise data. They used BERT as the initial model, retraining it on each enterprise corpus to tailor it to company terms and acronyms, while combining traditional ranking with semantic matching. The system operates as a hybrid stack: a retrieval layer fetches relevant bits of knowledge, then a foundation model reasons and generates responses. They're not a pure foundation-model company; they connect multiple models and let customers pick the best fit. Market fit evolved in two waves: first, about 30 tech-sector companies adopted Glean at scale, creating word-of-mouth; second, ChatGPT's rise reframed the value of enterprise knowledge as something to be embedded in internal data. Jain notes that employees now want an AI that already knows their company, and adoption grew as ROI and ease of use improved. Competition would not derail them; they see themselves as a horizontal AI platform on top of model providers, enabling agents and workflows across many apps. Features like operators and browser automation extend AI to tasks even without API access. He credits recruiting as the key early move and says one executive use case per quarter helps embed AI, fostering an AI-first culture across the organization.

20VC

Zico Kolter: OpenAI's Newest Board Member on The Biggest Questions and Concerns in AI Safety | E1197
Guests: Zico Kolter
reSee.it Podcast Summary
Kolter, a professor and head of CMU's machine learning department who recently joined the OpenAI board, explains that LLMs work by training on vast internet data to predict the next word; 'you take a lot of data from the internet, you train a model' and 'use that model to predict what's the next word.' He calls this 'a little bit absurd that this works' but says the output is 'intelligent' and 'demonstrably intelligent.' On data, Kolter outlines two opposing views: some say resources are exhausted, others that we haven't approached the data frontier. He insists we are 'not even close to hitting the limits of available data' and that 'public models are trained on the order of 30 terabytes of data—a tiny amount' compared with what's possible. There is far more data in video and audio across modalities, and compute remains the big bottleneck. Kolter says he uses the largest models for daily work because 'it just works better,' and only after establishing repeatable tasks would smaller, task-specific models come into play. He notes commoditization and potential consolidation among providers, with powerful capabilities often debuting in closed models. To combine data access with safety, he highlights retrieval-augmented generation (RAG): 'the model will not be retrained on that' data through API use. On safety and governance, he warns misinformation is amplified; 'The real negative outcome is that people are not going to believe anything that they see anymore' and AI acts as an accelerant. He discusses jailbreaks and prompt-injection, cyber risks, and 'correlated failures' in critical infrastructure like power grids. Regulation is needed but must adapt; he remains optimistic and wants AI tools to be used safely.
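
Kolter's point about RAG, that the model "will not be retrained on that" data when it arrives via retrieval, comes down to where the data enters: documents are fetched per query and injected into a transient prompt while the weights stay fixed. A schematic sketch with a stubbed model call and toy relevance scoring:

```python
# RAG keeps private data out of the weights: documents are fetched per
# query and injected into the prompt; the model itself is never updated.
PRIVATE_DOCS = [
    "Policy 12: incident reports are due within 24 hours.",
    "Policy 7: grid maintenance windows are Sundays 02:00 to 04:00.",
]

def retrieve(query: str, k: int = 1):
    # Toy relevance: rank documents by shared words with the query.
    overlap = lambda d: len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(PRIVATE_DOCS, key=overlap, reverse=True)[:k]

def call_model(prompt: str) -> str:
    return f"[answer grounded in prompt context: {prompt[:60]}...]"  # stub

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # Weights untouched: the private data lives only in this transient prompt.
    return call_model(f"Context:\n{context}\n\nQuestion: {query}")

print(answer("When are grid maintenance windows?"))
```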

Generative Now

Chris Pedregal + Sam Stephenson: Making Meetings More Effective with Granola
Guests: Chris Pedregal, Sam Stephenson
reSee.it Podcast Summary
Granola co-founders Chris Pedregal and Sam Stephenson built a note-taking AI tool after witnessing how meetings generate tedious follow-up work. The duo met through a shared conviction that AI could reshape tools for thinking, inspired by GPT-3's instruct variant and a fascination with tools for thought. They described three years of exploration, from leaving Google to chase a startup in London to identifying a painful, universal problem: turning meeting conversations into usable, actionable notes and tasks rather than menial after-work. They designed Granola to sit inside meetings and become a habit. They stressed the app layer versus frontier models: it's more valuable to build a polished product that leverages the best models than to train one from scratch. They discussed examples like real-time transcription, multiple-language support, and retrieval-augmented generation to manage long meeting histories beyond the model's context window. They described a design philosophy they call the lizard-brain approach: keep the interface simple because users operate under stress during back-to-back meetings. The goal is an experience that surfaces what matters from a single meeting and across teams. On business and growth, they described a capital-intensive, long-horizon bet. Revenue comes from enterprise adoption and network effects through shared Granola workspaces, not just AI credits. They acknowledged that compute is expensive today but expect costs to fall over time, enabling broader use. They contrasted London's talent with Silicon Valley's, framing Granola as a Silicon Valley-style startup in a European hub. They emphasized product quality and taste, screening for product thinking in engineers, and balancing rapid iteration with preserving a simple, elegant user experience. Looking ahead, they envision Granola becoming a jetpack for the mind, a workspace for people whose work is conversation, with meeting transcripts, emails, and documents interwoven into a coherent knowledge base. They imagined use cases for venture memos, sales calls, and company reorgs, all powered by context-rich AI. Privacy discussions emerged as they noted Granola does not store audio and users control access to transcripts, signaling norms that will shape adoption. The conversation closed with a reminder that the era of AI-enabled tools is accelerating, and Granola aims to lead with usefulness.
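
Managing meeting histories longer than a model's context window, as mentioned above, typically means chunking transcripts and retrieving only the most relevant pieces per query. A minimal sketch with toy word-overlap scoring (Granola's actual pipeline is not public):

```python
def chunk(transcript: str, size: int = 12) -> list[str]:
    # Split a transcript into fixed-size word windows.
    words = transcript.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Toy relevance: rank chunks by word overlap with the query.
    score = lambda c: len(set(query.lower().split()) & set(c.lower().split()))
    return sorted(chunks, key=score, reverse=True)[:k]

history = ("We agreed the launch moves to March. Sam owns the pricing page. "
           "Budget review is next Tuesday. Chris will email the vendor today.")
relevant = top_chunks("who owns pricing", chunk(history))
print(relevant)  # only these chunks go into the model's limited context
```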

20VC

Douwe Kiela: Why Data Size Matters More Than Model Size; Why Open Source Isn't Going to Win | E1032
Guests: Douwe Kiela
reSee.it Podcast Summary
Hallucination: these models make things up with high confidence. Attribution: we don't know why they're saying what they're saying and can't trace it back to anything. There are compliance issues, since information can't really be removed, which is tricky from a GDPR perspective. There are data privacy issues, because you have to send your valuable company data to someone else's servers, especially in enterprise contexts. Contextual emerged as a response to these gaps. The founders noted, after ChatGPT went viral, that enterprise adoption requires reliability, grounding, and deployability. They built a retrieval-augmented generation architecture, decoupling memory from the language model to ground generations in retrieved information, enabling attribution and easier memory updates. We discuss data, proprietary data, and the importance of data versus model size. Data size matters more than model size; training on more data yields better results than merely increasing parameters. The data advantage and proprietary practices are highlighted, as well as a data-plane/model-plane separation to preserve privacy while maintaining performance. The market remains competitive with open-source and frontier models. Regulation is debated; existential-risk concerns exist but are not the sole focus. EU regulation could hinder innovation, while the US is seen as more permissive. Evaluation challenges include dynamic testing, with adversarial evaluation like Dynabench as a future standard. The potential for enterprise AI services is highlighted, alongside ongoing VC enthusiasm. Books: Superintelligence by Nick Bostrom.
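
Decoupling memory from the language model, as in Contextual's RAG architecture, makes attribution and removal mechanical: each answer can carry the IDs of the passages it was grounded in, and deleting information is an edit to the store rather than retraining. A schematic sketch with a stubbed generation step and hypothetical document IDs:

```python
# Memory lives outside the model: answers cite the passages they used,
# and "forgetting" (e.g., for GDPR) is a delete on the store, not retraining.
MEMORY = {
    "doc-17": "The 2023 audit flagged two open findings",
    "doc-42": "Both findings were closed in January 2024",
}

def retrieve_with_ids(query: str, k: int = 2):
    # Toy relevance: rank stored passages by word overlap with the query.
    hit = lambda text: len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(MEMORY.items(), key=lambda kv: hit(kv[1]), reverse=True)[:k]

def grounded_answer(query: str):
    passages = retrieve_with_ids(query)
    answer = f"[generated from {len(passages)} passages]"  # stubbed model call
    return {"answer": answer, "citations": [doc_id for doc_id, _ in passages]}

print(grounded_answer("status of audit findings"))
del MEMORY["doc-17"]  # removal is a store operation; no model update needed
```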

20VC

Christian Kleinerman: Do OpenAI and Anthropic Have a Sustaining Moat? Who Wins the AI Wars? | E1063
reSee.it Podcast Summary
It's not a mass firing happening next week; it's incremental productivity boosts over the next 6, 12, 24 months, and over time you decide whether to turn those gains into fewer employees or into more productively deployed employees. There is certainly hype, but there is fundamental innovation there, comparable in scale to the internet. This is a real shot in the arm to the creative business, and an opportunity to democratize data dramatically beyond where we are today. The vast majority of the value goes to the data, so the goal is to bring gen AI and LLMs to the data; there are platforms where that is easy, as opposed to sending large data volumes to where the LLMs are. There will be new models and new refinements on an ongoing basis; the best companies will be able to transition between models with ease, and those that can will win. Microsoft will stand by customers from a copyright perspective: "It's not just a line, it's a truth that we strongly believe in."
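
The claim that the best companies will "transition between models at ease" implies an abstraction seam between application code and any particular provider. A minimal sketch of that seam; the provider classes are placeholders, not real SDK calls:

```python
# Application code depends on one interface; swapping models is a config change.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider A answer to: {prompt}]"  # stand-in for a real SDK call

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider B answer to: {prompt}]"

def summarize(model: TextModel, table_name: str) -> str:
    # Call sites only know the TextModel interface, never a vendor SDK.
    return model.complete(f"Summarize recent changes in {table_name}")

# Switching models touches only this line, not the call sites:
print(summarize(ProviderA(), "orders"))
print(summarize(ProviderB(), "orders"))
```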

Doom Debates

AI Genius Returns To Warn Of "Ruthless Sociopathic AI" — Dr. Steven Byrnes
Guests: Dr. Steven Byrnes
reSee.it Podcast Summary
In this episode of Doom Debates, the conversation with Dr. Steven Byrnes centers on why some researchers remain convinced that future AI could become ruthlessly sociopathic, even as current systems appear friendly or subservient. The guest outlines two broad frameworks for how powerful AIs might make decisions: imitative learning, which mirrors human behavior by copying observed actions, and consequentialist approaches like model-based planning and reinforcement learning, which optimize outcomes. The host and guest debate where the true power lies, arguing that while imitative learning explains much of today's AI capability, the next generation may rely more on decision-making processes that actively shape real-world results. The discussion delves into why LLMs, despite impressive feats, still rely heavily on weight-based knowledge acquired during pre-training, and why a future regime with continual self-modification could yield much more capable systems, potentially with ruthless goals if not properly aligned. A central thread is the distinction between the current "golden age" of imitative AI, where tools like code-writing assistants deliver enormous productivity gains, and a coming paradigm in which agents learn and adapt in a more open-ended, self-improving way. The host highlights how agents already outperform humans in certain tasks by organizing orchestration, yet Byrnes argues that true general intelligence with robust, long-horizon planning will require deeper shifts beyond the context-window limitations of today's models. Throughout, the pair explores the risk calculus: even with safety measures and constitutional prompts, the fundamental architecture could tilt toward instrumental convergence if the underlying learning loop is shaped by outcomes rather than imitation. The discussion also touches on practical implications for society, economics, and policy. They compare current capabilities with future possibilities, debating how unemployment could respond to increasingly capable AI and whether a "foom" scenario is imminent or a more gradual transformation lies ahead. They scrutinize the feasibility of a "country of geniuses in a data center" and whether truly open-ended, continuous learning could unlock a new regime of intelligence that rivals or surpasses human adaptability. Throughout, Byrnes emphasizes the importance of continuing work on technical alignment across multiple problem spaces, from pandemic prevention to nuclear risk, while acknowledging that many uncertainties remain and the pace of change could be rapid and disruptive.

20VC

Adarsh Hiremath @ Mercor: The Fastest Growing Startup in Silicon Valley | E1261
Guests: Adarsh Hiremath
reSee.it Podcast Summary
The round is $100 million and the price was $2 billion. We'll live in a world with many models and different use cases. The recruiter is the one who controls the talent inflows and outflows of every company, and you can learn all you need to know about a company from those inflows and outflows. The businesses that succeed when software costs approach zero will be built on network effects. At the time it wasn't obvious that we should drop out, and I sympathize with my parents for not approving, because there was no Thiel Fellowship. The moment I knew I wanted to drop out was back when we had an office in Palo Alto. We were 19. We raised over $3 million; General Catalyst led the round. The money was wired and we changed our salaries in Gusto. Net retention is over 100%. There's not a single person who works on sales at Mercor. We leverage LLMs and all these models throughout our product, and data is the bottleneck.

Cheeky Pint

Satya Nadella describes how lessons from Microsoft’s history apply to today’s boom
Guests: Satya Nadella
reSee.it Podcast Summary
Satya Nadella reflects on Microsoft’s journey from information management to a cloud and AI-driven era, emphasizing architecture over ad hoc tools. He discusses the need for an ensemble of models, robust data governance, and memory, entitlements, and action spaces to enable reliable AI in enterprises. Nadella highlights the importance of the Microsoft 365 graph, Copilot, and the dream of a company possessing its own foundation model to retain sovereignty over knowledge. He contrasts past internet pivots with today’s AI transition, stressing the urgency of scalable infrastructure and the governance required to deploy AI at enterprise scale. The conversation delves into practicalities of adoption: the Ignite conference’s role in diffusing AI inside enterprises, the challenge of data plumbing, and the push to build internal AI factories rather than mimic external AI only. Nadella asserts that value comes from organizing data into a single semantic layer that can be integrated with ERP and other systems, and from embedding governance to protect confidential information. He also explores how the next generation of tools—ranging from IDE-like experiences to agent-based workflows—will change how professionals work, not just what they work with. On strategy and culture, Nadella discusses the tension between bundling and modularity, the need to stay platform-agnostic yet deeply integrated, and lessons learned from Microsoft’s journey across Windows, Azure, and open ecosystems. He emphasizes a growth mindset over rigidity, translating founder-driven energy into scalable leadership, and the importance of hiring, memory, and decentralization to sustain momentum as the company grows. The chat shifts to industry foresight, including the evolution of commerce through agentic experiences, personalized catalogs, and conversational checkout. Nadella and Collison debate how many apps a future platform will need, the role of open ecosystems, and the sovereignty of corporate AI models. They touch on the potential for AI to redefine corporate structures, and the enduring appeal of tools like Excel as parables for user-friendly, programmable interfaces. Towards the close, Nadella recalls the 1990s internet pivot, the dot-com era, and the need for adaptable strategy as new paradigms emerge. The dialogue ends on human elements—founder mindsets, mentorship, and Hyderabad’s culture—underscoring that tech leadership blends engineering excellence with resilient, community-driven leadership.

20VC

Surge CEO & Co-Founder, Edwin Chen: Scaling to $1BN+ in Revenue with NO Funding
Guests: Edwin Chen
reSee.it Podcast Summary
Edwin frames Surge as a company where quality is the North Star, distinguishing it from what he calls body shops, or body shops masquerading as technology firms. He says quality is the most important thing and that profitability and control over one's destiny matter, even while aiming for billion-dollar exits. The show splits into two parts: the rise story and an analysis of data labeling's future. He argues that at large tech firms, 90% of people work on useless problems, while smaller teams move 10x faster with higher talent density and clearer customer focus. Surge differentiates itself by building the technology to measure and improve data quality rather than supplying warm bodies. He notes data quality is hard and adversarial: labelers cheat and labeling is flawed, so the company relies on sophisticated algorithms and evaluation. The core principle is that data quality drives large-model training, and throwing more humans at the problem does not scale. He emphasizes a visceral understanding of the data and a product mindset anchored in solving customer problems, not chasing internal metrics or logos. Founding moment: leaving Twitter after confronting data-labeling bottlenecks, he built a V1 in a couple of weeks, spoke to customers directly, and declined VC fundraising because the business was profitable from month one. Early customers negotiated contracts quickly; Surge avoided a large sales push and grew by serving committed customers who shared the vision. He leans on strong product principles, quality above all else, and rejects "build fast, pivot" pressure that undercuts long-term strategy. Post-ChatGPT, demand surged, and Scale's acquisition broadened exposure. He argues data quality remains the bottleneck, far more critical than compute or algorithms, because flawed data misleads progress. The company emphasizes providing high-quality data that customers could not obtain elsewhere. He envisions a future with multiple frontier AI labs and a mix of monolithic and specialized models; synthetic data has limits, and scaling across hundreds or thousands of projects depends on technology to identify high-quality contributors and curb cheating.
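
Measuring label quality algorithmically rather than throwing more humans at the problem often starts with gold questions: known-answer items hidden among real tasks, against which each labeler is scored. A toy version of that check (illustrative only; Surge's actual methods are not public):

```python
# Gold-question check: hide known-answer items among real tasks and score
# each labeler's accuracy on them; low scores flag careless or cheating work.
GOLD = {"item-3": "cat", "item-7": "dog"}  # items with verified answers

def labeler_accuracy(labels: dict) -> float:
    graded = [labels[i] == ans for i, ans in GOLD.items() if i in labels]
    return sum(graded) / len(graded) if graded else 0.0

submissions = {
    "labeler-a": {"item-1": "x", "item-3": "cat", "item-7": "dog"},
    "labeler-b": {"item-1": "y", "item-3": "dog", "item-7": "dog"},
}
for name, labels in submissions.items():
    acc = labeler_accuracy(labels)
    print(name, acc, "flag for review" if acc < 0.9 else "ok")
```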

Generative Now

Rajarshi Gupta: Artificial Intelligence and Crypto at Coinbase
Guests: Rajarshi Gupta
reSee.it Podcast Summary
Generative AI is reshaping how Coinbase protects accounts, personalizes experiences, and governs risk, according to Rajarshi Gupta, the head of machine learning. He describes a platform approach that includes an internal employee assistant, a customer-facing assistant, and a guardrail system that governs behavior and actions as models operate across the stack. Deployments span multiple clouds, and Gupta emphasizes the ongoing challenge of GPU capacity, which shapes how aggressively the company can scale its AI initiatives while maintaining reliability and safety for users. Gupta's background mixes academia, industry, and entrepreneurship. He did a PhD at Berkeley, then spent a decade at Qualcomm Research, where he led the industry's first on-device ML engine for Android malware detection. The model ran in the phone's secure stack and was written in C for performance; training happened offline while inference ran on-device. After stints at Balbix and Avast, including Avast's IPO and Norton merger, he moved to AWS as a SageMaker GM before joining Coinbase three years ago to lead AI across the company. This track record underpins his focus on security, privacy, and robust engineering. On the product side, Coinbase released an internal employee assistant in fall 2023, connected to data sources via Glean, and a performance-review assistant used by thousands. The company also built an API layer so other teams can build on the AI stack. Examples include an incident bot in Slack and a text-to-SQL bot for data queries. For customers, Coinbase deployed an LLM-based chatbot in November and began surfacing Gemini-like answers in site search. The CB GPT platform is multi-cloud (Azure, AWS, GCP) and supports Claude and Gemini, with ongoing guardrails, evaluation, and human-in-the-loop testing. Gupta also discusses the challenges of evaluating and deploying LLMs in enterprise settings, including the absence of reliable confidence estimates. He highlights the use of an evaluation portal, external guardrails, and the need to balance innovation with compliance across financial and crypto regulations. He notes that enterprise data plumbing and integration remain the main blockers for broader AI adoption.
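
A guardrail system that "governs behavior and actions" as models operate typically wraps every model call with input and output checks. A schematic wrapper with stand-in rules and a stubbed model call (Coinbase's actual guardrails are internal):

```python
# Guardrail wrapper: screen the input, call the model, screen the output.
BLOCKED_INPUT = ("seed phrase", "private key")      # illustrative rules only
BLOCKED_OUTPUT = ("guaranteed returns",)

def call_model(prompt: str) -> str:
    return f"[assistant reply to: {prompt}]"  # stub for the real LLM call

def guarded_chat(user_message: str) -> str:
    low = user_message.lower()
    if any(term in low for term in BLOCKED_INPUT):
        return "I can't help with credentials or key material."
    reply = call_model(user_message)
    if any(term in reply.lower() for term in BLOCKED_OUTPUT):
        return "I can't provide that; routing you to a human agent."
    return reply

print(guarded_chat("How do I check my transaction history?"))
print(guarded_chat("What's my seed phrase?"))
```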