reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Foundry provides an open architecture for closing the loop between operations and analytics. It allows users to bring existing data and model tooling together inside of an ontology to build workflows, applications, and capture decisions to inform better operations and continuous learning. Data teams can bring data lakes and warehouses into Foundry as the nouns of the enterprise. Analytics teams can bring models, linear programming models, ML models, and stored procedures as the verbs. Assembling this operating layer iteratively builds a foundation to drive operational workflows, conduct sophisticated analytics, and capture decisions to pipe to enterprise systems. Foundry includes data integration, model integration, an ontology layer encompassing objects, relationships, actions, and business processes, and a workflow layer with application building and self-serve analytics.

Video Saved From X

reSee.it Video Transcript AI Summary
Foundry provides an open architecture to close the loop between operations and analytics. It allows users to bring existing data and model tooling together inside of an ontology to build workflows, applications, and capture decisions to inform better operations and continuous learning. Data teams can bring data lakes and warehouses, connecting them into Foundry as the nouns of the enterprise. Analytics teams can bring models, linear programming models, ML models, and stored procedures, connecting them as the verbs that go along with the nouns to create business processes. Assembling this operating layer iteratively builds a foundation to drive operational workflows, conduct sophisticated analytics like scenario planning, and capture decisions to pipe to enterprise systems. Foundry includes data integration, model integration, an ontology layer, a workflow layer, and a decision orchestration layer to capture learnings from end users and feed them back to analytics and data teams. Foundry can get users operational in days.

Video Saved From X

reSee.it Video Transcript AI Summary
Amy and her colleague discuss integrating AI-native innovation with a human-centered design approach, focusing on how technology can be made accessible through natural interaction with AI and through rapid, user-friendly development flows. They begin by positioning AI as the new user interface. The other speaker notes that AI’s ease and approachability come from the ability to use human language, enabling conversations that let people interact with technology in a fundamentally new way. This language-based interaction is highlighted as a core shift in how users engage with digital tools and services. Beyond language, the conversation expands to include other modalities that users can employ to communicate with AI. The speakers identify text, images, and audio as essential inputs. The concept of multimodality is introduced to describe the ability to input using whatever format feels most natural to the user. Examples given include dropping in a screenshot, using voice to talk to the AI, or providing a video or a document. The emphasis is on a flexible, conversational experience that can accept diverse media and still deliver the necessary answers and help. The speakers then pivot to the question of how to create applications quickly and easily. They express enthusiastic interest in a partnership with Figma, a design platform. The collaboration is described as enabling designers who create an application design in Figma to hand off that design to a build agent, which can translate the design into an enterprise-grade application. This suggests a streamlined pipeline from design to production, leveraging AI to automate aspects of the development process and accelerate delivery while maintaining enterprise quality. Throughout, the emphasis remains on combining AI-driven capabilities with human-centered design principles to simplify interactions and speed up application development. The dialogue underscores the idea that users can engage with AI through natural language and multiple input formats, and that design-to-deployment workflows can be accelerated through integrated tools and partnerships. To learn more about AI experience, the conversation points listeners to a link in the comments, inviting further exploration of the described capabilities and partnerships.

Video Saved From X

reSee.it Video Transcript AI Summary
Patrick Sarval is introduced as an author and expert on conspiracies, system architecture, geopolitics, and software systems. Ab Gieterink asks who Patrick Sarval is and what his expertise entails. Sarval describes himself as an IT architect, often a freelance contractor working with various control and cybernetics-oriented systems, with earlier experience including a Bitcoin startup in 2011, photography work for events, and involvement in topics around conspiracy thinking. He notes his books, including Complotcatalogus and Spiegelpaleis, and mentions Seprouter and Niburu in relation to conspiratorial topics. Gieterink references a prior interview about Complotcatalogus and another of Sarval’s books, and sets the stage to discuss Palantir, surveillance, and the internet. The conversation then shifts to explaining Palantir and its significance. Sarval emphasizes Palantir as a key element in a broader trend rather than focusing solely on the company itself. He uses science-fiction analogies to describe how data processing and artificial intelligence are evolving. In particular, he introduces the concept of a “brein” (brain) or “legion” that integrates disparate data streams, builds an ontology, and enables predictive analytics and tactical decision-making. Palantir is described as the intelligence brain that aggregates data from multiple sources to produce meaningful insights. Sarval explains that a rudimentary prototype of such a system operates under the name Lavender in Gaza, where metadata from sources like Meta (Facebook, WhatsApp, Instagram), cell towers, satellites, and other sensors are fed into Palantir. The system performs threat analysis, ranks threats from high to low, and then a military operator—still human—must approve the action, with about 20–25 seconds to decide whether to fire a weapon. The claim is that Palantir-like software functions as the brain behind this process, orchestrating data integration, ontology creation, data fusion, digital twins, profiling, predictions, and tactical dissemination. The discussion covers how Palantir integrates data from medical records, parking fines, phone data, WhatsApp contacts, and more, then applies an overarching data model and digital twin to simulate and project outcomes. This enables targeted marketing alongside military uses, illustrating the broad reach of the platform. Sarval notes there are two divisions within Palantir: Gotum (military) and Foundry (business models), which he mentions to illustrate the dual-use nature of the technology. He warns that the system is designed to close feedback loops, allowing it to learn and refine its outputs over time, similar to how a thermostat adjusts heating based on sensor inputs. A central concern is the risk to the rule of law and human agency. The discussion highlights the potential erosion of the presumption of innocence and due process when decisions increasingly rely on predictive models and AI. The panel considers the possibility that in a high-stress battlefield scenario, soldiers or commanders might defer to the Palantir-presented “world view,” making it harder to refuse an order. There is also concern about the shift toward autonomous weapons and the removal of human oversight in critical decisions, raising fears about the ethics and accountability of such systems. The conversation moves to the political and ideological backdrop surrounding Palantir’s leadership. Peter Thiel, Elon Musk, and a close circle with ties to PayPal and other tech-industry figures are discussed. Sarval characterizes Palantir’s leadership as ideologically defined, with statements about Zionism and a political worldview influencing how the technology is developed and deployed. The dialogue touches on perceived connections to broader geopolitical influence, including the role of influence campaigns, media shaping, and the involvement of powerful networks in technology development and national security. As the discussion progresses, the speakers explore the implications of advanced AI and the “new generative AI” era. They consider the nature of AI and the potential for it to act not just as a data processor but as a decision-maker with emergent properties that challenge human control. The concept of pre-crime—predicting and acting on potential future threats before they materialize—is discussed as a troubling possibility, especially when a machine’s probability-based judgments guide life-and-death actions. Towards the end, the conversation contemplates what a fully dominated surveillance state might look like, including cognitive warfare and personalized influence through media, ads, and social networks. The dialogue returns to questions about how far Palantir and similar systems have penetrated international security programs, with speculation about Gaza, NATO adoption, and commercial uses beyond military applications. The speakers acknowledge the possibility of multiple trajectories and emphasize the need for checks and balances, transparency, and critical reflection on the power such systems confer upon a relatively small group of technologists and influencers. They conclude with a nod to the transformative and potentially dystopian future of AI-enabled surveillance and decision-making, cautioning against unbridled expansion and urging vigilance.

Video Saved From X

reSee.it Video Transcript AI Summary
Enterprises collect and store data hoping to gain insights to improve customer experience and increase revenue, but to gain insights, they need to know how each piece of data maps together. Focusing on individual touch points yields only minor successes; substantial improvement comes from the accumulation of touch points, considering sequence and time, which are called journeys. ClickFOX connects all touch points into journeys. The ClickFOX experience analytics platform (CEA) ingests raw data from any source, identifies detailed events, connects them into individual paths, applies business logic to create tasks, transforms, and merges tasks into journeys, providing insights critical to the entire enterprise. ClickFox's CEA platform can ingest raw data from any source across any channel and discover and test hypotheses without heavy technical resources, providing unique filtering with views into the data from multiple dimensions. Enterprises can measure customer journeys with corresponding fluctuations in social media sentiment. CEA outputs journey data assets that can be leveraged by the enterprise. ClickVox provides much faster time to insights, allowing business professionals to directly answer questions and test hypotheses. By looking at the end to end journey, ClickFox and its CEA platform help transform metrics, culture, the front lines, and the overall customer experience.

The OpenAI Podcast

Codex and the future of coding with AI — the OpenAI Podcast Ep. 6
Guests: Greg Brockman, Thibault Sottiaux
reSee.it Podcast Summary
AI helpers that can actually write code are now routine enough to reshape how developers work, yet the episode opens by recalling the early signs of life in GPT-3, when a string of characters could complete a Python function and hint at a future where a language model writes thousands of lines of coherent code. The OpenAI team then walks through Codex and the new Codeex, GP5, and the idea that the greatest leap comes not from a single model but from how it is woven into a practical harness. Latency remains a product feature, guiding choices about interface style, whether ghost text, dropdowns, or more sophisticated integrations. The guests describe a long trajectory from the first demos to today’s richer coding workflows, where AI is a collaborator that you actually trust to help you ship real software. central to that vision is the harness, the set of tools and workflows that connect the model to the outside world. The hosts explain that the harness is not a luxury but a prerequisite: the model supplies input and output, while the harness enables action, iteration, and environment awareness. They describe the agent loop, in which the AI can plan, execute, and reflect, becoming a collaborator that can navigate codebases, run tests, and refactor across long sessions. Different form factors—terminal, IDE extensions, cloud tasks, and web interfaces—are explored, with an emphasis on meeting developers where they are. The team recalls internal experiments that evolved from asynchronous, agentic prototypes to a more integrated, multi‑modal reality, including a terminal‑based workflow, a code editor workflow, and a remote‑task flow that keeps working even when a laptop is closed. Looking ahead, the conversation sketches an agentic future in which coding agents live in cloud and on local machines, supervised to produce tangible value. They discuss safety, sandboxed permissions, and escalation for risky actions, along with alignment challenges. Beyond code, they imagine applications in life sciences, materials research, and infrastructure where formal verification could change reliability. They recount how code review powered internal velocity at OpenAI, and how AI‑driven reviews surface contracts, dependencies, and edge cases, often revealing faults top engineers might miss. The hosts emphasize practical adoption today—zero‑setup entry, breadth of tools, and cross‑tool integration—while keeping the horizon in view: a future where a coding assistant amplifies human effort without erasing judgment.

Generative Now

Reinventing Wall Street: Rogo’s AI Revolution with Gabriel Stengel
Guests: Gabriel Stengel
reSee.it Podcast Summary
A finance startup is quietly threatening to reshape Wall Street’s workflow by turning minutes of research into seconds of insight. Gabe Stangel, a Lazard alumnus turned founder of Rogo, explains how the idea began as Princeton senior projects pairing computer science with econometrics, then evolved into a product that now helps banks and hedge funds analyze earnings, benchmark peers, and build decks in moments. He describes his path from banking to data science at Lazard, and why a focused AI tool mattered more than novelty. Rogo’s edge rests on three pillars. First, it combines licensed data from providers like S&P Global Cap IQ with company-internal data, giving the model access to both external and proprietary sources. Second, it uses tooling that matters on finance teams—Excel, filings, and precedent transactions—so outputs are auditable and actionable. Third, it relies on post-training and reinforcement fine-tuning to teach the model how to use these tools and to follow Wall Street workflows, not merely generate plausible text. Market entry hinged on a shift from structured data to structured and unstructured data, and on reframing the pitch around ROI. A large private equity firm’s pricing question—quoted loosely as two million dollars a year—became a turning point, signaling that users valued the automation and speed. The team also wrestled with enterprise-grade security, multi-cloud deployment, and governance, treating security as a core feature. They still face the challenge of enterprise sales, preferring top‑down deals while exploring a possible product-led growth path. Looking ahead, Stangel envisions Rogo becoming the most effective analyst on Wall Street within five to ten years, enabling firms to win transactions faster and extend sophisticated financial services to smaller players. He sees consulting firms and corporate development teams as early adopters, with banks potentially co-building tools rather than defending against them. The future, he says, is a mix of automation and human judgment—AI handling routine diligence while bankers focus on strategic, relationship-driven work, with specialization delivering competitive advantage.

Generative Now

Adam Wenchel: How Arthur AI is Making LLMs Trustworthy
Guests: Adam Wenchel
reSee.it Podcast Summary
Arthur AI’s Adam Wenchel describes a career spent shaping AI at every scale, from DARPA research to a high‑velocity startup path and then Capital One’s AI transformation. Arthur positions itself as a steward of trust for AI, focusing on measuring and monitoring performance across generative systems and large language models, with guardrails and explainability built in. Wenchel recalls launching Capital One’s AI team to support hundreds of millions of consumers, balancing complex model systems with regulatory scrutiny and risk management, and recognizing the need for modern tools to surface issues before the balance sheet is affected. That experience seeded Arthur’s core vision: bring the same visibility engineers enjoy with Data Dog or Splunk into the data science realm. In practice, Arthur helps teams track not just model accuracy but the health of a model ecosystem—credit decisions, fraud detection, supply chain optimizations, and other AI‑driven processes—so executives can see what’s driving outcomes, where biases may lie, and when interventions are required. Wenchel notes the shift from pure AI performance to governance, guardrails, and transparency as central to enterprise adoption. With the generative AI revolution, Arthur rebuilt its roadmap in early 2024. The company now emphasizes three life‑cycle stages: pre‑production validation to choose the right model and data strategy; real‑time controls to block hallucinations, guard against prompt injection, and render risky outputs safe; and continuous monitoring to measure how changes in providers or models affect performance. The team expanded tools for evaluation, including testing prompts on a company’s own data (Bench), and deploying objective metrics for conciseness, usefulness, and reliability. Wenchel describes how hazard prompts like hedging have grown as guardrails, sometimes overly cautious, shaping the choice of models for different use cases. Looking ahead, the interview covers market dynamics and regulation. Wenchel argues for balanced policy: light touch to spur innovation, strong accountability for deployers, and a framework like the recent executive order to clarify responsibilities. He notes the reality that enterprises often use multiple providers and need a single pane of glass to supervise them all. Arthur remains open to both third‑party models and fine‑tuned in‑house systems, stressing that many customers will still require best‑of‑breed guardrails rather than building everything alone. The conversation closes with reflections on hiring and the excitement of contributing to AI’s practical, safe deployment.

Uncapped

Agents in the Enterprise | Aaron Levie, CEO of Box
Guests: Aaron Levie
reSee.it Podcast Summary
AI is the big unlock for data, Levie argues, because Box has spent nearly two decades storing and managing critical assets, including financial documents, contracts, marketing assets, and employee records, and most of that data sits idle after early use. Box serves about 115,000 customers and is in roughly two-thirds of the Fortune 500; yet the real value lies in the data's potential to reveal product opportunities, boost sales, and speed onboarding. AI, he says, lets the company reimagine itself as if it started in 2025, grappling with how to organize a data-rich platform from the ground up while staying fast and secure. The ambition is to plug AI at the core of everything Box does, not treat it as a bolt-on. Levie envisions millions of AI agents focused on content-driven workflows. In Box AI Studio, customers can create agents or rely on automatically created ones to review contracts for risky clauses, process invoices, extract asset data for marketing campaigns, and automate related tasks. An agent could research dozens of financial documents, assemble a trends report, and even reach across outside systems via a tool-use framework. The vision extends beyond Box: agents will thread data from Salesforce, ServiceNow, Slack, Workday, and other platforms to build a complete picture or drive a workflow. In practice, this means background agents that execute tasks, free up human time, and accelerate decision-making. An important thread is Box’s architecture and neutrality. Levie notes Box’s cloud-native, multi-tenant design allowed new AI capabilities to plug in without version fragmentation. Acquisitions must feed into a common platform rather than operate in silos. He argues the future of work is not confined to Box but spans Salesforce, ServiceNow, and dozens of other platforms, with agents conversing across systems. This openness is framed by business logic: AI’s economics may initially track labor costs, but over time software margins should prevail as agents scale beyond headcount limits. He invokes Seven Powers, arguing that cornered resources will determine who wins in this AI era.

Sourcery

Text-to-CAD: AI Revolutionizing Hardware Design with Jordan Noone of Zoo
Guests: Jordan Noone
reSee.it Podcast Summary
Zoo is pursuing a major shift in hardware development by integrating AI and cloud-based computation into the design process. The guest discusses how traditional hardware design tools rely on manual, mouse-driven workflows and how Zoo aims to replace that with a computational geometry engine that is GPU-optimized, cloud-based, and API-accessible. The core idea is to enable automation, code-based interaction, and AI-driven generation of geometry, so that design can scale like software. The conversation outlines how the company started with geometry and data-access as bottlenecks, then built an end-to-end stack: a geometry engine behind the scenes, a modeling app for manual edits, code-based tooling for automation, and Text-to-CAD for generative design prompts. The team emphasizes that the most valuable customers are those who combine software and hardware expertise, including aerospace, robotics, and autonomous systems, and that overlapping capabilities at the hardware-software frontier create the biggest opportunities for efficiency gains. The discussion details how Text-to-CAD works alongside the modeling app, enabling three modes of interaction: traditional mouse-based edits, scripting via code, and conversational prompts to generate or modify parts. Onboarding is described as self-serve and intuitive for mechanical engineers, with APIs available for developers to extend workflows. The founders share their backgrounds in aerospace and software hardware, including Relativity Space, Docker, and Oxide, and explain how those experiences shaped Zoo’s approach to tooling, automation, and data rails. Funding history is disclosed (about $6 million to date, with a pre-seed from Embedded Ventures and a seed from V-Rex, plus notable angel investors). Looking ahead, the team plans to ship enhanced text-to-CAD editing in the modeling app, begin manufacturing-focused capabilities such as CNC tool-pathing, and eventually build an internal data center and a factory to verify end-to-end workflows. The overarching vision is a full hardware development lifecycle platform that reduces labor and accelerates iteration across design and manufacturing, leveraging a geometry-first data model.

Generative Now

Arvind Jain: Why Now Is the Time to Solve Enterprise Search
Guests: Arvind Jain
reSee.it Podcast Summary
Imagine an enterprise where every piece of knowledge lives in Confluence, Jira, Google Drive, and a dozen other systems, yet no one can find what they need fast enough. That challenge fueled Arvind Jain's move from Rubric to founding Glean in 2019, years before the current AI boom. At Rubric, rapid growth created silos and a drop in productivity as knowledge sprawled across teams. Jain, a search veteran, set out to build a powerful, secure enterprise search that unifies data and people. From day one, Glean blended traditional retrieval with transformer-based understanding. They built integrations to connect enterprise data sources via published APIs, then layered security to ensure permissions aren't breached. The product used BERT-era ideas and later a hybrid approach: retrieve relevant fragments from internal data, then pass them to a model for reasoning. They also train small enterprise-specific encoders on each company corpus to improve semantic matching, while relying on multiple model providers for reasoning. Market fit arrived slowly. Initially, many saw search as a vitamin, not a painkiller. After about 30 tech customers scaled usage, momentum grew through word of mouth. The ChatGPT moment amplified demand: enterprises imagined a personalized, internal ChatGPT that knows their data. Glean crossed 100 million ARR and tripled last year, helped by the belief that AI should be accessible to every employee. Jain emphasizes education inside organizations so workers become AI-first and adoption becomes practical, not theoretical. Strategically, Glean positions itself as a horizontal AI platform atop diverse models and data sources, rather than a single-model vendor. They partner with model providers and hyperscalers, and offer an agent-building layer that lets business users define multi-step workflows in natural language. Competition is welcomed: it accelerates R&D and expands the ecosystem. Jain's experience at Google and Rubric informs a focus on recruiting top engineers and maintaining fast, ambitious execution as the company scales.

a16z Podcast

a16z Podcast | From Data Warehouses to Data Lakes
Guests: Gaurav Dhillon, Scott Kupor
reSee.it Podcast Summary
Gaurav Dhillon, founder of SnapLogic, discusses the evolution of enterprise application integration from the late 90s to today. Initially, integration focused on connecting core applications like finance, with business processes dictated by major vendors like SAP and Oracle. Today, the landscape has shifted dramatically due to the web, self-service expectations, and the proliferation of SaaS applications. Companies now face a complex web of data types and sources, necessitating new integration approaches. Dhillon highlights the transition from data warehouses to data lakes, emphasizing the need for real-time data processing and predictive analytics. He notes that the future involves cloud-based data lakes and a hybrid architecture, enabling businesses to leverage vast amounts of data for informed decision-making. The roles within IT are also evolving, with CIOs becoming more business-oriented and collaborating closely with other departments.

Sourcery

Sequoia Leads $75M Series B Into Nominal | Alfred Lin Joins Board
Guests: Cameron McCord, Stephen Slattery
reSee.it Podcast Summary
Cameron and Steven join Molly O’Shea to discuss Nominal’s rapid Series B, a $75 million round led by Sequoia with Alfred Lin joining the board, and co-led by Lightspeed. The founders describe a tight, high-velocity fundraising process where diligence ran at breakneck speed, with Sequoia reportedly interviewing many customers to build conviction. They emphasize that the round signals strong growth momentum and validation of Nominal’s ambitious vision for continuous hardware testing, which seeks to unify development and deployment data into a single platform used across both DoD and commercial aerospace, energy, and manufacturing sectors. The conversation highlights Nominal’s dual-use stance, with the same Core and Connect products serving federal and civilian customers, while keeping the go-to-market distinct for each segment. The core product started as a data-review tool for hardware testing and has evolved into a three-workflow platform covering data management, analysis, and automated validation, all designed to be deployed quickly via first-class data integrations. Connect, launched recently, extends the platform to edge-heavy environments with a desktop-native UI and rich hardware drivers, enabling rapid value on production lines and test facilities. A key design principle is that every asset is a test asset, and data flows seamlessly from development to operations, providing a continuous feedback loop for engineers and managers alike. The episode paints a vivid industry backdrop: a hardware-centric, software-defined shift in aerospace and defense, onshoring supply chains, and a defense budget pent-up demand that favors faster testing and validation cycles. Investors’ appetite is linked to a belief that the DoD’s shift toward distributed, autonomous, and attritable systems will be underpinned by better testing tools. The founders recount customer wins with Shield AI, Antares, and Vatin Systems, illustrating Nominal’s expanding footprint from traditional flight-testing into maritime and nuclear domains, while maintaining a consistent platform that scales with growing teams. They discuss Delta Qual—the idea of qualifying only what changes when you introduce a modification—accelerating deployment timelines, and how Nominal aims to become the reference platform for hardware engineers, ultimately making Nominal the standard vessel for describing what hardware engineers do every day.

Invest Like The Best

The Future of AI Agents | Jesse Zhang Interview
Guests: Jesse Zhang
reSee.it Podcast Summary
The episode centers on Jesse Zhang’s journey building Decagon, an AI customer-service agent platform, and on the broader currents shaping entrepreneurial work in the AI era. Zhang discusses the core belief that a company’s future interface with users could become an AI agent—a “new UI” that sits at the front end of brands, capable of initiating conversations, performing actions, and carrying context across interactions. He reflects on what it means to compete in a hot, rapidly evolving space, arguing that large markets attract intense competition but that durable advantage comes from a strong, hard-to-replicate culture, disciplined problem solving, and a customer-centric discovery process. He shares how his own background in competitive environments and math contests informs his approach to building, validating, and scaling a startup: how to structure conversations with potential clients, how to quantify willingness to pay, and how to translate early signals into a defensible product direction. He recounts the origin story of Loki, a prior venture, and contrasts the emotional, high-pressure early days with the current stage, where sleep, pace, and prioritization are balanced against the thrill of rapid growth and a capable team. A key theme is the iterative method of customer discovery: starting with high-level exploration, forming hypotheses about use cases, testing with senior buyers, and pushing for measurable ROI to align incentives and unlock large deployments. He explains why customer service is a particularly attractive entry point for AI—because ROI is straightforward to quantify and the path to live deployment is well-defined through escalation to human support when needed. The conversation also delves into how Decagon structures its product around guardrails, brand voice, and enterprise data, and how the team navigates talent dynamics, investor relationships, and the strategic choice between fine-tuning models versus building a bespoke software layer on top of existing models. The overall arc paints a future in which brands operate through a unified, capable agent that knows their context and can execute across sales, support, and operations, while maintaining a disciplined, humane workplace culture.

The Koerner Office

Build Your Next Business With This Viral AI Tool
reSee.it Podcast Summary
The episode centers on Gum Loop, an automation platform described as AI-first, drag-and-drop tooling that lets non-engineers build powerful AI workflows. Max Broer explains how Gum Loop enables users to create multistep automations for tasks like lead enrichment, customer support analysis, and outbound outreach, effectively replacing large chunks of manual work with scalable “flows.” He positions Gum Loop as the next Zapier for the AI era, emphasizing that it expands what is possible with automation rather than just replacing existing tools. A core theme is the distinction between traditional automation (Zapier-style) and AI-powered workflows. Gum Loop’s strength lies in combining AI reasoning with programmable blocks to perform complex, data-rich tasks—such as researching a lead, drafting personalized emails, summarizing thousands of chat messages, and generating research reports—without requiring engineering resources. The co-founder notes the product’s philosophy of measured agent capabilities, focusing on reliable, auditable steps rather than fully autonomous agents. The conversation delves into practical use cases and pricing dynamics, highlighting a diverse customer base from large enterprises like Instacart to small businesses. Common patterns include lead scoring, content generation, CRM enrichment, and programmatic SEO. The show explores how Gum Loop is used to build agencies or “experts” who construct custom workflows for clients, and discusses the upcoming co-pilot feature intended to lower the learning curve and enable users to go from idea to running workflow in minutes. Towards the end, Max discusses the future roadmap and business strategy, including an emphasis on the interviewees’ belief that AI will catalyze productivity at scale. He mentions an upcoming marketplace for expert flows, privacy considerations around sharing credentials, and the potential for white-labeling Gum Loop. The dialogue closes with reflections on model selection for different tasks and the value of treating AI like a capable employee who operates within clearly defined steps.

Lenny's Podcast

Why LinkedIn is turning PMs into AI-powered "full stack builders” | Tomer Cohen (LinkedIn CPO)
Guests: Tomer Cohen, Michael Truell, Varun Mohan, Anton Osika
reSee.it Podcast Summary
The episode dives into LinkedIn’s ambitious experiment with AI-augmented product building, where the traditional product development lifecycle is being reimagined through a full stack builder model. Tomer Cohen, LinkedIn’s CPO, explains how the time constants of change now outpace organizational response, forcing a rethink of who builds what and how. Instead of a multi-team, handoff-driven process that expands research, design, and validation into a lengthy gauntlet, LinkedIn is pushing builders to own end-to-end experiences that blend human judgment with AI capabilities. The conversation emphasizes that the key traits for builders—vision, empathy, communication, creativity, and especially judgment—must be sharpened, while automation absorbs everything that can be quantified or standardized. The goal is not to replace talent but to enable skilled builders to move faster, adapt to shifting contexts, and operate with greater resilience by composing a human-AI product team that can pivot as needed. Cohen makes clear that this shift requires more than new tools; it demands cultural change, incentives, and a clear pathway for career progression as the organization flattens hierarchies into flexible pods of full stack builders who can ideate, prototype, test, and launch with velocity. The discussion details the three pillars of LinkedIn’s approach: a re-architected platform that AI can reason over, bespoke internal tools and agents built to work with their own stack, and a culture that rewards rapid experimentation and sharing of successful practices. A standout theme is how much effort has gone into data curation, context creation, and the design of governance and trust mechanisms to guard against misuse. The guests walk through concrete examples—a trust agent that flags vulnerabilities in a spec, a growth agent that critiques ideas, a research agent that leverages LinkedIn’s corpus to assess market insights, and an analyst agent that navigates the graph—to illustrate how a suite of purpose-built agents can augment human capabilities without sacrificing accountability. The interview also covers practical timelines, the internal pilot structure, staff incentives, and the balance between specialization and full-stack fluency, underscoring that the road to scale is iterative, expensive upfront, and demands persistent leadership and clear communication about progress and outcomes. The episode culminates in reflections on talent, management, and career pathways, including the Associate Product Builder program as a future-facing replacement for traditional APM tracks, the need for inclusive mentorship, and the imperative to celebrate wins to sustain momentum. Throughout, the speakers stress that change management—through visibility, early wins, cross-functional collaboration, and a culture of experimentation—is as crucial as the technology itself. They acknowledge the friction and challenges of converging tools, the risks of over-reliance on external solutions, and the reality that not everyone will want to become a full stack builder, making the shift as much about culture and incentives as about capabilities. The overall message is one of ambitious but patient transformation, with a clear eye toward continuous progress rather than a final state.

Generative Now

Arvind Jain: Why Now Is the Time to Solve Enterprise Search (Encore)
Guests: Arvind Jain
reSee.it Podcast Summary
Generative Now begins with a bold premise: a company's knowledge is powerful, but if employees can't find it, it's barely useful. Arvind Jain, founder and CEO of Glean, explains that the problem came into focus while scaling Rubric, a data-security startup. As Rubric grew beyond a thousand people, frustrations rose: information lived across Confluence, Jira, Google Drive, SharePoint, and emails, and nobody could quickly locate experts or documents. Jain's background in Google search inspired the idea of an enterprise search that connects disparate systems and understands context, not just keywords. The 2018 spark: transformer-based models hinted at semantic search improvements, setting the stage for what would become Glean. From day one, Glean fused retrieval with generation. The team built integrations to Confluence, Jira, Google Drive, SharePoint using published APIs, and created a secure, permission-aware search experience for privileged enterprise data. They used BERT as the initial model, retraining it on each enterprise corpus to tailor it to company terms and acronyms, while combining traditional ranking with semantic matching. The system operates as a hybrid stack: a retrieval layer fetches relevant bits of knowledge, then a foundation model reasons and generates responses. They're not a pure foundation-model company; they connect multiple models and let customers pick the best fit. Market fit evolved in two waves: first, about 30 tech-sector companies adopted Glean at scale, creating word-of-mouth; second, ChatGPT's rise reframed the value of enterprise knowledge as something to be embedded in internal data. Jain notes that employees now want an AI that already knows their company, and adoption grew as ROI and ease of use improved. Competition would not derail them; they see themselves as a horizontal AI platform on top of model providers, enabling agents and workflows across many apps. Features like operators and browser automation extend AI to tasks even without API access. He credits recruiting as the key early move and says one executive use case per quarter helps embed AI, fostering an AI-first culture across the organization.

Generative Now

Mike Krieger: Product Building Lessons from Instagram and Anthropic (Encore)
Guests: Mike Krieger
reSee.it Podcast Summary
From Instagram to Anthropic, Mike Krieger sketches a deliberate pivot that blends hands‑on product building with high‑caliber AI research. After a brief spell in semi‑retirement, he realized he missed leading large, multi‑team efforts and the thrill of turning ideas into shipped products. He sought a place with momentum where a zero‑to‑one mindset could flourish inside an established company. Anthropic offered a fit: a world‑class research engine paired with a product function still hungry to ship, and a culture aligned with his values. The decision centered on getting closer to AI while staying close to product reality, and it clicked when the company needed someone who could bridge research progress with practical product strategy. The result is a deliberate, portfolio‑driven approach. On the product side, the process is not the all‑at‑once invention mode of a startup but a cadence of model releases and feature work. Research unlocks new capabilities, but product ideas require close collaboration, fast feedback loops, and careful safety considerations. Anthropic uses a labs pair with early research and a mix of model‑dependent features, model‑adjacent improvements, and more traditional polish. Artifacts and projects illustrate surfaces that help people organize work with AI, while technologies like the model context protocol enable others to build on top of Cloud AI. The enterprise path adds governance, procurement, and enablement, shaping early adoption and the information flows that demonstrate value to buyers. Krieger reflects on three takeaways from the DeepSeek moment: first, exposure to Claude beyond the flagship model helps broaden awareness; second, the market overreactions are real but the core need for compute‑driven improvement persists; third, the geopolitical implications of AI have moved from fringe to front and center. Looking ahead, he envisions products that help people become their best selves, with guardrails and context that make AI useful rather than opaque. He emphasizes surfaces that connect to company data and knowledge, not just tokens, and notes the parallel with Instagram’s evolution—where the vibe and the user experience will increasingly differentiate models. The overarching aim is a productive partnership between people and AI across work and life.

20VC

Nabeel Hyatt, GP @ Spark Capital: To Win in AI, Investors Need to Change Their Approach | E1255
Guests: Nabeel Hyatt
reSee.it Podcast Summary
The industry today is run basically by principal Associates and Junior GPs. A principal is not actually waiting for an exit; they just want a promotion. We are in the industrialization of startups, Playbook land, where everybody's trying to churn out some piece of ridiculous arbitrage every week in order to get through the end of their incubator and raise their seed round. There is absolutely a belief that too much capital can mess up a company. There is a thing called founder market fit, and there's also frankly a thing called VC Market fit, and this market for AI is wildly different. To adapt, founders and investors must rethink the craft. The guest argues we are moving from puzzles to mysteries, and that this market for AI requires a different posture from the old puzzle-solving playbooks. The industry is 'an artisanal business'—a small, hands-on firm where you build a team capable of subjective bets. The shift means conversations with founders should move beyond playbooks; the questions become about unknowns, not knowable puzzles. Revenue metrics and traditional benchmarks can mislead in fast-moving AI markets. Three categories of AI startups—adaptation, evolution, and revolution—provide the lens Spark uses to evaluate opportunities. Adaptation copies incumbent products; Evolution reshapes workflows; Revolution creates a new platform. They largely avoid adaptation and prefer disruption that changes behavior or builds something fundamentally new. The tagline 'data exhaust is more important than models, that being the consumer insights layer' underscores why owning the interface with the customer matters for learning and iteration. They emphasize a full-stack approach and direct customer feedback loops. On founders and investors, the guest says 'The best Founders don't need the help of a VC,' and argues for engaged, obsessive partners who do tough work and have difficult conversations. He warns against conflict avoidance, stresses a balance of taste and execution speed, and says you should invest with conviction rather than packaging deals. He notes Europe vs. US dynamics, but still believes great founders can win anywhere; talent concentration and intensity still drive where you want to be present.

Cheeky Pint

Satya Nadella describes how lessons from Microsoft’s history apply to today’s boom
Guests: Satya Nadella
reSee.it Podcast Summary
Satya Nadella reflects on Microsoft’s journey from information management to a cloud and AI-driven era, emphasizing architecture over ad hoc tools. He discusses the need for an ensemble of models, robust data governance, and memory, entitlements, and action spaces to enable reliable AI in enterprises. Nadella highlights the importance of the Microsoft 365 graph, Copilot, and the dream of a company possessing its own foundation model to retain sovereignty over knowledge. He contrasts past internet pivots with today’s AI transition, stressing the urgency of scalable infrastructure and the governance required to deploy AI at enterprise scale. The conversation delves into practicalities of adoption: the Ignite conference’s role in diffusing AI inside enterprises, the challenge of data plumbing, and the push to build internal AI factories rather than mimic external AI only. Nadella asserts that value comes from organizing data into a single semantic layer that can be integrated with ERP and other systems, and from embedding governance to protect confidential information. He also explores how the next generation of tools—ranging from IDE-like experiences to agent-based workflows—will change how professionals work, not just what they work with. On strategy and culture, Nadella discusses the tension between bundling and modularity, the need to stay platform-agnostic yet deeply integrated, and lessons learned from Microsoft’s journey across Windows, Azure, and open ecosystems. He emphasizes a growth mindset over rigidity, translating founder-driven energy into scalable leadership, and the importance of hiring, memory, and decentralization to sustain momentum as the company grows. The chat shifts to industry foresight, including the evolution of commerce through agentic experiences, personalized catalogs, and conversational checkout. Nadella and Collison debate how many apps a future platform will need, the role of open ecosystems, and the sovereignty of corporate AI models. They touch on the potential for AI to redefine corporate structures, and the enduring appeal of tools like Excel as parables for user-friendly, programmable interfaces. Towards the close, Nadella recalls the 1990s internet pivot, the dot-com era, and the need for adaptable strategy as new paradigms emerge. The dialogue ends on human elements—founder mindsets, mentorship, and Hyderabad’s culture—underscoring that tech leadership blends engineering excellence with resilient, community-driven leadership.

Sourcery

Inside Theory Ventures: Tomasz Tunguz’s $688M Thesis on Go-To-Market Disruption
Guests: Tomasz Tunguz
reSee.it Podcast Summary
Tomas Tunguz outlines Theory Ventures’ data-driven, AI-focused thesis and how it informs an investment approach that targets early-stage companies leveraging technology discontinuities to gain market advantages. He reflects on the firm’s capital formation, noting a second fund of 450 million in 2024 following a 238 million breakout fund launched in 2023, and describes a disciplined team structure centered on an intelligence unit that processes text data to identify and map investment ideas. The guest explains why AI adoption varies by segment: consumer AI is accelerating with high engagement and large user bases, while enterprise AI faces tighter acceptable-outcome ranges and higher integration hurdles. He emphasizes techniques to manage AI system errors, such as cross-model judgments, ensemble methods, and chain-of-thought approaches, and discusses the maturation of inference costs, the looming dominance of smaller, more accurate models, and the strategic role of energy consumption and capital expenditure in hyperscale deployments. The conversation delves into the shift from a traditional software stack to a data-centric paradigm, where data infrastructure, governance, and the integration of AI into routine workflows determine competitive advantage. Tunguz also highlights fund-building decisions, the value of recruiting leaders with deep data architecture experience, and the way the firm operates with a small, highly collaborative team that encourages lab-like exploration and rapid iteration on research projects, ultimately tying these practices back to Theory’s goal of identifying “toil” and labor-market gaps that AI can fill through scalable software and AI-enabled platforms.

Generative Now

Rahul Roy-Chowdhury: AI as a Tool for Co-Creation at Grammarly
Guests: Rahul Roy-Chowdhury
reSee.it Podcast Summary
AI is evolving into a partner, not just a tool, as this conversation with Grammarly’s CEO Rahul Roy-Chowdhury shows. He traces Grammarly’s path from rule-based NLP to machine learning and now large language models that enable co-creation with users. Roy-Chowdhury, a former Google executive, explains that Grammarly’s mission to improve lives by improving communication has guided the company long before Gen AI, and AI now provides a powerful tailwind to move beyond grammar to conciseness, tone, and clarity across emails, documents, and messages. The result is an experience users genuinely love, amplified by AI’s capabilities while staying true to the product’s core goals. Roy-Chowdhury frames AI’s impact as a gradual platform shift, likely more consequential than mobile or cloud, and argues adoption will unfold across workflows over years. The focus is on usefulness: helping users do their work better and faster, not replacing human thinking. Grammarly’s approach blends established NLP foundations with data-driven tuning from tens of millions of users, and it uses a mix of open-source and closed models, including GPT-based systems. A concrete example is Knowledge Share, which surfaces definitions and related pages from tools like Confluence when you hover a term in a document. Looking ahead, Roy-Chowdhury envisions specialized models and multi-model architectures that act as a horizontal layer across tools, delivering a consistent experience and context across apps. He describes a future of co-creation rather than outsourcing writing, where the user maintains agency while the AI proposes, critiques, and refines. He also imagines multimodal and multi-language support, with Grammarly expanding beyond text; scheduling and other agent-like capabilities are on the horizon if they serve users’ needs. Open-source contributions and safety-focused tools, such as detectors for sensitive output, anchor Grammarly’s responsible path in this evolving AI landscape.

a16z Podcast

The Era of AI Agents | Aaron Levie on The a16z Show
Guests: Aaron Levie
reSee.it Podcast Summary
The episode centers on how AI agents will reshape software, work, and enterprises, arguing that diffusion of AI capability will unfold more slowly than some expect because the opportunity scales with the number of agents relative to people. The speakers discuss shifting the software abstraction layer from human-centric interfaces to agent-centric interfaces, with tools like APIs, CLIs, and other interfaces enabling agents to read, write, and act across multiple systems. They describe a future where agents not only access data but also code their way through tasks by invoking tools and APIs, leading to a “Claw Cloud Co-work” dynamic and a broader rethinking of how software is built for an agent-dominated workflow. The conversation emphasizes that the real bottleneck is not the availability of data but the ability to structure incentives, permissions, and interfaces so agents can operate securely and effectively within complex organizations. A recurring theme is the necessity of new governance and controls to manage the interaction between agents and enterprise systems, including how to handle privacy, access, and potential misuses when agents operate with broad context and autonomy. The speakers compare the evolution of agent-enabled software to historical shifts in technology adoption, noting that the end state will likely blend standard layers and governance with increasingly capable instrumental agents. They highlight practical examples such as Box CLI enabling natural-language-driven operations, and discuss the tension between rapid experimentation and enterprise risk, including the need for standards and more robust data-layer APIs that agents can rely on. The dialogue also touches on economic implications, debating how to price and monetize agent use—from usage-based models to broader organizational spend—while considering the impact on legacy systems like SAP and ERP platforms. Finally, the episode reflects on a bifurcated landscape where startups move faster with unconstrained experiments, while large enterprises move more cautiously to secure their data and architecture. The overall arc is a forward-looking discussion about how agents will demand a fundamentally evolvable software stack and governance framework to unlock scalable, safe, and economical AI-enabled workflows across industries.

Relentless

#28 - Automating Production Planning | Fil Aronshtein, CEO Dirac
Guests: Fil Aronshtein
reSee.it Podcast Summary
Fil Aronshtein discusses the rapid pivot from a two-office setup to a focused New York operation, emphasizing that two strong in-person teams created more friction than collaboration, which led to a decisive consolidation and a move into the Empire State Building. He describes Build OS v1 and the aggressive push to scale, noting that the product rebuild from October to March yielded an enterprise-grade, ITAR-compliant, GovCloud-integrable platform that is now seeing a flood of pipeline activity with a lean sales and support model. A core theme is context-aware production planning, where DRA aims to unify design, production, and sustainment by linking every piece of manufacturing information. Aronshtein explains the shift from a point solution to a platform that can understand interdependencies across line layouts, DFMs, and maintenance instructions, enabling automatic propagation of changes across work instructions and layouts. He uses the three-blind-men-and-an-elephant metaphor to illustrate how different roles in manufacturing see only pieces of a larger system, which DRA intends to address through integrated, context-rich tooling. The company emphasizes user-centric adoption: manufacturing engineers instantly grasp automated work instructions, while management historically resists because it doesn’t see immediate ROI. To bridge this, Build OS includes Operator Plus, giving operators feedback and time studies, and a leadership-facing “Commander Console” concept to surface KPI-driven insights. Aronshtein highlights the gap between legacy, paper-based instructions and modern, animated, model-based ones, stressing that easier, more engaging tools reduce errors, shorten onboarding, and improve competitiveness. Strategic growth levers include deepening enterprise partnerships, integrating with Tier 1-3 suppliers, and targeting verticals such as aerospace, automotive, and, notably, shipbuilding. He notes data-center manufacturing as a high-growth area due to standardization needs across hundreds of facilities and speaks to the broader reshoring trend, supplier diversification, and the need for scalable, standardized work instructions. The conversation also touches company culture, leadership evolution, and the personal toll and discipline required to transform into a serious, mission-driven organization that can deliver on a grand vision of context-rich production planning and a true platform for manufacturing."], topics otherTopics booksMentioned

a16z Podcast

Where does consumer AI stand at the end of 2025?
Guests: Anish Acharya, Olivia Moore, Justine Moore, Bryan Kim
reSee.it Podcast Summary
This year marked a turning point as the biggest model providers, OpenAI and Google, pushed hard into consumer AI with new models, interfaces, and standalone products. The conversation underscored a rapid shift toward winner-take-some dynamics in a space where a single dominant product still commands a large share of usage, and multi-product adoption remains shallow among average users. Panelists highlighted that the core entry points for many users still revolve around familiar brands, with a significant gap between top players and smaller challengers in terms of scale and engagement, even as new viral tools spike attention and accelerate experimentation. A key theme was multimodal capability and product design as drivers of adoption. They discussed how recent launches moved beyond simple text prompts to integrated experiences where image, video, search, and even real-time data interplay within single ecosystems. The moment belongs to tools that can connect context, memory, and workflows—whether it’s weaving search into creative tasks, enabling persistent agent-like capabilities, or blending packaging into apps that feel native to everyday work and life. Across this landscape, companies are racing to offer “prosumers” and professionals efficient, interceptive experiences that feel intelligent and helpful without overwhelming the user with complexity. The dialogue also touched on the role of platforms versus startups in shaping next-year trajectories. While large labs provide breadth and distribution, startups are leaning into specialized interfaces, tailored templates, and app-generation patterns that unlock rapid experimentation. Topics included the balance between raw model capability and opinionated product design, the economics of usage-based tiers, and the strategic importance of apps stores and cross-tool orchestration for both consumer and enterprise use. The panel closed with pragmatic recommendations for instant takeaways: explore multimodal tools that automate design and content workflows, experiment with startup-grade creative tools, and watch how enterprise integrations may bleed into consumer habits as workplaces begin to normalize AI-assisted workstreams. topics otherTopics booksMentioned
View Full Interactive Feed