reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
This is the alchemy of intelligence. This newly manufactured intelligence will spawn a new chapter of unprecedented productivity and development, and that will serve to improve human quality of life. The IDC estimates that AI will generate $20 trillion in economic impact by 2030. So even if you can earn a small slice of that, that hundreds of billions of dollars of investment will earn an amazing return. For each dollar invested into business-related AI, it's expected to generate $4.60. As my friend Jensen would say, the more you buy, the more you save. Or in this case, the more you buy, the more you make. And we can grow the pie together and usher in a new era of AI-driven…

Video Saved From X

reSee.it Video Transcript AI Summary
I asked about AI, and he mentioned that the public only sees a fraction of its capabilities. Most of the powerful technology is kept under wraps, which is concerning. For instance, BlackRock uses an AI called Aladdin for forecasting, developed over several years. This model outperforms all other software and human predictions.

Video Saved From X

reSee.it Video Transcript AI Summary
"Aladdin now controls $21 trillion of our global economy." "Aladdin is the brainchild of Larry Fink, the founder of BlackRock." "The genie is out of the bottle, and Aladdin has already reached a tipping point where one robot controls more wealth than any person or country." "On Aladdin's 20th birthday, Larry launched a top-secret project at BlackRock, codenamed Monarch, which led to the firing of its fund managers and the replacement of their funds with Aladdin's funds." "Joe Biden has appointed BlackRock executive Brian Deese as head of the National Economic Council, which basically means the oversight of Aladdin and BlackRock is now the responsibility of BlackRock."

Video Saved From X

reSee.it Video Transcript AI Summary
I'm part of the Norwegian sovereign wealth fund. We are a $2 trillion sovereign wealth fund, 70% equities and 30% fixed income and bonds, and we are only 700 employees worldwide. We have a single owner, the Ministry of Finance, which represents the Norwegian people, and I'm part of the team here in North America that manages the bulk of the equity assets.

Video Saved From X

reSee.it Video Transcript AI Summary
I spoke to the CEO of a major company that everyone will know of and lots of people use. He said to me in DMs that they used to have just over 7,000 employees. By last year, they were down to, I think, 5,000. Right now, they have 3,600. And by the end of summer, because of AI agents, they'll be down to 3,000. "So it's happening already?" Yes. He's halved his workforce because AI agents can now handle 80% of the customer service inquiries and other things. So it's happening already.

Video Saved From X

reSee.it Video Transcript AI Summary
In 2014, the speaker's company hired Manuela Veloso from Carnegie Mellon to run machine learning. They have a 200-person AI research group and spend approximately $2 billion on AI, with about 600 end use cases. This number of use cases is expected to double or triple next year. The company moved AI and data out of the technology department because it was deemed too important. The head of AI and data now reports to the speaker and the president. The company focuses on accelerating AI development and tests extensively, collaborating with many people. AI will change everything.

The Knowledge Project

Nicolai Tangen on AI, Ambition, and the Speed of Success
Guests: Nicolai Tangen
reSee.it Podcast Summary
Nicolai Tangen discusses ambition as a driver of achievement and frames AI as a central lever for national and corporate advancement. He argues that open economies with free movement and free thought tend to sustain periods of high growth, and he contends that embracing AI broadly across society would amplify productivity, a view he ties to organizational outcomes where digital tools enable more with the same headcount. He contrasts the high-energy, highly ambitious American ecosystem with European norms, noting how mindset shapes outcomes, and he emphasizes the value of speed, urgency, and decisive action in a rapidly changing world. A recurring theme is the need to manage risk through disciplined, data-informed decision making while remaining open to dissenting views. In investment and governance, he highlights the importance of pattern recognition tempered by rigorous analysis, the benefit of diverse inputs, and the necessity of a long-run perspective—even for complex institutions like Norway’s sovereign wealth fund, which he describes as anchored by transparency, political consensus, and a conservative spending rule. The interview's arc moves from personal experience—his shift from AKO Capital to leading a national wealth fund—to practical methods for changing organizations: build a unified leadership group, prioritize a few initiatives, overcommunicate, and maintain a steady cadence of feedback. He illustrates the tension between risk-taking and risk management with anecdotes from his own career and from investing legends, advocating a stance that blends contrarian bets with disciplined evaluation. Throughout, he stresses the social dimension of technology: the importance of free speech, open trade, and collaboration as prerequisites for innovation.
He closes by reflecting on the pace of change, the potential for AI to reshape education and business, and the ongoing need to keep learning, stay curious, and foster environments where dissenting ideas can be heard without personal attribution or fear of reprisal.

a16z Podcast

Box CEO: Why Big Companies Are Falling Behind on AI | a16z
Guests: Steven Sinofsky, Aaron Levie, Martin Casado
reSee.it Podcast Summary
The episode analyzes how large organizations struggle to adopt AI beyond creating centralized projects that fail to align with day-to-day operations. The speakers argue that simply adding AI without fixing governance, data access, and workflows tends to produce more complexity, higher downtime, and security risks. They emphasize that increasing code volume does not reduce the engineering burden; in fact, it makes upgrades and maintenance harder, particularly when legacy systems and fragmented data coexist with new agents. A recurring theme is that AI alone cannot fix integration; enterprises with thousands of employees or long-standing processes require fundamental changes to data governance, access controls, and operating models before agents can meaningfully participate in production workflows. The conversation then shifts to the tension between Silicon Valley’s rapid experimentation and the slower, risk-averse reality of large enterprises, explaining why diffusion takes years and often meets skepticism after a few early AI failures. The panelists contrast the engineer’s toolkit—where code can be debugged quickly and tools are highly technical—with the less technical end users in many organizations, whose workflows, data fragmentation, and legacy systems demand different architectures. They discuss the idea of “agents” as either information providers or action-takers, the role of security and identity in agent-based systems, and the necessity of treating agents as legitimate users with carefully scoped permissions. The discussion also covers the implications of “headless” software, the strategic shifts for product companies to rearchitect around agent-centric models, and the potential for platforms like Salesforce to redefine how software operates behind the scenes. 
Throughout, the speakers stress the ongoing need for change management, collaboration with system integrators, and a realistic view of productivity gains, noting that gains may be 2–3x in development pipelines and less dramatic in broader knowledge work. They conclude with optimism about AI expanding jobs by enabling more sophisticated analysis and decision-making across industries, while acknowledging the complexity and time required for enterprises to adapt.
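The panel's point about treating agents as legitimate users with carefully scoped permissions can be made concrete with a small sketch. This is an illustrative toy, not any product's actual API: the `AgentIdentity` class, scope strings, and `authorize` function are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent modeled as a first-class user with explicit scopes."""
    name: str
    scopes: set = field(default_factory=set)  # e.g. {"read:tickets"}

def authorize(agent: AgentIdentity, action: str) -> bool:
    # Deny by default: an agent may only take actions its identity carries.
    return action in agent.scopes

# An "information provider" agent versus an "action-taker" agent:
reader = AgentIdentity("support-summarizer", {"read:tickets"})
writer = AgentIdentity("refund-bot", {"read:tickets", "write:refunds"})

assert authorize(reader, "read:tickets")
assert not authorize(reader, "write:refunds")  # scoped out by default
assert authorize(writer, "write:refunds")
```

The design choice mirrors the episode's argument: instead of giving agents ambient access to enterprise systems, each agent gets its own identity and a minimal scope set, so governance and audit work the same way they do for human users.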

Sourcery

Embracing American Dynamism to Upgrade Manufacturing Production With Oden CEO Willem Sundblad
Guests: Willem Sundblad
reSee.it Podcast Summary
The episode centers on how Oden Technologies is advancing American manufacturing through data-driven process optimization and real-time visibility. Willem Sundblad describes a multi-pillar approach—process AI, data enrichment, and knowledge—designed to raise quality, reduce downtime, and increase supplier and customer satisfaction. He explains that manufacturing today faces complex, variable production environments where operators must adapt to changing product mixes, materials, and equipment. Oden’s view is that manufacturing should be easier for operators, turning their expertise into reliable, data-supported decisions rather than relying on scattered intuition. This philosophy underpins Process AI, a tool that not only predicts product quality in real time but also provides actionable recommendations, moving performance from a broad scatter of outcomes toward tighter, more profitable operating envelopes. Sundblad cites concrete improvements, such as a plant that lifted average performance by 10% above a target after adopting Process AI, and he emphasizes ease of use and interpretable models to build trust with operators and managers alike. The conversation also delves into the practical realities of selling industrial technology to legacy manufacturers. Sundblad highlights the importance of concrete value propositions, short to medium sales cycles, and risk-sharing contracts that reduce buyers’ perceived risk. He discusses the scale and composition of Oden’s investor base, the strategic choice to focus on large, enterprise customers and a few high-value industries (notably paper, plastics processing, and metals), and the geographic strategy centered on North America with gradual expansion to Europe and Latin America.
The talk underscores how labor constraints and the broader narrative of American dynamism are reshaping investment and adoption timelines, with a strong emphasis on how data quality, standardization across disparate systems, and domain knowledge integration will unlock sustained growth and talent attraction in manufacturing.

All In Podcast

OpenAI's GPT-5 Flop, AI's Unlimited Market, China's Big Advantage, Rise in Socialism, Housing Crisis
reSee.it Podcast Summary
The episode features the All-In crew—Chamath Palihapitiya, Jason Calacanis, David Sacks, and David Friedberg—joined by Gavin Baker, Ben Shapiro, and Phil Deutsch for a wide‑ranging discussion that blends business, technology, energy, and politics. The hosts open with playful self‑deprecation and plug the All‑In Summit lineup, teasing flagship figures from pharma, e‑commerce, ride‑hailing, semiconductors, software, and investing, while hinting at more announcements to come and promoting summit tickets and scholarships. GPT‑5 dominates the AI thread. The panel notes that GPT‑5's launch, announced by Sam Altman, came alongside two open‑weight models and met a mixed reception: some benchmarks were not decisively superior to prior generations, and the presentation was messy. Gavin Baker explains that while Grok 4 made a big leap, GPT‑5’s lead isn’t clear across all metrics, marking OpenAI’s first instance of not clearly beating a rival on every measure. The group discusses multimodality and a new level of model routing inside ChatGPT—that the system can self‑select which underlying models and paths to use, which could improve user experience by eliminating manual model selection. Friedberg adds that the routing component actually had issues in early hours after release, but he emphasizes the UX upgrade’s potential. The talk broadens to the AI investment milieu: Ben Shapiro notes the business case for AI tools in media and content production, while Phil Deutsch mentions AI’s role in energy and climate modeling and cites a climate model from Nvidia. The panel also touches on the AI‑driven acceleration of energy efficiency and ad spending, with ROI metrics improving as AI is adopted. Energy, climate, and the macro‑tech ecosystem come to the fore. Deutsch highlights a broader shift toward energy demand created by hyperscalers, noting an apparent need for large‑scale, clean power to support data centers.
The group cites Nvidia’s climate experiments and Anthropic’s stated goal of tens of gigawatts of AI‑related power demand in the U.S., arguing that the energy transition is being reshaped by AI workloads. The discussion moves to nuclear energy and policy, with arguments that subsidies for wind and solar helped deploy renewables but discouraged nuclear innovation; the need for regulatory streamlining for Gen 4 reactors is emphasized, alongside the reality that capital is following the private sector’s demand signals. The panel frames the energy issue as a case where the private market can outperform top‑down subsidies if policy remains stable and capital is directed toward scalable, low‑emission power. Geopolitics and economics ensue. The crew debates whether there is an existential AI race with China, touching on TikTok, Luckin Coffee, BYD, and the broader question of rule of law versus central planning. Centralization versus market‑driven innovation is questioned, with Ben arguing that long‑term success requires light‑touch governance and robust rule of law. The discussion expands to tariffs and industrial policy: revenue signals from tariffs rise, inflation risk remains, and the group weighs reciprocity, supply chain resilience, and the risk of policy oscillation. They acknowledge the complexity of predicting outcomes a year out and debate whether a more aggressive tariff stance can be sustained without stifling growth. Other topics include smuggling of Nvidia GPUs to China, Apple’s massive stock buybacks versus slower product innovation, and a flurry of lighter moments—pop culture riffs, summer reading lists, and personal recommendations. The show closes with calls to attend the All‑In Summit, invites for potential guests, and a nod to the ongoing, provocative conversation that defines the podcast.

20VC

Nicolai Tangen: Managing the Largest Sovereign Wealth Fund in the World | E1122
Guests: Nicolai Tangen
reSee.it Podcast Summary
Organizations that make fast decisions are better. I have a countdown clock in my office: a five-year job with 580 days left. When someone says, 'Over the next three months we can do it,' I reply, 'I've got 580 days left, we need to hurry up.' I was a loner child who loved books; I found finance early, even selling bottles to earn money. I joined a hedge fund at 32–35 and later studied art history. Joining the sovereign wealth fund wasn't easy; I wanted to combine asset management with national impact. The fund's ticker is 'the most watched number in the country,' and we would be less 'short term in our thinking' if it were removed. I advocate long horizons, quality companies, and market-share gainers. I discussed productivity with Sam Altman: '20%' was his target. Three goals shaped the transformation: 'performance focused,' 'the people,' and 'communication.' Leaders must admit mistakes to build safety for dissent. I run a weekly podcast to show transparency, which aids recruitment. Climate and geopolitics are major risks, but I remain optimistic about human-centered AI and empowering people, valuing unconditional love in parenting.

20VC

Klarna CEO: SaaS is Dead: Why Systems of Record Will Die in an Agentic World
reSee.it Podcast Summary
Klarna’s CEO Sebastian Siemiatkowski discusses the rapid acceleration of AI and its implications for enterprise software and the workforce, emphasizing that software creation costs are falling while data switching costs are about to collapse due to agent-enabled migrations across platforms. He argues that the future operating system of large enterprises will be AI-native, integrating AI into deterministic and probabilistic code to unify disparate data silos, which could threaten traditional systems of record and the dominance of incumbents in software and ERP. The conversation covers Klarna’s strategy to shift from a primarily payment-centric business to a high-engagement, AI-driven banking provider that leverages a broad data set from its own rails to offer personalized financial advice and services. Siemiatkowski highlights an “enterprise compression” dynamic where valuable context is extracted from a unified data model, enabling AI to both accelerate internal efficiencies and enhance customer experiences. He also reflects on leadership decisions, noting how AI-driven efficiency allowed him to shrink the workforce by about half without external funding, while re-structuring roles to preserve critical, relationship-based functions such as merchant and partner engagements. The interview delves into customer service evolution, revealing Klarna’s in-house approach and an innovative “Uber-like” model for recruiting highly engaged customers as service agents, arguing that human connection will remain essential for VIP experiences. The discussion extends to public company dynamics, long-term strategy, and risk management, including experiences with Sequoia, the US expansion, and how data depth and trusted branding support Klarna’s aim to become a global digital financial assistant. 
The host presses on topics of market timing, the US as a core battleground, and the broader arc of technology-driven disruption, while the guest balances realism about near-term turbulence with optimism about a potential AI-enabled “golden age” where productivity soars and consumer experiences improve through better financial tools and services.

Invest Like The Best

The Playbook on Buying and Running Companies Forever
reSee.it Podcast Summary
The episode centers on Luca, co-founder of Bending Spoons, explaining how his company operates as a permanent owner of a portfolio of digital businesses rather than a traditional private equity sponsor or a standalone startup. He emphasizes a model that blends private equity discipline with deep, hands-on technology execution, where acquisitions are made off the balance sheet to be owned and run forever. The dialogue delves into the vision for creating an institution, akin to Berkshire Hathaway, with a focus on scale, excellence, and especially talent density—identifying and cultivating the world’s best inexperienced talent and turning Bending Spoons into the ultimate testing ground for ambitious professionals. The interview traces the firm’s origins from Evertale’s failure and the bootstrap phase of building revenue through small software contracts, to a disciplined path of acquiring and integrating companies. It highlights the evolution from asset deals to structured, department-level transformations of larger businesses, with a clear emphasis on being able to deeply rethink a business—rewriting software, rebuilding cloud infrastructure, and redesigning monetization—across multiple units under one umbrella. The conversation also outlines how the team leverages cross-business resources, R&D, and marketing to optimize efficiency and outcomes, arguing that the work is not about chasing sheer scale alone but about creating a platform where capital, talent, and technology compound. A recurring theme is the importance of rigorous decision-making grounded in data inputs, Monte Carlo simulations, and disciplined negotiation, paired with a preference for permanent capital and a cautious approach to dilution. 
Evernote’s transformation serves as a milestone case study: a complex, gradual overhaul of product, engineering, pricing, and retention that yielded higher engagement and stronger unit economics, supported by a sharpened focus on customer needs and a high-talent environment—albeit with ongoing questions about pricing strategy and product scope. The episode closes with reflections on AI’s role as an accelerator for a diversified, adaptable platform business, and a window into the cadence of leadership, culture, and long-term thinking that governs Bending Spoons’ unique playbook.
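The Monte Carlo-style deal evaluation the episode attributes to Bending Spoons can be sketched in a few lines. Everything here is an invented placeholder, not their actual model: the lognormal revenue distribution, the margin and holding-period ranges, and the $10M price are assumptions chosen only to show the mechanic of simulating many outcomes and reading off downside risk.

```python
import random

def simulate_deal(price: float, n: int = 10_000, seed: int = 0) -> dict:
    """Simulate lifetime profit of an acquisition under uncertain inputs."""
    rng = random.Random(seed)  # seeded for reproducible runs
    outcomes = []
    for _ in range(n):
        revenue = rng.lognormvariate(mu=2.0, sigma=0.5)  # annual revenue, $M (assumed)
        margin = rng.uniform(0.1, 0.4)                   # post-transformation margin (assumed)
        years = rng.randint(5, 15)                       # holding horizon, permanent-owner style
        outcomes.append(revenue * margin * years - price)
    outcomes.sort()
    return {
        "expected_profit": sum(outcomes) / n,
        "p05": outcomes[int(0.05 * n)],              # 5th-percentile downside scenario
        "p_loss": sum(o < 0 for o in outcomes) / n,  # probability the deal loses money
    }

result = simulate_deal(price=10.0)
```

The output feeds a disciplined negotiation: rather than a single point forecast, the buyer sees the whole distribution, and the maximum price follows from the downside percentiles rather than the mean alone.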

All In Podcast

Software Stocks Implode, Claude's Hit List, State of the Union Reactions, Trump's Tariff Pivot
reSee.it Podcast Summary
The episode opens with a brisk, ritualistic dive into the latest anxieties and provocations surrounding technology and markets, weaving together rapid-fire market signals, AI narratives, and what-if storytelling. The hosts dissect a sequence of AI-driven corporate moves—new tools, code security features, and COBOL-era dependencies—to explain how investors and executives count risk differently when cash-flow durability is uncertain. They argue that the market has shifted from a “when” to an “if” framework, where investors demand a much larger margin of safety and lower valuations as they price the possibility that cash-generating businesses could degrade or disappear. This reframing echoes through the discussion of compensation optics in tech, talent markets, and the behavioral shifts shaping how companies recruit and reward employees in an AI-augmented world. The conversation then pivots to a speculative Substack piece about AI-driven disruption, the viral dynamics of such narratives, and the appropriate balance between science fiction and real analytics, highlighting how uncertainty and narratives influence risk markets more than precise forecasts. A substantial portion of the episode drills into the practical implications of AI-enabled productivity, with examples from hands-on experimentation inside the hosts’ own firm. They describe deploying multiple agent-powered workflows to automate sales, outreach, and internal operations, noting dramatic efficiency gains while debating whether this productivity translates into net job destruction or a reallocation of labor toward higher-value tasks. The dialogue extends to broader macro questions: how AI might reshape cost structures, the potential for a world where knowledge workers become maestros of agents, and whether there is an upper bound to consumption growth given unprecedented productivity. 
Against this backdrop, they address policy and political themes—from tariffs and regulatory balance to the tempo of energy and data-center expansion—framing this as a test of governance as much as technology. The episode ends with reflections on how to balance innovation with societal impact, underscoring the need for pragmatic collaboration across factions to navigate a future where technology both creates opportunity and intensifies debate about value, power, and accountability.

All In Podcast

Debt Spiral or NEW Golden Age? Super Bowl Insider Trading, Booming Token Budgets, Ferrari's New EV
reSee.it Podcast Summary
The episode centers on a rapid evolution in AI as a driver of work, value creation, and enterprise strategy. The hosts discuss a Harvard Business Review study showing that AI tools increase throughput and scope at work, raising productivity while also elevating stress and burnout. The conversation emphasizes a shift from task-based to purpose-based work, with early adopters of AI—“AI natives”—likely to demonstrate outsized value to employers, cutting timelines from days to hours and turning AI-assisted tasks into high-value outcomes. They explore how bottom-up adoption of consumerized AI within organizations can outpace traditional top-down transformation efforts, potentially accelerating enterprise-wide AI deployment through replicants, agents, and orchestration platforms. The group also probes the practical constraints of using AI in business, including data security and confidentiality, the potential need for on-prem solutions versus public-cloud usage, and the economic trade-offs of private provisioned networks as AI-driven efficiency pressures rise. Across these points, the discussion contends that the current wave is less about replacing knowledge workers and more about augmenting them, and it examines how token budgets, cost per task, and the productivity delta will shape compensation, hiring, and organizational design in the near term. The conversation then broadens to prediction markets and real-world use at the Super Bowl, debating insider information, regulation, and societal impact as such platforms scale, while balancing the public-interest value of faster truth with the risk of manipulation. The hosts pivot to macroeconomics, evaluating the Congressional Budget Office’s debt trajectory, debt-to-GDP concerns, and the potential consequences of higher interest costs and entitlements funding. 
They underscore the possibility of a “golden age” scenario driven by AI-related capital expenditure, innovation, and a booming tech economy, while acknowledging the structural risks of rising deficits if growth does not accelerate. The episode closes with a digest of consumer tech and automotive trends, including Ferrari’s forthcoming all-electric hypercar and broader shifts in mobility and autonomy, which sit against a backdrop of a larger productivity boom that could reshape labor markets and consumer behavior for years to come.
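The "token budget" economics the hosts discuss reduce to simple arithmetic: cost per task from token counts and per-million-token prices, compared against the cost of doing the task by hand. The token counts and the $3/$15 per-million prices below are invented placeholders for illustration, not quoted rates from any provider.

```python
def cost_per_task(input_tokens: int, output_tokens: int,
                  in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one AI-assisted task at given per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

def productivity_delta(human_cost: float, task_cost: float, tasks: int) -> float:
    """Savings when AI handles `tasks` units that would cost `human_cost` each."""
    return tasks * (human_cost - task_cost)

# e.g. a summarization task: 8k tokens in, 1k out, at assumed $3/$15 per million
task = cost_per_task(8_000, 1_000, 3.0, 15.0)  # -> 0.039 dollars per task
```

This is the delta the episode argues will shape compensation and hiring: when a task costs cents in tokens versus dollars in labor, the budget question shifts from headcount to how many tokens each workflow is allowed to consume.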

Cheeky Pint

Bret Taylor of Sierra on AI agents, outcome-based pricing, and the OpenAI board
Guests: Bret Taylor
reSee.it Podcast Summary
Bret Taylor sits at the intersection of software engineering craft and AI-enabled business transformation. The conversation navigates how agentic AI is reshaping what it means to build software, operate at scale, and manage a company’s strategic priorities. Taylor argues that the real product shift isn’t just the emergence of powerful models but how teams design, govern, and harness them to run end-to-end processes. He emphasizes the shift from code as the primary artifact to harnesses and documentation as durable, collaborative outputs that guide autonomous agents. Throughout, the discussion grounds itself in Sierra’s real-world deployments: AI agents powering customer support for healthcare providers, payers, and lenders, and the move to unify digital and telephone channels into a single, agent-driven customer experience. The episode also delves into organizational implications, such as the rise of high-agency, problem-focused contributors—hybrid product engineers who understand customer needs and can leverage AI to implement end-to-end processes. Taylor frames AI adoption as a multi-year, multi-domain transition where the value lies in designing processes, governance, and guardrails that allow agents to operate safely and effectively across departments like sales, finance, and legal. He draws contrasts between coding-centric workflows, where memory and tests live in the codebase, and the messy, real-world knowledge encoded in enterprise systems, advocating for structures that treat knowledge, context, and memory as first-class assets. The interview also touches on business models, arguing for outcomes-based pricing to align incentives with client value, and discusses macro questions about where AI productivity will land across industries, pointing to software and finance as the most tractable early beneficiaries and acknowledging broader uncertainty in the economy. 
Overall, the episode presents a pragmatic, product-centric view of AI adoption: not a wholesale replacement of humans, but a reimagining of work that leverages agents to drive outcomes, grounded in concrete customer use cases and evolving enterprise platforms.

Sourcery

Inside Klarna's IPO CEO Sebastian Siemiatkowski
Guests: Sebastian Siemiatkowski
reSee.it Podcast Summary
Sebastian Siemiatkowski discusses Klarna’s ambitious trajectory from a disrupted fintech to a potential trillion-dollar retail bank, framing the company’s growth around a large, disruption-ready US and European credit market. He traces Klarna’s evolution from a controversial buy now, pay later player to a broader financial services platform guided by profitability and efficiency. A central thread is how the leadership embraced AI as a catalyst for transformation: rapid experimentation, empowering employees to prototype with AI, and then scaling what delivers measurable value. He recounts the early OpenAI collaboration as a turning point that catalyzed a cultural shift toward experimentation, transparency, and data-driven decision making. The conversation delves into concrete implementations, such as using AI to extract deeper insights from employee feedback through in-house interviewing tools, and to streamline information flows by standardizing data across disparate systems. This standardization is described as foundational for both human and AI productivity, enabling faster decision making and more consistent customer experiences. The interview highlights Klarna’s focus on efficiency over headcount growth, explaining how the company shifted from burning substantial capital to achieving profitability by combining AI-enabled optimization with disciplined cost management. Questions about the business model emphasize customer-centric innovation: Klarna’s aim to replace friction-filled, high-fee financing with a cheaper, simpler, digital financial assistant that makes switching easy and everyday spending less burdensome. The discussion also touches on the implications of AI for jobs, with a frank acknowledgment of potential disruptions in white-collar roles and a call for policy-minded solutions to support workers during the transition. 
Throughout, the host and guest explore how branding, aesthetics, and creative partnerships have shaped Klarna’s image as an approachable, playful brand in contrast to traditional banking, including multi-faceted campaigns and celebrity collaborations that humanize a financial product and broaden its appeal. The overall tone underscores a forward-looking, customer-obsessed approach, grounded in rapid iteration, rigorous data use, and a willingness to reimagine core financial products through AI-driven design and experimentation.

Possible Podcast

James Manyika on global AI and inclusion
Guests: James Manyika
reSee.it Podcast Summary
AI is shaping opportunity and risk across continents, and a handful of voices map that path from the UN to the factory floor. James Manyika describes a career that began with an undergraduate AI paper in 1992, a robotics PhD at Oxford, work at JPL, early ties to DeepMind, and now a leadership role at Google. He co-chairs the UN High-Level Advisory Body on AI, a 39-member body spanning 33 countries and diverse sectors, focused on governance, norms, and collaboration. The Global South tends to view AI as transformative but voices concern about participation, capacity, and broadband access, while the UN’s power depends on member states’ support, making progress a collective effort. Manyika emphasizes two pillars for inclusion: access to the ingredients of AI—compute, models, and relevant data—and the basic infrastructure that enables usage, such as reliable broadband and electricity. Open-source AI is discussed as a means to broaden participation, but he notes ongoing tensions around resource concentration. He also highlights linguistic diversity and the need for data that reflect local contexts, arguing that without accessible languages and culturally attuned data, participation remains limited. Beyond governance, the conversation turns to tangible AI benefits and deployments. NotebookLM, built on Gemini Pro, uses long-context memory and multimodal capabilities to ground a notebook in personal materials, allowing grounded dialogue with one's own papers. He cites climate and science use cases: five-day flood alerts in Bangladesh now expanded to over 80 countries, and wildfire boundary information in 22 countries, plus rapid language expansion from 38 to 276 languages enabling broader communication. He notes AI’s potential to raise productivity across sectors, with wide adoption and worker resilience, citing research suggesting benefits for less-skilled workers and potential middle-class gains, if supported by smart policy and training.

Lenny's Podcast

How Block is becoming the most AI-native enterprise in the world | Dhanji R. Prasanna
Guests: Dhanji R. Prasanna
reSee.it Podcast Summary
Dhanji R. Prasanna, CTO at Block, discusses the company's significant transformation into an AI-native organization, driven by an "AI manifesto" presented to Jack Dorsey. Block has seen substantial productivity gains, with AI-forward engineering teams reporting 8-10 hours saved per week and a company-wide estimate of 20-25% manual hours saved. Prasanna emphasizes that this is just the beginning, as the value of AI is constantly evolving, requiring companies to adapt and ride the wave of innovation. A key enabler of this productivity is "Goose," Block's open-source, general-purpose AI agent. Built on the Model Context Protocol (MCP), Goose provides LLMs with the ability to interact with various digital tools and systems, effectively giving them "arms and legs" to perform tasks. This has led to surprising uses, such as non-technical teams building their own software tools, compressing weeks of work into hours, and automating mobile UI tests with a related tool called Gosling. The shift to an AI-native culture at Block involved a fundamental organizational change, moving from a General Manager (GM) structure to a functional one. This re-emphasized Block's identity as a technology company, centralizing engineering and design under single leaders to foster technical depth and a unified strategy. Prasanna highlights the power of Conway's Law, noting that organizational structure significantly impacts what a company builds. In terms of engineering work, AI is enabling "vibe coding" and autonomous agents that can work overnight, anticipating needs and even drafting code. This opens the possibility of frequently rewriting entire applications from scratch, challenging traditional software development wisdom that advises against such large-scale rewrites. Block's hiring strategy has also evolved, prioritizing a "learning mindset" and eagerness to embrace AI tools over specific AI expertise. 
Prasanna encourages leaders to personally use these tools to understand their strengths and weaknesses. He shares personal anecdotes, like using Goose to organize receipts, demonstrating the practical problem-solving capabilities of AI agents. The company's commitment to open source is evident with Goose, which is freely available and extensible, reflecting a belief in contributing to open protocols and the broader tech ecosystem. This open approach contrasts with the trend of companies locking down AI capabilities in walled gardens. Prasanna shares several leadership lessons, including the importance of starting small with new initiatives, as exemplified by Goose, Cash App, and Block's early Bitcoin product. He also stresses the need to constantly question base assumptions and focus on the core purpose of the company, rather than getting sidetracked by optimizing processes or tools that don't serve that ultimate goal. Reflecting on past product failures like Google Wave and Google+, he emphasizes that code quality, while important, often has little to do with a product's ultimate success, citing YouTube's early, messy codebase as a prime example. Ultimately, he advises individuals and companies to focus on what is meaningful and fun, and to demand openness and shared benefit from technology, especially in the evolving landscape of AI.
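The "arms and legs" idea described above, where a protocol like MCP lets a model request real-world actions through registered tools, can be illustrated with a minimal sketch. This is not Goose's or MCP's actual API; every name here is invented for illustration, and a plain list of requests stands in for a live model loop.

```python
# Minimal sketch of the "tools as arms and legs" pattern an agent like
# Goose implements: the model emits tool-call requests, a registry maps
# them to real functions, and results are fed back to the model.
# All names here are illustrative, not Goose's or MCP's actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

class ToolRegistry:
    """Maps tool names to callables the agent is allowed to invoke."""
    def __init__(self):
        self._tools: dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    def dispatch(self, call: ToolCall):
        if call.name not in self._tools:
            return f"error: unknown tool {call.name!r}"
        return self._tools[call.name](**call.args)

# Example tools an agent might expose (stubs for illustration).
registry = ToolRegistry()
registry.register("read_file", lambda path: f"<contents of {path}>")
registry.register("add", lambda a, b: a + b)

def run_agent(tool_calls: list[ToolCall]) -> list:
    """Stand-in for the model loop: execute each requested call in order."""
    return [registry.dispatch(c) for c in tool_calls]

results = run_agent([ToolCall("add", {"a": 2, "b": 3}),
                     ToolCall("read_file", {"path": "notes.txt"})])
print(results)  # [5, '<contents of notes.txt>']
```

The design point is the separation: the model only names tools and arguments, while the registry decides what actually runs, which is also where sandboxing and permission checks would live in a real agent.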

The Koerner Office

99% of Companies Have No Idea How to Use AI (Here's How to Profit)
reSee.it Podcast Summary
The episode centers on the practical, sometimes gritty realities of adopting AI in large organizations, emphasizing that most companies lack even basic tools to leverage AI effectively. The speakers argue that many corporate teams struggle with fundamental tasks like searching the web or applying AI to real workflows, and they challenge listeners to rethink what it means to turn AI into tangible value. A key theme is the idea that AI isn’t just a fad or a toy; it requires disciplined experimentation, rapid prototyping, and a clear plan for how AI can replace or augment specific job tasks. The conversation moves from high-level hype to concrete tactics, illustrating how AI agents can act as rapid testing machines, enabling quick validation of ideas, demand, and pricing. The hosts discuss building knowledge graphs ("KGs") of data and tools to support ongoing AI work, including locally hosted models to reduce costs and dependencies on third-party inference. They recount hands-on experiments with Claude, Gemini, and Opus models, comparing performance, cost, and practicality, and they stress that the best early leverage is in designing workflows that save executives and teams time—such as automating data gathering, summarizing meetings, and drafting communications. A large portion of the episode is dedicated to a template for creating value: record and transcribe meetings, extract structured insights, and build an archival, queryable system that surfaces actionable follow-ups. The speakers share a candid view of their own ventures, highlighting the importance of clean data, careful data organization, and a taxonomy that makes information retrievable for AI agents. They also discuss go-to-market ideas, from executive education and roundtables to fractional AI leadership, and stress that success comes from understanding clients’ pain points and delivering high-leverage tools rather than flashy, one-off projects. 
Overall, the episode blends practical engineering detail with strategic business thinking, illustrating how to move from “AI as a toy” to “AI as a disciplined, revenue-generating capability.”
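The meeting-value template the episode describes (record and transcribe, extract structured insights, archive them queryably) can be sketched end to end. In this toy version a keyword heuristic stands in for the LLM extraction step and an in-memory list stands in for a real database; a production pipeline would swap both out.

```python
# Toy sketch of the meeting-value template from the episode: transcribe,
# extract structured action items, and store them in a queryable archive.
# A real system would use an LLM for extraction and a database for storage;
# here a regex heuristic and an in-memory list stand in for both.
import re

def extract_action_items(transcript: str) -> list[dict]:
    """Pull lines that look like commitments ('<name> will <task>')."""
    items = []
    for line in transcript.splitlines():
        m = re.match(r"\s*(\w+) will (.+)", line)
        if m:
            items.append({"owner": m.group(1), "task": m.group(2).rstrip(".")})
    return items

class MeetingArchive:
    """Archival, queryable store of extracted follow-ups."""
    def __init__(self):
        self.records: list[dict] = []

    def ingest(self, meeting_id: str, transcript: str) -> None:
        for item in extract_action_items(transcript):
            self.records.append({"meeting": meeting_id, **item})

    def followups_for(self, owner: str) -> list[str]:
        return [r["task"] for r in self.records if r["owner"] == owner]

archive = MeetingArchive()
archive.ingest("2024-06-03-standup",
               "Alice will send the pricing draft.\n"
               "We discussed the roadmap.\n"
               "Bob will schedule the client roundtable.")
print(archive.followups_for("Alice"))  # ['send the pricing draft']
```

The taxonomy point from the episode shows up even here: follow-ups are only retrievable because extraction imposes a consistent owner/task structure on free-form transcripts.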

a16z Podcast

Software finally eats services - Aaron Levie
Guests: Aaron Levie, Steven Sinofsky, Martin Casado
reSee.it Podcast Summary
AI is rewriting how we hire, build, and compete, and the panel dives into a provocative question: should the United States speed up or reform skilled‑worker immigration to fuel this next wave? The discussion centers on policy shifts that affect startups and tech giants alike. Reed Hastings is cited as endorsing a policy that aligns supply with demand, replacing the lottery system with price signals or other allocations. Participants debate whether cap levels like 100k a year would empower startups or simply tilt the field toward the biggest incumbents, and they emphasize the need for a cohesive framework that balances talent depth, wage dynamics, and merit. On productivity, Aaron Levie details how senior teams using AI become almost superhuman, while junior users report similar gains in different contexts. He notes that roughly 30% of his company's code now comes from AI, with ranges from 20% to 75% depending on the person. Tools like Cursor enable background tasking and longer prompts, transforming how engineers work: code review becomes central, and projects that took days or weeks can be compressed into minutes. The panel also discusses the difficulty of measuring productivity and the phenomenon of 'shadow productivity' that isn't immediately visible in output. They contrast incumbents and startups in a platform‑shift moment. AI lowers marginal costs and widens the addressable market, enabling verticals like agriculture or construction to become software‑enabled through AI labor. Startups, including young founders, can compete with giants because the barrier of distribution is offset by a new velocity and the ability to test ideas quickly. The group notes that consumer adoption has reached widespread use, with up to three‑quarters of adults using AI weekly, and anticipates a wave of new, AI‑native business models, such as specialized digital agencies or vertical‑focused integrators. 
They also reflect on how experience and domain expertise amplify AI's value, arguing that experts are more powerful with AI than less experienced workers. The conversation touches education and talent pipelines, suggesting that the best recruits may come from non‑traditional paths and from a broad set of schools. They reference the broader historical pattern of platform shifts reshaping incumbents and startups alike, and close by acknowledging the ongoing challenge of measuring impact in a rapidly evolving landscape while exploring the long tail of new AI‑driven efficiency and opportunity.

Lenny's Podcast

Head of Claude Code: What happens after coding is solved | Boris Cherny
Guests: Boris Cherny
reSee.it Podcast Summary
Boris Cherny discusses a transformative shift in software development driven by Claude Code and the broader AI tooling at Anthropic. He describes a world where code is largely authored by AI, with humans focusing on higher-level design, strategy, and safety—shifting the craft from writing lines of code to shaping problem-solving approaches and tool usage. The conversation covers the launch trajectory of Claude Code, its rapid adoption across organizations, and how it has redefined productivity per engineer. Cherny notes that Claude Code not only writes code but also uses tools, reviews pull requests, and assists in project management, illustrating a broader move toward agentic AI capable of acting within real-world workflows. He emphasizes the importance of latent demand, where user feedback and real-world use reveal new product directions, such as Co-Work and terminal-based interfaces. He explains how early releases and fast feedback loops were essential to discovering and validating latent use cases beyond traditional coding tasks, including automation of mundane administrative work and cross-functional collaboration. The discussion also explores the safety and governance layers that accompany these advances, including observation of model reasoning, evals, sandboxing, and the open-source efforts that aim to balance rapid innovation with responsible deployment. Cherny reflects on personal perspectives, recounting his own background, the inspiration drawn from long time scales and miso making, and the aspirational view that a future where anyone can program is possible, albeit with significant societal and workforce disruption to navigate. The episode closes with practical guidance for builders: embrace generalist thinking, grant engineers broad access to tokens, avoid over-constraining models, race toward general models, and design products around the model’s evolving capabilities rather than forcing the model into rigid workflows. 
Throughout, the thread remains: incremental experimentation with AI can unlock extraordinary capabilities, while maintaining a strong focus on safety, human oversight, and alignment to responsible outcomes.

Possible Podcast

Does AI really save time?
reSee.it Podcast Summary
The conversation centers on whether AI actually saves time in knowledge work, or simply raises expectations and increases throughput. The hosts discuss a recent Harvard Business Review argument that AI accelerates work pace and volume rather than delivering a straightforward time-saver, noting that more drafts, reviews, and risk checks can follow AI-assisted outputs. They acknowledge the potential for higher quality results and faster turnarounds, but emphasize that the real impact depends on context, task type, and how teams configure AI into their processes. The discussion moves to practical implications: even with faster analysis and decision support, expensive activities like due diligence, contracting, and strategic coordination will still require human judgment and thorough review. They explore scenarios where AI reduces the time for repetitive, high-volume tasks but does not eliminate the need for critical oversight, risk management, and cross-functional alignment. The speakers highlight a core tension between speed and quality, and how competitive dynamics shape how organizations adopt AI—sometimes trading longer, more thorough processes for quicker terms or faster market responses. They also reflect on the broader organizational consequences: meetings and bureaucratic routines persist, but AI can trim unproductive engagement while revealing new forms of collaboration and governance that require ongoing human input. The overall message is that AI acts as a powerful accelerant; its value lies in how individuals and teams recalibrate workflows, incentives, and decision-making in a changing landscape.

Possible Podcast

Possible 109 ParthPt2 NoIntro V3
reSee.it Podcast Summary
The conversation centers on how large organizations are deploying AI, focusing on the gap between declared AI strategies and real-world execution. The speakers describe a “first inning” phase where proposals exist in committees and pilot projects, but actual integration into daily workflows remains limited. They emphasize that the most immediate value from AI comes from language-model–driven tasks that touch everyday communication and coordination, such as meeting transcription, action-item tracking, and surfacing relevant information from business intelligence in real time. They argue that AI’s impact will compound as it moves from isolated pilots to bottom-up changes in how people work, enabling employees to reimagine processes rather than merely automate old ones. They illustrate this with examples from software migrations, translation workflows, and the creation of dashboards from raw data, suggesting that AI can dramatically shorten what used to take weeks into minutes by augmenting human judgment rather than replacing it. The dialogue also explores the role of agents and “coding agents” in accelerating analysis, orchestrating tasks across multiple projects, and enabling new forms of collaboration where a single executive can guide numerous parallel explorations. The participants discuss how to design environments that reward experimentation, share wins, and reduce resistance by normalizing rapid prototyping. They highlight concerns about secrecy around productivity gains and contrast individual acceleration with organizational learning, arguing that scalable adoption hinges on creating common tools, knowledge graphs, and ambient AI that supports decision-making across teams. Throughout, the emphasis is on practical steps—transcribe meetings, automate routine actions, and empower non-technical leaders by partnering with technically adept colleagues to build internal tools that unlock faster, broader problem-solving across the company.
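The pattern above of a single executive guiding numerous parallel explorations is, structurally, a fan-out/fan-in over independent tasks. A minimal, hypothetical sketch of that orchestration follows; `run_exploration` is a stand-in for dispatching a question to an AI agent, and all names are invented for illustration.

```python
# Sketch of fanning one leader's questions out to parallel "explorations"
# and gathering the results, the orchestration pattern the episode describes.
# run_exploration is a stand-in for handing a task to an AI agent.
from concurrent.futures import ThreadPoolExecutor

def run_exploration(question: str) -> dict:
    """Stand-in for an agent call; a real system would query a model here."""
    return {"question": question, "finding": f"draft analysis of {question!r}"}

def explore_in_parallel(questions: list[str], max_workers: int = 4) -> list[dict]:
    """Fan each question out to a worker, then collect results in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_exploration, questions))

reports = explore_in_parallel([
    "migration effort for the billing service",
    "dashboard from last quarter's raw sales data",
    "translation workflow for the docs site",
])
print(len(reports))  # 3
```

Because `pool.map` preserves input order, the leader reviews findings against the original list of questions, which keeps many parallel threads of work easy to reconcile.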