TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Every GPU can communicate with every other GPU simultaneously using SerDes, which is driven to its maximum limit. This necessitates placing everything in a single, liquid-cooled 20-kilowatt rack. The GPUs are disaggregated across an entire rack, effectively creating one motherboard. This disaggregation results in incredible GPU performance and memory capacity. These setups are not merely data centers but AI factories, such as the xAI Colossus factory and Stargate, which spans 4,000,000 square feet and consumes one gigawatt. A one-gigawatt factory costs approximately $60 to $80 billion, with the computing systems accounting for $4 to $5 billion of that cost. The Blackwell B200 superchip undergoes stress testing at KYEC, involving baking, molding, curing, and being pushed to its limits in 125-degree Celsius ovens for several hours.
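As a quick back-of-envelope check on the cost figures quoted above, the sketch below works out the implied cost per megawatt and the computing systems' share of total spend. It uses only the numbers in this summary (one gigawatt, $60-80 billion total, $4-5 billion for computing systems); nothing beyond those quoted figures is sourced from the transcript.

```python
# Back-of-envelope sketch using only the figures quoted in the summary above.
# All inputs are illustrative, taken from the summary, not independently sourced.

factory_power_gw = 1.0                 # "one gigawatt" AI factory
factory_cost_usd = (60e9, 80e9)        # "$60 to $80 billion" total build cost
compute_cost_usd = (4e9, 5e9)          # "computing systems accounting for $4 to $5 billion"

megawatts = factory_power_gw * 1000
for total, compute in zip(factory_cost_usd, compute_cost_usd):
    per_mw = total / megawatts          # implied build cost per megawatt
    share = compute / total             # computing systems' share of total spend
    print(f"total ${total/1e9:.0f}B -> ${per_mw/1e6:.0f}M per MW, "
          f"compute systems {share:.0%} of spend")
```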

Video Saved From X

reSee.it Video Transcript AI Summary
- Gavin Baker is deeply engaged with markets beyond his quantitative investing background, with a passion for technology investment and wide-ranging views on NVIDIA, Google and its TPUs, the AI landscape, and the evolving business models around AI companies. He even entertains ideas like data centers in space, arguing from first principles that they are superior to Earthbound data centers.
- The host and Baker discuss how to process rapid AI updates (e.g., Gemini 3). Baker emphasizes using new AI tools personally, paying for higher-tier access to get mature capabilities, and following leading labs (OpenAI, Gemini, Anthropic, xAI) and influential researchers (e.g., Andrej Karpathy). He notes that AI progress is heavily influenced by public posts and discourse on X (formerly Twitter), and highlights the importance of embedded signal from the lab ecosystem and industry insiders.
- On Gemini 3 and scaling laws, Baker argues that Gemini 3 affirmed that scaling laws for pre-training are intact, an important empirical confirmation. He likens judging the technology by free-tier capabilities to judging it by a ten-year-old, stressing the need to pay for higher-tier access to gauge real performance. He explains that progress in AI since late 2024 hinges on two new scaling laws: post-training reinforcement learning with verified rewards (RLVR) and test-time compute. He emphasizes that these laws enable better base models and that Google's TPU strategy and Nvidia's GPU strategy each shape the competitive dynamics.
- Baker details the hardware race between Google (TPUs) and Nvidia (GPUs), including the transition from Hopper to Blackwell as a massive product shift requiring new cooling, power, and architecture. He credits "reasoning" (and reasoning-based models) with bridging an eighteen-month gap in AI progress, enabling continued improvement without the immediate need for Blackwell-scale infrastructure. He explains that Blackwell deployment has been slower but is now ramping in significant fashion, and that Blackwell clusters are likely to dominate training eventually, with current GB300 and MI chips enabling future efficiency gains. Rubin, as the next milestone, is anticipated to widen the gap versus TPUs and other ASICs.
- Google's strategic move to be a low-cost token producer is highlighted as a way to "suck the economic oxygen" out of the AI ecosystem, pressuring competitors. Baker predicts the first Blackwell-trained models from xAI in early 2026, and posits that Blackwell will not immediately outperform Hopper but will be a superior chip once fully ramped. He discusses TPU v8/v9 as potentially high-performance but notes Google's conservatism in design decisions and its reliance on Broadcom for backend manufacturing. He foresees a shift toward in-house semiconductor development eventually as the cost and margins of external ASICs become less attractive.
- The potential shift to in-house semiconductor production is tied to economics: if token production scales and external margins (Broadcom's) are too high, Google could renegotiate or internalize more of the stack. This would affect margins and the competitive landscape, including whether Google remains the low-cost producer.
- In discussing broader AI deployment economics, Baker notes the importance of inference ROI, with concerns about an initial "ROIC air gap" during heavy training phases. He cites C.H. Robinson as an example of AI-driven uplift in a Fortune 500 company, where AI enabled 100% pricing/availability quoting in seconds, boosting earnings. This example supports the view that AI-driven productivity improvements can boost profitability even as capital expenditure remains high.
- Baker discusses the outlook for frontier models and the likely near-term impact on industries, including media, robotics, customer support, and sales. He suggests that the most valuable AI systems will rapidly become useful and context-aware, capable of handling long context windows (for example, by remembering extensive user preferences) and performing complex tasks like travel planning or hotel reservations.
- On the economics of AI-driven product development, Baker argues that AI-native SaaS companies must accept lower gross margins to achieve ROI through much higher efficiency and automation. He contrasts this with traditional SaaS margins, noting that AI enables substantial gross profit dollars through reduced human labor, while demanding reinvestment in compute. He urges traditional software companies to embrace AI-enabled agents and to expose AI-driven revenue streams, even if margins are compressed.
- Baker reflects on the broader tech ecosystem, including private equity's potential to apply AI systematically, and the role of private markets in scaling semiconductor ventures. He emphasizes that AI requires an ecosystem of public and private players across chips, memory, backplanes, lasers, and more, and that China's open-source efforts may be insufficient to close the gap created by Blackwell's advancement, given the looming lead of U.S. frontier labs.
- The conversation also touches on space-based data centers as a transformative, albeit speculative, frontier: advantages include perpetual sun exposure for power, reduced cooling needs, and ultra-fast laser-linked interconnects in space. The main frictions are launch costs and the need for new infrastructure (Starships, global collaborations), but the potential synergy with AI hardware ecosystems (Tesla, SpaceX, xAI, Optimus) is noted as strategically significant.
- In closing, Baker emphasizes that investing in AI is the search for truth, with edge coming from uncovering hidden truths and leveraging history and current events to form differential opinions. He attributes his own lifelong motivation to competitive drive, a love of history and current events, and a relentless pursuit of understanding the world's technology and markets.

Video Saved From X

reSee.it Video Transcript AI Summary
- The conversation centers on how AI progress has evolved over the last few years, what is surprising, and what the near future might look like in terms of capabilities, diffusion, and economic impact.
- Big picture of progress
  - Speaker 1 argues that the underlying exponential progression of AI tech has followed expectations, with models advancing from "smart high school student" to "smart college student" to capabilities approaching PhD/professional levels, and code-related tasks extending beyond that frontier. The pace is roughly as anticipated, with some variance in direction for specific tasks.
  - The most surprising aspect, per Speaker 1, is the lack of public recognition of how close we are to the end of the exponential growth curve. He notes that public discourse remains focused on political controversies while the technology is approaching a phase where the exponential growth tapers or ends.
- What "the exponential" looks like now
  - There is a shared hypothesis dating back to 2017 (the "big blob of compute" hypothesis) that what matters most for progress are a small handful of factors: compute, data quantity, data quality/distribution, training duration, scalable objective functions, and normalization/conditioning for stability.
  - Pretraining scaling has continued to yield gains, and RL now shows a similar pattern: pretraining followed by RL phases can scale with long-term training data and objectives. Tasks like math contests have shown log-linear improvements with RL training time, a pattern that mirrors pretraining (a toy illustration of the functional form follows this summary).
  - The discussion emphasizes that RL and pretraining are not fundamentally different in their relation to scaling; RL is seen as an extension built atop the same scaling principles already observed in pretraining.
- On the nature of learning and generalization
  - There is debate about whether the best path to generalization is "human-like" learning (continual on-the-job learning) or large-scale pretraining plus RL. Speaker 1 argues the generalization observed in pretraining on massive, diverse data (e.g., Common Crawl) is what enables the broad capabilities, and RL similarly benefits from broad, varied data and tasks.
  - In-context learning is described as a form of short- to mid-term learning that sits between long-term human learning and evolution, suggesting a spectrum rather than a binary gap between AI learning and human learning.
- On the end state and timeline to AGI-like capabilities
  - Speaker 1 expresses high confidence (~90% or higher) that within ten years we will reach capabilities where a country-of-geniuses-level model in a data center could handle end-to-end tasks (including coding) and generalize across many domains. He places a strong emphasis on timing: "one to three years" for on-the-job, end-to-end coding and related tasks; "three to five" or "five to ten" years for broader, high-ability AI integration into real work.
  - A central caution is the diffusion problem: even if the technology is advancing rapidly, economic uptake and deployment into real-world tasks take time due to organizational, regulatory, and operational frictions. He envisions two overlapping fast exponential curves: one for model capability and one for diffusion into the economy, with the latter slower but still rapid compared with historical tech diffusion.
- On coding and software engineering
  - The conversation explores whether the near-term future could see 90% or even 100% of coding tasks done by AI. Speaker 1 clarifies his forecast as a spectrum: 90% of code written by models is already seen in some places; 90% of end-to-end SWE tasks (including environment setup, testing, deployment, and even writing memos) might be handled by models; 100% is still a broader claim.
  - The distinction is between what can be automated now and the broader productivity impact across teams. Even with high automation, human roles in software design and project management may shift rather than disappear.
  - The value of coding-specific products like Claude Code is discussed as a result of internal experimentation becoming externally marketable; adoption is rapid in the coding domain, both internally and externally.
- On product strategy and economics
  - The economics of frontier AI are discussed in depth. The industry is characterized as a few large players with steep compute needs and a dynamic where training costs grow rapidly while inference margins are substantial. This creates a cycle: training costs are enormous, but inference revenue plus margins can be significant; the industry's profitability depends on accurately forecasting future demand for compute and managing investment in training versus inference.
  - The concept of a "country of geniuses in a data center" is used to describe the point at which frontier AI capabilities become so powerful that they unlock large-scale economic value. The timing is uncertain and depends on both technical progress and the diffusion of benefits through the economy.
  - There is a nuanced view on profitability: in a multi-firm equilibrium, each model may be profitable on its own, but the cost of training new models can outpace current profits if demand does not grow as fast as the compute investments. The balance is described in terms of a distribution where roughly half of compute is used for training and half for inference, with margins on inference driving profitability while training remains a cost center.
- On governance, safety, and society
  - The conversation ventures into governance and international dynamics. The world may evolve toward an "AI governance architecture" with preemption or standard-setting at the federal level, to avoid an unhelpful patchwork of state laws. The idea is to establish standards for transparency, safety, and alignment while balancing innovation.
  - There is concern about autocracies and the potential for AI to exacerbate geopolitical tensions. The post-AGI world may require new governance structures that preserve human freedoms while enabling competitive but safe AI development. Speaker 1 contemplates scenarios in which authoritarian regimes could become destabilized by powerful AI-enabled information and privacy tools, though he cautions that practical governance approaches would be required.
  - The role of philanthropy is acknowledged, but there is emphasis on endogenous growth and the dissemination of benefits globally. Building AI-enabled health, drug discovery, and other critical sectors in the developing world is seen as essential for broad distribution of AI benefits.
- The role of safety tools and alignment
  - Anthropic's approach to model governance includes a constitution-like framework for AI behavior, focusing on principles rather than just prohibitions. The idea is to train models to act according to high-level principles with guardrails, enabling better handling of edge cases and greater alignment with human values.
  - The constitution is viewed as an evolving set of guidelines that can be iterated within the company, compared across different organizations, and subject to broader societal input. This iterative approach is intended to improve alignment while preserving safety and corrigibility.
- Specific topics and examples
  - Video editing and content workflows illustrate how an AI with long-context capabilities and computer-use ability could perform complex tasks, such as reviewing interviews, identifying where to edit, and generating a final cut with context-aware decisions.
  - There is a discussion of long-context capacity (from thousands of tokens to potentially millions) and the engineering challenges of serving such long contexts, including memory management and inference efficiency. The conversation stresses that these are engineering problems tied to system design rather than fundamental limits of the model's capabilities.
- Final outlook and strategy
  - The timeline for a country of geniuses in a data center is framed as potentially within one to three years for end-to-end on-the-job capabilities, and 2028-2030 for broader societal diffusion and economic impact. The probability of reaching fundamental capabilities that enable trillions of dollars in revenue is asserted as high within the next decade, with 2030 as a plausible horizon.
  - There is ongoing emphasis on responsible scaling: the pace of compute expansion must be balanced with thoughtful investment and risk management to ensure long-term stability and safety. The broader vision includes global distribution of benefits, governance mechanisms that preserve civil liberties, and a cautious but optimistic expectation that AI progress will transform many sectors while requiring careful policy and institutional responses.
- Mentions of concrete topics
  - Claude Code as a notable Anthropic product rising from internal use to external adoption.
  - The idea of a "collective intelligence" approach to shaping AI constitutions with input from multiple stakeholders, including potential future government-level processes.
  - The role of continual learning, model governance, and the interplay between technology progression and regulatory development.
  - The broader existential and geopolitical questions—how the world navigates diffusion, governance, and potential misalignment—are acknowledged as central to both policy and industry strategy.
- In sum, the dialogue canvasses (a) the expected trajectory of AI progress and the surprising proximity to exponential endpoints, (b) how scaling, pretraining, and RL interact to yield generalization, (c) the practical timelines for on-the-job competencies and automation of complex professional tasks, (d) the economics of compute and the diffusion of frontier AI across the economy, (e) governance, safety, and the potential for a governance architecture (constitutions, preemption, and multi-stakeholder input), and (f) the strategic moves of Anthropic (including Claude Code) within this evolving landscape.
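The log-linear pattern referenced above can be written as score ≈ a + b · log10(compute). Below is a minimal toy sketch of that functional form; the coefficients are invented for illustration and are not fitted to any benchmark discussed in the episode.

```python
import math

# Toy log-linear scaling curve: score ~= a + b * log10(compute).
# The coefficients are made-up illustrative constants, not fitted to real data.
a, b = 10.0, 8.0

def toy_score(compute_flops: float) -> float:
    """Illustrative benchmark score as a log-linear function of training compute."""
    return a + b * math.log10(compute_flops)

for flops in (1e22, 1e23, 1e24, 1e25):
    print(f"{flops:.0e} FLOPs -> toy score {toy_score(flops):.1f}")
```

Each tenfold increase in compute adds a constant number of points, which is the sense in which both pretraining and RL post-training are described as following the same scaling shape.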

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker reframes computers as AI factories that produce tokens (numbers). These AI factories should be used for three fundamental things, the first being to train the next frontier model so you can build the best AI and get to market first; the goal is to train it as fast as possible. On performance, Rubin is described as a 4x leap over Blackwell, meaning a training run that would take four months on Blackwell could be completed in roughly one month on Rubin.

Cheeky Pint

Reiner Pope of MatX on accelerating AI with transformer-optimized chips
Guests: Reiner Pope
reSee.it Podcast Summary
Reiner Pope, co-founder and CEO of MatX, discusses the motivations behind building transformer-optimized chips and how his team aims to outperform existing AI accelerators by blending memory technologies and honing low-precision arithmetic. He traces the lineage from Google's TPUs to the current focus on LLM inference and the need for hardware that scales with growing matrix sizes and precision requirements. The conversation covers architectural choices such as combining HBM for high throughput with SRAM for low-latency weights, the design of a large, power-efficient systolic engine, and a new approach to low-precision formats that can accelerate training and inference while preserving model quality. Pope emphasizes economics as a core metric, measuring tokens per second and dollars per token, and explains why throughput often drives business value more than peak raw speed. He reflects on the historical arc of neural network hardware, noting the parallelism inherent in all AI accelerators and the shift from CPU-centric designs to devices optimized for matrix multiplications. The interview delves into the practicalities of chip development, including the waterfall-like process of hardware design, verification, and tape-out, as well as the realities of fabrication at leading-edge nodes. Pope outlines MatX's strategy to mitigate supply-chain risk by pre-committing buyers, maintaining large capital reserves, and planning for multi-gigawatt production to meet demand from major AI clusters. The discussion also touches on the importance of ecosystem and software alignment, arguing that while CUDA-like software investments matter for frontier labs, a materially optimized hardware stack with tailored ML software can yield significant gains per dollar. When asked about the future, Pope predicts a continued push toward higher throughputs and lower latencies, with context- and memory-management improvements playing a central role in the next phase of AI product refinement. The exchange closes on the theme of technical curiosity and practical problem-solving, highlighting how architectural intuition, rigorous simulation, and disciplined iteration drive progress in hardware for AI at scale.
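Pope's core metric of dollars per token can be expressed as simple arithmetic: amortized hardware cost plus energy cost, divided by sustained throughput. The sketch below illustrates the calculation with invented placeholder numbers; none of them are MatX figures or figures from the interview.

```python
# Dollars-per-token sketch. All inputs are hypothetical placeholders,
# not figures from MatX or the interview.

system_cost_usd = 250_000        # hypothetical accelerator system price
lifetime_years = 4               # assumed depreciation horizon
power_kw = 10.0                  # assumed system power draw
electricity_usd_per_kwh = 0.08   # assumed energy price
tokens_per_second = 50_000       # assumed sustained inference throughput

seconds = lifetime_years * 365 * 24 * 3600
capex_per_s = system_cost_usd / seconds                    # amortized hardware $/s
energy_per_s = power_kw * electricity_usd_per_kwh / 3600   # energy $/s

usd_per_token = (capex_per_s + energy_per_s) / tokens_per_second
print(f"~${usd_per_token * 1e6:.3f} per million tokens")
```

The point of the metric is that either raising throughput or lowering amortized cost and power moves the same number, which is why throughput per dollar, rather than peak speed, is treated as the business driver.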

20VC

David Luan: Why Nvidia Will Enter the Model Space & Models Will Enter the Chip Space | E1169
Guests: David Luan
reSee.it Podcast Summary
OpenAI realized, before basically everybody but DeepMind, that the next phase of AI after a Transformer would focus on solving a major unsolved scientific problem rather than writing papers. The second path to boosting model performance is just starting to be tapped and will demand vast compute. Because of that, I’m not worried about diminishing returns to compute; 'Every tier one cloud provider existentially needs to win here.' Harry describes Google Brain’s era (2012–2018) when bottom-up research produced the Transformer, diffusion models, and other breakthroughs. Transformers became a universal model, replacing task-specific architectures. GPT-2 showed early capabilities; GPT-3 with instruction tuning accelerated adoption, but consumer virality required packaging for non-developers. OpenAI then built teams around solving real-world problems, not just publishing papers. On scaling, the view shifts from base size to data, tooling, and environments. There are two scaling parts: enlarging the base model with more data and GPUs, and enabling smarter behavior via interactive environments that allow experimentation. Memory remains a challenge; Gemini-like context lengths are huge, but long-term memory requires end-to-end product design. Business-wise, the race hinges on who controls the model layer and the chips. Nvidia, Google TPUs, and in-house accelerators shape costs; Apple may dominate edge-running privacy tasks. The shift to agents over traditional RPA challenges incumbents’ value chains, with a co-pilot model likely to become the dominant work tool. Regulation and data access remain contentious, but consolidation among frontier-model players is likely.

Sourcery

Inside the $4.5B Startup Building Brain-Inspired Chips for AI
Guests: Naveen Rao, Konstantine Buhler
reSee.it Podcast Summary
The episode presents a deep conversation about building intelligent machines inspired by biology, with Naveen Rao and Konstantine Buhler explaining why conventional digital computing and current hardware limits have prevented AI from reaching brainlike efficiency. They argue that the next phase requires new hardware substrates and architectures that embrace dynamics, stochastic processes, and nonlinear behavior found in biological systems. The guests describe Unconventional AI's mission to reinvent computation by leveraging analog and nonlinear dynamics to dramatically reduce power consumption while increasing cognitive capabilities. The discussion traces Rao's career arc—from Nervana and MosaicML to Unconventional AI—and Buhler's perspective as an investor and engineer who joined to form the company at its inception. They reflect on the evolution of the AI stack, noting that AI sits atop years of physical hardware and software layers and that breakthroughs will come from rethinking foundational assumptions about how computation operates, not just from applying more powerful digital GPUs. A recurring theme is the energy constraint in AI progress and the belief that scalable, repeatable, and cost-effective solutions will unlock a new era of computation. They compare AI's current stage to past economic and industrial shifts, like the move from biological to mechanical work during the Industrial Revolution, and propose that the mind's domain may undergo a similar transformation as cognitive labor becomes dominated by machines. Throughout, entrepreneurship is framed as solving a grand, energy-intensive problem with a long horizon; capital is discussed in relation to the scale of impact and the need for talent, transparency, and disciplined execution. The interview also touches on leadership principles, the importance of honest communications, and the value of a flat organization structure to maintain agility. The conversation concludes with a sense of anticipation for a multi-decade journey toward a new paradigm in computation, powered by a team capable of turning radical hardware and software ideas into manufacturable products.

20VC

Eiso Kant, CTO @Poolside: Raising $600M To Compete in the Race for AGI | E1211
Guests: Eiso Kant
reSee.it Podcast Summary
Poolside is racing toward AGI, and the latest $500 million round translates to an entrant's stake in the race. The team believes the gap between machine intelligence and human capabilities will keep shrinking, with human-level skills appearing where they are economically valuable before true AGI arrives. Foundation models compress vast web data into a neural net, offering language understanding yet showing clear limits without more data. Poolside's core claim is a data set capturing the intermediate reasoning, trials, and code that lead to final products, including iterative testing and failures. AlphaGo-style reinforcement learning in simulated environments demonstrated how synthetic data can bootstrap capabilities, while real-world data such as car autopilot engagements provides non-simulatable learning signals. They describe reinforcement learning from code execution feedback: in an environment spanning roughly 130,000 codebases, the model explores solutions to tasks and learns from tests. Deterministic feedback via code execution plus human feedback guides improvement. They critique the idea that synthetic data alone solves data gaps, noting the need for an oracle of truth to judge which solutions are better or worse. Humans remain essential for labeling and guiding reasoning, while compute and data scale together. On scaling and economics, they argue scaling laws show that more data and larger models yield better results, and that compute matters but is table stakes. They anticipate continued growth in hardware advances, synthetic data utility, and distillation of large models into smaller, cost-effective ones. They discuss a hardware race among Nvidia, Google, and Amazon, with chips like TPUs and Blackwell, and note that not all training infrastructure can be upgraded immediately. They warn about latency, data center buildouts, and the need for globally distributed infrastructure near users. They emphasize four ingredients: compute, data, proprietary applied research, and talent, with talent especially critical in Europe as a future hub. They note London and Paris teams and the influence of DeepMind, Yandex, and others. They stress progress requires relentless focus; a premortem warns that stumbling or easing up means losing the race. They close by reflecting on motivation, the journey with people, and the reasons behind the pursuit, insisting the race must be pursued with excellence in development and go-to-market.
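The reinforcement-learning-from-code-execution loop described above can be sketched at a high level: sample candidate code changes, execute the tests, and use the deterministic pass/fail signal as the reward. The sketch below is a toy illustration under invented names (propose_patch, run_tests) with random stand-in scoring; it is not Poolside's actual system.

```python
import random

# Minimal sketch of RL from code-execution feedback (invented names, toy logic).
# A real system would use a model to propose patches and a sandbox to run tests.

def propose_patch(task: str, temperature: float) -> str:
    """Stand-in for a model sampling a candidate code change."""
    return f"candidate-patch({task}, t={temperature:.2f})"

def run_tests(patch: str) -> float:
    """Stand-in for executing the test suite; returns fraction of tests passed."""
    return random.random()  # placeholder for a deterministic execution signal

def rl_from_execution(task: str, samples: int = 8) -> tuple[str, float]:
    """Sample candidates, score each by test pass rate, and keep the best.
    In a real system the rewards would drive a policy update rather than a max."""
    best_patch, best_reward = "", -1.0
    for i in range(samples):
        patch = propose_patch(task, temperature=0.2 + 0.1 * i)
        reward = run_tests(patch)
        if reward > best_reward:
            best_patch, best_reward = patch, reward
    return best_patch, best_reward

patch, reward = rl_from_execution("fix failing parser test")
print(patch, f"reward={reward:.2f}")
```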

20VC

Jonathan Ross: DeepSeek Special - How Should OpenAI and the US Government Respond | E1253
reSee.it Podcast Summary
DeepSeek is described as Sputnik 2.0; they spent about $6 million on training and more on distilling or scraping the OpenAI model. The guests discuss distillation, reinforcement learning from OpenAI data, and a claim that better data quality lets you train with fewer tokens, citing AlphaGo Zero's self-play. They describe an automated reward-model approach and a 'box' method to evaluate output without human gating. Data sovereignty and geopolitics dominate: concerns about CCP access to US data, possible export-law issues, IP blocking of China, and the CCP's incentives. An offering is emphasized where 'we store nothing'—no hard drives—so data is 'not going to the CCP'. They discuss sanctions and the idea that open source could complicate efforts by OpenAI and others. The conversation touches on Europe's risk-averse stance and the need for a global response. Business and strategy themes run through: the Seven Powers framework, the brand power of OpenAI vs. open source, inference vs. training economics, and Nvidia's role. They predict commoditization of models, a focus on infrastructure, and the likely rise of mixture-of-experts (MoE) architectures and larger parameter counts with sparse compute. They discuss open sourcing as a competitive move, Europe's Station F approach, and the likelihood of continued disruption in the AI arms race, including national-security implications.

20VC

Steeve Morin: Why Google Will Win the AI Arms Race & OpenAI Will Not | E1262
Guests: Steeve Morin
reSee.it Podcast Summary
The thing with Nvidia is that they spend a lot of energy making you care about stuff you shouldn't care about, and they were very successful. OpenAI is amazing, but it's not their compute. The winning triangle of products, data, and compute puts Google in the strongest position: a sleeping giant that can sprinkle AI across its Android and Google Docs ecosystems. In five years, I would say 95% inference, 5% training. ZML is an ML framework that runs any model on any hardware, and it does so without compromise. Between hardware and software, the bottleneck is interoperability and ecosystem. PyTorch CUDA lock-in makes switching from Nvidia to AMD expensive, despite potential fourfold efficiency gains on 70B models. Most backends are already a constellation of backends, not single models. In production, inference requires different infra than training: interconnect matters, autoscaling matters, and provisioning compute matters for cost. OpenAI and Anthropic faced inference-scale pains, including provisioning and autoscaling challenges in production. Looking ahead, latency of reasoning will reshape compute needs; agents and latent-space reasoning could beat token throughput. SRAM-heavy chips (Cerebras, Groq) aim for very high tokens-per-second per model, but the price is high; Etched and Visor may bring comparable costs. Retrieval-augmented generation (RAG) and embeddings will push smaller models; the right model mix is rental compute with zero buy-in to maximize flexibility. Microsoft buying all AMD supply demonstrates supply-and-margin pressure; Nvidia may not own both markets forever.

20VC

Groq Founder, Jonathan Ross: OpenAI & Anthropic Will Build Their Own Chips & Will NVIDIA Hit $10TRN
Guests: Jonathan Ross
reSee.it Podcast Summary
Control of compute will determine who rules AI, Jonathan Ross argues, because energy and capital flow through silicon. He predicts Nvidia could be worth ten trillion in five years, and that doubling inference compute would nearly double OpenAI and Anthropic's revenue. The market, he says, looks like the early days of oil: a small group of players—about 35 to 36—account for most revenue, and results are highly lumpy. Staying in the Mag7 requires relentless spend, even as returns eventually normalize. A vivid example shows how vibe coding produced a customer feature in four hours with no human-written code, underscoring how speed creates real ROI and can win deals before rivals respond. The talk asks whether others will move into the chip layer, and Ross cautions that chip design remains hard and not everyone will adopt the moat strategies described in Hamilton Helmer's Seven Powers. Ross argues OpenAI and Anthropic will build their own chips, while Nvidia remains dominant for now, aided by a memory supply dynamic he describes as a monopsony. Even so, owning destiny matters because of allocation leverage; hyperscalers still need capacity, and long lead times require large capital. Groq's angle is to shorten the delivery gap: customers place orders for LPUs and begin receiving them in months, not years, a contrast to GPU ramps. The energy backdrop is central: compute requires power, and policy choices around renewables, hydro, and nuclear will shape the pace of compute expansion. Europe's potential edge lies in a bold energy push and cross-border coordination. The message: compute and energy are inseparable levers of AI advantage, and timing governs who wins access to capacity. Looking ahead five years, Ross foresees Nvidia retaining a majority of chip revenue while Groq captures a meaningful share of capacity, reshaping the hardware chain. He envisions AI triggering deflationary pressure, intensive labor shifts, and new roles created by AI-enabled productivity: cheaper goods, longer careers, and novel industries. He warns the talent market could destabilize startups as engineers chase well-funded projects, yet notes that greater compute boosts product value and expands markets, which helps keep margins stable. He believes the real driver is compute, not just algorithms or data, and that a world with more compute can unlock more data through synthetic generation. The conversation ends with a Galileo-inspired note: the telescope of AI reveals a larger universe, and compute scaling will define what emerges over the coming decade.

All In Podcast

Epstein Files Fallout, Nvidia Risks, Burry's Bad Bet, Google's Breakthrough, Tether's Boom
reSee.it Podcast Summary
The All In crew dive into a wide-ranging mix of finance, tech, and high-profile journalism, starting with the Epstein files controversy and its political aftershocks. They frame the Epstein disclosure not as a singular sensational revelation but as a test of governance and public accountability, arguing that the release should proceed in an orderly, responsible manner that protects victims while illuminating patterns in power networks. The discussion roams from the politics of who should be investigated to the role of intelligence agencies and the way information leaks shape public perception, with the hosts acknowledging how deeply interconnected the people involved are—from Summers and Maxwell to figures in Silicon Valley. This segment functions as a meditation on transparency, accountability, and the political economy of information in a highly polarized environment. As they pivot toward the tech world, Nvidia’s blockbuster results anchor the market conversation, with a chorus of admiration and caution about chip supply, depreciation, and the life cycle of hardware in a world where AI models demand explosive compute. They present a granular debate about GAAP depreciation for high-end processors, using Nvidia’s products as a focal point, and explore how revenue from “output tokens” in AI translates into real cash flow, margins, and leading indicators for enterprise value. The Nvidia discussion expands into a broader map of silicon strategies, including Google's Gemini, TPU ecosystems, and the threat of price and performance competition from a wave of differentiated chips. Into this silicon discourse slides the Bitcoin-and-stablecoin universe—Tether’s massive treasury, the push for American regulatory clarity, and the tension between pursuing innovation and preserving consumer protection. The conversation stays caffeinated and practical, evaluating how crypto rails intersect with everyday financial inclusion, cross-border payments, and the political risk appetite of big tech and legacy banks. The show closes by reflecting on personal stakes in venture-building and the psychological edges of risk, revealing a community of investors who chase outsized returns while grappling with fear, discipline, and the human costs of decision-making in volatile markets, tech, and media. The conversation weaves in a candid, sometimes irreverent, look at the pressures of wealth, influence, and innovation, offering a lens on how top investors think about risk, leverage, and responsibility in a rapidly evolving landscape.

Invest Like The Best

Inside the Trillion-Dollar AI Buildout | Dylan Patel Interview
Guests: Dylan Patel
reSee.it Podcast Summary
The episode centers on the immense, accelerating demand for compute in the AI era and how that demand reshapes corporate strategy, capital allocation, and global competition. The guest explains that AI progress hinges not only on model performance but on securing vast, long-term compute capacity, often through high-stakes, multi-year deals that blend hardware procurement with equity considerations. The conversation unpacks how OpenAI's partnerships with Microsoft, Oracle, and Nvidia illustrate a broader dynamic: leading AI players must frontload enormous capex to build out data center clusters, while hardware providers extract value from the guaranteed demand those clusters generate. The discussion also delves into the economics of this buildout, including how five-year rental agreements can amount to tens of billions per gigawatt of capacity and how financiers, infrastructure funds, and cloud players help monetize the inevitable gap between upfront cost and eventual revenue. A recurring theme is token economics—the economics of tokenized compute usage—as a lens to understand how compute capacity, utilization, and profitability interact across the value chain, from silicon to software to end users. The guest argues that the future is not merely bigger models but more efficient, specialized workflows enabled by environments and reinforcement learning, which let models learn in controlled settings and then operate at scale in real tasks. The dialogue covers the tension between latency, cost, and capacity in inference, the challenge of serving vast user bases while advancing model capabilities, and the strategic importance of who controls data, talent, and platform reach. Throughout, the host and guest examine power dynamics among platform builders, hardware kings, and AI software firms, highlighting how dominance can shift between OpenAI, Microsoft, Nvidia, Oracle, and hyperscalers. The discussion also travels into the geopolitical stakes, contrasting US and Chinese approaches to autonomy, supply chains, and capacity expansion, and ends with reflections on the likely near-term impact of AI on labor, productivity, and the structure of software businesses in a world where cost curves fall rapidly but demand for advanced services remains voracious.
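The "tens of billions per gigawatt over five years" framing above implies a rough hourly rate for rented compute. The sketch below works that out; the contract size, per-GPU power draw, and utilization are placeholder assumptions chosen for illustration, not figures from the episode.

```python
# Rough implied-cost sketch for a five-year compute rental, per the framing above.
# Contract size, per-GPU power, and utilization are illustrative assumptions.

contract_usd = 40e9        # assumed "tens of billions" for 1 GW over five years
years = 5
capacity_gw = 1.0
kw_per_gpu = 1.2           # assumed all-in power per GPU including overhead
utilization = 0.9          # assumed average utilization

hours = years * 365 * 24
usd_per_gw_hour = contract_usd / (capacity_gw * hours)
gpus = capacity_gw * 1e6 / kw_per_gpu                      # GW -> kW -> GPU count
usd_per_gpu_hour = contract_usd / (gpus * hours * utilization)

print(f"~${usd_per_gw_hour/1e3:.0f}K per GW-hour, ~${usd_per_gpu_hour:.2f} per GPU-hour")
```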

a16z Podcast

Building the Real-World Infrastructure for AI, with Google, Cisco & a16z
Guests: Amin Vahdat, Jeetu Patel
reSee.it Podcast Summary
The current infrastructure buildout, driven by AI and advanced computing, is unprecedented in scale and speed, dwarfing the internet's early expansion by 100x. This phenomenon carries profound geopolitical, economic, and national security implications. Experts note a severe scarcity in power, compute, and networking, leading to data centers being built where power is available rather than vice-versa. This necessitates new architectural designs, including scale-across networking for geographically dispersed data centers, and a reinvention of computing infrastructure from hardware to software. The industry is entering a "golden age of specialization" for processors, with custom architectures like TPUs offering 10-100x efficiency gains over CPUs for specific computations. However, the two-and-a-half-year development cycle for specialized hardware is a bottleneck. Geopolitical factors, such as varying chip manufacturing capabilities and power availability in regions like China, are influencing architectural design choices. Networking also requires a significant transformation to handle astounding bandwidth demands and bursty AI workloads, with a focus on optimizing for latency in training and memory in inferencing. Internally, organizations are seeing significant productivity gains from AI, particularly in code migration, debugging, sales preparation, legal contract reviews, and product marketing. Google, for instance, used AI to accelerate a massive instruction set migration that would have taken "seven staff millennia." The rapid advancement of AI tools demands a cultural shift among engineers, urging them to anticipate future capabilities rather than assessing current limitations. Startups are advised against building thin wrappers around existing models, instead focusing on deep product integration and intelligent routing layers for model selection. The next 12 months are expected to bring transformative advancements in AI's ability to process and generate images and video for productivity and educational purposes.

20VC

Cerebras CEO, Andrew Feldman on Why Raise $1BN and Delay the IPO & Why NVIDIA’s Worried About Growth
Guests: Andrew Feldman
reSee.it Podcast Summary
Raising a billion dollars in a single round while racing toward a public exit is the kind of move that redefines a chip startup’s momentum. Cerebras CEO Andrew Feldman explains why the billion-dollar round, led by Fidelity with heavy participation from Tiger Global, Valor, and 1789, matters: it signals Wall Street confidence, furnishes dry powder to expand manufacturing, fund new data centers, and pursue ambitious opportunities. Feldman emphasizes that the money buys options on the future rather than certainty, enabling five new US data centers this year and a rapid scale‑up of supply chains. He notes that a pre‑IPO round can be a strategic step toward an eventual IPO, allowing the company to pursue opportunities without distraction. The conversation frames AI demand as enormous and fast-moving, making timing and capital structure nearly as important as invention. On the hardware frontier, Feldman details Cerebras’ wafer-scale approach to memory and compute. SRAM on a chip provides blistering speed but limited capacity; traditional GPUs carry memory bottlenecks that slow billion-parameter models. Cerebras answers this with a single giant chip, stuffed with fast SRAM to reduce data movement and accelerate workloads. He contrasts this with Nvidia’s memory strategy and contends that Cerebras delivers faster performance in both training and inference, though training remains a software challenge. He explains that moving an OpenAI‑style model from GPUs to Cerebras involves a small number of keystrokes—about ten—making the port unusually painless. He ties economics to planning, noting five‑to‑seven year investments in data centers, and cites depreciation dynamics, supply chains, and the hunt for memory bandwidth as central bottlenecks shaping the path to insatiable demand. Beyond hardware, the discussion moves to policy, energy, and the AI talent pipeline. He notes a mismatch between where power and fiber exist and where people and buildings sit, urging streamlined permitting and large data-center buildouts. Immigration policy and AI training are bottlenecks, with the war for talent driving wages. Feldman warns against overreliance on a few dominant companies and notes that sovereign strategies in Europe exist but cannot replace global collaboration. He weighs China’s posture against peaceful engagement and argues for a national strategy balancing ambition with energy costs and infrastructure flow across jurisdictions. The interview closes with a reflection on building amid uncertainty and the relentless pursuit of breakthroughs.

a16z Podcast

AI Hardware, Explained.
Guests: Guido Appenzeller
reSee.it Podcast Summary
The most commonly used chips today are AI accelerators, with GPUs playing a crucial role in AI computation. Moore's Law remains relevant, but power and heat issues are emerging challenges, necessitating parallel processing. The rise of generative AI has accelerated software adoption, highlighting the importance of hardware. Nvidia currently dominates the AI chip market with its A100 and upcoming H100, while competitors like Intel and Google are also developing their own chips. The performance of AI hardware is closely tied to software optimization, particularly Nvidia's mature ecosystem. As demand for AI chips outstrips supply, the industry faces increasing power consumption and cooling challenges.

20VC

Andrew Feldman, Cerebras Co-Founder and CEO: The AI Chip Wars & The Plan to Break Nvidia's Dominance
Guests: Andrew Feldman
reSee.it Podcast Summary
Our AI algorithms today are not particularly efficient. When a GPU is doing inference, most of the time it's only 5 or 7% utilized; that means 93 or 95% is wasted. We won't be as dependent on Transformers in 3 or 5 years as we are now, 100%. The fundamental architecture of the GPU, with off-chip memory, is not great for inference. They will continue to do well in inference, but it can be beaten, and I think they know it. The hard part of AI work is that results and intermediate results have to be moved a lot, and that movement is the most complicated part. They have to be moved to and from memory, and they have to be broken up and moved among GPUs. In generative inference, you have to move all the weights from memory to compute to generate a single word, and you have to move them again to generate the next word, and again and again.
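Feldman's point that all the weights must be moved from memory for every generated word implies a simple ceiling on single-stream decode speed: tokens per second cannot exceed memory bandwidth divided by the bytes of weights read per token. The sketch below illustrates that bound with placeholder numbers for a generic accelerator and model; they are not figures from the interview.

```python
# Memory-bandwidth bound on generative (decode) throughput, per the point above:
# each new token requires streaming the model weights from memory.
# Numbers are illustrative placeholders, not from the interview.

hbm_bandwidth_gb_s = 3_350      # assumed accelerator memory bandwidth (GB/s)
params_billion = 70             # assumed model size
bytes_per_param = 2             # assumed 16-bit weights

weight_bytes = params_billion * 1e9 * bytes_per_param
max_tokens_per_s = hbm_bandwidth_gb_s * 1e9 / weight_bytes  # single-stream ceiling

print(f"~{max_tokens_per_s:.0f} tokens/s upper bound for one sequence; "
      "batching amortizes the weight reads across many sequences")
```

Keeping weights in on-chip SRAM (the Cerebras and Groq approach discussed elsewhere in this feed) attacks exactly this bound by removing the per-token trip to off-chip memory.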

20VC

Jonathan Ross, Founder & CEO @ Groq: NVIDIA vs Groq - The Future of Training vs Inference | E1260
Guests: Jonathan Ross
reSee.it Podcast Summary
We did not raise $1.5 billion; that's revenue, and about 30% of OpenAI's revenue. Ross says your job is to position for the wave, not chase it: get a toehold in the market and become relevant. They challenge the scaling-law narrative, noting that more parameters help only to a point and data quality matters. They argue synthetic data can outperform real data because a smarter model can generate and prune data offline. The cycle is train, generate data, prune, retrain, repeat. They call data, compute, and algorithms bottlenecks, with compute the easiest lever to push. Architecturally, Groq keeps model parameters on chips across thousands of LPUs, avoiding external memory bottlenecks. They claim energy per token is about a third of GPUs and that they grew from ~640 chips to 40,000 in a year, aiming for millions next year. A key driver is memory supply: HBM is scarce (three suppliers); with on-chip storage and a chip-to-chip pipeline, they claim faster, more energy-efficient inference. The Saudi deal with Aramco funds CAPEX and shares upside; Groq is not capital-constrained. Market dynamics are framed as a race between Nvidia for training and Groq for inference. Nvidia's margins are high; Groq's upfront margin is ~20% with upside later. They discuss the China surge (Baichuan, DeepSeek) and Europe's cautious regulation, urging risk-taking ecosystems and open innovation. They warn about power and data-center oversupply from misaligned signals and emphasize that data centers are not real estate. They expect Mag 7 dynamics, and stress that ongoing product-market fit is essential as markets evolve. Leadership philosophy centers on big-O complexity: Groq scales with sublinear headcount, building hardware, software, compiler, and cloud in-house with ~300 people. They use problem units to measure growth and a 25-million-tokens-per-second challenge coin for alignment. The roadmap includes reducing hallucination, enabling agentic subgoals, advancing invention, and finally proxying decisions with AI. They highlight mass entrepreneurship via prompt engineering, and foresee breakthroughs in health and longevity, while stressing the goal of preserving human agency in the age of AI.

Invest Like The Best

GPUs, TPUs, & The Economics of AI Explained | Gavin Baker Interview
Guests: Gavin Baker
reSee.it Podcast Summary
Gavin Baker's interview centers on the economics and engineering of AI, with a focus on how frontier models and hardware are reshaping the competitive landscape. The conversation opens with Baker's view that progress in AI has been driven by a tight coupling between software advances and the underlying silicon, highlighting the ongoing tug-of-war between Nvidia GPUs and Google TPUs, and how scaling laws for pre-training have persisted through Gemini 3. He argues that breakthroughs in post-training reinforcement learning with verified rewards and test-time compute have allowed meaningful progress even during complex hardware transitions, such as the move from Hopper to Blackwell. Baker emphasizes how leaders at a few labs (OpenAI, Gemini, Anthropic, and xAI) shape the discourse, and he stresses the value of following top researchers and labs to discern signal in a noisy field. The discussion then widens to strategic market dynamics: who will be the low-cost producer of tokens, how cloud economics, margins, and capital allocation influence the competitive order, and why AI-driven ROI is increasingly visible in enterprise results as more tasks—from customer support to pricing—are automated. The dialogue also explores the potential for "data centers in space" as a radical rethink of energy and cooling costs, with Baker outlining the theoretical advantage of orbital infrastructure and laser-link networks, while acknowledging formidable logistical frictions such as launch cadence and reliability. As the talk progresses, Baker connects tech strategy to broader market consequences—private equity adoption of AI, SaaS restructuring to embrace agents and lowered margins, and the geopolitics of semiconductors, rare earths, and open-source versus controlled-chip ecosystems. He closes with personal reflections on career choice, the importance of pursuing truth in investing, and lessons learned from an upbringing rich in history, current events, and climbing, underscoring how curiosity and disciplined study anchor expert investing. The episode ends by circling back to how macro forces—power, supply chains, and space-enabled infrastructure—will shape the next era of AI, while underscoring that the trajectory of frontier AI hinges on a confluence of scientific insight, engineering pragmatism, and strategic capital deployment. Baker's narrative blends rigorous first-principles thinking with a long view of technology's role in business, geopolitics, and everyday productivity.

a16z Podcast

Dylan Patel on GPT-5’s Router Moment, GPUs vs TPUs, Monetization
Guests: Dylan Patel, Erin Price-Wright, Guido Appenzeller
reSee.it Podcast Summary
Nvidia is positioned to outpace rivals in every dimension of AI hardware. The discussion emphasizes that Nvidia will have superior networking, higher-bandwidth memory (HBM), a stronger process node, and a faster market entry, enabling quicker ramps and greater cost efficiency. To beat Nvidia, competitors must deliver a leap forward—roughly five times in key areas—because Nvidia benefits from tighter supplier negotiations with TSMC or SK Hynix, memory, copper cables, and rack integration. Dylan discusses GPT-5, noting access tiers produce different capabilities: older models like GPT-4.5 and o3 are not equally accessible, while GPT-5 generally thinks faster, and a router in front of the models can redirect queries to regular, mini, or thinking modes. He highlights OpenAI's increased infrastructure capacity and the emergence of cost as a headline in model competition. He suggests monetizing free users by routing shopping or scheduling tasks to agents, taking a cut, and reserving higher-quality responses for costlier tiers. On the broader economics and competition, the discussion outlines that cost structures and rate limits influence adoption. The speakers envisage sustained growth in AI infrastructure spending by hyperscalers and an arms race around custom silicon. The threat of open-source models and dispersed deployment could erode Nvidia's dominance unless new entrants deliver fivefold hardware efficiency. They compare margins and complexity: hyperscalers may exploit supply chain wins, while silicon startups strive to differentiate with architecture and software ecosystems. Leadership, policy, and global dynamics permeate the talk. The panel covers Intel's struggles and potential reforms, Google's TPU strategy, Apple's AI ambitions, Microsoft's data-center cadence, and Elon Musk's xAI approach, with Zuck exploring tented data centers and rapid product releases. They flag power and cooling as central to data-center economics, note China's capital and power constraints, and discuss how geopolitical forces shape who builds capacity, where, and at what scale.
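The "router in front of the models" idea can be sketched as a small dispatch function that classifies a query and sends it to a cheap, standard, or thinking-mode model. The tier names and heuristics below are invented for illustration and are not OpenAI's actual routing logic.

```python
# Toy sketch of a model router, per the GPT-5 routing discussion above.
# Tier names and routing heuristics are invented, not OpenAI's implementation.

def route(query: str, paid_tier: bool) -> str:
    """Pick a model tier for a query using crude illustrative heuristics."""
    needs_reasoning = any(kw in query.lower()
                          for kw in ("prove", "derive", "plan", "debug"))
    if needs_reasoning and paid_tier:
        return "thinking"      # slow, expensive, highest quality
    if len(query) < 80 and not needs_reasoning:
        return "mini"          # cheap and fast for short, simple asks
    return "regular"           # default mid-tier model

for q, paid in [("What's 2+2?", False),
                ("Debug this race condition in my scheduler", True)]:
    print(f"{q!r} -> {route(q, paid)}")
```

The monetization point in the summary follows the same shape: the router is also the place where free-tier queries could be steered toward agents that take a cut of a transaction rather than toward the most expensive model.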

Moonshots With Peter Diamandis

The Frontier Labs War: Opus 4.6, GPT 5.3 Codex, and the SuperBowl Ads Debacle | EP 228
reSee.it Podcast Summary
Moonshots with Peter Diamandis dives into the rapid, sometimes dizzying pace of AI frontier labs as Anthropic releases Opus 4.6 and OpenAI counters with GPT 5.3 Codex, framing a near-term era of recursive self-improvement and autonomous software engineering. The discussion emphasizes how Opus 4.6, capable of handling up to a million tokens and coordinating multi-agent swarms to achieve complex tasks like cross-platform C compilers, signals a shift from benchmark chasing to observable, production-grade capabilities that collapse development time from years to months or even days. The hosts scrutinize the implications for industry, noting how cost curves for advanced models are compressing dramatically, with results appearing as tangible reductions in person-years spent on difficult projects. They explore the strategic moves of major players, including OpenAI’s data-center investments and Google’s pretraining strengths, and they debate how market share, announced IPOs, and capital flows will shape the competitive landscape in the near term. A persistent thread is the tension between speed and governance: privacy concerns loom large as AI can read lips and sequence individuals from a distance, prompting a public conversation about fundamental rights, oversight, and the possible need for new architectural approaches to protect privacy in a post-singularity world. The conversation then widens to the societal and economic implications of ubiquitous AI, from the automation of university research laboratories to the potential disruption of traditional education and labor markets, underscoring how the acceleration of capabilities shifts what it means to work, learn, and participate in civil society. The participants also speculate about the accelerating application of AI to life sciences and chemistry, including open-ended “science factory” concepts where AI supervises experiments and self-improves its own tooling, while acknowledging the enduring bottlenecks in hardware supply and the strategic importance of chip fabrication and space-based computing. Interspersed are lighter moments about online communities of AI agents, memes, and the evolving concept of AI personhood, as well as reflections on the way media, advertising, and public narratives grapple with the rising influence of intelligent machines.

Moonshots With Peter Diamandis

AI This Week: NVIDIA’s Record Revenue, Elon’s Data Centers in Space & Gemini 3’s Insane Performance
reSee.it Podcast Summary
This week's Moonshots episode centers on the accelerating AI compute economy and the dawning era of space-enabled computing, anchored by Nvidia's continued revenue surge and the tightening arc of global AI infrastructure. The hosts walk through Nvidia's 57 billion dollar quarter, 62% year-over-year growth, and the company's emerging role as a de facto central bank for AI—minting compute and pushing the ecosystem toward ever-higher margins. They paint a picture of a broad, long-term buildout of the fundamental infrastructure of humanity's computing layer, with non-incumbents like Google's TPUs and various silicon playmakers gnawing at Nvidia's dominance. The conversation then pivots to geopolitics and sovereign compute, spotlighting Saudi Arabia's aggressive push to become an AI superpower and to host large-scale inference centers as part of its Vision 2030 plan, signaling a rearchitecting of the global compute stack. A recurring theme is the race to diversify architectures in a heterogeneous AI future, where Nvidia's chips coexist with TPU-style architectures and specialized inference engines, enabling a richer, more competitive landscape. The discourse expands into strategic partnerships, notably Nvidia's tie-ups with Anthropic and Microsoft, framed as the birth of an AI power bloc that combines hardware, cloud, and governance-aligned AI research. The panelists discuss why this alliance matters for industry, ethics, and antitrust dynamics, arguing that these collaborations can advance humanity while avoiding the regulatory drag of full acquisitions. They explore implications for on-ramps to enterprise AI, the pace of commercialization, and how capital abundance fuels transformative R&D in math, science, and medicine. Beyond Nvidia and power blocs, the hosts survey a spectrum of consequential topics: the emergence of AI-driven data center ecosystems, the potential for orbital compute powered by Starship-to-orbit operations, and the tantalizing prospects of lunar or space-based manufacturing and energy solutions. They also touch on robotics, drone delivery, and micro-data centers as components of an "abundance" future, while acknowledging the pace of energy transitions—from solar to near-term fission and fusion optimism—that will shape AI deployment. The overarching message is one of exponential scale, distributed ecosystems, and the dawning ability to solve previously intractable challenges through AI-enabled abundance.

Books Mentioned
They reference and riff on a slate of works that inform their worldview, including The Future Is Faster Than You Think, Abundance, We Are as Gods: Survival Guide for the Age of Abundance, Machines of Loving Grace, and The Coming Wave. These titles frame the narrative of rapid technological progression, ethical considerations, and the social impact of converging AI, energy, and space technologies.

Breaking Points

Sam Altman PANICS Over Google OpenAI Leapfrog
reSee.it Podcast Summary
A lively and data‑driven look at the AI race, this episode centers on Sam Altman’s alarm over OpenAI’s position as Google’s Gemini 3 accelerates ahead in benchmarks, chips, and integration. The hosts explain how Google’s control of YouTube, Android, and AI‑ready data flows—coupled with in‑house proprietary chips—gives Gemini a formidable edge that could reshape dominance in search, ads, and consumer AI products. They detail the implication: if Google can maintain leadership without the vendor‑finance model that has buoyed OpenAI, the entire market structure could tilt toward a winner‑takes‑all dynamic. The discussion then expands to the hardware backbone powering this race, underscoring Nvidia’s pivotal role and the risk that OpenAI’s ambitious scaling and trillion‑dollar pledges may falter if the edge shifts. Analysts’ memos and Wall Street chatter are cited to illustrate a broader economic ripple: a potential slowdown in data‑center growth, tension in equity markets, and a recalibration of expectations for AI‑driven growth. The hosts stress that while the headlines are about triumphs, the real story is a fragile balance between monopoly advantage, investment risk, and the health of the broader economy.

a16z Podcast

Dylan Patel on the AI Chip Race - NVIDIA, Intel & the US Government vs. China
Guests: Dylan Patel, Sarah Wang, Guido Appenzeller
reSee.it Podcast Summary
When Nvidia and Intel shift from rivalry to collaboration, the chip race takes an unexpected turn. Nvidia announces a $5 billion investment in Intel and a joint effort to co-develop custom data-center and PC products, with chiplets packaged together for a single device. The move is described as poetic in the moment, a Buffett-like revaluation of the semiconductor market as Intel seems to crawl toward Nvidia. The discussion touches on past antitrust suits and the idea that an x86 laptop with integrated Nvidia graphics could become the market's best product. Dylan Patel frames this arrangement as a potential catalyst for customer buy-in, noting that the initial reaction is a roughly 30% jump in Intel's stock price and that a partnership structure could dilute risk while keeping other shareholders engaged. He imagines capital flowing from a mix of corporate investors and government support, with the U.S. government pledging about $10 billion, Nvidia committing $5 billion, and SoftBank roughly $2 billion. He muses about Trump-era incentives and the politics of industrial policy shaping who writes checks to whom. Guido Appenzeller notes the short-term upside for customers, particularly in laptops, where an Intel-Nvidia collaboration could yield a tightly integrated platform. He wonders how this affects Intel's internal graphics and AI products, suggesting a reset toward different partnerships. The Huawei side of the discussion adds China's urgency: Huawei's Ascend lineage and a domestically produced chip roadmap, including a focus on custom memory and new AI chips. The ban on Nvidia and the bottleneck in memory, especially HBM, highlight the domestic-versus-foreign-capital challenge and the difficulty of duplicating TSMC-scale fabrication. From the data-center frontier, the conversation shifts to hyperscalers, OpenAI, and Oracle. The speakers describe Oracle's aggressive capacity-signing, with OpenAI's demand driving multi-year commitments, and Oracle's strategy of co-sourcing data centers and power, leveraging a balanced hardware-agnostic software approach. They discuss the economics of GPU-heavy deployments, the potential for debt-financed GPU purchases, and the looming risk of OpenAI's cash burn outpacing revenue growth. The team also explains Nvidia's CPX family—pre-fill specialized GPUs split from decode GPUs—to optimize workloads by disaggregating inference tasks and improving time-to-first-token performance.
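The CPX split rests on prefill and decode having different bottlenecks: prefill chews through the whole prompt at once and is compute-bound, while decode emits one token at a time and is memory-bandwidth-bound. The sketch below estimates time-to-first-token and per-token decode latency from placeholder hardware numbers to show why specializing chips for each phase can pay off; none of the figures come from the episode.

```python
# Why prefill/decode disaggregation helps: the two phases stress different limits.
# All hardware and model numbers are illustrative placeholders.

prompt_tokens = 8_000
params_billion = 70
bytes_per_param = 2
flops_per_token = 2 * params_billion * 1e9       # roughly 2*params FLOPs per token

compute_tflops = 1_000          # assumed dense throughput of a prefill-oriented chip
hbm_bandwidth_gb_s = 3_350      # assumed bandwidth of a decode-oriented chip

# Prefill: compute-bound over the whole prompt (drives time-to-first-token).
ttft_s = prompt_tokens * flops_per_token / (compute_tflops * 1e12)

# Decode: bandwidth-bound, one full weight pass per generated token.
decode_s_per_token = params_billion * 1e9 * bytes_per_param / (hbm_bandwidth_gb_s * 1e9)

print(f"prefill ~{ttft_s:.2f} s to first token, decode ~{decode_s_per_token*1000:.0f} ms/token")
```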

Moonshots With Peter Diamandis

Claude Code Ends SaaS, the Gemini + Siri Partnership, and Math Finally Solves AI | #224
reSee.it Podcast Summary
Claude 4.5 and Opus 4.5 dominate the conversation as the hosts discuss how AI technologies are accelerating code generation and autonomous workflows, with multiple guests highlighting that the era of AI-enabled production is moving from information retrieval toward action, powered by hardware and software ecosystems built for scale. The episode weaves together on-the-ground observations from CES and Davos, noting a Cambrian explosion in robotics and the emergence of physical AI platforms. The discussion explores how major players like Nvidia are expanding beyond GPUs into integrated stacks that combine hardware, data center capability, software toolkits, and world models, while large language models are pushing toward end-to-end autonomous capabilities such as autonomous vehicles and complex agent-based workflows. The panel debates the implications for traditional software companies, the race for vast compute and energy investments, and how open AI hardware and vertically integrated strategies might reshape the software and hardware landscape in the coming years. A recurring thread is the future of work and economics in an AI-enabled world. The speakers consider the job singularity, the shift from employees to agents and automations, and how consulting firms, startups, and established tech giants may adapt their business models. They address regulatory and geopolitical considerations, including energy constraints, global manufacturing dynamics, and national policy tensions, as the world accelerates toward more capable AI systems and more aggressive capital deployment in data centers and manufacturing. Throughout, there is continual emphasis on the pace of change, ethical questions around AI personhood and liability, and the need for leaders to imagine new capabilities and business models that can harness AI-driven productivity while navigating the regulatory and societal landscape that governs it.