TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
- xAI is two and a half years old and has achieved rapid progress across multiple domains, outperforming many competitors that are five to twenty years older and have larger teams. The company claims to be number one in voice, image, and video generation, and to be leading in forecasting with Grok 4.20. Grok is integrated into apps like Imagine and Grokipedia, with Grokipedia positioned to become an Encyclopedia Galactica: much more comprehensive and accurate than Wikipedia, including video and image data not present on Wikipedia.
- xAI has built a 100,000-GPU training cluster and is about to reach 1,000,000 GPU-equivalents in training. The company emphasizes velocity and acceleration as the key drivers of leadership in technology.
- The company outlines its organizational structure: Grok Main and Voice (the main Grok model), a coding-focused model (Grok Code), an image and video model (Imagine), MacroHard (digital emulation of entire companies), and the infrastructure layers.
- Grok Main and Voice will be merged into one team. OpenAI released a voice product in September 2024; xAI says it started later and, within six months, developed an in-house model surpassing OpenAI's, with Grok now in over 2,000,000 Teslas and a Grok voice agent API available. The aim is to move beyond question answering toward building and deploying broader capabilities, such as handling legal questions, generating slide decks, or solving puzzles.
- The product vision stresses that Grok Main should be genuinely useful across engineering, law, and medicine, and valuable in the wide range of areas necessary to understand the universe.
- MacroHard is described as the effort to digitally emulate entire companies, enabling end-to-end digital output and the emulation of human workers across functions such as rocket design, AI chips, physics, and customer service. It is presented as potentially the most important project; the roof of the training cluster bears the MacroHard name. The team emphasizes that most valuable companies produce digital output and that MacroHard could replicate the outputs of companies like Apple, Nvidia, Microsoft, and Google across multiple domains.
- Imagine focuses on image and video generation. Six months into the project, Imagine released v1 and topped leaderboards across several metrics. The team highlights rapid iteration, with multiple product updates daily and model updates every other week. Users are generating close to 50,000,000 videos per day and generated 6,000,000,000 images in the last 30 days, which the team claims surpasses all other providers combined. The goal is to turn anything you can imagine into reality.
- Hakan discusses longer-form video capabilities, predicting that by year-end the model will generate 10-to-20-minute videos in one shot, with real-time rendering and interaction in imagined worlds. The expectation is that most AI compute will eventually be spent on real-time video understanding and generation, with xAI leading this trajectory and continuing to improve Grok Code toward state-of-the-art performance within two to three months.
- On MacroHard in detail: the team envisions a fully capable digital human emulator that can perform any computer-based task, including using advanced tools in engineering and medicine, such as rocket engines designed by AI. The project is framed as a response to the remaining gap between AI and human capability in this domain, making it a high-priority area for recruiting top talent.
- XChat and X Money are described as major products in development. XChat is planned as a standalone messaging app with full features (encrypted messaging, audio and video calls, screen sharing, etc.), with no advertising or hooks in Grok Chat. X Money is currently in closed beta within the company, moving toward an external beta and then a worldwide rollout, intended to be the central hub for all monetary transactions, including mortgages, business loans, lines of credit, stock ownership, and crypto.
- The presentation also emphasizes the synergy between xAI and SpaceX, noting that SpaceX has acquired xAI and that orbital AI data centers are being pursued to dramatically increase available AI training compute. FCC filings indicate plans to launch a million AI satellites for training and inference, with launches potentially adding 200–300 gigawatts of capacity per year; longer-term goals include moon-based factories and satellites, and a mass driver to launch AI satellites into orbit. The lunar mass driver is described as a path to exponentially greater compute, potentially reaching gigawatts or terawatts per year, with the broader ambition of enabling a self-sustaining lunar city and interplanetary expansion.
- The overall message stresses extraordinary progress, a relentless push toward greater compute and capability, and aggressive growth in user adoption and product scope. The company frames its trajectory as a fundamental shift toward real-time, scalable AI that can transform work, communication, and the management of digital assets across the globe and beyond Earth.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the transformative potential of combining artificial intelligence, quantum computing, and big data. They predict a future where physical, digital, and biological dimensions merge, creating a new world. They anticipate significant changes in society within the next decade.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker believes humanoid robots will be the biggest product ever, with insatiable demand, like having a personal C-3PO and R2-D2. They mentioned that "tens of billions of robots" is at least a decade away, but the growth will be very fast. The speaker's goal is to produce a million robots by 2029 or 2030, which they consider a reasonable target, and then move towards sustainable abundance.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the need for a third competitor in the AI industry, alongside OpenAI, Microsoft, and Google DeepMind. They hint at their own new AI company that will soon be revealed. They suggest that this new venture may involve collaboration with Microsoft, Twitter, and Tesla, although no specific details are provided. The speaker also mentions the importance of regulation in the field of AI.

Video Saved From X

reSee.it Video Transcript AI Summary
We will become a hybrid species, still human but enhanced by AI, no longer limited by our biology, and free to live life without limits. We're going to find solutions to diseases and aging. Having worked in AI for sixty-one years, longer than anyone else alive, and being named one of Time's 100 most influential people in AI, I predicted computers would reach human-level intelligence by 2029, and some say it will happen even sooner.

Video Saved From X

reSee.it Video Transcript AI Summary
The current wave is also wrong. So the idea that, you know, you just need to scale up, or have them generate thousands of sequences of tokens and select the good ones, to get to human-level intelligence, and that within a few years (two years, I think, for some predictions) you are going to have a country of geniuses in a data center, to quote someone whom we shall not name: I think it's nonsense. It's complete nonsense. I mean, sure, there are going to be a lot of applications for which systems in the near future are going to be PhD level, if you want. But in terms of, you know, overall intelligence, no, we're still very far from it. I mean, when I say very far, it might happen within a decade or so. So it's not that far.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker believes AI will make intelligence commonplace in the next decade, providing free access to expertise like medical advice and tutoring, which could solve shortages in healthcare and mental health. This shift will bring significant changes, raising questions about the future of jobs and the potential for reduced work weeks. While excited about AI's innovative potential, the speaker acknowledges the uncertainty and fear surrounding its development. The speaker suggests AI may eventually handle tasks like manufacturing, logistics, and agriculture. Humans will still be needed for some things, and society will decide what activities to reserve for humans.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker expresses optimism about eventually achieving artificial general intelligence (AGI) and artificial superintelligence (ASI), suggesting it could occur in our lifetimes, over the next few decades, or perhaps even centuries. The timeline is uncertain: we'll see how long it takes. The speaker notes that AI is bound by the laws of physics, implying physical constraints will limit progress. Nevertheless, they argue that the potential upper bound on intelligence and on what we can command such systems to accomplish remains very high. The overall takeaway is a recognition of vast future possibilities tempered by fundamental physical limits. This framing leaves room for dramatic advancements while grounding expectations in physics.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 argues that the human brain is a mobile processor: it weighs a few pounds and consumes around 20 watts. In the brain, signals are sent through dendrites, with a channel frequency in the cortex of about 100 to 200 Hz. The signals themselves are electrochemical wave propagations, moving at about 30 meters per second. When comparing the brain to a data center, there is a vast gap in several dimensions. In a data center, you could have about 200 megawatts of power (instead of 20 watts), several million pounds of mass (instead of a few pounds), about 10,000,000,000 Hz on the channel (instead of roughly 100–200 Hz), and signals propagating at the speed of light, 300,000 kilometers per second (instead of about 30 meters per second). Thus, in terms of energy consumption, space, bandwidth on the channel, and speed of signal propagation, there are six, seven, or eight orders of magnitude differences in all four dimensions simultaneously. Given these disparities, the question arises whether human intelligence will be the upper limit of what’s possible. The speaker answers emphatically, “absolutely not.” As our understanding of how to build intelligence systems develops, we will see AIs go far beyond human intelligence. The speaker likens this to other domains where humans are outmatched by machines in specific capabilities, such as speed, strength, and sensory reach. Humans cannot outrun a top fuel dragster over 100 meters, cannot lift more than a crane, and cannot see beyond the Hubble Telescope. Yet machines already surpass these limits in certain areas. The speaker foresees a similar trajectory for cognition: just as machines can outperform humans in other tasks, AI will eventually exceed human cognitive capabilities as technology and understanding advance.
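The four ratios quoted above can be checked with a quick back-of-the-envelope script (a sketch only: the talk's "few pounds" and "100 to 200 Hz" are approximated here as ~3 lb and ~150 Hz, and the data-center mass as ~5 million pounds):

```python
import math

# Approximate figures as quoted in the talk (brain vs. data center).
brain      = {"power_W": 20,    "mass_lb": 3,   "channel_Hz": 150,  "signal_m_s": 30}
datacenter = {"power_W": 200e6, "mass_lb": 5e6, "channel_Hz": 10e9, "signal_m_s": 3e8}

# Order-of-magnitude gap in each dimension.
for dim in brain:
    ratio = datacenter[dim] / brain[dim]
    print(f"{dim}: ~10^{round(math.log10(ratio))}")
```

Running this yields gaps of roughly 10^6 to 10^8 across the four dimensions, matching the "six, seven, or eight orders of magnitude" claim.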

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker believes their company is the premier one for developing and scaling products to billions of people and is leading in the next generation of computing platforms with glasses that are doing exceptionally well. They think glasses will be the best form factor for AI because they can see and hear what you do, and once a display and holograms are added, they'll generate a UI. The speaker envisions a future where AI glasses observe your life and follow up on things for you, providing information in real time. They believe not having AI glasses will create a cognitive disadvantage, similar to needing vision correction and not having optical glasses. The company is also focused on entertainment, culture, and personal relationships, believing AI can be valuable in these areas.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses building AI factories to run companies, describing it as more significant than buying a TV or bicycle. They state that the world is building trillions of dollars worth of AI infrastructure over the next several years, characterizing this as a new industrial revolution. The speaker compares AI factories to historical innovations like the steam engine and railroads, but asserts that AI factories are much bigger due to the current scale of the world economy. They claim that with a $120 trillion global GDP, AI factories will underpin a substantial portion of it, suggesting that trillions of dollars in AI factories supporting a hundred trillion dollars of the world's GDP is a sensible proposition.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker claims that AI advancements are entering completely new territory, which some people find scary. They suggest that humans may not be needed for most things in the future.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker reframes computers as AI factories that produce tokens (numbers). These AI factories should be used for three fundamental things, the first being to train the next frontier model, so you can build the best AI and get to market first; the goal is to train it as fast as possible. On performance, Rubin is described as a 4x leap over Blackwell, meaning a training run that would have taken four months could finish in one month.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the need for a third player in the AI industry, alongside companies like OpenAI, Microsoft, and Google DeepMind. They hint at their own new AI company that will soon be revealed. The speaker suggests that this new venture may involve integrating the capabilities of Twitter and Tesla, similar to the successful relationship between OpenAI and Microsoft. They also mention the importance of regulation in the AI field.

Cheeky Pint

Reiner Pope of MatX on accelerating AI with transformer-optimized chips
Guests: Reiner Pope
reSee.it Podcast Summary
Reiner Pope, co-founder and CEO of MatX, discusses the motivations behind building transformer-optimized chips and how his team aims to outperform existing AI accelerators by blending memory technologies and honing low-precision arithmetic. He traces the lineage from Google's TPUs to the current focus on LLM inference and the need for hardware that scales with growing matrix sizes and precision requirements. The conversation covers architectural choices such as combining HBM for high throughput with SRAM for low-latency weights, the design of a large, power-efficient systolic engine, and a new approach to low-precision formats that can accelerate training and inference while preserving model quality. Pope emphasizes economics as a core metric, measuring tokens per second and dollars per token, and explains why throughput often drives business value more than peak raw speed. He reflects on the historical arc of neural-network hardware, noting the parallelism inherent in all AI accelerators and the shift from CPU-centric designs to devices optimized for matrix multiplication. The interview delves into the practicalities of chip development, including the waterfall-like process of hardware design, verification, and tape-out, as well as the realities of fabrication at leading-edge nodes. Pope outlines MatX's strategy to mitigate supply-chain risk by pre-committing buyers, maintaining large capital reserves, and planning for multi-gigawatt production to meet demand from major AI clusters. The discussion also touches on the importance of ecosystem and software alignment, arguing that while CUDA-like software investments matter for frontier labs, a materially optimized hardware stack with tailored ML software can yield significant gains per dollar. When asked about the future, Pope predicts a continued push toward higher throughput and lower latency, with context- and memory-management improvements playing a central role in the next phase of AI product refinement.
The exchange closes on the theme of technical curiosity and practical problem-solving, highlighting how architectural intuition, rigorous simulation, and disciplined iteration drive progress in hardware for AI at scale.
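Pope's tokens-per-second and dollars-per-token framing can be illustrated with a toy amortization model (all figures are hypothetical, chosen only to show how throughput drives the unit economics):

```python
# Toy model of inference economics: dollars per token from amortized
# hardware cost plus energy, divided by sustained throughput.
# All figures are hypothetical, for illustration only.
def dollars_per_token(system_cost_usd, lifetime_hours, power_kw,
                      usd_per_kwh, tokens_per_second):
    amortized = system_cost_usd / lifetime_hours   # $/hour of hardware
    energy = power_kw * usd_per_kwh                # $/hour of electricity
    tokens_per_hour = tokens_per_second * 3600
    return (amortized + energy) / tokens_per_hour

# Hypothetical accelerator: $200k system, 3-year life, 10 kW, 50k tok/s.
cost = dollars_per_token(200_000, 3 * 365 * 24, 10, 0.10, 50_000)
print(f"${cost * 1e6:.3f} per million tokens")
```

The key property: doubling sustained throughput halves the cost per token, which is why throughput, not peak speed, is the business metric.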

20VC

Eiso Kant, CTO @Poolside: Raising $600M To Compete in the Race for AGI | E1211
Guests: Eiso Kant
reSee.it Podcast Summary
Poolside is racing toward AGI, and its latest $500 million round stakes its entry in that race. The team believes the gap between machine intelligence and human capabilities will keep shrinking, with human-level skills appearing where they are economically valuable before true AGI arrives. Foundation models compress vast web data into a neural net, offering language understanding yet showing clear limits without more data. Poolside's core claim is a dataset capturing the intermediate reasoning, trials, and code that lead to final products, including iterative testing and failures. AlphaGo-style reinforcement learning in simulated environments demonstrated how synthetic data can bootstrap capabilities, while real-world data such as car autopilot engagements provide non-simulatable learning signals. They describe reinforcement learning from code execution feedback: in an environment built from 130,000 codebases, the model explores solutions to tasks and learns from test results. Deterministic feedback via code execution, plus human feedback, guides improvement. They critique the idea that synthetic data alone solves data gaps, noting the need for an oracle of truth to judge which solutions are better or worse. Humans remain essential for labeling and guiding reasoning, while compute and data scale together. On scaling and economics, they argue scaling laws show that more data and larger models yield better results, and that compute matters but is table stakes. They anticipate continued growth in hardware advances, synthetic-data utility, and distillation of large models into smaller, cost-effective ones. They discuss a hardware race among Nvidia, Google, and Amazon, with chips like TPUs and Blackwell, noting that not all training workloads can be upgraded immediately. They warn about latency, data-center buildouts, and the need for globally distributed infrastructure near users.
They emphasize four ingredients: compute, data, proprietary applied research, and talent, with talent especially critical in Europe as a future hub. They note London and Paris teams and the influence of DeepMind, Yandex, and others. They stress progress requires relentless focus; a premortem warns that stumbling or easing up means losing the race. They close by reflecting on motivation, the journey with people, and the reasons behind the pursuit, insisting the race must be pursued with excellence in development and go‑to‑market.
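The "reinforcement learning from code execution feedback" loop described above can be sketched minimally: a candidate program is scored by deterministically executing a test suite, and that score becomes the reward. Everything here (the function name, the pass/fail reward scheme) is an illustrative assumption, not Poolside's actual pipeline:

```python
import os
import subprocess
import sys
import tempfile

def execution_reward(solution_code: str, test_code: str) -> float:
    """Reward 1.0 if the candidate solution passes all tests, else 0.0.

    The feedback is deterministic: the same code and tests always
    produce the same reward, unlike a learned or human judge.
    """
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "candidate.py")
        with open(path, "w") as f:
            f.write(solution_code + "\n" + test_code)
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, timeout=10)
        return 1.0 if proc.returncode == 0 else 0.0

# A correct candidate earns reward 1.0; a buggy one earns 0.0.
good = "def add(a, b):\n    return a + b\n"
bad  = "def add(a, b):\n    return a - b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
print(execution_reward(good, tests), execution_reward(bad, tests))
```

In an RL setup this scalar would be the reward for a policy proposing `solution_code`; graded rewards (fraction of tests passed) are a common refinement.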

20VC

Aidan Gomez: What No One Understands About Foundation Models | E1191
Guests: Aidan Gomez
reSee.it Podcast Summary
The reality of the matter is there's no market for last year's model. If you throw more compute at the model, if you make the model bigger, it'll get better. There will be multiple models, verticalized and horizontal, and consolidation is coming. It's dangerous when you make yourself a subsidiary of your cloud provider. I grew up in rural Ontario. We couldn't get internet; dial-up lasted for years after high-speed arrived elsewhere. That early hardship fueled a fascination with tech, coding, and gaming that taught resilience. On the scaling question, 'the single biggest rate limiter that we have today' is not just more compute but smarter data and algorithms. There will be both large general models and smaller focused ones. The pattern is to 'grab, you know, an expensive big model, prototype with it, prove that it can be done, and then distill that into an efficient focused model at the specific thing they care about.' 'The major gains that we've seen in the open-source space have come from data improvements': higher-quality data and synthetic data. We need to 'let them think and work through problems' and even 'let them fail.' 'Private deployments, like inside their VPC or on-prem,' are essential, as data stays on the customer's hardware. Enterprises are sprinting toward production, focusing on employee augmentation and productivity. The hype around 'agents' is justified; they could transform workflows, but the value will come from human-machine collaboration. Robotics is viewed as 'the era of big breakthroughs' once costs fall. Beyond models, the drive is 'driving productivity for the world and making humans more effective' and pushing growth over displacement.

a16z Podcast

Why Technology Still Matters with Marc Andreessen
Guests: Marc Andreessen
reSee.it Podcast Summary
The a16z podcast, hosted by Steph Smith, features co-founder Marc Andreessen discussing the significance of building the next generation of technologies. They explore the historical context of technology, emphasizing that advancements have consistently improved human life, contrasting past hardships with today's benefits. Andreessen argues that technology is essential for progress, asserting that it is the only reason life has improved over time. He highlights the psychological resistance to new technologies, illustrating this with historical examples like fire and the bicycle, which faced societal backlash due to fears of change and disruption of social order. Andreessen notes that every new technology undergoes a cycle of skepticism, often starting with ignorance, followed by rational arguments against it, and ultimately leading to a moral panic about its implications. The conversation shifts to the impact of remote work, particularly post-COVID, which has fundamentally altered the traditional role of cities as centers of innovation. Andreessen believes this shift allows for a re-examination of how and where people work, potentially leading to new community structures that better suit modern needs. He reflects on the challenges of maintaining an optimistic view of technology amidst societal pessimism, suggesting that this negativity often stems from complacency and a lack of perceived need for further progress. Andreessen argues that the entrepreneurial spirit remains vital, as new ideas and innovations are essential for societal advancement. The discussion also touches on the evolution of capitalism from individual-driven to managerial systems, where bureaucratic structures often stifle innovation. Andreessen posits that true progress comes from starting new ventures rather than attempting to reform existing institutions, which tend to resist change.
Ultimately, he expresses optimism about the future, citing advancements in AI, biotech, and crypto as areas ripe for innovation. He believes that as more individuals gain access to technology and remote work opportunities, the potential for groundbreaking ideas and societal progress will increase, emphasizing the importance of building and creating in a world that often resists change.

Generative Now

Andrew Feldman: Building the World’s Largest and Fastest Computer Chip for AI
Guests: Andrew Feldman
reSee.it Podcast Summary
Imagine a dinner-plate-sized chip that runs AI at unprecedented scale without racks of GPUs. Cerebras’ Wafer Scale Engine 3 delivers four trillion transistors and 900,000 cores on a single wafer. Feldman says the hard part of AI is the interchip communication, so the solution is to keep computation on one giant wafer instead of fragmenting across many devices. The result is faster training and lower power, supported by an integrated system for data handling, cooling, and networking. Over the past year Cerebras has deployed exaflop-scale AI compute with customers across North America, Europe, and the Middle East, including cloud partners. The approach contrasts with GPU clusters by removing the need for large-scale distributed compute; Nvidia’s Mellanox acquisition underscored the same problem. Cerebras’ technology has been applied to diverse challenges: predicting virus mutations with Argonne National Laboratory, analyzing epigenomic data with GlaxoSmithKline, and training an Arabic language model with G42 that powers regional services. They collaborate with Mayo Clinic and TotalEnergies on imaging, genomics, and reservoir modeling. Looking ahead, Feldman says the path is iterative: scale hardware, improve software utilization, and leverage sparsity to cut compute without losing accuracy. He envisions broader AI adoption in healthcare and industry, with sovereign clouds expanding access to massive AI compute. The hardware-software-data ecosystem will continue to evolve, and the company aims to be 10x better rather than marginally improved. Their focus on domain-specific efficiency—rather than chasing a single architecture—helps them adapt as models evolve, from transformers to new ideas. The pace is relentless.

Cheeky Pint

A Cheeky Pint with OpenAI cofounder Greg Brockman
Guests: Greg Brockman
reSee.it Podcast Summary
OpenAI's path ran counter to the standard startup script: the team chased the technology first, with the problem unclear at the outset, and this exploration proved the hardest project. They observed that progress across AI subfields was converging on deep learning, driven by compute scale and scalable algorithms. They weathered early skepticism, noting that the Dota 2 project showed scale needed to keep growing and that input-driven experimentation trumped fixed milestones. The scaling hypothesis emerged from practice, not premise, with 16 cores evolving into large-scale training; later, GPT-3 and GPT-4 demonstrated broader viability, first via AI Dungeon as an early paying customer and eventually as a reliable platform. Personalization and memory became crucial product directions, bridging product and research. They reflected on medicine, education, life coaching, coding, and enterprise use, predicting a future with base models and better integration via plugins and multi-modal interfaces, while power and data-wall bottlenecks shift with intent and policy. The team outlined levels of AGI, expecting level four ('innovators') and emphasizing continuous step-function breakthroughs each year.

TED

AI Won’t Plateau — if We Give It Time To Think | Noam Brown | TED
Guests: Noam Brown
reSee.it Podcast Summary
The progress in AI over the past five years is primarily due to scale, with models becoming larger and trained on more data. Concerns exist about potential plateaus in AI development, but Noam Brown believes progress will accelerate. His research on poker AIs revealed that allowing the bot to think longer significantly improved performance, equating 20 seconds of thought to a 100,000x model scale increase. This insight applies beyond games, as demonstrated by OpenAI's o1 language models, which benefit from extended thinking time, suggesting a new paradigm for AI development.

Lenny's Podcast

He saved OpenAI, invented the “Like” button, and built Google Maps: Bret Taylor (Sierra)
Guests: Bret Taylor
reSee.it Podcast Summary
Bret Taylor, former CTO of Facebook and former co-CEO of Salesforce, discusses the future of the AI market, emphasizing a shift towards agents and outcomes-based pricing. He reflects on his career, including his early mistakes at Google, where he learned valuable lessons about product differentiation and user experience, particularly in the development of Google Maps. Taylor highlights the importance of a flexible identity as a builder, adapting to the needs of the company and focusing on impactful work. The conversation covers the significant potential of AI agents in transforming business operations, particularly in customer service, where they can automate interactions and improve efficiency. Taylor believes that the software industry is moving towards a model where agents will become the new standard, akin to the evolution of SaaS. He argues that this shift will lead to measurable productivity gains, as agents can autonomously accomplish tasks that traditionally required human intervention. Taylor also discusses the importance of outcomes-based pricing, which aligns the interests of software providers and their clients by tying costs to the value delivered. He shares insights on effective go-to-market strategies for AI products, stressing the need for founders to choose the right sales model based on their target market and product type. Throughout the conversation, Taylor emphasizes the need for continuous improvement and adaptation in the AI space, suggesting that companies should focus on understanding their customers' needs and leveraging AI to enhance their offerings. He concludes by encouraging a mindset of innovation and flexibility, which he believes will be crucial for success in the rapidly evolving tech landscape.

a16z Podcast

Unlocking Creativity with Prompt Engineering
Guests: Guy Parsons
reSee.it Podcast Summary
In this episode, Guy Parsons discusses the emerging role of prompt engineers alongside AI technologies like DALL-E 2, Midjourney, and Stable Diffusion. He highlights the challenges designers face when clients struggle to articulate their needs, emphasizing the importance of effective prompting to guide AI outputs. Parsons shares insights from his experience writing a prompt book, noting that successful prompting requires understanding how to describe images as if they already exist. He estimates spending hundreds of hours mastering these tools and observes that the field is evolving rapidly, with new capabilities allowing users to prompt with images. He discusses the nuances of different AI models, likening their prompting systems to learning different languages rather than just switching software. Parsons also points out the potential for prompt engineering to become a specialized skill, while acknowledging that user-friendly interfaces may make it accessible to more people. He envisions a future where AI tools enhance creativity and design processes, ultimately integrating into various industries.

Possible Podcast

OpenAI Chairman Bret Taylor on the new jobs AI will usher into the future
Guests: Bret Taylor
reSee.it Podcast Summary
The current wave of artificial intelligence feels unlike past tech fads, because large language models are already delivering practical utility across education, healthcare, law, and everyday life. The guest envisions a future where an AI agent could handle an insurance change, tutor a student in esoteric topics, or draft a lease analysis for free, all in real time. He argues this democratization of expertise could transform learning, medical advice, and access to professional help worldwide. Despite Silicon Valley's bubble talk, he believes the trend will ultimately redefine how we live and work over the next decade. He outlines three engines driving progress: algorithms, data, and compute. The Transformer architecture catalyzed the current wave, followed by chain-of-thought breakthroughs powering newer models. Data remains abundant not only in text but in video, images, and audio, with simulation and synthetic data generation opening new frontiers. Compute continues to scale, as Nvidia's rising stock reflects, enabling longer training runs and more capable inference. Because progress can advance in one area even if another stalls, the field benefits from parallel momentum in all three, increasing the odds of continued breakthroughs for the foreseeable future. Turning to practical applications, Sierra builds customer-facing AI agents that can operate across chat and phone channels. Harmony powers retail and subscription services, helping customers manage plans, while Sonos' AI assists with setup and troubleshooting. The firm highlights that bringing AI to voice calls can dramatically reduce contact costs, from roughly $10–$20 per call to far less, enabling more proactive, 24/7 interactions. The agents are multilingual, empathetic, and able to act on a company's systems, turning negative moments into positive brand experiences. The conversation touches on new roles like conversation designers and AI architects who craft these agent behaviors. 
On entrepreneurship, the guest compares AI markets to cloud markets, with three layers: infrastructure, toolmakers, and applications delivering end-user solutions. He argues most future value will come from building problem-solving applications, not just from training models, and predicts many new roles such as AI architects and conversation designers. Voice will reshape human-computer interaction, moving toward agentic interfaces where personal and work agents manage conversations, tasks, and decisions. He envisions 'super agency' enabling a child anywhere to access advanced education, a future where technology democratizes expertise and expands opportunity.

The Pomp Podcast

Is A Recession Coming Soon?!
Guests: Jordi Visser, Scott Bessent
reSee.it Podcast Summary
Tesla is viewed not just as a car company but as a leader in robotics and AI, which influences its stock valuation. Jordi Visser discusses the Federal Reserve's decision not to cut interest rates, suggesting uncertainty in economic policies. He emphasizes the importance of the upcoming April 2nd tariff deadline, noting that market sentiment is currently negative but may present buying opportunities. Investors are cautious, particularly with the S&P 500 below its 200-day moving average. Visser highlights a rotation in market investments, with the "Magnificent Seven" tech stocks facing declines while other sectors show resilience. He believes that the market is frozen until clearer signals emerge post-April 2nd. The discussion also touches on the impact of AI on productivity and the economy, with Nvidia's recent event illustrating the momentum in tech. Visser predicts that as AI and hardware converge, companies that adapt will thrive, suggesting a potential hardware boom. He encourages investors to focus on sectors benefiting from AI advancements and to remain optimistic about future growth despite current challenges.