reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
This is an AI avatar created with HeyGen's Avatar 3.0, featuring unlimited looks, showcasing advancements in AI video technology. This technology aims to revolutionize digital content creation by simplifying video production. Users can easily change their AI character's appearance, including clothing, poses, and camera angles. This flexibility eliminates the need for repeated filming or hiring actors, saving time and resources. The technology is becoming increasingly user-friendly, making it accessible for various applications like marketing, teaching, and online content creation. The speaker suggests that in the future, individuals might have digital twins creating content autonomously.

Video Saved From X

reSee.it Video Transcript AI Summary
- xAI is two and a half years old and has achieved rapid progress across multiple domains, outperforming many competitors that are five to twenty years older and have larger teams. The company claims to be number one in voice, image, and video generation, and to be leading in forecasting with Grok 4.20. Grok is integrated into apps like Imagine and Grokipedia, with Grokipedia positioned to become an Encyclopedia Galactica: far more comprehensive and accurate than Wikipedia, including video and image data not present on Wikipedia.
- xAI has built a 100,000-GPU training cluster and is about to reach 1,000,000 GPU-equivalents in training. The company emphasizes velocity and acceleration as the key drivers of leadership in technology.
- The company outlines four organizational areas: Grok Main and Voice (the main Grok model), a coding-focused model (Grok Code), an image and video model (Imagine), and MacroHard (digital emulation of entire companies), plus the underlying infrastructure layers.
- Grok Main and Voice will be merged into one team. OpenAI released a voice product in September 2024; xAI states it started later and, in six months, developed an in-house model surpassing OpenAI's, with Grok in over 2,000,000 Teslas and a Grok voice agent API. The aim is to move beyond question answering toward building and deploying broader capabilities, such as handling legal questions, generating slide decks, or solving puzzles.
- The product vision stresses that Grok Main is intended to be genuinely useful across engineering, law, and medicine, valuable in the wide range of areas needed to understand the universe and make things useful.
- MacroHard is described as the effort to digitally emulate entire companies, enabling end-to-end digital output and the emulation of human workers across various functions (rocket design, AI chips, physics, customer service, etc.). MacroHard is presented as potentially the most important project, with the roof of the training cluster bearing the MacroHard name. The team emphasizes that most valuable companies produce digital output and that MacroHard could replicate the outputs of companies like Apple, Nvidia, Microsoft, and Google, among others, across multiple domains.
- Imagine focuses on image and video generation; six months into the project, Imagine released v1 and topped leaderboards across several metrics. The team highlights rapid iteration, with multiple product updates daily and model updates every other week. Users are generating close to 50,000,000 videos per day and generated 6,000,000,000 images in the last 30 days, which the company claims surpasses other providers combined. The goal is to turn anything you can imagine into reality.
- Hakan discusses longer-form video capabilities, predicting that by year's end the model will generate 10-to-20-minute videos in one shot, with real-time rendering and interaction in imagined worlds. The expectation is that most AI compute will go to real-time video understanding and generation, with xAI leading this trajectory and continuing to improve Grok Code toward state-of-the-art performance within two to three months.
- MacroHard details: the team envisions building a fully capable digital human emulator able to perform any computer-based task, including using advanced tools in engineering and medicine, like rocket engines designed by AI. The project is framed as a response to the remaining gap between AI and human capability in this domain, making it a high-priority area for recruiting top talent.
- XChat and X Money are described as major products in development. XChat is planned as a standalone messaging app with full features (encrypted messaging, audio and video calls, screen sharing, etc.) and no advertising or hooks in Grok Chat. X Money is currently in closed beta within the company, moving toward external beta and then a worldwide rollout, intended to be the central hub for all monetary transactions, including mortgages, business loans, lines of credit, stock ownership, and crypto.
- The presentation also emphasizes the synergy between xAI and SpaceX, noting that SpaceX has acquired xAI and that orbital AI data centers are being pursued to dramatically increase available AI training compute. FCC filings indicate plans to launch a million AI satellites for training and inference, with annual launches potentially reaching 200–300 gigawatts per year, and longer-term goals including moon-based factories, satellites, and a mass driver to launch AI satellites into orbit. The lunar mass driver is described as a path to exponentially greater compute, potentially reaching gigawatts or terawatts per year, with the broader ambition of enabling a self-sustaining lunar city and interplanetary expansion.
- The overall message stresses extraordinary progress, a relentless push toward greater compute and capability, and aggressive growth in user adoption and product scope. The company frames its trajectory as a fundamental shift toward real-time, scalable AI that can transform work, communication, and the management of digital assets across the globe and beyond Earth.

Video Saved From X

reSee.it Video Transcript AI Summary
Introducing Microsoft Designer, an AI-powered design app that simplifies professional-quality designs. Just tell Designer what you need, and it will provide great options from its vast image catalog. You can also add your own images or generate new ones using AI. Designer offers arrangement suggestions and writing assistance to customize your design. It even has tools to streamline image production tasks. For example, you can add fireworks with magic motion effects. Sharing your creations is effortless, with AI-powered recommendations for captions and hashtags. Designer's AI assistant ensures excellent results, whether it's attracting people to events, boosting sales, or simply bringing smiles. Try it for free at designer.microsoft.com.

Video Saved From X

reSee.it Video Transcript AI Summary
Welcome to Futuristo, the platform revolutionizing content creation with AI. We offer short, impactful videos, viral faceless content, AI avatars, and customized images designed specifically for you. Stay tuned for even more exciting developments as Futuristo continues to push the boundaries of AI innovation. Join us as we create the future of content creation.

Video Saved From X

reSee.it Video Transcript AI Summary
Introducing the Humane AI PIN, a compact device and software platform that offers all-day battery life. With no wake words, it only activates when engaged through voice, touch, gesture, or the laser ink display. The AI PIN features its own connectivity through the Humane network and runs on a Qualcomm Snapdragon chipset for fast AI processing. It includes an ultra-wide RGB camera, depth sensor, motion sensors, and a unique speaker for immersive sound. The device prioritizes privacy with a trust light indicator and a dedicated privacy chip. It offers various AI experiences without the need for apps, such as music streaming, messaging, web browsing, and more. The AI PIN also allows for seamless retail transactions, photo and video capture, and personalized recommendations. Accessories like clips and shields are available for customization and protection.

Video Saved From X

reSee.it Video Transcript AI Summary
In this video, we explore a world where presentations and artificial intelligence come together. To use this technology, simply input the topic or title of your presentation and let Decktopus do the thinking. You can also choose your goal for the presentation to optimize the suggested content. With this tool, you'll have a first draft to start working with.

Video Saved From X

reSee.it Video Transcript AI Summary
Welcome to Futuristo, the platform revolutionizing content creation with AI. We offer short, impactful videos, viral faceless content, AI avatars, and personalized images. Our goal is to create what's next in AI, and we have exciting plans in store for you. Join us as we shape the future of content creation. Futuristo, where AI takes the lead.

Video Saved From X

reSee.it Video Transcript AI Summary
I'm using my Vision Pro, and this is my AI clone lip syncing to my voice in real time. This AI takes my audio input and generates a video of me speaking instantly. You can create your own AI clone by uploading a three-minute video of yourself. In 24 hours, you'll receive your clone. By switching the camera, you can use your clone in meetings while you relax. It's that easy!

Video Saved From X

reSee.it Video Transcript AI Summary
We're showcasing AI pose estimation and real-time ray tracing at SIGGRAPH 2019. With just an iPad, a Hollywood producer can instantly see how a scene will look in a ray-traced environment. This demonstration features an attendee transformed into an astronaut. We have over 40 ray-traced applications running on NVIDIA RTX.

Video Saved From X

reSee.it Video Transcript AI Summary
Meta is launching Meta Superintelligence Labs to build personal superintelligence for everyone, envisioning AI systems that improve themselves. This initiative aims to provide individuals with AI that helps them achieve goals, create, improve relationships, and grow personally, distinguishing itself from approaches focused solely on automating valuable work. Meta believes in empowering individuals with superintelligence to direct it towards their own values. The speaker anticipates a future where personal superintelligence accelerates the historical trend of technology freeing people from subsistence, allowing them to focus on creativity, culture, relationships, and enjoyment. They expect people will spend less time on productivity software and more time creating and connecting, with personal devices like glasses becoming the primary computing interface. Meta believes it has the resources and reach to build the infrastructure and deliver this technology to billions.

Video Saved From X

reSee.it Video Transcript AI Summary
We introduce photographic memory on the PC through Recall, a semantic search tool that recreates past moments. Windows takes screenshots for generative AI processing, making all data searchable, including photos. Despite potential privacy concerns, the feature runs at the edge and operates locally on the device.

Video Saved From X

reSee.it Video Transcript AI Summary
Yesterday and today is the very first public unveiling in a public presentation of Project Looking Glass. A human being is about this tall. This is built in underground bases. It is colossal in size. And what happens is when you fire this thing up, these rings start going in all different directions. It shields off the barrel of water inside, which is just like your pineal gland. The water inside flips over into time space, which then captures argon gas, captures visual images, and this is what it looks like. It becomes huge, glowing all the way out to these posts, which are used as stabilizers for the energy field. Now it doesn't start out with an image of the Earth. That's just there as a placeholder, although you could see the whole earth like that if you wanted to. That's not usually what happens. Well, Contact. Right? We have been given the technology right in front of our faces in the movies. They just don't tell you what it's for. So Contact, just to recap from yesterday, blueprints come in from the Vegans from Vega. They figure out that it's only when you fold up these patterns into a cube, which forms what you see in the center, the triangle with the eye in the middle, as it folds up. Cube geometry, that's when you can decode the plans to build this, these rings, which becomes this machine. And there's your little helicopter. There's another shot of it. They got computer animations of how it's gonna work with the different rings moving. This actually looks like the fisheye lens distortion effect that happens around the looking glass when you power it on. There's a gantry crane up at the top, and the little guy is dropped down and goes inside. That's like your barrel inside the mechanism there. And this is where all the action happens. This is where the stargate opens, the wormhole into the higher realms. In this frame, we're seeing how the whole thing collapsed and fell apart.
So, of course, you wanna take a ride, you know, the new formation of the one after the old one breaks. And here we're seeing the new one. Once again, you get a very clear view, and there's your little eye with the rays coming off, just like the all-seeing eye on the dollar. Not surprising. Here she is. It's even larger than the real one, standing on this ladder looking down inside. This is the chamber and the chair inside, which we'll see is very important. And here's your geometry. Look at that. Did anybody notice that when you were watching the movie? Because it goes by pretty fast. That's that same geometry that's in the background radiation of the whole universe with the sphere in the center. They fired it up. Guess what happens? Big bright luminosity just like the real thing. The real thing that they're actually using and have been using since the forties at least. And here's what it looked like from the control room. It gets brighter and brighter, and she starts having the floor disappear. She gets dropped down inside. And then, of course, she goes through this wild wormhole ride, and there's an ascension experience at the end of the dream. The Iraq war had, to a large degree, the mission of capturing Looking Glass technology that was dug up in Sumer, which was originally in the possession of Qaddafi in Libya, which is why they attacked Libya. Qaddafi got rid of it and gave it to Saddam Hussein. Saddam Hussein was using this technology, viewing the future. The US government went in after him because they felt that the fate of the world was at stake, because if they did not capture this technology and get it back and deconstruct it, the earth would have a pole shift. But when you actually decipher these letters, it means the doctrine of the convergent timeline paradox. Here's the problem. You look through time with this device, and when you hit 2012, everything goes perfectly white.
As you get closer and closer to 2012, starting in around 1980, a very strange thing has been happening when they use the Looking Glass. There is an interlacing of images. So right now, imagine that if I had this slide on half the time and then the other half of the time, there was an image of, like, a face. And if I interlace them slowly, then maybe every second it goes from here to the face to here to the face. Then as it speeds up, it gets faster and faster to the point that if it were complex images, you couldn't make out one from the other because they're flip-flopping so fast. Okay? That's called interlacing, and the frequency of the interlacing gets higher and higher as you go towards twenty twelve. So any time that they try to look at, or they used to try, because they're not using them anymore, 2009, 2010, 2011, 2012, bam. It's so interlaced that they have a whole server farm of computers just to deconstruct images and be able to see what they're looking at in the future. So again, this is what they're using. This is one of the main ones they're using, the Orion cube as they call it, or the cube. And also it's being used with Looking Glass. But what's going on with twenty twelve? Well, here's another very interesting thing. We have multiple parallel futures that we can choose. And twenty twelve literally does represent create-your-own-reality time. It has everything to do with what you expect is going to happen. This is where thoughts becoming reality really takes on a whole new meaning in a way that you've never heard of before. So what they found that was so bizarre was that at 12/21/2012, and they could calculate it down to the day, that's how precise this was, for some reason all the graphs, all the waves would go into a complete flat line. They no longer moved up and down like before. They went flat for like seven or eight seconds. So then they're asking the guys that went through these stargates and were traveling into the future, what happened to you?
Every single time that somebody tried to hit twenty twelve, they said the same thing. There is this thing they call the bump. It actually hits you like a bump. You actually feel like you've slammed into something. And as soon as it slams into you, you have the most incredible religious experience you can imagine. Consciousness just blasts into this wonderful place where you have awareness of no space, no time. All knowledge is available to you, ecstatic consciousness. You could be the galaxy. You could be a subatomic particle. You can go everywhere and do everything, and there's no sense of it ever ending. So when it finally stops, you just can't even believe that you're back to who you were before. Like what happened to Jodie Foster. Exactly what happens to Jodie Foster in Contact. So is that the moment of no time? That's the zero time. Yeah. And Daniel doesn't agree with me on this, but I believe that 2012 is our zero time reference. And when you hit 2012, everything goes perfectly white.

Video Saved From X

reSee.it Video Transcript AI Summary
The Humane AI Pin is a stand-alone device and software platform designed for AI engagement. It utilizes voice, touch, gesture, and a Laser Ink display. It can play music to improve your mood and provide information on protein content. For example, these almonds contain 15 grams of protein. It can also provide pricing information, such as the online price of $28. The device allows for seamless interaction and can generate beautiful images.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker believes their company is the premier one for developing and scaling products to billions of people and is leading in the next generation of computing platforms with glasses that are doing exceptionally well. They think glasses will be the best form factor for AI because they can see and hear what you do, and once a display and holograms are added, they'll generate a UI. The speaker envisions a future where AI glasses observe your life and follow up on things for you, providing information in real time. They believe not having AI glasses will create a cognitive disadvantage, similar to needing vision correction and not having optical glasses. The company is also focused on entertainment, culture, and personal relationships, believing AI can be valuable in these areas.

Video Saved From X

reSee.it Video Transcript AI Summary
"You know, in the near future, we're all going to be walking around with AI assistants helping us in our daily lives, which we're going to be able to interact with through various smart devices, including smart glasses and things like that, through voice and through various other ways of interacting with them. So, I have smart glasses with cameras and displays in them, et cetera. Currently, you can have smart glasses without displays, but soon the displays will exist. Right now they exist; they're just too expensive to be commercialized. This is the Orion demonstration built by our colleagues at Meta. So, the future is coming, and the vision is that all of us will basically be walking around with AI assistants all our lives. It's like all of us will be kind of like a high-level CEO or politician or something, running around with a staff of smart virtual people working for us. That's kind of the possible picture."

Video Saved From X

reSee.it Video Transcript AI Summary
A person demonstrates glasses that identify people using facial recognition and AI. When the glasses detect a face, they scour the internet for pictures of that person and use data sources like online articles and voter registration databases to find their name, phone number, home address, and relatives' names. This information is then fed back to an app on the user's phone. The demonstrator approaches a woman and the glasses identify her as being involved with the Cambridge Community Foundation. The glasses also identify a second person as Khashik, whose work the demonstrator has read. The glasses correctly identify the second person's address, attendance at Yale's Young Global Scholar Summer Program, and parents' names.

Video Saved From X

reSee.it Video Transcript AI Summary
Introducing Microsoft Designer, an AI-powered design app that simplifies professional-quality designs. By simply stating your needs, Designer provides a range of options using its extensive catalog of professional images. You can personalize your design by adding your own images or generating new ones with AI. The ideas pane suggests arrangements for text fields, and Designer even assists with writing. With AI tools, time-consuming image production tasks become effortless. Sharing your creations is made easy, with AI-powered recommendations for captions and hashtags. Designer's AI assistants ensure great results, whether it's attracting people to events, parties, sales, or simply bringing a smile. Try it for free at designer.microsoft.com.

Cheeky Pint

What comes after smartphones, with Evan Spiegel of Snap
Guests: Evan Spiegel
reSee.it Podcast Summary
The episode centers on Evan Spiegel outlining Snap’s trajectory into 2026, emphasizing a “crucible moment” as the company targets a billion monthly active users and aims for net income profitability while investing heavily in hardware and Spectacles. Spiegel describes Spectacles as arriving later in the year after more than a decade of development, positioning them as a fusion of wearable comfort and spatial computing capabilities. He envisions a computing future where glasses enable hands-free interaction, shared experiences, and integration with external devices, rather than simply replacing smartphones. A core theme is the shift from a smartphone-centric paradigm to a world where glasses extend computing into the real world, enabling collaborative activities like gaming, design, and training in everyday spaces. Spiegel argues that this era will be defined by “net new experiences” rather than merely migrating existing screen-based tasks, highlighting how glasses could grow into a versatile platform through Lens Studio and a bespoke OS built in-house to maximize performance and power efficiency. The conversation also delves into how AI is reshaping Snap’s operations and product development, noting that more than two-thirds of new code is now generated by AI, and discussing how AI tools affect the company’s broader software strategy and competitive dynamics. Spiegel reflects on Snap’s distinctive approach to social networking, differentiating it from more public feeds by prioritizing private messaging and intimate, time-limited content. He explains how Discover and creators play a role in content distribution without forcing users to expand their networks, and he emphasizes the importance of responsible content moderation to mitigate harmful material while preserving user trust. 
The discussion touches on leadership culture, the design-versus-engineering collaboration, and the practical realities of hardware manufacturing in the US and UK, underscoring fast iteration, IP protection, and the evolving role of distribution in a world where AI lowers barriers to creating new services. The episode closes with reflections on teen usage, privacy, and the need to balance ephemeral sharing with meaningful retention, reinforcing Snap's overarching aim: to make computing more human and collaborative rather than isolating.

a16z Podcast

Ideogram: Unlocking Precision Image Generation
Guests: Mohammad Norouzi
reSee.it Podcast Summary
Mohammad Norouzi, co-founder and CEO of Ideogram, emphasizes the innate human desire to create and how technology, particularly AI, facilitates visual expression without extensive artistic skills. Ideogram is a generative AI platform that enhances communication through images and text, making it effective for marketing and storytelling. Launched in September 2023, it allows users to create images with legible text, leading to viral engagement. Unique features include prompt adherence for detailed descriptions and high-quality text integration. Ideogram aims to empower creativity, especially in print-on-demand applications, by merging art and technology.

ColdFusion

Microsoft Hololens Explained! - The Future Of Computing.
reSee.it Podcast Summary
The Microsoft HoloLens is an augmented reality headset that overlays digital objects onto the real world, aiming to revolutionize computing with applications in gaming, design, and education. Developed over seven years by Alex Kipman, it features real-time environment scanning, immersive audio, and seamless Windows 10 integration. While it offers impressive capabilities, limitations include a small viewing area and reliance on gaze control. Despite these challenges, the HoloLens is seen as a significant step in modern AR technology, with potential for future advancements.

Lenny's Podcast

The Godmother of AI on jobs, robots & why world models are next | Dr. Fei-Fei Li
Guests: Fei-Fei Li
reSee.it Podcast Summary
Fei-Fei Li, renowned as the godmother of AI, reflects on the AI journey from the early days through ImageNet to today's AI renaissance, emphasizing that AI is a profoundly human enterprise shaped by data, people, and responsibility. She explains that her focus on visual intelligence and the large-data approach behind ImageNet, which curated 15 million labeled images across a 22,000-concept taxonomy, was pivotal in catalyzing deep learning breakthroughs by providing the data scale that allowed neural networks to learn more robust object recognition. The discussion traces how ImageNet, paired with neural networks and GPUs, birthed modern AI, with the 2012 Toronto breakthrough showing the power of these ingredients and how today's models still rely on large-scale data, expansive compute, and sophisticated architectures. Li cautions that AI remains a double-edged sword: technology can uplift humanity if guided by responsible individuals and thoughtful governance, but missteps could undermine society if values are neglected. She then pivots to world models, an idea Li has long pursued to embed spatial understanding and embodied intelligence into AI. World models aim to create interactive, navigable representations of the physical world that go beyond language, enabling robots and humans to reason, plan, and act within coherent 3D or 4D spaces. She explains the rationale for World Labs and Marble, their first product, a system that can prompt-a-world from text and images and render immersive, explorable 3D scenes. Marble is pitched as a platform for creators, designers, robotics simulation, virtual production, and even therapeutic or educational scenarios. The interview explores practical use cases, from speeding up movie production to generating synthetic data for robot training and enabling new forms of experiential research.
Li also discusses the labor of building such systems, the team, compute needs, and the balance between research and productization, underscoring a philosophy that technology should augment human agency rather than erode it. The conversation turns to the future—whether true AGI is imminent, how far current trajectories will take us, and why breakthroughs beyond scaling are essential. Li rejects the idea that a single bitter lesson will unlock robotics; she stresses that embodied AI requires data, physics, and real-world scenarios, along with design principles that respect human dignity. She closes with a call to action: every profession has a role in AI, and governance, policy engagement, and human-centered design must accompany technical advancement. The episode leaves listeners with a sense of cautious optimism and a reminder that the best AI future will be defined by responsible collaboration among researchers, organizations, and communities.

Lenny's Podcast

Inside Google's AI turnaround: AI Mode, AI Overviews, and vision for AI-powered search | Robby Stein
Guests: Robby Stein
reSee.it Podcast Summary
Google's AI turnaround is real: Gemini just hit number one in the app store, and the internal energy at Google has changed, says Robby Stein, VP of Google Search. The company maintains that its core mission, making information universally accessible, remains, but the AI moment has created a tipping point where models can genuinely deliver for consumers. The shift is not about replacing search but about multiplying its reach through AI Overviews, AI Mode, and multimodal tools like Lens, all designed to deliver faster, more accurate answers while weaving live data into results. Stein describes three big components of AI search: AI Overviews at the top, which provide quick answers; multimodal search and Lens for visual queries; and AI Mode, which binds it all into a single conversational experience. AI Mode draws on all of Google's information, including 50 billion products in the shopping graph updated two billion times per hour, 250 million places in Maps, and the entire context of the web, so you can ask anything and follow up. It can be accessed at google.com/ai and is integrated into core experiences, so you can ask follow-ups directly or take a photo and go deeper in AI Mode. He notes that Google's data backbone of shopping graph, Maps, finance, and web signals allows the AI to understand context and surface authoritative sources. The interface aims for a consistent, simple experience: you can start in core search, ask follow-ups, then dive deeper in AI Mode or Lens as needed. The goal is to make the transition between AI and traditional search seamless rather than a toggle.
Looking ahead, AI is expanding into inspiration and multimodal creativity, with live AI search and 'AI corner' experiments such as visual inspiration boards and Nano Banana-like tools. The team emphasizes testing with labs and trusted testers, then scaling to IO launches and global rollout. Public examples include live conversational search and ongoing integration across products, all aimed at giving users effortless access to knowledge with reliable sources.

The Joe Rogan Experience

Joe Rogan Experience #2394 - Palmer Luckey
Guests: Palmer Luckey
reSee.it Podcast Summary
Palmer Luckey discusses a range of topics with Joe Rogan, beginning with quirky tech setups like underwater VR coding rigs and the benefits of float tanks for mental clarity and focus. Luckey recounts his early ventures into virtual reality, starting with building VR headset prototypes as a teenager and eventually founding Oculus, which he later sold to Facebook. He shares anecdotes about working with John Carmack, a childhood hero, and the surprising fitness aspects of VR gaming, particularly boxing games and Beat Saber. The conversation shifts to the potential of VR in combat training, with Luckey mentioning Logan and Jake Paul's use of VR for boxing. They explore the idea of AI-controlled robots emulating famous fighters, even serving as sparring partners that apply controlled force. This leads to a broader discussion about the flaws of the human body in combat and the design of robots for the Department of Defense, which Luckey is involved in. He touches on the philosophical implications of AI and its potential self-perception, drawing parallels to humanity's creation in God's image. The podcast delves into the topic of UAPs and potential alien life, with Luckey expressing skepticism about easily explained phenomena like drones. He shares his thoughts on a recent NASA release regarding biosignatures and the need for multiple sensor confirmations in UAP sightings. The conversation touches on a famous alien encounter in Varginha, Brazil, and Luckey's personal ambition to investigate such phenomena after retirement, envisioning a privately funded X-Files operation. Luckey criticizes government spending on defense, highlighting inefficiencies and waste. He praises the new Secretary of the Army for cutting wasteful programs and promoting innovation. The discussion extends to the competitive landscape with countries like China, where government and private companies are closely integrated.
Luckey emphasizes the importance of competing entities and accountability in national security programs, cautioning against private companies dictating foreign policy. The conversation shifts to social and political issues, including censorship and cultural differences in the UK and China. Luckey shares a personal story about early internet forum moderation and the cultural acceptance of policing offensive content in the UK. He and Rogan discuss how citizens' lack of political power breeds cynicism in countries like China and Russia. They also examine the power of media and propaganda, citing examples from the Ukraine war and past US interventions. Luckey expresses concern about China's manufacturing capabilities and the potential threat to the US automotive industry. He advocates for the US to become more competitive by lowering energy and resource-extraction costs, and the discussion turns to protectionist policies and the need for the US to innovate and compete effectively. On the potential for conflict with China over Taiwan, Luckey argues the US should become the 'world's gun store' and arm allies to defend themselves. Luckey introduces his company's new product, Eagle Eye, an integrated ballistic helmet with augmented reality capabilities for military use. He explains its features, including night vision, thermal sensors, gunshot detection, and the ability to share a soldier's view of the world with other soldiers and robots. He emphasizes the importance of lightweight, integrated designs and the potential for AI-powered fighter jets to revolutionize air combat, and he notes the promise of laser weapons and the need for modular protective measures. The podcast concludes with a discussion of simulation theory, the nature of reality, and the potential for genetically engineering animals to be more intelligent. 
Luckey shares his thoughts on the role of a higher creator and the human desire to create things in our own image. He and Rogan discuss the importance of seeking novelty and the potential for nostalgia to inform future innovation. Luckey also stresses the importance of ethics in weapons development and the need for competent, principled people to be involved in the process.

TED

The Next Computer? Your Glasses | Shahram Izadi | TED
Guests: Shahram Izadi
reSee.it Podcast Summary
Shahram Izadi discusses the convergence of AI and extended reality (XR), highlighting advancements in augmented and virtual reality over the past 25 years. Innovations in AI, particularly large language models, have enhanced real-time interaction and contextual understanding. He introduces Android XR, developed with Samsung, which integrates AI with XR hardware. Demonstrations include smart glasses that assist with tasks like translation and memory recall, and headsets that provide immersive experiences. He envisions a future of lightweight XR devices that enhance human intelligence, making technology more personal and conversational and ultimately transforming how we interact with the world.

Generative Now

Julie Bornstein: Building the Future of Fashion with AI
Guests: Julie Bornstein
reSee.it Podcast Summary
A fashion-obsessed founder is building an AI-powered shopping assistant that talks shoppers through brands the way a store associate would. Daydream is a fashion search engine that uses AI to interact with shoppers and aims to bring real fashion brands together in one place, letting consumers ask questions in natural language, save finds into collections, share with friends, and click out to buy on brand sites. A standout feature is its multimodal workflow: users can start with text, upload a photo, or speak, and the system uses both the query and the visuals to surface items that align with the user's style. Bornstein traces Daydream to a long arc through e-commerce and fashion tech: she helped build Nordstrom's early web business, led digital efforts at Sephora including Beauty Talk, then helped run Stitch Fix with a data-driven emphasis on real-time personalization. She founded The Yes, which Pinterest acquired, and after advising there, she launched Daydream. She recounts how the pandemic shifted behavior toward online shopping, providing an opening to test and launch the new product. Technically, Daydream relies on an ensemble of small models atop large models, with a deep knowledge base built from brand feeds and a user knowledge graph called a 'style passport.' The team uses OpenAI and other providers but aims to avoid latency and inconsistency by localizing most decisions into specialized models tailored to fashion. The system combines natural language understanding, image inputs, and user preferences to re-rank results and suggest items based on occasion, body type, weather, and other factors. The team anticipates agents performing tasks for consumers, potentially including checkout. Bornstein discusses competition and defensibility, arguing that vertical, domain-specific fashion knowledge will outpace broad shopping models. She envisions Daydream as an evolving UI that prioritizes speed and personalization, with integration into social channels like TikTok and Instagram. 
She emphasizes learning from previous startups, assembling a strong technical team, and remaining iterative as the product approaches launch. She describes Daydream as a bridge to future shopping interfaces rather than a fixed end state.
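The re-ranking flow described above (a query's base results re-ordered against a user's saved 'style passport' preferences) can be sketched roughly as follows. This is an illustrative assumption of how such a step might work, not Daydream's actual implementation; all names, tags, and weights here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    relevance: float               # base score from the search query
    tags: set = field(default_factory=set)

def rerank(items, style_passport, weight=0.5):
    """Boost each item's query relevance by how many of its tags
    overlap the user's saved style preferences (hypothetical scheme)."""
    def score(item):
        overlap = len(item.tags & style_passport)
        return item.relevance + weight * overlap
    return sorted(items, key=score, reverse=True)

# Illustrative usage: a 'style passport' of saved preferences.
passport = {"minimalist", "linen", "summer"}
items = [
    Item("wool overcoat", 0.9, {"winter", "tailored"}),
    Item("linen shirt", 0.7, {"linen", "summer", "minimalist"}),
]
ranked = rerank(items, passport)
# The linen shirt outranks the overcoat despite a lower base
# relevance, because it matches more of the user's preferences.
```

In a real system the scoring would presumably be a learned model over far richer signals (occasion, body type, weather), but the structure — base retrieval followed by personalized re-ranking — matches the pipeline Bornstein describes.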