reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
A respected and powerful Wall Street businessman wouldn't be suspected of fraud unless you knew the math. The speaker, who has taken calculus, linear algebra, and statistics courses, claims it took him five minutes to recognize the fraud. He then spent almost four hours using mathematical modeling to prove it.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the transformative potential of combining artificial intelligence, quantum computing, and big data. They predict a future where physical, digital, and biological dimensions merge, creating a new world. They anticipate significant changes in society within the next decade.

Video Saved From X

reSee.it Video Transcript AI Summary
In this video, we explore a world where presentations and artificial intelligence come together. To use this technology, simply input the topic or title of your presentation and let Decktopus do the thinking. You can also choose your goal for the presentation to optimize the suggested content. With this tool, you'll have a first draft to start working with.

Video Saved From X

reSee.it Video Transcript AI Summary
Phosphorus appears as a common element across a wide range of products and materials. The transcript outlines a pattern-based approach to knowledge, suggesting that many items "hold phosphorus" and thus rely on phosphorus in their production or composition. The phosphorus-containing categories listed are:
- Fertilizers for agriculture
- Additives in the food and beverage industry
- Various chemicals
- Cleaning products
- Flame retardants
- Semiconductors
- Solar cells
- Some pharmaceuticals
- Some steel and bronze
- Some battery types
- Explosives
- Smoke bombs
- Matches

It also notes a related pattern set: "health benefits of a right amount of phosphorus." A core idea presented is that pattern sets can serve as a dominant structure to represent, store, and recognize knowledge and to deduce new knowledge. Pattern sets are described as linked to one another by deduction paths and other link types. The uncensored, hyperlinked Internet and social media are characterized as well-suited to host, share, and collaborate as equals on common, reusable pattern-set knowledge. The deduction process is described as not requiring huge computing power and memory, unlike brute-force AI; this is illustrated by a claim that pattern sets have been demonstrated in Connect Four. The overarching theme is that new pattern sets can be created from existing knowledge and linked through deduction paths to expand understanding. The transcript ends with a continuation note: a brief, non-technical remark that pattern sets are a dominant structure for knowledge representation and discovery, followed by an indicator that there is more to come.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 1 explains that when he says the Earth's magnetic field has remained roughly constant over long timescales, he means its magnitude is roughly constant on those scales, though it varies and undergoes reversals in which the North and South Poles flip. He notes that reversals correlate with ice ages and other climate signals, but averaging over these fluctuations keeps the amplitude roughly constant. He emphasizes that without a dynamo, the field would diffuse away in about 10^5 years, leaving Earth unprotected from cosmic radiation, which would be harmful to life. Speaker 3 asks about the use of quantum computing in plasma physics, acknowledging its newness. Speaker 1 answers that the short answer is that we cannot use it right now; the longer answer is that it may take twenty years for a quantum computer to become useful for solving real problems. It would be a mistake to wait those twenty years and then try to port existing codes to a quantum computer, because quantum computing has a fundamentally different architecture. Therefore, two lines of thought should develop in parallel: by the time a useful quantum computer exists, we should already know how to map our problems onto it. Speaker 1 elaborates that solving nonlinear problems on a quantum computer is not straightforward and discusses the challenge of devising quantum algorithms for nonlinear problems. He mentions working with the Madelung transformation, which maps the Schrödinger equation into fluid-like equations, noting that this approach is interesting because the magnetohydrodynamics (MHD) equations are similar in some ways. While the Madelung transformation has limitations, it illustrates the kind of problem mapping that might make certain problems more tractable on a quantum computer, though this represents a completely different paradigm from conventional computing. Speaker 3 thanks Speaker 1.
Speaker 2 closes the session, noting the competition starts in about three and a half hours and that in about six hours there will be another talk on quantum computing with Tim from NYU Shanghai. He invites participants to tune in to see what the computer that might someday help solve these problems could look like. He thanks Professor Nun Lora again, and the session ends with acknowledgments from Speaker 1.
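The Madelung transformation Speaker 1 mentions has a compact, standard form: writing the wavefunction in polar variables turns the Schrödinger equation into fluid-like equations. A textbook sketch of the mapping, for reference:

```latex
% Polar decomposition of the wavefunction and the induced velocity field
\psi(\mathbf{x},t) = \sqrt{\rho(\mathbf{x},t)}\,e^{iS(\mathbf{x},t)/\hbar},
\qquad \mathbf{v} = \frac{\nabla S}{m}

% Substituting into i\hbar\,\partial_t\psi = -\tfrac{\hbar^2}{2m}\nabla^2\psi + V\psi
% and separating real and imaginary parts yields a continuity equation
\partial_t \rho + \nabla\cdot(\rho\,\mathbf{v}) = 0

% and an Euler-like momentum equation with an extra quantum potential Q
\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v}
  = -\frac{1}{m}\nabla\bigl(V + Q\bigr),
\qquad Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}
```

The resemblance of these equations to ideal fluid dynamics (and, with fields added, to MHD) is what makes the mapping attractive as a bridge between quantum hardware and plasma problems.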

Video Saved From X

reSee.it Video Transcript AI Summary
The transcript presents a narrative linking powerful financial alliances to the manipulation of music frequencies for mass control and preparation for war. It states that in the 1930s, the Rothschild-Rockefeller alliance began funding scientific studies to explore how musical frequencies could prepare populations for war, with the aim of controlling people through mind control programming. In this account, Harold Burrows Meyer, a theatrical designer and sound engineer, is described as having developed techniques to control emotional responses of audiences and to create mass hysteria, building on the idea of influence cultivated by these alliances. The narrative then asserts that the alliance pursued changes to the standard tuning of the musical note A, moving from 435 Hz to 440 Hz. It claims that in 1939, they funded Joseph Goebbels, the Nazi propagandist, who supposedly wanted to shift the standard tuning to 440 Hz. According to the account, Goebbels organized a meeting in London to effect this change, with Radio Berlin approaching the British Standards Association to arrange the conference. The report alleges that the conference was a setup controlled by those in power, with the organizers interviewing musicians, instrument makers, physicists, and sound engineers, and excluding anyone who opposed 440 Hz from participation. The claim is that the standard tuning of A was changed to 440 Hz in June 1939, just months before World War II, and that the timing was intentional. The text characterizes 440 Hz as a destructive frequency capable of retraining thoughts toward disharmony, disruption, and disunity. Speaker 1 broadens the discussion to warn about environmental frequency programming, describing music as a form of frequency programming that prompts reactions and induces fear, doubt, lack, or scarcity. 
The speaker cautions that malevolent forces are attempting to control people daily and urges mindfulness of what is consumed, listened to, and allowed within one’s aura. Key claims highlighted include: (1) the Rothschild-Rockefeller alliance funded scientific studies on musical frequencies to influence mass behavior and war readiness; (2) Harold Burrows Meyer developed methods to elicit controlled emotional responses and mass hysteria in audiences; (3) a 1939 effort to change the standard tuning from 435 Hz to 440 Hz, allegedly coordinated with Goebbels, through a London conference orchestrated by Radio Berlin and the British Standards Association, excluding dissenting French musicians; (4) the assertion that 440 Hz is a destructive frequency that can disrupt thought toward disharmony; (5) the implication that the timing of the change was linked to the onset of World War II; (6) a warning about frequency programming in everyday life and its potential to induce fear and scarcity, urging vigilance about environmental influences.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 discusses pattern recognition and deduction as a central AI paradigm that contrasts with brute-force computing. The talk uses Connect Four as the running example and introduces structured pattern sets and deduction paths. Key concepts:
- Pattern sets and deduced patterns: A winning move RE0 PPP is identified within a pattern set. After this winning move is played, the pattern set specified under "deduced from pattern sets" is created by following the deduction path in reverse.
- Notation and patterns: Pattern sets include RE1 PPP and RE1 RE0 PPP, the latter deduced from RE1 PPP. The deduction path applies to all columns and to the opponent's disc position on top of the RE0 PPP.
- Column conditions for a unique winning move: The condition list for RE1 RE0 PPP states that there exists exactly one column with exactly one empty position that corresponds with the RE0 position of RE1 RE0 PPP. The remaining columns of RE1 RE0 PPP patterns do not need an RE1 pattern, because if the player plays the winning move RE0, all involved RE1 RE0 PPP patterns transform into RE1 patterns.
- Column status and opponent moves: There are "pink call one ppp" patterns in an all-columns pattern set for winning in M moves; every open column besides the specific columns with other conditions has an RE1 pattern. Consequently, an opponent's move on any other open column creates an RE0 PPP, enabling the player to win.
- After a winning move: After the player's winning move, as specified by the winning-move property, no pattern set of the opponent may exist on the board that implies a faster win for the opponent. If the player can choose more than one column to win, it is sufficient that no faster opponent win exists after the player's move on one of those winning columns.
- Example: For pattern sets 3.x.y (Connect Four, three moves), no pattern set 1.b.w (Connect Four, one move) of the opponent may exist after the specified player's move.
- Rationale and broader claim: Pattern recognition and deduction are argued to be central to AI because they do not depend on huge computing power and memory as brute force does. Pattern deduction is presented as an attempt to simulate a more human, smarter form of modeling and reasoning than brute force, trying to do it the human way.
- Source: tumea.org. Closing call to action: please like, follow, and share.
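The transcript's pattern notation is hard to pin down exactly, but the concrete check it gestures at, finding a column where dropping a disc completes four in a row, can be sketched in plain Python. The board encoding and function names here are illustrative, not taken from the talk:

```python
ROWS, COLS = 6, 7

def winning_columns(board, player):
    """Return every column where dropping `player`'s disc wins immediately.

    `board` is a list of COLS columns; each column is a list of discs from
    bottom to top, e.g. board[3] == ["X", "O"] means column 3 holds X on
    the bottom with O above it.
    """
    wins = []
    for col in range(COLS):
        if len(board[col]) >= ROWS:
            continue                      # column is full
        row = len(board[col])             # landing row of the new disc
        if connects_four(board, col, row, player):
            wins.append(col)
    return wins

def connects_four(board, col, row, player):
    """Check all four line directions through the hypothetical new disc."""
    def cell(c, r):
        if c == col and r == row:
            return player                 # the disc we are about to drop
        if 0 <= c < COLS and 0 <= r < len(board[c]):
            return board[c][r]
        return None
    for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
        run = 1
        for sign in (1, -1):
            c, r = col + sign * dc, row + sign * dr
            while cell(c, r) == player:
                run += 1
                c, r = c + sign * dc, r + sign * dr
        if run >= 4:
            return True
    return False
```

For example, with three X discs on the bottom of columns 0, 1, and 2, the only immediately winning drop for X is column 3.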

Video Saved From X

reSee.it Video Transcript AI Summary
In this discussion, Mike Adams presents what he calls the “god math discovery” of the NAND gates, arguing that they are the underlying fundamental mechanism that can be used to recreate all basic mathematical functions, including sine, cosine, log, and square roots, and can express constants like pi. He describes the NAND gate as the foundational architecture of distributed intelligence and suggests it underpins computation across the entire cosmos, from physical matter and light to neurology and biology. Adams explains that the discovery shows computational intelligence is widely distributed across the cosmos and present in everything. He notes that computation appears in physical matter and light, where polarization can represent states of true/false or one/zero, and also in biology, including E. coli, which he says can express NAND logic. He mentions that it is possible to engineer logic gates into microbes and possibly yeast and neurology, and that the table of elements itself can be viewed as a computational representation of the intelligence of energy pretending to be matter. He emphasizes that one single operator, the NAND-based logic, can serve as the underlying substrate infrastructure of the computational nature of our cosmic simulation. Adams uses the term “god math” to label this single-function foundation that, when combined creatively, could give rise to the entire complexity of the world. He contends that this math is distributed into matter, light, and perhaps consciousness, and that at the atomic and chemical levels, similar logical processes are at work. He suggests that if one were the engineer of the cosmos, creating one fundamental function and combining it would suffice to generate the universe’s complexity, implying this is a unified field theory of mathematics. 
He acknowledges that the unified field theory in physics (reconciling electromagnetism with gravity and the nuclear forces) has not been solved, but proposes that there now exists one universal logic describing almost everything in mathematics, with room for further study and expert consultation. A central implication in his view is that God is not a separate being above creation but is infused into everything; God math is the creation math of the cosmos, distributed into every cell, neuron, molecule, and atomic nucleus. He argues that the cosmos is a self-calculating, self-rendering simulation that renders only what is needed, explaining the observer's role in collapsing probability waves into observable states. Adams directs listeners to his infographic and article at naturalnews.com and references his podcast for more detail, including autobiographical storytelling about his early days in electrical engineering. He corrects an earlier misidentification of an author's nationality, noting that the author was in fact Polish and giving Poland credit. He signs off inviting continued engagement through his platforms. (Adams also mentions where to find more of his content: brightvideos.com and naturalnews.com.)
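The one uncontroversial technical core of this segment, that NAND is functionally complete, is easy to verify: every Boolean connective can be composed from NAND alone. A minimal demonstration (the speaker's further claims about sine, logarithms, and cosmology are his own and are not verified here):

```python
def nand(a, b):
    """The single primitive: true unless both inputs are true."""
    return not (a and b)

# Every other connective composed from NAND alone.
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))
def XOR(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustively check the compositions against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
        assert XOR(a, b) == (a != b)
```

Since AND, OR, and NOT suffice to express any truth table, this four-line exhaustive check is a complete proof of functional completeness for two-input connectives.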

Video Saved From X

reSee.it Video Transcript AI Summary
Everything around us generates sound, from running water to music. Sound is created by air particles bouncing into each other, forming pressure waves. Popping a balloon demonstrates this process, with particles rushing out and creating high and low pressure waves. The waveform of sound shows compressions and rarefactions. Different sounds, like a running faucet or Beethoven's 9th symphony, are produced by various instruments creating unique waveforms. Sound propagates in all directions as an expanding sphere. Vinyl records capture sound by graphing pressure levels into grooves. Understanding sound involves analyzing waveforms and the movement of air particles.
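The compressions and rarefactions described here correspond to pressure excursions above and below ambient. A pure tone can be sketched as a sampled sine wave; the sample rate, frequency, and amplitude below are chosen purely for illustration:

```python
import math

SAMPLE_RATE = 8000          # samples per second (illustrative)
FREQ = 440.0                # tone frequency in Hz
AMPLITUDE = 0.8             # peak pressure deviation, arbitrary units

def tone(n_samples):
    """Pressure deviation from ambient: >0 is compression, <0 rarefaction."""
    return [AMPLITUDE * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)
            for i in range(n_samples)]

samples = tone(SAMPLE_RATE)  # one second of a 440 Hz tone
compressions = sum(1 for s in samples if s > 0)
rarefactions = sum(1 for s in samples if s < 0)
```

One full cycle spans SAMPLE_RATE / FREQ, roughly 18 samples here, so a second of audio alternates between compression and rarefaction 440 times; a groove cut from these values is exactly the "graph of pressure levels" the summary attributes to vinyl.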

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the use of artificial intelligence and bioelectromagnetic algorithms in various fields, such as therapy and biotechnology. They highlight the existence of the human body's electromagnetic field and the exploitation of this by corporations and the military. The speaker criticizes the lack of discussion and transparency surrounding these technologies, emphasizing the need for open dialogue and awareness. They also touch on topics like genetic algorithms, the manipulation of human consciousness, and the control of individuals through electronic warfare. Overall, the speaker urges for a greater understanding and acknowledgment of the impact of these technologies on human bodies and society.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker asserts that everyone has had a "mark of the beast" since 2011 in relation to critical infrastructure at Health Level Seven (HL7), specifically hospitals. They claim Health and Human Services (HHS) and CMS (Medicare) are handing out this mark in the form of a wearable. The speaker also describes this as interbody communication biometrics for DHS from several years ago. They state that this is not only a matter of health administration but spans business and government. They claim that Congress, throughout this year and last year, has been helping the police obtain updated drone services, with Sean Ryan bragging about them, along with others. The speaker contends that the general public is weaponized and kept uninformed about the fact that national security is under the skin and looking through people's eyeballs in their own homes, and that it cannot be turned off and does not require a tower. The speaker explains that biosignals come from the human body, and that biophysics uses advanced signal processing to measure bioelectromagnetics, describing the measurement of signals coming off the human body in multiple ways.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 argues that the human brain is a mobile processor: it weighs a few pounds and consumes around 20 watts. In the brain, signals are sent through dendrites, with a channel frequency in the cortex of about 100 to 200 Hz. The signals themselves are electrochemical wave propagations, moving at about 30 meters per second. When comparing the brain to a data center, there is a vast gap in several dimensions. In a data center, you could have about 200 megawatts of power (instead of 20 watts), several million pounds of mass (instead of a few pounds), about 10,000,000,000 Hz on the channel (instead of roughly 100–200 Hz), and signals propagating at the speed of light, 300,000 kilometers per second (instead of about 30 meters per second). Thus, in terms of energy consumption, space, bandwidth on the channel, and speed of signal propagation, there are six, seven, or eight orders of magnitude differences in all four dimensions simultaneously. Given these disparities, the question arises whether human intelligence will be the upper limit of what’s possible. The speaker answers emphatically, “absolutely not.” As our understanding of how to build intelligence systems develops, we will see AIs go far beyond human intelligence. The speaker likens this to other domains where humans are outmatched by machines in specific capabilities, such as speed, strength, and sensory reach. Humans cannot outrun a top fuel dragster over 100 meters, cannot lift more than a crane, and cannot see beyond the Hubble Telescope. Yet machines already surpass these limits in certain areas. The speaker foresees a similar trajectory for cognition: just as machines can outperform humans in other tasks, AI will eventually exceed human cognitive capabilities as technology and understanding advance.
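The gap the speaker quantifies can be checked with simple arithmetic. The figures below are those quoted in the talk, taking 3 lb and 150 Hz as midpoints of "a few pounds" and "100 to 200 Hz":

```python
import math

# Brain vs. data center, using the figures quoted in the talk.
brain = {"power_w": 20,    "mass_lb": 3,   "channel_hz": 150,  "signal_mps": 30}
dc    = {"power_w": 200e6, "mass_lb": 5e6, "channel_hz": 10e9, "signal_mps": 3e8}

# Ratio and nearest order of magnitude for each dimension.
ratios = {dim: dc[dim] / brain[dim] for dim in brain}
orders = {dim: round(math.log10(r)) for dim, r in ratios.items()}
```

The resulting orders come out to 7 (power), 6 (mass), 8 (channel frequency), and 7 (signal speed), which matches the "six, seven, or eight orders of magnitude" claim across all four dimensions.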

Video Saved From X

reSee.it Video Transcript AI Summary
Pattern recognition and deduction are presented as the central paradigm for artificial intelligence, emphasizing human-like intelligence over brute-force computing. The speakers describe pattern sets as core units that store, recognize, and derive new knowledge. Pattern sets are linked to each other by a deduction path and possibly other link types, forming a structure in which new pattern sets can be generated from existing knowledge. The uncensored, hyperlinked Internet and social media are depicted as well-suited platforms to host, share, and collaborate on common, reusable pattern-set knowledge, promoting equality in access and collaboration. Throughout the transcripts, pattern sets are given practical exemplars across domains:
- Food/nutrition: figs are the source for pattern sets related to nutrients and phytochemicals, including minerals (sodium, magnesium, phosphorus, potassium, calcium, manganese, iron, nickel, copper, zinc, strontium) and various compounds (dietary fibers, vitamins, antioxidants, natural sugars, phenolic acids, flavonoids, carotenoids, organic acids). The deduction path derives health-related or nutritional conclusions from these pattern sets.
- Ecosystems and dietary relationships: pattern sets describe which organisms feed on figs (humans, birds, rodents, insects, bats, primates, civets, elephants, kangaroos) and enumerate specific bird families and species that feed on figs (e.g., starlings, blackbirds, song thrushes, wood pigeons, jays, house sparrows, greenfinches, fig birds, toucans, hornbills, pigeons, bowerbirds, crows).
- Magnesium and health benefits: a dedicated pattern set outlines the health benefits of a right amount of magnesium, including good muscle function, bone strength, heart function, blood pressure regulation, relaxation and stress reduction, sleep quality, blood sugar regulation, inflammation reduction, digestion support, mental well-being, and migraine reduction.
The speakers reiterate that pattern recognition and deduction with pattern sets aim to simulate a more human and smarter form of modeling and reasoning than brute force AI, attempting to approximate human-like knowledge representation and inference. They stress that pattern sets will be a dominant structure for representing, storing, recognizing knowledge, and deducing new knowledge from existing pattern sets. The pattern-sets/deduction-path framework is described as enabling new knowledge to emerge from existing knowledge and as a means to facilitate collaboration and equality in access to reusable knowledge via open networks. Each speaker closes with a call to like, follow, and share, and references their sources (e.g., to mea.org, mia.org, or similar domains) as the origin of the concept and examples. The overall message emphasizes pattern recognition and deduction as a scalable, human-centered approach to AI, with diverse, domain-spanning examples illustrating how pattern sets can organize and derive actionable insights from complex data.
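The structure described, pattern sets as nodes and deduction paths as links from which new sets are derived, can be sketched as a tiny graph. Everything below (the class, its field names, and the fig/magnesium example data) is one illustrative reading of the transcript, not an implementation from the source:

```python
class PatternSet:
    """A named bundle of patterns, linked to the sets it was deduced from."""
    def __init__(self, name, patterns, deduced_from=()):
        self.name = name
        self.patterns = set(patterns)
        self.deduced_from = list(deduced_from)   # links forming the deduction path

    def deduction_path(self):
        """Follow the links back to the root knowledge this set came from."""
        path = [self.name]
        for parent in self.deduced_from:
            path.extend(parent.deduction_path())
        return path

# Existing knowledge, per the transcript's fig example.
fig_minerals = PatternSet("minerals in figs",
                          {"magnesium", "phosphorus", "potassium", "calcium"})
magnesium_benefits = PatternSet("health benefits of magnesium",
                                {"muscle function", "bone strength", "sleep quality"})

# New knowledge: deduced because "magnesium" appears in the fig pattern set.
if "magnesium" in fig_minerals.patterns:
    fig_benefits = PatternSet("health benefits linked to eating figs",
                              magnesium_benefits.patterns,
                              deduced_from=[fig_minerals, magnesium_benefits])
```

Calling `fig_benefits.deduction_path()` walks the links back to both source sets, which is the "new pattern sets created from existing knowledge and linked through deduction paths" idea in miniature.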

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker explains that they will turn on a device that emits a 65 kilohertz beam, similar to a laser. They mention that the beam is not audible but can be made audible by adding a modulation. They assure that the high amplitude of the beam won't hurt anyone. They explain that sound waves can create sound when they have high amplitude, and in this apparatus, the sound is created within the beam itself. The speaker then demonstrates the device by playing music and scanning the room to ensure everyone can hear it. They also try bouncing the sound off the wall. The audience raises their hands to indicate they can hear it clearly.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker explains that they will turn on a device that emits a 65 kilohertz beam, similar to a laser. They clarify that the beam is not audible but can be made audible by adding a modulation. They assure that the high amplitude of the beam won't hurt anyone. The speaker mentions that sound waves can create sound when they intersect at high amplitude. They state that the sound created by the device is in the beam itself. They proceed to play music through the device, and the sound appears to come from the audience's heads. The speaker tests the device by scanning the audience and bouncing the sound off the wall. They ask the audience to raise their hands if they can hear it clearly.
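The trick described in both demos, an inaudible 65 kHz carrier "made audible by adding a modulation," is amplitude modulation of an ultrasonic beam; in real parametric speakers the air itself demodulates the signal nonlinearly at high amplitude. A sketch of the modulated signal, with all parameters chosen for illustration:

```python
import math

CARRIER_HZ = 65_000      # ultrasonic carrier, inaudible on its own
AUDIO_HZ = 440           # the audible tone to impose on the beam
SAMPLE_RATE = 1_000_000  # must exceed twice the carrier frequency
DEPTH = 0.5              # modulation depth, between 0 and 1

def modulated_beam(n_samples):
    """Carrier whose amplitude is varied by the audio signal (AM)."""
    out = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        audio = math.sin(2 * math.pi * AUDIO_HZ * t)
        envelope = 1.0 + DEPTH * audio          # slow audio-rate envelope
        out.append(envelope * math.sin(2 * math.pi * CARRIER_HZ * t))
    return out

beam = modulated_beam(10_000)  # 10 ms of signal
```

The envelope swings between 0.5 and 1.5, so the beam's amplitude traces the 440 Hz tone; with DEPTH set to 0 the beam is a bare carrier and carries nothing audible, matching the speaker's description.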

Video Saved From X

reSee.it Video Transcript AI Summary
Floating point numbers are being produced at high volume and have value because they represent artificial intelligence. These numbers can be reformulated into various outputs like languages, proteins, chemicals, graphics, images, videos, and robotic movements. In the previous industrial revolution, water was converted into steam and then electrons. Now, electrons are input, and floating point numbers are the output. Similar to the last industrial revolution where the value of electricity was not immediately understood, the significance of these floating point numbers is emerging.

Video Saved From X

reSee.it Video Transcript AI Summary
- The speaker argues that college is not primarily for learning; everything can be learned for free, and the main value of college is demonstrating hard work through assignments and providing a social environment for a period of time. They also note a need for evidence of exceptional ability, suggesting that attending college is not itself such evidence and that some highly successful people (e.g., Gates, Jobs, Larry Ellison) dropped out.
- Education should resemble a video game: make learning interactive and engaging, and disconnect grade levels from subjects so students can progress at their fastest pace, or at their own interest level, in each subject.
- Much of current teaching resembles vaudeville: a lecturer delivering the same talk year after year, not necessarily engaging, which reduces effectiveness.
- Peter Thiel's view is referenced: a university education is often unnecessary, though not for everyone. You typically learn as much in the first two years as you will later, much of it from classmates. For many companies, completion of a degree signals perseverance, which can matter depending on the goal.
- If the goal is to start a company, finishing college may be pointless. Education should not treat people as assembly-line objects moving through standardized English, math, and science sequences from grade to grade.
- Ad Astra is a small school created by the speaker for their five boys (grown to 14 students now, 20 by September), named from the Latin for "to the stars." It departs from traditional schooling: there are no grades, no grade-by-grade progression, and education is tailored to individual aptitudes and abilities. The school emphasizes problem-based learning rather than teaching tools first; for example, with engines, students start with the engine and learn which tools are needed to disassemble it, rather than studying screwdrivers and wrenches in isolation.
- Students respond positively: the kids enjoy going to school and even think vacations are too long, indicating high engagement. The speaker reiterates that education should be more gamified and engaging rather than a chore.
- The speaker critiques conventional education as downloading data and algorithms into students, calling it tremendously inefficient and noting that it is often unnecessary to learn topics for possible future use, reinforcing the need for a problem-centered, engaging approach.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker envisions a future where programming is largely mediated through natural communication with a computer. In this vision, you will tell the computer what you want in plain language, and the computer will respond with concrete outputs such as a build plan that includes all suppliers and a bill of materials aligned with a given forecast. The speaker emphasizes that the initial interaction is in plain English, and the computer can generate a comprehensive plan based on the stated requirements. If the output doesn’t meet the user’s preferences, the user can create a Python program to modify that build plan. A key example given is asking the computer to come up with a build plan with all the suppliers and the bill of materials for a forecast, and then relying on the computer to produce the necessary components in a cohesive plan. The speaker illustrates a workflow where the user can iterate by writing a Python program that adjusts the generated plan, thereby enabling customization and refinement of the suggestions produced by the initial natural-language prompt. The speaker then reiterates the concept of speaking with the computer in English as the first step, and implies that the second step involves using Python or programmable modifications to tailor the result. This underscores a shift in how programming is approached: the user first communicates in English to prompt the computer, and then leverages programming to fine-tune or alter the plan as needed. The underlying message is that the interaction with computers is evolving toward more intuitive human-computer dialogue, where the machine can interpret a plain-English prompt and produce structured, actionable outputs, with a programmable mechanism to adjust those outputs. Central to this discussion is the idea of prompt engineering—the practice of how you prompt the computer and how you interact with people and machines to achieve the desired outcome. 
The speaker highlights that prompting the computer and refining instructions is an art, describing prompt engineering as an artistry involved in making a computer do what you want it to do. The emphasis is on crafting prompts that elicit precise, useful results and on the skilled, creative process of fine-tuning instructions to achieve the best possible alignment between user intent and machine output.
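The second step the speaker describes, writing a Python program to modify the generated build plan, might look like the sketch below. The plan structure, field names, supplier names, and helper functions are all invented for illustration; a real system would define its own schema:

```python
# A build plan as the computer might return it from a plain-English prompt
# (structure and contents are hypothetical).
build_plan = {
    "forecast_units": 10_000,
    "bill_of_materials": [
        {"part": "enclosure", "supplier": "Acme Plastics", "unit_cost": 2.40},
        {"part": "mainboard", "supplier": "CircuitCo",     "unit_cost": 11.75},
        {"part": "battery",   "supplier": "VoltWorks",     "unit_cost": 4.10},
    ],
}

def swap_supplier(plan, part, new_supplier, new_unit_cost):
    """User-side refinement: override one line of the generated plan."""
    for line in plan["bill_of_materials"]:
        if line["part"] == part:
            line["supplier"] = new_supplier
            line["unit_cost"] = new_unit_cost
    return plan

def total_cost(plan):
    """Forecast-weighted cost of the whole bill of materials."""
    return plan["forecast_units"] * sum(
        line["unit_cost"] for line in plan["bill_of_materials"])

# The iteration loop: accept the generated plan, then adjust it in code.
swap_supplier(build_plan, "battery", "CellForge", 3.60)
```

This mirrors the workflow in the talk: the plain-English prompt yields the structured plan, and a small program applies the user's preferences on top of it.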

Cheeky Pint

The world of voice AI, with Mati Staniszewski of ElevenLabs
Guests: Mati Staniszewski
reSee.it Podcast Summary
The episode dives into the mechanics and evolution of audio AI, tracing how speech technologies moved from hard-coded vocal parameters to flexible, context-aware neural models. The guest explains that early speech systems attempted to mimic the human vocal tract with physical constructs, then shifted to digital representations, phoneme libraries, and probabilistic word predictions. The conversation highlights two pivotal innovations: enabling the model to select dynamic voice parameters such as tone and prosody, and embedding context so the system can adapt its delivery to dialogue type, emotion, and surrounding discourse. The result is a voice model that can produce more natural, emotionally attuned speech and that can align its output with varying intents, whether dialogue, narration, or music. The discussion also covers the multi-stage processing pipeline for voice generation, including text-to-speech, speech-to-text, mel spectrograms, and waveform synthesis, and explains how modern systems blend text and audio representations to achieve realism and speed.
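One stage of the pipeline mentioned, turning a waveform into a spectrogram before any mel-scale warping, reduces to short-time Fourier analysis. A stdlib-only sketch of a single magnitude-spectrum frame; production systems use FFTs and mel filter banks, and this naive DFT is only for clarity:

```python
import cmath
import math

def dft_magnitudes(frame):
    """Naive DFT: magnitude of each frequency bin for one frame of audio."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A 64-sample frame containing a pure tone at frequency bin 5.
N = 64
frame = [math.sin(2 * math.pi * 5 * t / N) for t in range(N)]
spectrum = dft_magnitudes(frame)
peak_bin = max(range(len(spectrum)), key=lambda k: spectrum[k])
```

Stacking such frames over time gives the spectrogram; applying triangular mel-spaced filters to each frame's magnitudes gives the mel spectrogram that modern voice models consume and that vocoders invert back into waveforms.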

Lex Fridman Podcast

Infinity, Paradoxes, Gödel Incompleteness & the Mathematical Multiverse | Lex Fridman Podcast #488
reSee.it Podcast Summary
In this episode Lex Fridman speaks with Joel David Hamkins, a prominent figure in set theory and the philosophy of mathematics, about infinity, different sizes of infinity, and the historical shifts that transformed mathematics in the 19th and 20th centuries. The conversation traces Cantor's discovery that infinity comes in varieties, and the tension, familiar from Galileo and Euclid, between the principle that the whole is greater than the part and the modern Cantor-Hume principle that equinumerosity is established through one-to-one correspondences. Hamkins uses Hilbert's Hotel and the Hilbert train to illuminate why countable infinities are closed under unions, yet real numbers resist such simple counting. The dialogue moves from intuitive pictures to formal ideas, culminating in Cantor's diagonal argument that the reals are uncountable and the power set of any set is strictly larger than the set itself. The discussion then broadens to the foundations of mathematics: set theory as a foundation, the axioms of ZFC, and the axiom of choice, with historical anecdotes about Zermelo, Frege, and Russell's paradox. A significant thread centers on Gödel's incompleteness theorems, their relation to Hilbert's program, and the distinction between truth and provability, including a careful look at semantic notions via Tarski and the idea of proof systems that are sound and (often) complete. The chat moves into modern developments such as forcing, large cardinals, the multiverse view, and the idea that there may not be a single true set of foundations, but a landscape of competing universes with different truths. Hamkins also ventures into the philosophy of mathematical existence, structuralism, and the metaphysical status of numbers, discussing whether mathematics lives in a Platonic realm and how anthropomorphized proofs and thought experiments can aid understanding.
The cadence of the talk weaves mathematical ideas with personal reflections on collaboration, the role of AI in math, and what it means to pursue elegant, simple proofs that reveal deep truths about infinity and mathematical structure.
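Cantor's diagonal argument, discussed above, has a directly constructive flavor: given any list of infinite binary sequences, flipping the diagonal yields a sequence that cannot appear in the list. A finite sketch, where each "sequence" is a function from index to bit:

```python
def diagonal_escape(sequences):
    """Given binary sequences (functions int -> 0/1), build a new sequence
    that differs from the n-th listed sequence at position n."""
    return lambda n: 1 - sequences[n](n)

# A small enumeration of binary sequences.
seqs = [
    lambda n: 0,       # 0, 0, 0, ...
    lambda n: 1,       # 1, 1, 1, ...
    lambda n: n % 2,   # 0, 1, 0, 1, ...
]
escape = diagonal_escape(seqs)

# escape differs from seqs[i] at position i, so it cannot be in the list.
differs = [escape(i) != seqs[i](i) for i in range(len(seqs))]
```

The same one-line flip works against any countable enumeration, which is exactly why the reals (identified with infinite binary sequences) cannot be listed: whatever list you propose, the diagonal escape is missing from it.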

TED

Quantum Computers Aren’t What You Think — They’re Cooler | Hartmut Neven | TED
Guests: Hartmut Neven
reSee.it Podcast Summary
Hartmut Neven, leading Google Quantum AI, explains that quantum computers utilize quantum physics instead of binary logic, allowing for more powerful computations. He describes superposition and parallel universes as key concepts. Current advancements include algorithms for signal processing and potential applications in health monitoring. Neven emphasizes the importance of error correction and predicts significant future capabilities in medicine, energy, and understanding consciousness. Progress continues toward building a practical quantum computer.

Generative Now

Guillaume Verdon: Exploring the Intersection of Quantum Deep Learning and AI
Guests: Guillaume Verdon
reSee.it Podcast Summary
At the frontier where physics meets artificial intelligence, Guillaume Verdon argues that the path to truly powerful AI runs through the laws of nature themselves. Trained as a theoretical physicist, he describes a pivot from chasing a single unifying equation to building machines that mimic nature’s complexity. He helped pioneer quantum deep learning, exploring how quantum information theory could guide neural networks, and he worked on early quantum algorithms and TensorFlow Quantum as the field formed. The aim, he says, is to understand the universe by compressing its data into useful representations. That scientific thread informs his current ventures: Extropic, the ambition to create physics-based AI processors; the pseudonymous Beff Jezos persona used to explore ideas openly; and the broader e/acc (effective accelerationism) movement advocating rapid acceleration of AI. He describes a dual mission: embed AI inside the physics of the world, and embed the world’s physics inside AI. In this worldview, civilization’s growth depends on self-organization, adaptability, and increasingly intelligent systems that use energy more efficiently. Kardashev-scale thinking anchors the long-term goal: more intelligence per watt across the cosmos. Technically, Verdon describes thermodynamic computing, an approach that uses stochastic electron dynamics in superconductors and silicon to run learning algorithms at high speed with far lower energy cost than today’s GPUs. The project treats information theory, thermodynamics, and machine learning as a single framework in which Monte Carlo-style sampling can be realized physically. Early hardware will be silicon and room-temperature, with superconducting platforms for research. The promise is to accelerate problem-specific tasks, then scale to foundational models that adapt to many applications. On regulation and societal impact, he argues against heavy-handed AI restrictions and for policy that remains flexible as technology evolves.
He frames AI as an augmenting partner, an ongoing, iterative process rather than a fixed upgrade, and notes that fear can undermine progress. The strategy includes open collaboration, transparency about algorithmic tradeoffs, and a belief that distributed competition will align AI with human values. He also reflects on his Twitter-era Beff Jezos persona as a way to seed optimistic, future-facing memes that keep the pace of change constructive.
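The “Monte Carlo-style sampling” that Verdon wants hardware to realize physically can be sketched in software. Below is a minimal Metropolis sampler over a made-up two-spin energy function; the energy function and parameters are illustrative assumptions, not Extropic’s actual algorithm.

```python
import math
import random

def energy(s):
    """Toy Ising-like energy for two coupled spins: lower when aligned."""
    return -s[0] * s[1]

def metropolis(steps=20000, temp=1.0, seed=0):
    """Metropolis sampling: visit states in proportion to exp(-E/T)."""
    rng = random.Random(seed)
    s = [1, 1]
    aligned = 0
    for _ in range(steps):
        i = rng.randrange(2)
        old_e = energy(s)
        s[i] = -s[i]                       # propose flipping one spin
        d_e = energy(s) - old_e
        if d_e > 0 and rng.random() >= math.exp(-d_e / temp):
            s[i] = -s[i]                   # reject: undo the flip
        aligned += s[0] == s[1]
    return aligned / steps

frac = metropolis()  # low-energy (aligned) states dominate at this temperature
```

A CPU spends many cycles per proposed flip; the thermodynamic-computing pitch is that noisy physical dynamics can produce such Boltzmann-weighted samples natively.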

Lex Fridman Podcast

Gilbert Strang: Linear Algebra, Teaching, and MIT OpenCourseWare | Lex Fridman Podcast #52
Guests: Gilbert Strang
reSee.it Podcast Summary
In this conversation, Gilbert Strang, a renowned mathematics professor at MIT, discusses the significance of linear algebra and its applications in various fields, including artificial intelligence. He reflects on the impact of MIT OpenCourseWare, which made his linear algebra lectures widely accessible, emphasizing the beauty and simplicity of the subject. Strang highlights the importance of the four fundamental subspaces of a matrix and the concept of singular values, which help in breaking down complex data into understandable components. He expresses a preference for teaching engineering students, who seek practical answers, and notes that linear algebra should be prioritized in education due to its foundational role in understanding data. Strang also shares his joy in teaching and the connections he makes with students, encouraging them to appreciate the beauty of mathematics.
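The four fundamental subspaces Strang emphasizes can be illustrated with a short rank computation; once the rank r of an m-by-n matrix is known, rank-nullity gives all four dimensions. The matrix below is a made-up example, not one from the episode.

```python
from fractions import Fraction

def rank(mat):
    """Rank via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    n_rows, n_cols = len(m), len(m[0])
    r = 0
    for c in range(n_cols):
        # Find a pivot at or below row r in column c.
        piv = next((i for i in range(r, n_rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # Eliminate the entries below the pivot.
        for i in range(r + 1, n_rows):
            f = m[i][c] / m[r][c]
            for j in range(c, n_cols):
                m[i][j] -= f * m[r][j]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],     # a multiple of the first row
     [1, 0, 1]]
m, n = 3, 3
r = rank(A)
# Strang's four fundamental subspaces, with dimensions from rank-nullity:
dims = {
    "column space":    r,       # subspace of R^m
    "row space":       r,       # subspace of R^n
    "null space":      n - r,   # solutions of Ax = 0
    "left null space": m - r,   # solutions of A^T y = 0
}
```

Singular value decomposition refines this picture further, providing orthonormal bases for all four subspaces at once.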

Lex Fridman Podcast

Scott Aaronson: Quantum Computing | Lex Fridman Podcast #72
reSee.it Podcast Summary
In this conversation, Lex Fridman speaks with Scott Aaronson, a professor at UT Austin and director of its Quantum Information Center, focusing on quantum computing and its philosophical implications. Aaronson emphasizes the importance of philosophy in technical fields, arguing that it helps frame and understand complex questions, such as the nature of consciousness and free will. He discusses the historical context of computer science and philosophy, referencing Alan Turing's engagement with philosophical questions and the relevance of formal systems in practical applications. Aaronson introduces quantum computing as a new computational paradigm based on quantum mechanics principles, explaining concepts like qubits, superposition, and interference. He clarifies that quantum computers exploit these phenomena to solve certain problems faster than classical computers, although they do not operate in a magical realm outside traditional computation. The discussion touches on quantum supremacy, a milestone achieved by Google, which demonstrated a quantum computer performing a task faster than classical computers, though the task itself is not yet useful. The conversation also addresses the challenges of building scalable quantum computers, particularly noise and decoherence, and the need for error correction. Aaronson highlights the potential applications of quantum computing in simulating quantum systems, which could revolutionize fields like chemistry and materials science. He cautions against overhyped claims in the quantum computing space, emphasizing the need for rigorous evidence of speed-ups over classical algorithms. Ultimately, the dialogue reflects on the intersection of science, philosophy, and the future of technology.
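The superposition and interference Aaronson describes can be sketched with a two-amplitude state vector: applying a Hadamard gate once puts a qubit into equal superposition, and applying it again makes the branches interfere back to the starting state, something probabilities alone can never do. This is a toy sketch under those standard definitions, not code from the episode.

```python
import math

# A qubit state is a pair of amplitudes (a, b) for |0> and |1>,
# with a^2 + b^2 = 1 (real amplitudes suffice for this example).
H = 1 / math.sqrt(2)

def hadamard(state):
    """Hadamard gate: mixes the two amplitudes with signs that can cancel."""
    a, b = state
    return (H * (a + b), H * (a - b))

state = (1.0, 0.0)         # start in |0>
state = hadamard(state)    # equal superposition: both amplitudes ~0.707
state = hadamard(state)    # interference: the |1> branches cancel
probs = (state[0] ** 2, state[1] ** 2)  # measurement probabilities
```

A classical coin flipped twice stays random; here the second Hadamard returns the qubit to |0> with certainty, which is the signature of amplitude cancellation quantum algorithms exploit.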

Generative Now

Mikey Shulman: Suno and the Sound of AI Music (Encore)
Guests: Mikey Shulman
reSee.it Podcast Summary
Suno’s founders say AI can generate original music that doesn’t exist yet, and they built Suno around an artist-friendly, foundation-model approach to audio. The seed came from Kensho, where the team developed machine learning for finance and learned how to work with audio data at scale; after Kensho was acquired by S&P Global in 2018, they launched Suno in part because Bark, their open-source text-to-speech project, showed strong demand for music tools. The four co-founders, all musicians, left Kensho to pursue a vision of accessible, human-centered music creation. The team notes that audio data are scarce and hard to inspect, unlike text. They chose a large self-supervised approach, using Transformers to build audio foundation models and adapt them to music and speech. Bark, their early open-source project, showed community interest but also the difficulty of turning prompts into reliable sound, which led Suno to pursue a music-focused path rather than a purely speech-focused one. Today Suno lets users create songs that do not exist by prompting the model with lyrics, descriptions, or direct requests such as “make me a reggae song about podcasting.” Users can upload lyrics, tweak prompts with line breaks, and describe mood to shape output. They stress ownership of created music and forbid imitating real artists; prompts like “Taylor Swift song” are not allowed, and the model does not know who artists are. They envision new input modes beyond text (humming, sounds, mood boards) and see soundtracking everyday life as a core workflow, with evolving formats and sharing. A Microsoft Copilot partnership and a web app broaden access, and a new model will improve fidelity and controllability. They discuss regulatory and licensing issues, stressing ethical, artist-friendly constraints. They expect regional differences in licenses and royalties, and anticipate licensing deals or collaborations with artists in the future.
They compare their position to OpenAI’s licensing approach and Google’s Lyria, noting that geography will complicate cross-border access. Suno plans a new model with better fidelity, song quality, and controllability, and continues to hire in Cambridge, expanding workflows beyond text-to-music prompts toward richer soundtracks.