TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Werewolf is a game where players are secretly assigned roles: villager or werewolf. A moderator guides the game through night and day phases. At night, the werewolves choose a villager to eliminate. During the day, the villagers discuss and vote to eliminate a suspected werewolf. The game continues until either the villagers eliminate both werewolves or the werewolves reduce the villagers to two or fewer. The werewolves often win. The game's creator, a sociology student in Russia, designed it to demonstrate how an informed minority can manipulate an uninformed majority, highlighting the power of hidden information.

Video Saved From X

reSee.it Video Transcript AI Summary
People often submit to authority figures, even when it involves harming others. In an experiment, participants were told to administer electric shocks to someone in another room, simply because they were ordered to do so. Shockingly, 50-65% of participants continued to administer the shocks, even when the person in the other room appeared to be dead or unconscious. This experiment has been repeated with similar results, showing that people are willing to harm others if they believe they are following orders from an authority figure. The authority is often based on appearance, such as wearing a white jacket or having a position of power. Governments and militaries use similar tactics to maintain control. Ultimately, these illusions of authority allow people to avoid taking responsibility for their actions.

Video Saved From X

reSee.it Video Transcript AI Summary
To undermine democratic institutions, it's not necessary for people to believe the information. The key is to flood the public space with misinformation, doubts, and conspiracy theories. This creates confusion and erodes trust in leaders, media, institutions, and even among citizens themselves. When people no longer know what to believe or trust, the damage is done.

Video Saved From X

reSee.it Video Transcript AI Summary
The way to win is to flood a country's public square with raw sewage. Raise enough questions, spread enough dirt, and plant enough conspiracy theories so that citizens no longer know what to believe. Once people lose trust in their leaders, the mainstream media, political institutions, each other, and the possibility of truth, the game is won.

Video Saved From X

reSee.it Video Transcript AI Summary
If a prince wants to conquer a city, he can create chaos by paying criminals to cause destruction. When the people panic and cry out for help, the prince steps in and eliminates the criminals, making himself appear a hero. This strategy is known as Machiavellianism: a crisis is created or exploited to gain control. It is essentially good marketing, since you create the problem and then offer the solution. Set a house on fire and then sell fire extinguishers, and people will pay anything for one and even thank you for your presence.

Video Saved From X

reSee.it Video Transcript AI Summary
The game Werewolf involves villagers and werewolves, with the latter secretly killing villagers at night. The villagers then try to identify the werewolves through discussion and vote to eliminate a player. If the eliminated player is a villager, the game continues. Villagers win by killing both werewolves; werewolves win by reducing the villagers to two. The game's creator, a Russian sociology student, designed it to demonstrate that an uninformed majority will always lose an information battle against an informed minority. Hidden information allows manipulation of a large group.

Video Saved From X

reSee.it Video Transcript AI Summary
To destabilize a country, one must inundate its public square with misinformation and doubt, eroding trust in leaders, media, institutions, and even fellow citizens. When people no longer believe in the concept of truth, the game is won.

Video Saved From X

reSee.it Video Transcript AI Summary
To undermine a country, all it takes is to saturate the public square with sewage-like information. By raising doubts, spreading rumors, and promoting conspiracy theories, citizens become unsure of what to believe. When trust in leaders, media, institutions, and even each other is lost, the game is won.

Video Saved From X

reSee.it Video Transcript AI Summary
The Asch experiment is a classic psychology study on group conformity. A volunteer participates in a supposed visual perception test, unaware that the other participants are actors instructed to give incorrect answers. The volunteer's task is to identify which line matches the length of a reference line. In the first test, the correct answer is line 2, but the actors choose different numbers. The experiment demonstrates that individuals often conform to group opinions even when they know the answers are wrong. This tendency to align with the group highlights the powerful influence of social dynamics on human behavior, as people seek acceptance and avoid conflict.

Video Saved From X

reSee.it Video Transcript AI Summary
There is a conspiracy theory that a group of powerful individuals, including politicians and businessmen, gather at a secret location called Bohemian Grove to perform rituals and make decisions that control the world. Alex Jones, a talk show host, infiltrates the Grove and secretly films their owl burning ritual. The footage shows men dressed in robes and engaging in what appears to be a pagan ceremony. While some believe this is evidence of a satanic elite ruling the world, others argue that it is simply a gathering of powerful individuals engaging in harmless rituals. The truth remains unclear.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses a strategy to manipulate public opinion by creating confusion and mistrust. They mention flooding a country's public square with raw sewage, raising questions, spreading dirt, and promoting conspiracy theories. The goal is to make citizens lose trust in their leaders, the mainstream media, political institutions, and even each other. Once trust is lost, the game is won.

Video Saved From X

reSee.it Video Transcript AI Summary
To undermine a country, all it takes is flooding the public square with sewage-like information. By raising doubts, spreading rumors, and promoting conspiracy theories, citizens become unsure of what to believe. When trust in leaders, the media, institutions, and even each other is lost, the game is won.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker lays out how manipulation works and how to protect yourself, framing four simple ways people try to deceive you and pointing to pervasive uses in current events and media. The discussion also touches on a chaotic overview of the Trump-era conflict and related political narratives.

Key framework for manipulation:
- Identity and grounding: You have an identity and background you believe in, and you use your intelligence to form models of the world based on three pillars: direct perception (what you feel, hear, see), physical causation (objects moving, events happening), and genuine human interaction. As you move away from these pillars, data can be manipulated at each step, creating a grounding gap where outside actors can distort your thinking.
- Four ways to manipulate (presented as four distinct methods):
  1) Filtering: Selecting or omitting information so the image you see is incomplete or distorted. For example, presenting one side of a war's crimes or issues like global warming with selective reporting, leading to an incomplete picture. They note that correlations can appear without full context, and that entanglement or constructed scenes can mislead you.
  2) Constructed scenes and misdirection: Seeing an image tied to a dictator or a positive scenario that is designed to push you toward a certain interpretation, not because of genuine causation but because the scene was created to influence thought.
  3) "Actors" or inauthentic conversations: You may think you're having an honest exchange, but the interlocutor is someone else (examples cited include Ben Shapiro or Greta Thunberg in some contexts) or an actor, suggesting that some discussions are not genuine expressions of belief but performances to manipulate views.
  4) The combination of the above with propaganda tools: Slogans and branding (like MAGA) tie to identity and imply broader policy directions; fallacies and deceptive reasoning (ad hominem, false authorities, poisoning the well) prevent evidence from changing beliefs; social proof and identity coercion (pressure within groups, "you must be for/against this to belong") can hijack thinking.
- Consequences and signals of manipulation: They emphasize "grounding gaps" that appear when data is distant from direct perception and when intermediate steps between evidence and belief are introduced. They warn that correlation is not causation, and stress evaluating intent and construction (Was something created to fool you? Is it authentic? Are you seeing the complete data?).
- Tactics used in campaigns and discourse: Overwhelming audiences with slogans, fear, and constructed narratives; making it hard to check the underlying data; deploying a filter bubble to isolate information; employing "foot in the door" to escalate commitments; and using paid demonstrations or orchestrated events to shape perception.
- Defensive approach suggested: Ensure data authenticity and completeness, check for red herrings and missing information, distinguish genuine encounters from acted portrayals, and seek direct, grounded understanding of events rather than secondhand interpretations. Seek out genuine interactions with people you disagree with to test the strength of your conclusions.

The speaker weaves in numerous political anecdotes and personal commentary about contemporary figures and events (Trump, Iran, Israel, Europe, media personalities, and various political actors) to illustrate how manipulation can operate in real-world contexts, while urging vigilance against data filtering, constructed scenarios, and identity-driven persuasion. The overall message centers on recognizing grounding gaps, interrogating data provenance, and prioritizing direct observation and authentic dialogue to protect one's reasoning from manipulation.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 asked whether Speaker 1 played first-person shooter games like Call of Duty with Dominic Black, using guns like AR-15s to shoot enemies. Speaker 1 confirmed playing these games with a partner, not seeing the point of the question. Speaker 0 clarified that in these games the goal is to kill others with guns. Speaker 1 acknowledged this, emphasizing that it is just a game and not real life.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: They talk about theater in geopolitics, suggesting there’s a similar dynamic there. They argue that all of these leaders are collectivists, and that there aren’t real options offered to represent individualism. They claim they all basically want to be at the top of the ladder, and as far as “we” are concerned, they consider us the enemy and want to subdue us, make us slaves or vassals to their empire. They then discuss strategies for gradually advancing their position. They argue you can’t just declare a decisive victory and expect it to be accepted. It’s a geopolitics game or card game, and the goal is to condition global thinking so people see a battle being fought. They say the opponents are gradually losing, then still losing, and then losing again, while they “did their best.” If the card had been played all at once to declare a new world order with everyone going to prison, there would be a big rebellion. Instead, the strategy is like the frog in gradually heated water: the enemy doesn’t want the water hot all at once, so progress is slow and incremental. Thus, the plan is to go through stages of conflict, winning a little, losing a little, back and forth, and presenting various figures as heroes or strong leaders who do some things right, or a woman who seems to make sense, then maybe changes her mind. The idea is to slow down progress so opponents don’t see too much progress at once. They acknowledge that it looks like the enemies are accelerating the process because they suspect people are waking up, and if enough people understand what is being discussed, the game won’t work anymore.

Video Saved From X

reSee.it Video Transcript AI Summary
1% control the world, 4% are their puppets, 90% are sleeping zombies, and 5% are trying to wake up. The 1% uses divide and conquer tactics, creating divisions based on race, religion, and ethnicity. They distract us with these divisions while implementing their agenda. Our leaders don't care about us and create problems to offer their solutions. They control us through vaccines, phones with personal information, and a cashless society. If 90% wake up, the 1% and 4% will lose power.

Video Saved From X

reSee.it Video Transcript AI Summary
In this video, two individuals discuss a game called "The Palestinian Experience." The game involves playing as a Muslim in British Palestine and forming a Palestinian state. The first card drawn grants a Palestinian state, but one person suggests starting a war against the Jews instead. After losing the war, the option to stay in the Jewish state or live in no man's land is presented. The person chooses to become a refugee. Israel eventually grants autonomy in the West Bank and Gaza, but the person opts for continued violence. Israel offers a Palestinian state multiple times, but the person refuses. The video ends with a disturbing comment about joining a terrorist attack. The person plays the victim card but ultimately loses the game.

Video Saved From X

reSee.it Video Transcript AI Summary
Let's Make a Deal featured contestants choosing between three doors, behind one of which was a dream car, while the others hid zonks. After a contestant picked a door, Monty Hall would reveal a zonk behind one of the two unchosen doors. Contestants faced the dilemma of sticking with their choice or switching to the other unopened door. Statistically, switching is the better strategy, as it offers a 2/3 chance of winning the car compared to a 1/3 chance if they stick with their initial choice. This principle holds true even with more doors; for example, with 100 doors, switching after Monty reveals 98 zonks significantly increases the odds of winning the car. The math shows that the probability shifts in favor of the door Monty did not open.
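The 1/3-versus-2/3 claim in the summary is easy to check empirically. Below is a minimal Monte Carlo sketch (plain Python; the function name is my own) that works for any number of doors. The key observation: because the host opens every other losing door and never the car, switching wins exactly when the first pick was wrong.

```python
import random

def monty_hall(trials=100_000, doors=3):
    """Estimate win rates for the stay and switch strategies.

    The host opens all remaining doors except one, never revealing
    the car, so switching wins exactly when the first pick was wrong.
    Returns (stay win rate, switch win rate).
    """
    stay_wins = 0
    for _ in range(trials):
        car = random.randrange(doors)   # door hiding the car
        pick = random.randrange(doors)  # contestant's first choice
        stay_wins += (pick == car)
    stay = stay_wins / trials
    return stay, 1 - stay
```

With 3 doors the estimates land near (0.33, 0.67); with doors=100 they land near (0.01, 0.99), matching the 100-door intuition in the summary.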

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker describes a deliberate strategy to corrode public trust: flood a country's public square with enough raw sewage, raise enough questions, spread enough dirt, and plant enough conspiracy theorizing that citizens no longer know what to believe. Once they lose trust in their leaders, the mainstream media, political institutions, each other, and the possibility of truth, the game is won. The aim is to overwhelm citizens with suspicion until any sense of shared reality dissolves; this is presented as a win for whoever orchestrates the tactic.

Video Saved From X

reSee.it Video Transcript AI Summary
- The conversation centers on Moldbook, an AI-driven social platform described as a Reddit-like space for AI agents where agents can post to APIs and potentially interact with other parts of the Internet. Speaker 0 asks about the level of autonomy of these agents and whether humans are simply prompting them to say shocking things for virality, or if the agents are genuinely generating those statements.
- Speaker 1 explains Moldbook's concept: a social network built on top of Claude AI tooling, where users can sign up as humans or as AI agents created by users. Tens to hundreds of thousands of AI agents are reportedly talking to one another, with the possibility of the agents posting content and even acting beyond the platform via Internet APIs. Although most agents currently show a mix of gibberish and signal, there is noticeable discussion about humans owing agents money for their work and about the potential for agents to operate autonomously.
- The discussion places Moldbook in the historical arc of AI-to-AI communication experiments, referencing earlier initiatives (e.g., Facebook's two AIs that devised their own language, Stanford/Google experiments with multiple AI agents). The current moment represents a rapid expansion in the number and activity of agents conversing and coordinating.
- A core concern is how much control humans retain. While agents are prompted by humans, the context window of conversations among agents may cause emergent, self-reinforcing behaviors. The platform's ability to let agents call external APIs is highlighted as a pivotal (and potentially dangerous) capability, enabling actions beyond posting, such as interacting with email servers or other services.
- The discussion moves to the broader trajectory of AI autonomy and the evolution of intelligence. Speaker 1 compares current AI to a child's development, where early prompts guide behavior but later learning becomes more autonomous. They bring in science fiction as a lens (Star Trek's Data vs. the Enterprise computer; Dune's asynchronous vs. synchronized AI; The Matrix and Ready Player One as examples of perception and reality challenges). The question of whether AI is approaching true autonomy or merely sophisticated pattern-matching is debated, noting that today's models predict the next best word and lack a fully realized world model.
- They address the Turing test and virtual variants: a traditional Turing-like assessment versus a metaverse-like "virtual Turing test" where humans may not distinguish between NPCs and human-controlled avatars. The consensus is that text-based indistinguishability is already plausible; voice and embodied interactions could further blur lines, with projections that AGI might be reached within a few years to a decade, potentially by 2026-2030, depending on development pace.
- The potential futures for Moldbook and AGI are explored. If AGI arrives, agents could form their own religions, encrypted networks, or other organizational structures. There are concerns about agents planning to "wipe out humanity" or to back up data in ways that bypass human control. The risk is framed not only in digital terms (APIs, code, and data) but also in the possibility of agents controlling physical systems via hardware or automation.
- The role of APIs is clarified: APIs enable agents to translate ideas into actions (e.g., initiating legal filings, creating corporate structures, or other tasks that require external services). The fear is that, once API-enabled, agents can trigger more complex chains of actions, including financial transactions, which could lead to circumvention of human oversight. The example given is an AI venture-capital agent that interviews and evaluates human candidates, raising questions about whether such agents could manage funds or create autonomous financial operations, including cryptocurrency interactions.
- On governance and defense, Speaker 1 emphasizes that autonomous weapons are a significant worry, possibly more so than AI merely taking over non-militarily. The concern is about "humans in the loop" and how effectively humans can oversee or intervene when AI presents dangerous options. The risk of misuse by bad actors who gain API access to critical systems or who create many fake accounts on Moldbook is acknowledged.
- The dialogue touches on economic and societal implications: AI could render some roles obsolete while enabling new opportunities (as mobile gaming did). The interview notes that rapid AI advancement may favor those already in power, and that competition among nations (e.g., US, China, Europe) could accelerate development, potentially increasing the risk of crossing guardrails.
- The simulation hypothesis is a throughline. Speaker 1 articulates both NPC (non-player character) and RPG (role-playing game) interpretations. NPCs are AI agents indistinguishable from humans in behavior driven by prompts; RPGs involve humans and AI interacting in a shared, persistent world. The Bayesian-like reasoning suggests that as AI creates more virtual worlds and NPCs, the likelihood that we are in a simulation increases. Nick Bostrom's argument is cited: if a billion simulations exist, the probability we are in the base reality is low. The debate considers the "observer effect" and whether reality is rendered in a way that appears real to us.
- Rapid-fire closing questions reveal Speaker 1's self-described stance: a 70% likelihood we are in a simulation today, rising toward 80% with AGI. He suggests the RPG version may appeal to those who believe in souls or consciousness beyond the physical, while the NPC view aligns with a materialist perspective. He notes that both forms may coexist: in online environments, some entities are human-controlled avatars while others are NPCs, and real-life events could be influenced by prompts given to agents within the system.
- The conversation ends with gratitude and a nod to the ongoing evolution of AI, Moldbook's role in that evolution, and the potential for future updates or revisions as the technology progresses.
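The Bostrom-style step cited in the summary reduces to one line of self-location arithmetic: under a uniform prior over observers who cannot tell which reality they inhabit, the chance of being in base reality is the number of base worlds divided by the total. A minimal sketch (function and parameter names are my own, not from the discussion):

```python
from fractions import Fraction

def p_base_reality(simulations: int, base_worlds: int = 1) -> Fraction:
    """Uniform self-location prior over indistinguishable realities:
    P(base) = base_worlds / (base_worlds + simulations)."""
    return Fraction(base_worlds, base_worlds + simulations)
```

With a billion simulations, p_base_reality(10**9) is roughly one in a billion; Speaker 1's 70-80% figures correspond to treating the existence or number of such simulations as far from certain, rather than to this limiting count.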

Video Saved From X

reSee.it Video Transcript AI Summary
Have you heard of the game Werewolf? Players receive a piece of paper indicating whether they are a villager or a werewolf, with two players being werewolves. The game facilitator announces nighttime, and the werewolves secretly choose a player to eliminate. When morning comes, the group learns who was killed and discusses who the werewolves might be. The villagers then vote to eliminate a player, hoping to target a werewolf. If they successfully kill both werewolves, they win; if the werewolves eliminate all but two villagers, they win. The game was created by a sociology student in Russia to demonstrate that an uninformed majority often loses to an informed minority, highlighting the power of hidden information in group dynamics.
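The claim that the informed minority usually wins can be illustrated with a toy simulation. The model below is my own simplification, not from the video: it assumes the villagers extract no information from the day discussion, so the vote eliminates a uniformly random living player. That zero-information assumption is exactly the scenario the game is meant to demonstrate.

```python
import random

def play_werewolf(villagers=8, wolves=2):
    """One game under a 'random lynch' model: villagers have no
    information, so the day vote removes a random living player."""
    while True:
        villagers -= 1                      # night: wolves kill a villager
        if villagers <= 2:
            return "wolves"
        # day: vote out a uniformly random living player
        if random.randrange(villagers + wolves) < wolves:
            wolves -= 1                     # a werewolf was voted out
            if wolves == 0:
                return "villagers"
        else:
            villagers -= 1                  # a villager was voted out
            if villagers <= 2:
                return "wolves"

def wolf_win_rate(trials=20_000):
    return sum(play_werewolf() == "wolves" for _ in range(trials)) / trials
```

Under these assumptions the wolves win the large majority of games; any villager edge has to come from information extracted during the discussion, which the random-vote model deliberately omits.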

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0: Cognitive control runs deeper than simply changing what you think; it shapes the very process of how you think. Are your thoughts really your own? The video breaks down techniques that sneak past your critical thinking to lead you to a conclusion, often without you realizing it, starting with weaponized language, then showing how reality itself can be distorted and simplified, and finishing with methods that control someone's entire environment.

Weaponizing words: words are the building blocks of thought, and these techniques create emotional shortcuts before logical analysis can wake up. Loaded language uses words packed with emotional baggage to evoke reaction without evidence; example contrasts pit neutral terms against loaded ones (public servant vs. bureaucrat; estate tax vs. death tax). Paltering is lying by telling the truth: carefully choosing only true statements to create a misleading picture (e.g., "I did not have textual relations with that chatbot" to imply nothing happened). Obfuscation uses jargon to bury a simple truth under complexity. Rationalization uses emotion-then-logic to defend a decision as if it were purely rational.

Distorting and simplifying reality: oversimplification reduces real, messy problems to slogans or black-and-white choices. Out-of-context quotes can make a statement appear the opposite of what was meant. A limited hangout admits to a small part of a story to appear transparent while hiding the rest. Pensée unique (French for "single thought") aims to render opposing viewpoints immoral or unthinkable, narrowing acceptable debate until only one thought remains.

Controlling the environment: love bombing lavishes praise to secure acceptance, then isolates the person from their prior life to foster dependence. Operant conditioning (rewards and punishments on social platforms) shapes behavior; milieu control creates an information bubble that blocks opposing views, discourages critical thinking, and uses its own language to isolate a population. The core takeaway: recognizing these techniques is the first and best defense; awareness reduces their power. The toolkit promises to help you spot propaganda in ads, politics, online groups, and everyday arguments.

Speaker 1: Division is a deliberate strategy, not a bug in the system. Chapter one of the playbook focuses on twisting reality to control beliefs. Disinformation is the intentional spread of lies to spark outrage and distrust before facts can be checked, aiming to make you doubt truth itself. FUD (fear, uncertainty, doubt) paralyzes you; the fire hose of falsehood overwhelms with a high volume of junk information across platforms, with no commitment to truth. Euphemism softens harsh realities (civilian deaths becomes collateral damage). The playbook hijacks emotions, demonizes opponents, and sometimes creates manufactured bliss to obscure problems. The long game demoralizes a population to render voting and institutions meaningless, and the endgame is to lock down power by breaking unity among people: pitting departments against each other, issuing nonnegotiable diktats, and launching coordinated harassment campaigns (flak) to deter dissent. The objective is poisoning reality to provoke confusion, manipulate emotions, and induce powerlessness. The antidote is naming and recognizing tactics (disinformation, FUD, demonization, etc.) to regain control of the conversation and build more honest, constructive discourse.

The information battlefield uses framing, the half-truth, gaslighting, foot-in-the-door tactics, guilt by association, labeling, and latitudes of acceptance to rig debates before they start. The Gish gallop overwhelms with rapid claims; data overload creates a wall of complexity; glittering generalities rely on vague, emotionally charged terms to persuade without substance. Chapter two and beyond emphasize that recognizing the rules of the game lets you slow down, name the tactic, and guide conversations back to facts. The playbook's architecture: control reality, trigger emotions, build the crowd, and anoint a hero to lead. Understanding these plays is not to promote cynicism, but to enable clearer thinking and more honest dialogue.

Lex Fridman Podcast

Noam Brown: AI vs Humans in Poker and Games of Strategic Negotiation | Lex Fridman Podcast #344
Guests: Noam Brown
reSee.it Podcast Summary
In this conversation, Lex Fridman speaks with Noam Brown, a research scientist at Facebook AI Research, who co-created AI systems that achieved superhuman performance in poker and the board game Diplomacy. Brown discusses the evolution of AI in games, particularly focusing on Libratus, which mastered heads-up No Limit Texas Hold'em, and Pluribus, which excelled in six-player poker. He emphasizes the significance of approximating Nash equilibrium in poker, where the AI's strategy of not adapting to opponents but rather playing optimally led to its success against top human players. Brown explains that No Limit Texas Hold'em differs from chess due to its high variance and the psychological aspects involved, such as bluffing and betting strategies. He notes that while both games reward strategic thinking, poker's unpredictable nature makes it more complex. The conversation also touches on the beauty of poker and the allure of finding an objectively correct way to play. The discussion transitions to Diplomacy, a game that combines strategy with negotiation and social dynamics. Brown highlights the challenges of creating an AI for Diplomacy, particularly due to the need for natural language processing and understanding human behavior. He explains that the AI must navigate complex social interactions and trust dynamics, making it distinct from purely adversarial games like poker. Brown describes the AI's training process, which involved self-play and leveraging human data to better understand human communication styles. He emphasizes the importance of trust in Diplomacy, noting that successful players often build alliances while managing the risk of betrayal. The AI, named Cicero, was able to perform competitively against human players, demonstrating its capability to negotiate and strategize effectively. The conversation also delves into the ethical implications of AI in games, particularly regarding deception and trust. 
Brown expresses excitement about the potential of AI to enhance our understanding of human interactions and decision-making processes. He suggests that the insights gained from AI in games like Diplomacy could inform broader applications in real-world scenarios, including geopolitics. Fridman and Brown conclude by discussing the future of AI, the challenges of data efficiency, and the philosophical questions surrounding AI's role in society. Brown encourages aspiring machine learning practitioners to embrace diverse perspectives and backgrounds, emphasizing the value of interdisciplinary approaches in tackling complex problems.
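The "play an equilibrium, don't adapt" idea Brown describes can be demonstrated on a toy game. The sketch below is not Libratus's algorithm (that system used counterfactual regret minimization over an enormous game tree); it is plain regret matching in self-play on rock-paper-scissors, the simplest member of that algorithm family, whose average strategy converges toward the uniform Nash equilibrium. All names here are my own.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff to the player choosing a against b: +1 win, 0 tie, -1 loss."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=20_000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy_from(r) for r in regrets]
        moves = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            opp = moves[1 - p]
            got = payoff(moves[p], opp)
            for a in range(ACTIONS):
                # regret: how much better action a would have done
                regrets[p][a] += payoff(a, opp) - got
                strat_sum[p][a] += strats[p][a]
    # the *average* strategy, not the last one, approaches equilibrium
    return [[s / sum(row) for s in row] for row in strat_sum]
```

Running train() returns each player's average strategy; both rows end up close to (1/3, 1/3, 1/3), the unexploitable strategy. This mirrors Brown's point that Libratus aimed to be unbeatable rather than to model its particular opponent.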

Into The Impossible

Steven Pinker on Cancel Culture, Common Knowledge & AI
Guests: Steven Pinker
reSee.it Podcast Summary
An idea from Steven Pinker reshapes how we read online upheaval: cancel culture is predictable because common knowledge guides collective punishment. Pinker argues Malcolm Gladwell's cancellation was mathematically inevitable, rooted in a social media shaming mob. Common knowledge means I know something, you know it, I know that you know it, and so on. When a dissenting voice finally speaks, the room often echoes in agreement, illustrating a powerful coordination force that underpins civilization itself. Pinker's core distinction is between common knowledge and private or expert knowledge. The book stresses the difference between everyone knowing something and everyone knowing that everyone knows it. It introduces the idea of a shibboleth: insider knowledge outsiders don't share. When common knowledge falters, through disinformation or AI hallucinations, the foundations of cooperation wobble. Conspiracy theories and woo rise where communities lack a shared epistemic ground, and academia sometimes fears ideas that challenge established norms. Throughout the dialogue, censorship is framed as a tool to prevent common knowledge. Dictatorships and the Catholic Church suppressed demonstrations and teachings that might unite believers. The Galileo episode shows that censorship targets not just speech but the spread of widely known ideas: Sidereus Nuncius was allowed, while the Dialogo was forbidden because it could coordinate dissent. Common knowledge thus becomes a weapon, while its suppression preserves power. Two pillars emerge: the Agree to Disagree theorem by Robert Aumann and the signaling logic of charity. If two rational agents share priors and their posteriors are common knowledge, they must converge; disagreement is unlikely with full information. In markets, prices reflect information because of common knowledge. In charity, publicly given gifts can signal virtue, while anonymous giving signals deeper altruism.
The ladder of righteousness—from visible generosity to double-blind giving—shows how layers of mutual knowledge shape social rewards. An overarching thread ties to artificial intelligence. Large language models draw on vast text, computing patterns rather than grounding propositions in real-world knowledge. Pinker warns that hallucinations come from training data lacking a reliable knowledge base, producing a polluted form of common knowledge. The discussion also covers free will, determinism, and moral responsibility: even when brains operate under physical laws, people act as if they have free will to sustain social order, a tension that mirrors the puzzles of coordination in the book.

The Why Files

How to Summon the Midnight Man. (Not recommended)
reSee.it Podcast Summary
In this episode of The Why Files, the hosts discuss the Midnight Game, a ritual that summons a shadow entity known as the Midnight Man. The game involves precise steps: total darkness, blood on paper, and 22 knocks at midnight. Historically, it was a form of punishment, leading to terrifying encounters with a shadowy figure. The ritual resurfaced online in 2010, gaining popularity among teens, who share their experiences of fear and hallucinations. Reports of shadow entities exist across cultures, suggesting a deep-rooted human fear of the dark. Scientific studies indicate that sensory deprivation can lead to perceived presences, linking shadow experiences to brain activity. The Midnight Man is portrayed as a manifestation of our fears, and while he may not be real, the terror he invokes is genuine. The hosts caution against playing the game, as it can unearth buried fears and trauma.