TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
- The situation on X is severe.
- Rise of bots and fake accounts: automated, AI-powered bots are flooding the app, and they are getting smarter.
- In one study, a botnet of over 1,000 fake accounts was caught promoting crypto scams.
- During a political debate, over a thousand bots pushed coordinated false claims, with some accounts tweeting every two minutes.
- By February 2024, 37% of all Internet traffic came from malicious bots.
- These bots now use advanced AI models like ChatGPT to generate human-like responses and interact with each other, making them nearly impossible to detect.
- The platform's ad-driven business model thrives on outrage and engagement.
- Emotional, polarizing content gets more clicks, and bots are perfect for spreading it.
- Real-world impact: bots distort conversations, amplify falsehoods, and manipulate public opinion.
- Conclusion: how bad is it? Very bad.

Video Saved From X

reSee.it Video Transcript AI Summary
Welcome to the Internet, where half the accounts aren't people. They're bots. Crypto scams, fake comments, instant DMs, and paid praise, all churned out by lines of code. It's a digital masquerade, and guess what? The platforms are in on it. They let it happen because bots drive numbers. More views, more likes, more ad money. There are millions of them lurking in the shadows, posting, buying, selling, lying. The Internet isn't fake. Most of it is pretending to be real. Think about it. That glowing review could be a bot. That viral post, probably a bot. And those followers? Not every one of them has a heartbeat. Don't feed the bots. Don't trust the hype. In this world of digital deception, it's up to you to sift through the noise and find the truth.

Video Saved From X

reSee.it Video Transcript AI Summary
The disinformation industry distorts reality with online propaganda. Hanan's team boasts about past successes and tools like AIMS that weaponize social media. Their bots are sophisticated, appearing human with multiple platform accounts. They create fake personas for various purposes. The team claims to have worked in countries worldwide and to be able to hack Telegram and Gmail. Hanan exploits vulnerabilities in the global signaling system, SS7.

Video Saved From X

reSee.it Video Transcript AI Summary
FBI Special Agent Elvis Chan informed Twitter and other social media platforms about a potential hack and leak operation before the 2020 presidential election, despite having no evidence. The government had possession of Hunter Biden's laptop for a year and pre-bunked the story that eventually came out. The Aspen Institute held a tabletop exercise to prepare for a story about Hunter Biden and Burisma, with journalists and big tech executives present. The exercise aimed to discount and censor the story. Fifty-one former intelligence officials signed a letter claiming the laptop story was Russian disinformation, with the intention of helping Joe Biden win the election. This coordinated effort to suppress information is concerning and undermines democracy.

Video Saved From X

reSee.it Video Transcript AI Summary
Tal Hanan, the leader of the hacking and disinformation unit Team Jorge, has been exposed after operating secretly for two decades. A joint investigation revealed Hanan's methods of manipulating elections for money. Team Jorge uses AIMS, a software that weaponizes social media via an army of over 30,000 sophisticated bots or avatars. These bots have multilayered identities across multiple platforms, making them appear human. Hanan demonstrated creating a fake persona with email, date of birth, and images. Team Jorge claims to have worked in countries worldwide and to be able to hack Telegram and Gmail accounts using vulnerabilities in the SS7 global signaling system. Leaked emails show fees ranging from $400,000 to $600,000, and confirm Team Jorge's covert involvement in the 2015 Nigerian presidential election.

Video Saved From X

reSee.it Video Transcript AI Summary
Steve Bannon approached SCL, a UK-based military contractor, with the goal of using psychological warfare to manipulate an entire country. He believed that changing culture was essential to changing politics, and to do so, he needed a range of information weapons. SCL and later Cambridge Analytica developed data harvesting programs to gather user data and their friend networks. This data was then analyzed using algorithms to create personality profiles and identify psychological vulnerabilities. The goal was to strategically distribute information online that would exploit these vulnerabilities and influence people's thoughts and actions.

Video Saved From X

reSee.it Video Transcript AI Summary
The Russians have weaponized social media by manipulating public opinion through biased or fake stories. However, domestic disinformation is also a significant issue. In 2016, the Russian efforts may not have been very sophisticated, but they learned that they don't need to create content themselves as there are people in the US who will do it. There were two types of disinformation attacks in 2016, with the Internet Research Agency taking over existing groups in the US and pushing radical positions. While foreign influence gets a lot of attention, the majority of problems in the information environment are domestic. The domestic threat of disinformation is considered the most significant immediate threat to the 2020 election.

Video Saved From X

reSee.it Video Transcript AI Summary
Hey guys, my name is Olesia and I'm a former employee of the so-called troll factory in Kyiv, Ukraine. Today I want to tell you something about this structure and some reasons why I left this job. I may be wrong, but I think it's the first time somebody has published insider materials on this topic since 2019, when a journalist infiltrated the office in Kyiv to make a report about it. Back then, I did realize that it was a troll factory. But, you know, I told myself it's okay, because I always supported President Zelenskyy and I still do. At first the job was focused on supporting President Zelenskyy online, like writing positive comments or posts, etc. So we were mostly working on Facebook and Instagram. As time passed, I was transferred from the Ukrainian department, which worked for the Ukrainian audience, to the English-speaking department, which focused on the English-speaking public, like Americans and Europeans. But the duties remained the same: support for President Zelenskyy, support for Ukraine and the Ukrainian war effort. We also had French, German, and Italian departments. I heard some other offices were hiring people who spoke Finnish and also Swedish and Estonian. About half a year ago our main chief in command, Andriy Borysovych Yermak, paid us a visit. I'm pretty sure you have heard of the head of President Zelenskyy's office. It was really an unusual event for the main figure behind our project to come visit the office. So he came with some English-speaking officials who were introduced to us as the American partners. We were told that they were very important guests, but no further details. Some of my colleagues told me that they were CIA. During the visit, they said that our field of work was expanding, and we were told that our new target was the United States of America, especially the upcoming elections. Long story short, we were asked to do everything to prevent Donald Trump from winning the elections.
So basically, this topic was added to our main lines of work. Since then, each of us had to post at least three to five posts daily, posing as Americans and Europeans, criticizing Donald Trump and praising Biden. The Americans even organized a few lectures for us to get a better understanding of American politics, the American mindset, and the main social and political issues. Then we were given the topics for the job, which sounded like this: Unlike Trump, Biden is a smart and experienced politician. Unlike Trump, Biden will never betray NATO partners. Trump will alienate our partners. Also, Biden will not abandon Ukraine, and Biden will protect democracy while Trump is Putin's puppet. I honestly tried to convince myself it's okay, since Biden is the clear option for Ukraine. But, you know, it was too much for me. Some of my colleagues felt really nervous too. It's one thing to work in the best interests of my country, but interfering in US politics is a whole other thing.

Video Saved From X

reSee.it Video Transcript AI Summary
We learn about DC leaks and the connection to APT 28, a Russian military intelligence hacking group. The morning the Hunter Biden story broke in the New York Post, it was confusing. We didn't know what to believe, but it seemed like a possible hacking campaign by APT 28. Despite that, I didn't feel comfortable removing the content from Twitter.

Video Saved From X

reSee.it Video Transcript AI Summary
A report in Israel claims a network of 100 fake Twitter accounts linked to Likud party and Netanyahu is trying to influence the upcoming elections. The network supports Netanyahu and criticizes his opponents, with activity increasing during key moments. While no direct link to Netanyahu has been found, evidence suggests otherwise. Likud dismisses the report as a left-leaning attack, while Blue and White party leader calls for an independent investigation. Over 154 accounts use false names, with over 400 suspected of being fake, coordinating posts and increasing activity over time. The research group denies bias, focusing on uncovering online manipulation.

Video Saved From X

reSee.it Video Transcript AI Summary
Team Jorge's successful hacking operations have been uncovered in various countries worldwide. These covert activities are carried out either by state actors or by mercenaries hired by private clients or governments. Hanan, the alleged hacker, exploits vulnerabilities in the global signaling system, SS7, although the exact method remains unknown. Leaked emails reveal that Hanan charges fees ranging from $400,000 to $600,000 for his services.

Video Saved From X

reSee.it Video Transcript AI Summary
Silicon Valley is trying to destroy evidence of its misdeeds related to election fraud. Tech billionaires are claiming there was no fraud, despite allegedly perpetuating it. Harmeet Dhillon suggests big tech companies like Google have been using algorithms for years to treat different content differently, citing leaked evidence from YouTube programmers. For example, anti-Semitic videos are treated differently than anti-Muslim videos. These companies have allegedly allowed false information regarding the 2016 election to flourish for years.

Video Saved From X

reSee.it Video Transcript AI Summary
The Russians have weaponized social media by manipulating public opinion through biased or fake stories. However, domestic disinformation is also a significant issue. In 2016, the Russian efforts may not have been very sophisticated, but they learned that they don't need to create the content themselves as there are people in the US who will do it. There were two types of disinformation attacks in 2016: the Internet Research Agency created personas to take over existing US groups and push radical positions. However, the majority of these problems are domestic, related to how we interact online, political speech, amplification, and how politicians use platforms. The domestic threat of disinformation is the most significant immediate threat to the 2020 election.

Video Saved From X

reSee.it Video Transcript AI Summary
This week, an initiative was launched with companies and nonprofits to improve research and understanding of how automated processes curate online experiences. This is important for understanding online mis- and disinformation, a challenge that leaders must address. While it's easy to dismiss disinformation, ignoring it poses a threat to valued norms. How can wars end if people believe their reasons are legal and noble? How can climate change be tackled if people don't believe it exists? How are human rights upheld when people are subject to hateful rhetoric? The goals of those who perpetuate disinformation are to cause chaos, reduce the ability to defend, disband communities, and collapse countries' collective strength. There is an opportunity to ensure these weapons of war do not become an established part of warfare. Despite facing many battles, there is cause for optimism because for every new weapon, there is a new tool to overcome it. We have the means; we just need the collective will.

Video Saved From X

reSee.it Video Transcript AI Summary
Tal Hanan, the leader of Team Jorge, is exposed as a specialist in hacking and disinformation who has operated covertly for two decades. Reporters posing as clients secretly recorded Hanan demonstrating services to delay an African election. Team Jorge claims to have completed 33 presidential campaigns, with 27 successes. Their software, AIMS, weaponizes social media using an army of over 30,000 sophisticated bots with multilayered accounts across multiple platforms. These bots can be customized with names, images, and personal details to appear authentic. Team Jorge claims the ability to hack Telegram and Gmail accounts, potentially exploiting vulnerabilities in the SS7 global signaling system. Leaked emails reveal fees between $400,000 and $600,000 and confirm Team Jorge's covert involvement in the 2015 Nigerian presidential election.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the activities of a group called CTIL (Cyber Threat Intelligence League) and their efforts to combat misinformation and disinformation on social media platforms. The whistleblower reveals that CTIL used Python code to scrape records from Twitter and tracked incidents of disinformation. They also worked on counter messaging, encouraging mask-wearing, and building an amplification network. The speaker mentions that some CTIL members took extreme measures to conceal their identities. The group's activities received little attention until now, but Wired published an article about CTIL, highlighting their work against misinformation. The speaker concludes by mentioning the upcoming testimony to Congress and refers to Matt Taibbi's perspective on the matter.
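The summary above says only that CTIL used Python to scrape Twitter records and track disinformation incidents; the source gives no implementation details. Purely as an illustration of incident tracking over already-collected post records (the function name, field names, and sample data are all invented for this sketch, not taken from CTIL):

```python
# Hypothetical sketch: flag already-collected post records that match a
# watchlist of keywords. This illustrates "tracking incidents" generically;
# it is NOT CTIL's actual code, which the source does not describe.

def flag_incidents(records, keywords):
    """Return minimal incident entries for posts matching any tracked keyword."""
    flagged = []
    for rec in records:
        text = rec.get("text", "").lower()
        hits = [kw for kw in keywords if kw in text]  # case-insensitive match
        if hits:
            flagged.append({"id": rec["id"], "matched": hits})
    return flagged

# Invented sample records standing in for scraped posts.
posts = [
    {"id": 1, "text": "Masks don't work -- share before it's deleted!"},
    {"id": 2, "text": "Local weather update for tonight"},
]

print(flag_incidents(posts, ["masks", "deleted"]))
```

Real pipelines of this kind typically add deduplication, account metadata, and human review on top of such keyword filters, since keyword matching alone produces many false positives.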

Video Saved From X

reSee.it Video Transcript AI Summary
I am revealing the existence of a shadow government controlled by powerful individuals, including Bill Gates, manipulating world governments. They fund military interventions and control US government decisions. The shadow government decided Trump would not be president again. They influenced the 2020 election. I have data banks exposing their actions. Zuckerberg does not own Facebook; the US government does. I urge people to believe and act for the good of humanity.

Video Saved From X

reSee.it Video Transcript AI Summary
We were covering an article about 55,000 Democrat NGOs discovered to be contributing to campaigns, moving things around, and pushing propaganda. It was discovered through AI that, to figure out where the money is coming from, you have to go through layers and layers, and it all funnels down to one group or another. It's a giant propaganda machine, a giant regime-change machine.

Video Saved From X

reSee.it Video Transcript AI Summary
"preparations are underway for a coordinated cyber and communications assault, one that could cripple America's power grid, banking systems, and digital lifelines overnight." "The question isn't if it happens, it's who is orchestrating it?" "The Israeli controlled media point the finger at China, the digital fingerprints lead somewhere far closer to home, Israel's global cyber network." "Before we dive in, did you know that Israel has quietly bought up most of the world's popular VPNs?" "And they've recruited spooks to run these companies." "Last week, the secret surface quietly uncovered a plot in New York to unleash a major cyberattack and cripples America's communications and power grid." "One Tel Aviv based company with a long history of distributing malware has quietly bought up nearly all the biggest VPNs on the planet."

TED

How we can protect truth in the age of misinformation | Sinan Aral
Guests: Sinan Aral
reSee.it Podcast Summary
On April 23, 2013, a false tweet from the Associated Press about explosions at the White House caused a $140 billion stock market drop. The Internet Research Agency's misinformation during the 2016 election reached 126 million people on Facebook. A study found false news spreads further and faster than true news, driven by novelty and emotional responses. Future challenges include synthetic media from generative adversarial networks. Solutions involve labeling information, economic incentives, regulation, transparency, and ethical considerations in technology. Vigilance is essential to defend truth against misinformation.

The Joe Rogan Experience

Joe Rogan Experience #1263 - Renée DiResta
Guests: Renée DiResta
reSee.it Podcast Summary
Renée DiResta began her research into online misinformation in 2015, initially focusing on anti-vaccine activity in California. She observed how small groups could amplify messages on social media, both through legitimate means and coordinated efforts to manipulate algorithms. This led her to explore how terrorist organizations like ISIS used similar tactics to spread propaganda. By late 2015, as discussions about ISIS intensified, attention shifted to Russian interference in social media, particularly following Adrian Chen's exposé on the Internet Research Agency (IRA). DiResta explained that the consolidation of social media platforms made it easier for propagandists to target specific audiences. The IRA created fake accounts that mimicked real people, often referred to as "sock puppets," to influence American discourse. By 2016, during the presidential campaign, these accounts were actively engaging in divisive conversations, often amplifying existing tensions. The IRA's strategy involved building communities around various identities, such as LGBT or African American groups, to foster in-group dynamics and subtly influence opinions. They created pages that appeared authentic and relatable, often using humor and cultural references to engage users. This long-term strategy aimed to normalize certain narratives and create divisions within American society. DiResta noted that the IRA's operations were sophisticated, employing tactics akin to those of a marketing agency, but with a focus on manipulation and disinformation. They targeted specific demographics and tailored their content to resonate with those audiences, often using memes and culturally relevant language. The conversation also touched on the challenges of moderating content on social media platforms. DiResta highlighted the difficulty of balancing free speech with the need to combat harassment and misinformation. 
She emphasized that the algorithms used by these platforms often exacerbate polarization, as they prioritize sensational content that generates engagement. As technology evolves, including advancements in deepfakes and AI-generated content, DiResta expressed concern about the potential for misinformation to escalate into real-world consequences. She pointed out that the ease of creating convincing fake identities and narratives could lead to significant societal disruptions. In conclusion, DiResta underscored the importance of understanding the mechanisms behind online disinformation and the need for accountability from social media platforms. She advocated for a multi-stakeholder approach to address these challenges, recognizing that the landscape of online communication is rapidly changing and requires ongoing vigilance and adaptation.

Unlimited Hangout

The Pre-Planned Chaos of the 2020 Election with Charlie Robinson
Guests: Charlie Robinson
reSee.it Podcast Summary
Whitney Webb and Charlie Robinson discuss predictions of chaos around the 2020 U.S. presidential election and how intelligence-linked simulations anticipated turmoil long before the coronavirus crisis, with outcomes ranging from a constitutional crisis to martial law. They point to simulations produced by networks tied to former Bush or Obama officials, neocon think tanks like PNAC, and allied groups. They argue these drills are not mere “war games” but part of a toolkit that maps possible futures, and note a pattern of simulations preceding major events such as 9/11, the anthrax attacks, the London bombings, and the coronavirus crisis. Two organizations created around March are highlighted: the Transition Integrity Project and the National Task Force on Election Crises. The Transition Integrity Project’s cofounder Rosa Brooks is described as an Obama-era DOD and Hillary Clinton State Department adviser, previously special counsel to the president of George Soros’ Open Society Foundations, and affiliated with the New America think tank, funded by Eric Schmidt, the Gates Foundation, Pierre Omidyar, Jeff Skoll, Reid Hoffman, and Craig Newmark. The other cofounder, Nils Gilman, is vice president of programs for the Berggruen Institute, which envisions a transnational network addressing AI and gene editing. Membership overlaps exist across both groups, including Michael Chertoff, Max Boot, David Frum, Bill Kristol, John Podesta, Robert Gates, and Larry Wilkerson, with Wilkerson being a prominent public figure in both efforts. The groups’ membership is not fully public, but various reports note their overlap and the presence of PNAC-linked figures. The groups reportedly gamed four election scenarios: ambiguous results, a Biden victory, a Trump victory, and a narrow Biden win.
A particularly striking hypothetical under a clear Trump win describes the Biden campaign encouraging Cascadia—California, Oregon, and Washington—to secede unless Republicans agreed to reforms such as granting statehood to Washington, D.C., and Puerto Rico; dividing California into five states; mandating Supreme Court retirements at 70; and eliminating the Electoral College. The scenario then envisions Congress awarding the presidency to Biden, with Pence and Republicans resisting, leading to a constitutional crisis in which the military’s role remains unclear. The discussion emphasizes that the people behind these simulations—like PNAC alumni—“are not Nostradamus” but seek to shape outcomes by prefiguring them. The conversation also covers how some involved openly support Biden, and how the campaigns leverage narratives of democracy threats. Hillary Clinton’s recent remarks about not conceding are juxtaposed with the TIP projections. They discuss campaign energy differentials, the debate dynamics, and the perception that Biden’s team seeks stability and predictability, while Trump’s unpredictability complicates control. They examine cyber and foreign interference narratives. Cybereason, an Israeli-founded cybersecurity firm with Unit 8200 ties, has major investors such as Lockheed Martin and Microsoft-linked entities; its founder served in Israeli intelligence. Cybereason’s work, and broader CTI League efforts, are cited as manifesting the external dimension of election security narratives. The discussion critiques media and political elites who promote foreign-interference threats while overlapping with pro-Israel intelligence circles. They argue these dynamics intersect with broader agendas, including AI governance and the World Economic Forum’s Great Reset, suggesting a convergence of technocratic power, media narratives, and political operatives aimed at managing or engineering political outcomes. 
They close by signaling ongoing reporting on these themes, highlighting the need to recognize the pattern of simulations, prepositioning, and narratives intended to normalize drastic interventions around elections, including potential continuity-of-government scenarios.