TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 describes Tim Ballard as having worked with Glenn Beck to build Operation Underground Railroad, portraying Beck as Ballard's close ally whenever Ballard needed to break a story on child trafficking. When Ballard considered running for Senate, and would likely have won on momentum from the Sound of Freedom release, attacks began, and Glenn Beck reportedly "threw him under the bus." Speaker 0 asserts that Beck pledged allegiance to Israel, is "bought and paid for," and is "Israel's bitch," claiming Ballard watched a video and realized this. Speaker 1 adds a claim about the Sound of Freedom narrative: the child trafficking ring Ballard busted in South America, depicted in the movie, was allegedly an Israeli-run sex trafficking ring. The head of that ring allegedly escaped to Portugal, where a judge let him go, and nobody knows where he ended up. The speakers state that this is the real story of Sound of Freedom and that "it was an Israeli run sex trafficking ring," noting that this is not told to the audience and urging others to research it.

Speaker 1 then transitions to commentary on Twitter, stating that Twitter is not a free speech platform and is not an open information highway; it is a military application and a propaganda operation, highly botted, highly artificial, highly synthetic, and manipulated. They acknowledge using it daily but emphasize that not everything is as it seems on the platform. They caution that prominent accounts cannot be taken at face value because campaigns are run, the algorithm is manipulated, and there are bots and inauthentic accounts. The speakers urge awareness of the battlefield on which Twitter is engaged and advise developing a wary eye toward content, encouraging audiences to examine profiles, retweets, boosts, follows, and networks to understand who is using the same messaging and why.

Video Saved From X

reSee.it Video Transcript AI Summary
We focus on collecting data through surveillance and monitoring of social media platforms to counter negativity and hate speech, and we reach out to people when we see hate speech online. Our media analysis unit has increased monitoring to catch incitement to violence and direct threats. Our goal is to ensure both the safety and the sense of safety of New Yorkers.

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter firings have left us without a way to flag abusive or violent content. Disinformation can now be promoted by anyone with a blue tick. However, we are working with platforms to promote reliable information on COVID and climate change. We have a strong group of messengers who share UN content with their followers and educate users on combating disinformation. Our new slogan is "pause, take care before you share." We are establishing a central capacity at the UN to monitor and react to mis- and disinformation and hate speech. We will also develop a UN code of conduct for digital platforms to set global standards for information integrity and create a more humane Internet.

Video Saved From X

reSee.it Video Transcript AI Summary
We are enhancing disinformation research and tracking in the Surgeon General's office. Additionally, we are flagging problematic posts on Facebook for review.

Video Saved From X

reSee.it Video Transcript AI Summary
The foundation of democracy is vital, especially regarding freedom of speech. A recent policy titled "freedom of speech, not freedom of reach" emphasizes that while free speech is essential, platforms like Twitter can choose whom to amplify. It's important to limit the reach of extremist views without censoring speech entirely. Social media companies should follow the same business rules as other publishers. Providing a platform for hate groups and harmful individuals is unacceptable. The ADL has been actively monitoring and collaborating with major tech companies since 2017 to address these issues, ensuring that platforms are held accountable for the content they promote.

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter's push for truthfulness is evident through its Community Notes feature. The feature aims to ensure accuracy and prevent bias. It encourages users to think carefully about their statements, since their posts can be community-noted on Twitter. The impact of Community Notes is powerful, as it promotes honesty and discourages deception.

Video Saved From X

reSee.it Video Transcript AI Summary
The ADL works with various companies in Silicon Valley, including Apple, Zoom, Amazon, Microsoft, Meta, and Twitter, to address the issue of hate speech on their platforms. They have expressed concern about Twitter allowing toxic content to persist, which has led to real-world violence in places like Pittsburgh, Poway, El Paso, and Washington, D.C. The ADL urges companies to use their innovation to combat hate speech. They have observed that anti-Semitic speech remains on the platform for longer periods, and toxic content is not being removed as quickly as before. The ADL emphasizes the importance of all users, including journalists and watchdog organizations, working together to make Twitter a safe space, as freedom of speech should not be used to slander or incite violence.

Video Saved From X

reSee.it Video Transcript AI Summary
A new technology has been developed to address extremist podcasts. The software scans an entire episode for flagged words and extracts the segments where extremist topics are discussed. This is useful because most of what these hosts talk about is unrelated to their extremist views, such as video games; the software eliminates the time-consuming task of listening to the whole episode, making it easier to identify and address extremist content.
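
A minimal sketch of how such keyword-based extraction might work, assuming the transcript is available as timestamped segments; the watchlist, function name, and windowing rule below are hypothetical illustrations, not the actual software described:

```python
# Sketch of keyword-based segment extraction from a podcast transcript.
# The watchlist and the context-window rule are illustrative placeholders.
import re

FLAGGED_TERMS = {"term_a", "term_b"}  # hypothetical watchlist

def extract_flagged_segments(transcript, window=2):
    """Return transcript segments containing a flagged term, padded with
    `window` neighboring segments so excerpts keep their context.

    `transcript` is a list of (timestamp_seconds, text) tuples.
    """
    keep = set()
    for i, (_, text) in enumerate(transcript):
        words = set(re.findall(r"[a-z']+", text.lower()))
        if words & FLAGGED_TERMS:
            keep.update(range(max(0, i - window),
                              min(len(transcript), i + window + 1)))
    return [transcript[i] for i in sorted(keep)]

# Usage: most segments (e.g. talk about video games) are skipped;
# only flagged stretches come back for human review.
segments = extract_flagged_segments([(0.0, "talking about video games"),
                                     (30.0, "now term_a comes up")])
```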

Video Saved From X

reSee.it Video Transcript AI Summary
Many people overlook their options for dealing with misinformation on social media. Early detection is key to tracking and countering harmful narratives, and legal action can be taken against profit-driven disinformation networks. Fact-checking alone may not change beliefs, so building counter-narratives is crucial. Our organization helps detect, assess, and mitigate the impact of misinformation to prevent future issues. The recent events at the US Capitol highlight the real-world consequences of online disinformation.

Video Saved From X

reSee.it Video Transcript AI Summary
Over the past decade, anti-Semitism has shifted online, making it easier to generate and spread hateful content. To address this, the Ministry of Diaspora Affairs developed a system that monitors anti-Semitism on the entire internet, focusing on Facebook and Twitter. Using artificial intelligence, the system identifies around 10,000 anti-Semitic posts daily out of 200,000 suspect posts. By making this information public, it aims to shame individuals and deter anti-Semitism. Additionally, a command center in Tel Aviv analyzes the data and takes action, such as notifying law enforcement or city officials about specific instances. The speaker urges Facebook and Twitter to take responsibility and not allow anti-Semitism under the guise of freedom of speech.

Video Saved From X

reSee.it Video Transcript AI Summary
- "ADL and the University of California at Berkeley's D Lab have been working to develop a new approach to tackle online hate using the latest methods." - "The goal of the online hate index is to help tech platforms better understand the growing amount of hate on social media and to use that information to address the problem." - "By combining artificial intelligence and machine learning with social science, the online hate index will ultimately uncover and identify trends and patterns in hate speech across different platforms." - "We've just completed our first phase of research and we found that the machine learning model identified hate speech accurately between seventy eight and eighty five percent of the time." - "We'll examine content on multiple social media sites and we'll identify strategies to deploy the model more broadly."

Video Saved From X

reSee.it Video Transcript AI Summary
One strategy is shadowbanning, where a user is effectively banned without their knowledge. They can still post and interact, but no one else sees their content, leading them to believe their posts are simply getting no engagement. While this gives the platform control, it's risky because users may eventually discover the ban, resulting in negative backlash and ethical concerns. People have historically reacted strongly against shadowbanning, viewing it as a terrible practice. It's a controversial tactic that some platforms, like Reddit, have used, though it's unclear if Twitter still employs it.
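
A toy illustration of the mechanism as described, assuming a simple in-memory feed; all names and structure here are hypothetical:

```python
# Toy model of shadowbanning: writes succeed normally, but reads by
# everyone except the author silently drop the banned account's posts.
from dataclasses import dataclass, field

@dataclass
class Feed:
    posts: list = field(default_factory=list)
    shadowbanned: set = field(default_factory=set)

    def publish(self, author, text):
        # The author gets no error; the post is stored as usual.
        self.posts.append((author, text))

    def timeline(self, viewer):
        # The author still sees their own posts, so nothing looks wrong;
        # everyone else gets a view with those posts filtered out.
        return [(a, t) for a, t in self.posts
                if a not in self.shadowbanned or a == viewer]

feed = Feed(shadowbanned={"spammer"})
feed.publish("spammer", "hello?")
print(feed.timeline("spammer"))  # [('spammer', 'hello?')] -- looks normal
print(feed.timeline("reader"))   # [] -- no engagement ever arrives
```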

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses a use case involving a government agency and a network analysis tool. They explain how the tool can identify coordinated attacks and misinformation by analyzing events, such as the sharing of an image on social media. The tool can build a network of accounts involved in spreading the information and identify patterns. In this case, the tool discovered a network spreading Russian propaganda and misinformation about the Nord Stream pipeline. The speaker demonstrates how the tool can counteract the narrative by generating tweets in Arabic that provide a different perspective. They also mention the potential for the tool to create knowledge across different networks and incorporate multimodal content like images and videos.
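
The core idea (linking accounts that shared the same item close together in time, then inspecting the resulting clusters) can be sketched in a few lines; this assumes networkx and an invented event format, and is not the vendor's actual tool:

```python
# Sketch of coordination detection: connect accounts that shared the
# same image/URL within a short time window, then look at dense clusters.
from collections import defaultdict
from itertools import combinations
import networkx as nx

def coordination_graph(events, window_seconds=300):
    """events: iterable of (account, item_id, timestamp_seconds)."""
    by_item = defaultdict(list)
    for account, item, ts in events:
        by_item[item].append((account, ts))

    g = nx.Graph()
    for shares in by_item.values():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                # Repeated tight co-sharing accumulates edge weight.
                w = g.get_edge_data(a1, a2, default={"weight": 0})["weight"]
                g.add_edge(a1, a2, weight=w + 1)
    return g

# Large, densely connected components are candidates for coordinated
# amplification and can then be reviewed account by account:
# sorted(nx.connected_components(g), key=len, reverse=True)
```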

Video Saved From X

reSee.it Video Transcript AI Summary
We have developed brand safety and content moderation tools after acquisitions. Our new policy, "freedom of speech, not reach," addresses hate speech. Illegal content is met with zero tolerance and removed. However, if something lawful but awful is posted, it gets labeled, de-amplified, and demonetized. This ensures brand safety by avoiding association with such content. It's worth noting that when a post is labeled and cannot be shared, users themselves take it down 30% of the time.
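
A minimal sketch of the three tiers that policy implies; the category and action names below are paraphrased from the summary, not X's actual enforcement code:

```python
# Sketch of "freedom of speech, not reach" as a decision table:
# illegal content is removed outright, "lawful but awful" content
# stays up but loses distribution and monetization.
from enum import Enum, auto

class Verdict(Enum):
    OK = auto()
    LAWFUL_BUT_AWFUL = auto()
    ILLEGAL = auto()

def enforcement_actions(verdict):
    if verdict is Verdict.ILLEGAL:
        return ["remove"]  # zero tolerance: taken down outright
    if verdict is Verdict.LAWFUL_BUT_AWFUL:
        # Stays up, but reach and revenue are cut.
        return ["label", "de-amplify", "demonetize"]
    return []  # normal distribution

print(enforcement_actions(Verdict.LAWFUL_BUT_AWFUL))
```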

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter's new CEO, Linda Yaccarino, has a history of censorship and promoting mask-wearing. Elon Musk hired her for her advertising background to help Twitter become profitable through ads and subscriptions. Twitter is struggling financially and needs revenue sources like video content and ads. However, Twitter is also increasing censorship, labeling "violative" content and reducing its reach by 81%. Many accounts are being banned or restricted, and Twitter is partnering with Sprinklr to measure and reduce hate speech. Elon Musk has expressed concern about the lack of absolute free speech on platforms like Rumble. Overall, Twitter's focus on ad-friendliness and censorship is not aligned with being a free speech platform.

Video Saved From X

reSee.it Video Transcript AI Summary
I used to be on Twitter, but it has become toxic and not worth my time. I'm trying to find an alternative to it. Social media needs a code of conduct to address issues like spreading false news and racism. The power of social media platforms should be reflected upon by society. The policy of the owner of X is also problematic. This is a problem that future society needs to address, focusing on ethics in social media.

Video Saved From X

reSee.it Video Transcript AI Summary
Experiment to see what people want! I believe it's less about free speech and more about choosing how algorithms program us, because they definitely are. It's hard to predict what algorithms will do, and that's risky. But imagine an algorithm store instead of an app store, where I choose algorithms to filter my content. This gives me more control and creates a healthier relationship with technology. This applies beyond Twitter, to YouTube and financial tech too. Algorithms know our preferences better than we do, and this will only increase. We need to increase individual agency by choosing different algorithms, turning them off, or even creating our own.
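
A minimal sketch of what an "algorithm store" interface could look like: ranking functions as interchangeable plug-ins the user selects or writes. The registry, the rankers, and the post shape here are all hypothetical:

```python
# Sketch of an "algorithm store": feed-ranking algorithms as plug-ins
# the user picks, instead of one opaque platform-chosen feed.
from typing import Callable, Dict, List

Post = dict  # e.g. {"text": str, "age_hours": float, "likes": int}
Ranker = Callable[[List[Post]], List[Post]]

ALGORITHM_STORE: Dict[str, Ranker] = {
    "chronological": lambda ps: sorted(ps, key=lambda p: p["age_hours"]),
    "most_liked":    lambda ps: sorted(ps, key=lambda p: -p["likes"]),
    "calm":          lambda ps: [p for p in ps if "!" not in p["text"]],
}

def build_feed(posts: List[Post], choice: str) -> List[Post]:
    # The user, not the platform, decides which ranker runs.
    return ALGORITHM_STORE[choice](posts)

posts = [{"text": "hello", "age_hours": 2.0, "likes": 5},
         {"text": "BREAKING!!!", "age_hours": 1.0, "likes": 90}]
print(build_feed(posts, "calm"))  # the "calm" ranker drops the shouty post
```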

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 and Speaker 1 discuss hate speech and content moderation on Twitter, as well as COVID misinformation policies and broader editorial questions.
- Speaker 0 says they have spoken with people who were sacked and with people recently involved in moderation, and claims the company no longer has enough staff to police hate speech.
- Speaker 1 asks whether there has been a rise in hate speech on Twitter and presses for personal experience.
- Speaker 0 says that, personally, they see more hateful content in their feed, though they do not use the For You feed for the rest of Twitter. They describe the content as material that solicits a reaction and may be slightly racist or slightly sexist.
- Speaker 1 asks for a concrete example of hateful content. Speaker 0 says they cannot name a single example, explaining they have not used the For You feed for the last three or four weeks, though they have used Twitter for the six months since the takeover. Pressed again, Speaker 0 says they cannot identify a specific example but notes that many organizations say such content is on the rise. Speaker 1 pushes once more for a single example, and Speaker 0 repeats that they cannot provide one.
- Speaker 1 points out the inconsistency: Speaker 0 claimed there is more hateful content but cannot name a single tweet as an example. Speaker 0 responds that they have not looked at that feed recently and, although they saw such content in the last few weeks, cannot give an exact example.
- The discussion moves to COVID misinformation. Speaker 0 asks about changes to the labels for COVID misinformation, noting there used to be a policy that has since disappeared, while clarifying that the BBC does not set the rules on Twitter.
- Speaker 1 questions why the labels disappeared and whether COVID is no longer an issue, then asks whether the BBC bears responsibility for misinformation about masking and vaccination side effects and for not reporting on them, and whether the BBC was pressured by the British government to change editorial policy. Speaker 0 states that the interview is not about the BBC, emphasizes that they are not a representative of the BBC's editorial policy, and tries to shift to another topic.
- Speaker 1 continues pushing; Speaker 0 indicates the interview is moving on. Speaker 1 remarks that Speaker 0 wasn't expecting that, and Speaker 0 suggests discussing something else.

TED

How Twitter needs to change | Jack Dorsey
Guests: Jack Dorsey, Whitney Pennington Rodgers
reSee.it Podcast Summary
Jack Dorsey expresses concern about the health of conversations on Twitter, highlighting issues like abuse, harassment, and misinformation that have emerged over the years. He emphasizes the need for a systemic approach to address these problems, including a rigorous appeals process for errors. Dorsey acknowledges the disproportionate harassment faced by women, particularly women of color, and outlines efforts to use machine learning to proactively identify abusive content, where the share caught proactively rather than via user reports has risen from 0% to 38%. He discusses the importance of diversity within the company to better understand and serve all communities. Dorsey proposes shifting the platform's focus from follower counts to interest-based engagement, aiming to foster healthier conversations. He also mentions the need to combat foreign meddling in elections and outlines four indicators of conversational health. Dorsey stresses the importance of transparency and prioritizing meaningful engagement over mere user metrics, asserting that Twitter's role in public conversation is critical for addressing global issues.

The Joe Rogan Experience

Joe Rogan Experience #1258 - Jack Dorsey, Vijaya Gadde & Tim Pool
Guests: Jack Dorsey, Tim Pool, Vijaya Gadde
reSee.it Podcast Summary
Joe Rogan hosts a discussion with Tim Pool, Vijaya Gadde, and Jack Dorsey, focusing on Twitter's policies, censorship, and the challenges of moderating content on a global platform. They address the complexities of enforcing rules against hate speech and harassment while balancing free speech rights. Rogan highlights a recent incident involving Dr. Sean Baker, whose account was locked due to a profile image deemed graphic, raising questions about the role of algorithms in content moderation. Gadde explains that reports are typically reviewed by humans after being flagged, but acknowledges the potential for mass reporting to influence moderation decisions.

The conversation shifts to the implications of misinformation and the responsibility of platforms to manage harmful content, particularly regarding public health discussions. Pool raises concerns about potential bias in moderation practices, suggesting that certain ideologies may be disproportionately targeted. They discuss the challenges of defining and policing hate speech, with Gadde emphasizing that Twitter's policies aim to protect marginalized groups. The group debates the effectiveness of these policies and the potential for creating echo chambers that stifle diverse viewpoints.

Rogan and Pool express skepticism about the long-term impact of current moderation practices, suggesting that banning users may drive them to darker corners of the internet where extremist views can flourish. They advocate for a more transparent approach to moderation, including the possibility of allowing users to appeal bans and providing clearer guidelines on acceptable behavior. The discussion touches on the influence of external pressures, such as advertisers and activist organizations, on content moderation decisions. Dorsey acknowledges the need for Twitter to evolve its policies and improve communication with users about the rationale behind moderation actions. As the conversation concludes, they explore the idea of a path to redemption for banned users and the potential for implementing a jury system for content moderation decisions. The group emphasizes the importance of fostering healthy discourse and the challenges of navigating the rapidly changing landscape of online communication.

The Joe Rogan Experience

Joe Rogan Experience #1236 - Jack Dorsey
Guests: Jack Dorsey
reSee.it Podcast Summary
Joe Rogan and Jack Dorsey discuss the origins and evolution of Twitter, highlighting its unpredictable impact on communication and society. Dorsey explains that Twitter began as a project for personal use, inspired by a desire for connection and collaboration. The platform's unique features, such as the hashtag and the @ symbol, emerged organically from user behavior rather than being pre-designed by the company.

Dorsey reflects on the transformative nature of Twitter, emphasizing its role in facilitating public discourse and global conversations. He acknowledges the challenges that arise from its open nature, including harassment and the spread of misinformation. The conversation touches on the responsibility of Twitter to manage these issues while maintaining free speech. Dorsey notes that the platform has evolved to address concerns about user conduct and the amplification of harmful content, often relying on automated systems to manage interactions. They discuss the complexities of moderating content, especially when it comes to high-profile figures like politicians, and the balance between allowing free expression and preventing harm. Dorsey emphasizes the importance of understanding user behavior and the need for Twitter to adapt to foster healthier conversations.

The discussion also covers the potential of emerging technologies, including blockchain and cryptocurrency, and their implications for the future of finance and communication. Dorsey expresses a belief in the necessity of a global currency for the Internet and the importance of education around these technologies. Throughout the conversation, Dorsey reflects on the ethical considerations of running a tech company and the importance of transparency and accountability. He acknowledges the need for ongoing dialogue about the role of social media in shaping public discourse and the responsibility that comes with it. The conversation concludes with a recognition of the unique moment in history that both Dorsey and Rogan find themselves in, as technology continues to rapidly evolve and influence society.