TruthArchive.ai - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Recently, internal files from the Cyber Threat Intelligence League (CTIL) were released, revealing that US and UK military contractors were involved in censorship, psychological operations, and disinformation tactics directed against the American people. While some argue that social media platforms have the right to remove content that violates their terms of service, the government cannot encourage or promote actions that infringe on freedom of speech. The whistleblower behind the files claims that the leader of CTIL was present at the Obama White House in 2017, receiving instructions to counter disinformation and prevent a repeat of the events of 2016.

Video Saved From X

reSee.it Video Transcript AI Summary
The FBI alerted our team about the presence of Russian propaganda in the 2016 election. They informed us that there might be a release of similar content soon.

Video Saved From X

reSee.it Video Transcript AI Summary
We are partnering with Twitter to provide accurate vaccine information when users search hashtags like vaccination or anti-vaccine. Public health agencies' websites will appear in the search results. Similar collaborations have been done with other organizations. We have also discussed with Facebook about removing scientifically disproven or debunked information. Facebook is currently working with the US CDC and seeking input from experts to identify misinformation. If information is proven to be false, they have the opportunity to remove it. Additionally, we are collaborating with Google and other platforms.

Video Saved From X

reSee.it Video Transcript AI Summary
Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people. This global threat demands clear and coordinated global action. A policy brief on information integrity on digital platforms puts forward a framework for a concerted international response.

Video Saved From X

reSee.it Video Transcript AI Summary
We are enhancing disinformation research and tracking in the Surgeon General's office. Additionally, we are flagging problematic posts on Facebook for review.

Video Saved From X

reSee.it Video Transcript AI Summary
Facebook and other platforms should measure and share the impact of misinformation, along with the audience it reaches. They should work with the public to create strong enforcement strategies that apply across all their properties. Transparency about rules is important, and enforcement should be consistent, so that someone spreading misinformation isn't banned from one platform while still allowed on others.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker states that election interference and federal election crimes will be aggressively investigated, and that they will work with partners to quickly take appropriate action. They are also coordinating with private sector technology and social media companies. The goal is to ensure these platforms aren't used by foreign adversaries to spread disinformation and propaganda.

Video Saved From X

reSee.it Video Transcript AI Summary
We flag problematic posts on Facebook in the surgeon general's office.

Video Saved From X

reSee.it Video Transcript AI Summary
We need to collaborate with other countries to regulate misinformation online. An international body, similar to Interpol, could ensure accurate information on the internet and social media.

Video Saved From X

reSee.it Video Transcript AI Summary
We're advocating for talent to join the private sector. Transparency is crucial in combating harmful content and misinformation. Russia's involvement in election interference is unprecedented. Platforms are taking steps to combat misinformation and protect democracy. Stronger partnerships with government agencies are being formed. Coordination is key in decreasing fake news dissemination. 2018 is crucial for elections worldwide, and efforts are being made to safeguard their integrity.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the request for tech companies to combat misinformation and the actions the federal government is taking. They mention being in regular contact with social media platforms, increasing disinformation research, flagging problematic posts, and working with medical professionals to share accurate information. They also mention the creation of the COVID Community Corps and investing time in meeting with influencers. Proposed changes for social media platforms include measuring and sharing the impact of misinformation, creating a robust enforcement strategy, taking faster action against harmful posts, and promoting quality information sources in feed algorithms. The speaker emphasizes the importance of accurate information and the need for cooperation from social media platforms.

Video Saved From X

reSee.it Video Transcript AI Summary
Chris Wray informed Congress that he guarantees the election security of the United States. However, there are concerns about the FBI's involvement with social media giants like Twitter and Facebook. In the past, the FBI's election security task force would advise these companies on what content to restrict, which is not their role. The media should be outraged that a law enforcement agency is dictating their content. It seems that the focus is on preventing pro-Trump information from being shared. This raises questions about the integrity of the upcoming election, as these contracts are still in place.

Video Saved From X

reSee.it Video Transcript AI Summary
We are partnering with Twitter to provide accurate vaccine information when users search certain hashtags like vaccination or anti-vaccine. Public health agency websites will appear in the search results. Similar collaborations have been done with other organizations. We have also discussed with Facebook how to remove scientifically disproven or debunked information. Facebook is currently working with the WHO and the US CDC to identify misinformation. If experts confirm it as misinformation, Facebook can remove it. Additionally, we are collaborating with Google and other platforms.

Video Saved From X

reSee.it Video Transcript AI Summary
The panel discussion focuses on how major platforms like Google, Twitter, and Facebook are addressing false and misleading narratives surrounding COVID-19. The speakers discuss their policies and strategies for moderating and mitigating misinformation. They highlight the importance of providing authoritative information, removing harmful content, and addressing borderline content that could lead to vaccine hesitancy. The panelists also acknowledge the challenges of handling misinformation during a rapidly evolving crisis and emphasize the need for flexibility and adaptability in their approaches. They mention the use of AI systems and human review to sift through vast amounts of data and the importance of partnerships with health authorities and fact-checking organizations.

Video Saved From X

reSee.it Video Transcript AI Summary
We cannot completely eliminate interference in elections, but we can make it significantly harder. Our focus is on protecting election integrity and ensuring Facebook supports democracy. Although the problematic content we've identified is minimal, any interference is serious. We are collaborating with the US government on investigations into Russian interference, having recently uncovered some activity and shared our findings with Congress. While we can't disclose everything publicly due to ongoing investigations, we support Congress in informing the public and expect the government to release its findings once complete. Additionally, we will continue our investigation into Facebook's role in the election, looking into foreign actors and campaigns to better understand their use of our platform.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker discusses the intelligence community's efforts to share information with social media platforms to address inauthentic content. They clarify that the Office of the Director of National Intelligence would participate only in approved election security briefings with private companies such as Twitter, YouTube, and Microsoft, as well as state election officials. These briefings focus on discussing threats and have nothing to do with content moderation or with characterizing the Biden laptop as Russian disinformation. The speaker mentions that there were weekly meetings between the FBI, DHS, and Twitter, but only one reference to their office, and they hope that reference was part of the approved process for election security briefings.

Video Saved From X

reSee.it Video Transcript AI Summary
Americans spreading misinformation, whether intentionally or unknowingly, can pose a significant threat to elections. This misinformation can be shared on social media without us realizing it's fake. While foreign interference is a concern, we value and encourage free speech in our country. However, we also need to ensure that if we or the involved firms are aware of foreign-sponsored and covertly sponsored information, we take steps to manage it effectively.

Video Saved From X

reSee.it Video Transcript AI Summary
The panel discussion focuses on how major platforms like Google, Twitter, and Facebook are addressing false and misleading narratives surrounding COVID-19. The panelists discuss their strategies for content moderation, including removing harmful misinformation, reducing the distribution of certain content, and providing authoritative information to users. They also address the challenges of handling misinformation during a pandemic when information is constantly evolving. The panelists emphasize the importance of partnerships with health authorities and fact-checking organizations. They highlight the use of AI and human review in content moderation and the need for flexibility and adaptability in policies and systems. The panel concludes by discussing the balance between free expression and safety on social media platforms.

Video Saved From X

reSee.it Video Transcript AI Summary
Thank you to everyone for their work in the Science and Technology Innovation Program. Nina Jankowicz, the Wilson Center's disinformation fellow, discussed efforts in Brazil to combat disinformation. Social media platforms are taking action to remove false information, aligning with international standards. Setting common standards is crucial for effective global regulation to combat disinformation.

Video Saved From X

reSee.it Video Transcript AI Summary
We are focused on attracting top talent to the private sector. Transparency is key in combating harmful content and coronavirus misinformation. Russia's involvement in US elections is unprecedented and concerning. Social media platforms are working to combat fake news and misinformation. Strengthened partnerships with government agencies are crucial in safeguarding democracy during important election cycles worldwide.

Video Saved From X

reSee.it Video Transcript AI Summary
Multiple agencies within the intelligence community collaborate with social media platforms to address and remove inauthentic content. These agencies work tirelessly to collect intelligence and provide real-time information to the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). The FBI and DHS take appropriate action by working with social media companies to remove such content.

Video Saved From X

reSee.it Video Transcript AI Summary
Addressing disinformation requires a whole-of-society response involving governments, social media platforms, and individuals. Cooperation between tech platforms and government is needed to combat the issue, and collaboration across sectors is crucial to a solution.

Video Saved From X

reSee.it Video Transcript AI Summary
The administration is urging companies to be more aggressive in policing misinformation. They are in regular contact with social media platforms through senior staff and the COVID-19 team. The Surgeon General's office has increased disinformation research and tracking. The federal government is taking actions to address this issue.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 and Speaker 1 discuss hate speech and content moderation on Twitter, as well as COVID misinformation policies and broader editorial questions.
- Speaker 0 says they have spoken with people who were sacked and with people recently involved in moderation, who claim the company does not have enough staff to police hate speech.
- Speaker 1 asks whether there is a rise in hate speech on Twitter and prompts for personal experience.
- Speaker 0 says that, personally, they see more hateful content in their feed, describing it as content that solicits a reaction and may be slightly racist or slightly sexist, though they have not been using the For You feed.
- Speaker 1 asks for a concrete example of hateful content. Speaker 0 says they cannot name a single example, explaining that they have not used the For You feed for the last three or four weeks, though they have used Twitter in the six months since the takeover. Pressed again, Speaker 0 repeats that they cannot identify a specific example but says many organizations report such content is on the rise.
- Speaker 1 points out the inconsistency: Speaker 0 claims there is more hateful content yet cannot name a single tweet as an example. Speaker 0 responds that they have not looked at that feed recently and cannot provide an exact example.
- The discussion moves to COVID misinformation: Speaker 0 asks about changes to Twitter's COVID misinformation rules and labels, noting there used to be a policy that has since disappeared, and clarifies that the BBC does not set the rules on Twitter.
- Speaker 1 questions why the labels disappeared and whether COVID is no longer an issue, then asks whether the BBC bears responsibility for misinformation regarding masking and vaccination side effects, for not reporting on them, and whether the British government pressured the BBC to change its editorial policy. Speaker 0 states that the interview is not about the BBC, emphasizes that they do not represent the BBC's editorial policy, and tries to shift to another topic.
- Speaker 1 continues pushing, and Speaker 0 indicates the interview is moving on. Speaker 1 remarks that Speaker 0 wasn't expecting that, and Speaker 0 suggests discussing something else.