reSee.it - Related Video Feed

Video Saved From X

reSee.it Video Transcript AI Summary
Free speech should exist, but boundaries are needed when speech incites violence or discourages vaccination. The question is where the US should draw those lines and what rules should be in place. Given the billions of online activities, AI could potentially encode and enforce these rules; a delayed response to harmful content means the harm is already done.

Video Saved From X

reSee.it Video Transcript AI Summary
We're returning to our roots of free expression on Facebook and Instagram. While we've implemented complex content moderation systems, they've led to too many mistakes and too much censorship. To address this, we will phase out fact checkers in favor of a community notes system, simplify content policies, and focus enforcement on serious violations. We're also reintroducing civic content based on user feedback and relocating our content moderation teams to Texas to reduce perceived bias. Additionally, we will collaborate with the U.S. government to combat global censorship trends. Our goal is to prioritize reducing mistakes and restoring free expression while still addressing illegal content. This is a complex process, but we're committed to giving people a voice once again. More updates will follow.

Video Saved From X

reSee.it Video Transcript AI Summary
We invest heavily in fighting misinformation by enforcing policies, promoting authoritative sources, avoiding borderline content, and not monetizing misleading information like climate change denial. We remove content violating policies, elevate trusted sources, and avoid recommending low-quality content. Our approach is similar to Google's search results, prioritizing reputable sources for sensitive topics like health and news.

Video Saved From X

reSee.it Video Transcript AI Summary
The laws were changed after wide consultation to balance free speech with protection from serious harm. The laws address deliberate misinformation and disinformation, and are not intended to police opinions. A high bar of serious harm must be met. ACMA, not the government, will decide whether to take action.

Video Saved From X

reSee.it Video Transcript AI Summary
Speaker 0 highlights their platform's commitment to reducing hateful content and promoting healthy behavior online, claiming that 99.9% of posted impressions are healthy, though the definition of "healthy" is not clarified. Speaker 1 questions this definition, citing examples like porn and conspiracy theories. Speaker 0 acknowledges the challenge of distinguishing between lawful-but-awful content and the rest, and emphasizes that specific policies are in place. They mention Kanye West's potential return to the platform and assure that he will have to adhere to those policies. Speaker 0 believes that fostering healthy debate and discourse, even with those we disagree with, is essential for free expression to thrive.

Video Saved From X

reSee.it Video Transcript AI Summary
The speaker believes in adhering to the law and being transparent about the content shown on the platform. They argue against going beyond the law, saying that doing so would amount to censorship. The discussion revolves around hate speech on the platform and the responsibility to moderate it. The speaker emphasizes following the law and avoiding censorship, despite the concerns raised about hateful content. Ultimately, they stress the importance of upholding freedom of speech within legal boundaries.

Video Saved From X

reSee.it Video Transcript AI Summary
We support free speech, but there are limits, especially when it incites violence or discourages vaccination. It's important to define these boundaries. If we establish rules, how can we enforce them effectively, perhaps using AI? With billions of activities occurring, identifying harmful content after the fact can lead to significant consequences.

Video Saved From X

reSee.it Video Transcript AI Summary
Meta’s efforts to engage with the Jewish diaspora and address antisemitism on its platforms are highlighted through a newly created role focused on the Jewish diaspora. The speaker emphasizes that Meta’s commitment to addressing antisemitism has remained steadfast, especially after October 7, and asserts that Meta’s policies are industry-leading in protecting Jewish people and Israelis on its platforms. The company’s community standards include policies that prevent harassment, violence, and incitement, and feature a robust framework to combat antisemitism. The hateful conduct policy includes specific protections for Israelis and Jews. Holocaust denial and distortion were banned back in 2020, with Meta’s approach shifting industry thinking by designating denial as hate speech rather than misinformation. The emphasis was not only on facts but on protecting people from harmful conduct. Meta banned content with harmful stereotypes about Jews, such as the claim that Jews run the world or other major institutions. The policies were updated to recognize that the term Zionist can be used as a proxy for Jews and Israelis. Meta banned content claiming Zionists run the world or control the media, and it does not allow for dehumanizing comparisons of Zionists. The speaker notes finding a delicate balance between safety and expression. The role is intended to ensure that the voices of Israelis and the Jewish community are heard in the policy making process.

Video Saved From X

reSee.it Video Transcript AI Summary
We label posts about COVID-19 and vaccines with information from the WHO. We remove misinformation related to COVID-19 that has been debunked by public health experts and could lead to physical harm. This includes false claims about preventative measures, the existence of the virus, and vaccines. We also remove pages, groups, and Instagram accounts that repeatedly violate these policies. To address vaccine hesitancy, we reduce the distribution of certain content that doesn't violate our policies but could contribute to hesitancy. Our approach is based on guidance from health experts, who emphasize the importance of allowing people to ask legitimate questions and receive answers from trusted sources. We update our policies as new trends emerge.

Video Saved From X

reSee.it Video Transcript AI Summary
We're returning to our roots of free expression on Facebook and Instagram. While we've implemented complex content moderation systems, they've led to too many mistakes and excessive censorship. To address this, we will replace fact-checkers with a community notes system, simplify content policies, and focus enforcement on serious violations. We'll also reintroduce civic content based on user feedback and relocate our trust and safety teams to Texas to reduce perceived bias. Additionally, we will collaborate with the U.S. government to combat global censorship trends. Our goal is to prioritize free expression while responsibly managing harmful content. We're committed to reducing errors and simplifying our systems to empower voices on our platforms. More updates will follow.

Video Saved From X

reSee.it Video Transcript AI Summary
We in the surgeon general's office flag problematic posts for Facebook.

Video Saved From X

reSee.it Video Transcript AI Summary
I want to emphasize the importance of free expression on Facebook and Instagram. Over the years, we've seen increased censorship driven by political pressures and concerns about harmful content. To address this, we are simplifying our content moderation systems to reduce mistakes and restore free expression. We will replace fact checkers with a community notes system, simplify content policies, and focus enforcement on severe violations while relying on user reports for less critical issues. Civic content will be reintroduced as user feedback indicates a desire for it. Additionally, our trust and safety teams will relocate to Texas to enhance credibility. We will also collaborate with the U.S. government to combat global censorship trends. While this transition will take time, our goal is to prioritize voice and reduce unnecessary censorship on our platforms. Exciting changes are ahead!

Video Saved From X

reSee.it Video Transcript AI Summary
The foundation of democracy is vital, especially regarding freedom of speech. A recent policy titled "freedom of speech, not freedom of reach" emphasizes that while free speech is essential, platforms like Twitter can choose whom to amplify. It's important to limit the reach of extremist views without censoring speech entirely. Social media companies should follow the same business rules as other publishers. Providing a platform for hate groups and harmful individuals is unacceptable. The ADL has been actively monitoring and collaborating with major tech companies since 2017 to address these issues, ensuring that platforms are held accountable for the content they promote.

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter acknowledges the challenges of maintaining free expression while keeping the platform healthy. They often had to make quick judgment calls based on internal debates and feedback from users and critics. In one instance, Twitter blocked links to New York Post articles about Hunter Biden's laptop, as they appeared to contain hacked materials. However, they soon realized the impact this had on free press and reversed their decision within 24 hours. Twitter admitted their initial action was wrong and allowed people to tweet the original content again.

Video Saved From X

reSee.it Video Transcript AI Summary
We have 40,000 people working on safety and integrity, spending billions on election integrity. Despite concerns, AI helps reduce hate speech on our platforms to 0.01%. AI is crucial for enforcing policies and combating misuse.

Video Saved From X

reSee.it Video Transcript AI Summary
If social media platforms like Facebook, X, Instagram, or TikTok don't moderate and monitor content, we lose total control. This loss of control extends beyond social and psychological effects, leading to real harm.

Video Saved From X

reSee.it Video Transcript AI Summary
We collaborate with over 80 fact-checking organizations worldwide in more than 60 languages to address content that doesn't violate our policies. When these partners identify false posts, especially about COVID or vaccines, we limit their distribution. Additionally, we use warning labels and reduce the visibility of such posts in people's feeds. This comprehensive approach involves providing authoritative information, removing harmful misinformation, and dealing with borderline content. Our goal is to continually improve our strategy.

Video Saved From X

reSee.it Video Transcript AI Summary
We support free speech, but there are limits, especially when it leads to violence or discourages vaccination. It's important to define these boundaries. If rules are established, how can they be enforced effectively? With billions of online activities, relying on AI to monitor and enforce these rules is crucial, as catching harmful content after the fact can lead to irreversible damage.

Video Saved From X

reSee.it Video Transcript AI Summary
X is committed to encouraging healthy behavior online, claiming that 99.9% of all posted impressions are healthy. Asked to define "healthy," the speaker explained that lawful-but-awful content is difficult to see because of the "freedom of speech, not reach" policy. Kanye West, who has not yet rejoined the platform but plans to do so, has millions of followers and is considered "lawful but awful," yet he will have to operate within specific, accessible policies. An extraordinary team oversees content to maintain the 99.9% healthy impression rate. Free expression only survives when someone you disagree with can say something you disagree with, allowing for healthy, constructive discourse.

Video Saved From X

reSee.it Video Transcript AI Summary
Twitter is developing a tool to combat hate speech by analyzing networks to flag harmful content. This tool will hide violative tweets and redirect users to positive influencers, community groups, or mental health resources. Twitter currently quarantines harmful tweets, but believes providing healthier alternatives is more effective in disrupting radicalization.

Video Saved From X

reSee.it Video Transcript AI Summary
If social media platforms like Facebook, X, Instagram, or TikTok don't moderate and monitor content, we lose total control. This loss of control extends beyond social and psychological effects to include real harm.

Video Saved From X

reSee.it Video Transcript AI Summary
We're refocusing on free expression on Facebook and Instagram. While we've implemented complex content moderation systems, they often lead to mistakes and excessive censorship. To address this, we will replace fact checkers with community notes, simplify our enforcement policies, and prioritize high-severity violations while reducing reliance on automated filters. Civic content will be reintroduced based on user feedback. Additionally, our trust and safety teams will relocate to Texas to enhance transparency and reduce perceived bias. We will also collaborate with the U.S. government to combat global censorship trends. Our goal is to simplify our systems and restore the original mission of giving people a voice while still addressing illegal content. Exciting changes are ahead!

Video Saved From X

reSee.it Video Transcript AI Summary
We have developed brand safety and content moderation tools following our acquisitions. Our new policy, "freedom of speech, not reach," addresses hate speech. Illegal content is removed with zero tolerance. Content that is lawful but awful is labeled, de-amplified, and demonetized, which protects brand safety by avoiding association with it. Notably, when a post is labeled and cannot be shared, users themselves take it down 30% of the time.

Video Saved From X

reSee.it Video Transcript AI Summary
Mr. Musk's recent Twitter activity sparked a discussion about freedom of speech. While we also value this freedom, we acknowledge the need to address illegal content online.

Breaking Points

Pro Israel CRACKDOWN On Social Media
reSee.it Podcast Summary
A quiet policy overhaul on TikTok could silence Palestine coverage: a September 13 shift, driven by a new hate speech czar hired after ADL lobbying, reshapes how users discuss Israel and Palestine. Erikica Commandel, described as an IDF instructor and State Department contractor, was installed to supervise the changes, which were announced via a notification when the app opened. The updated guidelines tighten rules on references to violence and public-interest discussions, and require denouncing all designated terrorist organizations even when they appear in neutral reporting, a rule the guest says targets Hamas coverage and related reporting. He notes that before September 13 his channel had two video removals in six months, but since the change the count has risen to eleven, with some cases offering no option to appeal. Video removals come with restricted visibility: some posts are barred from the for-you feed while others are stuck in limbo or shadow-banned, forcing creators to navigate monetization risk as "soft violations" threaten payments.