TruthArchive.ai - Tweets Saved By @NameRedacted247

Saved - March 19, 2025 at 6:48 PM
reSee.it AI Summary
A discussion began about Meta hiring more than 160 individuals from the US Intelligence Community since 2018, questioning whether the Global Engagement Center funds Meta and drawing parallels to Operation Mockingbird. The cited breakdown of hires: 14 from the CIA, 26 from the FBI, 16 from the NSA, 29 from the DHS, 32 from the State Department, and 49 from the DOD. Participants asked for the information to be archived and suggested also investigating Unit 8200 (Israeli signals intelligence) alumni in US tech companies, highlighting the need for an index of intelligence alumni in the tech sector.

@NameRedacted247 - Name Redacted

1. 🧵Why has Meta hired more than 160 individuals from the US Intelligence Community since 2018? Is the Global Engagement Center (GEC) directly providing funding to Meta? Is this a modern-day version of Operation Mockingbird? CIA-14 FBI-26 NSA-16 DHS-29 State Dept-32 DOD-49 This is an update to my previous thread from December 2022. The primary focus here is to provide a comprehensive list of the most notable individuals, currently working at Meta, with backgrounds in intelligence.

@NameRedacted247 - Name Redacted

1. After learning that Twitter employs at least 15 former FBI agents, I searched Facebook. What I found is alarming: Facebook currently employs at least 115 people, in high-ranking positions, who formerly worked at the FBI/CIA/NSA/DHS: 17 CIA 37 FBI 23 NSA 38 DHS Thread🧵

@GreenEyesinTN - GreenEyes in TN. Never violate 1A.

@NameRedacted247 @HolaKetty @GoyWonderTM @mama_aries2 please archive 🙏

@HolaKetty - Ketty D

@GreenEyesinTN @NameRedacted247 @GoyWonderTM @mama_aries2 🫡

@AlexGnarcia - A/G

@HolaKetty @GreenEyesinTN @NameRedacted247 @GoyWonderTM @mama_aries2 should see how many 8200 alum they have hired too!!! Came across figures when researching the WIZ stuff. 8200 alum in US tech companies is NOT RARE!!!! must build an index to understand ALL intel (dom/foreign) alum who populate today's TECH COMPANIES.

Saved - February 25, 2024 at 1:30 PM
reSee.it AI Summary
A thread argues that Google and Meta have placed a significant number of former CIA and other intelligence community personnel in senior roles, raising concerns about their influence over online censorship. It profiles individuals such as Jacqueline Lopour and Nick Rossmann, who built intelligence programs at these companies and who, the author claims, promoted the RussiaGate conspiracy theory. The thread contends that the revolving door between the CIA and Big Tech is overlooked in discussions of censorship, notes that the increase in censorship coincides with these hires, and urges journalists and those affected by censorship to question why career CIA officers serve as content moderators at these companies.

@NameRedacted247 - Name Redacted

1. 🧵Google & Meta function as extensions of the US Intelligence Community. With Jacqueline Lopour, Google's Head of Trust & Safety, and Aaron Berman, Meta's Head of Elections Content/Misinformation Policy, both being career CIA officers, it underscores the CIA's substantial control over online censorship. Why is this CIA-Big Tech revolving door, where career CIA officers wield power to censor & decide what misinformation is, purposefully suppressed in the broader conversation about censorship? Why are career CIA officers like Jacqueline Lopour & Nick Rossmann, who both have a history of spreading misinformation & promoting the RussiaGate conspiracy theory, now in senior roles in Trust & Safety at Google, deciding what is misinformation & overseeing content moderation? The cumulative number of former Intelligence Community personnel hired by Meta & Google since 2018 is staggering. Before 2018, there were only a handful. Here are the combined hires by both companies: CIA-36 FBI-68 NSA-44 DHS/CISA-68 State Dept-86 DOD-121
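As a quick arithmetic check on the tallies quoted in these threads, here is a minimal sketch. It uses the per-agency figures exactly as tweeted; the Google-only column is inferred by subtracting the earlier Meta-only figures from the combined totals, which is an assumption of this sketch, not a number the author states.

```python
# Per-agency hire counts as quoted in the tweets above (since 2018).
# Note: "DHS" in the Meta-only thread corresponds to "DHS/CISA" in the combined tally.
meta = {"CIA": 14, "FBI": 26, "NSA": 16, "DHS": 29, "State": 32, "DOD": 49}
combined = {"CIA": 36, "FBI": 68, "NSA": 44, "DHS": 68, "State": 86, "DOD": 121}

meta_total = sum(meta.values())          # 166, consistent with "more than 160"
combined_total = sum(combined.values())  # 423 across Meta and Google together

# Inferred Google-only figures (assumption: combined minus Meta-only).
google = {agency: combined[agency] - meta[agency] for agency in meta}

print(f"Meta: {meta_total}, Meta+Google: {combined_total}, Google (inferred): {sum(google.values())}")
```

The totals confirm only the internal consistency of the quoted figures; nothing here verifies the underlying hiring claims themselves.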

Video Transcript AI Summary
We're advocating for talent to join the private sector. Transparency is crucial in combating harmful content and misinformation. Russia's involvement in election interference is unprecedented. Platforms are taking steps to combat misinformation and protect democracy. Stronger partnerships with government agencies are being formed. Coordination is key in decreasing fake news dissemination. 2018 is crucial for elections worldwide, and efforts are being made to safeguard their integrity.
Full Transcript
Speaker 0: We're also fighting for the awesome talent that's out there, to come to private sector Speaker 1: and to go into the private Speaker 0: sector. That's correct. I will I will I will proper, though. I think that if they get into our space first, we own them for life. Speaker 1: Transparency is incredibly important in the work that I do. How do we think about the balance between harmful content and Speaker 2: was Speaker 1: coronavirus misinformation excuse me. Coronavirus information center. Speaker 2: Like an infinite amount quantity. She's the ultimate Washington insider in this, and there's no one who has more foreign policy expertise. So Donald Trump is the ultimate wild card. He's unfamiliar with how the intelligence community works, and their great fear is that he could politicize them as he's done with the debate and his information about the Russian hacks. I mean, this is extremely significant. We have increasing, mounting levels of evidence that Russia's fingerprints are all over this. I mean, when Trump talks about being caught in the act, I mean, that's exactly what we have here. The hackers have been caught in the act. And we know that Russia has the means and the motivation and the capabilities to do all of this. So, this is completely unprecedented in US electoral history. I can't even stress how significant it is. Speaker 1: Some examples: we had labels on posts about COVID 19 and vaccines to show additional information from the WHO. And, when we do remove misinformation from the platform, which I'll talk about in a second, we built a tool so that we notify users who saw that misinformation before we removed it so that they have access to the authoritative information that breaks it. And this Speaker 0: election cycle, as we be a solution in protecting our democracy. Speaker 3: We've really strengthened our partnership with, our government agencies since 2016. 
Speaker 4: I think you've heard us and many others talk about publicly the very good work in terms of service, how to remediate inauthentic content. Speaker 5: In decreasing the dissemination of fake news in transparency, and I think that's what you're seeing pay off. I think we've all said in the private meetings we had as well as this public discussion that tighter coordination really helps us. Speaker 0: In the community, it's very similar whether you're CIA, NSA, NGO, and let's make a difference. What we have in the community, the FBI, probably more anybody, once we get the win, we are in for life. Speaker 3: 2018 is an incredibly important year for elections. Not just with the US midterms, but around the world. There are important elections in India, in Brazil, in Mexico, in Pakistan, and in Hungary. So we wanna make sure that we do everything we can to protect the integrity of those elections. Now

@elonmusk - Elon Musk

I just typed in a Google query on my phone and the top two choices are pro censorship!

@NameRedacted247 - Name Redacted

2. Why would Google specifically choose these six senior executives to attend an @ISF_OSAC event in DC? Everyone in this picture, alongside former CIA Director Robert Gates, is a current senior executive at Google & a former career CIA officer, except for the attorney from Perkins Coie (2nd from the left). https://securityfdn.org/events-gallery/#gallery-972

Events Gallery | The Security Foundation View our events gallery for images of our International Security Foundation events supporting OSAC and informing others. securityfdn.org

@NameRedacted247 - Name Redacted

3. Jacqueline Lopour spent 10 years at the CIA before joining Google in 2017. She is currently Head of Trust & Safety, where she not only determines what constitutes misinformation but also wields considerable power in content moderation on Search & YouTube. In this interview, Lopour is promoting the RussiaGate conspiracy theory. I wonder if she believes her own propaganda.

Video Transcript AI Summary
The hacking on both sides occurred, but the release of DNC information to WikiLeaks was deliberate to help Trump get elected. Evidence suggests Russia's involvement. Trump's claim of being caught in the act is true. Russia has the means and motivation for such actions, making this unprecedented in US electoral history.
Full Transcript
Speaker 0: I think the difference here is even though the hacking was done on both sides, they deliberately released the DNC information to WikiLeaks. Mhmm. And so even though the hacking was on both sides, it was the release of the information out into the public domain with the specific motivation of helping Trump get elected which is so significant. I mean this is extremely significant. We have increasing mounting levels of evidence that Russia's fingerprints are all over this. I mean, when Trump talks about being caught in the act, I mean that's exactly what we have here. The hackers have been caught in the act. And we know that Russia has the means and the motivation and the capabilities to do all of this. So, this is completely unprecedented in US electoral history. I can't even stress how significant it is.

@NameRedacted247 - Name Redacted

4. Jacqueline Lopour, a career CIA officer, played a significant role in developing various intelligence programs at Google & YouTube: *Manages intel operations for violent extremism, misinformation, hate speech, etc. *Led development of intelligence programs for global election analysis. *Developed the “YouTube Intelligence Desk.” *Developed Google’s first machine-learning threat detection & analysis program. *Provided daily COVID-19 briefings to senior leadership at Google & YouTube CEO LinkedIn - https://archive.ph/IgF1w

@NameRedacted247 - Name Redacted

5. In 2015, Lopour authored a rather bizarre article titled: “The best reason for Iran deal? The West will learn where to drop bombs.” https://www.reuters.com/article/idUSL1N1211IF/

@NameRedacted247 - Name Redacted

6. Nick Rossmann spent over 5 years at the CIA before joining Google in 2022 as Senior Manager of Trust & Safety. His activity on Twitter/X is troubling, especially considering his current position in content moderation. Why does Nick Rossmann have a problem with white people? Here are some examples of Rossmann’s unhinged behavior on Twitter/X (all archived): *Negative tweets about white people: https://archive.vn/ZdKeT https://archive.vn/PYgWh https://archive.vn/rOOpB *Hoping Trump voters cough on their grandparents (giving them COVID) & “get to rot”- https://archive.is/rppqw *Asking Trump if he is an agent of a foreign power - https://archive.vn/xi7t8 *Calling Trump “a lunatic & a racist”, tagging Keith Olbermann & using the hashtag “Resist” - https://archive.vn/Pk5Kh *Calling anti-vaxxers Nazis & Confederates - https://archive.vn/YWMDD

@NameRedacted247 - Name Redacted

7. Christopher Porter spent most of his professional career in the Intelligence Community. After 9 years at the CIA, he joined ODNI, where he was Head of the IC Cyber Analysis Council, leading a team of CIA, FBI, NSA & DOD personnel on US elections. While at the ODNI, he regularly briefed President Biden, so it’s only natural that as of June 2022, he joined Google as Head of Threat Intelligence. Porter is also a member of the Atlantic Council. His LinkedIn bio states that he likes to talk about Russia & election security. LinkedIn- https://archive.is/pFOI2

@NameRedacted247 - Name Redacted

8. Deborah Wituski joined Google in 2018 as Senior Director Global Intelligence. Her only prior work experience was 19 years at the CIA, where she was Chief of Staff to the Director. Wituski is a member of Council on Foreign Relations.

@NameRedacted247 - Name Redacted

9. Now why would Deborah’s intelligence desk at Google be so concerned about Putin’s personal life or the inner workings of the Kremlin? Full video for reference- https://www.youtube.com/watch?v=1tW2tPdwI9E

Video Transcript AI Summary
I lead a team at Google analyzing geopolitical events. When the Ukraine war started, I reminded my team to question our assumptions. We need to consider if Putin has information we don't, influencing his decisions. It's important to slow down, gather all information, and view it from different angles before making decisions. Sometimes, more information takes time to emerge.
Full Transcript
Speaker 0: Here at Google, I lead a team that's focused on a lot of the geopolitical developments and what they mean for the company. And at the beginning of the war, the invasion of Ukraine, I had said to my analysts, we just need to keep asking ourselves: is there something that Vladimir Putin knows that we don't know? I'm not suggesting he's sick. I'm not suggesting that, you know, there's a coup around him, but we have to question our own assumptions. Is there something that he knows that we don't know that could be driving the timeline, that could be driving the decision making? Because I've learned that over time, often you don't have the complete picture. You have an assumption that usually informs that first initial decision or first initial perspective on the action. And I just want us to kind of, like, slow down, take a look at all of the information, look at it from multiple perspectives, and recognize that some of the information takes some time to develop.

@NameRedacted247 - Name Redacted

10. Katherine Tobin joined Google in 2021. Her career path is like the others listed in this thread: after 6 years at Booz Allen Hamilton, she spent 4 years at the ODNI, followed by 4 years at the CIA, & then returned to the ODNI for another 3 years. With over 10 years of experience in the Intelligence Community, Google was the obvious choice for her. On her LinkedIn bio, she states that her favorite problems to solve are promoting DEI

@NameRedacted247 - Name Redacted

11. Katherine wrote a blog post on LinkedIn, sharing her transition from the CIA to the private sector titled “My New Mission: From Spying to Startups.” For anyone interested, here is her personal blog- https://kttobin.net/

Move fast and learn things. Triathlon, travel, and other experiments. kttobin.net

@NameRedacted247 - Name Redacted

12. Beth Schmierer joined Google in 2022 as a Global Threat Analyst and Intel Manager. Her only prior work experience was five years at the CIA. Additionally, she spent nine years in Madrid working for the State Department, where she claims to have been tasked with “implementing the US government’s agenda in Spain & Latin America”

@NameRedacted247 - Name Redacted

13. Omar Ahmed joined Google in 2020 as Manager of Trust & Safety at Google/YouTube. His only prior work experience is 10 years at the State Department. I suppose his experience in Indonesia, Afghanistan & Senegal qualifies him to be a content moderator.

@NameRedacted247 - Name Redacted

14. Jamie Washington joined Google in 2018 as Director of Threat Intelligence. Her only experience prior to joining Google is 2 years at the FBI and 13 years at the CIA (2 years in Iraq).

@NameRedacted247 - Name Redacted

15. Chelsea Magnant joined Google in 2018 as Public Policy Manager. Her only prior work experience was at the CIA for 9 years. She’s also an instructor at Aspen Institute

@NameRedacted247 - Name Redacted

16. Dawn Burton joined Google’s Trust & Safety team in 2022. She was previously at Twitter Trust & Safety but was fired by @elonmusk. Prior to this, she spent 6 years at DOJ, then 4 years at FBI as a Senior Advisor to James Comey. After a career at DOJ & FBI, working for James Comey, it’s totally normal that she would transition to Trust & Safety departments at tech platforms, right?

@NameRedacted247 - Name Redacted

17. Yong Suk Lee joined Google in 2019 as Director Global Risk Analysis. His only prior work history is 22 years at the CIA where he was the Deputy Assistant Director of Korea Mission Center. He is also a member of Council on Foreign Relations

@NameRedacted247 - Name Redacted

18. Candice Bryant joined Google in 2021 as Executive & Internal Comms Manager. Her only other work experience was 17 years at the CIA. She led the CIA's social media team with the goal of boosting the agency's public image. https://www.politico.com/news/2021/09/08/cia-least-covert-mission-510043

The CIA’s least covert mission The country’s premier intelligence agency is looking to showcase “a softer side” via social media. politico.com

@NameRedacted247 - Name Redacted

19. I could list over 100 more examples of individuals whose sole work history is within the Intelligence Community or career State Department diplomats. Many of these individuals hold positions as content moderators and policy managers. Most of them joined Google/YouTube after 2018. Is it merely a coincidence that censorship has increased aggressively since then?

@NameRedacted247 - Name Redacted

20. If you are a “journalist” who covers the censorship issue but chooses to ignore the revolving door, you lack integrity. If you are a notable figure who’s been censored on Facebook, Instagram, or YouTube, it's crucial to start questioning why career CIA officers are employed by these companies as content moderators.

Saved - December 1, 2023 at 5:08 PM
reSee.it AI Summary
Shelby Pierson, former ODNI election czar, raised concerns about a Russian "leak operation" after the Hunter Biden laptop story broke. However, her boss, former DNI John Ratcliffe, stated that the laptop was not part of a Russian disinformation campaign. Pierson's involvement in warning social media platforms about a potential hack-and-leak operation was confirmed by FBI's Elvis Chan. Pierson also had partnerships with social media and tech firms. The article questions Pierson's impartiality and explores the efforts to counter disinformation through the Foreign Malign Influence Center. The article concludes by suggesting that this partnership may be used to censor and socially engineer public opinion on various topics.

@NameRedacted247 - Name Redacted

1. #DisinfoGate PART 3 SHELBY PIERSON-ODNI ELECTION CZAR 10/15/2020- One day after the NY Post Hunter Biden Laptop story, Pierson says-“Russians leak information…whether it denigrates Biden and/or boosts Trump” Which Russian “leak operation” is she referring to? 🧵 @elonmusk

Video Transcript AI Summary
Russia is employing familiar tactics to influence public opinion, including leaking information that supports their national interests, such as denigrating Vice President Biden and boosting President Trump. They are also using social media to spread divisive narratives on various political issues in the United States. Russia's overt media is being utilized to promote narratives that align with their national interests. Additionally, they are sponsoring proxy websites that mimic legitimate sources to disseminate information. Furthermore, Russia is employing unwitting US individuals to lend credibility to the information they distribute, making it appear less obvious that it originates from Russia.
Full Transcript
Speaker 0: We have released a few statements about what the Russians are doing today. I think these tactics are familiar to many folks, which include potential leaks of information, and specific leaking of information that fits a narrative that suits their national interest, whether that's denigrating Vice President Biden and/or boosting President Trump. I think you've also seen even recent press reports, for example, on social media proliferating again those divisive narratives surrounding a spectrum of political issues in the United States. Russia is also using its overt media, which has certainly raised its profile over the past several years, to promote narratives that are, again, in their national interests. And, also, sponsoring proxy websites; I think you're familiar with the Peace Data takedown, where we had websites that are, again, mimicking, I'd say, legitimate web presence and proliferating information, albeit sponsored by the Russians. And then, as I mentioned a minute ago, using unwitting US persons to boost the credibility of information as it's disseminated, as it might not have the sort of clunkiness if it was coming from Russia directly.

@Jim_Jordan - Rep. Jim Jordan

We knew Big Tech was censoring conservatives, but the #TwitterFiles keep showing us it was worse than we thought.

@NameRedacted247 - Name Redacted

2. 10/19/2020- Five days after the NY Post story, Shelby Pierson’s boss, former DNI @JohnRatcliffe states emphatically: “Hunter Biden’s Laptop is NOT part of some Russian Disinformation campaign” Again, which Russian leak operation was Shelby Pierson talking about?

Video Transcript AI Summary
Adam Schiff, the chairman of the House Intelligence Committee, claimed that the Intelligence Community believes Hunter Biden's laptop and its emails are part of a Russian disinformation campaign. However, the Director of National Intelligence stated that there is no intelligence supporting this claim and no evidence has been shared with Schiff or any other member of Congress. The Director emphasized that using the intelligence community to push a political narrative is unacceptable. He made it clear that Hunter Biden's laptop is not involved in any Russian disinformation campaign, and he believes the American people are aware of this.
Full Transcript
Speaker 0: Is this Russian disinformation, Director? So, Maria, it's funny that some of the people that complain the most about intelligence being politicized are the ones politicizing intelligence, and unfortunately in this case, it is Adam Schiff, the chairman of the House Intelligence Committee, who, as you pointed out on Friday, said that the Intelligence Community believes that Hunter Biden's laptop and the emails on it are part of some Russian disinformation campaign. Let me be clear. The intelligence community doesn't believe that, because there's no intelligence that supports that, and we have shared no intelligence with Chairman Schiff or any other member of Congress that Hunter Biden's laptop is part of some Russian disinformation campaign. It's simply not true, and this is exactly what I said I would stop when I became the Director of National Intelligence, and that's people using the intelligence community to leverage some political narrative. And in this case, apparently Chairman Schiff wants anything against his preferred political candidate to be deemed as not real and is using the Intelligence Community, or attempting to use the Intelligence Community, to say there's nothing to see here. Don't drag the intelligence community into this. Hunter Biden's laptop is not part of some Russian disinformation campaign, and I think it's clear that the American people know that.

@NameRedacted247 - Name Redacted

3. Elvis Chan (FBI) was deposed in the Missouri v Biden case. He ID'd Pierson as 1 of 4 Govt. officials who warned Social Media about concerns of a Russian “hack-and-leak” operation ahead of the 2020 election (pg. 222). @AGAndrewBailey- it wasn't just the FBI

@AGAndrewBailey - Attorney General Andrew Bailey

The FBI deliberately planted false information about “hack-and-leak” operations to deceive social-media platforms into censoring the Hunter Biden laptop story.

@NameRedacted247 - Name Redacted

4. Yoel Roth states, in a sworn declaration, that he had weekly meetings with ODNI, DHS & FBI. Roth states he was warned of expected “hack-and-leak” operations by state actors that would involve Hunter Biden https://eqs.fec.gov/eqsdocsMUR/7827_08.pdf

Enforcing federal campaign finance law - FEC.gov The Federal Election Commission has jurisdiction over the civil enforcement of the federal campaign finance law. Enforcement cases can come from audits, complaints, referrals or self-submissions: Enforcement cases are primarily handled by the Office of General Counsel and are known as Matters Under Review (MURs). Other programs designed to augment the Office of General Counsel's enforcement role include the Alternative Dispute Resolution Program and the Administrative Fine Program. fec.gov

@mtaibbi - Matt Taibbi

@ShellenbergerMD @bariweiss 20. This post about the Hunter Biden laptop situation shows that Roth not only met weekly with the FBI and DHS, but with the Office of the Director of National Intelligence (DNI):

@NameRedacted247 - Name Redacted

5. Buried in the unclassified IC Assessment of Foreign Threats to the 2020 Elections, there is one section that stands out. Page 9: the report blames Russia for publishing & amplifying disparaging content about Biden, his family & specifically “his son.” https://www.dni.gov/files/ODNI/documents/assessments/ICA-declass-16MAR21.pdf

@NameRedacted247 - Name Redacted

6. @JohnRatcliffe said NO ONE from ODNI was authorized to discuss “content moderation” w/ Social Media firms. Yet Shelby Pierson said: “sharing information from the IC w/ Social Media platforms to consider, relative to their terms of service, how to remediate inauthentic content”

Video Transcript AI Summary
The speaker discusses the intelligence community's efforts to share information with social media platforms to address inauthentic content. They clarify that the office of the director of National Intelligence would only participate in approved election security briefings with private companies like Twitter, YouTube, Microsoft, and state election officials. These briefings focus on discussing threats and have nothing to do with content moderation or the Biden laptop as Russian disinformation. The speaker mentions that there were weekly meetings between the FBI, DHS, and Twitter, but only one reference to their office. They hope that this reference was part of the approved process for election security briefings.
Full Transcript
Speaker 0: I think you've heard us and many others talk about publicly the very good work in terms of sharing information from the intelligence community with social media platforms to consider relative to their terms of service how to remediate inauthentic content. Speaker 1: The Office of the Director of National Intelligence would have only been authorized to participate in a Trump National Security Council approved and coordinated process for election security briefings to groups of private companies. So it would include companies like Twitter, but many other companies, YouTube, Microsoft, as well as state election officials, to talk about threats. None of those meetings, Maria, would have had anything to do with content moderation, much less anything to do specifically with the Biden laptop as Russian disinformation. So there never would have been any authority or reason for anyone within the intelligence community to be saying anything otherwise. So, I think that's pretty clearly stated. And in looking at the Twitter files, I did look and see in Matt Taibbi's substack where he said that there were weekly meetings between the FBI and DHS and Twitter. And I know there are whistleblowers that are saying that as well. But Matt Taibbi also says there was only one reference to my office, and someone liaising with my office. And I assume that, I certainly hope, that that was part of the National Security Council approved process for election security briefings.

@NameRedacted247 - Name Redacted

7. DISTURBING- Pierson says domestic censorship, on social media, is justified if IC determines it’s “foreign sponsored” “if we’re aware that this information is foreign sponsored…we want to do everything we can to MANAGE this information” LISTEN: https://www.npr.org/2020/01/22/798186093/election-security-boss-threats-to-2020-are-now-broader-more-diverse

Video Transcript AI Summary
Americans spreading misinformation, whether intentionally or unknowingly, can pose a significant threat to elections. This misinformation can be shared on social media without us realizing it's fake. While foreign interference is a concern, we value and encourage free speech in our country. However, we also need to ensure that if we or the involved firms are aware of foreign-sponsored and covertly sponsored information, we take steps to manage it effectively.
Full Transcript
Speaker 0: We know that Americans are spreading misinformation. Sometimes they're doing it deliberately, sometimes they're not at all doing it deliberately. It can be a post on social media that we don't know is fake. Is that a bigger threat to the election than foreign interference? Well, I Speaker 1: think there's 2 aspects of that. You know, let's be very clear that, of course, the federal government encourages and wants as broad and free speech as possible. That is a principle of our country, and it's probably one of the most valuable cornerstones of our society. So we want people to engage in public exchange, political exchange, and to have that freedom unfettered from foreign interference. But at the same time, I think we also want to make sure that if we or the firms involved are aware that this information is foreign sponsored and is covert in terms of its sponsorship to the user, we want to do everything we can to manage that information.
Election Security Boss: Threats To 2020 Are Now Broader, More Diverse In an exclusive interview with NPR, election threats executive Shelby Pierson says more nations may attempt more types of interference in the U.S. npr.org

@NameRedacted247 - Name Redacted

8. PARTNERSHIPS WITH SOCIAL MEDIA & TECH FIRMS 1/14/2020- Pierson speaks at EAC Summit. She confirms that the Intelligence Community, Social Media firms & Tech firms are in a “partnership” In my first #DisinfoGate thread, we heard @BillEvanina also describe this partnership

Video Transcript AI Summary
The speaker emphasizes the importance of partnership between the federal government, state and local colleagues, and social media and tech firms in securing elections. They acknowledge the valuable information and opportunities that these firms possess, which the government does not have. The integration of these relationships has been a critical step forward since 2016. Speaker 1 expresses pride in the accomplishments of the past 2 years, particularly in the last 6 to 9 months, as a collaborative effort between the government and social media and tech firms. They believe this partnership will serve as a model for the future.
Full Transcript
Speaker 0: In addition, as I mentioned in the opening part of my statement, this is a partnership. And it's not a partnership that stops within the federal government. I am keenly aware of the pressure that my state and local colleagues face every day as those that are responsible for securing the election. The exposure of the intelligence community to my state and local partners, again, through DHS and FBI, has been remarkable. To really understand, mutually understand one another, I think has been a critical step forward since 2016. But it can't stop there, because we also have constituencies among social media firms and among tech firms who also have cognizance and information and opportunity that the intelligence community or the US government doesn't have. And again, I think you've seen and have read about the opportunities that we are pursuing to continue to integrate that relationship with the constituencies, again, in Silicon Valley and in our private firms, to make sure that even that seam and gap is stitched. Speaker 1: I'd actually be proud to talk about that, Arun. I think, from my perspective, what we accomplished the past 2 years, but specifically the last 6 to 9 months, as an integrated holistic government effort in partnership with social media and tech firms, is unprecedented. And I think it's really going to be the model of the future moving forward.

@NameRedacted247 - Name Redacted

9. Shelby Pierson states: “we need an entire whole of society…working together to understand threats that come w/ election security & countering MALIGN INFLUENCE” She includes “Social Media, Tech firms, the Press, Academia, Special Interest Groups & NGO’s”

Video Transcript AI Summary
Civil society, including the press, academia, special interest groups, and NGOs, plays a crucial role in addressing election security and countering malign influence. It is not enough for just the federal government, states, or tech and social media companies to tackle this issue. We need a collaborative effort from all sectors of society to understand and address the threats. This synergy is still a work in progress.
Full Transcript
Speaker 0: But it also doesn't stop there, and it gets back to my opening comment that civil society is also a key player here. Whether you're part of the press or you're part of academia or you're part of special interest groups or you're part of NGOs, there's an entire body of expertise that also informs the voting population. So you can't simply have the feds tackling this. You can't simply have the states tackling this. You can't simply have tech firms and social media firms tackling this. We need an entire whole of society, seamless opportunity, working together to understand the threats that come with election security and countering malign influence. That is a synergy that is still in work.

@NameRedacted247 - Name Redacted

10. ODNI's Pierson & @BillEvanina held the highest positions in the IC working on ‘election security’ Pierson & @BillEvanina have both, unambiguously, stated that they worked w/ Social Media firms to “take down” or “remediate content”

Video Transcript AI Summary
Multiple agencies within the intelligence community collaborate with social media platforms to address and remove inauthentic content. These agencies work tirelessly to collect intelligence and provide real-time information to the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). The FBI and DHS take appropriate action by working with social media companies to remove such content.
Full Transcript
Speaker 0: I think you've heard us and many others talk about publicly the very good work in terms of sharing information from the intelligence community with social media platforms to consider relative to their terms of service how to remediate inauthentic content. Double digit agencies working together every day, all day, dedicated women and men around the globe, collecting intelligence, driving action, providing real time information to DHS and FBI. FBI and DHS taking the appropriate action, dealing with social media companies, taking stuff down...

@elonmusk - Elon Musk

@ggreenwald @mtaibbi Most people don’t appreciate the significance of the point Matt was making: *Every* social media company is engaged in heavy censorship, with significant involvement of and, at times, explicit direction of the government. Google frequently makes links disappear, for example.

@NameRedacted247 - Name Redacted

11. @JohnRatcliffe said NO ONE from ODNI was authorized to discuss “content moderation” w/ Social Media firms Both Pierson & Evanina publicly admitted discussing content moderation w/ Social Media firms @Jim_Jordan @Weaponization @JudiciaryGOP should call ALL THREE to testify

Video Transcript AI Summary
The office of the director of National Intelligence would have only participated in election security briefings with private companies like Twitter, YouTube, Microsoft, and state election officials. These meetings did not involve content moderation or the Biden laptop as Russian disinformation. Therefore, there was no authority or reason for anyone in the intelligence community to say otherwise.
Full Transcript
Speaker 0: The office of the director of National Intelligence would have only been authorized to participate in a Trump National Security Council approved and coordinated process for election security briefings to groups of private companies. So it would include companies like Twitter but many other companies, YouTube, Microsoft, as well as state election officials, to talk about threats. None of those meetings, Maria, would have had anything to do with content moderation, much less anything to do with, specifically, the Biden laptop as Russian disinformation. So there never would have been any authority or reason for anyone within the intelligence community to be saying anything otherwise.

@NameRedacted247 - Name Redacted

12. Brian Scully confirms, in his deposition, that ODNI officials held weekly teleconferences w/ CISA to discuss ways to “counter disinformation related to the November 2020 Elections” Exhibit 31 is OIG report on DHS that confirms this As noted earlier-Elvis Chan ID’d Pierson

@NameRedacted247 - Name Redacted

13.Shelby Pierson was appointed by former DNI Dan Coats on July 19, 2019. She was named Chair of the Election Executive & Leadership Board. She was the top election security official in the Intelligence Community https://www.dni.gov/index.php/newsroom/press-releases/item/2023-director-of-national-intelligence-daniel-r-coats-establishes-intelligence-community-election-threats-executive

@NameRedacted247 - Name Redacted

14. Was Shelby Pierson a partisan? Did her personal bias affect the types of intelligence she shared with Social Media firms? Did she use that intelligence to advise what content she wanted the Social Media firms to ‘remediate?’

@NameRedacted247 - Name Redacted

15. 2/5/2020- Trump acquitted in first impeachment Eight days later, Shelby Pierson briefed the HPSCI, where she told the committee that “Russia was working to get Trump re-elected.” https://apnews.com/article/campaigns-donald-trump-ap-top-news-elections-politics-4912baca0c4cbc6cb7a3580f4f3c9b96

Intel officials say Russia boosting Trump candidacy WASHINGTON (AP) — Intelligence officials have warned lawmakers that Russia is interfering in the 2020 election campaign to help President Donald Trump get reelected, according to three officials familiar with the closed-door briefing. apnews.com

@NameRedacted247 - Name Redacted

16.Trump was enraged over this, fired Acting DNI Maguire and replaced him with @RichardGrenell Yet Pierson remained in her position. Why?

@realDonaldTrump - Donald J. Trump

Another misinformation campaign is being launched by Democrats in Congress saying that Russia prefers me to any of the Do Nothing Democrat candidates who still have been unable to, after two weeks, count their votes in Iowa. Hoax number 7!

@NameRedacted247 - Name Redacted

17. 3/10/2020- In a classified briefing to Congress, @BillEvanina walks back claims made by Shelby Pierson a month prior. Evanina told Congress they had “nothing to support” the notion that Putin favored one candidate or another. https://www.cbsnews.com/news/administration-officials-brief-members-of-congress-on-election-security/

Trump administration officials brief Congress on election security Top U.S. officials briefed Congress on election security Tuesday, telling lawmakers they had "nothing to support" the notion that Russian President Vladimir Putin favored one candidate or another. cbsnews.com

@NameRedacted247 - Name Redacted

18. While Shelby Pierson has now left ODNI, the effort to counter “disinformation” continues Pierson’s old position as Elections Threats Executive is part of a new center at ODNI: FOREIGN MALIGN INFLUENCE CENTER Its mission is identical to DHS Disinformation Governance Board

@NameRedacted247 - Name Redacted

19. Jeffrey Wichman, former 30-year CIA officer, is the current acting Director of the Foreign Malign Influence Center #FMIC’s mission is to counter “malign influence” that seeks to influence public opinion & behavior Wichman is the de facto leader of the ‘Thought Police’

@NameRedacted247 - Name Redacted

20.#DisinfoGate- A real, documented, & vast conspiratorial effort led by Shelby Pierson & Bill Evanina at ODNI, along with other Government officials It includes virtually the entire Intel Community in partnership w/ Social Media firms, Tech firms, MSM, Academia, NGO’s, etc.

@NameRedacted247 - Name Redacted

21.#DisinfoGate- Using a false narrative that Russia interfered in the 2016 election, CISA was created as a liaison between IC & Social Media The goal was to censor free speech (under the false guise of foreign disinfo) & socially engineer the opinions of American voters

@NameRedacted247 - Name Redacted

22.#DisinfoGate- After achieving their goal of getting Trump out of office, this Deep State/Social Media partnership is now ostensibly being used to censor & socially engineer our views on COVID, vaccines, climate, race relations, Russia/Ukraine War, etc. End

@NameRedacted247 - Name Redacted

23.#DisinfoGate Part 1- Bill Evanina

@NameRedacted247 - Name Redacted

1. #DisinfoGate- VIDEO: Government official admits that Intel Community “partnered” w/ Social Media & worked together in “TAKING STUFF DOWN” Bill Evanina is the first official to admit this publicly The effort to ‘combat Disinfo,’ ahead of 2020 election, was as large as 9/11

Video Transcript AI Summary
I have witnessed a remarkable collaborative effort among various agencies, similar to what happened after 9/11. This time, multiple agencies are working together tirelessly, collecting intelligence, sharing real-time information with the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). The DHS and FBI are taking appropriate action, including working with social media companies to remove content and conducting offensive measures globally. I believe this is the most significant role I have had in my career, and I am proud of our current success. We have taken bold actions, both publicly and privately, and they have all been successful.
Full Transcript
Speaker 0: You know, I've been in this business a long time going back prior to 9/11, and I saw this type of organizational symbiotic effort after 9/11. This is the first time I've seen it again, where we had, you know, double digit agencies working together every day, all day, dedicated women and men around the globe, collecting intelligence, driving action, providing real time information to DHS and FBI. FBI and DHS taking the appropriate action, dealing with social media companies, taking stuff down, and then at the same time, doing really critical offensive measures around the globe, I think was outstanding. I'm really proud. I've said publicly, I think this would be the most important role I've had in my career, and I'm really proud of where we are right now, with the success we've had. We did some really daring things the last couple months, both publicly and non publicly. And I think they've all paid off.

@NameRedacted247 - Name Redacted

24.#DisinfoGate Part 2- Foreign Malign Influence Center - https://www.dni.gov/index.php/nctc-who-we-are/organization/340-about/organization/foreign-malign-influence-center

Foreign Malign Influence Center dni.gov

@NameRedacted247 - Name Redacted

1. #DisinfoGate PART 2 FOREIGN MALIGN INFLUENCE CENTER #FMIC quietly opened its doors in Sept 2022 Its mission is identical to DHS Disinformation Governance Board FMIC mission is to counter “malign influence” that seeks to influence public opinion & behavior @elonmusk

@NameRedacted247 - Name Redacted

25.Sources: Tweet 1 - https://www.youtube.com/watch?v=49MWq6cqPXU Tweet 2 - https://www.youtube.com/watch?v=woGyMvqLV5o&t=612s Tweet 6- https://www.youtube.com/watch?v=1EV9BSMfoqY https://www.foxnews.com/video/6317022404112 Tweets 8 & 9 - https://www.youtube.com/watch?v=yvMU67hsqM0&t=256s

John Ratcliffe: Hunter Biden story suppression a ‘domestic disinformation campaign’ | Fox News Video Former director of national intelligence John Ratcliffe responds to revelations in the Twitter Files on 'Sunday Morning Futures.' foxnews.com
Saved - November 25, 2023 at 10:05 PM
reSee.it AI Summary
In this conversation, @NameRedacted247 discusses the impeachment of Trump and subsequent events. They question why Shelby Pierson remained in her position after briefing the committee about Russia's alleged efforts to get Trump re-elected. @NameRedacted247 also mentions the creation of the Foreign Malign Influence Center and its mission to counter disinformation. They express concerns about censorship and social engineering by the Deep State/Social Media partnership. Sources are provided for further information.

Saved - September 17, 2023 at 10:20 PM
reSee.it AI Summary
The joint DHS/ODNI report emphasizes partnerships to combat foreign malign influence. FMIC, led by CIA veteran Jeffrey Wichman, engages with social media companies and acts as a liaison between the IC and platforms. The report highlights the distribution of labeled disinformation to platforms like Facebook, Twitter, and TikTok. FMIC's focus extends beyond foreign disinfo, addressing various topics like Biden criticism and COVID-19 conspiracy theories. Depositions reveal FMIC's collaboration with CISA and advising social media platforms on content removal.

@NameRedacted247 - Name Redacted

2. In February 2023, I flagged concerns about the ODNI's Foreign Malign Influence Center (FMIC). FMIC is a functional clone of the now-defunct Disinformation Governance Board & was launched in September 2022 under Director Jeffrey Wichman, who is a 30-year CIA veteran. https://www.dni.gov/index.php/fmic-home

FMIC Home dni.gov

@NameRedacted247 - Name Redacted

3. The joint DHS-ODNI report emphasizes Public-Private Partnerships to combat Foreign Malign Influence and outlines recommended roles for FMIC: *Directly engage with social media companies to address disinformation. *Serve as a liaison between the Intelligence Community (IC) and Social Media platforms on disinformation issues.

@NameRedacted247 - Name Redacted

4. At the report's conclusion, the 'Analytic Deliverable Dissemination Plan' is highlighted: ODNI & CISA jointly distribute content labeled as 'disinformation' or 'foreign malign influence' to these platforms: Meta (Facebook, Instagram, WhatsApp), Twitter/X, TikTok, Snapchat, & Discord

@NameRedacted247 - Name Redacted

5. I'll address common arguments about FMIC: First - "They only focus on foreign disinfo!" Wrong Anything contradicting the U.S. establishment narrative is flagged as 'Russian disinformation.' The ODNI's Assessment on Foreign Influence in the 2020 Election cites examples like: a. Criticizing President Biden and his son b. Spreading COVID-19 conspiracy theories. c. Claims of social media censorship d. And let's not forget the 51 'intelligence experts' who said the Hunter Biden laptop had "all the classic earmarks of a Russian information operation."

@NameRedacted247 - Name Redacted

6. I was told the relationship between FMIC & CISA is 'hypothetical malfeasance' Wrong A deposition in the Missouri v. Biden case revealed that Shelby Pierson, ODNI's election threat executive (which now falls under FMIC), had weekly meetings with CISA leading up to the 2020 election. She also directly advised social media platforms on removing 'inauthentic content according to their terms of service.' Don't take my word for it; she admits this herself.

Video Transcript AI Summary
We have discussed the intelligence community's efforts to share information with social media platforms to address fake content and ensure it aligns with their terms of service.
Full Transcript
Speaker 0: I think you've heard us and many others talk about publicly the very good work in terms of sharing information from the intelligence community with social media platforms to consider relative to their terms of service how to remediate inauthentic content.
Saved - August 5, 2023 at 4:28 AM
reSee.it AI Summary
Former CIA and ODNI operatives have joined Meta, the parent company of Facebook. Kristopher Rose, who played a role in suspending Trump from the platform, and Aaron Berman, who oversaw censorship during the 2020 US election, are now part of Meta's team. Other former intelligence and government officials have also joined Meta in various roles. The company has been actively recruiting individuals with experience in government intelligence and security. The article provides a comprehensive list of these individuals and their backgrounds.

@NameRedacted247 - Name Redacted

2. GEC was established in the 2017 NDAA. Its primary objective was not only to identify misinformation and disinformation targeted at the US and its allies but also to develop and distribute "fact-based" narratives to counter propaganda. GEC was granted the authority to provide awards to various entities, including: “Private Companies” “Media Content Providers” Presently, @America1stLegal is pursuing a lawsuit via FOIA to uncover the recipients of awards from GEC. We will be revisiting this once the information is unredacted

@NameRedacted247 - Name Redacted

3. Aaron Berman spent two decades at the CIA before joining Meta in July 2019. He built the Misinformation Policy team & wrote content policies, overseeing censorship during the 2020 US Election & COVID-19. Notably, he collaborated with fact-checkers to censor "misinformation" in US & foreign elections. As of May 2023, Berman holds the position of Lead for Elections Policy content. In his prior role at the CIA, he was responsible for writing the Presidential Daily Brief (PDB) for President Trump. http://linkedin.com/in/aarondberman/… @AaronDBerman is eager to share the writing skills he acquired during his tenure at the CIA through his Substack platform. https://theblueowl.substack.com For a comprehensive list of the censorship efforts overseen by Berman, I encourage you to read the attached thread:

Video Transcript AI Summary
Aaron, a product policy manager at Facebook, discusses the challenges his team faces in determining the rules for the platform. They strive to strike a balance between allowing content and ensuring safety. Transparency is crucial in their decision-making process. Aaron acknowledges the discomfort of drawing the line between acceptable and harmful content. He emphasizes that a small percentage of users spreading harmful content can negatively impact everyone. Aaron believes that regulation would provide clearer guidelines for platforms and help define the balance of rules. He highlights the need for legislative and regulatory catch-up due to the rapid evolution of technology.
Full Transcript
Speaker 0: My name is Aaron. I've been with Facebook for 2 years now, and I'm a product policy manager. Speaker 1: What does your job entail? Speaker 0: We're part of the team that writes the rules for Facebook. If something violates our standards for safety and security, what Facebook could, should, can do. Speaker 1: You and your team are faced with very important decisions, especially when it comes to content. Speaker 0: There's very little agreement whether we should be leaving more content up or taking more content down with any particular rule or issue that we're looking at, where something has come up where the rules are not 100% clear. We're not gonna make everybody happy. Speaker 1: How does your team work on that? Speaker 0: Transparency is incredibly important in the work that I do. How do we think about the balance between harmful content and protecting free speech? It's a balance. Speaker 1: Does it ever make you feel uncomfortable to be put in a position where you're having to draw the line? Speaker 0: Yes. And I think it should make me uncomfortable, and all of us who do this work. If 99% of the people are expressing themselves, sharing their family photos, exchanging ideas, and 0.001% are encouraging violence or spreading harmful content, that can ruin the thing for everybody. These decisions can have real effects on people. We are developing rules and policies. Without regulation, we're really navigating that space as best we can. Speaker 1: Why would updating regulations help you? Speaker 0: Regulation can help us better define what is acceptable and what's not. I think a standardized approach would help platforms all across the board and actually give us guidelines where right now there's very few. Technology has changed so quickly. We need the legislative and regulatory space to catch up. Regulation can help us better define what the balance of those rules should be.
The Blue Owl | Aaron Berman | Substack Hundreds of subscribers. Tips on writing and critical thinking from the President's Daily Brief. Often with sci-fi examples. Click to read The Blue Owl, by Aaron Berman, a Substack publication. theblueowl.substack.com

@NameRedacted247 - Name Redacted

4. Kristopher Rose spent a decade between the CIA & ODNI before joining Meta in March 2020. During his time at ODNI, Rose collaborated with CIA's Aaron Berman in composing the Presidential Daily Brief (PDB) for President Trump. After leaving the ODNI, Rose became a member of the Oversight Board of Meta in 2020, where he was among the 20 individuals involved in the decision to suspend @realDonaldTrump from all Meta platforms. Subsequently, after executing the suspension, Rose transitioned to CISA, serving as Senior Advisor to Jen Easterly. After a year and a half at CISA, he returned to Meta, now assuming the position of "Head of Governance Insights." For those who may have lost track, here's the Kristopher Rose timeline: CIA -> ODNI -> CIA -> ODNI -> META -> CISA -> META. Are you beginning to see the bigger picture? Two CIA operatives who contributed to writing PDBs for Trump have joined Meta. Berman, in charge of censorship ahead of the 2020 Election, and Rose, the one who played a role in suspending Trump from the platform. http://linkedin.com/in/kristopherrose/…

@NameRedacted247 - Name Redacted

5. Michael Marando: After 8 years at DOJ, including service for the Mueller special counsel, he joined Meta in 2020, assuming the role of Director of Content Policy, working alongside Aaron Berman. Among his accomplishments, Marando proudly highlights his involvement in prosecuting @RogerJStoneJr As Director of Content Policy, he focuses on misinformation & AI. Notably, he takes credit for contributing to the development of Meta's COVID-19 misinformation policy http://linkedin.com/in/michael-marando-b544278/…

@NameRedacted247 - Name Redacted

6. Mike Torrey is a Threat Intelligence professional at Meta. Before joining Meta in 2018, Mike dedicated 3 years to the NSA and 9 years to the CIA. While at Meta, he played a key role in co-authoring a report, "The State of Influence Operations 2017-2020," in response to "foreign interference by Russian actors in 2016." Notably, in this report, he labeled "QAnon, @RogerJStoneJr & @jairbolsonaro office" as "influence operations." http://linkedin.com/in/mike-torrey-01658b14a/… The report was a collective effort, and among the other authors were Nathaniel Gleicher, Ben Nimmo, David Agranovich, and others. Full report here- https://about.fb.com/wp-content/uploads/2021/05/IO-Threat-Report-May-20-2021.pdf

@NameRedacted247 - Name Redacted

7. Mike Torrey is currently spearheading the recruitment efforts for the Influence Operations investigations team at Meta. For those interested in joining the Meta Influence Operations department, a minimum of 3 years of work experience in Government, Intelligence Organization, etc., is required. Depending on your level of experience, the starting pay ranges from $173k-$241k per year, complemented by bonuses, equity, and benefits. If you're eager to apply, you can do so by following this link: https://www.metacareers.com/jobs/588626383349158

Security Engineer Investigator - Influence Operations Meta's mission is to give people the power to build community and bring the world closer together. Together, we can help people build stronger communities - join us. metacareers.com

@NameRedacted247 - Name Redacted

8. Ben Nimmo & Nathaniel Gleicher were co-authors of "The State of Influence Operations 2017-2020" alongside Mike Torrey. Ben Nimmo currently serves as the Global Threat Intel Lead at Meta, in addition to being a Co-Founder of the Atlantic Council's DFRLab and a former Head of Investigations at Graphika. Nathaniel Gleicher joined Meta in 2018 as Director & Head of Security Policy. Prior to this role, he had 5 years of experience at DOJ and 2 years in the Obama White House National Security Council. - linkedin.com/in/nathaniel-g… Again- big thanks to @MikeBenzCyber for doing work on these people!

@NameRedacted247 - Name Redacted

9. David Agranovich, who assumed the role of Director of Global Threat Disruption at Meta in 2018, is another co-author of "The State of Influence Operations 2017-2020." Before his tenure at Meta, David had an impressive background: World Economic Forum- 2016-2021: Served as "Global Shaper." DOD- 2012-2018: Analyst National Security Council- 2017-2018: Director for Intelligence linkedin.com/in/dagranovich/

@NameRedacted247 - Name Redacted

10. Shawn Turskey joined Meta in 2022 as part of the Integrity Investigations & Intelligence team, all the while holding a concurrent role as an Advisory Council Member at the Atlantic Council since February 2023. His resume includes significant experience: From 1987 to 2014, he served as a Signals Intelligence Analyst at the NSA for an impressive 27 years. In 2014, he assumed the position of Executive Director and 3rd in Command at US Cyber Command, leading 12,000 personnel and overseeing a budget of $700 million. In 2018, Shawn returned to the NSA as the NSA Representative to the DHS for Cyber & Election Security, and later joined Meta in 2022. linkedin.com/in/shawnturske…

@NameRedacted247 - Name Redacted

11. Susan M. joined Meta in 2018 as a Threat Intelligence Program Manager. Before joining Meta, she held significant roles in the FBI & CIA: FBI: 1996 to 2007 (11 years). She served as a Special Agent, focusing on terrorism and counterintelligence. By the end of 2007, she assumed the role of Operational Liaison with the CIA Counterterrorism Center. CIA: 2007-2008 (1 year & 7 months). Susan served as Chief of Targeting for CIA Counterterrorism. FBI: 2009-2018 (9 years). Upon her return to the FBI, Susan was the Head of the FBI office in Islamabad. Naturally, after a remarkable 23-year tenure in the Intelligence Community, Meta became her new professional home. linkedin.com/in/susan-m-171…

@NameRedacted247 - Name Redacted

12. Deborah Berman: With an illustrious 9-year career at the CIA, she recently made her transition to Meta's Trust & Safety department in 2022. In a LinkedIn post announcing her departure from the CIA, she receives an unusual "welcome to the other side" reply from Anna Hadzic, another former CIA operative who currently works at InterWorks. Additionally, in a separate post, she takes a subtle dig at @elonmusk's leadership values. linkedin.com/in/deborah-b-2…

@NameRedacted247 - Name Redacted

13. Maddison Lucier: Prior to joining Meta in 2021 in the Trust & Safety department as a Risk Assessment Investigator, Maddison served 7 years with CIA, followed by 4 years at the State Department as a Foreign Service Officer in Tunisia. linkedin.com/in/maddison-lu… https://t.co/ZR56Khl3hW

@NameRedacted247 - Name Redacted

14. Scott Stern, who served as Chief of Targeting for the CIA for 7 and a half years, joined Meta in 2020 as Senior Manager of Trust & Safety. In his current role, he is responsible for developing AI/ML algorithms that address issues related to fraud & scams, advertiser & brand harm, child safety, "misinformation," terrorism, bullying, and harassment. linkedin.com/in/scottbstern/

@NameRedacted247 - Name Redacted

15. Jeff Lazarus joined Meta's Trust & Safety team in 2022, following an impressive career path: CIA: Served for 5 years Google: Worked for 4 years as a Senior Policy Advisor in Trust & Safety Apple: Spent 1 year as a Senior Trust & Safety Advisor linkedin.com/in/jeff-lazaru… https://t.co/43Vpcba2Vr

@NameRedacted247 - Name Redacted

16. Jonathan Lee joined WhatsApp in 2019, initially serving as a Director of Global Public Policy. Since 2022, he has taken on the role of Head of Global Public Policy, alongside Courtney Cooper as Director. Before his tenure at WhatsApp, Jonathan held significant positions in various government agencies, including: 4 years at DOD (2009-2013) - Assistant to Deputy Secretary of Defense 2 years at National Security Council, Obama White House (2013-2015) - Director for Human Rights 2 years at DHS (2015-2017) - Deputy Chief of Staff linkedin.com/in/jonathan-l-…

@NameRedacted247 - Name Redacted

17. Courtney Cooper: Since 2022, she has held the esteemed position of Director of Global Public Policy for WhatsApp. Her impressive qualifications include: 2007-08: Booz Allen Hamilton 2008-2015: CIA (7 years) 2015-2017: National Security Council White House- Director for Afghanistan (2 years) 2018-2022: CIA (4 years) 2017-present: Member of the Council on Foreign Relations Counting a remarkable "16 years of federal service," Courtney's career spans 18 countries and 4 continents, encompassing experiences in a warzone and serving at the White House. Despite this extensive journey, she eagerly embraces her new role as a top director at WhatsApp. Upon joining WhatsApp, she receives a warm welcome from David Martinez, to which she replies, "Excited that our professional paths will overlap again!" linkedin.com/in/courtney-co…

@NameRedacted247 - Name Redacted

18. David Martinez: A valuable addition to Meta since 2018, he currently serves as the Head of Latin America Strategic Response Policy. David's qualifications are nothing short of impressive: 2008: Booz Allen Hamilton (1 year) 2010-2018: State Department, including 2 years as a Political Officer in Jerusalem, 1 year in Baghdad, and 2 years in Bogota 2018-2020: Atlantic Council fellow 2017-2023: Council on Foreign Relations linkedin.com/in/dmart/

@NameRedacted247 - Name Redacted

19. Joseph Schadler: After a remarkable 24-year career at the FBI, he made the move to WhatsApp in 2022, assuming the role of Trust & Safety Manager. linkedin.com/in/josephschad… https://t.co/3s77mH9pZX

@NameRedacted247 - Name Redacted

20. Aleah Houze joined Meta in 2020 as a Product Policy Manager in Trust & Safety, collaborating with Aaron Berman on matters related to "politics & elections." Prior to her role at Meta, she dedicated 7 years of service at the NSA as a Liaison Officer to the UK and served as a "Subject Matter Expert." linkedin.com/in/aleah-houze/

@NameRedacted247 - Name Redacted

21. Sadaf Khan joined Meta in 2019, where she plays a vital role in Content Policy in Trust & Safety. Concurrently, she is a "team member" at the Council on Foreign Relations. Before her tenure at Meta, Sadaf served as a Democrat Staff Director for 11 years. During this time, she held a significant position as the lead advisor to the Subcommittee Ranking Member, providing guidance on various foreign policy and national security matters. Notably, Sadaf asserts that she possesses an active Top Secret clearance. linkedin.com/in/sjkhan/

@NameRedacted247 - Name Redacted

22. Mike Bradow joined Meta in 2020 as Head of Fact-Checking Policy. He works alongside Aaron Berman to tackle misinformation. What are Mike’s qualifications for this position? linkedin.com/in/mikebradow/ Mike spent 10 years at @USAID as Deputy Director in the Office of Policy. Recently @RepMattGaetz has called for USAID to be abolished 👇🏼

@NameRedacted247 - Name Redacted

23. Blake P. joined Meta in 2022 as Public Policy Manager in Content Regulation. Her qualifications include 12 years at the State Department, where her last role was Internet Policy Advisor and Director for Digital Freedom. linkedin.com/in/blake-p-216… https://t.co/gB7ULqkXR7

@NameRedacted247 - Name Redacted

24. Jennifer A. joined Meta in 2019 and currently holds the position of Global Security Analysis Program Lead. Before joining Meta, she gained valuable experiences in the following government roles: 6 years at the FBI 4 years as a Foreign Affairs Officer at the State Department linkedin.com/in/jennifer-a-…

@NameRedacted247 - Name Redacted

25. Sam Aronson joined Meta in 2022 as a Global Policy Manager for Content Policy. His qualifications for this role include: DOD- 1 year (2014-2015)- Political Military Affairs Analyst State Department- 7 years (2015-2022)- After serving as a Special Agent for 2 years in New York, he was transferred to Niger serving in various roles. After 5 years in Niger, he spent less than a year in Afghanistan as a Political & Consular Officer. linkedin.com/in/sam-aronson…

@NameRedacted247 - Name Redacted

26. Robert Flaim joined Meta in 2020 as the Head of Strategic Programs. Before his tenure at Meta, he had a notable career path: 10 years as a JAG Corps Attorney for DOD Subsequently, a 20-year career at the FBI linkedin.com/in/bobbyflaim/ https://t.co/lRTIyxlMlr

@NameRedacted247 - Name Redacted

27. Cynthia Deitle joined Meta in 2021 as Director, Associate General Counsel. Prior to her role at Meta, she had an illustrious 19-year career at the FBI. As the Director, Associate General Counsel of the Civil Rights Team at Meta, Cynthia's work centers on the intersection of law enforcement and civil rights, with a particular focus on hate crimes, hate speech, investigations, and surveillance. linkedin.com/in/cynthia-d-3…

@NameRedacted247 - Name Redacted

28. Matthew L. joined Meta in 2022. He is a Senior Trust & Safety Manager in charge of Content Policy. Prior to joining Meta, Matthew joined Google in 2018 as Senior Policy Specialist in charge of Google Ads’ “2020 US Election Integrity efforts” Prior to joining Google, Matthew worked at the State Department for 8 years focused in Africa. He also describes himself as a “Solver of hard problems” linkedin.com/in/matthew-l-a…

@NameRedacted247 - Name Redacted

29. Steve Goldman joined Meta in 2022, assuming the role of Acute Issue Management, Response Manager. Before joining Meta, Steve had an extensive 26-year career at the FBI, where he served as a Special Agent in Charge in Portland. linkedin.com/in/steve-goldm… https://t.co/CCjRUMe3QD

@NameRedacted247 - Name Redacted

30. Mike D. joined Meta in 2018, assuming the role of Manager of Cyber Threat Intelligence. Prior to joining Meta, Mike served as a Supervisory Special Agent with the FBI for 13 years. linkedin.com/in/mike-d-48b1… https://t.co/5e2vhGWHje

@NameRedacted247 - Name Redacted

31. Leo M. joined Meta in 2022 as a Threat Investigator, specializing in Dangerous Organizations. Prior to his position at Meta, he served as a Special Agent at the FBI for 7 years and worked as an Analyst at the DOD for 4 years. linkedin.com/in/leo-m-35956… https://t.co/dqFK4qbpPN

@NameRedacted247 - Name Redacted

32. Shaarik Zafar joined Meta in 2018 in the US Public Policy department. He provides advice and counsel to cross-functional partners, driving development on issues including youth safety, hate speech, terrorism, misinformation, and political advertising. Prior to joining Meta, his extensive career includes: DOJ: 2 years (2004-2006) as Special Counsel for post-9/11 discrimination DHS: 4 years (2006-2009) as a Senior Policy Advisor Obama White House: 2 years (2011-2013) as Director of the Global Engagement Directorate & advisor to President Obama ODNI: 5 years (2009-2014) as Deputy Chief, National Counterterrorism Center State Department: 3 years (2014-2017) as Special Representative to Muslim Communities & advisor to John Kerry ODNI: 2 and a half years (2016-2018) as Program Mission Manager linkedin.com/in/shaarikzafa…

@NameRedacted247 - Name Redacted

33. Amanda Lewis joined Meta in 2020 as the Global Systems Manager. Her prior experience includes 4 years at DOD as an Operations Analyst and 10 years as a Senior Analyst at DHS - linkedin.com/in/amanda-lewi… https://t.co/MnCB0biqSE

@NameRedacted247 - Name Redacted

34. Mary Angroola joined Meta in 2020. She is a Senior Manager in Trust & Safety. Prior to Meta: DOD- 7 years (2010-2017)- Risk Mitigation Lockheed Martin – 3 years (2017-2020) linkedin.com/in/mary-angroo… https://t.co/80kFyBEqLy

@NameRedacted247 - Name Redacted

35. Rob Abrams joined Meta in 2019 after 16 years at DHS. He is the Head of Law Enforcement Outreach for SE Asia & works in Singapore. Notable job duties at Meta include: "Consults with governments to shape novel legislation affecting their relationships with social media companies & the tech industry" "Manage national security relationships around election integrity" "Testify in Legislatures on matters ranging from response to terrorism, to foreign interference in elections, to fraud and scams, to child exploitation and human trafficking" My personal view is that while Rob's efforts to expose child exploitation and human trafficking through Meta's platform are commendable and deserving of credit, some concerns arise regarding his consultations with foreign governments on election-related matters and his involvement in crafting legislation that may favor social media firms. These aspects raise questions about potential biases and the impact on fair and transparent democratic processes. linkedin.com/in/abramsrj/

@NameRedacted247 - Name Redacted

36. Tom Dziadkowiec joined Meta in 2022 as Senior Manager Trust & Safety. Prior to joining Meta, he worked at the State Department for 5 years as Program Manager linkedin.com/in/tom-dziadko… https://t.co/ns8C2CiIm1

@NameRedacted247 - Name Redacted

37. Abbas Ravjan: Prior to joining Meta in 2019 as Privacy & Public Policy Senior Manager, Abbas was a member of the Biden-Harris Transition Team. He developed & drafted policy memos & draft Executive Orders that were signed by President Biden on Day One. Prior to serving on the Biden-Harris transition team, he spent 6 years at the State Department. linkedin.com/in/abbasravjan…

@NameRedacted247 - Name Redacted

38. Kevin LeClair joined Meta in 2020 as "Head of Operations." Prior to Meta: 4 years State Department- Special Agent 3 years Department of Treasury- Special Agent 1 year US Attorney's Office- Special Agent In his role at Meta, he describes himself as a Problem Solver, Business Enabler & Community Builder. linkedin.com/in/kevinjlecla…

@NameRedacted247 - Name Redacted

39. Ryan Kagin: Joined Meta in 2018. He's an "Investigator." Prior to Meta, Ryan was an Analyst at the DOD for over 9 years. linkedin.com/in/kagin/ https://t.co/VVT6VDRfa3

@NameRedacted247 - Name Redacted

40. Sandy Kunvatanagarn- After serving the State Department as a Diplomat for 7 years, she joined Meta in 2019 as a Public Policy Manager. She claims to have led the policy team’s COVID-19 response in Asia linkedin.com/in/sandy-kunva… https://t.co/AFgRByk1mA

@NameRedacted247 - Name Redacted

41. Alex Kokon- Joined Meta in 2020 as Head of Trust & Safety- Africa, Middle East & Turkey- Global Response. He describes himself as an "Experienced tech manager." Prior to taking his talents to Meta, Alex worked at the State Department for 10 years. He worked at numerous US Embassies including Cairo, Riyadh, Bogota, Baghdad, Yemen, UAE, Dubai & Kabul. linkedin.com/in/alexkokon/

@NameRedacted247 - Name Redacted

42. Jonathan Carpenter joined WhatsApp in 2019 as a Product Policy Director. Prior to that, he spent 6 years at the State Department stationed in Afghanistan & Pakistan as a Senior Economic Advisor & Deputy Special Representative. linkedin.com/in/jjcarp/ https://t.co/s7B4v0BrLn

@NameRedacted247 - Name Redacted

43. Andrea Wells- Joined Meta in 2018. She’s currently a Manager of Global Security Threat Management. Her prior experience is 12 years at State Department as a Special Agent linkedin.com/in/andreamwell… https://t.co/3xZO1zlAOf

@NameRedacted247 - Name Redacted

44. I've made my best effort to narrow down the results to Meta employees in Content Moderation, Public Policy, and Trust & Safety positions. The initial search yielded close to 400 names, but I excluded anyone with little to no relevant experience, such as interns or individuals with significant gaps in employment between their time in the Intelligence Community and joining Meta. Additionally, some employees' LinkedIn profiles are private, so the overall number is a rough estimate. Furthermore, it's worth noting that some employees have been laid off since my December 2022 thread, while others have either deleted their LinkedIn profiles or made them private. END

Saved - July 31, 2023 at 4:36 AM

@NameRedacted247 - Name Redacted

20. Here is the full video, which is still available on YouTube. Moderated by CIA Renee DiResta from Stanford Internet Observatory. Includes Brian Clarke from Twitter Trust & Safety, Dr. Anne Merritt from Google & Aaron Berman from Facebook https://youtube.com/watch?v=hB_YNbnt8x4&t=90s…

Video Transcript AI Summary
The panel discussion focuses on how major platforms like Google, Twitter, and Facebook are addressing false and misleading narratives surrounding COVID-19. The panelists discuss their strategies for content moderation, including removing harmful misinformation, reducing the distribution of certain content, and providing authoritative information to users. They also address the challenges of handling misinformation during a pandemic when information is constantly evolving. The panelists emphasize the importance of partnerships with health authorities and fact-checking organizations. They highlight the use of AI and human review in content moderation and the need for flexibility and adaptability in policies and systems. The panel concludes by discussing the balance between free expression and safety on social media platforms.
Full Transcript
Speaker 0: Alright. We are live. Hi, everybody. It is great to be here today. I am thrilled to be moderating this panel, and we are going to get started. I'm Renee DiResta. I'm the research manager at Stanford Internet Observatory. And I'm gonna start by briefly introducing our panelists, and then we will dive right into a conversation. So we have Anne Merritt. Doctor Merritt is a product manager at Google Search where she focuses on health and information quality. And then we have Brian Clarke. And Brian is a senior manager in trust and safety focused on misinformation at Twitter. And then we have Aaron Berman, who's a product policy manager for misinformation at Facebook. So we have three platforms represented today, and I'm really looking forward to just diving right in and talking about their work to moderate and mitigate some of the false and misleading narratives that we've seen spreading around COVID. Particularly given, I think, some really unique challenges that have come up earlier in this conference, but also a little bit more broadly about the challenge of handling moderation around misinformation at a time when we're still trying to learn all the facts. So a lot of really interesting questions to ask and discussions to have here on ideas like borderline content and how to think about fact checking and moderation in a time of incomplete consensus where we're still trying to sort out the facts. So I wanted to just start right in. For those in the audience who I think maybe are not immersed in the nuances of platform moderation day in, day out, there are two policies I just wanted to kinda quickly call attention to and have the relevant platform representative explain. And that is "your money or your life," which I think is a really interesting framework to kind of couch this conversation. So we'll have Doctor Merritt from Google handle that one.
And then "remove, reduce, inform," which I think is actually kind of a foundational way of describing the options that platforms have when they're trying to think about whether to take down, or throttle, or put up interstitials over content to help users make sense of what's going on in the world and what the most accurate information is. So maybe we can actually start with remove, reduce, and inform, because it just gets to that moderation framework. And we can ask our friend from Facebook to start in with that one. Speaker 1: Sure. Thanks, Renee. And thanks for having me. I'm excited to be part of this conversation and this important topic. Yeah. So remove, reduce, inform: in general, at Facebook, that's a framework we think about for general content moderation, where we could remove content completely from the platform, reduce its distribution in some way such that people are less likely to see it in their feeds, or inform users in some way, such as by adding labels or warning screens or the like. I can explain briefly how we apply that philosophy to COVID-19 misinformation in particular. And I'm actually gonna describe it backwards. I'll start with inform first and then remove and reduce. So our three-pronged strategy here for COVID misinformation is, first, promoting vaccines and authoritative information. That's the inform pillar. Also removing harmful misinformation, and then addressing borderline content that could lead to vaccine hesitancy, which falls under the reduce idea. On the first part, promoting vaccines and authoritative information: Facebook obviously has quite a large user base, so we've been able to direct more than 2 billion people worldwide to expert health resources through our coronavirus misinformation, excuse me, our coronavirus information center, which you can find in your Facebook app if you use Facebook.
And just for some other examples of how we're informing people and getting authoritative information to people: we help people find vaccine appointments in their areas through messages and News Feed. We help people get questions answered not only through the COVID information center, but also through ad impressions that we support partners on, such as facts-about-COVID ad campaigns. We enable social norming through profile frames. So in the United States, and I'm sure many people here have used this, more than 50% of Facebook users have seen someone that they follow use a profile frame in feed. And then in addition, on informing, we're partnering with other organizations to reach low-vaccination-rate communities, such as with campaigns featuring Black doctors and nurses, or Spanish-language campaigns. And we are using data that we're collecting in partnerships with academic institutions to judge the impact of this over time. I'll also note, on the inform side, when it comes to users who have interacted with false or misleading claims, we also inform them in those situations. So just for some examples, we add labels on posts about COVID-19 and vaccines to show additional information from the WHO. And when we do remove misinformation from the platform, which we'll talk about in a second, we built a tool so that we notify users who saw that misinformation before we removed it, so that they have access to the authoritative information. So that, in a large bucket, is part of our inform work here. On remove, for COVID-19, we do have a policy to remove harmful misinformation related to this topic. Specifically, we remove content that has been debunked as false and leading to physical harm by public health experts related to the pandemic. So these are things like fake preventative measures, claims the virus doesn't exist, and this also includes a variety of claims about vaccines.
The idea here is to remove misinformation that could lead to imminent physical harm, by somebody maybe not receiving appropriate treatment or exposing themselves to the disease. So on vaccines specifically, in December last year, we started removing false claims about the vaccine, again, that fall within this category, and we've expanded the list of claims we remove about vaccines in general earlier this year in consultation with health experts, and we're continuing to make updates to these policies as trends emerge, including just this week, in fact. And we also remove pages, groups, and Instagram accounts that repeatedly violate these policies, to get at those entities that might repeatedly spread this content. And then finally, the third part of the strategy: addressing borderline content which could lead to vaccine hesitancy, which falls into the reduce area. So we do reduce the distribution of certain content about vaccines that doesn't otherwise violate our policies. And our approach here is really grounded in guidance that we've gotten from health experts, who've emphasized the idea that overcoming vaccine hesitancy really depends on people being able to ask legitimate questions about safety and efficacy and get those questions answered by trusted sources. But at the same time, we also realize that certain of this content could lead to hesitancy, so we reduce its distribution. And similarly, for content that, again, does not violate our policies, we also work with a global network of more than 80 fact-checking organizations around the world, in many languages. With these partners, when they find posts about COVID or vaccines that they rate as false, we reduce their distribution. We also, and this is part of our inform strategy, add warning labels, and we make it less likely that people will see them in feed.
And so that's the holistic strategy that we have of providing authoritative information, or inform; removing harmful misinformation; and addressing this borderline content, which adds up to our whole strategy that ideally gets to better outcomes overall. Speaker 0: Thank you. I wanna come back to a couple of things you said in there, but I'd love to have Anne perhaps introduce the idea of your money or your life. And then maybe, Brian, to just kinda round it out after that, perhaps we could have you give Twitter's general rubric of how you guys are thinking about this. Speaker 2: That sounds great, Renee. Thanks. Very excited to be here. So delivering a high-quality search experience is core to what makes Google Search so helpful. And from the early days, understanding the quality of web content has been incredibly important to us. We have three key pillars when it comes to our approach to information quality in Search, and the first one will touch on what Renee mentioned, YMYL. So first, we fundamentally design our ranking systems to identify information that people find useful and reliable. We recognize, however, that there are certain topics where quality is particularly important. And we call those topics at Google YMYL, or "your money or your life." And these topics include a variety of subcategories that essentially encompass any web page that includes content that can affect someone's health, happiness, safety, or financial stability. So it includes things like shopping or financial transactions or information, legal matters, information about national government processes or policies, and, most importantly for this conference and in the context of COVID, health and crisis-type situations. So when it comes to our ranking algorithms and our approach to these topics, we place an even greater emphasis on factors related to expertise and trustworthiness.
We've learned that sites that demonstrate authoritativeness on these topics are much less likely to publish false or misleading information. So we try to build our systems to identify signals of those characteristics so that we can continue to provide the most reliable information. Second, to complement the efforts in our ranking systems, we've developed a number of search features that help you make sense of all the information you find online, but also provide direct access to information from health authorities in the case of COVID. We worked very closely with the World Health Organization, and globally with national public health authorities, to surface their content front and center on the search page in a more organized fashion. And then lastly, we do have specific medical policies for what can appear in some search features, to make sure that what we're showing is high quality, medically accurate, and helpful. So for these features, we again first and foremost design our automated ranking systems to show helpful content, but our systems aren't always perfect, and if they do fail, our enforcement team does take actions in accordance with those policies. And so for these specific Google features, like knowledge panels and featured snippets, which often show up in a more prominent position on the search page, we don't allow content that contradicts or runs contrary to scientific or medical consensus and evidence-based practices. So with these three approaches, we're able to continue to improve Search and really try to raise the bar on quality to deliver a trusted experience for people around the world. Since the COVID-19 outbreak, teams across Google have worked to provide quality information to help keep people safe and to provide public health scientists and medical professionals with the tools to combat the pandemic.
So we've launched more than 200 new products, features, and initiatives, and we've pledged over a billion dollars to assist our users, customers, and partners around the world. In Search in particular, going back to that features aspect, we've introduced a comprehensive experience for COVID-19 that provides easy access to information from health authorities alongside new data and visualizations. And we're continuing to iterate and build on that experience over time so that people can easily navigate to the resources that they need. For vaccines, we've also updated a feature which surfaces a list of authorized vaccines, vaccine statistics, and more, in users' locations, in response to searches for specific information about COVID-19 vaccines. And then lastly, when it comes to misinformation on Search, we continue to elevate the work of fact checkers in Google Search, as well as Google News and Images. We signal fact-check articles in our results via dedicated tags and rich snippets that make it easy for users to understand. So that's just a bit of an overview. Thanks again, Renee. Speaker 3: Alright. I'll talk a little bit about our approach to COVID-19 for Twitter. So, first and foremost, we wanna protect the integrity of the public conversation and ensure that Twitter is a place where folks can be exposed to different perspectives and engage in healthy discourse. I think our policy remediation options give us the flexibility to address a wide range of misinformation and the associated harms. And with a critical mass of, you know, expert organizations, government officials and accounts, and health professionals, our goal on Twitter is to amplify that authoritative health inform, sorry, amplify authoritative health information as much as possible. So we have a wide range of enforcement options that give us that flexibility to be proportionate in our approach to combating COVID-19 misinfo.
The first thing is, we remove content with the highest propensity for harm and content that may invoke deliberate conspiracy theories related to COVID-19 and COVID-19 vaccines. We also label misleading content and provide users with credible and authoritative information through curated moments. Some of the topics we cover with our curated moments are vaccine safety, vaccine science, preventative measures, and unauthorized and unapproved treatments. With the labels, we also may prevent the Tweet from being recommended, and prevent users from engaging with the Tweet by disabling replies, Retweets, and likes. I think that allows us to mitigate the spread while also providing those who are exposed to it with authoritative information through our labels. In addition, we provide warning interstitials for content that may contain misleading information off-platform. And lastly, we implemented a strike policy to address users who are repeat offenders of the COVID-19 misinformation policy. Each strike carries an escalated enforcement action, with the final strike leading to permanent suspension. So that's another way of giving us the flexibility to address potential harms on the platform. Speaker 0: I'd love to focus in on this idea of harm, and this is a question that any or all of you can answer. How do you determine what is a harm? How are you defining that in the context of COVID? And particularly, over what timeline are you thinking about that? Speaker 1: I can weigh in from Facebook. As I mentioned for our remove policies, when we remove content from the platform, we're basing this on a policy that we have to remove misinformation that leads to imminent physical harm. So that's the standard we're looking at, and that's when we are doing these consultations with health experts like the WHO. 
We're getting their feedback: is the content debunked, is it false, and what is the harm that is likely to come from it? That's the standard we use for removing misinformation. Speaker 0: And on that subject, we've seen so many different wild false cures. I remember when hydroxychloroquine became a thing, and I followed that in our work at Stanford. We looked at where that narrative originated, where these ideas about hydroxychloroquine came from. It actually came from Southeast Asia and then parts of Africa that had some familiarity with the drug and thought perhaps it was going to be useful for combating COVID. It didn't really make it into the American social media conversation until March, when the president, President Trump at the time, started really talking about it. One of the things that was remarkable about that was the way this piece of information, this theory, made its way around the world. Over time, even as researchers were finding that this was not going to be an effective treatment, and that initial hope was not borne out, it was becoming a focus of hope in one country even as the places where it had originated were beginning to move away from it, as the scientific consensus began to show that it was ineffective. And then we saw that happen again, most recently with ivermectin, but I feel like there have been three or four different ones of these. I'm curious how you all think about moderation in that environment, particularly when you have this incomplete consensus and these narratives are global, and what is seen as prevailing opinion from one group of scientists in one place is not necessarily quite yet solidified at a global level. Speaker 3: Yeah. I'll jump in here. 
I think one of the ways we approach it from Twitter's standpoint is really leaning on our curation team, who are able to monitor the conversation on the platform in real time and track where the discussion is going, so that we have a good sense of the different types of issues around it and where they're traveling, and a better sense, from a policy perspective, of where to start shifting our resources and where to start understanding the patterns associated with it. We have a huge benefit in that we have such a powerful team from across the globe who can capture that context in real time as it unfolds. Speaker 2: I can jump in as well on this one, Renee. I think it's been really interesting, looking at this both from Google Search's perspective and as a clinician. We've seen the evidence and the guidance change so much over the last 18 months. And not only that, but what we've realized is that when you look at this globally, not only do national public health authorities differ in their guidance from one another and from the World Health Organization over time, but even at the more local levels the guidance is often different. So from the Search side, we do have this medical topics policy that says when something goes against consensus, we take action. But what about all of this other area? COVID kind of threw us for a loop because the information is changing, we don't have all the information, and even the national public health authorities are often trying to create new guidance based on of-the-moment, literally of-the-minute, research. 
So at least for us, I think it's really been about deepening our partnerships with these authorities and working in-country to really try to understand what those different perspectives are. But it's certainly presented an unprecedented challenge when it comes to medical information, because it really does force us to ask that question: what is medical misinformation? When it's evolving so rapidly, when do we actually call something far enough along the spectrum that we're comfortable saying it fits into that category? So it's been challenging. Speaker 3: [crosstalk] Speaker 0: Oh, go ahead. Speaker 3: I also wanted to jump in and say, to the point about partnerships, that we've established a partnership with AP and Reuters exactly for this type of reason, because we'll be able to scale that monitoring of the conversation. I think it will provide context on different surfaces, whether it's Trends, the Explore page, or prompts in our misinformation labels. So I think that's been a core part of our ability to scale this as well. Speaker 0: One of the things I'd like to raise while we have representatives of all three platforms in one place is this idea of misinformation as networked. Take, for example, a video that someone makes and posts to YouTube or Facebook. Let's use the example of "Plandemic" to anchor it in a real case: a highly misleading piece of content. It took fact-checkers about two days to go through the full half hour of material. In that time, it got eight to twelve million views on YouTube and went viral on Facebook, again very, very quickly. We did some work looking at the way it traversed different groups, its hops to Twitter. 
The attempts to take it down after the fact, as it was found to be misinformation, precipitated a whole second wave of it being reposted to platforms like Rumble and a few of the other smaller video-sharing sites. How do we think about the way in which this blankets the ecosystem? And do you all, at your respective companies, think about those hops to other platforms, and what impact, if any, does that have on how you choose to moderate? Speaker 1: I can try to start here. The question of hops to other platforms is an interesting one, given that our platforms are different. We're all independent companies, and each is unique in some ways, so we sometimes have slightly different policies that meet those unique challenges. I will say, on the question of content that can spread virally: this is why we have the policies in place that we do, where we can remove content if we find it, and we have AI systems that, once we detect something we can remove, help us scale the impact of that. Similarly, we have AI systems that can help us predict content that we might later determine needs to be removed, or predict content that we can send to our fact-checking partners around the world, who can then rate content so we reduce its distribution. It's always going to be the case that no systems are perfect, and there will be misses, but we're continually updating these over time. And I think the point that Anne mentioned earlier is a really important one: this is in the context of a pandemic. It's really unprecedented in terms of how quickly the guidance and information is changing, and so we're updating our systems and our policies over time to account for that. 
But because of that, we also look, in the end, at outcomes. At Facebook, what we're looking at here, and I don't know if I mentioned it earlier, is that we are running what we think is actually the largest global health survey ever, in partnership with some academic institutions. And we've seen encouraging trends in vaccine acceptance, at least in the US and a few other countries worldwide. So regardless of the specific content issues that can come up, we're focused on that overall outcome metric, and on our systems layering up to improve the outcomes over time. Speaker 0: Sorry, I was muted. I think it's a really significant challenge that you all are facing. The pandemic has really shown the front lines of the moderation question, particularly for the reasons we've discussed: newly emerging consensus, the global nature of it, the amount of attention that people are paying to it. I know that we're going to go to Q&A in just a minute, but I unfortunately have to drop off for a scheduling conflict, so Anne is going to moderate that. I'd love to hear: what learnings from the COVID-specific environment do you feel are more broadly applicable as you move forward? What key learnings have you found, if any? Speaker 3: I think when we're dealing with an issue that is a crisis global in nature, it's really important to make sure that you're understanding the different contexts in which these things play out. It's not just about, say, hydroxychloroquine or ivermectin; the way in which that manifests in each place is very different. Being able to capture that and really pinpoint those differences, I think, is the key to being able to do this at scale. 
And I think one of the things that I've learned and take from this particular crisis is the importance of having diverse teams and thinking about it in a diverse way, so that you are creating solutions that are tailor-made to each of the challenges that you may face across different contexts. Speaker 2: From the Search side, and I would imagine that Brian and Aaron, you've seen this as well, I do feel like COVID was a great stress test for our existing systems in many ways. It's pushed us and challenged us to grow, and really emphasized the importance of flexibility and adaptability as we continue to navigate the pandemic, and to try to understand vaccine access and vaccine misinformation and how this is playing out across the globe. From the Search standpoint, every day 15% of our searches are brand new searches that have never been done before. When you put that in the context of the pandemic, we really want to make sure that our automated systems are robust so that we can get people to the most relevant and reliable information. And when it comes to topics like the pandemic, which quickly gain attention, it will often take a day or two for fact-checkers to chime in and get content up. That's where developing the most robust systems out of the gate is really the best and most scalable approach. It's obviously not perfect, but the better we can do there, the greater the scale and impact we'll be able to drive. Speaker 0: Thank you so much for chatting with me. 
I know we're going to open it up to Q&A now. People can just put questions in the chat, and Anne is going to moderate this portion of the event. Thanks so much. Speaker 2: Great. Thank you, Renee. Speaker 3: Thank you. Speaker 2: Okay. Mia asks, where do you draw the line on harmful content? This is something that I think we touched on with one of Renee's earlier questions. Either of you, if you want to take it. Speaker 1: Hi. Oh, go ahead, Brian. Speaker 3: I was going to say that it's not a simple answer. It's quite complicated, because there are different types of harms. Earlier, Aaron mentioned the idea of imminent harm, and when we think about imminent harm, we take the approach of removal. But there are also informational harms, things that are misleading and could eventually lead to bad outcomes. So our approach with labeling is to mitigate that harm through authoritative and credible sourcing, so that we can address some of the potential for needing to take action on misinformation or bad content. Speaker 2: At Google, I would say, and I'm in Search but have some visibility across the rest of the company in terms of how we think about and define harmful: there are different ways, and with misinformation versus harm, I think there's overlap there, but those are not always entirely the same thing. So when it comes to our policies, you'll see that a lot of them cover harm but also cover other types of dangerous content or misinformation outside the harm vector. YouTube published their COVID-19 medical misinformation policy about a year back, and there you can actually see a list of claims, very specifically related to COVID, that they take action on. 
Google Ads has a policy that hinges quite heavily on harm, so if you look into that policy, you can actually see how they define it. There are different ways to approach this, but ultimately it's a multi-pronged approach where harm is one vector, and there are other vectors that we need to cover as well. Speaker 1: Yeah. That's important, and thanks, Anne, for mentioning that. I want to flag that at Facebook as well, we published under our Help Center, and we can find the link if needed, a full list of claims and policies that we have related to misinformation about COVID-19 that we remove for that imminent physical harm impact. We've also listed our policies on content that we reduce because it could lead to vaccine hesitancy, so we've laid these out online for those who are interested in learning more. Speaker 2: Great. Nelana asks: given that social platforms deal with massive amounts of data, how effective is AI at sifting through the vast majority of what constitutes misinformation? Speaker 3: Sorry to jump in here. From my perspective, it's never just AI by itself, never just machine learning by itself. It's a confluence, a relationship, and it's finding that right balance. There are certain contexts where human review is absolutely necessary and able to capture that nuance, but there are things, whether it's detection or things with common patterns, where AI and machine learning are incredible. And I think the human-in-the-loop process is absolutely critical to being able to scale this. So I would always think about it not as just one tool, one instrument, but as one tool in a larger toolbox. Speaker 1: Yeah. I fully agree with that. 
And I'll point to specific places where we've seen AI being particularly useful, as Brian just said: tuning our systems to try to predict content that might later qualify for removal or for fact-checking, and also scaling the impact when we do find content by finding duplicates, or very near exact duplicates, of that content across our platforms. We use our systems for that, and it can be very effective. Speaker 2: Yeah, and at Search, I'd say it's similar. It's a combination of AI with human raters, so we work with search quality raters who measure the quality of search results on an ongoing basis. They evaluate the results based on expertise, authoritativeness, and trustworthiness, that E-A-T that we really index heavily on for these YMYL topics. These ratings don't directly impact ranking, but they help us benchmark the quality of our results on a continuous basis to ensure they're meeting a high bar. So it's always a combination of the algorithms plus these ratings that help inform us as to how we're doing. Lola asks: how do you respond to the censorship narrative? I don't know if we need a little more context there, or is that enough to go on? Speaker 1: I can jump in. At least at Facebook, our general approach in content moderation, the core value, is to enable people to express themselves freely while protecting the safety of our user community, and our approach to COVID-19 is part and parcel of that balance. When we remove misinformation, as I've said before, the standard is that health experts have told us it is false, leads to imminent physical harm, and is a safety risk. 
And that's also why we have these other approaches, to balance the risks when we don't have that clear assessment from outside experts: where we can reduce the distribution of content that experts have told us could lead to vaccine hesitancy but that is important for people to engage with, and where we can work with our fact-checking partners to find content, reduce its distribution, and add labels, but not remove it. I think it's a broader question for social media and online communications platforms in general, that question about the right balance of all these policies, but we're aiming to really balance free expression and safety with the policies that we have. Speaker 3: Yeah. Just to piggyback off Aaron, I think it's important for us to allow discourse and debate, but we also want to create a safe environment for it to take place. So we have different levers to pull, different approaches that will allow content to be left up with context or authoritative information. But ultimately, I think it's upon social media companies to be as transparent as possible. On our Help Center pages at Twitter, we do that, to make sure that folks know exactly what harms we're looking to mitigate and what we're looking to address, especially in the COVID-19 space. Speaker 2: Thanks, Brian. Yeah, on Search, I think things are perhaps a little bit different as a search engine, given that we're indexing the web and surfacing the content that's available. 
Our general approach, especially for health and other important topics like this, is that we need to really lean in and show high-quality and authoritative results, both in features and also more prominently on the search page. That's really how we best address that. Diversity is also important to us, and we want to make sure that we're showing users not only high-quality results but also a diverse set of results, so that they can get the information they need from different types of sources. So that's a little bit from the Search side. Travis asks: can any of you discuss the "disinformation dozen" as a case study of how your platform has or should respond to individuals responsible for significant disinformation? Speaker 1: I'll jump in on this one unless others would like to. This refers to a report alleging that 12 specific individuals are responsible for a large amount of misinformation about COVID-19. I'll say that we've looked into this, and I'm not sure there is consensus that this is actually the case. Our look at this found that these 12 people were actually responsible for just 0.05% of all views of vaccine-related content on Facebook. This includes vaccine-related posts they've shared, whether true or false, and links associated with those people as well. So just to put that in context. All that said, we have taken action against these people: we've removed over three dozen Pages, groups, and Facebook or Instagram accounts linked to them, and we've placed restrictions on other accounts related to them as well. 
And this is part of our strategy of ensuring that if people are posting misinformation that violates our policies, we remove it, and we remove those Pages, accounts, or groups that repeatedly post that content. Speaker 3: Yeah. As far as the disinformation dozen, I think we've taken account-level enforcement action on a number of the accounts identified, and we review them in accordance with the Twitter Rules. If they violated our removal policy or our labeling policy, we enforced based on that. Several of the Tweets referenced predate our updated COVID-19 enforcement policy, and because we don't apply rules to violations retroactively, we did not take enforcement action on content posted prior to the expansion. But this is something we're still keeping an eye on, and it's still important to us. If we do come across content that violates our policy, we will enforce in accordance with our policies. Speaker 2: Great. Thanks. And on the Search side, I'm not sure specifically how Search performed for this particular case study. I would say that, again, first we lean heavily on our ranking algorithms, but knowing that there are gaps there, our policies across Google Search, Google News, YouTube, and our advertising products clearly outline types of behaviors that are prohibited. So for misrepresentation of ownership, or on Google News a primary purpose of impersonation, or other things like that, we do have explicit policies that we act on in those instances. Speaker 0: Hi, everyone. I just wanted to let you know that there are about five minutes left in the session, so now I'll just open it up to final remarks. Speaker 3: Okay. I can jump in here. 
I think one of the things I've learned from tackling COVID-19 misinformation at Twitter is the importance of trying to be innovative in the face of a crisis. Right now at Twitter, we have several different pilot programs: Birdwatch, which leverages the community to help with content moderation, and, on the way the design of our labels looks, our redesigned-label pilot. I think it shows the sort of holistic, multi-pronged approach that you have to take in these crises, trying new things to tackle a very, very complex issue. One of the things that I'm really excited about with Twitter is that we're listening to users and researchers, establishing those relationships, and meeting users where they're at, so we can help mitigate some of the harm and help provide credible information. One great example is our pilot user reporting flow. This is something that users have asked for for a while, so we're piloting it in the US, South Korea, and Australia. We'll be able to see what users flag as misinformation, which gives us an opportunity to zero in and take advantage of the user feedback. So for social media companies like us, we have to make sure that we're being innovative in the face of a crisis, especially a very complex one. Speaker 1: Yeah. I can just end by saying, first, I'm happy to be here, so I really appreciate the opportunity to participate on this panel. 
I think one point that several of us raised throughout, and that Brian just raised again, is flexibility: the situation on the ground keeps changing, and we need to stay up with that. That's a really important issue, and one where we have seen the need, and I think responded, to be flexible over time: to change our policies as the facts change and to update our systems. One example is just the word Delta as it relates to the coronavirus; it was not something that people would have thought about eight months ago, and now it's obviously a big topic of discussion. So that is one of my takeaways. We're going to continue to apply our policies in the three-pronged approach I've laid out, but make adjustments over time as we need to. Speaker 2: On the Google side, I think one of the things this pandemic really showed us was just how important all of these platforms are in a crisis like this for trying to get information out. We know that lots of our users search for health information all the time, but watching the escalation of COVID and vaccine queries over the last year, which you can see on Google Trends, you can see that in this environment people are really trying to get the most accurate, most up-to-date information they can to make decisions. So we really have an important role. As we look at this clinically, and as we look at this from a public health perspective, we really have an important role in this pandemic and in this crisis. And I think it's actually been a really good thing to see a lot of teams across our organization come together and work together on this. One thing I'll just highlight, since I had seen some of the comments in the conference chat, is a paper that was recently published by the National Academy of Medicine that you may want to look at. 
It was an expert panel that convened and published some guiding principles on identifying credible online health sources. The paper is called "Identifying Credible Sources of Health Information in Social Media: Principles and Attributes." It was published about a month ago, it's informed our thinking at Google and YouTube, and you may want to check it out, just to see where we're at in terms of thinking about what all of this means as we look ahead to the next crisis.
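Earlier in the discussion, the Twitter panelist describes a strike policy for repeat offenders: each violation of the COVID-19 misinformation policy adds a strike, each strike carries an escalated enforcement action, and the final strike leads to permanent suspension. A minimal sketch of that kind of enforcement ladder is below; the specific thresholds and action names are illustrative assumptions, not documented values from the transcript.

```python
from dataclasses import dataclass

# Ordered ladder mapping strike count -> enforcement action.
# These thresholds and labels are hypothetical, for illustration only.
ENFORCEMENT_LADDER = {
    1: "label_only",
    2: "12h_account_lock",
    3: "12h_account_lock",
    4: "7d_account_lock",
    5: "permanent_suspension",
}

@dataclass
class Account:
    handle: str
    strikes: int = 0
    suspended: bool = False

def record_violation(account: Account) -> str:
    """Record one policy violation and return the enforcement action taken."""
    if account.suspended:
        return "already_suspended"
    account.strikes += 1
    # Beyond the top of the ladder, every further strike is terminal.
    capped = min(account.strikes, max(ENFORCEMENT_LADDER))
    action = ENFORCEMENT_LADDER[capped]
    if action == "permanent_suspension":
        account.suspended = True
    return action
```

The escalation is monotone by construction: repeat offenders can only move up the ladder, which matches the "proportionate enforcement" framing the panelist describes.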
Saved - July 31, 2023 at 4:35 AM
reSee.it AI Summary
Aaron Berman, a former CIA member, joined Facebook in 2019. He led their Misinformation Policy department, representing Meta with various stakeholders, including intelligence agencies, governments, and media. Berman played a crucial role in combating misinformation during the 2020 US Election, COVID-19, Ukraine War, and global elections. He also highlighted Meta's efforts to counter COVID-19 misinformation. Berman's posts shed light on Meta's involvement in elections worldwide, including Brazil, Nigeria, Kenya, and the Philippines. Additionally, he discussed Meta's initiatives on climate change and attended conferences on fact-checking. Berman's presence at Meta raises concerns about the influence of intelligence operatives in social media.

@NameRedacted247 - Name Redacted

2. Aaron Berman spent over 17 years with the CIA before joining Facebook in 2019. He built their Misinformation Policy department and wrote most of the misinfo policy. Additionally, Berman says he 'represented Meta with external stakeholders.' This would include The White House, US Intel Agencies (FBI, CIA, DHS, CISA, etc.), foreign governments & intelligence, MSM, and more. He led the misinformation policy operation across all of Meta’s platforms for the 2020 US Election, COVID-19, the Ukraine War, and elections worldwide. http://linkedin.com/in/aarondberman/…

Video Transcript AI Summary
The speaker discusses Facebook's framework for content moderation, which includes removing, reducing, and informing users. They explain how this framework is applied to COVID-19 misinformation. The speaker highlights efforts to promote vaccines and authoritative information, remove harmful misinformation, and address borderline content that could lead to vaccine hesitancy. They mention various ways Facebook informs users, such as directing them to expert health resources, helping them find vaccine appointments, and partnering with organizations to reach low vaccination rate communities. The speaker also discusses the removal of debunked false claims and the reduction of certain content about vaccines. They emphasize the importance of providing authoritative information and addressing vaccine hesitancy.
Full Transcript
Speaker 0: Maybe we can actually start with remove, reduce, and inform, because it gets right to that moderation framework. We can ask our friend from Facebook to start in with that one. Speaker 1: Sure. Thanks, Renee, and thanks for having me. I'm excited to be part of this conversation on this important topic. So remove, reduce, inform: at Facebook, that's a framework we think about for general content moderation, where we can remove content completely from the platform, reduce its distribution in some way such that people are less likely to see it in their feeds, or inform users in some way, such as by adding labels or warning screens or the like. I can explain briefly how we apply that philosophy to COVID-19 misinformation in particular, and I'm actually going to describe it backwards: I'll start with inform first, and then remove and reduce. Our three-pronged strategy for COVID misinformation is, first, promoting vaccines and authoritative information, which is the inform pillar; also removing harmful misinformation; and then addressing borderline content that could lead to vaccine hesitancy, which falls under the reduce idea. On the first part, promoting vaccines and authoritative information: Facebook obviously has quite a large user base, so we've been able to direct more than two billion people worldwide to expert health resources through our coronavirus information center, which you can find in your Facebook app if you use Facebook. For some other examples of how, more specifically, we're informing people and getting authoritative information to them: we help people find vaccine appointments in their areas through messages and News Feed. 
We help people get questions answered not only through the COVID information center, but also through ad impressions that we support partners on, such as facts-about-COVID ad campaigns. We enable social norming through profile frames. So in the United States, and I'm sure many people here have used this, more than 50% of Facebook users have seen someone they follow use a profile frame in feed. And then in addition, on informing, we're partnering with other organizations to reach low-vaccination-rate communities, such as with campaigns featuring Black doctors, nurses, or Spanish-language campaigns. And we are using data that we're collecting in partnerships with academic institutions to judge the impact of this over time. I'll also note on the inform side, when it comes to users who have interacted with false or misleading claims, we also inform them in those situations. So just for some examples, we add labels on posts about COVID-19 and vaccines to show additional information from the WHO. And when we do remove misinformation from the platform, which we'll talk about in a second, we built a tool so that we notify users who saw that misinformation before we removed it, so that they have access to the authoritative information. So that, in a large bucket, is part of our inform work here. On remove for COVID-19, we do have a policy to remove harmful misinformation related to this topic. Specifically, we remove content that has been debunked as false and leading to physical harm by public health experts related to the pandemic. So these are things like fake preventative measures, claims the virus doesn't exist, or, this also includes a variety of claims about vaccines. The idea here is to remove misinformation that could lead to imminent physical harm by somebody maybe not receiving appropriate treatment or exposing themselves to the disease.
So on vaccines specifically, in December last year, we started removing false claims about the vaccine, again, that fall within this category. And we've expanded the list of claims we remove about vaccines in general earlier this year in consultation with health experts. And we're continuing to make updates to these policies as trends emerge, including just this week, in fact. And we also remove pages, groups, and Instagram accounts that repeatedly violate these policies, to get at those entities that might repeatedly spread this content. And then finally, the third part of the strategy, addressing borderline content which could lead to vaccine hesitancy, which falls into the reduce area. So we do reduce the distribution of certain content about vaccines that doesn't otherwise violate our policies. And our approach here is really grounded in guidance that we've gotten from health experts, who've emphasized the idea that overcoming vaccine hesitancy really depends on people being able to ask legitimate questions about safety and efficacy and get those questions answered by trusted sources. But at the same time, we also realized that certain of this content could lead to hesitancy, so we reduced its distribution. And similarly, for content that, again, does not violate our policies, we also work with a global network of more than 80 fact-checking organizations around the world in more than 60 languages. And with these partners, when they find posts, including about COVID or vaccines, that they rate as false, we reduce their distribution. We also, and this is part of our inform strategy, we have warning labels, and we make it less likely that people will see them in feed. And so that's the holistic strategy that we have of providing authoritative information, or inform, removing harmful misinformation, and addressing this borderline content, which adds up to our whole strategy that, yeah, ideally gets better outcomes overall. Speaker 0: Thank you.

@Jim_Jordan - Rep. Jim Jordan

This wasn’t the Biden Admin's first pressure campaign. In July 2021, FB’s head of Global Affairs asked why FB had been censoring the COVID lab leak theory. The answer was clear: “Because we were under pressure from the administration . . . We shouldn’t have done it.”

@NameRedacted247 - Name Redacted

2. Aaron Berman spent over 17 years with the CIA before joining Facebook in 2019. He built their Misinformation Policy department and wrote most of the misinfo policy. Additionally, Berman says he 'represented Meta with external stakeholders.' This would include The White House, US Intel Agencies (FBI, CIA, DHS, CISA, etc.), foreign governments & intelligence, MSM, and more. He led the misinformation policy operation across all of Meta’s platforms for the 2020 US Election, COVID-19, the Ukraine War, and elections worldwide. http://linkedin.com/in/aarondberman/…

@NameRedacted247 - Name Redacted

3. Aaron Berman is very active on his LinkedIn, with over 100 posts. Here is a post from 3 years ago, while Trump was President. Berman expressed his frustration with COVID 'misinfo' about Hydroxychloroquine and mentioned that Meta was working hard to counter COVID-19 misinformation. Wait….what? According to @Jim_Jordan's narrative, Facebook wouldn't have censored COVID content without pressure from Biden's White House. Biden wasn’t President in 2020. What’s going on here? Smoking Gun?

@NameRedacted247 - Name Redacted

4. In this next post, Aaron Berman stated that Meta was 'expanding' the false claims they remove on Facebook & Instagram about COVID and vaccines. Again, this was written on April 16, 2020. Who was the President in 2020? Trump. Therefore, @Jim_Jordan's claim that Facebook only censored COVID because of pressure from the Biden White House is misleading. https://about.fb.com/news/2020/04/covid-19-misinfo-update/

@NameRedacted247 - Name Redacted

@Jim_Jordan 5. In this post, Berman discusses how Facebook is promoting 'reliable' vaccine information to parents and enforcing their policies on harmful content related to children. He expresses his happiness about the FDA authorizing COVID vaccines for children.

@NameRedacted247 - Name Redacted

6. In the following tweets, you will see LinkedIn posts written by Aaron Berman that describe Meta’s extensive efforts to ‘combat misinformation’ in global elections. This is nothing more than election meddling, interference & rigging. In the first post, Berman talks about the 2020 US Election. He describes how Facebook displayed “warnings” on over 150 million pieces of content viewed on Facebook that were ‘debunked’ by one of Meta’s third-party fact checkers. Did Biden pressure Facebook to do this? In the second post, Berman ‘sends love’ to his fellow friends working on content moderation for the 2020 election and describes his efforts as being “neck-deep.”

@NameRedacted247 - Name Redacted

@Jim_Jordan 7. Berman writes in detail how Meta interfered in the 2022 Brazil election by aggressively limiting the “forward messages” feature on WhatsApp, and how Meta worked with 6 ‘fact checkers’ in Brazil @ggreenwald

@NameRedacted247 - Name Redacted

8. Berman posted about Meta’s election interference in Nigeria’s 2023 election, which included “partnering with local radio stations to create ‘NoFalseNewsZone’ radio dramas in English & Pidgin.” Meta also ran ads on Facebook & radio in 4 different languages: Yoruba, Pidgin, Hausa & Igbo.

@NameRedacted247 - Name Redacted

9. Berman posted about Meta’s election interference in Kenya’s 2022 election, which included partnering with the fact checkers AFP, PesaCheck & Africa Check to review content in English & Swahili. Berman states they also relied on ‘guidance from local partners.’ Who might that be? Also seen here are Meta’s extensive efforts to “combat misinfo” for the 2022 Philippines election.

@NameRedacted247 - Name Redacted

10. This is nothing more than election interference by the largest social media platform, with over 3 billion users worldwide. So, is this all happening because of pressure from the “Biden White House”? Or is this a wider Intelligence Operation (Mockingbird) infiltrating Social Media to rig global elections, censor content on COVID, and manipulate Americans on the Ukraine War, Climate Change, and other important issues? Here’s more….

@NameRedacted247 - Name Redacted

@Jim_Jordan @ggreenwald 11. Berman posts on LinkedIn how Facebook & fact checkers are working in overdrive related to the Ukraine War. Also included is a tweet from Feb 2022, where he describes what Meta is doing to fight the spread of misinformation. He adds that Meta is working 24/7 on this.

@NameRedacted247 - Name Redacted

@Jim_Jordan @ggreenwald 12. Berman announces the launch of the Facebook Climate Science Information Center to connect people to ‘authoritative info’ about climate change. He also tweeted how Meta has partnered with 80 independent fact checkers.

@NameRedacted247 - Name Redacted

13. In a very bizarre post on LinkedIn, Aaron Berman states that intelligence assessments are not meant to be crystal balls. He describes these assessments as having a range of plausible outcomes in order to help policymakers assess risks & ‘shape events’ accordingly. Lastly, he states: “EVEN AN ASSESSMENT THAT GETS THE PREDICTION WRONG MAY BE EXACTLY WHAT IS NEEDED.” What does he mean by this?

@NameRedacted247 - Name Redacted

@Jim_Jordan @ggreenwald 14. Just this month, Aaron Berman attended TrustCon23, where he taught a workshop on how to write better emails in order to “influence tech executives more effectively.” Interesting.

@NameRedacted247 - Name Redacted

15. Apparently there is an annual global fact-checker conference called “Global Fact.” It is hosted by Poynter’s International Fact Checking Network. This event was sponsored by Google, Meta, TikTok and News Initiative, and held in Oslo, Norway. Aaron Berman was in attendance, of course, & spoke at the conference.

Video Transcript AI Summary
The world's largest fact checking summit, Global Fact, is returning to the in-person stage after 2 years of virtual conventions. This event brings together creators, consumers, and champions of fact-based reporting to explore new and innovative ways to uncover the truth. The 9th annual Global Fact, hosted by the International Fact Checking Network, invites participants to join in Oslo or online to combat global misinformation.
Full Transcript
Speaker 0: The world's largest fact checking summit will return to the in-person stage after 2 years of virtual conventions. Creators, consumers, and champions of fact-based reporting will come together to find new and innovative ways to shed light on the truth. Join us in Oslo or online and fight global misinformation at the 9th annual Global Fact, hosted by the International Fact Checking Network.

@NameRedacted247 - Name Redacted

@Jim_Jordan @ggreenwald 16. Looks like a great event!

@NameRedacted247 - Name Redacted

@Jim_Jordan @ggreenwald 17. Aaron Berman was also featured in an ad campaign for Meta https://about.meta.com/regulations/

Video Transcript AI Summary
Aaron, a product policy manager at Facebook, discusses the challenges his team faces in determining the rules for the platform. They strive to strike a balance between leaving up more content or taking it down, considering safety and security. Transparency is crucial in their decision-making process, although it's impossible to please everyone. Aaron acknowledges the discomfort of drawing the line between acceptable and harmful content. He believes that if a small percentage of users spread harmful content, it can negatively impact the majority. Updating regulations would provide clearer guidelines for platforms like Facebook, as technology evolves rapidly. Aaron emphasizes the need for legislative and regulatory catch-up to define the rules and achieve a better balance.
Full Transcript
Speaker 0: My name is Aaron. I've been with Facebook for 2 years now, and I'm a product policy manager. Speaker 1: What does your job entail? Speaker 0: We're part of the team that writes the rules for Facebook. If something violates our standards for safety and security, what Facebook could, should, can do. Speaker 1: You and your team are faced with very important decisions, especially when it comes to content. Speaker 0: There's very little agreement whether we should be leaving more content up, taking more content down. With any particular rule or issue that we're looking at, where something has come up where the rules are not 100%, we're not gonna make everybody happy. Speaker 1: How does your team work on that? Speaker 0: Transparency is incredibly important in the work that I do. How do we think about the balance between harmful content and protecting speech? It's a balance. Speaker 1: Does it ever make you feel uncomfortable to be put in a position where you're having to draw the line? Speaker 0: Yes. And I think it should make me uncomfortable, and all of us who do this work. If 99% of the people are expressing themselves, sharing their family photos, exchanging ideas, and 0.001% are encouraging violence or spreading harmful content, that can ruin the thing for everybody. These decisions can have real effects. We're developing rules and policies without regulation. We're really navigating that space as best we can. Speaker 1: Why would updating regulations help you? Speaker 0: Regulation can help us better define what is acceptable and what's not. I think a standardized approach would help platforms all across the board, actually give us guidelines where right now there's very few. Technology has changed so quickly. We need the legislative and regulatory space to catch up. Regulation can help us better define what the balance of those rules should be.
Internet Regulations | Meta We support updated regulations on the internet’s most pressing challenges. about.meta.com

@NameRedacted247 - Name Redacted

18. Here's Berman on the Wall Street Journal.

@NameRedacted247 - Name Redacted

19. There are an alarming number of other Intelligence Community operatives currently working at Meta in Trust & Safety and other departments. I wrote a thread about this in December 2022. Some of you have seen it, but I will be publishing a new thread that highlights more people later this week. Stay tuned

@NameRedacted247 - Name Redacted

1. After learning that Twitter employs at least 15 former FBI agents, I searched Facebook. What I found is alarming Facebook currently employs at least 115 people, in high-ranking positions, that formerly worked at FBI/CIA/NSA/DHS: 17 CIA 37 FBI 23 NSA 38 DHS Thread🧵

@NameRedacted247 - Name Redacted

20. Here is the full video, which is still available on YouTube. Moderated by CIA Renee DiResta from Stanford Internet Observatory. Includes Brian Clarke from Twitter Trust & Safety, Dr. Anne Merritt from Google & Aaron Berman from Facebook https://www.youtube.com/watch?v=hB_YNbnt8x4&t=90s

Video Transcript AI Summary
The panel discussion focuses on how major platforms like Google, Twitter, and Facebook are addressing false and misleading narratives surrounding COVID-19. The speakers discuss their policies and strategies for moderating and mitigating misinformation. They highlight the importance of providing authoritative information, removing harmful content, and addressing borderline content that could lead to vaccine hesitancy. The panelists also acknowledge the challenges of handling misinformation during a rapidly evolving crisis and emphasize the need for flexibility and adaptability in their approaches. They mention the use of AI systems and human review to sift through vast amounts of data and the importance of partnerships with health authorities and fact-checking organizations.
Full Transcript
Speaker 0: Alright. We are live. Hi, everybody. It is great to be here today. I am thrilled to be moderating this panel, and we are going to get started. I'm Renee DiResta. I'm the research manager at Stanford Internet Observatory. And I'm gonna start by briefly introducing our panelists, and then we will dive right into a conversation. So we have Anne Merritt. Doctor Merritt is a product manager at Google Search, where she focuses on health and information quality. And then we have Brian Clark. And Brian is a senior manager in trust and safety focused on misinformation at Twitter. And then we have Aaron Berman, who's a product policy manager for misinformation at Facebook. So we have 3 platforms represented today, and I'm really looking forward to just diving right in and talking about their work to moderate and mitigate some of the false and misleading narratives that we've seen spreading around COVID. Particularly given, I think, some really unique challenges that have come up earlier in this conference, but also a little bit more broadly about the challenge of handling moderation around misinformation at a time when we're still trying to learn all the facts. So a lot of really interesting questions to ask and discussions to have here on ideas like borderline content and how to think about fact checking in moderation in a time of incomplete consensus, where we're still trying to sort out the facts. So I wanted to just start right in. For those in the audience who I think maybe are not immersed in the nuances of platform moderation day in, day out, there's 2 policies I just wanted to kinda quickly call attention to and have the relevant platform representative explain them. And the first is Your Money or Your Life, which I think is a really interesting framework to kind of couch this conversation. So we'll have Doctor Merritt from Google handle that one.
And then remove, reduce, and inform, which I think is actually kind of a foundational way of describing the policy, the sort of options that platforms have when they're trying to think about whether to take down or throttle or put up interstitials over content to help users make sense of what's going on in the world, what the kind of most accurate information is. So maybe we can actually start with remove, reduce, and inform, because it just gets to that moderation framework. And we can ask our friend from Facebook to start in with that one. Speaker 1: Sure. Thanks, Renee. And thanks for having me. I'm excited to be part of this conversation and this important topic. Yeah. So remove, reduce, inform in general, at Facebook, that's a framework we think about for general content moderation, where we could remove content completely from the platform, reduce its distribution in some way such that people are less likely to see it in their feeds, or inform users in some way, such as by adding labels or warning screens or the like. I can explain, excuse me, I can explain briefly how we apply that philosophy to COVID-19 misinformation in particular. And I'm actually gonna describe it backwards. I'll start with inform first and then remove and reduce. So our 3-pronged strategy here for COVID misinformation is, first, promoting vaccines and authoritative information. That's the inform pillar. Also removing harmful misinformation, and then addressing borderline content that could lead to vaccine hesitancy, which falls under the reduce idea. On the first part, promoting vaccines and authoritative information. I mean, Facebook is obviously, we have quite a large user base, so we've been able to direct more than 2,000,000,000 people worldwide to expert health resources through our coronavirus misinformation, excuse me, our coronavirus information center, which you can find in your Facebook app if you use Facebook.
And just for some other examples, more specifically how we're informing people and getting authoritative information to people: we help people find vaccine appointments in their areas through messages and News Feed. We help people get questions answered not only through the COVID information center, but also through ad impressions that we support partners on, such as facts-about-COVID ad campaigns. We enable social norming through profile frames. So in the United States, and I'm sure many people here have used this, more than 50% of Facebook users have seen someone that they follow use a profile frame in feed. And then in addition, on informing, we're partnering with other organizations to reach low-vaccination-rate communities, such as with campaigns featuring Black doctors, nurses, or Spanish-language campaigns. And we are using data that we're collecting in partnerships with academic institutions to judge the impact of this over time. I'll also note on the inform side, when it comes to users who have interacted with false or misleading claims, we also inform them in those situations. So just for some examples, we add labels on posts about COVID-19 and vaccines to show additional information from the WHO. And when we do remove misinformation from the platform, which we'll talk about in a second, we built a tool so that we notify users who saw that misinformation before we removed it, so that they have access to the authoritative information. So that, in a large bucket, is part of our inform work here. On remove for COVID-19, we do have a policy to remove harmful misinformation related to this topic. Specifically, we remove content that has been debunked as false and leading to physical harm by public health experts related to the pandemic. So these are things like fake preventative measures, claims the virus doesn't exist, or, this also includes a variety of claims about vaccines.
The idea here is to remove misinformation that could lead to imminent physical harm by somebody maybe not receiving appropriate treatment or exposing themselves to the disease. So on vaccines specifically, in December last year, we started removing false claims about the vaccine, again, that fall within this category. And we've expanded the list of claims we remove about vaccines in general earlier this year in consultation with health experts, and we're continuing to make updates to these policies as trends emerge, including just this week, in fact. And we also remove pages, groups, and Instagram accounts that repeatedly violate these policies, to get at those entities that might repeatedly spread this content. And then finally, the third part of the strategy, addressing borderline content which could lead to vaccine hesitancy, which falls into the reduce area. So we do reduce the distribution of certain content about vaccines that doesn't otherwise violate our policies. And our approach here is really grounded in guidance that we've gotten from health experts, who've emphasized the idea that overcoming vaccine hesitancy really depends on people being able to ask legitimate questions about safety and efficacy and get those questions answered by trusted sources. But at the same time, we also realized that certain of this content could lead to hesitancy, so we reduced its distribution. And similarly, for content that, again, does not violate our policies, we also work with a global network of more than 80 fact-checking organizations around the world in more than 60 languages. And with these partners, when they find posts, including about COVID or vaccines, that they rate as false, we reduce their distribution. We also, and this is part of our inform strategy, we have warning labels, and we make it less likely that people will see them in feed.
And so that's the holistic strategy that we have of providing authoritative information, or inform, removing harmful misinformation, and addressing this borderline content, which adds up to our whole strategy that, yeah, ideally gets better outcomes overall. Speaker 0: Thank you. I wanna come back to a couple of things you said in there, but I'd love to have Anne perhaps introduce the idea of Your Money or Your Life. And then maybe, Brian, to just kinda round it out after that, perhaps we could just have you give Twitter's general rubric of how you guys are thinking about this. Speaker 2: That sounds great, Renee. Thanks. Very excited to be here. So delivering a high-quality search experience is core to what makes Google Search so helpful. And from the early days, understanding the quality of web content has been incredibly important to us. We have 3 key pillars when it comes to our approach to information quality in Search, and the first one will touch on what Renee mentioned, YMYL. So first, we fundamentally design our ranking systems to identify information that people find useful and reliable. We recognize, however, that there are certain topics where quality is particularly important. And we call those topics at Google YMYL, or Your Money or Your Life. And these topics include a variety of subcategories that essentially encompass any web page that includes content that can affect someone's health, happiness, safety, or financial stability. So it includes things like shopping or financial transactions or information, legal matters, information about national government processes or policies. And most importantly, for this conference and in the context of COVID, for health and crisis-type situations. So when it comes to our ranking algorithms, and when it comes to our approach to these topics, we place an even greater emphasis on factors related to expertise and trustworthiness.
We've learned that sites that demonstrate authoritativeness on these topics are much less likely to publish false or misleading information. So we try to build our systems to identify signals of those characteristics so that we can continue to provide the most reliable information. Second, to complement the efforts in our ranking systems, we've developed a number of search features that help you make sense of all the information you find online, but also provide direct access to information from health authorities in the case of COVID. We worked very closely with the World Health Organization and globally with national public health authorities to surface their content front and center on the search page, in a more organized fashion. And then lastly, we do have specific medical policies for what can appear in some search features, to make sure that what we're showing is high quality, medically accurate, and helpful. So for these features, we, again, first and foremost design our automated ranking systems to show helpful content, but our systems aren't always perfect. And if they do fail, our enforcement team does take action in accordance with those policies. And so for these specific Google features, like knowledge panels and featured snippets, which often show up in a more prominent position on the search page, we don't allow content that contradicts or runs contrary to scientific or medical consensus and evidence-based practices. So with these 3 approaches, we're able to continue to improve Search and really try to raise the bar on quality to deliver a trusted experience for people around the world. Since the COVID-19 outbreak, teams across Google have worked to provide quality information to help keep people safe and to provide public health scientists and medical professionals with the tools to combat the pandemic.
So we've launched more than 200 new products, features, and initiatives, and we've pledged over a 1,000,000,000 to assist our users, customers, and our partners around the world. In Search in particular, going back to that features aspect, we've introduced a comprehensive experience for COVID-19 that provides easy access to information from health authorities alongside new data and visualizations. And we're continuing to iterate and build on that experience over time so that people can easily navigate to the resources that they need. For vaccines, we've also updated a feature which surfaces a list of authorized vaccines, vaccine statistics, and more, in users' locations, in response to searches for specific information about COVID-19 vaccines. And then lastly, when it comes to misinformation on Search, we continue to elevate the work of fact checkers in Google Search, as well as Google News and Images. We signal fact-check articles in our results via dedicated tags and rich snippets that make it easy for users to understand. So that's just a bit of an overview. Thanks again, Renee. Speaker 3: Alright. I'll talk a little bit about our approach to COVID-19 at Twitter. So, first and foremost, we wanna protect the integrity of the public conversation and ensure that Twitter is a place where folks can be exposed to different perspectives and engage in healthy discourse. I think our policy remediation options give us the flexibility to address a wide range of misinformation and the associated harms. And with a critical mass of, you know, expert organizations, government officials and accounts, health professionals, our goal on Twitter is to amplify authoritative health information as much as possible. So we have a wide range of enforcement options that give us that flexibility to be proportionate in our approach to combating COVID-19 misinfo.
So the first thing is, we remove content with the highest propensity of harm and content that may invoke deliberate conspiracy theories related to COVID-19 and COVID-19 vaccines. We also will label misleading content and provide credible and authoritative information through curated moments. And some of the topics we cover with our curated moments would be vaccine safety, vaccine science, preventative measures, and unauthorized and unapproved treatments. With the labels, we also may prevent the tweet from being recommended, and prevent users from engaging with the tweet by disabling replies, retweets, and likes. And I think that allows us to mitigate the spread while also providing those who are exposed to it with authoritative information through our labels. In addition, we also provide warning interstitials for content that may contain misleading information off platform. And lastly, we implemented a strike policy to address users who are repeat offenders of the COVID-19 misinformation policy. Each strike carries escalated enforcement action, with the final strike leading to permanent suspension. So another way of really just giving us the flexibility to address potential harms on the platform. Speaker 0: I'd love to focus in on this idea of harm. And this is a question that any or all of you can answer. How do you determine what is a harm? How are you defining that in the context of COVID? And particularly, over what timeline are you thinking about that? Speaker 1: I can weigh in from Facebook. So far, as I mentioned, for our remove policies, when we remove content from the platform, we're basing this on a policy that we have to remove misinformation that leads to imminent physical harm. And so that's the standard we're looking at, and that's when we are doing these consultations with health experts, with the WHO.
We're getting their feedback both, you know, is the content debunked? Is it false? And what is the harm that is likely to come from it? And that's the standard we use for removing misinformation. Speaker 0: And on the subject particularly, you know, we've seen so many different wild false cures. You know, originally, I remember when hydroxychloroquine became a thing. And I followed that in our work at Stanford. We looked at where that narrative originated, where have these ideas about hydroxychloroquine come from. And it actually came from sort of Southeast Asia and then parts of Africa that had some familiarity with the drug and thought perhaps this is going to be a useful thing for combating COVID. They didn't really make it into the American social media conversation until actually March, when the president, President Trump at the time, started really talking about it. And one of the things that was sort of remarkable about that was the way this piece of information, this theory, kind of made its way around the world, you know. And over time, even as they were finding that this was not going to be an effective treatment, unfortunately, that initial hope was not borne out. It was becoming a focus of hope in one country even as the sort of places where it had originated were beginning to move away from it for various reasons, as the scientific consensus just began to show that it was ineffective. And then we saw that happen again with, I mean, most recently ivermectin, but I feel like there've been 3 or 4 different ones of these. And I'm curious how you all think about moderation in that environment, particularly as you have this incomplete consensus and these narratives are global. And, you know, what is kind of seen as prevailing opinion from 1 group of scientists in 1 place maybe is not necessarily quite yet solidified at a global level. Speaker 3: Yeah. I'll jump in here.
I think one of the ways we approach it, from Twitter's standpoint, is really leaning on our curation team, who are able to monitor the conversation on the platform in real time and track where the discussion is going, so that we have a good sense of the different types of issues around it and where it's traveling, and a better sense, from a policy perspective, of where to start shifting our resources and where to start really understanding the patterns associated with it. We have a huge benefit in having such a strong team from across the globe who can capture that context in real time as it unfolds. Speaker 2: I can jump in as well on this one, Renee. I think it's been really interesting, looking at this both from the Google Search perspective and also as a clinician. We've seen the evidence and the guidance change so much over the last 18 months. And not only that, but what we've realized is that when you look at this globally, not only do national public health authorities differ in their guidance from one another and from the World Health Organization over time, but even at more local levels the guidance is often different. From the search side, we do have this medical topics policy that says when something goes against consensus, we take action. But what about all of this other area? COVID threw us for a loop, because the information keeps changing, we don't have all the information, and even the national public health authorities are often trying to create new guidance based on of-the-moment, literally of-the-minute research.
So at least for us, I think it's really been about deepening our partnerships with these authorities and working in country to try to understand what those different perspectives are. But it's certainly presented an unprecedented challenge when it comes to medical information, because it really does force us to ask: what is medical misinformation? When it's evolving so rapidly, when do we actually call something far enough toward the end of the spectrum that we're comfortable saying it fits into that category? So it's been challenging. Speaker 3: Sorry. Speaker 0: Oh, go ahead. Speaker 3: I also wanted to jump in and say, to the point about partnerships, we've established a partnership with AP and Reuters exactly for this type of reason, because it lets us scale that monitoring of the conversation. And it provides context on different surfaces, whether it's trends, the explore page, or prompts in our misinformation labels. So I think that's been a core part of our ability to scale this as well. Speaker 0: One of the things I'd like to raise, while we have representatives of three platforms all in one place, is this idea of misinformation as networked. Take, for example, a video that someone makes and posts to YouTube or Facebook. Let's use the example of Plandemic to anchor it in a real example: a highly misleading piece of content. It took fact checkers about two days to go through the full half hour of material. In that time, it got 8 to 12 million views on YouTube and went viral on Facebook, again very, very quickly. We did some work looking at the way it traversed different groups; it hops to Twitter.
The attempts to take it down after the fact, once it was found to be misinformation, precipitated a whole second wave of it being reposted to other platforms like Rumble and a few of the other, smaller video-sharing sites. How do we think about the way misinformation traverses the ecosystem? Do you all, at your respective companies, think about those hops to other platforms, and what impact, if any, does that have on how you choose to moderate? Speaker 1: I can try to start here. The question of hops to other platforms is an interesting one, given that our platforms are obviously different. We're all independent companies, each unique in some ways, so we sometimes have slightly different policies that meet those unique challenges. I will say, on the question of content that can spread virally: this is why we have the policies in place that we do, so that we can remove content if we find it, and we have AI systems that, once we detect something we can remove, help us scale the impact of that removal. Similarly, we have AI systems that can help us predict content that we might later determine needs to be removed, or predict content that we can send to our fact-checking partners around the world, who then rate the content so we can reduce its distribution. It's always going to be the case that no system is perfect, and there will be misses, but we're continually updating these over time. And I think the point that Anne mentioned earlier is a really important one: this is in the context of a pandemic. It's really unprecedented in terms of how quickly guidance and information are changing, and so we're updating our systems and our policies over time to account for that.
But because of that, we also look, in the end, at outcomes. At least at Facebook, and I don't know if I mentioned this earlier, we are running what we think is actually the largest global health survey ever, in partnership with some academic institutions. And we've seen encouraging trends in vaccine acceptance on that, at least in the US and a few other countries worldwide. So regardless of the specific content issues that can come up, we're focused on that overall outcome metric, with our systems layering up to improve the outcomes over time. Speaker 0: Sorry, I was muted. I think it's a really significant challenge that you all are facing. The pandemic has really shown the front lines of the moderation question, particularly for the reasons we've discussed: newly emerging consensus, the global nature of it, and the amount of attention that people are paying to it. I know that we're going to go to Q&A in just a minute, and I unfortunately have to drop off for a scheduling conflict, so Anne is going to moderate that. But I'd love to hear: what learnings from the COVID-specific environment do you feel are more broadly applicable as you move forward? What key learnings, if any, have you found? Speaker 3: I think when we're dealing with an issue that is a crisis global in nature, it's really important to make sure that you understand the different contexts in which these things play out. It's not just about, say, hydroxychloroquine or ivermectin; the way that manifests in each place is very different. And being able to capture that and really pinpoint those differences, I think, is the key to being able to do this at scale.
And I think one of the things that I've learned and take from this particular crisis is the importance of having diverse teams and thinking about it in a diverse way, so that you are creating solutions that are tailor-made to each of the challenges you may face across different contexts. Speaker 2: From the search side, and I would imagine that Brian and Aaron have seen this as well, I do feel like COVID was a great stress test for our existing systems in many ways. It has pushed us and challenged us to grow, and really emphasized the importance of flexibility and adaptability as we continue to navigate the pandemic and this space, and continue to try to understand vaccine access and vaccine misinformation and how this is playing out across the globe. From the search standpoint, every day 15% of our searches are brand-new searches that have never been done before. When you think about that in the context of the pandemic, we really want to make sure that our automated systems are robust, so that we can get people to the most relevant and reliable information. And when it comes to some of these topics, like Plandemic as an example, they quickly gain attention, and it will often take a day or two for fact checkers to chime in and get content up. That's where developing the most robust systems out of the gate is really the best and most scalable approach. Obviously it's not perfect, but the better we can do there, the greater the scale and impact we'll be able to drive. Speaker 0: Thank you so much for chatting with me.
I know we're going to open it up to Q&A now. People can just put questions in the chat, and Anne is going to moderate this portion of the event. Thanks so much. Speaker 2: Great, thank you, Renee. Speaker 3: Thank you. Speaker 2: Okay. Mia asks: where do you draw the line on harmful content? This is something I think we touched on around harm with one of Renee's earlier questions. Either of you, if you want to take it. Speaker 1: Oh, go ahead, Brian. Speaker 3: I was going to say that it's not a simple answer. It's quite complicated, because there are different types of harms. Earlier, Aaron mentioned the idea of imminent harm, and when we think about imminent harm, we also take the approach of removal. But there are also informational harms, things that are misleading and could eventually lead to bad outcomes. So our approach with labeling is to mitigate that harm through information, through authoritative and credible sourcing, so that we can address some of the potential for misinformation or bad content without taking it down. Speaker 2: At Google, I would say — I'm in Search, but I have some visibility across the rest of the company in terms of how we think about and define "harmful." There are different framings. Misinformation versus harm: I think there's overlap there, but the two don't always occur together. So when it comes to our policies, you'll see that a lot of them cover harm, but they also cover other types of dangerous content or misinformation outside the harm vector. YouTube published its COVID-19 medical information policy about a year back, and there you can actually see a list of claims, very specifically related to COVID, that it takes action on.
Google Ads has a policy that hinges quite heavily on harm, so if you look into that policy, you can actually see how it's defined. There are different ways to approach this, but ultimately it's a multi-pronged approach in which harm is one vector, and there are other vectors we need to cover as well. Speaker 1: Yeah, that's important, and thanks, Anne, for mentioning that. I want to flag that Facebook, as well, has published under our Help Center — and we can find the link if needed — a full list of the claims and policies we have related to misinformation about COVID-19 that we remove for that imminent physical harm impact. And we've also listed our policies on content that we reduce because it could lead to vaccine hesitancy. So we've listed these out online for those who are interested in learning more. Speaker 2: Great. Nelana, thank you. So: given that social platforms deal with massive amounts of data, how effective is AI at sifting the vast majority of what constitutes misinformation? Speaker 3: Sorry to jump in here. From my perspective, it's never just AI by itself, never just machine learning by itself. It's a confluence, a relationship, and it's finding that right balance. There are certain contexts where human review is absolutely necessary and able to capture the nuance. But there are things — whether it's detection or content with common patterns — where AI and machine learning are incredible. And I think the human-in-the-loop process is absolutely critical to being able to scale this. So I would always think about it not as one tool or one instrument, but as one tool in a larger toolbox. Speaker 1: Yeah, I fully agree with that.
And I'll point to specific places where we've seen AI be particularly useful. As Brian just said: tuning our systems to try to predict content that might later qualify for removal or for fact checking, and also, to scale the impact once we do find content, finding duplicates or very near exact duplicates of that content across our platforms. We use our systems for that, and it can be very effective. Speaker 2: Yeah, and at Search I'd say it's similar: it's a combination of AI with human raters. We work with search quality raters who measure the quality of search results on an ongoing basis. They evaluate the results based on expertise, authoritativeness, and trustworthiness — the E-A-T that we index heavily on for these YMYL ("your money or your life") topics. These ratings don't directly impact ranking, but they help us benchmark the quality of our results on a continuous basis to ensure they meet a high bar. So it's always a combination of the algorithms plus these ratings that inform us as to how we're doing. Lola asks: how do you respond to the censorship narrative? I don't know if we need a little more context there, or is that enough to go on? Speaker 1: I can jump in. At least at Facebook, our approach — and I would say our general approach to content moderation — is that the core value at Facebook is to enable people to express themselves freely while protecting the safety of our user community. And our approach to COVID-19 is part and parcel of that balance. When we remove misinformation, as I've said before, the standard is that health experts have told us it is false, it leads to imminent physical harm, and it is a safety risk.
And that's also why we have these other approaches to balance the risks when we don't have that clear assessment from outside experts: where we can reduce the distribution of content that experts have told us could lead to vaccine hesitancy but that it's important for people to engage with, and where we can work with our fact-checking partners to find content, reduce its distribution, and add labels, but not remove it. I think it's a broader question for social media and online communications platforms in general, that question of the right balance across all these policies, but we're aiming to balance free expression and those safety issues with the policies we have. Speaker 3: Yeah, just to pick up from Aaron: I think it's important for us to allow discourse and debate, but we also want to create a safe environment for it to take place. So we have different levers to pull, different approaches that allow content to be left up with context or authoritative information. But ultimately, I think it's upon social media companies to be as transparent as possible. We do that at Twitter on our Help Center pages, to make sure that folks know exactly what harms we're looking to mitigate and what we're looking to address, especially in the COVID-19 space. Speaker 2: Thanks, Brian. Yeah, on Search, I think things are perhaps a little different for a search engine, given that we're indexing the web and surfacing the content that's available.
Our general approach, especially for health and other important topics like this, is that we need to really lean in and show high-quality, authoritative results, both in features and, more prominently, on the search page. That's really how we best address it. Diversity is also important to us; we want to make sure we're showing users not only high-quality results but also a diverse set of results, so that they can get the information they need from different types of sources. So that's a little bit from the search side. Travis asks: can any of you discuss the Disinformation Dozen as a case study of how your platform has or should respond to individuals responsible for significant disinformation? Speaker 1: I'll jump in on this one unless others would like to. This is referring to a report alleging that 12 specific individuals are responsible for a large amount of misinformation about COVID-19. I'll say that we've looked into this, and I'm not sure there is consensus that this is actually the case. Our look at it found that these 12 people's content was actually responsible for just 0.05% of all views of vaccine-related content on Facebook. This includes vaccine-related posts they've shared, whether true or false, and links associated with those people as well. Just to put that in context. All that said, we have taken action against these people: we've removed over three dozen pages, groups, and Facebook or Instagram accounts linked to them, and we've placed restrictions on other accounts related to them as well.
And this is part of our strategy of ensuring that if people post misinformation that violates our policies, we remove it, and we remove the pages, accounts, or groups that repeatedly post that content. Speaker 3: Yeah. As far as the Disinformation Dozen, we've taken account-level enforcement action on a number of the accounts identified, and we review them in accordance with the Twitter Rules. If they violated our removal policy or our label policy, we enforce based on that. Several of the tweets referenced predate our updated COVID-19 enforcement policy, and because we don't apply rules or violations retroactively, we did not take enforcement action on content posted prior to the expansion. But this is something we're still keeping an eye on, and it's still important to us. If we do come across content that violates our policy, we will enforce in accordance with our policies. Speaker 2: Great, thanks. And on the search side, I'm not sure specifically how Search performed for this case study or example. I would say that, again, first we lean heavily on our ranking algorithms, but knowing there are gaps there, our policies across Google Search, Google News, YouTube, and our advertising products clearly outline types of behavior that are prohibited. So, for example, misrepresentation of ownership, or, on Google News, impersonation and other things like that — we do have explicit policies that we act on in those instances. Speaker 0: Hi, everyone. I just wanted to let you know that there are about five minutes left in the session, so now I'll just open it up to final remarks. Speaker 3: Okay, I can jump in here.
I think one of the things I've learned from tackling COVID-19 misinformation at Twitter is the importance of trying to be innovative in the face of a crisis. Right now at Twitter we have several different pilot programs: Birdwatch, which leverages the community to help with content moderation, and our redesigned label pilot, which changes the way our labels look. I think it shows the sort of holistic, multipronged approach you have to take in these crises — trying new things to tackle a very, very complex issue. One of the things I'm really excited about with Twitter is that we're listening to users and researchers, establishing those relationships, and meeting users where they're at, so we can help mitigate some of the harm and help provide credible information. One great example is our pilot user reporting flow. This is something users have asked for for a while, and we're piloting it in the US, South Korea, and Australia. We'll be able to see what users flag as misinformation, which gives us an opportunity to zero in on it and take advantage of the user feedback. So I think, for social media companies like us, we have to make sure we're being innovative in the face of a crisis, especially a very complex one. Speaker 1: Yeah. I can just end by saying, first, I'm happy to be here, so I really appreciate the opportunity to participate on this panel.
And I think one point that several of us raised throughout, and that Brian just raised again, is flexibility: the changing situation on the ground and needing to keep up with it. I think that's a really important issue, and one where we have seen the need, and I think responded to it, to be flexible over time — to change our policies as the facts change and update our systems. One example is just the word Delta as related to the coronavirus. It was not something people would probably have thought about eight months ago, and now it's obviously a big topic of discussion. So that is one of my takeaways, and we're going to continue to apply our policies in the three-pronged approach I've laid out, but make adjustments over time as we need to. Speaker 2: On the Google side, one of the things this pandemic really showed us was just how important all of these platforms are in a crisis like this, in trying to get information out. We know that lots of our users search for health information all the time, but watching the escalation of COVID and vaccine queries over the last year — you can see it on Google Trends — you can see that in this environment, people are really trying to get the most accurate, most up-to-date information they can to make decisions. So we really have an important role. As we look at this clinically, and as we look at this from a public health perspective, we really have an important role in this pandemic and in this crisis. And I think it's actually been a really good thing, a great thing, to see a lot of teams across our organization come together and work together on this. One thing I'll just highlight — I had seen some of the comments in the conference chat — that you may want to look at: there was recently a paper published by the National Academy of Medicine.
It was an expert panel that convened and published some guiding principles on identifying credible online health sources. The paper is called "Identifying Credible Sources of Health Information in Social Media: Principles and Attributes." It was published about a month ago, and it's on the websites at Google and YouTube. You may want to check it out, just to see where we're at in terms of thinking about what all of this means as we look ahead to the next crisis.
Saved - June 22, 2023 at 11:10 PM

@NameRedacted247 - Name Redacted

MAJOR UPDATE Aaron Berman, former 17 year CIA officer, is now “Head of Elections Policy” for Facebook & Instagram. Berman joined Facebook in 2019 and was responsible for writing Misinformation Policy & enforcing it for the 2020 election, COVID, Brazil elections etc He is joined by 15 other CIA, FBI & DHS working in Trust & Safety. LinkedIn - https://linkedin.com/in/aarondberman @RobertKennedyJr @elonmusk https://t.co/o7oLq4oAWr

@NameRedacted247 - Name Redacted

1. After learning that Twitter employs at least 15 former FBI agents, I searched Facebook. What I found is alarming Facebook currently employs at least 115 people, in high-ranking positions, that formerly worked at FBI/CIA/NSA/DHS: 17 CIA 37 FBI 23 NSA 38 DHS Thread🧵

Saved - June 20, 2023 at 6:59 AM
reSee.it AI Summary
Google, Facebook, and Twitter have hired at least 300 former CIA, FBI, and NSA employees since the 2016 election. Many of these former intelligence community members hold high-ranking positions in trust and safety teams, which oversee misinformation and censorship policies. Senior managers at Google, such as Nick Rossmann and Jacqueline Lopour, have posted troubling tweets expressing disdain for President Trump, his family, and white people. Other former intelligence community members hold key positions at Google, including Dawn Burton, Jacob Barrett, and Beth Schmierer. The coordination between the FBI and Twitter in the TwitterFiles case raises questions about the hiring practices of big tech companies.

@NameRedacted247 - Name Redacted

1. Google currently employs at least 165 people, in high-ranking positions, from the Intelligence Community. Google’s Trust & Safety team is managed by 3 ex-CIA agents, who control “misinfo & hate speech.” Here’s the breakdown: CIA-27 FBI-52 NSA-30 DHS-50 ODNI-6 Thread🧵

@NameRedacted247 - Name Redacted

2. Since the 2016 Presidential election, Google/Facebook/Twitter have hired at least 300+ people formerly employed by CIA, FBI, etc Ex-CIA agents are Heads of Trust & Safety at Google & Facebook. Is it OK that ex-CIA agents control what “misinfo” is?

@NameRedacted247 - Name Redacted

3. Nick Rossmann (He/Him)– Current Google Senior Manager Trust & Safety. Former CIA Analyst 5 years. https://www.linkedin.com/in/nickrossmann/

Nick Rossmann | LinkedIn Leader. Culture Changer. Whatever It Takes. Former CIA Analyst.

I am an experienced manager skilled in growing teams developing strategic insights for decision-makers. I am an effective relationship builder, well-versed in leading in cross-functional organizations - with product management, communications, legal, and public policy - to drive security results. I have proven skills in prioritizing and managing multiple projects with high visibility.

In the private sector, I have overseen the IBM threat intelligence group with a focus on organizational culture to permeate a "whatever it takes" attitude. I have grown a team of 12 analysts to 40 threat hunters, reverse engineers, and developers with a global voice on cybersecurity threats and protect global companies. I have launched threat intelligence research into the public to take analysts' insight to cybersecurity decision-makers around the globe, driving news coverage and market access.

While in government as a CIA intelligence analyst, I developed innovative intelligence products to address customers' needs. As an analyst covering a unique target - rule of law issues in the Middle East - I developed collaborative working relationships with intelligence customers to identify their analytic needs and use interviews and debriefings to leverage their knowledge for leaders across the community.

I readily take "personal leadership" regardless of my role in the organization - whether volunteering to lead interagency working groups, mentoring junior employees development gaps, and developing solutions for clients. | Learn more about Nick Rossmann's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

4. Rossmann has posted dozens of troubling tweets on his Twitter account. Many tweets show his disdain for President Trump, Trump’s family, Trump voters & white people Reminder- the following tweets are from a Senior Manager of “TRUST & SAFETY” at Google & former CIA analyst:

@NameRedacted247 - Name Redacted

5. In March 2020, while COVID infections were exploding, Rossmann, in a tweet directed at people who voted for Trump, stated: “I hope they cough on their grandparents, who voted for Trump, & get to rot” What did he mean by this? https://archive.vn/rppqw

@NameRedacted247 - Name Redacted

6. Rossman, in a series of anti-white people tweets, states, “Anti-vaxxers are like Nazis“ https://archive.vn/YWMDD https://archive.vn/ZdKeT https://archive.vn/PYgWh https://archive.vn/rOOpB

@NameRedacted247 - Name Redacted

7. Rossmann is “Still With Her” https://archive.vn/amzeS

@NameRedacted247 - Name Redacted

8. Rossmann calls President Trump a “lunatic & racist” https://archive.vn/Pk5Kh

@NameRedacted247 - Name Redacted

9. Rossmann asks Trump if he’s an agent of a foreign power – https://archive.vn/xi7t8

@NameRedacted247 - Name Redacted

10. Rossmann tweets “Enjoy prison” to @EricTrump https://archive.vn/rsWmI

@NameRedacted247 - Name Redacted

11. Rossmann tweets that @Jaredkushner “should be strangled”- https://archive.vn/4C6rZ

@NameRedacted247 - Name Redacted

12. Jacqueline Lopour (She/Her)– Current Google Senior Manager Trust & Safety. Former CIA analyst 10 years. https://www.linkedin.com/in/jacqueline-l-23322072/

Profile Not Found | LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

13. Jacqueline is a proponent of the Russia-gate conspiracy theory. She states, emphatically, “They (Russia) deliberately released the DNC information to @wikileaks …with the specific motivation of getting Trump elected.” Full video link -https://www.cbc.ca/player/play/831297091976

Video Transcript AI Summary
Hacking occurred on both sides, but the DNC's information was deliberately released to WikiLeaks. That release aimed to help Trump get elected, which is what made it significant: the motivation behind the release, not just the hacking itself.
Full Transcript
Speaker 0: Right. Absolutely. I think the difference here is, even though the hacking was done on both sides, they deliberately released the DNC information to WikiLeaks. Mhmm. And so even though the hacking was on both sides, it was the release of the information into the public domain, with the specific motivation of helping Trump get elected, which is so significant. Not just the hacking itself, but the motivation—
Former CIA analyst: 'Russia's fingerprints are all over this' Jacqueline Lopour and former ambassador to Russia Jeremy Kinsman on the CIA's Russian hacking report. cbc.ca

@NameRedacted247 - Name Redacted

14. In a video posted on Facebook, Jacqueline made clear which candidate she, and the Intelligence Community, prefers: Hillary Clinton https://fb.watch/hDGbFzDoKr/

Video Transcript AI Summary
Hillary Clinton is an experienced figure in national security and foreign policy, known by the intelligence community. She understands their strengths and weaknesses. On the other hand, Donald Trump is seen as unpredictable and lacks knowledge of how the intelligence community operates. There is concern that he may politicize intelligence, as he has done with the Russian hacks, and use it to enhance his political image. Trump is receiving intelligence briefings during his campaign and there is worry that this behavior may continue if he becomes president, as he would have control over the intelligence community and its resources.
Full Transcript
Speaker 0: In terms of national security and foreign policy, Hillary Clinton is a known quantity. She's the ultimate Washington insider in this, and there's no one who has more foreign policy expertise. So when the national security community and the intelligence community goes to work with her, they know what they're dealing with. She knows what she's dealing with. She knows the the the positives, the negatives, Their, their abilities, their weaknesses. Donald Trump is the ultimate wild card. He's unfamiliar with how the intelligence community works, and their great fear is that he could politicize as he has done with the debates and his information about the the Russian hacks. Donald Trump is obtaining intelligence briefings while he's on the campaign trail, and he's using that, to bolster his political image in a way that no other candidate has done in the past. And the fear is that, he could continue doing so once he becomes president and has basically The keys of the kingdom in terms of the intelligence community and their assets.

@NameRedacted247 - Name Redacted

15. I won’t list every employee in this thread, as it’s quite extensive. Below, I’ll highlight some of the more notable senior management roles at Google held by former Intelligence Community members:

@NameRedacted247 - Name Redacted

16. Dawn Burton (She/Her) – Current Google Director/Chief of Staff Privacy & Safety. Former Twitter Senior Director Trust & Safety 3 years. Former FBI Deputy Chief of Staff to former Director James Comey 4 years. Former DOJ 6 years. https://www.linkedin.com/in/dawn-b-39dk394kf/

@NameRedacted247 - Name Redacted

17. Jacob Barrett – Current Google Director Trust & Safety. Former CIA 7 years. https://www.linkedin.com/in/jacobgbarrett/

Jacob Barrett | LinkedIn I lead Google’s Security Intelligence team, which investigates, monitors, and disrupts the… | Learn more about Jacob Barrett's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

18. Beth Schmierer – Current Google Intelligence Manager. Former CIA Analyst 5 years. Former Department of State Diplomat in Spain 10 years. https://www.linkedin.com/in/beth-schmierer-b307aa222/

@NameRedacted247 - Name Redacted

19. Chelsea Magnant – Current Google Cybersecurity Policy Manager. Former CIA Analyst 9 years. https://www.linkedin.com/in/chelsea-m-68b667103/

@NameRedacted247 - Name Redacted

20. Katherine Tobin (She/Her). Current Google Head of Workspace Innovation. Former ODNI 7 years. Former CIA Branch Chief 4 years. https://www.linkedin.com/in/katherine-tobin/

@NameRedacted247 - Name Redacted

21. Yong Suk Lee – Current Google Director Global Risk Analysis. Former CIA analyst 22 years. https://www.linkedin.com/in/yong-suk-lee-7724a318a/

Yong Suk Lee | LinkedIn Member, Council on Foreign Relations
Visiting Scholar, Hoover Institution, Stanford University
Senior Fellow, Asia Program, Foreign Policy Research Institute
Member, Association of Threat Assessment Professionals | Learn more about Yong Suk Lee's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

22. Crystal Lister – Current Google Security & Trust Center Program Manager. Former CIA Cyber & Counterintelligence 9 years. https://www.linkedin.com/in/crystallister/

@NameRedacted247 - Name Redacted

23. Amber Johnson – Current Google Head of Global Communications. Former CIA 8 years. https://www.linkedin.com/in/amberchristina/

@NameRedacted247 - Name Redacted

24. Connie LaRossa – “Obama/Biden Alum.” Current Google National Security Policy. Former Department of Defense 5 years. Former DHS 7 years. https://www.linkedin.com/in/connie-larossa-5a6bb077/

Connie LaRossa | LinkedIn View Connie LaRossa’s profile on LinkedIn, the world’s largest professional community. Connie has 9 jobs listed on their profile. See the complete profile on LinkedIn and discover Connie’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

25. Robert Chung – Current Google Key Account Executive. Former NSA Director Intelligence 2 years. Former US Army Intelligence Manager 11 years. Former State Department 1 year. https://www.linkedin.com/in/rob-chung/

Robert Chung | LinkedIn Business development and strategic advisor to global executives and senior government officials by understanding the operating space, transforming risks/threats into opportunities, and delivering meaningful execution to achieve the desired outcome. | Learn more about Robert Chung's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

26. Adam Calabro – Current Google Cybercrime Manager. Former NSA analyst 7 years. https://www.linkedin.com/in/adam-calabro-6772a5109/

Adam Calabro | LinkedIn View Adam Calabro’s professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Adam Calabro discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

27. Kingman Wong – Current Google Security Compliance Lead. Former FBI special agent 25 years. https://www.linkedin.com/in/kingman-k-wong-phd/

Kingman K. Wong, PhD, MPA, MSc in Cybersecurity, CFE, CPP, CAMS | LinkedIn View Kingman K. Wong, PhD, MPA, MSc in Cybersecurity, CFE, CPP, CAMS’ professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Kingman K. Wong, PhD, MPA, MSc in Cybersecurity, CFE, CPP, CAMS discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

28. Heather Dagostino – Current Google Program Manager. Former FBI Intel Analyst 11 years. https://www.linkedin.com/in/heatherdagostino/

@NameRedacted247 - Name Redacted

29. Pamela Cerria – Current Google Program Manager. Former Department of Defense 3 years. Former DHS 3 years. https://www.linkedin.com/in/pamela-cerria-0647a345/

Pamela Cerria | LinkedIn View Pamela Cerria’s professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Pamela Cerria discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

30. Jeremy Warner – Current Google Manager Customer Success. Former CIA Senior Intelligence Analyst 7 years. https://www.linkedin.com/in/jeremy-warner/

Jeremy Warner - Google | LinkedIn View Jeremy Warner’s profile on LinkedIn, the world’s largest professional community. Jeremy has 3 jobs listed on their profile. See the complete profile on LinkedIn and discover Jeremy’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

31. Lauren Kelly – Current Google Office of CFO. Former Biden-Harris Transition Team. Former Director in Obama White House 6 years. Former DHS 2 years. https://www.linkedin.com/in/lauren-kelly-b4252582/

Lauren Kelly | LinkedIn View Lauren Kelly’s profile on LinkedIn, the world’s largest professional community. Lauren has 7 jobs listed on their profile. See the complete profile on LinkedIn and discover Lauren’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

32. Reminder: Twitter currently employs at least 15 former FBI agents. Since Jim Baker was fired (after interfering with the #TwitterFiles release), @Elonmusk has been asked, numerous times, how many FBI agents remain at Twitter. He refuses to answer. Why? https://t.co/0Qn09iT5UP

@NameRedacted247 - Name Redacted

33. Last week, we learned that Facebook’s Head of Trust & Safety is a 17-year former CIA analyst, Aaron Berman. https://t.co/UclWuL90D0

@NameRedacted247 - Name Redacted

34. Why, since 2016, did Twitter, Facebook & Google go on a hiring blitz of former CIA, FBI, NSA, etc. and assign them to high-level managerial positions, many of which oversee “misinformation” and censorship policy?

@NameRedacted247 - Name Redacted

35. Given the coordination we’ve seen between the FBI & Twitter in #TwitterFiles, there should be more media coverage of hiring practices at Big Tech. .@JudiciaryGOP @Jim_Jordan should investigate why former IC agents are embedded in the high ranks of the largest social media companies.

Saved - June 16, 2023 at 8:00 AM
reSee.it AI Summary
Facebook currently employs at least 115 people in high-ranking positions who formerly worked at the FBI, CIA, NSA, and DHS. Many of these former intelligence agents were hired after the 2016 Presidential Election and the establishment of the FBI's social media-focused task force. Facebook's Misinformation Policy team is led by a former CIA officer who worked at the agency for 17 years. Facebook partners with over 80 fact-checking organizations that direct which posts get reduced distribution, warning labels, and shadowbans. There are concerns about coordination between Facebook and the intelligence community in misinfo censorship.

@NameRedacted247 - Name Redacted

1. After learning that Twitter employs at least 15 former FBI agents, I searched Facebook. What I found is alarming Facebook currently employs at least 115 people, in high-ranking positions, that formerly worked at FBI/CIA/NSA/DHS: 17 CIA 37 FBI 23 NSA 38 DHS Thread🧵

@NameRedacted247 - Name Redacted

2. All but a few of the former intelligence agents were hired by Facebook after the 2016 Presidential Election & after the FBI established its social media-focused task force, FTIF.

@NameRedacted247 - Name Redacted

3. As @mtaibbi detailed in #TwitterFiles Part 6, we know there was massive coordination of censorship between the FBI & Twitter during 2020-2022. Who is controlling “misinfo” censorship at Facebook? Is there similar coordination between Facebook & the Intelligence community?

@NameRedacted247 - Name Redacted

4. The following is a list (obtained through PUBLICLY available LinkedIn profiles) of former CIA/FBI/NSA/DHS personnel who are currently working at Facebook; at least 10 work in the Trust & Safety (Misinfo) department. Many of the LinkedIn profiles are private, so those will not be posted.

@NameRedacted247 - Name Redacted

5. Aaron Berman (He/Him) leads the Misinformation Policy team at Facebook. According to Aaron’s public LinkedIn profile, he worked for the CIA for 17 years. https://www.linkedin.com/in/aarondberman/

@NameRedacted247 - Name Redacted

6. Aaron states that his experience at the CIA included writing the President’s Daily Brief and leading briefings for Cabinet members, senior NSC officials & members of Congress.

@NameRedacted247 - Name Redacted

7. On Twitter, Aaron is followed by Yoel Roth & admits he is friends with Trust & Safety people at Twitter. Was Facebook coordinating with Twitter on info-sharing to censor posts they deem as ‘misinfo’? archive.vn/7r2vX

@NameRedacted247 - Name Redacted

8. Aaron admits to specific Facebook campaigns where he tackles “misinfo.” Re: COVID19, they allow ‘health authorities’ to guide what Facebook should label as misinformation archive.vn/85N7v

@NameRedacted247 - Name Redacted

9. In a YouTube discussion with Stanford, Aaron admits that Facebook works with a “global network of over 80 fact-checking organizations” that direct Facebook on which posts to reduce distribution, add warning labels & shadowban.

Video Transcript AI Summary
We collaborate with over 80 fact-checking organizations worldwide in more than 60 languages to address content that doesn't violate our policies. When these partners identify false posts, especially about COVID or vaccines, we limit their distribution. Additionally, we use warning labels and reduce the visibility of such posts in people's feeds. This comprehensive approach involves providing authoritative information, removing harmful misinformation, and dealing with borderline content. Our goal is to continually improve our strategy.
Full Transcript
Speaker 0: And similarly, for content that, again, does not violate our policies, we also work with a global network of more than 80 fact checking organizations around the world in more than 60 languages. And with these partners, when they find the posts, including about COVID or vaccines, that they rate as false, we reduce their distribution. We also, this is part of our inform strategy. We have warning labels, and we make it less likely that people will see them in feed. And so that's the holistic strategy that we have of providing authoritative information or inform, removing harmful misinformation, and addressing this borderline content, which adds up to our our whole strategy that, yeah, ideally gets it better.

@NameRedacted247 - Name Redacted

10. Aaron discusses in detail the lengths Facebook goes to in censoring what they deem as COVID19 misinfo, specifically on Vaccines

Video Transcript AI Summary
We label posts about COVID-19 and vaccines with information from the WHO. We remove misinformation related to COVID-19 that has been debunked by public health experts and could lead to physical harm. This includes false claims about preventative measures, the existence of the virus, and vaccines. We also remove pages, groups, and Instagram accounts that repeatedly violate these policies. To address vaccine hesitancy, we reduce the distribution of certain content that doesn't violate our policies but could contribute to hesitancy. Our approach is based on guidance from health experts, who emphasize the importance of allowing people to ask legitimate questions and receive answers from trusted sources. We update our policies as new trends emerge.
Full Transcript
Speaker 0: For some examples, we add labels on posts about COVID nineteen and vaccines to show additional information from the WHO. And, when we do remove misinformation from the platform, which I'll talk about in a second, we built a tool so that, we notify users who saw that misinformation before we removed it so that they have access to the authoritative information like Brexit. So that's, in a large bucket, are part of our INFORM work here. On REMOVE for COVID nineteen, we do have a policy to remove harmful information related to this topic. Specifically, we remove content that has been debunked as false and leading to physical harm by public health experts related to the pandemic. So these are things like fake preventative measures, claims the virus doesn't exist, or, this also includes a variety of claims about vaccines. The idea here is to, remove misinformation that could lead to imminent physical harm by somebody maybe not receiving appropriate treatment or exposing themselves to the disease. So on vaccines specifically, in December last year, we started removing false claims about the vaccine, again, that fall within this category, and we've expanded the list of claims we remove about vaccines in general earlier this year in consultation with health experts, And we're continuing to make updates to these policies as trends emerge, including just this week, in fact. And we also remove, pages, groups and Instagram accounts that repeatedly violate these policies to get at those entities that might repeatedly spread this content. And then finally, the 3rd part of the strategy, addressing borderline content, which could lead to vaccine hesitancy, which falls into the reduce area. So we do reduce the distribution of certain content, about vaccines that doesn't otherwise violate our policies. 
And our approach here is really grounded in guidance that we've gotten from health experts that who've emphasized the idea that overcoming vaccine hesitancy really depends on people being able to ask legitimate questions about safety and efficacy and get those questions answered by trusted sources. But at the same time, we also realize that certain of this content could lead to hesitancy, so we reduce its distribution. And similarly, for content

@NameRedacted247 - Name Redacted

11. Here is the entire YouTube video where Aaron and members from Twitter & Google discuss misinformation censoring https://www.youtube.com/watch?v=hB_YNbnt8x4&t=90s

@NameRedacted247 - Name Redacted

12. Brazil Election misinfo censorship. archive.vn/JSgET

@NameRedacted247 - Name Redacted

13. Philippines Election misinfo censorship. archive.vn/Mm15a

@NameRedacted247 - Name Redacted

14. Russia/Ukraine War misinfo censorship. archive.vn/9jAkq

@NameRedacted247 - Name Redacted

15. Aaron tweeted that the CIA backs insurgency groups archive.vn/i8KiE

@NameRedacted247 - Name Redacted

16. “As a current combatant against misinfo and former intelligence officer” archive.vn/9jAkq

@NameRedacted247 - Name Redacted

17. Climate change censorship & again, Aaron states that Facebook partners “with more than 80 independent fact-checking organizations” archive.vn/gArWb archive.vn/6ijCS

@NameRedacted247 - Name Redacted

18. Deborah B. (She/Her). Current Facebook Trust & Safety. Former CIA Analyst 15 years. https://www.linkedin.com/in/deborah-b-219225225/

Deborah B. | LinkedIn View Deborah B.’s professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Deborah B. discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

19. Scott S. (He/Him). Current Facebook Senior Manager Trust & Safety. Former CIA 7 years. https://www.linkedin.com/in/scottbstern/

Scott S. | LinkedIn With over 16 years of experience in product management, design thinking, data analytics,… | Learn more about Scott S.'s work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

20. Bryan Weisbard. Current Facebook Trust & Safety. Formerly 9 years of “multiple senior level leadership positions in US Government Intelligence Community.” Former Twitter Online Safety & Security Analysis 4 years. Former YouTube Trust & Safety 1 year. https://www.linkedin.com/in/bryanweisbard/

Bryan Weisbard - Meta | LinkedIn Global Trust & Safety, Security, Risk Management, Data Privacy, Customer Success and Operations leader with a compelling combination of private sector and U.S. Government experience. Proven record of advising executives on how best to manage/mitigate risk, build product solutions, and enable business operations. Multi-disciplined skill set in online platform trust and safety product management, investigations, due diligence, intelligence analysis, physical security, customer success, and business operations. Strong communication skills with an ability to adapt to diverse corporate cultures. | Learn more about Bryan Weisbard’s work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

21. Hagan Barnett. Current Facebook Trust & Safety Operations Lead. Former self-employed contractor: CIA 1 year, Booz Allen 4 years, US Department of Treasury 3 years. https://www.linkedin.com/in/haganbarnett/

Hagan Barnett | LinkedIn View Hagan Barnett’s professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Hagan Barnett discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

22. Jeff Lazarus. Current Facebook Trust & Safety. Former Apple Trust & Safety 1 year. Former Google Trust & Safety 4 years. Former CIA 5 years. https://www.linkedin.com/in/jeff-lazarus-b76846191/

Jeff Lazarus | LinkedIn View Jeff Lazarus’ profile on LinkedIn, the world’s largest professional community. Jeff has 5 jobs listed on their profile. See the complete profile on LinkedIn and discover Jeff’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

23. Chon Rosa. Current Facebook Trust & Safety. Former US Army Intelligence & Security Command 4 years. https://www.linkedin.com/in/chon-c-rosa/

@NameRedacted247 - Name Redacted

24. Jason Barry. Current Facebook Trust & Safety Manager. Former DHS 7 years. https://www.linkedin.com/in/jason-barry-808536a9/

Jason Barry | LinkedIn View Jason Barry’s profile on LinkedIn, the world’s largest professional community. Jason has 3 jobs listed on their profile. See the complete profile on LinkedIn and discover Jason’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

25. Rick Cavalieros. Current Facebook Trust & Safety Manager. Former FBI 21 years. https://www.linkedin.com/in/rick-cavalieros-17a1198/

Rick Cavalieros | LinkedIn View Rick Cavalieros’ professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Rick Cavalieros discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

26. Sandeep A. (He/Him). Current Senior Investigator Trust & Safety. Former NSA SIGINT Lead Analyst 4 years. https://www.linkedin.com/in/sandeep-abraham/

Sandeep A. | LinkedIn View Sandeep A.’s profile on LinkedIn, the world’s largest professional community. Sandeep has 2 jobs listed on their profile. See the complete profile on LinkedIn and discover Sandeep’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

27. Amarpreet G. (She/Her). Current Facebook Product Integrity, Elections. Former FBI 6 years. https://www.linkedin.com/in/amarpreet-ghuman/

Amarpreet G. | LinkedIn View Amarpreet G.’s profile on LinkedIn, the world’s largest professional community. Amarpreet has 5 jobs listed on their profile. See the complete profile on LinkedIn and discover Amarpreet’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

28. Brian Kelley. Current Facebook Law Enforcement Outreach Manager. Former FBI 7 years. https://www.linkedin.com/in/briankelley0717/

Brian Kelley | LinkedIn Experienced Lead Associate with a demonstrated history of working in the law enforcement and management consulting industry. Skilled in Counterterrorism, Criminal Law, Criminal Investigations, Law Enforcement, and HUMINT. Strong professional with a Doctor of Law - JD focused in Law from Suffolk University Law School. | Learn more about Brian Kelley's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

29. Aleah Houze. Current Facebook Product Policy Manager. Former NSA 7 years. https://www.linkedin.com/in/aleah-houze/

Aleah Houze | LinkedIn Expertise in counseling product teams to build with safety, privacy, transparency, and expression in mind. I love building systems and frameworks that enable organizations to operate in a consistent, principled way - efficiently. In both the private sector and government I've thrived in environments that require agility, curiosity, the ability to work across organizations and cultures, and strong analytical and communications skills. | Learn more about Aleah Houze's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

30. Shawn Turskey. Current Facebook Global Director Security Investigations. Former NSA 19 years. Former US Cyber Command 4 years. https://www.linkedin.com/in/shawnturskey/

Shawn Turskey | LinkedIn In Sept 2023, I took on new challenges and opportunities at Snap.

In Sept 2022, I… | Learn more about Shawn Turskey's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

31. Mike Torrey. Current Facebook Security Engineer Investigator. Former NSA 3 years. Former CIA 9 years. https://www.linkedin.com/in/mike-torrey-01658b14a/

Mike Torrey | LinkedIn Expert with extensive experience analyzing and disrupting cyber threats. Extensive public and private sector experience in cyber threat intelligence and response, including against information operations and advanced persistent threats. Experience includes developing whole of government strategies for countering threats, while also supporting human and technical operations. Exceptional research and analysis capabilities combined with the ability to convey complex technical and geopolitical issues through writing and briefing. Substantive technical expertise and ability to analyze digital collections for Intelligence value. | Learn more about Mike Torrey's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

32. Corey Ponder. Current Facebook Senior Strategist. Former Policy Consultant DHS 7 months. Former CIA 6 years. Former Policy Advisor Google 2 years. https://www.linkedin.com/in/coreytponder/

Corey P. | LinkedIn View Corey P.’s professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Corey P. discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

33. John Papp (He/Him). Current Facebook Infrastructure ASIC Sourcer. Former DIA 4 years. Former CIA 12 years. https://www.linkedin.com/in/johnpapp/

John L. Papp, Jr. | LinkedIn As a former Sr. Intelligence Officer for the CIA, I come to Technical Sourcing with a… | Learn more about John L. Papp, Jr.'s work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

34. Nick Lovrien (He/Him). Current Facebook Chief Global Security Officer. Current board director, US State Department Overseas Security Advisory Council. Former CIA 5 years. https://www.linkedin.com/in/nick-lovrien-cpp-a5a98392/

Nick Lovrien, CPP | LinkedIn Nick Lovrien, CPP (He/Him), is the Vice President and Chief Global Security Officer at Meta, the $120-billion company behind Facebook, Instagram, WhatsApp, and other services that connect billions of people and communities around the world. With over 20 years of experience in the security industry, Nick leads a world-class global security organization that balances the needs of security, business operations, and innovation.

Nick is a recognized expert in global security strategy, risk management, geopolitics, and national security, with a unique background in both the public (CIA) and private sectors. He serves on the Board of Directors for the US Department of State Overseas Security Advisory Council, the International Security Foundation, and the Silicon Valley Leadership Group. He has received multiple awards and honors, including the Don Walker Chief Security Officer Of The Year Award in 2020. Nick is also a passionate advocate for diversity, equity, and inclusion, and a proud executive sponsor for Meta's LGBTQ+ resource group. He is fluent in English, Portuguese, and Spanish. | Learn more about Nick Lovrien, CPP's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

35. Cameron H. Current Facebook Workflow Risk Project Manager. Former CIA 4 years. https://www.linkedin.com/in/cameron-h-759a9b191/

Cameron H. - Meta | LinkedIn View Cameron H.’s profile on LinkedIn, the world’s largest professional community. Cameron has 9 jobs listed on their profile. See the complete profile on LinkedIn and discover Cameron’s connections and jobs at similar companies. linkedin.com

@NameRedacted247 - Name Redacted

36. Andi Allen (She/Her). Current Facebook Senior Technical Recruiter. Current “Talent Partner” for https://helpukraine22.org/. Former CIA 4 years. https://www.linkedin.com/in/andi-allen-634a9a89/

Help Ukraine | Operation Palyanytsya Help UKRAINE 22 OPERATION PALYANYTSYA. Support is provided both financially and in humanitarian and medical supplies. Having a presence in the US, EU, and Ukraine gives us both fiscal and supply flexibility. We have reliable partners for fund distribution and a safe yet agile supply chain. helpukraine22.org
Andi Allen | LinkedIn I've interpreted foreign intelligence for the President of the United States as a CIA… | Learn more about Andi Allen's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

37. Travis M. Current Facebook Technical Investigator. Former NSA 10 years. https://www.linkedin.com/in/travis-m/

Travis M. - Discover Financial Services | LinkedIn Experience: Discover Financial Services · Education: Illinois State University · Location: Normal · 500+ connections on LinkedIn. View Travis M.’s profile on LinkedIn, a professional community of 1 billion members. linkedin.com

@NameRedacted247 - Name Redacted

38.   Keith Pridgen. Current Facebook Program Manager. Former NSA 2 years. Former US Navy Information Warfare Officer 7 years. https://www.linkedin.com/in/keithpridgen/

Keith Pridgen | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

39.   Daniel Kaiser. Current Facebook Research Data Scientist. Former NSA 2 years. https://www.linkedin.com/in/daniel-kaiser-a397a419b/

Daniel Kaiser | LinkedIn • Full stack data scientist, leader of global interdisciplinary technical teams comprised of Data Engineers, Machine Learning Engineers, Data Architects, Software Engineers, and Data Scientists.
• Exceptionally skilled in deep learning with neural networks, machine learning, applied mathematics, and algorithmic design.
• Expert knowledge of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) cloud environments.
• Proven experience in big data analytics, including analyzing data on a Hadoop cluster.
• Presented to key stakeholders and corporate executives.
• Experienced in model development, testing using Anaconda/pip, and deployment using Docker images running on Kubernetes cluster and on custom cloud architectures.
• Deep knowledge of recommender systems, computer vision, natural language processing (NLP), time series analysis, and Bayesian optimization.
• Used wide array of Python packages including XGBoost, PyTorch, TensorFlow, Keras, gensim, scikit-learn, GPyOpt, pandas, NumPy, and PyMongo.
• Fluent in Python/Jupyter Notebooks, Spark (PySpark), Java, C++, R, Pig, SQL, MongoDB, and Matlab. | Learn more about Daniel Kaiser's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

40.   Jerrod Lowmaster. Current Facebook/Instagram Data Scientist. Former NSA 5 years. https://www.linkedin.com/in/jerrodlowmaster/

Jerrod Lowmaster | LinkedIn Data Scientist with experience in large scale analytics - member acquisition, connection growth, member communications and email deliverability, and video advertising marketplace and auctions.

Big Data
Hadoop - MapReduce - Apache Pig - Hive
Scripting and Scientific Computing in Python - IPython - NumPy - Pandas - Matplotlib - Bokeh - scikit-learn - Flask
Databases - Data Warehousing - MySql - ETL
Statistics - Econometrics - Machine Learning
Linux - Bash Scripting
Funnel Analysis | Learn more about Jerrod Lowmaster's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

41.   Gabrielle Johnson (She/Her). Current Facebook Platform Investigator. Former NSA Deputy Office Chief 2 years. https://www.linkedin.com/in/gabrielle-t-johnson/

Gabrielle Johnson | LinkedIn I attribute my successes to both my own aptitude and ability to overcome obstacles, but… | Learn more about Gabrielle Johnson's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

42.   Michael Khbeis (He/Him). Current Facebook Director of Operations. Former NSA 10 years. https://www.linkedin.com/in/michael-khbeis-3201778/

Michael Khbeis | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

43.   Josh Bulluck. Current Facebook SPARQ Manager. Former NSA Signals Intelligence Analyst 2 years. Former US Army Intelligence Analyst 7 years. https://www.linkedin.com/in/jsbulluck/

Josh Bulluck | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

44.   Eric Gonzalez. Current Facebook Systems Project Manager. Former NSA 3 years. Former US Navy Cryptologic Warfare Officer 2 years. https://www.linkedin.com/in/erictgonzalez/

Eric Gonzalez | LinkedIn Technical Program Management & Cybersecurity Professional with 10+ years of experience:

- Leading Technical Teams in Developing & Deploying Software Products for 50k+ End Users

- Large Scale (1m+) Data Labeling support for AI/ML model training.

- Driving Department of Defense Information Security Compliance Standards for multiple organizations

- Implementing Privacy & Technical Security Controls (i.e. SOC2)

- Developing Threat Intelligence Analysis Tradecraft

- Information & Influence Operations Analysis

- Big Data Collection & Analysis

- Building cross-functional teams

- Advising senior executive decision makers.

- Effectively defining customer product security requirements

- Data Privacy & Digital Rights advocate ✊🏾

- Podcast host | The Tech Amendment 🎙

- DJ | https://soundcloud.com/doyouwepa 🤘🏾 🎧

Honored to have sailed ships, flown planes, and served in the Intelligence Community in support of U.S. National Security. 🇺🇸

Credentials:
M.Eng., Cybersecurity Policy, PMP, CSM, AWS-CCP | Learn more about Eric Gonzalez's work experience, education, connections & more by visiting their profile on LinkedIn
linkedin.com

@NameRedacted247 - Name Redacted

45.   Seth Summersett. Current Facebook Head of Security Partners. Former NSA 8 years. https://www.linkedin.com/in/seth-summersett-057b081a0/

Seth Summersett | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

46.   Brian McFarland (He/Him). Current Facebook Security Partner. Former NSA Cryptographer 3 years. https://www.linkedin.com/in/brian-mcfarland-3a64126/

Brian McFarland | LinkedIn Accomplished embedded systems security engineer recognized as a secure processing subject… | Learn more about Brian McFarland's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

47.   Mike D. Current Facebook Threat Intelligence Manager. Former FBI 13 years. https://www.linkedin.com/in/mike-d-48b149b3/

Mike D. | LinkedIn View Mike D.’s professional profile on LinkedIn. LinkedIn is the world’s largest business network, helping professionals like Mike D. discover inside connections to recommended job candidates, industry experts, and business partners. linkedin.com

@NameRedacted247 - Name Redacted

48.   Steve Goldman. Current Facebook Acute Issue Management. Former FBI 26 years. https://www.linkedin.com/in/steve-goldman-1b656943/

Steve Goldman | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

49.   Jennifer A. Current Facebook Global Intelligence Lead. Former Foreign Affairs Officer US Department of State. Former FBI 6 years. https://www.linkedin.com/in/jennifer-a-4b7632/

Jennifer A. | LinkedIn Experienced leader in both public and private security environments known for creative and collaborative problem solving, strategic mindset, integrity, innovation, and cross-functional leadership. | Learn more about Jennifer A.'s work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

50.   Steven S. Current Facebook Director & Associate General Counsel. Former DOJ Trial Attorney 5 years. Former FBI Deputy General Counsel 8 years. https://www.linkedin.com/in/steven-s-36819437/

Steven S. - United States | Professional Profile | LinkedIn Location: United States · 500+ connections on LinkedIn. View Steven S.’s profile on LinkedIn, a professional community of 1 billion members. linkedin.com

@NameRedacted247 - Name Redacted

51.   Cynthia Deitle (She/Her). Current Facebook Director, Associate General Counsel. Former FBI 19 years. https://www.linkedin.com/in/cynthia-deitle-jd-ll-m-3a9991141/

Cynthia Deitle, JD. LL.M - Meta | LinkedIn I am a civil rights attorney, instructor, investigator, published author, and adjunct law professor. I joined Meta's Civil Rights Team in 2021 as the Director and Associate General Counsel to assist the company with building products and policies with civil rights in mind to support marginalized populations. I also bring communities and law enforcement officers together to collaborate on police reform efforts, enforce hate crime laws, and explore opportunities to keep individuals safe. I was fortunate to be featured in a 2011 episode of 60 Minutes regarding an unsolved racially motivated hate crime and I appeared in every episode of the first season of the Injustice Files on Investigation Discovery to profile three bias-motivated murders. I was featured in episode 17 of FBI True involving the capture of a Top Ten fugitive. I have granted interviews to the New York Times, the Washington Post, National Public Radio, and the British Broadcasting Corporation. I volunteer my time with nonprofits focused on Type 1 diabetes, human trafficking, and criminal justice reform. · Experience: Meta · Education: New York University School of Law · Location: Knoxville · 500+ connections on LinkedIn. View Cynthia Deitle, JD. LL.M’s profile on LinkedIn, a professional community of 1 billion members. linkedin.com

@NameRedacted247 - Name Redacted

52.   Tromila Maile. Current Facebook FIU Investigator. Former FBI Intelligence Analyst 3 years. https://www.linkedin.com/in/tromila-maile/

Tromila Maile | LinkedIn I am an analyst who loves to use both the quantitative and qualitative aspects of a topic to develop the best strategy to mitigate risk associated with a variety of crimes, most specifically human trafficking, human smuggling, drug trafficking, and terrorism. I believe the numbers can tell a compelling story, but it is only complete when I am able to add to it the human dimension not available in a table or a spreadsheet. | Learn more about Tromila Maile's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

53.   Tim Hadley. Current Facebook Data Center Manager. Former FBI 17 years. https://www.linkedin.com/in/tim-hadley-3958888/

Tim Hadley | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

54.   Jeffrey K. Van Nest. Current Facebook In-House Counsel. Former FBI 20 years. https://www.linkedin.com/in/jeffrey-k-van-nest-0812b81a/

Jeffrey K. Van Nest | LinkedIn Former career FBI Supervisory Special Agent / Chief Division Counsel now serving as in-house counsel for Meta Platforms, Inc. | Learn more about Jeffrey K. Van Nest's work experience, education, connections & more by visiting their profile on LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

55.   Meredith Burkett. Current Facebook Anti-Scraping Investigator. Former FBI 6 years. https://www.linkedin.com/in/meredith-burkett-57650bb4/

Profile Not Found | LinkedIn LinkedIn strengthens and extends your existing network of trusted contacts. LinkedIn is a networking tool that helps you discover inside connections to recommended job candidates, industry experts and business partners. linkedin.com

@NameRedacted247 - Name Redacted

56.   Leo M. Current Facebook Threat Investigator. Former FBI 7 years. Former DOD 4 years. https://www.linkedin.com/in/leo-m-359567169/

Profile Not Found | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

57.   Keith Allan. Current Facebook Corporate Strategy & Global Operations. Former FBI 5 years. https://www.linkedin.com/in/keitheallan/

Keith Allan - United States | Professional Profile | LinkedIn Location: United States · 405 connections on LinkedIn. View Keith Allan’s profile on LinkedIn, a professional community of 1 billion members. linkedin.com

@NameRedacted247 - Name Redacted

58.   Anthony S. Current Facebook Business Integrity Specialist. Former FBI 8 years. https://www.linkedin.com/in/kysmith99/

Anthony S. | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

59.   Christina F. Current Facebook Security Engineer Investigator. Former FBI 5 years. https://www.linkedin.com/in/christinafowler1/

Profile Not Found | LinkedIn linkedin.com

@NameRedacted247 - Name Redacted

60. Reminder: all of these Facebook employees publicly list their work experience. I've written their names as they appear on their LinkedIn profiles. Anyone can do a simple LinkedIn search and find the same. Current company: Facebook/Meta. Past company: FBI/CIA/DHS/NSA. 115 results.
