Facebook will expand its action against QAnon by restricting #SaveOurChildren, one of the hashtags supporters of the conspiracy theory often append to their social media posts.
Starting Friday, the company will be “limiting the distribution” of the hashtag, spokesperson Emily Cain said in a statement to CNN Business, meaning that posts using the hashtag will have their visibility reduced in the News Feed and people clicking on the hashtag will not be able to see the aggregated results.
Instead, they will see a link to a list of “credible child safety resources,” Cain added.
Save The Children is a respected humanitarian organization that has been around for more than 100 years, but QAnon followers have hijacked and bastardized the name “Save The Children” as a way to spread baseless conspiracy theories about prominent Democrats, including former Vice President Joe Biden.
Posts about the conspiracy theory often include the hashtags #SaveTheChildren or #SaveOurChildren. Facebook said it will only be limiting the distribution of the latter for now, and will continue to monitor different hashtags and other methods by which QAnon supporters might try to continue evading detection.
Searching for #SaveTheChildren shows a prompt from Facebook asking if you’re looking for the humanitarian organization with a link to its website. You also have the option of proceeding to the search results. Other similar hashtags show a link to a page with child safety resources in addition to the regular search results.
Children need to be “saved,” QAnon followers believe, from a cabal of evil Democrats. It is essentially the same conspiracy theory that was pushed as part of “Pizzagate” in 2016, which falsely alleged a Washington, DC, pizza shop was at the center of a child sex trafficking ring.
The “Save The Children” charity has nothing to do with QAnon and has publicly sought to distance itself from the conspiracy theory and its followers. Other child protection organizations have said these conspiracy theories are creating dangerous distractions from the real issue of child exploitation.
But the platforms have allowed QAnon content to grow and spread for years. There are now multiple Republicans running for Congress who have expressed support for QAnon.
In August, President Donald Trump praised QAnon followers for supporting him.
“I don’t know much about the movement other than I understand they like me very much, which I appreciate,” Trump said in the White House briefing room.
Last year an FBI office warned that Q adherents are a domestic terrorism threat.
— CNN Business’ Donie O’Sullivan contributed to this report
Staffers at Wikipedia’s parent organization and the volunteer editors who maintain its millions of pages have a plan to ensure that election-related entries aren’t improperly edited.
Last week, the Wikipedia community placed “extended protections” on the 2020 United States presidential election page, which means only experienced volunteers with at least 500 edits and 30 days on the platform can make changes. Other pages related to the election and presidential candidates already have protections, like the articles for Hunter Biden, the son of Democratic presidential nominee Joe Biden, Jared Kushner, President Donald Trump’s son-in-law, and the pages for both the Trump and Biden campaigns.
Generally, anyone can go into an article and make a change. However, there are varying levels of protections for what Wikipedia calls contested pages, which range from political topics to more obscure subjects over which editors disagree.
There are over 70 English-language articles about the 2020 election, according to the Wikimedia Foundation, Wikipedia’s parent. It said more articles may be protected as Election Day nears.
Editors will be monitoring a list of relevant articles on Election Day and beyond. If someone makes an edit to those pages, over 500 people will get an email alerting them that there could be something worth checking.
Since late August, some Wikimedia staff have been running through different scenarios of what could happen on its site during the election, such as how it would handle malicious content or a coordinated attack by multiple accounts making edits across several Wikipedia pages on Election Day.
“We are under no illusions that we will prevent every bad edit from making it onto the site,” said Wikimedia chief of staff Ryan Merkley, who leads its new internal US election task force. “We think our responsibility is to make sure that we are as prepared to respond and that we can do it as swiftly as possible and ideally prevent its spread broadly.”
Ahead of Election Day, Instagram has moved to temporarily restrict a popular way to browse posts.
Instagram announced that it will temporarily hide the “Recent” tab from showing up on all hashtag pages — whether they’re related to politics or not. The company said it hopes the move will help prevent the spread of misinformation and harmful content related to the election.
Hashtag pages will still work; they’ll just show only “Top Posts” as determined by the platform’s algorithms. This may include some recent posts.
An Instagram spokesperson said the change was rolled out Thursday evening, and there is no specific timeline for when the action will end.
Other social platforms have also implemented similar temporary changes ahead of Election Day. For example, Twitter is encouraging users to quote tweet rather than to retweet, hoping people will add context or a reaction before spreading information.
Twitter labeled a video from the Russian-state controlled broadcaster RT as election misinformation on Thursday.
RT is registered with the US Justice Department as an agent of the Russian government. It is the first time Twitter has taken action against RT for US election misinformation in this way, Twitter confirmed to CNN.
The four-minute video posted by RT was titled “Questions mount amid voter fraud, rigging claims ahead of #USelection.”
Twitter deactivated the retweet feature on the video, to reduce how much it can be shared, and placed a label over it that read, “Some or all of the content shared in this Tweet is disputed and might be misleading about how to participate in an election or another civic process.”
The Kremlin uses RT to spread English-language propaganda to American audiences, and the network was part of Russia’s election meddling in 2016, according to US intelligence agencies.
A report released by the US intelligence community in 2017 said RT has historically “portrayed the US electoral process as undemocratic” and amplifies false narratives claiming that “US election results cannot be trusted.”
The four-minute video that RT posted Thursday touches on many of these themes. It raises concerns about “fraud” and echoes many of the lies President Donald Trump has spread about mail-in voting. The segment cites Fox News, which has championed many of Trump’s attacks against the electoral process. It highlights isolated incidents of ballot mishaps, many of which have already been deemed by local authorities to be accidents and errors — and not fraud.
Earlier this year, an internal intelligence bulletin issued by the Department of Homeland Security said Russia was amplifying disinformation about mail-in voting as part of a broader effort “to undermine public trust in the electoral process.”
Facebook has hired a network of fact-checkers across America. CNN talks to two who have received threats for simply doing their jobs during the 2020 election cycle.
Ted Cruz yelled. His Democratic colleague Brian Schatz called the hearing in which he was speaking “a sham.” Committee chair Roger Wicker couldn’t pronounce the last name of Google’s CEO. Just another day on Capitol Hill for Big Tech.
In a contentious hearing on Wednesday, the CEOs of Facebook (FB), Google (GOOG) and Twitter (TWTR) were questioned by senators on the Commerce Committee over their content moderation policies. Some demanded more transparency while others sought explanations on a few specific cases in which content was removed or labeled by platforms. Though the hearing was meant to focus on a crucial law, known as Section 230, that protects the companies’ ability to moderate content as they see fit, senators strayed from the brief and confronted the executives on other topics, including antitrust and election interference.
Schatz and other senators slammed the timing of the hearing, which comes less than a week before the US election. “This is bullying and it is for electoral purposes,” Schatz said. “Do not let the United States Senate bully you into carrying water for those who want to spread misinformation.”
Cruz angrily went after Twitter CEO Jack Dorsey, pressing him on the platform’s decision to restrict content posted by the New York Post. He concluded by shouting at Dorsey: “Mr. Dorsey, who the hell elected you and put you in charge of what the media are allowed to report and what the American people are allowed to hear, and why do you persist in behaving as a Democratic super PAC silencing views to the contrary of your political beliefs?”
TikTok said Wednesday it will reduce the distribution of claims of election victory before official results are confirmed by authoritative sources.
Eric Han, TikTok’s US head of safety, announced that premature claims of victory surrounding the 2020 election will be restricted if the Associated Press has not declared a result. Han also said the company is working with third-party fact-checkers who are “on expedited call during this sensitive time.”
“Out of an abundance of caution, if claims can’t be verified or fact-checking is inconclusive, we’ll limit distribution of the content,” Han added in a blog post. “We’ll also add a banner pointing viewers to our election guide on content with unverifiable claims about voting, premature declarations of victory, or attempts to dissuade people from voting by exploiting COVID-19 as a voter suppression tactic.”
The policy is similar to ones previously announced by other social media companies like Facebook and Twitter.
One of Facebook’s top executives in India — where it has more users than anywhere else — has resigned months after being linked to allegations of political bias and hate speech against the platform.
Ankhi Das, Facebook’s head of public policy in India, allowed a politician from the country’s ruling party to remain on its platform even though his anti-Muslim posts flouted its rules against hate speech, current and former Facebook employees told the Wall Street Journal in August. Das reportedly opposed banning the politician (which Facebook ultimately did weeks later) because doing so would hurt Facebook’s business in the country.
“Ankhi has decided to step down from her role in Facebook to pursue her interest in public service,” Ajit Mohan, the company’s vice president and managing director in India, said in a statement. “We are grateful for her service and wish her the very best for the future.”
Facebook has long faced controversies over harmful misinformation and hate speech in India, whose 600 million-plus internet users are increasingly important to its business as it’s locked out of China and looks for future growth.
The Indian government has repeatedly called on Facebook to do more to curb misinformation, particularly on its mobile messaging platform WhatsApp, after viral hoaxes in 2018 were linked to more than a dozen lynchings.
WhatsApp misinformation may be finding its way into the upcoming US presidential election, with Reuters reporting that misleading messages about Democratic candidate Joe Biden have been making the rounds on the private messaging service — particularly within the Indian-American community.
WhatsApp counts India as its largest market, with around 400 million users.
A misleading video clip of Democratic presidential candidate Joe Biden has been spreading on social media without any warning labels since Saturday after having been promoted by members of President Donald Trump’s inner circle.
In the 24-second clip from an interview with the podcast Pod Save America, which is hosted by four former members of the Obama administration, Biden is heard saying, in part, that “we have put together, I think, the most extensive and inclusive voter fraud organization in the history of American politics.”
The clip was first posted by RNC Research, a Twitter account operated by the Republican National Committee.
It’s clear in context that Biden is talking about an effort to combat voter suppression and provide resources for those seeking to vote, not an organized effort to perpetrate voter fraud.
The clip is part of a longer response by the former Vice President to a two-part question from host Dan Pfeiffer about Biden’s message to people who haven’t voted and those who already have.
In his response, Biden encouraged people to “make a plan exactly how you’re going to vote, where you’re going to vote, when you’re going to vote. Because it can get complicated, because the Republicans are doing everything they can to make it harder for people to vote, particularly people of color, to vote…”
He continued a few sentences later: “We have put together, I think, the most extensive and inclusive voter fraud organization in the history of American politics. What the President is trying to do is discourage people from voting by implying that their vote won’t be counted, it can’t be counted, we’re going to challenge it…”
Biden goes on to explain that his campaign has arranged for legal assistance for people who feel their right to vote has been challenged.
On Saturday evening, White House Press Secretary Kayleigh McEnany posted the shortened clip from her personal Twitter account saying: “BIDEN ADMITS TO VOTER FRAUD!”
Fact-checking website Snopes debunked the claim as false.
McEnany’s post has been retweeted more than 32,000 times. The clip has been viewed 7.9 million times on Twitter. Eric Trump also posted the video on both Twitter and Facebook without any additional commentary.
President Trump’s verified YouTube account also posted the clip, with the title: “Joe Biden brags about having ‘the most extensive and inclusive VOTER FRAUD organization’ in history.” It’s been viewed nearly 500,000 times.
A Twitter spokesperson said it will not label the tweets by McEnany or Eric Trump. The social network did not provide further detail.
Facebook did not immediately respond to requests for comment, but the platform has not added any information labels or fact-checking resources to the clip.
According to Twitter’s rules, users may not “deceptively share synthetic or manipulated media that are likely to cause harm.” However, it’s unclear if a clip taken out of context, but not technologically manipulated, would fall into this category.
Facebook’s manipulated media policy states users should not post video that has been “edited or synthesized … in ways that are not apparent to an average person, and would likely mislead an average person to believe that a subject of the video said words that they did not say.”
A YouTube spokesperson said the video does not violate its rules.
“While the video shared with us by CNN does not violate our Community Guidelines, we have robust policies prohibiting deceptive practices such as technically manipulating content in a way that misleads users (beyond clips taken out of context) and may pose egregious harm,” said Ivy Choi, a YouTube spokesperson.
The Trump campaign did not respond to a request for comment. Biden’s national press secretary TJ Ducklo said: “The President of the United States has already demonstrated he’s willing to lie and manipulate our country’s democratic process to help himself politically, which is why we have assembled the most robust and sophisticated team in presidential campaign history to confront voter suppression and fight voter fraud however it may present itself.”
When asked if the RNC stood by the clipped video, and if it’s the official position of the RNC that Biden was endorsing and explicitly encouraging voter fraud, RNC Rapid Response Director Steve Guest said: “You should ask Joe Biden if he stands by the words he uttered, not us for sharing them. It’s not the RNC’s responsibility to clarify for the Biden campaign their candidate’s repeated blunders.”