By Agence France-Presse
YouTube said Monday it would remove election-related videos that are “manipulated or doctored” to mislead voters, as part of its efforts to stem online misinformation.
The Google-owned service said it was taking the measures as it strives to become a “more reliable source” for news and to promote a “healthy political discourse” amid heightened fears over video fakes around the world.
Leslie Miller, YouTube’s vice president of government affairs and public policy, said in a blog post that the service’s community standards prohibit “content that has been technically manipulated or doctored in a way that misleads users… and may pose a serious risk of egregious harm.”
The latest YouTube statement, which seeks to clarify its global policy on election misinformation, was announced as the US presidential primary season kicks off, with caucuses being held in Iowa on Monday and the first primary next week in New Hampshire.
The move comes amid growing concern about so-called “deepfake” videos, which are altered using artificial intelligence to make fabricated events appear authentic, as well as “shallow” fakes that rely on more rudimentary editing techniques to deceive viewers.
Deepfakes and manipulated content have raised fears over the election process worldwide, amid notable incidents in Britain and India where videos were used for disinformation.
YouTube noted the policy also bans content that aims to mislead people about voting rules or the US census process now underway.
Online platforms have come under pressure to root out misinformation in the wake of foreign manipulation efforts in the US in 2016 and elsewhere in recent years.
In the US, critics of online platforms have claimed not enough is being done to curb false claims by candidates themselves.
Concerns over US campaign
Google last year said it was stepping up efforts on election misinformation and would remove false claims in ads, including on YouTube, but the new statement appeared to offer specifics on certain kinds of content that will be blocked.
“The underlying standards YouTube explains and illustrates today do not appear to be brand new, but the company deserves praise for setting them out in clear terms and warning that it intends to enforce them vigorously,” said Paul Barrett of the New York University Center for Business and Human Rights and author of a 2019 study on political disinformation.
“YouTube’s statement today appears to reiterate its determination not to allow its users to be conned during the 2020 election campaign.”
The announcement underscores differing policies by major social networks on disinformation. Twitter has said it would ban all political ads for candidates, while Facebook has maintained a hands-off policy for political speech and ads, with some exceptions for content that misleads users about voting times and places.
“Each platform is weighing free expression against voter manipulation, but the information operations work across platforms and exploit these loopholes,” said Karen Kornbluh, a German Marshall Fund researcher who follows political disinformation.
“That’s why the platforms should come together and develop shared, clear, consistent, enforceable rules to protect voters from becoming easy marks for disinformation campaigns.”
Monday’s statement offered specific examples of content that would be removed from YouTube.
Banned content includes any video “manipulated to make it appear that a government official is dead” or that “aims to mislead people about voting or the census processes, like telling viewers an incorrect voting date.”
YouTube will also take down any false claims about eligibility requirements for political candidates, “such as claims that a candidate is not eligible to hold office based on false information about citizenship status requirements.”
A separate statement by Google’s head of online trust and safety, Kristie Canegallo, said the internet giant has stepped up its efforts to stem abuse and false information on elections globally.
“Our trust and safety teams span the globe to monitor and disrupt account hijackings, inauthentic activity, disinformation campaigns, coordinated attacks, and other forms of abuse on our platforms on a 24/7 basis,” Canegallo said.
“We take seriously our responsibility to protect our users from harm and abuse, especially during elections.”
She added that Google was working with other technology companies and government agencies including the FBI “to identify bad actors, disable their accounts, warn our users about them, and share relevant information with industry officials and law enforcement.”