TikTok has announced a ban on deepfakes as it, like other social media platforms, aims to prevent the spread of misinformation.

The timing of the change is notable because of the upcoming US election and the threats made to ban TikTok in the US by President Donald Trump.

Of course, social networks do not want to be seen as aiding election meddling in any way, especially after the Cambridge Analytica saga and the scrutiny Twitter and Facebook have faced over fact-checking and censoring posts.

In a statement from TikTok's US lead Vanessa Pappas, the social media giant said:

"We're adding a policy which prohibits synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm.

"Our intent is to protect users from things like shallow or deep fakes, so while this kind of content was broadly covered by our guidelines already, this update makes the policy clearer for our users."

TikTok adds that it is working with outside fact-checkers (PolitiFact and Lead Stories) ahead of the 2020 election, and that it will allow users to report election-related misinformation in its app. The company is also working with the Department of Homeland Security to combat foreign interference in the election.

"While TikTok isn't the go-to app to follow news or politics, we're focused on supporting our users with education and authoritative information on important public issues," says Pappas.

"Misinformation, disinformation, and threats to civic engagement are challenges no platform can ignore. By working together as an industry with experts and civil society organizations, we can better protect the civic processes that are so essential to our users."

Pappas also makes it clear the platform doesn't accept political ads "because the nature of paid political ads isn't something we think fits with the experience our users expect on TikTok." This seems to imply that engagement on such content would be pretty low anyway.