YouTube removes videos that could cause "real-world harm" and restricts the reach of misinformation that poses no such risk, the US company said on Thursday.
Content that contains misinformation but is unlikely to cause harm will not be recommended to users, said the video-sharing platform, which is owned by Google parent Alphabet.
"Just because content is identified as misinformation does not mean it violates our content policies. For example, if somebody uploads a video claiming no human has ever landed on the moon, we think that video should be allowed on YouTube as there is no risk of real-world harm. But that doesn't mean we want to recommend that content," said Timothy Katz, global head of responsibility at YouTube, at a virtual media roundtable for Indian journalists.
YouTube removed over 14.06 million channels, totalling 86.76 million videos, worldwide between April and June this year, according to a transparency report by Google. As many as 93.5 per cent of the removed channels were taken down for spam or misleading content; just 0.7 per cent were removed for misinformation.
YouTube took down 2.07 million videos in India, the most for any country, between April and June. India was followed by the US (1.32 million) and Russia (0.57 million).
Videos containing misinformation about sensitive events are removed once they are identified. "The criticality of the event matters a lot. In an election, if somebody uploads a video that has misinformation about how to vote, we would remove that content from our platform," said Katz.
YouTube said it has a '4R' approach: removing content that violates policies, raising high-quality information in rankings and recommendations, reducing the impact of borderline content, and rewarding trusted creators on its platform.
"Raise is the way in which we try to connect users to reliable information. For example, in Telangana, users can be connected to high-quality information coming from credible and authoritative sources. With that approach, we ensure the platform stays safe through the elections," said Ishan John Chatterjee, director of YouTube India, at the roundtable.
The platform's approach to videos during an election season is similar to its regular safety policy. "In the four Rs framework, for the first 'R' we have clear community guidelines which dictate the kind of content allowed on the platform. That includes misinformation. When videos violate our policies, we act to take them down," said Chatterjee.
YouTube uses machine learning and artificial intelligence to identify misinformation and fake news. In addition, human reviewers check videos for compliance with its content guidelines.
"We have a high volume of content that is uploaded to YouTube. That volume helps fortify our machine learning systems to be really attuned and to detect such content as soon as possible," said Katz.
Between April and June 2023, YouTube's automated flagging systems removed over 6.8 million videos worldwide, while user flagging led to the removal of 0.46 million. The transparency report added that eight videos were removed at the request of a government agency.
YouTube added that one in two Indian-language internet users consumed news in India. In 2022, 94 per cent of users in India reported using YouTube to gather information and knowledge, it said.