Sachin Tendulkar’s deepfake video sparks concern: How to protect yourself


Former cricketer Sachin Tendulkar at the Mumbai event marking the silver jubilee of his historic 'Desert Storm' innings, played against Australia in Sharjah on April 22, 1998.

Sachin Tendulkar on Monday expressed concern over a viral deepfake video featuring him and urged social media platforms to curb the spread of such posts, which aim to spread misinformation.

The fabricated video showed the cricketer endorsing a mobile gaming application, adding to public concern over the rising misuse of the technology and its impact on society, especially on women and children.

Notably, Tendulkar is not the only celebrity to have been caught up in this deepfake artificial intelligence (AI) mess. In November, actress Rashmika Mandanna expressed similar concerns after a deepfake video of her went viral.

What is deepfake technology?

Deepfake artificial intelligence (AI) is an emerging technology used to create convincing but deceptive images, audio and video footage. “The underlying technology can replace faces, manipulate facial expressions, synthesise faces, and synthesise speech. Deepfakes can depict someone appearing to say or do something that they, in fact, never said or did and are aimed at spreading misinformation,” the United States Government Accountability Office (GAO) explains.

Apps undressing women gaining popularity

However, it is not just public figures whose privacy rights are at risk. Ordinary users are just as vulnerable to this worrying trend, which is increasingly being used in financial fraud and non-consensual pornography. Research conducted last year flagged how applications designed to ‘undress’ women in photos are gaining popularity. In September 2023 alone, 24 million people visited such undressing websites, the social network analysis company Graphika found.

Notably, the ‘State of Deepfakes’ report by Home Security Heroes, a US-based web security services company, also found that the number of deepfake videos increased fivefold between 2019 and 2023.

How to protect your data online

While social media platforms and governments across the world are still drawing up legislation and moderation methods to curb the spread of AI-driven misinformation, social media users are generally advised to take safeguards to protect their data and information from hackers.

According to the American nonprofit National Cybersecurity Alliance, users should act cautiously when sharing information on public platforms. “Limit the amount of data available about yourself, especially high-quality photos and videos, that could be used to create a deepfake,” the organisation says. It encourages users to enable the strongest privacy settings available to them, including two-factor authentication, which requires a one-time password sent to the user’s linked mobile number to verify logins to social media accounts.
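The one-time passwords used in two-factor authentication are often generated by standard algorithms rather than sent as purely random SMS codes. As an illustration only (not how any particular platform implements it), here is a minimal sketch of the HOTP algorithm from RFC 4226, on which authenticator apps are commonly based:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HMAC-based one-time password (RFC 4226)."""
    # The moving factor (counter) is encoded as an 8-byte big-endian integer.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep only the requested number of decimal digits, zero-padded.
    return str(code % 10 ** digits).zfill(digits)
```

Because both the server and the user's device derive the same code from a shared secret and a counter, a stolen static password alone is not enough to log in.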

The organisation also advises using watermarks on photos and videos to discourage misuse. Additionally, users should change their passwords often and make a habit of using a unique password for each account for enhanced security.
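The unique-password advice above can be followed without memorising anything unusual: a password manager, or even a short script, can generate a distinct random password per account. A minimal sketch using Python's standard secrets module (the account names are purely illustrative):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password of the given length."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per account, rather than one password reused everywhere.
passwords = {site: generate_password() for site in ("mail", "social", "bank")}
```

The secrets module is preferred over random for this purpose because it draws from the operating system's cryptographically secure randomness source.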

How to identify deepfake AI content

The US government’s Department of Homeland Security outlines a set of signs that may indicate that a video or an image is fake.

Identifying deepfake AI videos

1) A change of skin tone near the edge of the face
2) Double chins
3) A face that turns blurry when partially obscured by a hand or another object
4) Box-like shapes and cropped effects around the mouth, eyes, and neck
5) Unnatural blinking (or a lack of blinking) and unnatural movements
6) Changes in the background and/or lighting

Identifying deepfake AI audio and text

1) For audio, the US government suggests checking whether sentences are choppy
2) Whether the tone varies oddly, the phrasing sounds unnatural, or context is missing
3) Whether background sounds are consistent with the speaker’s presumed location

4) In the case of text, the agency advises watching for misspelt words, poor sentence flow, and missing context or sources of information

First Published: Jan 15 2024 | 6:54 PM IST
