
Indian cricket legend Sachin Tendulkar has taken aim at a gaming company over the use of his image in a deepfake promotion. 

It follows a growing trend of celebrities having their images used without permission in deepfake advertisements.

Tendulkar stated on X in response to the video: “These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social Media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and deepfakes.”

As AI has continued to grow, so has the threat of fraud, with an escalating number of celebrities targeted by deepfake impersonations.

Furthermore, the expansion of AI into the mainstream has changed the fight against fraud from all angles, as operators have sought to harness the same technology to detect and prevent it.

Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, expressed concern over the video shared by Sachin Tendulkar and said: “Deepfakes and misinformation powered by Artificial Intelligence are a threat to safety and trust of Indian users and represents harm and legal violation that platforms have to prevent and take down.”

Shaun Smith-Taylor, Group Director, Product Management at Eastnets, told CasinoBeats that the events involving Tendulkar underline how deepfakes have now crossed into the mainstream, adding that AI is the essential tool for combating this new threat of fraud.

He stated: “Sachin Tendulkar being impersonated to promote a gaming app shows deepfakes are now mainstream. And they don’t just affect the advertising industry. A deepfake can make it hard for a financial institution to verify a person’s identity in know-your-customer checks. The same approach can also trick customers & employees into making fraudulent payments with account takeover attacks and first-party fraud being the high profile targets for criminals.

“The only way to deal with the threat is by checking identities against a wide array of customer data points, such as online behaviour patterns, transaction histories, and social media interactions. This requires AI to sift through the information and make judgements. It also calls for ongoing real-time monitoring, rather than relying on a one-off check. Essentially, when we can’t rely on a person appearing to be who they are, we need many more, and much faster checks, using more data. And that can only be achieved with AI.”
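To illustrate the kind of multi-signal, continuously monitored check Smith-Taylor describes, the sketch below combines a handful of hypothetical customer data points (device history, login behaviour, transaction size, recent account changes) into a single risk score that is re-evaluated on every event rather than at onboarding only. The signal names, weights and thresholds are assumptions for illustration, not a description of Eastnets' product, and a production system would apply machine learning models to far richer data.

```python
# Illustrative sketch only: a toy multi-signal identity risk check.
# Signal names, weights and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class CustomerSignals:
    device_matches_history: bool   # known device/browser fingerprint
    typical_login_hours: bool      # behaviour pattern (time of day)
    transaction_amount: float      # payment currently being attempted
    avg_transaction_amount: float  # historical baseline for this customer
    recent_account_changes: int    # e.g. password/email edits this week

def risk_score(s: CustomerSignals) -> float:
    """Combine several weak signals into one risk score in [0, 1]."""
    score = 0.0
    if not s.device_matches_history:
        score += 0.3
    if not s.typical_login_hours:
        score += 0.2
    if s.avg_transaction_amount > 0 and s.transaction_amount > 5 * s.avg_transaction_amount:
        score += 0.3
    score += min(s.recent_account_changes, 3) * 0.1
    return min(score, 1.0)

def monitor(s: CustomerSignals) -> str:
    """Run on every event, not as a one-off check."""
    r = risk_score(s)
    if r >= 0.7:
        return "block and escalate for manual review"
    if r >= 0.4:
        return "step-up verification (e.g. re-authenticate)"
    return "allow"

# Example: unfamiliar device, off-hours login and an unusually large payment
print(monitor(CustomerSignals(False, False, 9000.0, 400.0, 2)))
```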

At the end of last year, the International Organisation of Securities Commissions (IOSCO) highlighted the significant growth of online fraud targeting retail investors.

In particular, IOSCO warned that risks of financial harm can lurk where investors least expect them, ranging from misleading social media adverts and the online promotion of risky investments to outright investment scams, which often involve digital assets.

Jean-Paul Servais, Chairman of IOSCO, stated at the time of the warning: “Buying investment products and services online can bring significant benefits for retail investors such as convenience and reduced costs.

“However, the easy availability of investment products and services online brings an increased risk of fraud. Retail investors are at risk of falling victim to ‘bad actors’, who take advantage of them through online scams, which can lead to significant losses of money.

“We will continue our work to combat online fraud through rigorous enforcement efforts and by informing retail investors so they are vigilant to the risks and can take precautions to avoid frauds and scams. 

“We urge retail investors to only use reliable sources of information; to not invest too much money in one single product; and to never invest more money than you can afford to lose.”