The need to regulate Deepfakes


Deepfakes are digital media (videos, audio, and images) manipulated using artificial intelligence. They are a new tool for spreading computational propaganda and disinformation at scale and with speed. The potential damage deepfakes can inflict on individuals, organizations, and societies is vast.

Advances in Artificial Intelligence (AI) and Machine Learning have enabled computer systems to create synthetic videos, aka deepfakes. A deepfake video can show a person saying or doing something they never said or did. This makes it possible to fabricate media through face swapping, lip syncing, and puppeteering, usually without the consent of the person being impersonated. This can not only torment a person psychologically but also endanger their security and reputation, and even cost them their career.

Deepfakes can be used to depict a person indulging in antisocial behaviour such as hate speech or incitement to violence, thereby opening a whole new avenue for identity theft.

They can be used to fuel propaganda and fake narratives. They can be used for financial fraud and can even influence elections. They can impair the public's ability to distinguish fact from fiction and undermine trust in institutions. They can amplify pre-existing social tensions and further divide people. Some leaders may also use them to stoke populism and consolidate their power.

Lately, deepfakes have been used to target women in particular. Women are almost the exclusive targets of deepfake pornography, which is often used to blackmail them. Oftentimes, even after the victim has exposed the fake, the damage has already been done.

Deepfakes can also be used by insurgent groups and terrorist organizations, often to portray their adversaries as making provocative speeches in order to stir up anti-establishment or anti-state sentiment. A lack of proper regulation creates avenues for individuals, firms, and non-state actors to misuse AI.

A multi-stakeholder, multi-pronged approach is needed to defend both the truth and freedom of expression. Legislative regulation, technological intervention, and media literacy can provide ethical and effective countermeasures to mitigate the threat of malicious deepfakes. AI itself can help detect deepfake videos. Media literacy for consumers and journalists is the most important step in combating misinformation, alongside effective legislation that ensures appropriate punishment for the misuse of deepfakes.
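As a rough illustration of the detection idea mentioned above: many automated deepfake detectors score each frame of a video with a trained classifier and then aggregate the per-frame scores into a video-level verdict. The sketch below assumes a hypothetical `frame_fake_probability` function standing in for a real trained model; only the aggregation logic is shown.

```python
# Minimal sketch of frame-level deepfake scoring with video-level aggregation.
# frame_fake_probability is a HYPOTHETICAL placeholder for a real trained
# classifier (e.g. a CNN run on each decoded frame); here each "frame" is
# simply a precomputed fake-probability in [0, 1].
from statistics import mean

def frame_fake_probability(frame):
    # Placeholder: a real system would run a trained model on the frame image.
    return frame

def classify_video(frames, threshold=0.5):
    """Flag a video as a likely deepfake if the mean per-frame
    fake-probability exceeds the threshold."""
    avg = mean(frame_fake_probability(f) for f in frames)
    return avg > threshold, avg

flagged, score = classify_video([0.9, 0.8, 0.7, 0.2])
print(flagged, round(score, 2))  # True 0.65
```

Averaging is only one aggregation choice; real systems may also use majority voting or temporal-consistency checks across frames, since a forger only needs a few convincing frames while a detector benefits from evidence across the whole clip.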

