via threatpost.com: A Deepfake Deep Dive into the Murky World of Digital Imitations

in deepfakes •  5 years ago 


Author: Lindsey O'Donnell

Deepfake technology is becoming easier to create – and that’s opening the door for a new wave of malicious threats, from revenge porn to social-media misinformation.

About a year ago, top deepfake artist Hao Li came to a disturbing realization: Deepfakes, i.e. the technique of using artificial intelligence (AI) to synthesize human images and create fake content, are rapidly evolving. In fact, Li believes that in as little as six months, deepfake videos could be completely undetectable. And that’s spurring security and privacy concerns as the AI behind the technology becomes commercialized – and gets into the hands of malicious actors.

Li, for his part, has seen the positives of the technology as a pioneering computer graphics and vision researcher, particularly for entertainment. He has worked his magic on various high-profile deepfake applications – from leading the charge in putting Paul Walker into Furious 7 after the actor died before the film finished production, to creating the facial-animation technology that Apple now uses in its Animoji feature in the iPhone X.

But now, “I believe it will soon be a point where it isn’t possible to detect if videos are fake or not,” Li told Threatpost. “We started having serious conversations in the research space about how to address this and discuss the ethics around deepfake and the consequences.”

The security world too is wondering about its role, as deepfakes pop up again and again in viral online videos and on social media. Over the past year, security stalwarts and lawmakers say that the internet needs a plan to deal with various malicious applications of deepfake video and audio – from scams, to misinformation online, to the privacy of footage itself. Questions have arisen, such as whether firms like Facebook and Reddit are prepared to stomp out imminent malicious deepfakes — used to spread misinformation or for creating nonconsensual pornographic videos, for instance.

And while awareness of the issues is spreading, and the tech world is corralling around better detection methods for deepfakes, Li and other deepfake experts think that it may be virtually impossible to quell malicious applications for the technology.
How Does Deepfake Tech Work?

Deepfakes can be applied in various ways – from swapping in a new face onto video footage of someone else’s facial features (as seen in a Vladimir Putin deepfake created by Li), to creating deepfake audio imitating someone’s voice to a tee. The latter was seen in a recently-developed replica of popular podcaster Joe Rogan’s voice, created using a text-to-speech deep learning system, which made Rogan’s fake “voice” talk about how he was sponsoring a hockey team made of chimpanzees.

At a high level, both audio and video deepfakes rely on a technology called “generative adversarial networks” (GANs), which consists of two machine-learning models. One model, the generator, leverages a dataset to create fake footage, while the other, the discriminator, attempts to detect the fakes. The two train against each other until the discriminator can no longer tell the fake footage from the real thing.
[Diagram: how a GAN’s generator and discriminator interact. Credit: Jonathan Hui]
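The adversarial loop described above can be sketched in a few lines. The toy below is a heavily simplified illustration in numpy, not production deepfake code: a one-layer “generator” learns to mimic samples from a target Gaussian distribution, while a one-layer “discriminator” learns to tell real samples from generated ones. All sizes, learning rates and the 1-D data here are illustrative assumptions; real deepfake GANs use deep convolutional networks on images.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # stand-in for the "dataset" (e.g. frames of real video)
    return rng.normal(4.0, 1.25, size=(n, 1))

g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)  # generator params
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)  # discriminator params

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(z):
    return z @ g_w + g_b           # noise -> fake sample

def discriminate(x):
    return sigmoid(x @ d_w + d_b)  # P(sample is real)

lr, batch = 0.05, 32
for _ in range(2000):
    z = rng.normal(size=(batch, 1))
    fake, real = generate(z), real_samples(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_real = discriminate(real) - 1.0   # dBCE/dlogit for real labels
    grad_fake = discriminate(fake)         # dBCE/dlogit for fake labels
    d_w -= lr * (real.T @ grad_real + fake.T @ grad_fake) / batch
    d_b -= lr * (grad_real.mean() + grad_fake.mean())

    # Generator step: push D(fake) toward 1, i.e. fool the detector.
    g_grad = (discriminate(generate(z)) - 1.0) * d_w[0, 0]
    g_w -= lr * (z.T @ g_grad) / batch
    g_b -= lr * g_grad.mean()

fake_mean = float(generate(rng.normal(size=(1000, 1))).mean())
print(f"mean of generated samples: {fake_mean:.2f}")  # should drift toward 4
```

The key point the article makes is visible in the loop: neither model trains in isolation — each update to the detector immediately shapes the next update to the generator, which is why detection keeps getting harder as generation improves.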

GANs were first introduced in a 2014 paper by Ian Goodfellow and fellow researchers at the University of Montreal. The concept was hailed as useful for various applications, from sharpening astronomical images to helping video-game developers improve the quality of their games.

While video manipulation has been around for years, the machine-learning tools behind GANs have brought a new level of realism to deepfake footage. For instance, older deepfake applications (such as FakeApp, a proprietary desktop application launched in 2018) required hundreds of input images for a faceswap to be synthesized; newer technology enables products – such as the deepfake face-swapping app Zao – to work from a single image.

“The technology became more democratized after…video-driven manipulations were re-introduced to show fun, real-time [deepfake] applications that were intended to make people smile,” said Li.
Security Issues

From a security perspective, there are various malicious actions that attackers could leverage deepfakes for – particularly around identity authentication.

“Deepfakes are becoming one of the biggest threats to global stability, in terms of fake news as well as serious cyber risks,” Joseph Carson, chief security scientist with Thycotic, told Threatpost. “Deepfakes are getting to the point that any digital audio or video online can be questioned on its authenticity and integrity, and can be used to not only steal the online identity of a victim but now the voice and face. Identity theft has now entered a new phase.”

The ability to simulate someone’s image and behavior could be used by scam callers impersonating victims’ family members to obtain personal information, or by criminals impersonating a government official to gain entry to high-security areas.

Already, an audio deepfake of a CEO’s voice has fooled a company into making a $243,000 wire transfer, in the first known case of successful financial scamming via audio deepfake.

But even beyond security woes, far more sinister applications exist when it comes to deepfake technology.

At a more high-profile level, experts worry that deepfakes of politicians could be used to manipulate election results or spread misinformation.

In fact, deepfakes have already been created portraying U.S. President Donald Trump saying “AIDS is over,” while another deepfake replaced the face of Argentine President Mauricio Macri with that of Adolf Hitler.


"AIDS is over". The first fake news that could become real.#Treatment4all #endAIDS pic.twitter.com/KBxJoKanDM

— Solidarité Sida (@SolidariteSida) October 7, 2019


“The risk associated with this will be contextual. Imagine a CEO making an announcement to his company, that ended up being a deepfake artifact,” said Arun Kothanath of Clango. “Same could go to sensitive messages between country leaders that could be the beginning of a conflict.”
Privacy Scares

In September, the Chinese deepfake app Zao (see video below) went viral in China. The app – which lets users map their faces onto various clips of celebrities – spurred concerns about user privacy and consent when it comes to the collection and storage of facial images.

In case you haven't heard, #ZAO is a Chinese app which completely blew up since Friday. Best application of 'Deepfake'-style AI facial replacement I've ever seen.


Here's an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail) 🤯 pic.twitter.com/1RpnJJ3wgT

— Allan Xia (@AllanXia) September 1, 2019


The idea of seamlessly mapping someone’s face onto another person’s body is also provoking concerns around sexual assault and harassment in the context of deepfake pornography.

Several real-life reports of deepfake porn have already emerged, with one journalist coming forward in 2018 with a revenge-porn story of how her face was used in a sexually explicit deepfake video – one developed and spread online after she was embroiled in a political controversy.

Deepfake porn also emerged on Reddit in 2017 after an anonymous user posted several videos, and in 2018, Discord shut down a chat group on its service that was being used to share deepfaked pornographic videos of female celebrities without their consent. In 2019, a Windows/Linux application called DeepNude was released that used neural networks to remove clothing from images of women (the app was later shut down).

“Deepfake gives an unsophisticated person the ability to manufacture non-consensual pornographic images and videos online,” said Adam Dodge, executive director with EndTAB, in an interview with Threatpost. “This is getting lost in the conversation…we need to not just raise awareness of the issue but also start considering how this is targeting women and thinking of ways which we can address this issue.”

There’s also a privacy concern that dovetails with security. “There could be many ways an individual’s privacy is compromised in the context of a media asset, such as video data that is supposed to be confidential (in some cases not),” Arun Kothanath, chief security strategist at Clango, told Threatpost. “Unauthorized access to those assets leads me to think nothing but compromise on security breaches.”
Deepfake Detection

On the heels of these concerns, deepfakes have come onto the radar of legislators. The House Intelligence Committee held a hearing in June examining the issue; Texas has banned deepfakes made with an “intent to injure a candidate or influence the result of an election”; Virginia has outlawed deepfake pornography; and just last week, California passed a law banning the use of deepfake technology in political speech and for non-consensual use in adult content.

When it comes to adult content, the California law requires consent to be obtained prior to depicting a person in digitally produced sexually explicit material. The bill also provides victims with a set of remedies in civil court.

But even as regulatory efforts roll out, there needs to also be a way to detect deepfakes – and “unfortunately, there aren’t enough deepfake detection algorithms to be confident,” Kothanath told Threatpost.
[Images from Google’s deepfake database]

The good news is that the tech industry as a whole is beginning to invest more in deepfake detection. Dessa, the company behind the aforementioned Joe Rogan deepfake audio, recently released an open-source detector for audio deepfakes: a deep neural network that uses visual representations of audio clips (called spectrograms, which are also used to train speech-synthesis models) to sniff out real versus fake audio.
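A spectrogram of the kind such detectors consume is just a time-frequency image of the audio, built with a short-time Fourier transform. The numpy sketch below shows the construction only; the synthetic 440 Hz tone, sample rate, frame length and hop size are illustrative assumptions, and the neural-network classifier that would sit on top is omitted.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform:
    slice the signal into overlapping windowed frames, then take the
    FFT of each frame to get one column of the time-frequency image."""
    window = np.hanning(frame_len)
    frames = [
        signal[start:start + frame_len] * window
        for start in range(0, len(signal) - frame_len + 1, hop)
    ]
    # rows: frequency bins, columns: time frames
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

# one second of a pure 440 Hz tone at 8 kHz stands in for a voice clip
sr = 8000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(clip)
print(spec.shape)  # (frequency bins, time frames)
```

A detector then treats `spec` like any other image: a real voice and a synthesized one leave subtly different energy patterns across these frequency bins, which is what the network learns to separate.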

Facebook, Microsoft and a number of universities have meanwhile joined forces to sponsor a contest promoting research and development to combat deepfakes. And, Google and other tech firms have released a dataset containing thousands of deepfake videos to aid researchers looking for detection techniques.
Deepfake’s Future

Despite these efforts, experts say that many of the threats posed by deepfakes – from disinformation to harassment – are existing problems that the internet is already struggling with. And that’s something that even a perfect deepfake detector won’t be able to solve.

For instance, tools may exist to detect deepfakes, but how will they stop a video from living on – and spreading on – social-media platforms? Li pointed out that fake pictures and news have already spread out of control on platforms like Twitter and Facebook, and that deepfakes are just more of the same.

“The question is not really detecting the deepfake, it is detecting the intention,” Li said. “I think that the right way to solve this problem is to detect the intention of the videos rather than if they have been manipulated or not. There are a lot of positive uses of the underlying technology, so it’s a question of whether the use case or intention of the deepfake are bad intentions. If it’s to spread disinformation that could cause harm, that’s something that needs to be looked into.”

It’s a question that social-media sites are also starting to think about. When asked how they plan to combat deepfakes, Reddit and Twitter both directed Threatpost toward their policies against spreading misinformation (Facebook didn’t respond, but announced in September that it is ramping up its deepfake-detection efforts).

Twitter said that its policies work toward “governing election integrity, targeted attempts to harass or abuse, or any other Twitter Rules.”

On Reddit’s end, “Reddit’s site-wide policies prohibit content that impersonates someone in a misleading or deceptive manner, with exceptions for satire and parody pertaining to public figures,” a Reddit spokesperson told Threatpost. “We are always evaluating and evolving our policies and the tools we have in place to keep pace with technological realities.”

But deepfake prevention at this point is still reactive rather than proactive, meaning that once malicious deepfakes are live, the damage is already done, according to Kothanath. Until that changes, he said, the extent of the damage a deepfake can cause remains to be seen.

“My worry will be the ‘fear of the unknown’ that leads to a breach and to a privacy violation,” Kothanath said.

Link to original article


My 2 sats on this...

I'm afraid we're going to see more and more refined deepfakes, until it becomes nearly impossible to tell the difference between fakes and real news!

Cheers!


So, what do you think?

Are we all prone to step into upcoming deep fake traps?

Do you do any kind of diligence on new breaking news stories to verify their authenticity?

Let me know in the comments of this article!


Regards esteemed friend @doifeellucky.

Currently, AI is capable of generating complex and elaborate texts, many of which could convince us that they were written by another human. This creates a dangerous potential to generate fake news, reviews and social accounts.

Deepfakes, so far, are detectable. That is, we can digitally analyze one of these videos in depth and determine that it is not real. But the technology is constantly evolving and being refined. The time may come when we cannot distinguish reality from the virtual.

I imagine a declaration of war made with deepfakes, where we see some world leader, the president of a powerful nation, making sensitive statements about a counterpart. The consequences could be catastrophic for humanity.

What should we do to prevent such abuse then? It may sound ridiculous, but developing another AI is a decent solution.

Your friend, Juan

Thanks for you comment!

Actually I think this is a suitable approach to counter this... a next-level AI "arms race", so to say. It doesn't sound weird to me at all but very likely!

Cheers!
Lucky

a next level AI "arms race" so to say.

It is certainly quite likely.

Thank you for your attentive reply dear @doifeellucky, you're so kind.

Would you mind if I share with you some details about our nonprofit initiative based on STEEM Blockchain: "Project Hope" (@project.hope)?

Together with @crypto.piotr I've been working on it for a few months already and recently we've launched our website. It's still kind of a "construction zone" :)

If you like, you can check it out: https://www.projecthope.pl/

You may find the "passive income" section particularly interesting. Please check it out and let me know what you think. Your opinion would be a gold mine for me.

Your friend, Juan

Thank you Juan!

Already started to look into it! Looks very interesting to me! I'll def. check it out in depth the next few days!

Take care!

Cheers!
Lucky

Hi appreciated friend @doifeellucky.

I wanted to thank you personally for your participation in our @project.hope.

Big Thank you!

I am sure you will feel comfortable being part of our team.
Welcome!

Your friend, Juan.

Thank you for the welcome Juan! Looking forward to the great things happening in project.hope!

Cheers!
Lucky

Thank you very much.
I hope you find it interesting and decide to participate.

This post was curated by @theluvbug
and has received an upvote and a resteem to hopefully generate some ❤ extra love ❤ for your post!

In Proud Collaboration with The Power House Creatives
and their founder @jaynie


Thank you!