Title: Don't Let Deepfakes Fool You
Type of article: Insights/Blog
Body:

Machine-learning algorithms power many of the AI applications and innovations we use today. They use statistics to find structure in enormous amounts of data, and data here means more than numbers: it could be anything, like words, images, clicks, what have you. If it can be stored digitally, it can be fed into a machine-learning algorithm.

Machine learning fuels many of the services we use every day: recommendation systems as on-point as those of Netflix, YouTube, and Spotify; search engines such as Google; social-media feeds tailored to your interests, as on Facebook and Twitter; and voice assistants like Siri and Alexa. The list goes on. Over 16.1 million Amazon Echo devices had been sold as of June 2017, meaning about 7% of the population aged 12 and over owned an AI-based speaker device, and Netflix has estimated that the machine-learning algorithm behind its personalized TV-show recommendations saves it around $1 billion a year.

In all of these instances, each platform is learning as much about your inclinations as possible (what content you like watching, which thumbnails you click, which posts you react to) and using machine learning to make a highly educated guess about what you might want next. Or, in the case of a voice assistant, about which words best match the weird sounds coming out of your mouth. Deep learning is machine learning amplified: it uses a technique that gives machines an enhanced ability to find and assess even the smallest patterns. That technique is called a deep neural network, "deep" because it has many layers of simple computational nodes that work together to pass data through and deliver a final result in the form of a prediction.
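To make that concrete, here is a minimal sketch of a deep network's forward pass: many layers of simple nodes, each combining its inputs and passing the result on, ending in a single prediction. The layer sizes, the random weights, and the "will the user click?" framing are illustrative assumptions for this article, not any platform's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    # A simple node either passes its signal on or stays silent.
    return np.maximum(0.0, x)

# A toy "deep" network: several layers of simple computational nodes.
layer_sizes = [8, 16, 16, 1]          # input -> hidden -> hidden -> output
weights = [rng.normal(0.0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict(x):
    """Pass data through every layer and return a final prediction."""
    for w in weights[:-1]:
        x = relu(x @ w)               # each layer combines the previous one's outputs
    score = x @ weights[-1]           # the last layer produces a raw score
    return 1.0 / (1.0 + np.exp(-score))  # squash the score to a probability

sample = rng.normal(size=8)           # e.g. 8 numeric features describing one item
print(predict(sample))                # a "will the user click?"-style guess
```

Each layer on its own does almost nothing clever; the predictive power comes from stacking many of them, which is exactly the "many, many layers" idea above.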

Using one neural network is great for understanding sequences; using two is really great for creating them. Welcome to the magical, terrifying world of generative adversarial networks, or GANs. The goal of GANs is to give machines something akin to an imagination. They are responsible for the first piece of AI-generated artwork sold at Christie's, as well as the category of fake digital images known as "deepfakes."

What Are Deepfakes?

A deepfake is AI-based technology used to produce or alter video content so that it presents something that didn't occur. Photo fakery is far from new, but artificial intelligence has completely turned it upside down. Until recently, only a big-budget movie studio could carry out a video face-swap, and it would probably have cost big bucks. AI now makes it possible for anyone with a decent computer and a few hours to spare to do the same thing. Further machine-learning advances will make even more complex deception possible, and make fakery harder to spot.

How does it work?

A deepfake video is created using two competing AI systems: one is called the generator and the other the discriminator. The generator creates a fake video clip and asks the discriminator to determine whether the clip is real or fake. Each time the discriminator correctly identifies a clip as fake, it gives the generator a clue about what not to do when creating the next one.
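That adversarial feedback loop can be sketched with a toy one-dimensional GAN. Everything here is an illustrative assumption for exposition, not code from any deepfake tool: the "real data" is just numbers clustered near 4, the generator learns only a mean, and the discriminator is a tiny logistic classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to keep np.exp well-behaved for large scores.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

REAL_MU, REAL_SIGMA = 4.0, 0.5   # the "real" data: numbers near 4

w, b = 0.1, 0.0    # discriminator: D(x) = sigmoid(w*x + b), real=1, fake=0
gen_mu = 0.0       # generator: fake = gen_mu + noise (starts far from the truth)

lr_d, lr_g, batch = 0.02, 0.01, 64

for step in range(5000):
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    fake = gen_mu + rng.normal(0.0, 1.0, batch)

    # Discriminator step: learn to score real samples 1 and fakes 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr_d * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b -= lr_d * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step: nudge gen_mu in whatever direction makes the
    # discriminator more likely to say "real" next time.
    d_fake = sigmoid(w * fake + b)
    gen_mu -= lr_g * np.mean(-(1.0 - d_fake) * w)

print(f"generator mean after training: {gen_mu:.2f} (real mean is 4.0)")
```

With each round, the generator's output drifts toward the real distribution precisely because the discriminator keeps telling it what looks fake. Real deepfake systems play the same game with deep networks over video frames instead of single numbers.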


Why does it concern you?

These advances threaten to further blur the line between truth and fiction. Already the internet accelerates and reinforces the dissemination of disinformation through fake social-media accounts. “Alternative facts” and conspiracy theories are common and widely believed. Fake news stories, aside from their possible influence on the last US presidential election, have sparked ethnic violence in Myanmar and Sri Lanka over the past year. Now imagine throwing new kinds of real-looking fake videos into the mix: politicians mouthing nonsense or ethnic insults, or getting caught behaving inappropriately on video—except it never happened.

“Deep fakes have the potential to derail political discourse,” says Charles Seife, a professor at New York University and the author of Virtual Unreality: Just Because the Internet Told You, How Do You Know It’s True? Seife confesses to astonishment at how quickly things have progressed since his book was published, in 2014. “Technology is altering our perception of reality at an alarming rate,” he says.

The Impact of Deepfakes:

You can see why a world with GANs is in equal measure beautiful and ugly. On one hand, the ability to work with media and mimic other data patterns can be useful in photo editing, animation, and medicine (for instance, to improve the quality of medical images and to overcome the scarcity of patient data). It also helps in creating satirical and humorous videos, and aids motion-picture special effects and video content production.

On the other hand, GANs can also be used in ethically objectionable and dangerous ways: to overlay celebrity faces on the bodies of porn stars, to make Barack Obama say whatever you want, or to forge someone’s fingerprint and other biometric data, an ability researchers at NYU and Michigan State recently showed in a paper.


Source: ABC News and BuzzFeed

The creation of realistic, non-consensual videos (for example, "transposing" a person's likeness and mannerisms onto the speech or motion of someone else), and the ease with which such videos can be made, has set off alarm bells in the digital world. In response, several large social platforms, such as Reddit, have revised their user policies to forbid uploading or sharing non-consensual videos created with deepfake technology.

Safety from Deepfakes:

No, face-swapping technology isn't illegal in itself, but one can incur copyright-infringement liability for using videos made by others. Using someone else's face without their consent also violates their right of publicity, which covers the right to control the use of their image and identity.

Using this technology to create non-consensual pornographic content is not technically a crime yet, but it could fall into the category of revenge porn. Face-swapping children or underage teens into pornographic scenes, however, is indeed a crime, as even drawings and artistic renderings can be considered child pornography. Most countries allow copyright enforcement to be sidestepped for caricature, parody, or pastiche, and there is no reason to believe face-swap videos will be treated differently.

The new DEEP FAKES Accountability Act in the House would take steps to criminalize the synthetic media referred to in its name, though its provisions seem optimistic given the reality of the threat. On the other hand, it also proposes changes that would help bring the law up to date with the technology.

It's a start: an attempt at mitigation before the problem fully arrives. Such attempts, however, usually only harden into policy after things have gone wrong, leaving lawmakers to act with hindsight. So while the DEEP FAKES Accountability Act would not create much in the way of accountability for the malicious actors most likely to cause harm, it does begin to set a legal foundation for victims and law enforcement to fight back.

Conclusion:

If used conscientiously, deepfakes can achieve a never-before-seen level of photorealism, and differentiating between a real video and a computer-generated one will be next to impossible in the near future. There will thus be a dire need for software that can verify the origin of video on the internet. The fact that most deepfake videos today are obviously fake does not make them less dangerous. While all of the above holds true, it is crucial to note that the technology behind deepfakes is, in itself, neutral. Like any tool, machine learning can be used for good or evil.