Deep fakes. You've probably seen them, but you may not know that you have. They are images or videos of real individuals that look like genuine footage but are entirely fabricated. In the world of fake news, they have become a big problem.
The technology used to make this realistic-looking content is continually improving. It has gotten to the point where the untrained eye or ear often has no idea that what it's seeing online or on social media is completely fabricated.
The artificial intelligence software starts with real video of an individual; from there, it is only a matter of plugging in audio or facial expressions and rendering a clip of the person "saying" it. It's very innovative, but scary.
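For the curious, here is a rough sketch of the autoencoder "face swap" idea behind early deep fake tools. It's Python with illustrative names and toy dimensions, not any real tool's code: one shared encoder learns general face features, one decoder per person learns to render that person's face, and the "swap" happens when a frame of person A is decoded as person B.

```python
# A minimal sketch of the autoencoder face-swap idea, assuming 64x64 RGB
# face crops. All names (FaceSwapper, decoder_a, ...) are illustrative.
import torch
import torch.nn as nn

class FaceSwapper(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared encoder learns identity-agnostic face features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        # One decoder per identity learns to render that person's face.
        self.decoder_a = self._make_decoder()
        self.decoder_b = self._make_decoder()

    @staticmethod
    def _make_decoder():
        return nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, identity):
        features = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(features)

# Training reconstructs each person with their own decoder; the swap
# happens at inference, when A's expression is decoded as B's face.
model = FaceSwapper()
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real frame
fake_b = model(frame_of_a, identity="b")  # B's face, A's expression
print(fake_b.shape)                       # torch.Size([1, 3, 64, 64])
```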
Jeremy Kahn of Bloomberg Tech refers to deep fakes as "fake news on steroids". While Kahn is confident that the technology has not yet been used successfully in an information warfare campaign or to severely damage someone's reputation, he is concerned that these harmful effects are in our future.
And who wouldn't be, knowing how good the technology is getting? It raises serious concerns in a world where constant vetting of information is already necessary.
I'm sure you have seen a fake article on social media, and likely dismissed it as such, either by its headline or by its clearly false or ridiculous content. But how much harder will it be to vet information for yourself when what looks like real-life video isn't real life?
We've always relied on video as proof. If someone's word goes against another's, how do we know who is right? Well, do we have video?
Now we can have fake video. Even contradicting video. Can we still treat the evidence we once considered proof as proof, given this technology?
Beyond that, thinking about deep fakes from the perspective of journalism is troubling. Fundamentally, journalists gather information using resources like video and audio, and report it back using some of those same resources.
Deep fakes blur the lines in journalism. What if a video a reporter shares is a fake? Is that the journalist's fault? Can video evidence still be considered safe to share? We have grown accustomed to vetting other sources, like online written articles, because of the prevalence of fake news. But now we are forced to take that vetting to the next level, to something as seemingly concrete as video. It's right before our eyes, and it could be fake. We're not used to that yet. The result? Deep confusion.
Hopefully, we will eventually develop equally potent artificial intelligence that can determine whether a given video is a deep fake or has been altered in any way. After all, if technology can create deep fakes, we can only hope it can also undo the damage they cause. That's kind of what technology does, right? It creates problems with its intelligence, but then also provides solutions for them.
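What might such a detector look like? One common starting point is to treat detection as binary classification of video frames. Here is a toy sketch, again in Python with illustrative names and stand-in data, not a production detector:

```python
# A minimal sketch of frame-level deep fake detection as binary
# classification, assuming 64x64 RGB face crops with labels
# (1.0 = fake, 0.0 = real). Purely illustrative.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),  # one logit: real vs. fake
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

# One hypothetical training step on a batch of labeled frames.
frames = torch.rand(8, 3, 64, 64)           # stand-in for real data
labels = torch.randint(0, 2, (8, 1)).float()
loss = loss_fn(detector(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference, average per-frame fake probabilities across a clip.
with torch.no_grad():
    clip = torch.rand(30, 3, 64, 64)  # stand-in for 30 sampled frames
    fake_prob = torch.sigmoid(detector(clip)).mean().item()
print(f"estimated probability the clip is fake: {fake_prob:.2f}")
```

Real detectors are far more sophisticated, but the basic shape is the same: a model trained on known fakes, hunting for artifacts our eyes miss.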
In the meantime, though, question everything. Yes, it might be on video, but is it real video? Sometimes we can tell, as with footage that couldn't plausibly be altered, like our own home security cameras. But as for what we see on social media or online, deep fakes are leading to deep confusion. It's hard to ever be sure.