Your Digital Self: This is breakthrough technology in fighting fake videos


Fake news is a well-documented problem. Fake but believable AI-generated video is more frightening because anyone can be targeted. It’s obvious that an AI-based technique is needed to counter that threat.

AI-generated videos (also known as deepfakes) and images are easier to create than ever before. They can range from funny and quirky to much more sinister and dangerous, such as footage of political statements that were never made, or shots of events that never took place.

It’s easy to see how such content published on social networks or by reputable media outlets could be used to manipulate public opinion. There has been no easy way to combat this epidemic other than doing your own research. Until now.

Getting ahead of deepfakes

One company, Singapore-based CVEDIA, is taking a “fight fire with fire” approach to combat fake news with technology that, according to its creators, is “one step ahead of deepfakes.”

To produce a deepfake video, a programmer needs to “train” the AI algorithm by feeding it copious amounts of visual and audio data. The algorithm then takes that data and uses it to find patterns and elements it can emulate to create its own footage.

This process can be reverse-engineered to train a similar algorithm, a “hunter” of sorts, that can recognize fake videos and flag them appropriately. Unlike the deepfake AI, which requires only samples of the person or environment it’s meant to emulate, the hunter is trained on many different visual and auditory stimuli, because it needs to be ready to detect a wide range of threats.
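Neither CVEDIA nor other vendors publish their detection models, but the core idea of a “hunter” is an ordinary binary classifier: show it labeled examples of real and fake footage and let it learn a decision boundary. The toy sketch below uses made-up Gaussian features as a stand-in for real frame embeddings and trains a logistic-regression classifier from scratch; every name and number here is illustrative, not drawn from any actual deepfake-detection system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features: in practice these would be learned
# embeddings of video frames; here two overlapping Gaussian clusters
# play the roles of "real" and "fake" samples.
real = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
fake = rng.normal(loc=1.0, scale=1.0, size=(500, 8))

X = np.vstack([real, fake])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = fake

# Logistic-regression "hunter" trained by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(fake)
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
```

A production detector would replace both the features and the classifier with a deep network, but the training loop has the same shape: compare predictions to labels and nudge the parameters to reduce the error.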

Providing an AI with this much data presents a big challenge. Not only is it expensive and time-consuming, but it can also be illegal to feed it imagery of real persons and places without their consent or proper government/institutional permissions.

An AI breakthrough

But what if the hunter AI does not need to be fed real-world info to detect fake videos? This is the approach that CVEDIA took.

The company specializes in machine-learning applications used to develop, train and validate the AI that is embedded in various sensor-enabled systems, such as vehicles or machinery. To do this, it uses SynCity, a platform that runs proprietary algorithms that simulate real-world physics, a multitude of lighting and environmental conditions, and renderings of people, animals and cars in a manner that AI systems interpret as real and lifelike.

Since its simulated imagery is believable to an AI, CVEDIA argues it can also be used to train one to detect fake videos. Armed with this tool, AIs no longer need to rely on real-world data, and can instead be fed a virtually unlimited amount of simulated information built especially for them.
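SynCity’s API is not public, so the sketch below only illustrates the general idea the article describes, sometimes called domain randomization: a simulator can emit as many labeled training samples as you like, each under randomized conditions, with no consent or licensing concerns. All parameter names and categories here are hypothetical.

```python
import random

def make_synthetic_sample(seed):
    """Toy stand-in for a simulation platform such as SynCity.

    Each call yields one labeled training sample under randomized
    conditions. The field names are invented for illustration; the
    real platform's interface is proprietary.
    """
    rng = random.Random(seed)
    return {
        "lighting": rng.choice(["dawn", "noon", "dusk", "night"]),
        "weather": rng.choice(["clear", "rain", "fog"]),
        "subject": rng.choice(["person", "animal", "car"]),
        "label": rng.choice(["authentic-style", "manipulated-style"]),
    }

# Unlike real-world collection, simulated data is generated on demand,
# so the dataset can grow as large as training requires.
dataset = [make_synthetic_sample(i) for i in range(1000)]
```

Because each sample is derived from a seed, the same dataset can be regenerated exactly, which also makes training runs reproducible.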

That paves the way for robust video-authenticity verification systems that can scan videos published online. Online services and social networks such as Facebook, Alphabet unit YouTube and others could use this technology to automatically scan all uploaded videos and flag those found to be fake.

However, what if the hunter becomes the hunted? Making this technology or its data-acquisition model publicly available would let those spreading fake news get their hands on it and figure out a way to trick the algorithm by making minute changes in the footage. If that happens, it would be up to those designing detection systems to up the ante and upgrade the algorithm to factor in such changes. And the cat-and-mouse game continues …

The best protection

Technologies such as CVEDIA’s are a big step toward much-needed clarity on the global information highway, but they are not going to solve the problem of visual and auditory forgeries completely. We must also keep in mind that media outlets don’t need deepfakes to skew the truth for their own agenda.

The best way to protect yourself from the torrent of misinformation available online is still good old-fashioned objectivity: doing your research and fighting cognitive bias by reviewing information from multiple sources, especially if they present conflicting views. The truth is most often somewhere in between.

Jurica Dujmovic is a MarketWatch columnist.