By: Xi He, RIG Inc Intern Researcher


In 2020, researchers from Google released an experimental platform to help fact-checkers detect fake images. AI-generated fake news, videos, and images are becoming more common, and concern is growing about the risks of deepfakes. Recently, Microsoft introduced a new deepfake detection tool to combat disinformation. In this blog I will explore deepfakes, starting with a definition and moving on to the risks they pose and potential solutions to their spread.

What Is a Deepfake?

A deepfake uses deep learning techniques from artificial intelligence to generate videos, photos, or news that seem real but are actually fake. These techniques can synthesize faces, replace facial expressions, synthesize voices, and generate news text. The same methods are used to create special effects in movies; more recently, however, they have been widely used by criminals to create disinformation.


How Does Deepfake Work?

Deepfake techniques rely on a deep learning architecture called an autoencoder. An autoencoder is a type of artificial neural network made up of an encoder and a decoder. The encoder reduces the dimensionality of the input, mapping each image to a compact encoded representation; the decoder learns to reconstruct the input image from that representation. Training this kind of model typically requires thousands of example images. Deepfake software works by combining the parts of two autoencoders, one trained on the original face and one on the new face, so that decoding one face's representation with the other face's decoder re-renders one person with the other's expressions (Lee, 2019).
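To make the encode-then-reconstruct loop concrete, here is a minimal sketch of a linear autoencoder in NumPy. This is a toy illustration, not deepfake software: the "images" are random 16-value vectors that secretly live on a 4-dimensional subspace, the bottleneck is 4 units, and both weight matrices are trained by plain gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 flattened 16-pixel samples lying on a 4-dim subspace,
# so a 4-unit bottleneck can in principle represent them perfectly
basis = rng.normal(size=(4, 16)) / 2.0
X = rng.normal(size=(200, 4)) @ basis

# Linear autoencoder: encoder compresses 16 -> 4, decoder maps 4 -> 16
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def reconstruction_mse():
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

loss_before = reconstruction_mse()
lr = 0.02
for _ in range(2000):
    Z = X @ W_enc              # encoded representation (the compressed code)
    X_hat = Z @ W_dec          # reconstruction from the code
    err = X_hat - X
    # Gradients of the mean squared reconstruction error
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_after = reconstruction_mse()
```

Real face-swapping stacks replace the linear maps with deep convolutional networks and train on thousands of face crops, but the objective is the same: make the decoder's output match the input as closely as possible.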

The website below lets you test deepfake detection for yourself. Try it and see whether you can distinguish real images from fake ones!

The Potential Risks

Today, with the growth of technology and the internet, anyone can download deepfake software with a quick Google search. As Ian Sample reports, "the AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months" (2020). Strikingly, Sample points out that "96% were pornographic and 99% of those mapped faces from female celebrities on to porn stars". These figures show how deepfake technology can become a weapon against women.

There is also concern about the growing use of disinformation against the public. People worry that criminals can use this technology to scam victims over the phone, and deepfakes can be used to generate fake news and incite civil unrest. If we cannot identify deepfakes, we cannot expose these illegal acts, and we may lack the evidence to convict the criminals behind them. Research on deepfakes is therefore critical.


Achievements in Deepfake Detection

Last year, Microsoft announced a new deepfake detection tool that can not only detect manipulated content in images and videos but also indicate the authenticity of the media users are viewing. Microsoft achieves this with two components. First, its technologists developed a tool that allows content producers to add digital hashes and certificates to a piece of content. The second component is a reader that checks the certificates and matches the hashes (Burt, 2020).
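The hash-and-certificate idea can be sketched in a few lines. To be clear, this is not Microsoft's actual tool: the sketch below substitutes a shared-secret HMAC for a real certificate chain, but the flow is the same. The producer publishes content with a hash and a signature over that hash; the reader recomputes both and rejects anything that has been altered.

```python
import hashlib
import hmac

PRODUCER_KEY = b"producer-secret"  # stands in for the producer's real signing key

def publish(content: bytes) -> dict:
    """Producer side: attach a hash and a 'certificate' (here an HMAC) to content."""
    digest = hashlib.sha256(content).hexdigest()
    certificate = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "hash": digest, "certificate": certificate}

def verify(package: dict) -> bool:
    """Reader side: recompute the hash and check that the certificate matches."""
    digest = hashlib.sha256(package["content"]).hexdigest()
    expected = hmac.new(PRODUCER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == package["hash"] and hmac.compare_digest(expected, package["certificate"])

pkg = publish(b"original video frame bytes")
assert verify(pkg)                       # untouched content passes
pkg["content"] = b"manipulated frame"    # any edit breaks the hash
assert not verify(pkg)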

Last month, computer scientists from the University at Buffalo developed a new tool that automatically spots deepfake photos by examining the light reflections in the eyes (Bankhead, 2021). The model is highly accurate. The scientists point out that in a real photo, the reflections in the two eyes generally have the same shape and color, because both corneas see the same light sources; AI-generated deepfake images often fail to reproduce this consistency. They used this observation to construct an effective deepfake detection model.
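The consistency idea can be illustrated with a toy check. This is not the Buffalo team's model: the sketch below invents a crude "reflection descriptor" (the average color of the brightest pixels in an eye region) and compares the two eyes, using synthetic patches in place of real eye crops.

```python
import numpy as np

def reflection_descriptor(eye_patch):
    """Crude stand-in for a corneal-highlight descriptor: the mean color
    of the brightest pixels in an eye region (the specular reflection)."""
    brightness = eye_patch.mean(axis=2)
    threshold = np.percentile(brightness, 98)  # keep the brightest ~2%
    highlight = eye_patch[brightness >= threshold]
    return highlight.mean(axis=0)              # average RGB of the highlight

def eyes_consistent(left_eye, right_eye, tol=30.0):
    """In a real photo both eyes see the same light source, so highlights match."""
    diff = np.abs(reflection_descriptor(left_eye) - reflection_descriptor(right_eye))
    return bool(diff.max() <= tol)

rng = np.random.default_rng(1)

def make_eye(highlight_color):
    """Synthetic eye patch: dark iris with a bright highlight in one corner."""
    eye = rng.integers(20, 60, size=(32, 32, 3)).astype(float)
    eye[2:6, 2:6] = highlight_color
    return eye

real_left, real_right = make_eye([250, 250, 240]), make_eye([248, 252, 238])
fake_left, fake_right = make_eye([250, 250, 240]), make_eye([120, 180, 90])

assert eyes_consistent(real_left, real_right)        # matching highlights pass
assert not eyes_consistent(fake_left, fake_right)    # mismatched highlights fail
```

The published detector works on real photographs and far subtler cues, but the principle is the same: extract a feature that physics forces to agree between the two eyes, and flag images where it does not.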


What Can be Done to Prevent the Spread of Deep Fake?

As discussed above, researchers in academia and industry are implementing deepfake detection techniques and have made real progress. But combating disinformation and deepfakes requires collaboration among policymakers, technologists, and companies; no single organization can create a robust and meaningful impact on its own.

Government Interventions:

  • Policymakers can support and invest in research and education on deepfake detection tools
  • Governments can convene people across academia and private industry to share data samples, ideas, and techniques
  • Governments can provide social metrics to assess disinformation detection techniques

Technologists’ Research:

  • Technologists should continue researching deepfake and disinformation detection models
  • Technologists should collaborate with governments and private companies to better understand the goals of disinformation detection tools

Companies’ Actions:

  • Private industries should actively collaborate with one another to share datasets and techniques
  • Social media platforms can provide direct authentication of content. For example, when users upload images from their devices, the platform can check whether those images actually originated on those devices.
  • Companies should work together to build a shared system that tracks whether videos or images have been edited by other people, and that follows the full life cycle of that media.
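One way such a tracking system could work is a shared provenance log keyed by content hashes: every registered version records the hash of the version it was derived from, so any copy can be traced back to its original. This is a hypothetical design sketch, not an existing product; the class and method names below are invented for illustration.

```python
import hashlib

class ProvenanceLog:
    """Toy shared registry: every upload or edit of a piece of media is recorded,
    and each edited version points back to the hash of its parent version."""

    def __init__(self):
        self.records = {}  # content hash -> parent version's hash (None = original)

    @staticmethod
    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register(self, content: bytes, parent=None) -> str:
        h = self.fingerprint(content)
        self.records[h] = self.fingerprint(parent) if parent is not None else None
        return h

    def lineage(self, content: bytes):
        """Walk back from any version to the original it was derived from."""
        chain, h = [], self.fingerprint(content)
        while h in self.records:
            chain.append(h)
            h = self.records[h]
            if h is None:
                break
        return chain

log = ProvenanceLog()
original = b"camera-original image bytes"
edit1 = b"cropped version"
edit2 = b"cropped + face swapped"
log.register(original)
log.register(edit1, parent=original)
log.register(edit2, parent=edit1)

assert len(log.lineage(edit2)) == 3         # traced back through two edits
assert log.lineage(b"unknown image") == []  # never registered -> no history
```

A production system would need tamper-resistant storage and cross-company agreement on the registry, but the core lookup is just this: hash the media, then follow the parent links.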

What Can RIG Do?

Revolutionary Integration Group Inc's Dynamic Trust could be highly useful here. Companies, technologists, and governments can use the trust models RIG is currently developing to assign trust levels to images and videos. To learn more about RIG Inc and its products, please visit



Tom Burt. (2020). New Steps to Combat Disinformation. Retrieved from

Melvin Bankhead. (2021). UB computer scientists develop tool to spot deepfake photos. Retrieved from

Ian Sample. (2020). What are deepfakes – and how can you spot them? Retrieved from

Timothy B. Lee. (2019). I created my own deepfake—it took two weeks and cost $552. Retrieved from