
Electronic Traces of Crimes: Discovering Media Forensics

What is media forensics in the field of AI? Let's figure it out!

This field of knowledge, media forensics, is devoted to the study of manipulations and offenses carried out using synthetic media. At its core, it is a means of identifying malicious content and combating disinformation in the media. In particular, media forensics investigates cases of unethical or criminal use of deepfake technology.

The term "deepfake" comes from a combination of the words "deep learning" and "fake". The technology can create high-quality images of people in artificially constructed scenarios. For example, a deepfake can place Robert Pattinson on the deck of the Titanic set or show your neighbor in the company of Elon Musk. Despite its entertainment value, such content can lead to serious consequences, and media forensics actively examines cases of the negative impact of deepfakes in the media.

In the spring of 2019, a manipulated video of the Speaker of the US House of Representatives, Democrat Nancy Pelosi, was published on the Internet. The footage artificially slowed her speech, giving the false impression that she was heavily intoxicated. The clip caused an uproar among experts and politicians, provoking an almost instantaneous scandal.

Criticism of the Speaker was intense, and the discussion around the video began almost instantly. Only a few days later did it become widely known that the recording had been deliberately manipulated and did not show her real speech. The episode highlighted how fabricated media can become a tool of political manipulation, and how important it is to distinguish fact from manipulation in the digital space.

A deepfake of Donald Trump's arrest blew up the Internet in March 2023

In March 2023, social networks exploded with photos depicting the arrest of former US President Donald Trump. The excitement around the event was enormous, and the images quickly spread across social media and news platforms. A few days later, however, it became clear that the photos had been generated by the Midjourney neural network. Despite the debunking, the images continued to circulate, leaving a mark on the digital space and filling it with uncertainty.

The World Economic Forum's report on AI

Artificial intelligence is also becoming a tool for fraud. Criminals use it to bypass authentication systems in banks, deceive security services and steal money from accounts. In 2020, a branch manager of a Japanese company fell victim to such fraud, transferring $35 million to criminals after receiving instructions over the phone from a voice deepfake imitating the head of the corporation.

Experts express serious concerns about the destructive influence of deepfakes on political processes around the world. The World Economic Forum report highlights that artificial content can have an impact on voters. With the 2024 elections approaching in various countries, including the United States, Russia and India, experts are warning about the possibility of using fake audio and video recordings to manipulate public opinion during election campaigns.

Government officials, scientists and developers around the world are actively looking for effective ways to counter the negative impact of synthetic media and deepfakes. In this article, we will look at three notable examples of this work.

How deepfakes work

Deepfakes are becoming increasingly convincing thanks to a deep learning method known as the generative adversarial network (GAN). The method pits two neural networks against each other: a generator and a discriminator. The generator creates a fake image, and the discriminator tries to determine whether it is real. The quality of the deepfake depends on how successfully the generator deceives the discriminator.
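The adversarial loop above can be sketched on a toy problem. This is a minimal illustration, not a real image model: a one-parameter-pair "generator" learns to mimic samples from a target Gaussian, while a logistic "discriminator" tries to tell real samples from fake ones. All hyperparameters and variable names here are assumptions chosen for demonstration.

```python
# Toy 1-D GAN sketch: generator g(z) = a*z + b vs. logistic discriminator
# D(x) = sigmoid(w*x + c). Real image GANs use deep networks, but the
# training loop has the same alternating structure.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0            # generator parameters (starts far from target)
w, c = 0.1, 0.0            # discriminator parameters
lr = 0.01
target_mu, target_sigma = 4.0, 1.0

for step in range(2000):
    real = rng.normal(target_mu, target_sigma, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # For binary cross-entropy, the gradient w.r.t. the logit is (D - label).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean((d_fake - 1) * w * z)   # chain rule through the logit
    b -= lr * np.mean((d_fake - 1) * w)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {target_mu})")
```

As training proceeds, the generator's offset `b` drifts toward the target mean: the only way to keep fooling the discriminator is to produce samples that look like the real distribution.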

Early GAN-generated images had low resolution, and their blurriness often gave them away. The ProGAN architecture overcame this limitation, raising the resolution to 1024×1024 pixels.

Another important model is StyleGAN, which generates faces of fictional people. Trained on a library of real photographic portraits, this network can produce convincing human images.

How are deepfakes detected?

More than two decades ago, the U.S. Department of Homeland Security, together with Google and other organizations, established a research laboratory based at the University at Buffalo. The laboratory serves as a center of excellence for the forensic examination of digital media such as photographs, videos and audio recordings. In their work, its scientists actively use methods from computer vision, machine learning, medical imaging and robotics.

Every year, the laboratory publishes several studies, including work on methods for detecting deepfakes. The researchers recognize that in the world of digital manipulation, scammers are constantly evolving too. The laboratory is therefore also engaged in counter-forensics, the study of techniques adversaries use to defeat forensic methods. It not only fights current forms of content manipulation but also tries to anticipate future digital crimes.

Examples of methods:

  1. Pupil reflection: A method based on analyzing the eyes. The algorithm compares the specular highlights on the corneas of both eyes: in a photo of a real person the highlights are consistent, since both eyes reflect the same light sources, while GAN-generated portraits often show mismatches.
  2. Contrasting areas: This method reveals the seams where sections of a photo were joined together. By analyzing contrasting areas, experts can show that the final image was assembled from separate source images.
  3. Mismatched facial reactions: This approach compares a deepfake's facial expressions with the natural movement of human facial muscles. By identifying inconsistencies in facial reactions, experts can detect and document artificial alterations.
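The pupil-reflection check from point 1 can be sketched in a few lines. This is a simplified illustration on synthetic data, not a production detector: it binarizes the brightest pixels of two eye crops and compares the resulting highlight masks with intersection-over-union. The function names, the brightness threshold and the toy 8×8 "eye crops" are all assumptions made for the example.

```python
# Compare specular highlights of the two eyes: consistent highlights
# suggest a real photo; a large mismatch is a sign of a generated face.
import numpy as np

def highlight_mask(eye_patch, thresh=0.9):
    """Binarize the brightest pixels of a normalized eye crop."""
    return eye_patch >= thresh

def highlight_iou(left_eye, right_eye, thresh=0.9):
    """Intersection-over-union of the two eyes' highlight masks."""
    a = highlight_mask(left_eye, thresh)
    b = highlight_mask(right_eye, thresh)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # no highlights detected at all
    return np.logical_and(a, b).sum() / union

# Toy 8x8 "eye crops": a consistent pair (highlight in the same place)...
real_left = np.zeros((8, 8)); real_left[2:4, 2:4] = 1.0
real_right = real_left.copy()
# ...and an inconsistent pair, as often seen in GAN-generated faces.
fake_right = np.zeros((8, 8)); fake_right[5:7, 5:7] = 1.0

print(highlight_iou(real_left, real_right))  # high overlap -> plausible
print(highlight_iou(real_left, fake_right))  # low overlap -> suspicious
```

A real pipeline would first detect the face, locate the eyes and normalize the crops; the comparison step, however, comes down to exactly this kind of mask overlap score.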

The US Defense Advanced Research Projects Agency (DARPA) has been developing the MediFor program and its successor, SemaFor, since the late 2010s.

These programs help analyze media content automatically and at scale and identify manipulations. Both fight fraud and mass disinformation. The developers strive not only to find and refute deepfakes that have already appeared in the media space: the main goal is to stop the generation and spread of false content at the source.

A team from Google and Jigsaw, together with Italian media forensics researcher Luisa Verdoliva, helped develop the deepfake detector FaceForensics++ (available on GitHub). The project includes an extensive dataset of synthetic videos and photos created by popular neural networks, and an algorithm trained on this dataset can recognize deepfakes. Importantly, the developers stress that the detector gives no absolute guarantee and currently detects only content created by certain well-known software packages.

A bit of history

New branches of forensic science have repeatedly emerged as technology developed, and media forensics is no exception. Here are a few cases where technological innovation brought new areas of the discipline to the fore.

Profiling: In the 1980s, FBI Special Agent John Douglas pioneered the systematization of information on serial offenders. The database he created made it possible to analyze the motives and actions of criminals. This approach led to the creation of a special FBI department to investigate serial murders.

Databases of genomic information: In the 1980s, a DNA analysis method was developed, which quickly found application in criminology. By the end of the 1990s, genetic databases had been created in many countries, speeding up investigations and helping to solve even complex cases.

Crime Series Database: Since the early 1970s, investigative journalist Thomas Hargrove has been collecting reports on murders. In 2010, he developed a program that identifies coincidences in various cases, helping police officers establish links between crimes and identify series.

I hope this article has prompted you to delve into the topic of media forensics and understand how to identify fakes.
