Artificial intelligence: using machine learning's weaknesses to find the flaws in fake videos


With the development of artificial intelligence technology, the production of fake news, especially Deepfake audio and video, has entered a low-cost, low-skill stage. "Fake news" has even become US President Trump's catchphrase for attacking the mainstream media. Some analysts believe the world may be entering a stage in which true and false information are difficult to distinguish.


"The biggest problem brought by fake audio and video is the collapse of the trust system, which will do great harm to politics, the economy and society. Although artificial intelligence can identify some fake audio and video, it is a case of 'as virtue rises one foot, vice rises ten': the fight against fake audio and video will be a long-term battle," said Lu Siwei, associate professor of computer science and director of the Computer Vision and Machine Learning Laboratory at the State University of New York at Albany.


Artificial intelligence-driven fake news spawns a crisis of trust


The rise and popularity of electronic media has multiplied the amount of news people receive every day to hundreds of times what it was ten years ago, yet all kinds of false information flood both traditional media and social media. The rapid development of artificial intelligence has further lowered the barrier to producing fake video news while steadily raising its fidelity. Last year, AI "face-swapping" grafted the face of "Wonder Woman" actress Gal Gadot onto an adult film actress, causing a sensation.


Professor Lu pointed out that most amateur fake video news is actually quite crude, and a moderately knowledgeable viewer can spot it. However, with the exponential growth of distribution platforms, information floods viewers like a deluge and quickly scatters their attention; combined with the low quality and resolution of many clips, audiences find it hard to tell true from false in a short time. Viewers who forward content reflexively, based on their own interests, further fuel the spread of fake audio and video. Psychology research at Columbia University shows that one result of the attention economy (the "eyeball economy") is that people's attention is divided ever more finely, leaving their focus on any one piece of information even shorter than a goldfish's.


Although after-the-fact corrections can help trace and clear the source, the enormous negative impact of a fake audio or video clip has already been done by the time it has spread, and is difficult to undo quickly. Moreover, the continual emergence of fake audio and video leaves governments and the parties involved scrambling to clarify the facts. Over time, the trust system of the whole society will fall apart, producing a public that believes no reported facts at all.


At the beginning of June this year, the New York Media Lab convened a group of practitioners from the media and academia for a "fake news horror show" to discuss the false propaganda and misinformation that new technologies such as artificial intelligence may generate. Justin Hendrix, executive director of the lab, said: "It only takes a few big hoaxes to convince the public that nothing is real." Professor Lu likewise believes that the biggest problem caused by fake news is that once news can be faked, real news can also be dismissed as "fake," ultimately leaving audiences in the dilemma of believing no news at all. For society, this is an unprecedented crisis of trust.


Using machine learning's weaknesses to find the flaws in fake videos

Professor Lu said that the "face-swapping" algorithm is trained on large numbers of facial images. It works somewhat like machine translation: just as translation converts text from one language into another, the face-swapping algorithm's "translation" object is the face. It extracts the expressions of the original face and then renders those same expressions on another face.
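The "translation" analogy above can be sketched in code. The classic Deepfake setup uses one shared encoder (which captures expression and pose) and a separate decoder per identity; swapping a face means decoding person A's latent code with person B's decoder. The sketch below is purely conceptual, with untrained linear maps and random vectors standing in for real networks and images; none of the names or dimensions come from Professor Lu's work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces" as flattened 8x8 grayscale patches (illustrative only).
face_a = rng.random(64)
face_b = rng.random(64)

# A shared encoder compresses any face to a small latent vector;
# each identity gets its own decoder. Real systems train these as
# deep networks; untrained linear maps are used here as stand-ins.
encoder = rng.random((16, 64))
decoder_a = rng.random((64, 16))
decoder_b = rng.random((64, 16))

def encode(face):
    return encoder @ face          # latent code: expression/pose

def decode(latent, decoder):
    return decoder @ latent        # identity-specific rendering

# "Face swap": take the expression captured in A's latent code,
# but render it with B's decoder, yielding B's face wearing
# A's expression.
swapped = decode(encode(face_a), decoder_b)
print(swapped.shape)  # (64,)
```

In a trained system, each decoder is fit to reconstruct only its own identity's faces, which is what forces the shared latent code to carry expression rather than identity.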


To find a technical means of identifying fake audio and video, Professor Lu's team analyzed the algorithms that generate fake videos through machine learning, built hundreds of fake-video models, and tried various detection methods. They eventually found that the people in fake videos essentially never blink. Normal human physiology is quite different: in face-to-face interaction, blinking is an unconscious behavior not under deliberate control, and a person generally blinks once every two or three seconds.
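The blink-rate observation above suggests a simple screening heuristic. The sketch below counts blinks in a per-frame eye-openness signal (such as the eye aspect ratio, EAR, computed from facial landmarks) and flags clips whose blink rate falls far below the human norm of roughly one blink every two to three seconds. This is an illustration of the idea only; the thresholds and the 6-blinks-per-minute floor are assumptions for the example, not values from Professor Lu's paper, which used a trained deep-learning detector rather than a fixed cutoff.

```python
import numpy as np

def count_blinks(ear_series, threshold=0.21):
    """Count blinks as runs where the eye-aspect-ratio (EAR)
    signal drops below a closed-eye threshold."""
    closed = ear_series < threshold
    # A blink starts where an open frame is followed by a closed one.
    starts = np.flatnonzero(closed[1:] & ~closed[:-1])
    return len(starts) + (1 if closed[0] else 0)

def looks_fake(ear_series, fps=30, min_blinks_per_minute=6):
    """Flag clips blinking far less than the human norm of roughly
    one blink every 2-3 seconds (~20-30/min). The 6/min floor is an
    illustrative cutoff, not a published value."""
    minutes = len(ear_series) / fps / 60
    return count_blinks(ear_series) / minutes < min_blinks_per_minute

# Synthetic example: 10 seconds at 30 fps, eyes open (EAR ~0.3)
# with brief closures every ~3 seconds.
ear = np.full(300, 0.3)
for start in (90, 180, 270):
    ear[start:start + 3] = 0.1

print(looks_fake(ear))                 # False: ~18 blinks/min, human-like
print(looks_fake(np.full(300, 0.3)))   # True: no blinks at all
```

In practice the EAR signal would come from a facial-landmark detector run on each frame; the synthetic array here just stands in for that front end.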


Professor Lu further explained that machine learning has no real "knowledge": everything it learns comes from its training data. A machine-learning model does not know that blinking is a normal human physiological trait. The first generation of fake videos was trained on large numbers of static photos found online, and the people in those photos have their eyes open, because no one wants to publish a picture of themselves with their eyes closed.
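The data bias described above can be made concrete with a toy calculation: if every training image shows open eyes, a model that simply reproduces the training distribution will (almost) never render a closed eye, so the generated video never blinks. The numbers below are illustrative, not measurements.

```python
# Hypothetical training set scraped from posed web photos:
# every image shows open eyes.
training_labels = ["open"] * 1000

def eye_state_prior(labels):
    """Fraction of training images with open eyes."""
    return labels.count("open") / len(labels)

p_open = eye_state_prior(training_labels)

# A 10-second clip at 30 fps generated by a model that matches
# the training distribution frame by frame.
frames = 300
expected_closed_frames = frames * (1 - p_open)
print(expected_closed_frames)  # 0.0 -> the fake never blinks
```

A real person in a 10-second clip would blink three to five times; a zero here is exactly the statistical fingerprint the detection algorithm exploits.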


The artificial intelligence war on fake video has a long way to go


Professor Lu's team exploited this weakness of machine learning to develop a new deep-learning algorithm that detects fake videos with a very high success rate. However, Professor Lu admits that in machine learning, countermeasures always breed counter-countermeasures: fake-video producers can in turn design more realistic videos, with blinking added, specifically to defeat the new algorithm.


Professor Lu said that the production processes of fake videos are generally kept secret and never made public, and huge political or economic interests stand behind them. Counterfeiting technology therefore often holds the advantage, staying ahead of detection technology. Professor Lu noted that his newly developed algorithm currently has an edge because it addresses a training-data problem that fake-video producers have not yet considered. After his paper was published, some hackers directly challenged Professor Lu by sending fake videos with blinking spliced in for testing, but none of these fake videos escaped the new algorithm's "eye."

To advance fake-video detection technology, Professor Lu has begun upgrading the training data of his deep-learning algorithms from static pictures to dynamic video, and is raising the detection rate on high-quality fake videos by setting up red and white teams for adversarial training. Professor Lu revealed that Jigsaw, a Google-owned company, has approached the team about cooperating.


Beyond continually improving the technical means, Professor Lu believes the most important thing in the fight against fake video is for the public to think independently and not forward content indiscriminately, refusing to contribute to the wide spread of fake videos, and for mainstream media and self-media alike to hold the line and insist on seeking the truth.

Release date: 2018/8/29 11:52:45
