Government to Block Election-Targeted AI Fake Videos, Unveils Deepfake Detection Model
Deepfake Threat and Government Response
With the June local elections just 84 days away, concerns are growing over the potential impact on the vote of fake videos manipulated with artificial intelligence (AI), known as deepfakes. Sophisticated false videos, crafted to resemble actual news reports or candidate speeches, could impair voters' ability to make sound judgments. In response, the South Korean government plans to actively prevent the spread of false information during the election period by developing an AI-based deepfake detection model.
During the last presidential election, the National Election Commission received 10,510 requests to remove deepfake videos, a roughly 27-fold increase over the 388 requests filed during the 2024 general election. The surge underscores how seriously deepfakes now threaten elections.
Enhanced Detection Model and Stricter Regulations
The newly developed AI detection model filters out fake videos by analyzing not only manipulated faces and bodies but also the authenticity of backgrounds and voices. Park Nam-in, a researcher at the National Forensic Service, explained that the model can detect various forms of false information, including document forgery. The model's detection accuracy has improved significantly, from 76% to 92%. Minister Yoon Ho-jung of the Ministry of Interior and Safety emphasized that distorted information can spread rapidly and undermine voters' sound judgment.
The government will provide the improved detection model to the National Election Commission, and the use of AI-generated videos in election campaigns will be completely prohibited starting 90 days before the election.
*Source: YouTube: KBS News (2026-03-10)*