
‘Use AI to counter AI’: Experts call for upgraded tech, system to counter AI-powered cybercrimes amid deepfake scandal

AI technologies Photo: VCG

Experts are calling for greater attention and countermeasures to prevent cybercriminals from abusing new technologies such as artificial intelligence (AI)-powered deepfakes, amid growing concerns over the issue around the world.

Numerous chat rooms suspected of creating and distributing deepfake pornographic material with doctored photos of ordinary women and female service members have reportedly been discovered on the messaging app Telegram recently, with many of the victims and perpetrators known to be teenagers, The Korea Times reported last week.

Telegram has removed certain deepfake pornographic content from its platform and apologized for its response to digital sex crimes, the Yonhap News Agency reported Tuesday, citing South Korea’s media regulator.

The issue sparked outrage among South Korean netizens, and the anger soon spread to China after some South Korean netizens brought the matter to Chinese social media platforms.

But this is just the tip of the iceberg of Telegram’s deepfake porn scandal. On August 28, a court in Paris filed charges against Pavel Durov, the 39-year-old Russian billionaire and founder of Telegram, for being complicit in the spread of images of child sexual abuse, as well as a litany of other alleged violations on the messaging app.

While Durov responded mockingly to the charges by changing his Twitter handle to “Porn King,” scientists, governments and regulators around the world view the issue as an urgent reminder to strengthen measures against cybercrimes powered by new technologies.

Deepfake refers to technology that uses a form of AI called deep learning to fabricate images of events that never happened, hence the name.

The core principle of deepfake technology is to animate 2D photos using specific image recognition algorithms, or to implant a person’s face from a photo into a dynamic video, The Beijing News reported, citing industry observer Ding Jiancong.
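
To make the principle concrete, here is a deliberately crude sketch of that pipeline shape using classical computer-vision tools: detect a face in a still photo and blend it onto a frame of a video. Real deepfake systems replace this rough blending with deep generative models; the file names and parameters below are hypothetical, and the snippet only illustrates why the basic mechanics require little expertise.

import cv2
import numpy as np

# Haar-cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return the first detected face box (x, y, w, h), or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return boxes[0] if len(boxes) else None

photo = cv2.imread("source_photo.jpg")        # still photo supplying the face (hypothetical path)
video = cv2.VideoCapture("target_video.mp4")  # video receiving the face (hypothetical path)
ok, frame = video.read()

src_box = first_face(photo)
dst_box = first_face(frame) if ok else None
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    x, y, w, h = dst_box
    patch = cv2.resize(photo[sy:sy + sh, sx:sx + sw], (int(w), int(h)))
    mask = 255 * np.ones(patch.shape, patch.dtype)
    center = (int(x + w // 2), int(y + h // 2))
    # Poisson blending pastes the photo's face over the face found in the video frame.
    swapped = cv2.seamlessClone(patch, frame, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("swapped_frame.jpg", swapped)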

Voice synthesis has also gradually been incorporated into the concept of deepfake. With the maturing of AI large-model technology in recent years, some AI image generation models, in their pursuit of greater realism, have inadvertently become accomplices in AI face-swapping or AI-generated nudity, Ding said.

For instance, the well-known large model Stable Diffusion was developed with a one-click nudity feature, which at one point spread widely. Although the related functions were later modified to curb such behavior, the open-source nature of the technology has already opened a “Pandora’s box” that is difficult to close again, Ding warned.

Apart from the new deepfake crimes, there are two other types of risks brought about by new technologies, Xiao Xinguang, chief software architect of Chinese cybersecurity company Antiy, told the Global Times.

First, new technologies will drive the escalation of traditional threats and risks. For example, in cyberattacks aimed at stealing information or targeted ransomware, AI technologies can significantly assist throughout the entire attack process, including enhancing the efficiency of discovering attack vectors and automating attack activities, according to Xiao.

Second, the infrastructure of new technologies will itself become a target of exploitation. Large model platforms are becoming new hubs for information assets, and the entry points of large model applications are becoming new exposed surfaces that are vulnerable to attack, Xiao said.

The expert believes that with the advancement of AI technology, it is unrealistic to stop people from using AI to generate fake videos or images; instead, it will be more effective to strictly regulate how the technology is disseminated.

Xiao was echoed by Zhou Hongyi, founder and chairman of 360 Security Technology. Speaking about the threats brought by AI technologies at a forum held in North China’s Tianjin Municipality on Wednesday, Zhou said that “we must use AI to counter AI.”

“AI technology is profoundly affecting various industries, bringing opportunities for the development of new productive forces but also many new security challenges. It is necessary to reshape security with AI, to build security-focused large models, and to remake security products with specialized large-model methodologies, which will transform the security industry,” Zhou said.
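
As a concrete, minimal sketch of what “using AI to counter AI” can look like, the snippet below sets up a small binary classifier that labels face crops as real or AI-generated and runs one training step on dummy data. The architecture, input size and training setup are illustrative assumptions, not any vendor’s actual security model; production detectors are far larger and are trained on curated real/fake datasets.

import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Tiny CNN that outputs logits for two classes: real vs. AI-generated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DeepfakeDetector()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for labelled face crops (8 RGB images, 224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))  # 0 = real, 1 = AI-generated

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

In practice, such a detector would sit behind a platform’s upload pipeline, flagging suspected synthetic media for the kind of content review the experts describe below.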

Strict regulations and laws are also necessary. AI technology platforms should review the content uploaded to and generated on them, and users should be required to register with their real names. There should also be severe crackdowns on tools and websites that support illegal activities, experts noted.
