Cybersecurity experts are concerned that AI-generated content could distort our perception of reality, a worry that is especially acute in a year of major elections.
However, one prominent expert disagrees, arguing that the threat deepfakes pose to democracy may be “overblown.”
Martin Lee, technical lead for Cisco’s Talos security intelligence and research group, told CNBC that deepfakes, while a powerful technology in their own right, do not have the same impact as fake news.
However, new generative AI technologies “threaten to make the generation of fake content easier,” he stated.
AI-generated content may contain telltale signs that it was not created by a human.
Visual content in particular has proven flaw-prone. AI-generated images, for example, may contain visual anomalies, such as a figure with more than two hands or a limb merged into the background of the image.
Synthetically generated voice audio can be harder to distinguish from clips of real people’s voices. Even so, experts note that AI is only as good as its training data.
“Nonetheless, machine-generated content can frequently be identified as such when read objectively,” Lee said. In any case, he believes the ability to generate content is unlikely to be a limiting factor for attackers.
Experts have already told CNBC that they expect AI-generated disinformation to be a major issue in upcoming elections around the world.
‘Limited usefulness’
Matt Calkins, CEO of corporate tech firm Appian, which helps organizations build apps more easily with software tools, believes AI currently has “limited usefulness.”
Many of today’s generative AI tools can be “boring,” he said. “Once it knows you, it can go from amazing to useful [but] it just can’t get across that line right now.”
“Once we’re willing to trust AI with knowledge of ourselves, it’s going to be truly incredible,” Calkins told CNBC in a recent interview.
That could make it a more effective — and dangerous — disinformation tool in the future, Calkins cautioned, adding that he is dissatisfied with the progress made so far in efforts to regulate the technology domestically.
He noted that it may take AI producing something egregiously “offensive” before U.S. lawmakers act. “Give us a year. Wait till AI offends us, and then maybe we’ll make the right decision,” Calkins said. “Democracies are reactive institutions.”
No matter how smart AI becomes, Cisco’s Lee says there are some tried-and-true methods for detecting misinformation, whether it was created by a machine or a human.
“People must be aware that these attacks are taking place and mindful of the tactics that may be used,” Lee said. When we come across something that stirs our emotions, he added, we should pause and consider whether the information is actually plausible.
“Has it been published by a reputable media outlet? Are other reputable media outlets reporting the same thing?” he asked. “If not, it’s probably a scam or disinformation campaign that should be ignored or reported.”
