Disingenuous actors are already using AI tools to support their operations. This isn't science fiction. Lachlan McGrath explores the AI-enabled disinformation threat landscape.


The KGB’s Operation DENVER (also known as Operation INFEKTION) was a 1980s disinformation operation that blamed the US military for developing HIV/AIDS as a bioweapon. Operation DENVER required years of activity before the disinformation narrative began to gain traction, much of it leveraging Soviet-aligned newspapers that had been developed as early as 1963.

By comparison, disinformation today can be produced in seconds, with accompanying photo or video “evidence”, at next to no cost, and spread across the internet either organically or with the help of semi-automated botnets.

Content produced by generative artificial intelligence (AI) tools such as ChatGPT is advancing quickly and will increasingly be used in disinformation operations. Disingenuous actors can now write an article with ChatGPT, substantiate it with images generated in DALL-E or Midjourney, and reinforce it with videos produced in Pictory or Synthesia. All of this can be done at little to no cost, with each output generated in seconds.

The ability to generate believable evidence quickly and at low cost will increase the effectiveness of disinformation operations and allow less well-resourced actors to mount sophisticated campaigns of their own.

This isn’t the world of science fiction; disingenuous actors are already using AI tools to support their operations.

On May 23rd 2023, an AI-generated image purporting to show an explosion at the Pentagon was tweeted by a fake Bloomberg News account and began circulating online. The Russian news outlet RT amplified the image, and it was also picked up by the Indian television news channel Republic TV. The image was free to create and was initially tweeted by a verified Twitter account, meaning the whole operation could have cost as little as $8. Despite that trivial outlay, the image caused a brief but noticeable dip in the US stock market as measured by the S&P 500.

On June 5th 2023, the Russian television channel Mir, formally the Interstate Television and Radio Company (MTRK), broadcast an AI-generated video of Russian President Vladimir Putin announcing martial law and military mobilisation in response to a non-existent invasion of Russia by Ukrainian forces. The AI-generated Putin urged listeners to evacuate deep into Russia to escape the Ukrainian advance. Mir later announced that its systems had been hacked and that for 37 minutes (12:41 to 13:18) the hackers were able to broadcast on the Mir network. It is unclear what impact the video had, but it demonstrates the novel disinformation capabilities offered by AI-generated content.

While tools like Photoshop have long allowed skilled users to doctor images, generative AI tools are distinguished by the speed, low cost, and potential scale of the disinformation they make possible.

Forms of media that have previously been difficult to fake, such as video and speech, can now be produced quickly and easily with low-cost tools. Disinformation operations using these new tools may be particularly impactful, as most people are less critical of video and audio content than they are of still images.

As these tools become more advanced, it will become more difficult to identify and disprove disinformation operations. There is currently no easy way to detect AI-generated content on social media at scale. Even if there were, AI-generated content has many legitimate uses, so detection alone would not identify disinformation. Identifying potential disinformation operations would still impose a significant cost on social media platforms, and not all platforms have demonstrated the same level of commitment to fighting disinformation.

It is reasonable to expect that as these kinds of AI-enabled disinformation operations become more common, people will become more wary of online information. Even this development can be exploited by bad actors: if evidence surfaces that incriminates a powerful individual, it becomes easier for them to claim the evidence was faked. This defence will be particularly potent for autocrats who control media organisations capable of delegitimising real evidence of wrongdoing.

The first step to fixing a problem is to identify it. AI-generated content is an emerging capability that will turbocharge disinformation operations. It is already being used in coordinated disinformation operations, and there is currently no way to identify or address such operations at scale.

Lachlan McGrath

Advisor
