OpenAI says Russian and Israeli groups used its tools to spread disinformation

On Thursday, OpenAI released its first-ever report on how its AI tools are being used in covert influence operations, revealing that the company had disrupted disinformation campaigns originating in Russia, China, Israel and Iran.

The malicious actors used the company’s generative artificial intelligence models to create and disseminate propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report.

As generative AI has become a booming industry, researchers and lawmakers have voiced widespread concern about its potential to increase the quantity and quality of misinformation online. AI companies like OpenAI, which makes ChatGPT, have tried to allay these concerns and place guardrails on their technology, with mixed results.

The 39-page OpenAI report is one of the most detailed accounts from an AI company of the use of its software for propaganda. OpenAI claimed that its researchers found and blocked accounts linked to five covert influence operations over the past three months, run by a mix of state and private actors.

In Russia, two operations created and disseminated content critical of the United States, Ukraine and several Baltic states. One operation used an OpenAI model to debug code and create a bot that was deployed on Telegram. The Chinese influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles attacking the United States and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts that created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

Many of the disinformation spreaders that OpenAI banned from its platform were already known to researchers and authorities. The US Treasury sanctioned two Russian men in March who were allegedly behind one of the campaigns OpenAI detected, while Meta also banned Stoic from its platform this year for violating its policies.

The report also highlights how generative AI is being incorporated into disinformation campaigns as a means of improving certain aspects of content creation, such as making more convincing foreign-language posts, but that it is not the sole tool of propaganda.

“All of these operations used artificial intelligence to some degree, but none used it exclusively,” the report stated. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet.”

While none of the campaigns produced any notable impact, their use of the technology shows how malicious actors have found that generative AI allows them to scale up the production of propaganda. Writing, translating and publishing content can now all be done more efficiently with AI tools, lowering the bar for creating disinformation campaigns.

Over the past year, malicious actors have used generative AI in countries around the world to try to influence politics and public opinion. Deepfake audio, AI-generated images, and text campaigns have been used to disrupt election campaigns, increasing pressure on companies like OpenAI to restrict the use of their tools.

OpenAI stated that it plans to periodically issue similar reports on covert influence operations, in addition to removing accounts that violate its policies.
