OpenAI is weeding out more bad actors using its AI models. In a first for the company, it has identified and removed Russian, Chinese, and Israeli accounts used in political influence operations.
According to a new report from the company’s threat detection team, OpenAI discovered and terminated five accounts engaged in covert influence operations, including propaganda-laden bots, social media scrubbers, and fake article generators.
“OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content,” the company wrote. “That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”
Terminated accounts include those behind a Russian Telegram operation dubbed “Bad Grammar” and accounts run by the Israeli company STOIC, which was found to be using OpenAI models to generate articles and comments praising Israel’s current military siege, which were then posted across Meta platforms, X, and elsewhere.
OpenAI says the group of covert actors was using a variety of its tools for a “range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts.”
In February, OpenAI announced it had terminated several “foreign bad actor” accounts found engaging in similarly suspicious behavior, including using OpenAI’s translation and coding services to bolster potential cyberattacks. That effort was conducted in collaboration with Microsoft Threat Intelligence.
As countries gear up for a series of global elections, many observers are keeping a close eye on AI-boosted disinformation campaigns. In the U.S., deepfaked AI video and audio of celebrities, and even presidential candidates, led to a federal call on tech leaders to stop their spread. And a report from the Center for Countering Digital Hate found that, despite electoral integrity commitments from many AI leaders, AI voice-cloning tools are still easily exploited by bad actors.
Learn more about how AI might be at play in this year’s election, and how you can respond to it.