OpenAI, the creator of ChatGPT, announced on Friday that it had removed accounts belonging to users in China and North Korea suspected of using its artificial intelligence technology for malicious purposes, including surveillance and influence operations. The company disclosed these actions in a report highlighting how authoritarian regimes could leverage AI against both the U.S. and their own populations.

OpenAI used its own AI tools to detect and disrupt these operations, though it did not specify how many accounts were banned or the timeframe of the activity.

In one case, users generated Spanish-language news articles through ChatGPT that criticized the United States. These articles were published by mainstream news outlets in Latin America under the byline of a Chinese company. In another instance, actors potentially linked to North Korea used AI to create fake resumes and online profiles for fictitious job applicants, aiming to infiltrate Western companies.

Additionally, accounts tied to a financial fraud operation based in Cambodia utilized OpenAI’s technology to translate and generate comments across social media platforms, including X (formerly Twitter) and Facebook.

The U.S. government has repeatedly raised concerns about China’s alleged use of AI to suppress dissent, spread misinformation, and undermine the security of the U.S. and its allies. OpenAI’s actions come as the company continues to grow, with its ChatGPT chatbot surpassing 400 million weekly active users.

The move underscores the growing challenges of regulating AI technology and preventing its misuse on a global scale.
