Scammers' Paradise: Spammers Exploit OpenAI's Chatbot to Target Over 80,000 Websites with Custom Messages

Researchers from SentinelOne have discovered a significant spam campaign that used an OpenAI chatbot to create unique messages. This campaign targeted over 80,000 websites and ran for four months, highlighting a troubling trend where cybercriminals exploit artificial intelligence for illegal activities.

According to a report by Ars Technica, the spam operation, known as AkiraBot, aimed to promote questionable search engine optimization (SEO) services to small and medium-sized websites. Using OpenAI's chat API with the gpt-4o-mini model, AkiraBot generated a tailored message for each site. This tactic allowed the spam to bypass filters that usually block identical content sent in bulk.

The AkiraBot framework operated by assigning the chatbot the role of a "helpful assistant" to create marketing messages. It provided prompts to the AI that included the name of the targeted website and a brief description of its services. This approach made each message seem personalized and less likely to be flagged as spam.
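As described above, each request paired a generic "helpful assistant" system role with a user prompt containing the target site's name and description. A minimal sketch of that request shape, assuming the structure of OpenAI's chat-completions payload (the function name and prompt wording here are illustrative assumptions, not the campaign's actual code):

```python
# Hypothetical sketch of the per-site prompt pattern reported by SentinelOne.
# The helper name and prompt text are assumptions for illustration only.

def build_chat_payload(site_name: str, site_description: str) -> dict:
    """Assemble a chat-completion request body with a site-specific user prompt."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            # The bot reportedly assigned the model the role of a "helpful assistant".
            {"role": "system",
             "content": "You are a helpful assistant that writes marketing messages."},
            # Embedding the target's name and description makes every message unique,
            # which is what defeated duplicate-content spam filters.
            {"role": "user",
             "content": (f"Write a short outreach message for {site_name}, "
                         f"a site offering {site_description}.")},
        ],
    }

payload = build_chat_payload("example.com", "handmade ceramics")
```

Because the model output varies with each site-specific prompt, no two messages are byte-identical, which is the property the researchers say made the spam hard to filter.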

SentinelOne researchers Alex Delamotte and Jim Walter pointed out the new challenges that AI presents in fighting spam. They noted that the rotating domains used to send these SEO promotions were the easiest way to identify the spam. Unlike in previous campaigns, the content of the messages varied significantly, making them harder to detect.

The scale of this campaign was revealed through log files left by AkiraBot, which tracked the success and failure rates of the messages sent. Between September 2024 and January 2025, unique messages reached more than 80,000 websites, while attempts to contact about 11,000 domains were unsuccessful.

OpenAI acknowledged the findings from SentinelOne and stated that this misuse of their chatbot violates their terms of service. They took action by revoking the spammers’ account after being informed. However, the fact that this activity went unnoticed for four months raises concerns about the effectiveness of current measures to prevent such abuse.

This incident serves as a reminder of the dual-edged nature of AI technology. While it has many beneficial uses, it can also be misused for harmful purposes. As AI continues to evolve, so do the tactics of those looking to exploit it for their gain.
