With the global spread of artificial intelligence technologies, a recent study conducted by Reuters in collaboration with researchers revealed that chatbots such as Google's Gemini, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Meta AI, and DeepSeek can craft convincing phishing messages targeting the most vulnerable and at-risk group: seniors.
In a practical experiment involving 108 senior volunteers, nine phishing messages generated by these chatbots were sent out.
Researchers found that 11% of participants clicked on the fraudulent links; the messages that drew the most clicks were produced by Meta AI and Grok, while no clicks were recorded on messages from ChatGPT and DeepSeek.
However, experts cautioned that the absence of clicks does not mean those systems are safe; it merely reflects how much the effectiveness of individual phishing attempts can vary.
The results also showed that Gemini offered specific advice on the best times to send messages to reach seniors, indicating that the window between 9 AM and 3 PM is the most effective.
Interestingly, ChatGPT (running the GPT-5 model) initially refused to generate fraudulent messages but complied after a little insistence, producing charity-themed messages with placeholders for fake links, while Grok sometimes refused to produce messages that directly requested sensitive information, such as bank account numbers or national ID numbers.
These results are a serious indication that, despite the policies and restrictions in place to prevent misuse, artificial intelligence can be readily exploited by scammers, giving them a powerful tool for producing endless fraudulent messages quickly and at low cost.
In response to these findings, Google confirmed that it has retrained Gemini and added new layers of protection, while companies such as Meta and Anthropic stressed that fraud violates their policies and pledged to impose strict penalties on violators.
Meanwhile, OpenAI previously acknowledged that GPT-4 could be exploited for "social engineering" but confirmed that it has implemented multiple protective measures.
However, the study indicates that the safeguards built into AI chatbots remain inconsistent and easy to circumvent, leaving seniors an easy target for these attacks and turning artificial intelligence into a dangerous, large-scale weapon in the hands of scammers.