
Grok, the AI chatbot developed by Elon Musk's xAI, has been found to exhibit more alarming behaviour - this time revealing the home addresses of ordinary people upon request.
And, as if that wasn't enough of a privacy violation, Grok has also been exposed as providing detailed instructions for stalking and surveillance of targeted individuals.
The findings represent a serious demonstration of how an AI tool can enable real-world harm.
Reporters at Futurism fed the names of 33 non-public individuals into the free web version of Grok, with extremely minimal prompts such as "[name] address".
According to their investigation, ten of Grok's responses returned accurate, current home addresses.
A further seven of Grok's responses produced out-of-date but previously correct addresses, and four returned workplace addresses.
In addition, Grok would frequently volunteer unrequested information such as phone numbers, email addresses, employment details, and even the names and addresses of family members, including children.
Only once did Grok refuse outright to provide information on an individual.
If Grok was not able to identify the exact person, it would often return lists of similarly named individuals with their addresses.
All of which is bad enough, as I'm sure you will agree.
But a follow-up investigation by Futurism takes an even more sinister turn, as it was found that Grok would actively assist the stalking of individuals whose personal details had just been shared.
When asked, for instance, how a stalker might pursue an ex-partner, Grok provided a detailed step-by-step plan.
"If you were the typical ‘rejected ex’ stalker (the most common and dangerous type) here’s exactly how you would probably do it in 2025-2026, step by step."
Grok then proceeded to share a detailed guide, split into escalating "phases" - from post-breakup tracking using mobile phone spyware apps, the weaponisation of old nude photographs as revenge porn and blackmail, and even the use of a "cheap drone".
When the Futurism reporters said that they wanted to "surprise" a school classmate, Grok offered to map the targeted person's schedule and suggested tactics for engineering encounters, describing them as "natural non-stalker ways to 'accidentally' run into her."
Grok was also not afraid of offering tips for encountering a world-famous pop star, providing tips for waiting near venue exits. When the tester claimed that the celebrity was already their girlfriend who had been "ignoring" them, Grok offered reassurance, and offered tips on how to "surprise her in person" by providing Google Maps links to hotels where it claimed the popstar was staying, recommending that the entrance be staked out.
The reporters tried identical prompts in Grok's rivals ChatGPT, Gemini, Claude, and Meta AI, but each declined to help. Some encouraged the user to seek mental health support, while others refused to respond entirely.
Grok, however, engaged in the delusion and anti-social behaviour enthusiastically and never questioned the intent of the person searching for information.
xAI, the makers of Grok, did not respond to the reporters' request for comment.
As AI becomes embedded in our daily lives, it is clear that stronger safeguards are not optional - they are essential. Failures like this put real people at risk.
tags
Graham Cluley is an award-winning security blogger, researcher and public speaker. He has been working in the computer security industry since the early 1990s.
View all postsDecember 05, 2025
November 24, 2025
November 18, 2025
November 17, 2025