In recent months, cybersecurity experts have been warning of a new threat: hackers exploiting artificial intelligence in sophisticated attacks.
While the trend seems to point toward “convincing” AI assistants to write malicious code, this time the danger lurks in another form: threat actors using fake ChatGPT apps to spread malware on Windows and Android.
Hackers are taking advantage of ChatGPT’s popularity by creating fake versions of the app and spreading them through various channels, such as social media and email.
Threat actors entice victims into downloading the rogue apps by promising uninterrupted, free access to the premium (paid) version of ChatGPT.
The perpetrators promote fake websites on social platforms and distribute apps through both the official Google Play Store and third-party Android app stores, says security researcher Dominic Alveri.
In one example, threat actors lured victims to a domain where they could download a fake ChatGPT client for Windows that would infect them with the Redline info-stealing malware.
However, Redline isn’t the only malware threat actors deploy in this deceitful campaign. As BleepingComputer reported, attackers use an array of info-stealers, such as Aurora and Lumma. They may also rely on phishing forms embedded in the rogue websites to which they lure their victims.
Once installed, these malicious apps can cause tremendous damage, ranging from stealing sensitive information and cryptocurrency from compromised devices to complete system takeovers in some cases.
To avoid falling prey to this scam, AI enthusiasts who frequent the ChatGPT website should remember that the service is exclusively online and doesn’t currently provide any official desktop or mobile client.
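Beyond avoiding unofficial clients altogether, one practical habit that blunts campaigns like this is verifying any downloaded installer against the checksum the vendor publishes on its official site before running it. The sketch below is a minimal Python illustration of that check; the file path and published hash in any real use would come from the vendor, not from this article.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks
    so large installers don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: str, published_hex: str) -> bool:
    """Compare a download's digest with the vendor-published value
    (case-insensitive, since sites print hashes in either case)."""
    return sha256_of_file(path) == published_hex.strip().lower()
```

If the digest doesn't match the published value, the file has been altered or replaced and should not be executed.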
Specialized software such as Bitdefender Ultimate Security can help you keep attackers at bay with its comprehensive library of features, including: