ChatGPT might be known to plagiarize an essay or two, but its rogue counterparts are doing far worse.
Duplicate chatbots with criminal capabilities are surfacing on the dark web and — much like ChatGPT — can be accessed for a modest monthly subscription or one-time fee.
These large language models, as they’re technically known, essentially serve as a tool chest for sophisticated online scammers.
Several dark web chatbots — DarkBERT, WormGPT and FraudGPT, the last of which goes for $200 a month or $1,700 annually — have recently caught the attention of cybersecurity firm SlashNext. The bots were flagged for their potential to generate phishing scams, phony texts and remarkably believable fake images.
The company found evidence that DarkBERT illicitly sold “.edu” email addresses at $3 apiece to con artists impersonating academic institutions. These are used to wrongfully access student deals and discounts on marketplaces like Amazon.
Another grift, facilitated by FraudGPT, involves soliciting someone’s banking info by posing as a trusted entity, such as the bank itself.
These sorts of swindles are nothing new, but are more accessible than ever thanks to artificial intelligence, warns Lisa Palmer, an AI strategist for consulting firm AI Leaders.
“This is about crime that can be personalized at a massive scale. [Scammers] can create campaigns that are highly personalized for thousands of targeted victims versus having to create one at a time,” she told The Post, adding that fraudulent, deepfake video and audio is now easy to create.
Moreover, these attacks don’t just pose a threat to the elderly and less-than-tech-savvy.
“Since [these kind of models] are trained across large amounts of publicly available data, they could be used to look for patterns and information that is shared about the government — a government that they are wanting to infiltrate or attack,” Palmer said. “It could be gathering information about specific businesses that would allow for things like ransom or reputation attacks.”
AI could also supercharge a major crime that the cybersecurity industry already struggles to defend against.
“Think about things like identity theft and being able to create identity theft campaigns,” Palmer said. “They are highly personalized at a massive scale. What you’re talking about here are taking crimes to an elevated level.”
Serving justice to those responsible for the outlaw LLMs won’t be easy, either.
“For those that are sophisticated organizations, it’s exceptionally hard to catch them,” Palmer said.
“On the other end of that, we also have these new criminals that are being emboldened by new language models because they make it easier for people without high-tech skills to enter illegal enterprises.”
Credits, Copyright & Courtesy: nypost.com