Scammers are reportedly making growing use of artificial intelligence, with the number of scams up 118% compared to last year. Criminals are said to be posting AI-generated job listings to dupe unsuspecting job seekers.

The Identity Theft Resource Center released its Trends in Identity Report, highlighting information gathered from victims throughout the year.

(Photo : Clem Onojeghuo from Unsplash)

Job and employment scams were found to be carried out predominantly on websites such as LinkedIn and other job search platforms. Scammers often pose as recruiters and produce phony job postings to entice people searching for work. Information provided during the application process is then stolen.

According to the report, the rapid improvement in the look, feel, and messaging of identity fraud is almost certainly due to the emergence of AI-driven tools.

The group notes that the best defense against this sophisticated new technology is considerably more low-tech: it advises consumers to pick up the phone and verify the contact directly with the source.


AI Scams in All Avenues

Recent statistics show that AI-generated scams are only increasing. Just last week, cybersecurity experts warned that AI poses a significant security risk after discovering that AI chatbots can easily be used to trick people.

Javvad Malik, KnowBe4's lead security awareness advocate, believes people will acclimate to artificial intelligence. That could make them less guarded, giving AI more influence over them.

Scientists claimed earlier this year that AI has mastered "deception" and learned ways to "cheat" people. Experts have also told media outlets that hackers may "manipulate" AI.

Malik warns that as people grow more comfortable with AI chatbots, they may become more trusting of every response. According to cybersecurity advocates, training, awareness, and education are required to protect against these threats.

The rapid progress of AI is a significant contributing factor. It is difficult for the typical person to stay current on these advances and be aware of the potential hazards, which, according to Malik, leaves everyday people exposed.

AI Scam Calls

Even more concerning are fresh revelations that AI bots can now help steal a user's login credentials by placing calls to their targets. They can even target users who have enabled two-factor authentication.

The attackers obtain the victim's credentials before placing the AI call, allowing the bots to intercept and steal the one-time password (OTP).

Fraudsters were found to pay around $420 each week, in Bitcoin, for subscriptions that give them access to AI bots that handle the calls. First, the con artists obtain an individual's login information, including usernames, email addresses, and passwords.

The threat actors then place a spoofed call prompting victims to read out their OTPs over the phone, which are immediately forwarded to the attacker's Telegram bot.


Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.