How AI is making phishing attacks more dangerous
Cybercriminals are using AI chatbots, such as ChatGPT, to launch sophisticated business email compromise attacks. Cybersecurity practitioners must fight fire with fire.
As AI’s popularity grows and its usability expands thanks to generative AI’s rapid pace of improvement, the technology is also becoming more embedded in threat actors’ arsenals.
To mitigate increasingly sophisticated AI phishing attacks, cybersecurity practitioners must both understand how cybercriminals are using the technology and embrace AI and machine learning for defensive purposes.
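On the defensive side, one common machine learning approach is to classify message text as phishing or legitimate. The following sketch assumes scikit-learn and uses hypothetical sample messages to show the basic idea only; production systems train on large labeled corpora and combine text features with sender, header and URL signals.

```python
# Minimal sketch: a simple phishing-text classifier with scikit-learn.
# The sample messages and labels are hypothetical placeholders; a real
# deployment trains on a large labeled corpus of email content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended. Click here to verify your password now.",
    "Urgent wire transfer needed today. CEO approval attached, keep confidential.",
    "Lunch meeting moved to 1 p.m. tomorrow, same conference room.",
    "Here are the Q3 budget figures we discussed on Monday's call.",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF text features feed a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please confirm your login credentials by clicking the link below."]
print(model.predict_proba(suspect))  # class probabilities: [legitimate, phishing]
```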
What are AI-powered phishing attacks?
Phishing attacks have long been the bane of security teams' existence. These attacks, which prey on human nature, have evolved from the days of Nigerian princes and rich relatives seeking beneficiaries into increasingly sophisticated campaigns that impersonate Amazon, the Postal Service, friends, colleagues and business partners, among others.
Often evoking fear, panic and curiosity, phishing scams use social engineering to trick unsuspecting users into clicking malicious links, downloading malware-laden files and sharing passwords and business, financial and personal data…