Artificial intelligence is already changing the way we interact with technology. But it can be challenging to identify where it can have the most operational impact. Use cases for AI are broad, but the technology works best when applied to specific tasks as a force multiplier for human teams. For many organizations, one of the most impactful AI investments they make will be in cybersecurity.
Cyberattacks are among the biggest risks facing a modern organization of any size. Our research identified an 8% increase in weekly cyberattacks worldwide in the first half of 2023 alone. Their impact can range from ransom payments to the shutdown of operations in important sectors of the economy, and even disruption to essential services, as we saw with the Colonial Pipeline breach.
Threat actors quickly adopt new technologies to exploit their targets more effectively—including and especially artificial intelligence. In 2021, when the Colonial Pipeline attack occurred, cybersecurity incidents resulted in a successful breach 18% of the time, according to the Verizon Data Breach Investigations Report. Since then, the success rate has escalated to more than 30%. As threat actors use AI to make themselves more effective, it’s essential that organizations everywhere evolve in tandem not just to respond to these threats, but to prevent them.
Threat actors and AI
Cyber threat actors are leveraging AI in ways that have a tremendous impact at cloud scale. This is perhaps most visible in social-engineering-based attacks.
According to KnowBe4, at least 70% of malicious breaches stem from social engineering or phishing attacks. That means attackers often don't exploit a technical vulnerability at all, but instead persuade users to surrender their legitimate access credentials, typically via an email that impersonates a legitimate sender and carries a malicious attachment. This attack vector has only grown more dangerous since the debut of generative AI models in 2022.
Threat actors are experts at finding malicious applications for technology advances, and ChatGPT is no exception. They discovered that despite its safeguards, they could easily use the tool to write convincing emails for phishing campaigns. Previously, many phishing emails contained obvious red flags: poor grammar, abnormal word choice, typos, and other deviations that raised suspicion. That fortunate last line of defense has disappeared as threat actors use generative AI to draft phishing lures that are grammatically flawless and often personalized. These models can also translate natural-language prompts into code, which attackers can abuse to build malicious files for deployment.
Generative AI lowers the barrier to entry across the entire attack life cycle, and the boom may be having an impact already: Our research shows that email-delivered attacks spiked in 2023, representing 86% of all file-based attacks we recorded. Other types of AI amplify threat actors' capacity further by automating attacks, finding vulnerabilities, managing botnets, and more. In short, attackers use artificial intelligence as a force multiplier.
Mitigating your risk optimizes your cyber resilience
Over the past several years, we've seen attacks on entities ranging from multinational companies to regional utilities and even individual schools and hospitals. A major share of these organizations have very limited cybersecurity expertise, and threat actors are nothing if not opportunistic. In the first half of 2023, for example, health care organizations experienced 1,634 cyberattacks per week, an 18% leap compared with the year before.
The financial impact of an attack can be serious and varied: Risks range from upfront ransom payments to leaks of commercially sensitive information, the cost of idle machinery, and a wide range of possibilities beyond. In some cases, lawsuits follow and generate settlements in the hundreds of millions of dollars. As claims rise and insurance companies recognize the scale of cyber risk, the industry has revised premiums to levels that are prohibitively expensive for most organizations.
At the same time, even the best-financed organizations can't be expected to staff security teams with the personnel and expertise required to confront the full scale of the modern threat environment without a force multiplier. That's where defensive AI comes in as an indispensable foundation for every organization. No matter what other technologies or innovations a company implements, it will always be at risk of a cyberattack that freezes operations or opens it up to potentially catastrophic liability.
Moreover, new technologies also serve as new entry points for malicious actors; we see this acutely with Internet of Things (IoT) devices. As cybercriminals adapt and grow more effective at using AI in their attacks, organizations must use AI to fight that threat from a prevention standpoint. Current suites of point products suffer from significant, avoidable blind spots and limited interoperability. Implementing a consolidated cybersecurity platform that uses AI, for example, to continuously refine proactive detection and remediation over time, or to identify abnormal behavior within strictly defined zero trust policies, dramatically strengthens cyber resilience against attacks of all kinds.
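To make the idea of flagging abnormal behavior concrete, here is a minimal sketch of the statistical baselining that underpins such detection: flag activity that deviates sharply from a user's established pattern. The user names, login counts, and three-sigma threshold below are illustrative assumptions, not any specific product's implementation; production platforms baseline many signals at once with far richer models.

```python
# A minimal sketch of behavioral anomaly detection: flag logins whose
# volume deviates sharply from a per-user baseline. The log format,
# user names, and 3-sigma threshold are illustrative assumptions,
# not any vendor's actual implementation.
from statistics import mean, stdev

# Hypothetical per-user history of daily login counts.
history = {
    "alice": [12, 10, 11, 13, 12, 11, 12],
    "bob":   [3, 4, 2, 3, 3, 4, 3],
}

def is_anomalous(user: str, todays_logins: int, threshold: float = 3.0) -> bool:
    """Return True if today's login count sits more than `threshold`
    standard deviations above the user's historical mean."""
    counts = history.get(user)
    if not counts or len(counts) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return todays_logins != mu  # flat baseline: any change is unusual
    return (todays_logins - mu) / sigma > threshold

# Example: a sudden burst of logins gets flagged for review.
print(is_anomalous("alice", 13))  # False: within normal variation
print(is_anomalous("bob", 40))    # True: possible credential misuse
```

The design point is the same one the platform approach makes at scale: once a baseline exists, deviations can be surfaced automatically and checked against policy before an attacker can act on stolen credentials.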
AI is leading to breakthroughs in commerce, health care, education, logistics, and other areas essential to our society. We can’t take these advances for granted by neglecting to protect them. Prevention-focused cybersecurity is achievable for organizations of all sizes through AI-enabled solutions. Establishing this kind of consolidated security posture is the next era of protection.
Rupal Hollenbeck is president of Check Point Software Technologies. Check Point is a partner of Fortune Brainstorm AI.