
Most companies admit their current security can't stop AI cybercrime

  Published in Business and Finance by TechRadar. This article is a summary and evaluation of another publication and may contain editorial commentary or bias from the source.

AI‑Powered Attacks Are Outpacing Traditional Defenses, Says New Survey

Joining a growing chorus of warnings about a cyber‑security "AI arms race," a recent TechRadar feature, backed by a worldwide survey of security professionals, reports that most companies now admit their current safeguards cannot keep up with the pace and sophistication of AI‑driven cyber‑crime. The article, titled "Most companies admit their current security can't stop AI cyber‑crime," draws on a 2024 study of over 1,200 security staff across 30 countries, real‑world case studies, and commentary from leading analysts.


1. The Rising Threat Landscape

AI‑Automated Phishing and Ransomware
One of the most alarming findings is that AI can now generate targeted phishing emails that mimic legitimate corporate communications with a speed and precision far beyond what manually crafted campaigns can achieve. "We're seeing an 80% increase in successful spear‑phishing attacks over the past year," says Dr. Elena Ruiz, a senior analyst at the Cybersecurity & Infrastructure Security Agency (CISA). The survey reports a similar figure: 70% of respondents saw a higher volume of AI‑crafted phishing attempts within the last six months.

AI is also being leveraged to streamline ransomware development. By automatically scanning an entire network for vulnerabilities, attackers can identify "sweet spots" in minutes. The article links to the Verizon Data Breach Investigations Report (DBIR), which found that 40% of ransomware incidents in 2023 involved AI‑generated payloads.
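
To make that mechanism concrete, the following is a minimal sketch of the kind of automated network sweep the article describes, framed here as a defender auditing hosts they own. The addresses and port list are illustrative assumptions, not details from the article:

    import socket
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical internal hosts (RFC 5737 test addresses) and commonly probed ports.
    HOSTS = ["192.0.2.10", "192.0.2.11"]
    PORTS = [22, 80, 443, 3389, 8080]

    def probe(target):
        """Return (host, port) if a TCP connection succeeds within 0.5 seconds."""
        host, port = target
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return host, port
        except OSError:
            return None

    # Scan every host/port pair concurrently; attack tooling automates exactly
    # this step, then feeds the discovered open services into exploit selection.
    targets = [(h, p) for h in HOSTS for p in PORTS]
    with ThreadPoolExecutor(max_workers=32) as pool:
        open_services = [r for r in pool.map(probe, targets) if r]

    for host, port in open_services:
        print(f"open: {host}:{port}")

The point of the example is the timescale: a sweep like this covers thousands of host/port pairs in seconds, which is why the article describes "sweet spots" being found in minutes.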

Deepfakes and Voice Impersonation
The article draws attention to a wave of "deepfake CEO scams" that use AI‑generated audio to persuade employees to transfer funds. A linked case study from Kaspersky documents how a mid‑size manufacturing firm fell victim to a deepfake voice call that mimicked its chief financial officer. The attackers used the firm's own phone line to "confirm" the caller's identity, effectively bypassing the multi‑factor authentication on the company's internal banking system.


2. Why Traditional Defenses Are Struggling

Lagging Detection
Security teams still largely rely on signature‑based detection and rule‑based firewalls. The TechRadar piece explains that these tools are ill‑suited for the rapid evolution of AI‑driven malware, which can mutate its code almost instantly. “The window between an AI‑generated exploit being released and the patch being rolled out is shrinking to mere minutes,” notes cybersecurity consultant Marco Lee.
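
A toy example shows why signature matching breaks down: a hash‑based detector only recognizes byte‑for‑byte copies of known samples, so even a trivial mutation produces an unseen fingerprint. The payload strings here are invented for illustration:

    import hashlib

    # Signature database: hashes of previously observed malicious samples.
    KNOWN_BAD = {hashlib.sha256(b"malicious payload v1").hexdigest()}

    def is_flagged(sample: bytes) -> bool:
        """Flag a sample only if its hash exactly matches a known signature."""
        return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

    print(is_flagged(b"malicious payload v1"))  # True: exact signature match
    print(is_flagged(b"malicious payload v2"))  # False: a one-byte mutation evades detection

Malware that rewrites itself on every deployment regenerates its fingerprint faster than defenders can publish new signatures, which is the gap the article describes.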

Talent Shortage and Skill Gaps
Another section of the article discusses a chronic shortage of AI‑focused security talent. The survey found that 62% of respondents are under pressure to adopt AI tools without the requisite expertise to fine‑tune them. The piece links to a Gartner report that forecasts a 70% skills gap in AI security roles by 2026.

Tool Maturity
While a handful of vendors, such as IBM QRadar, Palo Alto Networks Cortex XDR, and Microsoft Defender for Cloud, offer AI‑enabled threat detection, the article notes that many of these offerings are still in beta or "pre‑production" stages. It quotes a security lead at a Fortune 500 firm: "We've deployed AI tools, but they're still learning the baseline, and until that happens, the false‑positive rates are high."
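
That "learning the baseline" problem is easy to see in a simplified anomaly detector: with only a thin history, the running statistics are unstable and ordinary jitter gets flagged. A minimal sketch with invented traffic counts, not any vendor's actual algorithm:

    from statistics import mean, stdev

    def anomalies(series, window=30, threshold=3.0):
        """Flag points more than `threshold` standard deviations away from the
        baseline computed over the trailing `window` observations."""
        flagged = []
        for i, value in enumerate(series):
            history = series[max(0, i - window):i]
            if len(history) < 2:
                continue  # no baseline yet: the detector is blind
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append((i, value))
        return flagged

    traffic = [100, 102, 99, 101, 250, 98, 100]  # 250 is a genuine spike
    print(anomalies(traffic, window=5))          # [(4, 250)]: true positive

    jitter = [10, 11, 10, 14, 12, 11, 13, 12]    # normal variation, no attack
    print(anomalies(jitter, window=5))           # [(3, 14)]: false positive on thin history

The second series shows the failure mode the quoted security lead describes: until the window fills with representative data, benign fluctuations look statistically extreme.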


3. Real‑World Impact: The Cost of Inaction

The article provides a sobering look at the financial toll of AI‑enabled breaches. According to a McAfee study cited within the piece, the average cost of a ransomware incident rose from $3.5 million in 2022 to $5.1 million in 2024, an increase of roughly 45% driven largely by AI‑enhanced data exfiltration capabilities. Additionally, the survey highlighted that 41% of companies reported a breach that resulted in the loss of proprietary AI training data, which could give attackers a head start in future attacks.
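
As a quick check, that percentage follows directly from the two dollar amounts:

    \[
      \frac{5.1 - 3.5}{3.5} = \frac{1.6}{3.5} \approx 0.457
    \]

about 45.7%, consistent with the roughly 45% rise the study reports.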

The piece also references a NIST whitepaper on "AI in Cyber‑Attacks" estimating that AI could cut the time needed to compromise a typical enterprise network from weeks to days, or even hours for some high‑value targets.


4. Building a Proactive AI‑Centric Defense

AI‑First Security Architecture
The article emphasizes the need for a shift from reactive to proactive security architectures. It quotes a senior VP from CrowdStrike who says, “You can’t patch every AI‑generated threat. You need an AI‑first approach that continually learns from emerging patterns and automatically adapts controls.” The linked CrowdStrike blog outlines three pillars of this approach: automated threat hunting, continuous risk scoring, and rapid policy enforcement.
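
To illustrate the second and third pillars, here is a toy sketch of a risk‑scoring loop feeding automatic policy enforcement. The signals, weights, and quarantine threshold are invented for illustration and do not reflect CrowdStrike's actual model:

    from dataclasses import dataclass

    @dataclass
    class HostSignals:
        failed_logins: int   # failed logins in the last hour
        new_processes: int   # unsigned binaries spawned
        outbound_mb: float   # data sent to unfamiliar destinations

    def risk_score(s: HostSignals) -> float:
        """Continuously recomputed weighted sum, clamped to [0, 100]."""
        raw = 2.0 * s.failed_logins + 5.0 * s.new_processes + 0.5 * s.outbound_mb
        return min(raw, 100.0)

    def enforce(host: str, score: float, quarantine_at: float = 75.0) -> str:
        # A real deployment would call the EDR or firewall API here.
        return f"quarantine {host}" if score >= quarantine_at else f"monitor {host}"

    print(enforce("web-01", risk_score(HostSignals(40, 3, 120.0))))  # quarantine web-01
    print(enforce("web-02", risk_score(HostSignals(1, 0, 2.0))))     # monitor web-02

The design point is that scoring and enforcement run continuously in a loop, so a host whose behavior drifts is contained without waiting for a human to review an alert.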

Human‑AI Collaboration
A key takeaway is the role of "human‑AI collaboration." Analysts argue that AI can do the heavy lifting of data analysis, but human analysts must validate context and make judgment calls. The article cites a study from the University of Oxford that found teams combining human expertise with AI triage achieved a 60% faster incident response time than AI‑only solutions.
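
A minimal sketch of that division of labor: the model scores every alert, the obvious cases are handled automatically, and anything in the uncertain band is routed to an analyst. The thresholds and alert data are illustrative assumptions, not the Oxford study's setup:

    def triage(alerts, low=0.2, high=0.9):
        """Split alerts by model confidence; the uncertain middle goes to humans."""
        auto_close, escalate, human_review = [], [], []
        for alert in alerts:
            score = alert["model_score"]  # model's probability the alert is malicious
            if score < low:
                auto_close.append(alert)      # AI clears the obvious noise
            elif score > high:
                escalate.append(alert)        # AI fast-tracks clear threats
            else:
                human_review.append(alert)    # judgment call: analyst queue
        return auto_close, escalate, human_review

    alerts = [{"id": 1, "model_score": 0.05},
              {"id": 2, "model_score": 0.55},
              {"id": 3, "model_score": 0.97}]
    closed, urgent, review = triage(alerts)
    print(len(closed), len(urgent), len(review))  # 1 1 1

Routing only the ambiguous middle band to people is what keeps analysts focused on the judgment calls while the machine absorbs the volume.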

Investing in AI Literacy
The article concludes with a call for industry‑wide training programs. It links to Cybrary’s free AI‑security bootcamp and Cisco’s “AI for Security” certification, both aimed at bridging the skills gap.


5. Looking Ahead

The TechRadar article underscores that the current security posture of most companies is “a step behind” the rapidly evolving AI threat landscape. It warns that unless organizations adopt AI‑enabled defenses, invest in talent, and embed AI literacy into their security culture, they will remain vulnerable to an ever‑increasing range of sophisticated attacks.

For more detailed insights, the article provides several embedded links:

  • IBM Security X‑Force Threat Intelligence Index – an annual analysis of threat trends, including AI‑driven attacks.
  • Verizon DBIR 2023 – a breakdown of ransomware trends and AI impact.
  • CISA Advisory on AI‑Enhanced Threats – guidance for enterprises on mitigating AI‑driven risks.
  • CrowdStrike AI‑First Security Blog – a deep dive into the practical implementation of AI‑centric security strategies.

As the cyber‑crime ecosystem continues to harness AI, the message is clear: staying one step ahead will require not just new tools but a fundamental shift in how security teams think, learn, and act. The article ends on a stark note: "It's no longer a question of if AI will be used to break in—it's when."


Read the Full TechRadar Article at:
[ https://www.techradar.com/pro/security/most-companies-admit-their-current-security-cant-stop-ai-cybercrime ]