Auto-translation used

Artificial intelligence and cybersecurity in 2025: a double challenge for organizations

With the advent of generative AI and the ubiquity of LLM models (Large Language Models), cybersecurity is undergoing a transformation. Artificial intelligence has become both a defense tool and a weapon in the hands of intruders. In 2025, this is no longer a hypothesis, but a reality faced by both public and private entities.

According to MIT Technology Review, in 2025, more than 40% of phishing emails are created using LLM, which makes them virtually indistinguishable from real correspondence.

Key threats related to the use of AI:

  • Automation of phishing and social engineering: generation of personalized emails, voice messages and deepfake videos for attacks on employees.
  • Bypassing traditional protection systems: neural networks create malicious code that is not recognized by classical signature antiviruses.
  • Hacking CAPTCHA, MFA, and authentication systems: using computer vision and recognition models.
  • Attacks on machine learning models: poisoning (data poisoning), injection of scripts and substitution of input data.

In response to challenges, organizations are increasingly integrating AI solutions into incident detection and response systems (SOC, SIEM, SOAR).

The most effective AI applications:

  • ✅ Real-time Anomaly Analysis (UEBA): Identification of atypical user and device actions.
  • Advanced threat hunting: detecting attacks before they are deployed using behavioral correlation.
  • Automatic classification and prioritization of incidents: taking into account the business context and MITRE ATT&CK.
  • Auto-generation of remediation recommendations based on an analysis of previous incidents.

According to Gartner, by 2026, more than 60% of organizations will use AI to enhance threat detection mechanisms.

  1. Include AI risks in the cyber risk profile. Industrial injections, malicious script generation, and model compromise are real threats.
  2. Monitor the use of LLM within the organization. Create internal policies for using ChatGPT, Gemini, and other models.
  3. Audit all AI services and connected APIs. Special attention is paid to cloud integrations and third—party developments.
  4. Implement AI Threat Intelligence tools. The new platforms allow tracking the abuse of public LLMs to generate attacking promts.
  5. Conduct employee training. Topics: protection from deepfake, countering social attacks, conscious use of AI tools.

AI in cybersecurity is a new reality. The winner is the one who knows how to use it one step ahead of the attackers. Companies that are already implementing AI-Augmented Security today will be able not only to protect themselves, but also to create a competitive advantage against the backdrop of global threat growth.

#Astanahub #Cybersecurity #AIsecurity #AI #Informationsecurity #LLM #MITRE #SOC #SOAR #ZeroTrust #AIinCyber #DataProtection #DigitalKazakhstan

Comments 0

Login to leave a comment