Companies are integrating AI technologies at a rapidly growing pace. This trend not only increases the attack surface, but also creates additional attack vectors that can be used to compromise corporate security.
But attackers also use AI to refine and expand their methods. Groups linked to North Korean (DPRK) IT worker teams are using AI models to generate materials for fraudulent IT and software development job applications. Attackers have also used AI models to create Windows-based malware, building command-and-control (C&C) infrastructure, distributing it through public repositories, and at times imitating legitimate tool sets.
The operational risks associated with the introduction of AI are continually evolving, so companies must adapt their security strategies; only then can they deal effectively with these threats. This process has two complementary parts: companies improve their security posture and strengthen cyber resilience by integrating AI into central security systems, while at the same time using platform-driven solutions to coordinate security controls.
AI integration in Zero Trust
Zero trust architecture – combined with AI – reduces attack surfaces, identifies anomalies and prevents data loss. It assesses the context and risks of the environment and thus addresses modern security challenges. The most important elements include:
- Create transparency across all digital assets, including AI models and datasets, to monitor behavior and manage risk. This enables companies to detect abnormal behavior and threat-based risks and to immediately defend against AI-induced threats.
- Apply detailed access control policies based on principles such as least privilege (PoLP) and just-in-time access to restrict access as needed. By segmenting access and using non-persistent permission models, organizations can limit access to company resources – for example, to a specific purpose and/or time period. This reduces the attack surface. AI helps map permission models effectively across a wide range of corporate assets and identities – both human and non-human – so that the access rights required for each task can be defined precisely.
- Continuously assess asset security, security controls and AI systems to identify new threats and vulnerabilities and to update policies. AI makes it easier for the Zero Trust framework to identify vulnerabilities, misconfigurations, exploitable default settings, weak credentials and supply chain exploits, and to surface blind spots that attackers could exploit to gain unauthorized access to the corporate environment.
- Inspect application traffic and apply real-time policies to block threats. The Zero Trust model continually evaluates the context of each access attempt, taking into account factors such as user behavior, device status, location and access time. AI speeds up this assessment of context and the associated risks, accelerating decisions and identifying compromises more quickly.
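The context-based, just-in-time access decision the list describes can be sketched in code. The signals, risk weights, threshold and 30-minute grant window below are illustrative assumptions only; a real deployment would replace the static weights with an AI model trained on behavioral telemetry:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical risk weights for illustration; a production system would
# score context with a trained model rather than fixed rules.
RISK_WEIGHTS = {"unmanaged_device": 0.4, "unusual_location": 0.3, "off_hours": 0.2}
DENY_THRESHOLD = 0.5

@dataclass
class AccessRequest:
    user: str
    resource: str
    unmanaged_device: bool
    unusual_location: bool
    off_hours: bool

@dataclass
class Grant:
    allowed: bool
    expires_at: Optional[datetime]  # non-persistent: grants expire (just-in-time)

def evaluate(request: AccessRequest, now: datetime) -> Grant:
    """Score the request context; deny high-risk requests, otherwise
    issue a short-lived grant in line with least privilege."""
    score = sum(
        weight
        for signal, weight in RISK_WEIGHTS.items()
        if getattr(request, signal)
    )
    if score >= DENY_THRESHOLD:
        return Grant(allowed=False, expires_at=None)
    # Access is granted only for a limited window, then re-evaluated.
    return Grant(allowed=True, expires_at=now + timedelta(minutes=30))
```

The key design point is that no grant is permanent: every access attempt is re-scored against its current context, which is the continuous evaluation the Zero Trust model calls for.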
Proactively integrating AI into a Zero Trust architecture supports incident triage, improves detection accuracy, simplifies threat hunting, and helps address the growing complexity and volume of cyber threats. Security analysts can scale their work, analyze defenses and respond to advanced attack techniques.
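The triage support mentioned above can be illustrated with a deliberately simple statistical stand-in for the ML anomaly models such systems use; the function name and z-score thresholds are hypothetical choices for this sketch:

```python
from statistics import mean, stdev

def triage_priority(baseline_counts: list[int], latest: int) -> str:
    """Rank an alert by how far the latest event count deviates from its
    historical baseline (z-score). A real system would use a trained
    anomaly model over many signals, not a single univariate statistic."""
    baseline = mean(baseline_counts)
    spread = stdev(baseline_counts)
    z = (latest - baseline) / spread if spread else 0.0
    if z > 3:
        return "high"      # far outside normal behavior: triage first
    if z > 1.5:
        return "medium"
    return "low"
```

Even this toy version shows the value for analysts: instead of reviewing every alert in arrival order, they get a ranked queue and can spend their time on the outliers.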
“When integrating AI into cybersecurity, the balance between automation and human control must be maintained. Particular attention is paid to the issues of privacy, transparency, bias and unintended consequences of AI decisions.”
Umashankar Lakshmipathy, Infosys
Human control and AI in balance
When integrating AI into cybersecurity, the balance between automation and human control must be maintained. Particular attention must be paid to data protection, transparency, bias and the unintended consequences of AI decisions. It is also important to adhere to ethical frameworks: accountability and responsible AI practices must be regulated to ensure trustworthy and ethical use of AI in cybersecurity. This also helps leverage advances in AI technology – such as explainable AI and the sharing of AI-based threat intelligence in collaboration with researchers, experts and policymakers – to address new challenges.
About the author:
Umashankar Lakshmipathy is EVP and Head of Cloud, Infrastructure, and Security Service, EMEA, at Infosys.
The authors are responsible for the content and accuracy of their contributions. The opinions presented reflect the views of the authors.
