
Artificial intelligence (AI) is being introduced to just about every facet of life these days. AI is being used to develop code, communicate with customers, and write in various media. Cyber security, and product security in particular, is another place where AI can have a significant impact. AI is being built into security tools and, on the flip side, is being put to work by attackers for exploitation. AI is now mainstream and won't be going away anytime soon, so security professionals need to learn how to best use it to enhance the security of their systems and products.

AI and its implications for security

The term "artificial intelligence" refers to using computer systems to simulate human intelligence. AI systems are able to perform a growing variety of tasks, such as pattern recognition, learning and problem solving. Within AI there are different fields like machine learning (ML), which enables systems to learn and improve over time; natural language processing (NLP), which attempts to mimic human speech; computer vision, which utilizes cameras as input to perform various tasks, and more.

These applications of AI are being woven into a vast array of systems to automate, analyze and improve current processes. Within the world of cyber security, AI is filling—or assisting with—a number of roles and processes. It's being used to analyze logs, predict threats, read source code, identify vulnerabilities and even to create or exploit vulnerabilities.

Using AI to detect cyber security attacks

Considering AI's proficiency in pattern recognition, detecting cyber security anomalies is an obvious use case for it. Behavior anomaly detection is a good example of this. Through the use of machine learning, a model can identify what normal behavior within a system looks like and single out any instances that deviate from the norm. This can help identify potential attacks, as well as systems that are not working as intended, by catching outliers in their behavior.

Even user behavior that might be an issue, such as accidental data leaking or exfiltration, can potentially be discovered through AI pattern recognition or other mechanisms. Datasets either produced or consumed by the organization can also be used to watch for patterns and outlier behavior on a broader scale, in an attempt to gauge the likelihood of the organization being targeted by the kinds of cyber security incidents happening throughout the world.

Use case 1: Anomaly detection

Anomaly detection—the identification of unusual, rare, or otherwise anomalous patterns in logs, traffic, or other data—is a good fit for the pattern recognition power of ML. Whether it's network traffic, user activities, or other data, given the right algorithm and training, AI/ML is ideally suited for spotting potentially harmful outliers. This can be done in a number of ways, starting with real-time monitoring and alerting. This method starts with preset norms for a system, such as network traffic, API calls or logs, and can employ statistical analysis to continuously monitor system behavior and actions. The model is able to trigger an alert anytime anomalous or rare actions are discovered.
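As a concrete illustration, here is a minimal sketch of behavior-based anomaly detection, assuming scikit-learn is available. The feature names and numbers are invented for illustration, and a real deployment would train on far more data and tune the model carefully.

    # Minimal sketch: learn "normal" per-host behavior, then flag outliers.
    # Assumes hypothetical features [requests_per_min, bytes_out, error_count]
    # have already been extracted from logs.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    baseline = np.array([
        [120, 45_000, 2],
        [115, 47_000, 1],
        [130, 50_000, 3],
        [125, 46_500, 2],
    ])

    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(baseline)

    new_events = np.array([
        [122, 46_000, 2],      # looks like the baseline
        [900, 2_000_000, 40],  # sudden spike in traffic and errors
    ])
    # predict() returns -1 for outliers, 1 for inliers.
    for event, label in zip(new_events, model.predict(new_events)):
        if label == -1:
            print("ALERT: anomalous behavior", event.tolist())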

Not only is AI/ML great at spotting patterns, it is also able to categorize and group them. This is essential for assigning priority levels to various events, which can help prevent "alert fatigue." Alert fatigue sets in when a user or team is inundated with alerts, many of which may be little more than noise. What often happens is that the alerts lose their importance, and many, if not all, are treated as noise and not properly investigated. Using these capabilities, AI/ML is able to provide intelligent insights, helping users make more informed choices.
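To make the grouping idea concrete, here is a minimal sketch that clusters similar alert messages so reviewers see a couple of summaries instead of a stream of near-duplicates. It assumes scikit-learn is available, the alert text is invented, and the fixed cluster count is a simplification; a real system would choose groupings and priorities far more carefully.

    # Minimal sketch: group textually similar alerts to reduce noise.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    alerts = [
        "failed ssh login for root from 203.0.113.7",
        "failed ssh login for admin from 203.0.113.7",
        "failed ssh login for root from 203.0.113.9",
        "unusual outbound transfer of 2GB to unknown host",
    ]

    vectors = TfidfVectorizer().fit_transform(alerts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Collapse each cluster into one summary line for the reviewer.
    for cluster in set(labels):
        members = [a for a, l in zip(alerts, labels) if l == cluster]
        print(f"{len(members)} similar alert(s), e.g.: {members[0]}")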

Use case 2: AI-assisted cyber threat intelligence

The ability to monitor systems and provide real-time alerts can be vital, but AI/ML can also be used to help enhance the security of systems before a security event takes place. Cyber threat intelligence (CTI) works by collecting information about cyber security attacks and events. The goal of CTI is to stay informed about new or ongoing threats so that teams can proactively prepare for an attack on the organization before it takes place. CTI also provides value in dealing with existing attacks by helping incident response teams better understand what they are dealing with.

Traditionally, the collection, organization and analysis of this data was done by security professionals, but AI/ML is able to handle many of the routine or mundane tasks and help with organization and analysis, letting those teams focus on the decisions that need to be made once they have the necessary information in an actionable format.
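As a small illustration of the routine side of that work, the sketch below pulls basic indicators of compromise out of a free-text advisory so they end up in a structured form an analyst or a model can work with. The report text is invented, and this uses simple pattern matching rather than ML; it is exactly the kind of mundane step that AI-assisted CTI platforms aim to absorb and enrich.

    # Minimal sketch: extract indicators of compromise from a text report.
    import re

    report = """
    The campaign used the domain bad-updates.example.com and the IP 198.51.100.23.
    Payloads matched the SHA-256 hash
    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.
    """

    ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", report)
    domains = [d for d in re.findall(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", report, re.I)
               if d not in ips]
    hashes = re.findall(r"\b[a-f0-9]{64}\b", report, re.I)

    print({"ips": ips, "domains": domains, "sha256": hashes})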

Using AI to prevent vulnerabilities

While leveraging AI/ML to detect and prevent cyber security attacks is valuable, preventing vulnerabilities in software is also hugely important. AI assistants in code editors, build pipelines and the tools used to test or validate running systems are quickly becoming the norm in many facets of IT.

As with CTI, AI systems can help alleviate mundane tasks, freeing humans to spend more time working on more valuable projects and innovations. Code reviews, while important, can be improved by leveraging Static Application Security Testing (SAST). While SAST platforms have existed for some time now, their biggest issue is the often large quantity of false positives they generate. Enter AI/ML’s ability to take a more intelligent look at source code, along with infrastructure and configuration code. AI is also starting to be used to run Dynamic Application Security Testing (DAST) to test running applications to see if common attacks would be successful.

Use case 3: AI-assisted code scanning

SAST has long used a "sources and sinks" approach to code scanning. This refers to tracking the flow of data from the places where untrusted input enters the code (sources) to the places where that data is used (sinks), looking for common pitfalls. The various tools produced for static code scanning often use this model. While this is a valid way to look at code, it can lead to many false positives that then need to be manually validated.
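To illustrate the idea, here is a heavily simplified sketch of source-to-sink tracking over Python code using the standard ast module. The source and sink lists are invented for illustration, and real SAST tools track data flow across functions, files and frameworks far more rigorously.

    # Minimal sketch of "sources and sinks": flag variables assigned from a
    # source call (input) that are later passed to a sink call (os.system).
    import ast

    SOURCES = {"input"}      # calls that introduce untrusted data
    SINKS = {"os.system"}    # calls where untrusted data becomes dangerous

    code = """
    import os
    cmd = input("command: ")
    os.system(cmd)
    """

    def call_name(node):
        if isinstance(node.func, ast.Name):
            return node.func.id
        if isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
            return f"{node.func.value.id}.{node.func.attr}"
        return ""

    tainted = set()
    for node in ast.walk(ast.parse(code)):
        # Mark variables assigned directly from a source call as tainted.
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            if call_name(node.value) in SOURCES:
                tainted.update(t.id for t in node.targets if isinstance(t, ast.Name))
        # Report any sink call whose argument is a tainted variable.
        if isinstance(node, ast.Call) and call_name(node) in SINKS:
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id in tainted:
                    print(f"line {node.lineno}: tainted '{arg.id}' reaches {call_name(node)}")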

AI/ML can provide value here by learning and understanding the context or intent around possible findings in the code base, reducing false positives and false negatives. Not only that, but both SAST tools and AI assistants have been added to code editors, helping developers catch those errors before they are ever submitted. There are a few limitations, however, including language support and scalability with very large code bases, but these are quickly being addressed.

Use case 4: Automated discovery of vulnerabilities

Code reviews can be a time-consuming process, but once that code is submitted, testing doesn't usually end there. DAST is used to test common attacks against a running application. There are a few tools on the market that do this well, but, like coding itself, there is some ramp-up time involved. A user needs to understand these attack types, how to replicate them through the DAST tool, and how to automate them.
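The sketch below shows the kind of single check a DAST tool automates at scale: send a probe to a running application and look for evidence that the attack would work. The target URL, parameter name and payload are placeholders, and this only covers one trivial reflected case; a real scanner runs many payloads across many parameters and attack classes.

    # Minimal sketch: probe one parameter of a test instance for reflected XSS.
    import requests

    TARGET = "http://localhost:8080/search"   # hypothetical test instance
    PROBE = "<script>alert('dast-probe')</script>"

    response = requests.get(TARGET, params={"q": PROBE}, timeout=10)
    if PROBE in response.text:
        print(f"Possible reflected XSS: {TARGET} echoes the probe unescaped")
    else:
        print("Probe not reflected; no finding from this check")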

Recently, DAST and related application testing tools have begun to implement AI/ML, either directly in their platforms or as plugins, allowing for greatly improved automated scanning. Not only does this free up staff who would otherwise need that ramp-up time and the time to run the different attacks, it can also reduce the time and money spent on full-blown penetration testing. Penetration testing still very much requires a human who is capable of thinking like an attacker, recognizing potential weaknesses, and often devising novel ways to verify that they are indeed exploitable.

Protecting AI itself

Although AI can help eliminate many human errors, AI systems themselves are still susceptible to problems. First there is the bane of many IT systems: poor or improper configuration. Closely related is the need to train and validate the model and its processes securely. Failure to do so can quickly lead to a system that is not well understood by its users, creating a kind of black box and a poor model lifecycle management process.

One of the most commonly discussed security concerns related to AI is data poisoning. Human beings often collect data that is then used to train AI/ML algorithms, and as humans, we can introduce bias into the data. This is a simple enough concept to watch out for, but sometimes that bias is added on purpose. Attackers, through various mechanisms, can intentionally poison the dataset used to train and validate AI/ML systems. It is then conceivable that the new biased output from the system can be used for nefarious purposes.
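The toy example below shows how little it can take. A simple spam classifier is trained twice on the same invented messages, once with correct labels and once after an attacker flips the label on a single spam example; in this toy setup the poisoned model lets a spam-like probe message through. It assumes scikit-learn is available and is only meant to illustrate the concept, not a realistic attack.

    # Minimal sketch of label-flipping data poisoning against a spam filter.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = [
        "meeting moved to 3pm",            # legitimate
        "quarterly report attached",       # legitimate
        "click here to claim your prize",  # spam
        "verify your account now",         # spam
    ]
    clean_labels = [0, 0, 1, 1]       # 1 = spam
    poisoned_labels = [0, 0, 0, 1]    # attacker relabels one spam example as legitimate

    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(messages)
    probe = vectorizer.transform(["click here to claim your prize now"])

    for name, labels in (("clean", clean_labels), ("poisoned", poisoned_labels)):
        model = MultinomialNB().fit(features, labels)
        verdict = "spam" if model.predict(probe)[0] == 1 else "legitimate"
        print(f"{name} training data -> probe classified as {verdict}")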

As AI quickly becomes more mainstream, our understanding and training are lagging behind, especially security training around AI/ML. The inner workings of AI/ML systems are not well understood by many outside the tech community, and this can get worse if systems are neglected and lack transparency.

This leads to another fairly common problem in technology: a lack of proper documentation. Systems require documentation that is easy to understand and comprehensive enough to cover the great majority of the system in question.

Finally, governments around the world are discussing and planning (or even already making) regulations related to AI/ML systems. It's not inconceivable that secure AI/ML certifications will be developed, so doing what we can to make sure that systems being developed today are as secure and valid as possible will likely save work down the road.

Final thoughts

As we become more and more dependent on AI systems, the speed and accuracy of machine learning in securing the systems we use won't just be a "nice to have," but will increasingly become a "must have." It is all but a guarantee that bad actors will use AI/ML systems to conduct their attacks, so defenders will need to implement these systems to help protect their organizations and systems.

Ideally, students getting ready to enter the workforce will learn about AI/ML systems, but the grizzled veterans will need to embrace this as well. The best thing individuals can do is make sure they have at least a basic understanding of AI, and the best thing organizations can do is to start looking at how they can best leverage AI/ML in their products, systems and security.

How Red Hat can help

Red Hat OpenShift AI can help build out models and integrate AI into applications. For organizations in the security space, OpenShift AI can help you build the power of AI into your products. AI-enabled applications are only going to become more prevalent, and OpenShift AI is a powerful, scalable AI development platform that can help bring those applications to production.

Try Red Hat OpenShift AI


About the author

I'm a long time enthusiast of both cyber security and open source residing in the United States. From a young age, I have enjoyed playing with computers and random tech. After leaving the U.S. Army, I decided to pursue my studies in computer science, and focused much of my attention on application security. I joined Red Hat in 2023, and work with engineering teams to improve the security of the applications and processes. When I am not working, studying, or playing with my home lab, I enjoy off-roading with a local Jeep club.
