To paraphrase the film “Zoolander,” AI is so hot right now. Tools like ChatGPT and Midjourney have rocketed AI to the forefront of the average person’s mind more than any other technological advancement of the past few years. As we’ve seen in the past, whenever a piece of technology becomes that firmly rooted in the zeitgeist, there’s a veritable gold rush of people looking to take advantage of it. And just like the gold rush of 1848, the adoption of AI is creating both positive and negative impacts on risk for organizations across the globe.
AI as Risk
The risks created by AI tools may not be readily apparent, which can make them all the more insidious. The first increase in risk comes from a social engineering perspective. Over the years I have seen more than my fair share of phishing emails attempting to entice me to hand over sensitive information. Typically there are tell-tale signs, such as awkward grammar or misspelled words, that give them away as fake. But because AI models like ChatGPT are trained on real-world data, they can quickly create very convincing emails, improving the odds of catching people unaware. A quick sample I created even included a spot to “[insert a link to a fake Netflix website]”.
We’ve also already seen examples of AI being used to generate actual malware, allowing even those with little technical know-how to easily create exploits. These are the more obvious cyber risks from AI. However, there’s a deeper risk that I feel isn’t getting enough attention: AI-generated code.
One of the worst-kept secrets in software development is how much code is reused. Sites like StackOverflow and GitHub contain volumes of source code that make their way into commercial software via developers who are rushed for time and looking for a quick fix. While on the surface this seems “mostly” harmless, it also serves as a way for code with security vulnerabilities to spread far and wide. As we’ve seen in the recent past with the Log4j, SolarWinds, and Kaseya incidents, a piece of vulnerable code somewhere in your software supply chain can put your entire application (and organization) at risk.
Now, if we look into the not-so-distant future, as more development teams rely on AI to stay ahead of tighter deadlines and a shortage of skilled coders, we can see this problem only growing. Any AI used to generate code will need to have been trained on large volumes of existing code, and unless that training data is thoroughly vetted and sanitized, it will include large amounts of code containing security vulnerabilities. This has the potential to dramatically increase the number of vulnerabilities released into the wild.
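To make that risk concrete, here is a purely hypothetical illustration of the kind of insecure pattern that circulates on forums and could just as easily show up in an AI-generated suggestion: building a SQL query by pasting user input straight into the string. The function and table names are invented for this example.

    import sqlite3

    # A typical copy-pasted (or AI-suggested) "quick fix": the username is
    # concatenated directly into the SQL string, so an attacker can supply
    # input that rewrites the query (classic SQL injection).
    def find_user_insecure(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    # The same lookup done safely: a parameterized query tells the database
    # driver to treat the input as data, not as SQL.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

A model trained on enough examples of the first pattern will happily reproduce it, which is exactly how a single bad habit gets multiplied across thousands of codebases.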
AI as a Force (Multiplier) for Good
So how do you counter the risk created by an increase in AI usage? With more AI? In short, yes. However, we can’t simply turn all of our security tools and tactics over to AI. People creating malware have always had an advantage over cybersecurity teams in that they don’t have to make sure their code plays nice. If an attack doesn’t work on a particular target, the cybercriminals just find a new target. If a piece of malware crashes the application it was targeting but still opens a backdoor into the system, it’s still a win for the attacker. However, if a security tool blocks a benign connection, takes up too much system memory, or flags a false positive, it becomes a headache for the security team, and end users complain about security getting in the way of productivity. With organizations facing a constantly increasing number of attacks while also dealing with a shortage of skilled cybersecurity practitioners, many security teams find themselves overwhelmed.
This is where AI can help reduce risk for an organization.
Anyone who has worked in cybersecurity knows there’s a certain “it factor” that good security analysts have. The ability to spot something that just “seems off” requires a good deal of intuition and isn’t easily trained. Current AI engines don’t make decisions on “hunches”; instead, they rely on the data models they have been trained on. Where AI excels is in processing large amounts of information quickly. This becomes very beneficial to security practitioners when dealing with huge volumes of information, whether it’s data coming into a SIEM/SOAR or code being analyzed for vulnerabilities by application security testing tools. Instead of dealing with a deluge of alerts and manually inspecting each one, security teams can use AI as a force multiplier, letting it take the initial pass through mountains of data and hand analysts a smaller subset of potential security issues to investigate. Instead of looking for a needle in a haystack, AI can simply hand analysts a stack of needles.
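As a rough sketch of what that first pass might look like, the hypothetical Python below trains an anomaly detector on historical telemetry and forwards only the most suspicious of the incoming alerts to an analyst queue. The features, thresholds, and random data are stand-ins chosen for illustration, not any particular product’s implementation.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Stand-in feature vectors extracted from SIEM alerts
    # (e.g., bytes transferred, failed logins, off-hours activity).
    historical_alerts = np.random.rand(5000, 3)  # past, mostly benign activity
    incoming_alerts = np.random.rand(200, 3)     # today's alert batch

    # Learn what "normal" looks like from the historical data.
    model = IsolationForest(contamination=0.02, random_state=42)
    model.fit(historical_alerts)

    # Lower scores mean more anomalous; escalate only the top handful.
    scores = model.score_samples(incoming_alerts)
    top_n = 10
    suspicious = np.argsort(scores)[:top_n]

    print(f"Escalating {top_n} of {len(incoming_alerts)} alerts for review:")
    for i in suspicious:
        print(f"  alert #{i}  anomaly score = {scores[i]:.3f}")

In practice the model would be trained on real telemetry and tuned against analyst feedback, but the shape of the workflow is the same: the machine narrows the haystack so the humans can focus on the needles.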
AI is Just Another Tool
At the end of the day, AI is just another tool, and how well that tool is used is up to the organization. The cybersecurity industry goes through a repeating cycle of innovation and adoption. There will always be early adopters who are quick to embrace change, and there will always be those who are more risk-averse and watch how things play out with the early adopters. We saw this with next-gen firewalls, network sandboxing, EDR, and other tools. AI in the hands of skilled security practitioners is simply the next tool organizations can use to reduce risk.
Bruce Snell is the Cybersecurity Strategist for Qwiet AI, a company on a mission to stop security issues where they start: at the source code.