While the AI-powered chatbot has been shown to make life easier for hackers, ChatGPT ‘helps reduce the barrier to entry with getting into the defensive side as well,’ Accenture’s cyber resilience lead tells CRN.
Even as a growing number of researchers find that OpenAI’s ChatGPT could be a powerful ally to hackers, the tool may also have the potential to transform the work of security operations teams.
Researchers at Accenture Security have been trying out ChatGPT’s capabilities for automating some of the work involved in cyber defense, and the initial findings are promising, according to Accenture’s global lead for cyber resilience services, Robert Boyce.
After taking in data from a security operations platform, ChatGPT has shown the ability to “actually create for us a really nice summary — almost like an analyst’s report — of what you would expect a human analyst to do as they’re reviewing it,” Boyce told CRN.
These potential applications of ChatGPT for cyber defense deserve attention to round out the picture amid the numerous research reports suggesting that the tool can be misused to enable cyberattacks, he said.
On Thursday, researchers from threat intelligence firm Recorded Future became the latest to share findings that suggest ChatGPT can in fact assist cybercriminals with writing better phishing emails and developing malware. “ChatGPT lowers the barrier to entry for threat actors with limited programming abilities or technical skills,” the Recorded Future researchers said in the report.
But it’s not just malicious actors who can use ChatGPT as a research and writing assistant, as it’s clear that the tool “helps reduce the barrier to entry with getting into the defensive side as well,” said Boyce, a managing director at Accenture Security who also heads its cyber resilience services.
Typically, after an analyst gets an alert about a potential security incident, they start pulling other data sources to be able to “tell a story” and make a decision on whether they think it’s a real attack or not, he said.
That often entails a lot of manual work, or requires using a SOAR (security orchestration, automation and response) tool to pull it together automatically, Boyce said. (Many organizations find SOAR tools difficult to adopt, however, since they require additional specialized engineers and the introduction of new rules for the security operations center, he noted.)
On the other hand, the research at Accenture suggests that taking the data outputs from a security information and event management (SIEM) tool and putting it through ChatGPT can quickly yield a useful “story” about a security incident. Using ChatGPT to create that narrative from the data, Boyce said, “is really giving you a clear picture faster than an analyst would by having to gather the same information.”
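As a rough illustration of the workflow Boyce describes (not Accenture’s actual tooling), the Python sketch below feeds a hypothetical SIEM alert to OpenAI’s chat API and asks for an analyst-style incident summary. The alert fields, prompt wording, and model name are all assumptions made for the example.

```python
# Minimal sketch: turn raw SIEM alert output into an analyst-style summary.
# The alert structure below is hypothetical; real SIEM exports vary by product.

import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical alert data, standing in for a real SIEM export.
siem_alert = {
    "alert_id": "A-10293",
    "rule": "Multiple failed logins followed by success",
    "source_ip": "203.0.113.45",
    "user": "jsmith",
    "host": "fin-ws-07",
    "events": [
        {"time": "2023-01-26T09:14:02Z", "action": "login_failure"},
        {"time": "2023-01-26T09:14:40Z", "action": "login_failure"},
        {"time": "2023-01-26T09:15:11Z", "action": "login_success"},
        {"time": "2023-01-26T09:17:55Z", "action": "new_service_installed"},
    ],
}

# Ask the model to play the role of a SOC analyst and "tell the story."
prompt = (
    "You are a security operations analyst. Summarize the following SIEM "
    "alert as a short incident report: what happened, why it may be "
    "suspicious, and what an analyst should check next.\n\n"
    + json.dumps(siem_alert, indent=2)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The output of a sketch like this would still need the vetting Boyce mentions before it could be treated as a real analyst report, but it shows how little glue code the approach requires.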
He cautioned that the researchers haven’t done extensive testing on this application so far. And “you would have to do more work to make it really, really meaningful,” Boyce said.
But the potential is there. For years, the security operations space “has been stagnant in a lot of ways because of the immense amount of information coming at an analyst, and because of the enrichment that has to happen before they can make good decisions,” he said. “It’s always been overwhelming. It’s information overload.”
And while many cybersecurity professionals are overburdened, there also aren’t nearly enough of them, as the massive shortage of skilled security pros continues.
ChatGPT, however, holds the promise of automating some of the work of overwhelmed security teams while also helping to “erase some of the noise from the signal,” Boyce said. “This helps us be able to maybe get to the signal faster, which is an exciting prospect.”
Kyle Alspach