How LLMs could help defenders write better and faster detection

Most users will associate large language models (LLMs) like ChatGPT with answering basic questions or helping to write basic lines of text.  

But could these tools actually help defenders in the cybersecurity industry write more effective detection content?  

Several security researchers from across Cisco recently looked into how LLMs, which have surged in popularity over the past year, could assist them in the detection research process. 

Part of their job is to emulate the behavior of a typical adversary, performing test actions designed to trigger existing detection rules and verify their effectiveness, all in the name of updating that detection content to catch the latest tactics, techniques and procedures (TTPs). 
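To illustrate the kind of workflow being described, here is a minimal, hypothetical sketch of detection-rule testing: a simple rule that flags encoded PowerShell commands (a common adversary technique), checked against sample command lines that an emulation harness might replay. The rule, the sample events, and the harness logic are all illustrative assumptions, not the researchers' actual tooling.

```python
import re

# Hypothetical detection rule: flag PowerShell invocations that use an
# encoded command, a technique often seen in adversary tradecraft.
ENCODED_PS = re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s", re.IGNORECASE)

def rule_fires(command_line: str) -> bool:
    """Return True if the detection rule matches the observed command line."""
    return ENCODED_PS.search(command_line) is not None

# Emulated adversary behavior: sample command lines a test harness might
# replay to confirm the rule still triggers (and stays quiet on benign use).
test_events = [
    ("powershell.exe -EncodedCommand SQBFAFgA...", True),   # malicious pattern
    ("powershell.exe -Command Get-Process", False),         # benign admin use
]

for cmd, expected in test_events:
    assert rule_fires(cmd) == expected, f"unexpected result for: {cmd}"

print("detection rule behaved as expected on all emulated events")
```

In practice this is where LLMs could plug in: generating candidate adversarial command lines to replay, or suggesting rule variations to close coverage gaps, with a human researcher validating the results.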

LLMs may be able to assist in this complex, time-consuming task, as Darin Smith, Yuvi Meiyappan, Moazzam Khan and Ray McCormick write in this paper, which you can download below. 

Download: "Effectiveness of LLMs for Detection Research" (LLM-Detection-Whitepaper.pdf, 835 KB)

Khan, a security researcher for Cisco, will be presenting the findings of this paper at the upcoming BSides Portland conference. 
