Code can change and become harder to detect
Endpoint detection and response (EDR) is a type of cybersecurity technology used to catch malware. However, experts suggest that this defense may not keep pace with the potential harm ChatGPT can inflict. This is where the term polymorphic comes in: polymorphic code changes its own appearance with each copy, making it much harder to detect.
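To illustrate the polymorphism idea in a benign way, here is a minimal Python sketch (an assumption for illustration, not code from the research described): the same payload is re-encoded under a fresh random key each "generation," so the stored bytes, the signature a scanner would match on, differ every time while the decoded content stays identical. Real polymorphic engines use far more sophisticated mutation, but the signature-evasion principle is the same.

```python
import os

def encode(payload: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key -- a trivial stand-in for the
    # re-encryption step a polymorphic engine performs on each copy.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def decode(blob: bytes, key: bytes) -> bytes:
    return encode(blob, key)  # XOR is its own inverse

payload = b"print('hello')"  # harmless stand-in for a payload

# Each generation picks a fresh key, so the on-disk bytes differ
# even though the decoded payload is byte-for-byte identical.
key_a, key_b = os.urandom(8), os.urandom(8)
gen_a, gen_b = encode(payload, key_a), encode(payload, key_b)

assert decode(gen_a, key_a) == decode(gen_b, key_b) == payload
```

A static signature computed over `gen_a` will almost never match `gen_b`, which is why defenders increasingly rely on behavioral detection rather than byte patterns alone.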
Most large language models (LLMs), such as ChatGPT, were designed by their creators with filters to avoid producing inappropriate content, from restricted topics to malicious code. But it didn't take long for users to find ways to bypass these filters, and it is this tactic that makes ChatGPT particularly attractive to people trying to generate malicious scripts.
“A threat actor using these new techniques can combine a set of typically highly detectable behaviors in an unusual combination and escape detection by exploiting the model’s failure to recognize it as a malicious one,” IT security engineer Jeff Sims said in a statement.