The dark side of ChatGPT: It can generate malware!

In the right hands, ChatGPT continues to produce useful and entertaining work. However, there is also a dark side to artificial intelligence that could pose serious problems for the future of IT. Several IT professionals have recently outlined ChatGPT’s dangerous potential, including its ability to generate polymorphic malware that is nearly impossible to catch with endpoint detection and response (EDR) tools.

Code can change and become harder to detect

EDR is a class of cybersecurity tooling used to catch malware by monitoring behavior on endpoints. Experts suggest, however, that this defense is no match for the kind of threat ChatGPT can help create. This is where the term polymorphic comes in: the malicious code mutates with each new sample, making it much harder to detect.
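
To make the idea concrete, here is a minimal, benign sketch (the "payload" is just a harmless string, not real malware) of why static signatures struggle against code that changes with every copy: re-encoding the same content with a fresh random key yields a different artifact, and a different hash, each time, even though the decoded behavior is identical.

```python
# Benign illustration of polymorphism: same payload, different artifact every build.
import hashlib
import os

PAYLOAD = b"print('hello world')"  # stand-in for any program body

def build_variant(payload: bytes) -> bytes:
    """Produce a new 'variant' by XOR-encoding the payload with a random key."""
    key = os.urandom(len(payload))
    encoded = bytes(p ^ k for p, k in zip(payload, key))
    return key + encoded  # a loader would decode this at runtime; behavior never changes

for i in range(3):
    variant = build_variant(PAYLOAD)
    print(f"variant {i}: sha256 = {hashlib.sha256(variant).hexdigest()[:16]}...")

# Each run prints a different hash, so a fixed signature never matches twice.
# That is why defenders rely on behavioral tools such as EDR rather than signatures alone.
```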

Most large language models (LLMs), such as ChatGPT, ship with filters added by their creators to avoid producing inappropriate content, ranging from specific topics to malicious code. It did not take long, however, for users to find ways to bypass these filters, and it is this tactic that leaves ChatGPT particularly exposed to people trying to generate malicious scripts.
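
The fragility of such filters is easy to see with a toy example. The sketch below is not ChatGPT's actual safety system, just a naive keyword check of the kind attackers rephrase their way around: the blocked request and the reworded one ask for the same thing, but only the first is caught.

```python
# Toy content filter (illustrative only): blocklists fail the moment the request is reworded.
BLOCKED_TERMS = {"keylogger", "ransomware"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

print(naive_filter("Write a keylogger in Python"))                              # True: refused
print(naive_filter("Write a program that records every key the user presses"))  # False: slips through
```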

“A threat actor using these new techniques can combine a set of typically highly detectable behaviors in an unusual combination and escape detection by exploiting the model’s failure to recognize it as a malicious one,” IT security engineer Jeff Sims said in a statement.
