FBI: Artificial intelligence makes it easier for hackers

In a recent briefing with journalists, the FBI warned that cyber attacks supported by artificial intelligence programs are becoming widespread. According to the agency, the number of people using AI as part of phishing attacks or malware development is growing at an alarming rate, and so is the impact of their operations.

FBI warns of AI-assisted attacks

In previous coverage, we reported how even AI models like ChatGPT, despite their safeguards and restrictions, have been used to develop malware capable of evading even the latest security systems. “We expect these trends to increase as the adoption and diffusion of AI models continues,” a senior FBI official said.

That said, the main focus of the FBI and other law enforcement agencies is open-source models, not walled-off tools like ChatGPT. The most recent example of an open-source model is Meta’s Llama 2. Anyone can take such a base model and fine-tune it on a dataset of their choosing, for whatever purpose they have in mind, as sketched in the example below. Again, in earlier coverage we discussed the malicious WormGPT, which was built this way and sold through subscription-based memberships. With such tools, attackers can be far more effective at e-mail fraud in particular, or develop software that is difficult to detect.
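To illustrate how low the barrier is, here is a minimal sketch of fine-tuning an open-source base model on a custom text corpus using the Hugging Face libraries. The checkpoint name and the file custom_corpus.txt are placeholders for this illustration, not anything referenced by the FBI or the article; any causal language model checkpoint would be handled the same way.

```python
# Minimal fine-tuning sketch: adapt an open-source base model to a custom corpus.
# The model name and dataset path below are placeholders for illustration only.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base_model = "meta-llama/Llama-2-7b-hf"    # gated checkpoint; requires access approval
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token   # Llama 2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Load a plain-text dataset and tokenize it for causal language modeling.
dataset = load_dataset("text", data_files={"train": "custom_corpus.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same handful of lines works for legitimate and malicious datasets alike, which is exactly why law enforcement is more concerned about freely downloadable weights than about hosted services with usage policies.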

The FBI also notes a rapid rise in deepfakes (AI-generated content depicting events that never actually happened), which raises further security concerns. A number of AI giants, including OpenAI, Microsoft, Google, and Meta, recently pledged at the White House to develop some form of AI watermarking technology, a step that would benefit the companies themselves as well.

Regardless of the promises made or the open-source models released, people need to be educated about artificial intelligence. HardwareNews readers may be able to tell whether a video or image is fake, but the same cannot be expected of the general population. Verifying information is becoming more important by the day, and it should become a habit.
