![FBI warns of artificial intelligence](https://www.technopixel.org/wp-content/uploads/2023/07/fbi-artificial-intelligence-makes-it-easier-for-hackers-0-CS7axCX9.jpg)
In previous coverage, we reported how even artificial intelligence models like ChatGPT, despite their built-in precautions and restrictions, have been used to develop malware capable of evading even the latest security systems. “We expect these trends to increase as the adoption and diffusion of AI models continues,” a senior FBI official said.
That said, the main focus of the FBI and other law enforcement agencies is on open-source models, not walled-off tools like ChatGPT. The most recent example of an open-source release is Meta’s Llama 2. Such base models can be taken and fine-tuned on a dataset of one’s choosing to suit a given purpose, as the sketch below illustrates. Again, as we reported previously, the malicious WormGPT was developed in exactly this way and sold through subscription-based memberships. With such tools, attackers can run more convincing email fraud campaigns or develop software that is difficult to detect.
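To make the fine-tuning step concrete, here is a minimal sketch of how a base model can be adapted to a custom dataset with the widely used Hugging Face transformers library. The model ID, the dataset file `custom_corpus.txt`, and the hyperparameters are illustrative assumptions for this example, not details from the FBI’s statements.

```python
# Minimal fine-tuning sketch using the Hugging Face transformers stack.
# Assumptions (not from the article): the dataset file and all
# hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # gated: requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical training corpus: one plain-text example per line.
dataset = load_dataset("text", data_files={"train": "custom_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Causal-LM collator: pads each batch and copies input_ids into labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-custom",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # fine-tuned weights are written to output_dir
```

The same few lines work for any purpose: the behavior of the resulting model is determined almost entirely by what goes into the training corpus, which is precisely why open-weights releases worry law enforcement more than gated services.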
The FBI also states that deepfakes (AI-generated content depicting events that never actually happened) are on a rapid upward trend, raising security concerns. A number of AI giants, including OpenAI, Microsoft, Google, and Meta, recently pledged at the White House to develop some form of watermarking technology for AI-generated content, a move that stands to benefit them as well.
Regardless of the promises made or the open-source models released, people need to be made aware of artificial intelligence. HardwareNews readers may be able to tell whether a video or image is fake, but the same cannot be expected of the general population. Verifying information is becoming more important by the day, and it should become a habit.