On the surface, ChatGPT may seem like a useful tool for a range of tasks, but before you ask it to summarize key notes or check your work for errors, it's worth remembering that anything you share can be used to train the system and may even resurface in responses to other users. At least some Samsung employees have learned this lesson the hard way.
According to The Economist Korea, engineers in Samsung's semiconductor division had been using ChatGPT for some time. In the process, employees reportedly leaked confidential information to ChatGPT on at least three occasions: one employee asked ChatGPT to check sensitive database source code for errors, another requested code optimization, and a third reportedly fed it a recording of an internal meeting and asked it to generate meeting notes.
Reports suggest that after Samsung learned of these security lapses, it tried to limit the scope of any future leaks by capping the length of employees' ChatGPT prompts at 1 kilobyte, or 1,024 characters of text. The company has also reportedly opened an investigation into the three employees involved and plans to build its own in-house chatbot to prevent similar mishaps.
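A cap like that is straightforward to enforce before a prompt ever leaves the company network. As a minimal sketch only (the function name and the byte-based check are illustrative assumptions, not details from the report), a gateway sitting between employees and ChatGPT could reject over-long prompts like this:

```python
# Hypothetical gateway check mirroring the reported 1 KB prompt cap.
# Whether Samsung measures bytes or characters is not specified in the
# report; this sketch assumes a byte limit on the UTF-8 encoding.
MAX_PROMPT_BYTES = 1024


def check_prompt(prompt: str) -> str:
    """Return the prompt unchanged if it fits the cap, else raise."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; the limit is {MAX_PROMPT_BYTES} bytes."
        )
    return prompt


if __name__ == "__main__":
    check_prompt("Summarize this meeting agenda.")  # passes
    check_prompt("x" * 2000)  # raises ValueError
```

Of course, a length limit only shrinks how much can leak per request; it does nothing to stop an employee from pasting confidential material in smaller pieces, which is presumably why Samsung is also pursuing an in-house chatbot.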