British intelligence agency warns of AI chatbots

The UK’s National Cyber Security Centre (NCSC), part of Government Communications Headquarters (GCHQ) and established to protect the British state against cyber attacks and threats, has warned people about artificial intelligence chatbots.

The security risks of AI-powered chatbots are a growing concern as their use spreads globally. The UK’s National Cyber Security Centre wants to draw attention to the issue by highlighting the potential risks of chatbots like ChatGPT. In a post published by the NCSC, the agency states that although large language models (LLMs) are impressive, they are not magic tools: they have not reached the level of artificial general intelligence and they have serious flaws.

British intelligence warns of artificial intelligence

The NCSC emphasizes that AI tools and language models such as ChatGPT and GPT-4 can be misleading: they produce false information, exhibit bias, and tend to generate toxic content. The possibility that sensitive user queries could be seen by providers like OpenAI is considered a major issue. Sensitive questions, for example about emerging health problems, could be used to train future chatbot releases. Another scenario involves queries that reveal confidential business matters, such as a CEO asking how to fire an employee. Every query entered into chat tools such as ChatGPT or Bing is ultimately sent to OpenAI’s servers as data, so the NCSC advises paying attention to what information is shared.

Because of these concerns, companies like Amazon and JPMorgan have asked their employees not to use ChatGPT, precisely because sensitive information could be leaked. The NCSC also notes that LLMs can give cybercriminals the ability to write malware. This is supported by reports from security researchers in January that ChatGPT was being used both as an “educational” tool and as a malware generation platform. In our previous coverage, we mentioned that the new GPT-4 can create a working website from a hand-drawn sketch. Capabilities like this can be used for good or for harm.

AI chatbots could increase cybercrime activity

In conclusion, the British intelligence agency points out that care should be taken when using AI-supported chatbots, especially with prompts that could expose sensitive information. It also warns that AI-powered bots could further fuel cybercriminal activity. Data security is another contentious issue. Although OpenAI was founded as a non-profit organization, it offers ChatGPT Plus for a fee. And although Microsoft has not bought OpenAI, it has invested billions of dollars in the company and uses its technologies in its own services. For these reasons, the question of who controls the data gathered from ChatGPT, which is used by hundreds of millions of people every day, becomes even more important.
