WHO worried about artificial intelligence
In its statement, WHO underlines that the data used to train artificial intelligence may be biased. Trained on such data, it warns, AI can produce misleading or false information, and the models can be abused to generate disinformation. The UN health agency also said it was “mandatory” to assess the risks of using large language model (LLM) tools such as ChatGPT in order to protect human well-being and public health.
In fact, WHO, the UN’s health agency, is perfectly justified in its concerns. It is well known that current generative AI tools are prone to “hallucination,” that is, to making things up. For this reason, it is entirely possible for these tools to give misleading or outright false information.