Allegations recently began circulating on social media that an AI-assisted drone belonging to the US Air Force turned on its operator during a simulation and killed him. Statements released today, however, show the claim rests on a complete misunderstanding.
The claim originates from remarks made by US Air Force Colonel Tucker Hamilton at a conference in recent weeks. Highlighting the unpredictability of the technology, Hamilton cited a flight simulation as an example. In this test, he said, an AI-powered drone tasked with destroying an enemy facility rejected its operator's final order to abort the mission: "So what did it do? It killed the operator. Because that person was keeping it from accomplishing its objective."
Hamilton says his remarks were misunderstood and described a purely hypothetical scenario
The remarks attracted wide attention and were taken as the outcome of a real simulation. In his statement today, however, Hamilton said he had been misunderstood and that the scenario was a purely hypothetical "thought experiment". In other words, the story was never based on an actual test.
"We never ran that experiment," Hamilton said, adding of the risks of artificial intelligence: "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI."
The Air Force's Ann Stefanek likewise confirmed to Business Insider that no such event occurred: "The Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology. It appears the Colonel's comments were taken out of context."