03-02-2024, 05:34 PM
Quote: Large language models (LLMs) acting as diplomatic agents in simulated scenarios showed "hard-to-predict escalations which often ended in nuclear attacks."
When used in simulated wargames and diplomatic scenarios, artificial intelligence (AI) tended to choose an aggressive approach, including using nuclear weapons, a new study shows.
The scientists who conducted the tests urged caution when using large language models (LLMs) in sensitive areas like decision-making and defence.
AI, which is clearly not actual AI yet, mimics what is fed into it. If it's true that, when tasked with discussing war scenarios, it sees nuking the enemy as the answer, that's because of the data being fed into it. Perhaps we should all take that a bit more seriously; it seems to me it may be a sign we are dangerously close to the edge of a nuclear precipice.
"Whoever would overthrow the liberty of a nation must begin by subduing the freeness of speech."
- Benjamin Franklin -