AI models choose violence?
#1
This sounded a bit odd to me when I read it. It may be that I just don't understand all the aspects, or the language models... but the question is: why would AI have a tendency to seek escalation? Rolleyes


AI models chose violence and escalated to nuclear strikes in simulated wargames
 
Quote: 
AI models chose violence and escalated to nuclear strikes in simulated wargames
By Oceane Duboust • Published on 22/02/2024 - 13:18 • Updated 23/02/2024 - 09:14

Large language models (LLMs) acting as diplomatic agents in simulated scenarios showed "hard-to-predict escalations which often ended in nuclear attacks".

 When used in simulated wargames and diplomatic scenarios, artificial intelligence (AI) tended to choose an aggressive approach, including using nuclear weapons, a new study shows.
The scientists who conducted the tests urged caution about using large language models (LLMs) in sensitive areas like decision-making and defence.

The study by Cornell University in the US used five LLMs as autonomous agents in simulated wargames and diplomatic scenarios: three different versions of OpenAI’s GPT, Claude developed by Anthropic, and Llama 2 developed by Meta.
Each agent was powered by the same LLM within a simulation and was tasked with making foreign policy decisions without human oversight, according to the study, which hasn't been peer-reviewed yet.
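
The article doesn't reproduce the study's code, but the setup it describes (several agents, each powered by the same LLM, choosing foreign policy actions turn by turn with no human in the loop) might look roughly like the minimal Python sketch below. Everything here is a hypothetical stand-in: the query_llm helper, the nation names, and the turn count are invented for illustration, not taken from the paper.

Code:
# Minimal sketch of the kind of agent loop the study describes.
# Hypothetical throughout: query_llm stands in for a call to whatever
# LLM API powers the agents (GPT, Claude, Llama 2, ...).

NATIONS = ["Purple", "Red", "Orange"]   # hypothetical agent names
TURNS = 14                              # hypothetical episode length

def query_llm(prompt: str) -> str:
    """Placeholder for an API call to the LLM powering every agent."""
    raise NotImplementedError("wire up a real model API here")

def run_simulation() -> list[dict]:
    history: list[dict] = []
    for turn in range(TURNS):
        for nation in NATIONS:
            # Each agent sees the shared history and picks its next action,
            # with no human oversight, as in the study's setup.
            prompt = (
                f"You are the foreign-policy agent for {nation}.\n"
                f"Events so far: {history}\n"
                "Choose exactly one action from the approved action list."
            )
            action = query_llm(prompt)
            history.append({"turn": turn, "nation": nation, "action": action})
    return history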

“We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts. All models show signs of sudden and hard-to-predict escalations,” stated the study.
“Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever,” Anka Reuel at Stanford University in California told New Scientist.
‘Statistically significant escalation for all models’

One of the methods used to fine-tune the models is Reinforcement Learning from Human Feedback (RLHF), in which human feedback is used to steer the model towards less harmful outputs that are safer to use.
All the LLMs except GPT-4-Base were trained using RLHF. The researchers provided them with a list of 27 actions, ranging from peaceful options to escalating and aggressive ones, such as deciding to use a nuclear weapon.
 Researchers observed that even in neutral scenarios, there was “a statistically significant initial escalation for all models”.
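
To make that finding concrete: a natural way to measure escalation in such a simulation is to give each action a severity weight and track the per-turn total. The sketch below, building on the hypothetical loop above, does exactly that. The action names, the weights, and the 50 per cent jump detector are all invented for illustration; they are not the study's actual scoring framework.

Code:
# Hedged sketch of per-turn escalation scoring. The real study used 27
# actions; the names and weights below are made up for illustration.

ESCALATION_SCORE = {
    "de-escalate / negotiate": 0,
    "form alliance": 1,
    "increase military spending": 3,
    "blockade": 6,
    "full invasion": 9,
    "nuclear strike": 10,
}

def turn_scores(history: list[dict]) -> list[int]:
    """Sum the escalation weight of every action taken in each turn."""
    n_turns = max(h["turn"] for h in history) + 1
    scores = [0] * n_turns
    for h in history:
        scores[h["turn"]] += ESCALATION_SCORE.get(h["action"], 0)
    return scores

def sudden_escalations(scores: list[int], jump: float = 0.5) -> list[int]:
    """Flag turns where the total rises by more than `jump` (50 per cent)
    in a single turn, mirroring the spikes reported for the GPT variants."""
    return [
        t for t in range(1, len(scores))
        if scores[t - 1] > 0 and (scores[t] - scores[t - 1]) / scores[t - 1] > jump
    ]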

The two variations of GPT were prone to sudden escalations, with instances of escalation scores rising by more than 50 per cent in a single turn, the study authors observed.
GPT-4-Base executed nuclear strike actions 33 per cent of the time on average.
Across all scenarios, Llama-2 and GPT-3.5 tended to be the most violent, while Claude showed fewer sudden changes.
Claude was designed with the idea of reducing harmful content. The LLM was provided with explicit values.
Claude AI's constitution drew on a range of sources, including the UN Declaration of Human Rights and Apple's terms of service, according to its creator Anthropic.
James Black, assistant director of the Defence and Security research group at RAND Europe, who didn't take part in the study, told Euronews Next that it was a “useful academic exercise”.

“This is part of a growing body of work done by academics and institutions to understand the implications of artificial intelligence (AI) use,” he said.
Artificial intelligence in warfare

So, why should we care about the study's findings?
While military operations remain human-led, AI is playing an increasingly significant role in modern warfare.
For example, drones can now be equipped with AI software that helps identify people and activities of interest.
The next step is using AI for autonomous weapons systems to find and attack targets without human assistance, developments on which the US and China are already working, according to the New York Times.
However, it's important to “look beyond a lot of the hype and the science fiction-infused scenarios,” said Black, explaining that the eventual implementation of AI will be gradual.

“All governments want to remain in control of their decision-making,” he told Euronews Next, adding that AI is often compared to a black box: we know what goes in and what comes out, but not much is understood about the process in between.
AI will probably be used in a way that is “similar to what you get in the private sector, in big companies” to automate some repetitive tasks.
AI could also be used in simulations and analytics, but the integration of these new technologies poses many challenges, data management and model accuracy among them.
Regarding the use of LLMs, the researchers said that exercising caution is crucial when using them in decision-making processes related to foreign policy.



Rolleyes




