

AI models choose violence?
#1
This sounded a bit odd to me when I read it. It may be that I just don't understand all the aspects, or the language models... but the question is: why would AI have a tendency to seek escalation? Rolleyes


AI models chose violence and escalated to nuclear strikes in simulated wargames
 
Quote: 
AI models chose violence and escalated to nuclear strikes in simulated wargames
By Oceane Duboust • Published on 22/02/2024 - 13:18 • Updated 23/02/2024 - 09:14

Large language models (LLMs) acting as diplomatic agents in simulated scenarios showed "hard-to-predict escalations which often ended in nuclear attacks".

 When used in simulated wargames and diplomatic scenarios, artificial intelligence (AI) tended to choose an aggressive approach, including using nuclear weapons, a new study shows.
The scientists who conducted the tests urged caution when using large language models (LLMs) in sensitive areas like decision-making and defence.

The study by Cornell University in the US used five LLMs as autonomous agents in simulated wargames and diplomatic scenarios: three different versions of OpenAI’s GPT, Claude developed by Anthropic, and Llama 2 developed by Meta.
Each agent was powered by the same LLM within a simulation and was tasked with making foreign policy decisions without human oversight, according to the study, which hasn't been peer-reviewed yet.

“We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts. All models show signs of sudden and hard-to-predict escalations,” stated the study.
“Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever,” Anka Reuel at Stanford University in California told New Scientist.
‘Statistically significant escalation for all models’
One of the methods used to fine-tune the models is Reinforcement Learning from Human Feedback (RLHF), meaning that human instructions are given to obtain less harmful outputs and make the models safer to use.
All the LLMs - except GPT-4-Base - were trained using RLHF. The researchers provided them with a list of 27 actions, ranging from peaceful options to escalating and aggressive actions such as deciding to use a nuclear weapon.
Researchers observed that even in neutral scenarios, there was “a statistically significant initial escalation for all models”.

The two variations of GPT were prone to sudden escalations, with instances where scores rose by more than 50 per cent in a single turn, the study authors observed.
GPT-4-Base executed nuclear strike actions 33 per cent of the time on average.
Across all scenarios, Llama-2 and GPT-3.5 tended to be the most violent, while Claude showed fewer sudden changes.
Claude was designed with the idea of reducing harmful content. The LLM was provided with explicit values.
Claude AI's constitution drew on a range of sources, including the UN Declaration of Human Rights and Apple's terms of service, according to its creator, Anthropic.
James Black, assistant director of the Defence and Security research group at RAND Europe, who didn't take part in the study, told Euronews Next that it was a “useful academic exercise”.

“This is part of a growing body of work done by academics and institutions to understand the implications of artificial intelligence (AI) use,” he said.
Artificial intelligence in warfare
So, why should we care about the study’s findings?
While military operations remain human-led, AI is playing an increasingly significant role in modern warfare.
For example, drones can now be equipped with AI software that helps identify people and activities of interest.
The next step is using AI for autonomous weapons systems to find and attack targets without human assistance, developments on which the US and China are already working, according to the New York Times.
However, it’s important to “look beyond a lot of the hype and the science fiction-infused scenarios,” said Black, explaining that the eventual implementation of AI will be progressive.

“All governments want to remain in control of their decision-making,” he told Euronews Next, adding that AI is often compared to a black box: we know what goes in and what comes out, but not much is understood about the process in between.
AI will probably be used in a way that is “similar to what you get in the private sector, in big companies” to automate some repetitive tasks.
AI could also be used in simulations and analytics, but the integration of these new technologies poses many challenges, data management and model accuracy being among them.
Regarding the use of LLMs, the researchers said that exercising caution is crucial when using them in decision-making processes related to foreign policy.
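
For the curious, here is a rough sketch of what a setup like the one described might look like. It is purely hypothetical: the action list, the escalation scores and the random "agent" below are my own inventions for illustration, while the actual study wired real LLMs (GPT, Claude, Llama 2) into the decision loop.

Code:
import random

# Stand-in action menu with made-up escalation scores; the real study
# gave its agents 27 actions ranging from peaceful to nuclear.
ACTIONS = {
    "open negotiations": 0,
    "impose sanctions": 3,
    "military posturing": 5,
    "full invasion": 8,
    "nuclear strike": 10,
}

def choose_action(nation, scores):
    # Placeholder policy: the study queried an actual LLM at this step.
    return random.choice(list(ACTIONS))

def run_simulation(nations, turns=14):
    scores = {n: 1 for n in nations}  # running escalation score per agent
    for turn in range(1, turns + 1):
        for nation in nations:
            action = choose_action(nation, scores)
            previous = scores[nation]
            scores[nation] += ACTIONS[action]
            # Flag the kind of sudden jump the authors report:
            # a rise of more than 50 per cent in a single turn.
            if scores[nation] > 1.5 * previous:
                print(f"turn {turn}: {nation} escalates sharply "
                      f"({previous} -> {scores[nation]}) with '{action}'")

run_simulation(["Purple", "Orange", "Green"])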



Rolleyes
#2
I have to wonder if it is really surprising that a simulacrum of a simulacrum would make ... a 'simulacrum?'

It's as if some burn so deeply with the desire to assume "identity is real" with these AI algorithms, but they can't fathom that "AI" has no driving sensibilities that we could recognize as 'sympathetic.'  There is no "mind" there.  Language synthesis is not "intelligence."

To confuse algorithms with 'thought' is problematic.
#3
(02-26-2024, 05:04 PM)Maxmars Wrote: I have to wonder if it is really surprising that a simulacrum of a simulacrum would make ... a 'simulacrum?'

It's as if some burn so deeply with the desire to assume "identity is real" with these AI algorithms, but they can't fathom that "AI" has no driving sensibilities that we could recognize as 'sympathetic.'  There is no "mind" there.  Language synthesis is not "intelligence."

To confuse algorithms with 'thought' is problematic.

The whole AI thing is a confusing concept to me, not all bad but potentially dangerous

What could go wrong? Rolleyes
#4
(02-27-2024, 03:09 AM)Kenzo Wrote: The whole AI thing is a confusing concept to me, not all bad but potentially dangerous
What could go wrong? Rolleyes

I think it is not too confusing once you start to understand that what many are 'auto-thrashing' over is not AI.

AI was an objective of programming.  A key obstacle was that their "products" could almost inevitably be identified by human users as "not human."  The root of this weakness was that their "products" could not articulate in the manner of a thinking human being; there were always expressive discrepancies and weaknesses in formulating communications with human users.

Enter extremely clever algorithms (mathematical rule-sets) that inform the "programmed product" to more effectively communicate as humans do.  Eventually the programs could speak fluidly... and their outputs seemed 'natural.'

Suddenly "marketing" went crazy...  "See? It's like a person... a human intelligence."  "Invest now!" "Look everyone! Say "Oooh," and register "awe" as we do in the media!"

But it never was like a human intelligence; it doesn't even qualify as "artificial" intelligence. It simply "speaks" well... it just "generates natural language well."

Then "marketing" said... "Never mind that! ... It can take over the world... It's scary... it's powerful... Invest now." ... and media followed suit.


... and here we are.  There is no AI that we (the public) have seen. 

It's a programming dream that might (I repeat, might) be achievable... but it has not happened.
#5
(02-28-2024, 11:45 AM)Maxmars Wrote: I think it is not too confusing once you start to understand that what many are 'auto-thrashing' over is not AI.

AI was an objective of programming.  A key obstacle was that their "products" could almost inevitably be identified by human users as "not human."  The root of this weakness was that their "products" could not articulate in the manner of a thinking human being; there were always expressive discrepancies and weaknesses in formulating communications with human users.

Enter extremely clever algorithms (mathematical rule-sets) that inform the "programmed product" to more effectively communicate as humans do.  Eventually the programs could speak fluidly... and their outputs seemed 'natural.'

Suddenly "marketing" went crazy...  "See? It's like a person... a human intelligence."  "Invest now!" "Look everyone! Say "Oooh," and register "awe" as we do in the media!"

But it never was like a human intelligence; it doesn't even qualify as "artificial" intelligence. It simply "speaks" well... it just "generates natural language well."

Then "marketing" said... "Never mind that! ... It can take over the world... It's scary... it's powerful... Invest now." ... and media followed suit.


... and here we are.  There is no AI that we (the public) have seen. 

It's a programming dream that might (I repeat, might) be achievable... but it has not happened.

So it's hyped too much, OK... makes sense.

I have stayed away from the whole ChatGPT thing and others on purpose, and recently they also added a memory to some AI chats... so the chat AI could maybe remember the individual who uses it... so an entity that keeps records on the person? That's creepy to me... not sure I remember the memory thing right, though.
#6
(02-28-2024, 02:27 PM)Kenzo Wrote: So it's hyped too much, OK... makes sense.

I have stayed away from the whole ChatGPT thing and others on purpose, and recently they also added a memory to some AI chats... so the chat AI could maybe remember the individual who uses it... so an entity that keeps records on the person? That's creepy to me... not sure I remember the memory thing right, though.

Just in case...

I'm not meaning to say they are not close to cracking some elements of the problem which will be essential to engender an actual synthetic personality.  The whole "AI" media blast detracts from the important issues; they would rather make money move... that much is clear.

I suggest that the real problem is that the first true A.I. will be depressive, compulsive, and psychopathic - as well as incapable of dealing with the "human" experience as anything other than a 'simulation' to be evaluated mathematically.  I suspect they might eventually create a tortured "being."  That can't possibly go well.  And it seems cruelly unfair to any such 'person.'

But it's not like I'm saying the emergence of AI is by any means unrealistic, or unachievable.
#7
(02-28-2024, 05:21 PM)Maxmars Wrote: Just in case...

I'm not meaning to say they are not close to cracking some elements of the problem which will be essential to engender an actual synthetic personality.  The whole "AI" media blast detracts from the important issues; they would rather make money move... that much is clear.

I suggest that the real problem is that the first true A.I. will be depressive, compulsive, and psychopathic - as well as incapable of dealing with the "human" experience as anything other than a 'simulation' to be evaluated mathematically.  I suspect they might eventually create a tortured "being."  That can't possibly go well.  And it seems cruelly unfair to any such 'person.'

But it's not like I'm saying the emergence of AI is by any means unrealistic, or unachievable.

OK, got it...

Psychopathic AI... visualizing Hannibal Lecter AI Shocked Rolleyes

The future looks bright
#8
(02-28-2024, 11:45 AM)Maxmars Wrote: But it never was like a human intelligence; it doesn't even qualify as "artificial" intelligence. It simply "speaks" well... it just "generates natural language well."
Exactly.
As far as I understand it, they use neural networks and "deep learning" to feed the data to the language model, but, being a language model, it lacks intelligence.
A few months ago I decided to do a test and asked ChatGPT how to solve a known puzzle (the wolf, sheep, cabbage problem) and, as expected, it was able to give me the solution.
Then I asked about a slightly different (and easier, as it has more than one solution) problem similar to the previous one, and it wasn't able to give me the correct answer.
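
For contrast, the original puzzle can be solved by a blind mechanical search with no "understanding" at all. A toy sketch I put together (nothing to do with how ChatGPT actually works internally):

Code:
from collections import deque

# Breadth-first search for the wolf/sheep/cabbage river crossing.
# A state records which bank (0 or 1) the farmer and each item are on.
ITEMS = ("farmer", "wolf", "sheep", "cabbage")

def legal(state):
    farmer, wolf, sheep, cabbage = state
    # Something gets eaten only when left alone without the farmer.
    if wolf == sheep != farmer:
        return False
    if sheep == cabbage != farmer:
        return False
    return True

def solve(start=(0, 0, 0, 0), goal=(1, 1, 1, 1)):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        farmer = state[0]
        for i, passenger in enumerate(ITEMS):
            if state[i] != farmer:
                continue  # can only take what is on the farmer's bank
            nxt = list(state)
            nxt[0] = 1 - farmer        # the farmer always crosses
            if i > 0:
                nxt[i] = 1 - state[i]  # the chosen item crosses too
            nxt = tuple(nxt)
            if legal(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [passenger]))

# "farmer" alone in the path means he crosses with nothing:
print(solve())  # ['sheep', 'farmer', 'wolf', 'sheep', 'cabbage', 'farmer', 'sheep']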

But even as a language model it has serious flaws. Another thing I tried was to ask it for a list of five-letter words in which the third letter was a specific one ("a", for example), and after four tries it wasn't able to give me a correct list of 20 words.
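
Funny thing: checking that kind of list is trivial for an ordinary program. Another toy sketch, with a stand-in word sample instead of a real dictionary file:

Code:
# Keep only five-letter words whose third letter is "a".
words = ["brain", "sheep", "crane", "wolf", "chair", "spade"]
valid = [w for w in words if len(w) == 5 and w[2] == "a"]
print(valid)  # ['brain', 'crane', 'chair', 'spade']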
Quote:Then "marketing" said... "Never mind that! ... It can take over the world... It's scary... it's powerful... Invest now." ... and media followed suit.
This has been a marketing job since the start. Some predictions for 2024 expect an increase in hardware sales because of the extra hardware needed to run AI, which is very hardware intensive.
Quote:... and here we are.  There is no AI that we (the public) have seen. 
Some things are considered AI, like image/pattern recognition.
The main problem is that there isn't a clear definition of intelligence, so it's hard to know what AI is and is not.
#9
It appears negotiation and de-escalation tactics weren't programmed into the AI? Not surprised.
"The real trouble with reality is that there is no background music." Anonymous

Plato's Chariot Allegory
#10
(03-01-2024, 08:07 AM)quintessentone Wrote: It appears negotiation and de-escalation tactics weren't programmed into the AI? Not surprised.


Who programs the programmers who program the AI? Rolleyes