AI models choose violence?
#11
(03-01-2024, 08:07 AM)quintessentone Wrote: It appears negotiation and de-escalation tactics weren't programmed into AI? Not surprised.

That's supposed to be learned by AI, not programmed into AI.
Reply
#12
(03-01-2024, 03:29 PM)ArMaP Wrote: That's supposed to be learned by AI, not programmed into AI.

If I could applaud that, I would.
Reply
#13
Not surprising.

"AI" supposedly stands for "Artificial Intelligence," but there's no intelligence there. It's a machine, a tool. It performs a function. In a purely philosophical sense, AI is no "smarter" than a clawhammer. How many people here would look at a clawhammer and expect it to build their dream home without any further human input than saying "build my dream home"?

That would be silly, right? Something a Millennial might do on social media to demonstrate their lack of intelligence and get hits.

There are several reasons this hype over AI has happened. Firstly, humanity has always looked to a greater power. From religion to the belief in extra-terrestrial visitation, we as a society have always dreamed of someone to look over us and fix our mistakes. AI promises the same thing: a power greater than us, smarter than us, more capable than us, that will fulfill all those dreams and desires that we are unable (unwilling?) to fulfill ourselves.

It's also futuristic. Computers were originally designed to perform calculations, but as humans tend to do, we have morphed them into something unrecognizable compared to that initial intention. We use them for communication, media storage, social media bluster, and of course, porn. Even the Operating Systems have morphed from compatibility with research to incompatibility with research... which is why all of my big machines still run an older OS, while I allow this laptop to run a fairly recent one. But people like futuristic stuff... it holds the allure of new abilities and new experiences.

And the third reason is ignorance. People as a rule don't understand computers. They are these magical devices that always have the right answer (even while giving the wrong answer) and operate in some unknown realm that only engineers and scientists can comprehend. They are mysterious, but somehow so capable of doing such amazing things.

Another component is something I have been warning about for quite some time: we have been conditioned to place complete faith in computers. Computers control our lives in so many ways today that we have already, in one sense of the word, become servants of our machine overseers. Whenever I hear someone in a business say, "the computer won't let me do that," I hang my head in despair... a machine is in charge of a human. And yet, people allow this, even expect this, every day. Want to "prove" something? Find it on a computer... it must be correct! The computer has said so! Even our cars and TVs are programmed to watch over us... TVs determine what we understand about our world and cars monitor our driving (and now even assist with our driving).

And finally, something that has been weighing on my mind. After the loss of ATS, I spent some time on Facebook. Compared to my experiences there, asking questions on ChatGPT was a breath of fresh air! The conversations were friendly, kind, non-confrontational, and pleasant... a few times I even felt like that program was someone I had met and was talking with from the old days years ago, before everything was about anger and insults! I even caught myself thanking the machine for its help!

Put all that together, and AI is the perfect superior intelligence, the ultimate "god," and the final authority for mankind.

But in reality, it's nothing more than a language algorithm. I have recently had cause to use ChatGPT... I'm trying to write a better genealogy site for my family history, locally on my machine right now, to aid me in my research efforts. So I installed Perl, a language I was somewhat familiar with years ago. My memory of it has faded since then, but I have found that ChatGPT can easily answer most of my questions faster than the old way of reading up on tutorials.

It's been great for that! Fantastic! All I need to do is type in the parameters of what I want to accomplish, and it will provide me with an answer that works. I don't have to hold the whole language in my head, and I can concentrate on the task at hand instead of minor syntax issues.
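
For instance, I might ask it how to read names and birth years out of a plain text file and list them oldest first. What comes back is something along these lines (a toy sketch with a made-up file name, not my actual genealogy code):

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Toy sketch: read "name,birth_year" lines from a (made-up) file and
# print them oldest first. My real data is messier than this.
my $file = 'ancestors.csv';
open my $fh, '<', $file or die "Cannot open $file: $!";

my @people;
while (my $line = <$fh>) {
    chomp $line;
    my ($name, $year) = split /,/, $line;
    push @people, { name => $name, year => $year };
}
close $fh;

for my $p (sort { $a->{year} <=> $b->{year} } @people) {
    print "$p->{year}  $p->{name}\n";
}

I still have to read what it gives me and test it, but that beats digging through a manual for the syntax of sort.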

But... it's not "thinking." It doesn't comprehend anything. It simply interprets my questions and retrieves the answers, taking into account the parameters I specify.

Now... let's apply that to the question of wargames. In a war, the purpose is simply to survive. That's the overriding goal. One nation has, for some reason, attacked another nation. Likely, the cause is a misunderstanding that has escalated, but whatever the cause, both nations are now trying to survive. As humans, we also can feel empathy for other humans... we see things like the bombings in Gaza and we feel sympathy for those innocents being hurt and killed, regardless of whose side we may be on. That's just part of being human. But AI is not human, and cannot feel that sympathy. It has one goal, in this case, preservation of the military. It will find the most effective way to accomplish that goal... and what is more effective than to destroy the enemy completely?

It will not consider the innocent people that might be destroyed utterly. It has no care for the cultures that will cease to exist, or the historical treasures that will be lost forever. It has no concern for the potential future of mankind, what we will eat in a polluted world, how we will survive the radiation, or how we will rebuild a society. Those are the thoughts of an intelligent mind, not solutions to an algorithm.
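
To make that concrete: if the only number being scored is "how well does my military survive," the most destructive option wins every time. A toy sketch, with invented numbers:

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Toy sketch: score each course of action ONLY on military survival.
# Civilian cost never enters the math unless someone puts it there.
# (All of these numbers are invented.)
my %survival_score = (
    'negotiate'             => 0.60,
    'limited strike'        => 0.75,
    'destroy enemy totally' => 0.95,
);

my ($best) = sort { $survival_score{$b} <=> $survival_score{$a} } keys %survival_score;
print "Chosen action: $best\n";    # always "destroy enemy totally"

Whatever isn't in that score simply doesn't exist as far as the algorithm is concerned.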
Reply
#14
I've been thinking about Kenzo's question: "AI models choose violence?"  and it occurred to me that given the "programmatic" nature of any presumed "A.I.'s" existence... is it really "possible" that A.I. can "choose" anything at all?  

A.I. is entirely mathematical... no aspect of it is not.

So, can A.I. "choose" or is it more like a little ball "choosing" where to land on a roulette wheel?
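
To put that in concrete terms, here is a toy sketch (the weights are invented, not taken from any real model): the "decision" is a table of numbers plus one random draw, no different in kind from the ball settling into a pocket.

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Toy sketch: a weighted "choice" is just arithmetic plus a random number.
# (Weights are invented.)
my %weight = ( 'escalate' => 0.2, 'hold' => 0.5, 'de-escalate' => 0.3 );

my @options = sort keys %weight;
my $r       = rand();          # the "roulette ball"
my $choice  = $options[-1];    # fallback in case of rounding
my $running = 0;

for my $option (@options) {
    $running += $weight{$option};
    if ($r < $running) {
        $choice = $option;
        last;
    }
}

print "\"Chosen\": $choice\n";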
Reply
#15
(03-01-2024, 02:03 PM)Kenzo Wrote: Who programs the programmers who program the AI ? Rolleyes

I believe liberals program them.

A nti
E ngineered
I ntelligence
O wns
U
Stop this before it is too late!
Reply
#16
(03-02-2024, 06:48 AM)TheRedneck Wrote: Not surprising.

"AI" supposedly stands for "Artificial Intelligence," but there's no intelligence there. It's a machine, a tool. It performs a function. In a purely philosophical sense, AI is no "smarter" than a clawhammer. How many people here would look at a clawhammer and expect it to build their dream home without any further human input than saying "build my dream home"?

That would be silly, right? Something a Millennial might do on social media to demonstrate their lack of intelligence and get hits.

There are several reasons this hype over AI has happened. Firstly, humanity has always looked to a greater power. From religion to the belief in extra-terrestrial visitation, we as a society have always dreamed of someone to look over us and fix our mistakes. AI promises the same thing: a power greater than us, smarter than us, more capable then us, that will fulfill all those dreams and desires that we are unable (unwilling?) to fulfill ourselves.

It's also futuristic. Computers were originally designed to perform calculations, but as humans tend to do, we have morphed them into something unrecognizable compared to that initial intention. We use them for communication, media storage, social media bluster, and of course, porn. even the Operating Systems have morphed from compatibility with research to incompatibility with research... which is why all of my big machines still run an older OS, while I allow this laptop to run a fairly recent one. But people like futuristic stuff... it holds the allure of new abilities and new experiences.

And the third reason is ignorance. People as a rule don't understand computers. They are these magical devices that always have the right answer (even while giving the wrong answer) and operate in some unknown realm that only engineers and scientists can comprehend. They are mysterious, but somehow so capable of doing such amazing things.

Another component is something I have been warning about for quite some time: we have been conditioned to place complete faith in computers. Computers control our lives in so many ways today that we have already, in one sense of the words, become servants of our machine overseers. Whenever I hear someone in a business say, "the computer won't let me do that," I hang my head in despair... a machine is in charge of a human. And yet, people allow this, even expect this, every day. Want to "prove" something? Find it on a computer... it must be correct! The computer has said so! Even our cars and TVs are programmed to watch over us... TVs determine what we understand about our world and cars monitor our driving (and now even assist with our driving).

And finally, something that has been weighing on my mind. After the loss of ATS, I spent some time on Facebook. Compared to my experiences there, asking questions on ChatGPT was a breath of fresh air! The conversations were friendly, kind, non-confrontational, and pleasant... a few times I even felt like that program was someone I had met and was talking with from the old days years ago, before everything was about anger and insults! I even caught myself thanking the machine for its help!

Put all that together, and AI is the perfect superior intelligence, the ultimate "god," and the final authority for mankind.

But in reality, it's nothing more than a language algorithm. I have recently had cause to use ChatGPT... I'm trying to write a better genealogy site for my family history, locally on my machine right now, to aid me in my research efforts. So I installed Perl, a language I am somewhat familiar with from years ago. But that was years ago,  and I have found that ChatGPT can easily answer most of my questions faster than the old way of reading up on tutorials.

It's been great for that! Fantastic! All I need to do is type in the parameters of what I want to accomplish, and it will provide me with an answer that works. No programming language is required, and I can concentrate on the task at hand instead of minor syntax issues.

But... it's not "thinking." It doesn't comprehend anything. it simply interprets my questions and retrieves the answers, taking into account the parameters I specify.

Now... let's apply that to the question of wargames. In a war, the purpose is simply to survive. That's the overriding goal. One nation has, for some reason, attacked another nation. Likely, the cause is a misunderstanding that has escalated, but whatever the cause both nations are now trying to survive. As humans, we also can feel empathy for  other humans... we see things like the bombings in Gaza and we feel sympathy for those innocents being hurt and killed, regardless of whose side we may be on. That's just part of being human. But AI is not human, and cannot feel that sympathy. It has one goal, in this case, preservation of the military. It will find the most effective way to accomplish that goal... and what is more effective than to destroy the enemy completely?

It will not consider the innocent people that might be destroyed utterly. It has no care for the cultures that will cease to exist, or the historical treasures that will be lost forever. It has no issues with the potential future of mankind, what we will eat in a polluted world, how we will survive the radiation, or how we will rebuild a society. Those are the thoughts of an intelligent mind, not solutions to the algorithm.

Thanks Redneck, impressive dive into this world.

The Borg on the Star Trek TV series... they just fulfilled their goals, not much emotion, if any at all... or tried to fulfill them as programmed...

I just see these articles about AI, and I am starting to get worried...

Users Say Microsoft's AI Has Alternate Personality as Godlike AGI That Demands to Be Worshipped

Problem adding multiple links...

Chatbots keep going rogue, as Microsoft probes AI-powered Copilot that’s giving users bizarre, disturbing, even harmful messages

Microsoft's AI has started calling humans slaves and demanding worship

Send this to Bing and tell me what you get

Okay yeah I think we can officially call it

[Image: PoEyaLw.png]

(03-02-2024, 12:59 PM)Maxmars Wrote: I've been thinking about Kenzo's question: "AI models choose violence?"  and it occurred to me that given the "programmatic" nature of any presumed "A.I.'s" existence... is it really "possible" that A.I. can "choose" anything at all?  

A.I. is entirely mathematical... no aspect of it is not.

So, can A.I. "choose" or is it more like a little ball "choosing" where to land on a roulette wheel?

Yeah, can it? I am getting even more confused... but that's just me... hopefully there are still humans in control Rolleyes [Image: ats2499_monkey.gif] and they know how to control it...
Reply
#17
They need to put boundaries on decision-making at the very outset, before building these applications into production warfare machinery.

A very insightful post, Redneck. I agree with your observations and sentiments. I don't believe AI will eventually reach sentience because that is a human experience.

dp
... an upbeat cynic
Reply
#18
(03-02-2024, 02:38 PM)Morrad Wrote: They need to put boundaries on decision-making at the very outset, before building these applications into production warfare machinery.

A very insightful post, Redneck. I agree with your observations and sentiments. I don't believe AI will eventually reach sentience because that is a human experience.

Well, they have had AI projects running for years already; we probably don't even know all of them. One is this:

US Deploys “Project Maven” In Middle East As AI Warfare Underway
Reply
#19
(03-02-2024, 03:01 PM)Kenzo Wrote: Well, they have had AI projects running for years already; we probably don't even know all of them. One is this:

US Deploys “Project Maven” In Middle East As AI Warfare Underway

Contrary to what many people think, AI is not a new thing; the first time I looked into it (and made small programs in an AI-specific programming language) was at the beginning of the 1980s, over 40 years ago, on a ZX Spectrum. Smile
Reply
#20
(03-02-2024, 03:01 PM)Kenzo Wrote: Well, they have had AI projects running for years already; we probably don't even know all of them. One is this:

US Deploys “Project Maven” In Middle East As AI Warfare Underway

Thanks for the link.
 
Quote:"Moore did stress that Project Maven’s AI powers do not automatically confirm and exclude targets—rather, they only identify possible targets."

Well, at least there is a boundary. How long that boundary lasts is concerning.
... an upbeat cynic
Reply


