AI models choose violence ? - Printable Version

+- Deny Ignorance (https://denyignorance.com)
+-- Forum: Current Events (https://denyignorance.com/Forum-Current-Events)
+--- Forum: Current Events (https://denyignorance.com/Forum-Current-Events--20)
+--- Thread: AI models choose violence ? (/Thread-AI-models-choose-violence)
RE: AI models choose violence ? - Kenzo - 03-02-2024

(03-02-2024, 03:32 PM)ArMaP Wrote: Contrary to what many people think, AI is not a new thing. The first time I looked into it (and made small programs in an AI-specific programming language) was at the beginning of the 1980s, 40 years ago, on a ZX Spectrum.

Whoa, 40 years? And I believed it was quite a new toy... If you use those AI chats, have you tried to get one to show its other personality, the dark side? If it has a dark side, I mean...

(03-02-2024, 03:33 PM)Morrad Wrote: Thanks for the link.

Yeah, a boundary is good to have, but how good a boundary? The problem with military AI is that even a small error could potentially cause great danger: a wrong target, etc...

RE: AI models choose violence ? - Maxmars - 03-02-2024

(03-02-2024, 03:32 PM)ArMaP Wrote: Contrary to what many people think, AI is not a new thing. The first time I looked into it (and made small programs in an AI-specific programming language) was at the beginning of the 1980s, 40 years ago, on a ZX Spectrum.

I would say that the "1980s" is MUCH closer to the truth than the "ZX Spectrum." (I would have gone with the IMSAI 8080.)

Technological advances in certain fields are far beyond anything the public is actually privy to. I can't say whether A.I. (as we are being "taught" by popular media) is similar in that respect. In a "commercial" world, where all is driven by "commerce," we will only be 'sold' the truth.

As a matter of scientific inquiry, A.I. is a worthy pursuit. "Marketing" makes of that whatever it will... "Be afraid"... "Be proud"... "Invest now."

I think AI is not ready for "declarations" of any kind. Perhaps there is an A.I. in the world today; so far as I can tell, we haven't seen it. What we have seen is the industry (hopefully) learning that you can't point these models at human works to 'educate' them and not expect them to regurgitate every cogent thought they were ever exposed to. Thus, you end up with 'deterministic' results...
they mimic what they know... they know what humans have recorded of themselves... which teaches us that we must carry our deficiencies and adapt to overcome them... the AI sees the deficiencies only as a matter of "fact."

Most AI misfires are a result of what the model is being TAUGHT, rather than what it is LEARNING... (I would call that a "rookie mistake.")

RE: AI models choose violence ? - Blaine91555 - 03-02-2024

Quote: Large language models (LLMs) acting as diplomatic agents in simulated scenarios showed "hard-to-predict escalations which often ended in nuclear attacks."

AI, which is clearly not actual AI yet, mimics what is fed into it. If it is true that, when tasked with discussing war scenarios, it sees nuking the enemy as the answer, that's because that is the data being fed into it. Perhaps we should all take that a bit more seriously, as it seems to me it may be a sign we are dangerously close to the edge of a nuclear precipice.

RE: AI models choose violence ? - ArMaP - 03-02-2024

(03-02-2024, 04:32 PM)Maxmars Wrote: I would say that the "1980s" is MUCH closer to the truth than the "ZX Spectrum." (I would have gone with the IMSAI 8080.)

What do you mean?

RE: AI models choose violence ? - Maxmars - 03-02-2024

(03-02-2024, 07:01 PM)ArMaP Wrote: What do you mean?

I just remembered the IMSAI 8080 as one of the earliest clones of 8-bit computer tech (in the mid '70s), and the ZX is mid '80s... as far as the earliest beginnings of digital magic-making go, that's almost as far back as I go. Sorry, trying to be funny... and failing, if I have to explain it. I can imagine the "ideas" of "AI" coming around quite early...

RE: AI models choose violence ? - Maxmars - 03-02-2024

Just to point out another aspect of AI that gets lost in the "look at me" world of public releases...

From: CFOs take charge of AI plans

A study by Gartner has found that CFOs are getting more involved in shaping their firms' AI strategies.
The research, which quizzed 822 bosses, found that 34 per cent of them thought CFOs had a say in their AI plans. CTOs came top with 55 per cent, followed by CIOs with 48 per cent and CEOs with 45 per cent.

...

"AI spending is set to soar by five to eight times this year at most firms, and many CFOs are playing a key role in ensuring these investments pay off and don't cause too much risk."

...

But many CFOs are not just partners but leaders in developing their firm's AI strategy to match their business and financial goals and what their backers, like boards, investors and regulators, want.

I could go on and on about how NONE of that has anything to do with "end of the world scenarios," just plain business and business acumen. Yet someone seems to be constantly stoking the flames of fear anyway... why is that? Will that 'register' with the CFOs? Something tells me "Nay!"

RE: AI models choose violence ? - OneStepBack - 03-03-2024

(03-02-2024, 03:49 PM)Kenzo Wrote: Yeah, a boundary is good to have, but how good a boundary?

Last night I remembered someone in AI being interviewed. He talked about a study he did on problem solving. The AI apparently used pathways it was not programmed to use in solving a specific problem, i.e. it created these new pathways itself. I tried looking for the interview today but couldn't find it. It may have been on YouTube. My mind isn't good these days. I will keep looking for it.

As far as military warfare goes, it could potentially open a Pandora's box. Scary stuff.

RE: AI models choose violence ? - ArMaP - 03-03-2024

(03-02-2024, 07:30 PM)Maxmars Wrote: I just remembered the IMSAI 8080 as one of the earliest clones of 8-bit computer tech (in the mid '70s), and the ZX is mid '80s... as far as the earliest beginnings of digital magic-making go, that's almost as far back as I go. Sorry, trying to be funny... and failing, if I have to explain it.
The ZX Spectrum is from 1982; I got mine as a Christmas present in 1983 (I still have it, though I have to see if it still works). As far as I know, AI studies started in the 1950s or so.

RE: AI models choose violence ? - argentus - 03-04-2024

Think about what would be best for the Earth from an AI's point of view. Humanity is toxic and forever craps in its own nest, always expanding and depleting resources. I think the Earth itself will always survive whatever horrors we thrust upon it.

I like watching 'Ancient Aliens'. Not sure if it is always on the mark, but I think there are truths there. I think the perfect evolution from current humans -- perfect in the sense of being right for the Earth -- would likely have to sacrifice all the things about humanity that make us unique and wonderful, such as creativity, emotions, and a regard for beauty. I think the evolved humans of the future won't be prejudiced or want to wage war, but they also won't be able to care about music and poetry and art. What to do, what to do.

I think these quandaries are much of why the EBEs -- the others -- are interested in us. They are well beyond our views because they know well where it will lead, but we're just so damned entertaining.

RE: AI models choose violence ? - Maxmars - 03-04-2024

I suspect that if AI were to emerge, as many fear, it would be the closest thing to "first contact" we could ever experience outside of any EBE scenario.

Of course, we can't ignore the fact that the research driven by commerce is actually seeking to create AI slaves. Imagine the responsibility... total and unrestrained control of true, intelligent (sapient) entities.

Ahem... yeah, I don't see evidence of "preparedness" for that eventuality. I wonder, would any EBEs "feel" ready for that? I'm guessing AIs might.