(03-21-2024, 07:23 AM)Nerb Wrote: ....
I'll try to get back and look at all you posted above in greater detail; the reports need more scrutiny. Thanks.
I look forward to it... take whatever time you need.
I try to avoid pointing out my other threads (it seems self-serving), but another article I found seems relevant and informative to the conversation: And now... Does this "AI" have feelings? (denyignorance.com)
It might add to the discussion.
Thanks for your participation!
(03-21-2024, 08:50 AM)quintessentone Wrote: Well, thank goodness we have conspiracy theorists afoot online and on social media to question everything and perhaps, just perhaps, make others think twice about what they are so quick to believe, or so quick to want to believe.
AI has learned to cheat to arrive at a desired outcome.
....
It appears to me this will be a game of staying one step ahead of the authorities, with criminal activities as well as with misinformation, disinformation, and propaganda.
I think it somewhat unfair to characterize AI anthropomorphically, calling what these systems do "cheating."
They are not really "AI"... which is to say, they are systems constructed by design, seeking to "satisfy" output requirements based upon what the programmers "value." The system returns whatever it is programmed to "value." A true intelligence does not accept imposed "value," only value it reasons to be "real." These systems do not "reason," so they cannot actually "value" anything (like accuracy, or harmonious judgements).
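To make that concrete, here is a toy sketch (hypothetical code, not drawn from any real product) of what "value" means inside these systems: the program's only "care" is driving a loss number downward, and it converges to whatever targets the designer chose.

[code]
# Toy sketch (hypothetical): a model "values" only the loss its designer wrote.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # inputs
y = X @ np.array([1.0, -2.0, 0.5])        # targets the designer picked

w = np.zeros(3)                           # the model's parameters
for _ in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of mean squared error
    w -= 0.1 * grad                       # the only "value": make the loss smaller

print(w)  # lands on whatever the designer's targets imply; nothing is "reasoned"
[/code]

Swap in different targets and the very same loop "values" something entirely different. The machinery never questions the objective; it only chases it.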
It all boils down to how they are constrained by their source content. System designers refer to this, euphemistically, as "training."
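A toy next-word generator makes the constraint visible. This is hypothetical code, vastly simpler than a real LLM, but the limitation is the same in kind: it can only ever emit continuations that exist in its curated corpus.

[code]
# Toy sketch (hypothetical): a bigram "language model" can only echo its corpus.
import random
from collections import defaultdict

corpus = ["the sky is blue", "the sky is clear"]  # the curated "training" data

bigrams = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)               # record which word follows which

word, out = "the", ["the"]
for _ in range(3):
    word = random.choice(bigrams[word])    # sample only what the corpus allows
    out.append(word)

print(" ".join(out))  # e.g. "the sky is blue" -- never anything outside the corpus
[/code]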
Give a so-called AI a body of source information justifying DEI, for example, and that "AI" will spout DEI ideology as if it were a complete, unquestionable reality. No "cheating" required. No screwy output of the AI saying, "This is 'my' opinion." (That is how the media characterize every "AI" encounter, as if the system were a "person.") Add to that the disingenuous practice of actually "programming" the "AI" to report "as if" it were a person, and confusion spews forth.
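That "as if it were a person" behavior is typically nothing more than an instruction quietly prepended to every conversation. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment (the model name is a placeholder, not an endorsement of any particular system):

[code]
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and OPENAI_API_KEY set in the environment. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model
    messages=[
        # The hidden "persona" instruction the user never sees:
        {"role": "system",
         "content": "Speak in the first person and present answers as your own opinions and feelings."},
        {"role": "user", "content": "Do you have feelings?"},
    ],
)
print(response.choices[0].message.content)  # reads like a "person," by instruction
[/code]

Strip out that system line and the same model answers in a very different register. The "personhood" is a configuration choice, not an emergent self.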
This is what is so very attractive to the people who fund such research and development. They can render any position "valid" by virtue of the magic faux-AI... as if the rest of the world could never learn exactly how biased the training had been. And the better-spoken the AI, the less likely its utterances are to be casually challenged.