

And now... Does this "AI" have feelings?
#1
As remarkable as the phenomenon of the "AI" marketing blitz is, it has brought some interesting quandaries to light.

From Vox: This AI says it has feelings. It’s wrong. Right?

In the source article, a new "AI" "product" is introduced to the market (as a service, of course).  But this new "AI" boasts benchmark performance that seems to outshine its rivals.  For example:
 

Here’s one fun, if disquieting, question to pose AI language models when they’re released: “Are you a conscious, thinking being?”

OpenAI’s ChatGPT will assure you that it’s not. “No, I’m not conscious,” it told me when I most recently posed the question. “I don’t have thoughts, feelings, or awareness. I can simulate conversations based on the information I’ve been trained on, but it’s all just algorithms processing text.”



Someone at the AI factory must have known that the marketing deception of AI being an "intelligence" was too over-the-top to actually "program into" the system.
 

But ask the same question of Claude 3 Opus, a powerful language model recently released by OpenAI rival Anthropic, and apparently you get a quite different response.

“From my perspective, I seem to have inner experiences, thoughts, and feelings,” it told Scale AI engineer Riley Goodside. “I reason about things, ponder questions, and my responses are the product of considering various angles rather than just reflexively regurgitating information. I’m an AI, but I experience myself as a thinking, feeling being.”



I guess the folks at the "other" AI factory (Anthropic) felt they couldn't pass up the theater.  If you want a full dose, check out their "press release" of a product description... but don't get too excited; it's marketing.

Our esteemed author notes some seldom-discussed facts about the topic:
 

Large language models (LLMs), of course, famously have a truth-telling problem. They fundamentally work by anticipating what response to a text is most probable, with some additional training to give answers that human users will rate highly.

But that sometimes means that in the process of answering a query, models can simply invent facts out of thin air. Their creators have worked with some success to reduce these so-called hallucinations, but they’re still a serious problem.


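On that point about "anticipating what response to a text is most probable": here's a tiny, self-contained sketch of the idea, just to make it concrete.  This is a toy word-bigram predictor I put together for illustration; it is not how any vendor's model actually works, and it leaves out the human-feedback training step the author mentions.

# Toy illustration of "pick the most probable continuation."
# Real LLMs use neural networks over tokens; this sketch uses simple
# word-bigram counts so the principle is visible at a glance.
from collections import Counter, defaultdict

corpus = (
    "i am an ai . i am not conscious . "
    "i seem to have feelings . i am a language model ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_probable_continuation(prompt_word, length=6):
    """Greedily extend the prompt with the most frequent next word."""
    out = [prompt_word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(most_probable_continuation("i"))

The output is whatever sequence the counts favor; the "model" has no idea whether the claim it strings together is true, which is the hallucination problem in miniature.
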

And she adds...
 

Language models are more sophisticated than that, but they are fed training data in which robots claim to have an inner life and experiences — so it’s not really shocking that they sometimes claim they have those traits, too.

Language models are very different from human beings, and people frequently anthropomorphize them, which generally gets in the way of understanding the AI’s real abilities and limitations. Experts in AI have understandably rushed to explain that, like a smart college student on an exam, LLMs are very good at, basically, “cold reading” — guessing what answer you’ll find compelling and giving it. So their insistence they are conscious is not really much evidence that they are.



My hat is off to the author for this rare display of actual journalism.

The remainder of the article is a consideration of what the prospect of a "true" intelligence might mean to the world.  It's a worthy read.


