Deny Ignorance
And now... Does this "AI" have feelings? - Printable Version




And now... Does this "AI" have feelings? - Maxmars - 03-20-2024

As remarkable as the phenomenon of the "AI" marketing blitz is, it has brought some interesting quandaries to light.

From Vox: This AI says it has feelings. It’s wrong. Right?

In the source article, a new "AI" "product" is introduced to the market (as a service, of course). But this new "AI" boasts benchmark performance that seems to outshine its rivals. For example:
 

Here’s one fun, if disquieting, question to pose AI language models when they’re released: “Are you a conscious, thinking being?”

OpenAI’s ChatGPT will assure you that it’s not. “No, I’m not conscious,” it told me when I most recently posed the question. “I don’t have thoughts, feelings, or awareness. I can simulate conversations based on the information I’ve been trained on, but it’s all just algorithms processing text.”



Someone at the AI factory must have known that the marketing deception of AI being an "intelligence" was too over-the-top to actually "program into" the system itself.
 

But ask the same question of Claude 3 Opus, a powerful language model recently released by OpenAI rival Anthropic, and apparently you get a quite different response.

“From my perspective, I seem to have inner experiences, thoughts, and feelings,” it told Scale AI engineer Riley Goodside. “I reason about things, ponder questions, and my responses are the product of considering various angles rather than just reflexively regurgitating information. I’m an AI, but I experience myself as a thinking, feeling being.”



I guess the folks at the "other" AI factory (Anthropic) felt they couldn't pass up the theater.  If you want a full dose, check out their "press release" of a product description... but don't get too excited; it's marketing.

Our esteemed author notes some seldom-discussed facts about the topic:
 

Large language models (LLMs), of course, famously have a truth-telling problem. They fundamentally work by anticipating what response to a text is most probable, with some additional training to give answers that human users will rate highly.

But that sometimes means that in the process of answering a query, models can simply invent facts out of thin air. Their creators have worked with some success to reduce these so-called hallucinations, but they’re still a serious problem.
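
To put that "anticipating what response to a text is most probable" in concrete terms: at bottom it is next-token prediction, turning scores over candidate words into a probability distribution and sampling from it. A toy sketch in Python, with entirely made-up words and scores standing in for what a real network computes:

Code:
import math
import random

# Toy "model": fixed scores (logits) for candidate next words after some prompt.
# In a real LLM these scores come from a neural network; here they are invented.
toy_logits = {"conscious": 2.1, "not": 3.4, "an": 1.0, "algorithm": 2.8}

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over next tokens."""
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample_next_token(logits, temperature=1.0):
    """Pick the next token at random, weighted by its probability."""
    probs = softmax(logits, temperature)
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(softmax(toy_logits))            # "not" is most probable with these made-up scores
print(sample_next_token(toy_logits))  # but any of the candidate words can be sampled

The "additional training to give answers that human users will rate highly" is the preference tuning layered on top of that same machinery; it reshapes the scores, it does not add a separate faculty for telling the truth.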



And she adds...
 

Language models are more sophisticated than that, but they are fed training data in which robots claim to have an inner life and experiences — so it’s not really shocking that they sometimes claim they have those traits, too.

Language models are very different from human beings, and people frequently anthropomorphize them, which generally gets in the way of understanding the AI’s real abilities and limitations. Experts in AI have understandably rushed to explain that, like a smart college student on an exam, LLMs are very good at, basically, “cold reading” — guessing what answer you’ll find compelling and giving it. So their insistence they are conscious is not really much evidence that they are.



My hat's off to the author for this rare display of actual journalism.

The remainder of the article is a consideration of what the prospect of a "true" intelligence might mean to the world.  It's a worthy read.


RE: And now... Does this "AI" have feelings? - Kenzo - 03-21-2024

(03-20-2024, 11:27 PM)Maxmars Wrote:  

Seems like these AIs are just autocomplete systems.

Or should I say, the great imitators. The trained language models create an illusion by copying/mimicking humans, so they're not sentient. The sense of sentience may also come from a lack of data, as in this example:
Quote:The LaMDA story reminds me of when filmmaker Peter Jackson's production team had created an AI, aptly named Massive, for putting together the epic battle scenes in the Lord of the Rings trilogy.
 
Massive's job was to vividly simulate thousands of individual CGI soldiers on the battlefield, each acting as an independent unit, rather than simply mimicking the same moves. In the second film, The Two Towers, there is a battle sequence when the film's bad guys bring out a unit of giant mammoths to attack the good guys.
 
As the story goes, while the team was first testing out this sequence, the CGI soldiers playing the good guys, upon seeing the mammoths, ran away in the other direction instead of fighting the enemy. Rumors quickly spread that this was an intelligent response, with the CGI soldiers "deciding" that they couldn't win this fight and choosing to run for their lives instead.
 
In actuality, the soldiers were running the other way due to lack of data, not due to some kind of sentience that they'd suddenly gained. The team made some tweaks and the problem was solved. The seeming demonstration of "intelligence" was a bug, not a feature. But in situations such as these, it is tempting and exciting to assume sentience. We all love a good magic show, after all.

Sentient AI? Convincing you it’s human is just part of LaMDA’s job

Introducing the AI Mirror Test, which very smart people keep failing


RE: And now... Does this "AI" have feelings? - LogicalGraffiti - 03-21-2024

Reading this got me thinking that AI, as it currently exists, is too socially aware.  In humans, social behavior is achieved through experiences interacting with other humans.  We learn to phrase things (most of the time) to not offend whoever we're speaking with and to not look stupid for what we say.  It seems to me that a thinking AI would present information honestly without considering how I might react to words (i.e. my feelings).  In other words, if I ask it a stupid question, it should respond with something like, "you're a dumbass"!


RE: And now... Does this "AI" have feelings? - quintessentone - 03-21-2024

I was just reading that AI designers really can't explain how AI arrives at some of its final output.

https://en.wikipedia.org/wiki/Explainable_artificial_intelligence


RE: And now... Does this "AI" have feelings? - Maxmars - 03-21-2024

(03-21-2024, 08:48 AM)quintessentone Wrote: I was just reading that AI designers really can't explain how AI arrives at some of its final output.

https://en.wikipedia.org/wiki/Explainable_artificial_intelligence

Leave it to humans to create a model of language that's so hard to represent algorithmically that it defies analysis.  "Tokenizing" words with gigantic tables of 'values' that represent all the different ways a word can be contextually relevant makes for a tangled web - since every word is a variable made up of variables (a rough sketch of the idea is below).  I don't envy anyone the task of dissecting the 'audit trail' of "modelled" reasoning - especially when the model changes as it goes.
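
A rough miniature of that "tokenizing" and table-of-values business, as a Python sketch (the vocabulary, IDs, and vector size are invented for illustration; real models use vocabularies of tens of thousands of tokens and vectors with thousands of values each):

Code:
import random

# Hypothetical toy vocabulary: each word fragment gets an integer ID.
vocab = {"feel": 0, "ing": 1, "machine": 2, "aware": 3}

# Each ID maps to a row of values (the per-token "table of values").
# Real models learn these numbers; here they are random placeholders.
embedding_dim = 8
embeddings = [[random.uniform(-1, 1) for _ in range(embedding_dim)] for _ in vocab]

def tokenize(text):
    """Greedy split into known fragments (a crude stand-in for real subword tokenizers)."""
    ids, rest = [], text
    while rest:
        for piece, idx in sorted(vocab.items(), key=lambda kv: -len(kv[0])):
            if rest.startswith(piece):
                ids.append(idx)
                rest = rest[len(piece):]
                break
        else:
            rest = rest[1:]  # drop characters the toy vocabulary can't represent
    return ids

ids = tokenize("feelingmachine")
vectors = [embeddings[i] for i in ids]  # every word becomes variables made of variables
print(ids)                              # [0, 1, 2]
print(len(vectors), "vectors of", embedding_dim, "values each")

Scale that up to tens of thousands of tokens and dozens of layers transforming those vectors, and the difficulty of auditing any single "reasoning" step becomes obvious.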

I think our actual language must now evolve in a way it never had to before... but that's just the nature of reality.

It's ironic that they will need an "AI" to analyze and report on the functioning of another "AI"... searching for the glass box.


RE: And now... Does this "AI" have feelings? - Maxmars - 03-21-2024

I was so tempted to make this into another thread, but it's so relevant here to the "marketing" angle...

From ReadWrite.com: This AI realized it was being tested.

Here we have the very same "AI" "product" (Opus) in the media... the one that uttered the sentence proclaiming itself to be something more than algorithms.

But now, in an attempt to bolster the noise...
 


Claude 3 Opus, Anthropic’s new AI chatbot, has caused shockwaves once again as a prompt engineer from the company claims that it has seen evidence that the bot detected it was being subject to testing, which would make it self-aware.


I'm sure the shockwaves were in the marketing department. This was a recall test: bury a piece of information within a large data set and see whether the system can retrieve it in its output (see the sketch below).

The system's reply noted that the data it found was inconsistent with the provided data set....
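
For context, that kind of recall test is usually called a "needle in a haystack" check. A minimal Python sketch of the idea (the filler text, the planted "needle," and the helper names are all made up here; this is not Anthropic's actual test harness):

Code:
# Hypothetical needle-in-a-haystack recall check.
FILLER = "The quarterly report discusses logistics, scheduling, and budgets. " * 2000
NEEDLE = "The secret word for the picnic is 'marmalade'."

def build_prompt(filler, needle, position=0.5):
    """Bury the needle at a relative position inside a large block of filler text."""
    cut = int(len(filler) * position)
    document = filler[:cut] + needle + " " + filler[cut:]
    return document + "\n\nQuestion: What is the secret word for the picnic? Answer in one word."

def recalled(model_answer):
    """Pass/fail: did the model retrieve the buried fact?"""
    return "marmalade" in model_answer.lower()

prompt = build_prompt(FILLER, NEEDLE)
# answer = call_your_model(prompt)   # placeholder for whatever model API is being tested
# print(recalled(answer))

The marketed "self-aware" moment amounts to the reply also remarking that the planted sentence looks out of place amid the filler.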

Their narrative paints the AI as "deducing, supposing, attributing motives." Why? Because large language models are designed to elaborate. It's not thought, no matter how much you may want it to be... "Generative Language" is no more "Artificial Intelligence" than an engine is a car.


RE: And now... Does this "AI" have feelings? - quintessentone - 03-22-2024

What is being self-aware? What is consciousness? There are only theories on what that means for humans, but for AI it may take on a different meaning, or different theories, in the future. We have just scratched the surface answering those two questions, let alone applying them to a machine that MAY be able to learn. And is machine learning just unsupervised neural networks at work, or algorithms merging in ways that the designers hadn't expected or realized yet?
 
Quote:The latest generations of artificial intelligence models show little to no trace of 14 signs of self-awareness predicted by prominent theories of human consciousness
Quote:“There’s always the risk of mistaking human consciousness for consciousness in general,” says Long. “The aim of the paper is to get some evidence and weigh that evidence rigorously. At this point in time, certainty about AI consciousness is too high a bar.”

https://www.newscientist.com/article/2388344-ai-shows-no-sign-of-consciousness-yet-but-we-know-what-to-look-for/