06-30-2024, 12:49 AM
 
Perhaps it's best to define our terms.
Artificial Intelligence is now a well-marketed concept. One definition, which seems practically generic, reads: "... artificial intelligence is the ability of a machine to perform tasks that are commonly associated with intelligent beings, including reasoning, learning, generalization, and intelligence." (Irony of ironies, this definition was itself provided by an AI.) For simplicity's sake, I will accept that as the legitimately intended definition.
Yet all we have been treated to (at least publicly) are machines that only roughly approximate what reasoning humans speak or communicate, and they inevitably fail in substance over time. Large language models (LLMs) have repeatedly been reported to "hallucinate" text meanings and context, even to the point of fabricating nonexistent 'supporting links' for their outputs.
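(To make the mechanics concrete, here is a toy sketch of my own; the corpus, URLs, and fragments below are all invented for illustration, and no real LLM is remotely this small. The point survives the simplification: a generator driven purely by next-token plausibility can splice together a well-formed citation that exists in no source, because nothing in the process checks anything against reality.)

```python
import random
import re

# Toy illustration (not a real LLM): a bigram model "trained" on three
# invented citation lines. Generation chains locally plausible next
# tokens, so it can splice fragments into a URL found in no source.
corpus = [
    "see https://journals.example.org/ai/2023/study.pdf for details",
    "see https://archive.example.net/ml/2021/review.pdf for details",
    "see https://journals.example.org/ml/2022/survey.pdf for details",
]

# Record which token ever follows which ("training").
bigrams = {}
for line in corpus:
    tokens = line.replace("/", " / ").split()
    for a, b in zip(tokens, tokens[1:]):
        bigrams.setdefault(a, []).append(b)

# "Generate" a citation: at each step, pick any successor that ever
# followed the current token. Plausibility is the only criterion.
random.seed(7)
token, output = "see", ["see"]
while token in bigrams and len(output) < 20:
    token = random.choice(bigrams[token])
    output.append(token)

# Frequently prints a well-formed URL (e.g. one line's domain stitched
# to another line's path) that appears in none of the training lines.
print(re.sub(r"\s*/\s*", "/", " ".join(output)))
```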
Such behavior indicates that we are not dealing with intelligence, and that what we have is far from sentience... which is the ultimate boogeyman used as fear-fodder (and far from being a reliable communicator of factual reality, which is the desired model).
No: unless you can provide some other, non-algorithmic method for modeling human sentience and intelligence, you will never have a similarly sentient computer. LLMs are output filters, not 'thinking' processes.
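(Again, a minimal sketch of my own, not any vendor's actual implementation: the heart of an LLM at inference time is a loop that maps context to a probability distribution over next tokens and samples from it. The model_logits function and the tiny vocabulary below are invented placeholders for a real trained network; notice that nothing in the loop looks anything up, believes anything, or verifies anything.)

```python
import math
import random

# Tiny invented vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "sky", "is", "blue", "green", "."]

def model_logits(context):
    # Hypothetical stand-in for a trained network: a real LLM computes
    # these scores from learned weights, but still returns only numbers.
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(logits, temperature=1.0):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, steps=5):
    for _ in range(steps):
        probs = softmax(model_logits(context))
        # The entirety of 'generation': sample the next token by
        # probability alone. No fact lookup, no belief, no verification.
        context = context + [random.choices(VOCAB, weights=probs)[0]]
    return " ".join(context)

print(generate(["the"]))
```

However large the network behind the scores, the loop itself has this same shape: filter the context into a distribution, sample, repeat.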
(This is all, of course, my opinion.)