09-15-2024, 08:51 AM
(09-15-2024, 12:50 AM)Maxmars Wrote: [Meaning they simply siphoned the whole thing? With no thought of what it was they were 'training' the "AI" with?]
[I'm sorry to disagree... "caveat emptor" seems an appropriate term to use here. Training "AI" requires scientific discipline - or doesn't it?]
They are not doing any training; they only supply the data to anyone who wants to use it for their own training.
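To make that separation concrete, here is a minimal sketch, assuming the Hugging Face datasets library; the dataset name is hypothetical. The provider only publishes the records, and whoever downloads them decides how, or whether, to train on them.

Code:
# Minimal sketch of the separation between data provider and trainer.
# Requires the Hugging Face "datasets" library; the dataset name below
# is hypothetical.
from datasets import load_dataset

# The provider publishes the raw records; any training is done by
# whoever downloads them.
corpus = load_dataset("some-forum/archive", split="train")
print(corpus[0])  # one raw record, exactly as supplied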
Quote:[I get that chat bots are very limited examples of what we are calling "AI"... but learning is different from remembering... as of now, what our technology does is remember. And perhaps I am wrong, but I'm unwilling to consider a device designed to only answer questions as an "intelligence."]
It's more than that: it finds the patterns in the data so it can apply them to unknown situations.
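In machine-learning terms that is generalization, and a toy example shows it. A minimal sketch, assuming scikit-learn and made-up numbers:

Code:
# Toy sketch of "finding patterns in data and applying them to unknown
# situations": fit a classifier on labelled points, then query points
# it has never seen.
from sklearn.linear_model import LogisticRegression

# Training data: inputs below 5 are class 0, inputs above 5 are class 1.
X_train = [[1], [2], [3], [7], [8], [9]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# 4 and 6 were never in the training data, but the learned pattern
# still applies to them.
print(model.predict([[4], [6]]))  # expected output: [0 1]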
When I presented ChatGPT with the classic "wolf, goat, cabbage" problem, it was, as expected, able to solve it. When I presented it with a modified version, it tried to apply the same method used to solve the original puzzle.
It failed.
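Anyone can reproduce that kind of test. A sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the exact wording of my modified puzzle are placeholders, not the ones from my test:

Code:
# Sketch of probing a chat model with a modified "wolf, goat, cabbage"
# puzzle. Assumes the OpenAI Python client and OPENAI_API_KEY set in
# the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

modified_puzzle = (
    "A farmer must ferry a wolf, a goat, and a cabbage across a river. "
    "This boat carries the farmer plus TWO items at once. "
    "What is the minimum number of crossings?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model can be probed this way
    messages=[{"role": "user", "content": modified_puzzle}],
)

# A model pattern-matching on the classic one-item puzzle may answer as
# if the boat still holds a single item, the failure described above.
print(response.choices[0].message.content)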
Quote:In the example above, telling me something is wrong means its trainers have made that resolution happen... mathematically. - And yet we have the occasional occurrence of a hallucination... How does the math 'stop adding up?' Their collective models seem unreliable.
Unless the trainers make specific connections between input and output (something like "when someone asks about drugs, just say they are bad"), the output is always the system's choice, not the trainers'. Naturally, that choice is based on the training materials, the only thing the system knows.
That's why they get those "hallucinations": results the trainers did not expect.
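The distinction fits in a few lines. A minimal sketch; the "drugs" rule is the hypothetical example from above, and generate stands in for any trained model:

Code:
# Trainer-imposed rules short-circuit the model; everything else is the
# system's own choice, based only on what it learned from its training data.
def answer(prompt: str, generate) -> str:
    # A specific connection between input and output, set by the trainers.
    if "drugs" in prompt.lower():
        return "Drugs are bad."
    # No rule applies: the output is whatever the learned patterns produce,
    # which is where unexpected results ("hallucinations") can appear.
    return generate(prompt)

# Toy stand-in for a trained model.
model = lambda p: "Rivers generally flow downhill."
print(answer("Tell me about drugs", model))   # the hard-coded rule fires
print(answer("Tell me about rivers", model))  # the system's own choice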
Quote:[What I mean is that the entire project of "AI" seems to be pervasively populated by people 'envisioning' its commercial "use," driving its design... not seriously concerned that such a reality is fraught not just with the whole "terminator" vibe, but that there is a moral hazard in creating a mind trapped in silicon - to task with our applications of it... they're not thinking about what happens to that mind - and what that might mean for us... the people who they want dependent on their construct.]
For anything anyone creates, there will always be someone trying to use it commercially.
And you are assuming intelligence implies a mind, something we cannot really know.