06-24-2024, 04:13 AM
(06-23-2024, 04:37 PM)Maxmars Wrote: I have to say, this is a lot more revealing of the programmers' training and direction than I initially considered.
I get the idea of making language synthesis more "human," even if only from a utilitarian perspective. We would naturally find it less cumbersome to absorb data if we felt that it was presented as "we speak." So yeah, I can give them that latitude...
However, there is a big "BUT" following that understanding.
Someone is 'deciding" what manner of human speech actually is "better."
They are creating a "model" which the so-called "AI" will adhere to.
They are inserting "perplexities" of language on a "random" basis to ensure the machine communicates "as humans do."
Except that is NOT "what humans do."
This will lead to output that is even more distinctly "artificial."
The subtleties of human speech are not a function of thought, but of feeling.
Algorithmic processes have no "feeling," and any attempt to mechanize feeling will inevitably fail.
Process models can only go so far and no further, until someone actually develops a true AI.
We haven't, or at least I have yet to see AI... only clever "algorithmic models."
Yeah, I get what you mean...
This "model"... we will see what it's all about soon, maybe. It will be artificial, yes.
Do you think algorithms, or the current incarnation of AI, could copy a person's way of communicating? I don't know the answer. I'm not using any AI, so I'm a bit of an outsider in the whole AI chatbot stuff...