03-25-2024, 03:00 PM
This post was last modified 03-25-2024, 03:01 PM by Maxmars.
Edit Reason: grammar
 
I thought the research article that appears in ScienceDaily, "Two artificial intelligences talk to each other," was very interesting.
A well-explained idea surfaces within it, one that features some very accurate descriptions of what LLMs (large language models) are essentially all about. Namely...
Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What's more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, without being able to communicate it to their congeners.
A sub-field of artificial intelligence (AI) -- Natural language processing -- seeks to recreate this human faculty, with machines that understand and respond to vocal or textual data...
(bold and underlining is mine)
While some are interested in making robots that can communicate instructions between themselves, this work has significant application to approaching a 'true' intelligence, rather than the current "simulacra" that the media is trying to brainwash us into believing is "AI".
The researcher and his team have succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training. ''We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We 'connected' it to another, simpler network of a few thousand neurons,'' explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
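To picture the setup they describe, here's a minimal sketch of the idea: a large, frozen, pre-trained language model produces a sentence embedding of the instruction, and a much smaller trainable network is "connected" on top of it to turn that embedding into an action. To be clear, this is not the authors' actual code; the tiny vocabulary, the random-projection stand-in for S-BERT, and the function names are all my own hypothetical placeholders just to show the two-network wiring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen, pre-trained language model (S-BERT in the study):
# here just a fixed random projection from a bag-of-words count vector to a
# 384-dim "sentence embedding". The real model has ~300M parameters; this
# placeholder only mimics the interface (text in, fixed-size vector out).
VOCAB = ["go", "stop", "left", "right", "if", "the", "light", "is", "green", "red"]
EMBED_DIM = 384
W_frozen = rng.standard_normal((len(VOCAB), EMBED_DIM)) / np.sqrt(len(VOCAB))

def sentence_embedding(instruction: str) -> np.ndarray:
    """Frozen embedding: never updated during training."""
    counts = np.array(
        [instruction.lower().split().count(w) for w in VOCAB], dtype=float
    )
    return counts @ W_frozen

# The small network "connected" on top: a few thousand trainable weights
# mapping the instruction embedding to scores over possible actions.
HIDDEN = 64
W1 = rng.standard_normal((EMBED_DIM, HIDDEN)) * 0.05
W2 = rng.standard_normal((HIDDEN, 2)) * 0.05  # two toy actions: [go, stop]

def act(instruction: str) -> np.ndarray:
    """Run the instruction through both networks; argmax picks the action."""
    h = np.tanh(sentence_embedding(instruction) @ W1)
    return h @ W2  # raw action scores
```

The design point is the split: only the small head would be trained on the task, while the big language model stays frozen and just supplies its understanding of the instruction, e.g. `act("go if the light is green")` returns a score vector of shape `(2,)`.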
Fascinating stuff....