(03-20-2024, 11:27 PM)Maxmars Wrote:
Seems like these AIs are just autocomplete systems.
Or should I say, the great imitators. The trained language models create an illusion by copying/mimicking humans, so they are not sentient. The feeling of sentience may also come from a lack of data, as in this example:
Quote:The LaMDA story reminds me of when filmmaker Peter Jackson's production team had created an AI, aptly named Massive, for putting together the epic battle scenes in the Lord of the Rings trilogy.
Massive's job was to vividly simulate thousands of individual CGI soldiers on the battlefield, each acting as an independent unit rather than simply mimicking the same moves. In the second film, The Two Towers, there is a battle sequence in which the film's bad guys bring out a unit of giant mammoths to attack the good guys.
As the story goes, while the team was first testing out this sequence, the CGI soldiers playing the good guys, upon seeing the mammoths, ran away in the other direction instead of fighting the enemy. Rumors quickly spread that this was an intelligent response, with the CGI soldiers "deciding" that they couldn't win this fight and choosing to run for their lives instead.
In actuality, the soldiers were running the other way due to lack of data, not due to some kind of sentience that they'd suddenly gained. The team made some tweaks and the problem was solved. The seeming demonstration of "intelligence" was a bug, not a feature. But in situations such as these, it is tempting and exciting to assume sentience. We all love a good magic show, after all.
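The quoted anecdote describes a common failure mode in agent-based simulation: each agent follows a simple independent decision rule, and a gap in its data triggers a fallback behavior that onlookers misread as intent. A minimal sketch of the idea (the function names, threat values, and fallback rule here are purely illustrative assumptions, not Massive's actual design):

```python
import random

def decide(agent_strength, enemy_data):
    """Each agent acts independently: fight if it estimates it can win.

    Hypothetical rule for illustration only. If the agent has no data
    for this enemy type, the fallback kicks in and it flees -- a bug
    that, seen from outside, looks like a fearful "decision".
    """
    if enemy_data is None:
        return "flee"
    return "fight" if agent_strength >= enemy_data["threat"] else "flee"

# A thousand independent soldier agents with varying strength.
soldiers = [random.uniform(0.5, 1.0) for _ in range(1000)]

# The mammoths are a brand-new unit the agents have no data on:
actions = [decide(strength, None) for strength in soldiers]
print(actions.count("flee"))  # prints 1000 -- the whole army runs away
```

Under this toy rule, the mass retreat is fully explained by the missing-data branch, no sentience required, which is exactly the point of the anecdote.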
Sentient AI? Convincing you it’s human is just part of LaMDA’s job
Introducing the AI Mirror Test, which very smart people keep failing