I've been using Brave Search lately with the AI component turned on, just to see how the AI response compares with the actual search results. I have found 3 or 4 instances of the AI stating something as fact about a subject that is a complete lie. I've also found several instances where the links returned in the AI portion of the output appear to be fictitious.
I read the earlier statements about AI "hallucinations" in fact being "lies". That was my gut reaction to the phenomenon as well. However, I do think I have to agree that "hallucinate" is perhaps more accurate. What the user is effectively doing is asking the "AI" (LLM) to return a short dissertation on a specific topic. The AI dutifully searches its coffers and writes up a short essay on the topic asked about. Mission accomplished? Well, the trouble arises when the model doesn't contain data that matches the query (or has had that data restricted from use) but still needs to complete the task, so it fills in the blanks with whatever text looks statistically plausible. It's not the LLM's fault that it doesn't have, or can't access, the data necessary to answer the query. It doesn't think; it's a computer program predicting the next likely word.
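To make that concrete, here is a tiny Python sketch of the mechanism. The words and scores below are invented purely for illustration (no real model has a three-word vocabulary), but the math is the shape of the real thing: softmax turns the model's raw scores into probabilities, every candidate token gets a nonzero share, and the decoding loop has to emit one of them.

    import math

    # Toy next-token scores for a made-up question. These names and numbers
    # are invented for illustration; they're not from any real model.
    logits = {"Smith": 2.1, "Jones": 2.0, "Brown": 1.9}

    def softmax(scores):
        """Convert raw logits into a probability distribution."""
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)
    print(probs)
    # Every candidate gets a nonzero probability, and the decoder must pick
    # one. "I don't know" only comes out if the training data made those
    # tokens the most likely continuation - there is no built-in refusal.

That's why "hallucinate" fits: at the decoding level the machine has no concept of a gap in its knowledge, only a ranking of plausible continuations.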
I was looking for the last Republican mayor of Phoenix the other day and couldn't find a direct answer anywhere but Bing. I found that curious, so I fired up a local copy of GPT-3 and asked it, and it said it didn't know. I then asked about the mayor by name, and it returned the man's info and said he was a Democrat - and he was not! So it's troubling that these datasets are incomplete, and in many cases willfully so. It also sure makes you wonder what is going on with Maricopa County elections when you can't find a list of Phoenix mayors and their party affiliations together anywhere on the Internet.
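For anyone who wants to run this kind of spot-check on their own machine, here is a rough sketch using the Hugging Face transformers pipeline. To be clear about the assumptions: the sketch uses GPT-2, a GPT-style model that's easy to run locally (swap in whatever local model you have), and the prompt wording is an example, not the exact query from the story above.

    # Rough sketch of a local hallucination spot-check. Model choice and
    # prompt are illustrative assumptions, not a prescribed setup.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "Q: Who was the last Republican mayor of Phoenix, Arizona?\nA:",
        max_new_tokens=40,
        do_sample=False,  # greedy decoding, so the run is reproducible
    )
    print(result[0]["generated_text"])
    # The model will produce *an* answer either way; check it against a
    # primary source before trusting it.

The point of the exercise isn't the specific answer, it's watching how confidently the model completes the prompt whether or not the fact is in its training data.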