"Don't talk about the assassination attempt"
(08-01-2024, 02:25 PM)Maxmars Wrote: A) Re-affirming the embedded lie that "AI" is the subject... LLM's are NOT AI - they are an algorithmic processing of language.

LLMs, as the name says, are language models, which by themselves do nothing. The AI chatbots that use the LLMs are the ones that do something; they are the ones "talking" to people.
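As a rough illustration (the generate() function below is a made-up stand-in for whatever model API is actually used, not anything Meta has published), the language model only turns text into more text; it's the program wrapped around it that does the "talking":

Code:
def generate(prompt: str) -> str:
    """Hypothetical LLM call: just returns a continuation of the prompt."""
    return "(model output for: " + prompt + ")"

def chat_bot() -> None:
    # The "bot" is this loop: it collects input, calls the model, prints output.
    history = ""
    while True:
        user_text = input("You: ")
        if not user_text:
            break
        history += "User: " + user_text + "\nAssistant: "
        reply = generate(history)   # the LLM only produces text
        history += reply + "\n"
        print("Bot:", reply)        # the surrounding program is what "talks"

if __name__ == "__main__":
    chat_bot()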

Quote:B) "Fact checking" is an interesting designation - the implication being that their so-called AI's purpose is for "fact checking."

Meta's fact-checking is done by humans: one image was tagged as "altered", and their "technology" (they do not mention AI in that paragraph) reapplied that tag to similar images, including the original, unaltered photo.
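Just to illustrate how that kind of "similar image" matching can over-match (this is only a guess at the general technique, not what Meta actually uses; the file names and the threshold are made up), near-duplicate detection is often done with perceptual hashes, which barely change when an image is cropped or has its contrast tweaked:

Code:
from PIL import Image      # third-party packages: Pillow and ImageHash
import imagehash

TAGGED = []  # list of (perceptual hash, tag) pairs

def tag_image(path: str, tag: str) -> None:
    TAGGED.append((imagehash.phash(Image.open(path)), tag))

def lookup_tag(path: str, max_distance: int = 8):
    h = imagehash.phash(Image.open(path))
    for known_hash, tag in TAGGED:
        if h - known_hash <= max_distance:  # Hamming distance, in bits
            return tag                      # close enough -> the same tag comes back
    return None

# tag_image("altered_copy.jpg", "altered")
# lookup_tag("original_photo.jpg")  # may also return "altered" if the hashes are close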

Quote:C) "Meta AI responses" - as if Meta has "AI" and it is responding under the guise of Meta.

"Meta AI" is the name of their AI system, so if the system responds to something they are really "Meta AI" responses.

Quote:They seem to say that "emergent events" are too complicated for the "AI" to contend with?  Why?  It can't "keep up?"  Is "AI" suddenly too feeble to deal with the informational reality that humans live and thrive in?  What happened to all that 'fearful power' that AI is supposed to bring onto data analysis? Or is it that the sanctioned "minders" can't keep up with the political filters and their changing objectives or the "mood" of the owners?

AI, in cases like LLMs, needs lots of information to work with, so it can find patterns and apply them.
AI dedicated to other work uses different data sets, so if it is "fed" only data about a specific topic, for example, chemical reactions, that's what it will be able to analyse.

To get consistent results, that data needs to be consistent; otherwise it will be hard to find one pattern, and in the end the AI system may find more than one pattern, one more correct than the other(s).

With breaking news events, in which the information is anything but consistent, it's natural that the results will be far from good. I don't know why they even risk using it in such cases.
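A toy example of what I mean (the "reports" below are invented, and real systems are obviously far more complex than a majority vote): with the same simple rule, a consistent set of inputs gives one clear pattern, while conflicting breaking-news reports give none:

Code:
from collections import Counter

def dominant_claim(reports, threshold=0.8):
    # Only accept a "pattern" if one claim clearly dominates the reports.
    claim, n = Counter(reports).most_common(1)[0]
    return claim if n / len(reports) >= threshold else "no reliable pattern"

stable_topic  = ["claim A"] * 9 + ["claim B"]
breaking_news = ["claim A", "claim B", "claim C", "claim A", "claim B"]

print(dominant_claim(stable_topic))   # -> claim A
print(dominant_claim(breaking_news))  # -> no reliable pattern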

Quote:Apparently the very idea that using an alleged "AI" to resolve, collate, and compile information creates 'issues.'   And while Meta (and others) encourage everyone to marvel at their 'product' and make use of it; it is unwise to use it for that purpose because it lies....

The problem isn't the compilation of the information; it's the interpretation of that information that goes into the answers the system gives.
Garbage in, garbage out. If they do not control the input, the system's output will be unreliable, obviously.

Quote:It lies... but they don't call that lying... they call it "hallucinating" because "lying" would be a bad thing.

It's not a lie because AI does not have intent, although I disagree with the choice of the word "hallucination".
 
Quote:"Doctored" as in had the contrast enhanced, was cropped, or otherwise rendered more 'useful?'  And to whom exactly did it appear that anyone was smiling, as opposed to grimacing or clenching their teeth?  Passive assertion is a sin in explanations.

This image

[Image: 450905258_997765228466604_3313953216044584660_n.jpg]

is an altered version of this image.

[Image: transferir.jpg]

Obviously.

Did you comment without seeing the image?