Maxmars | 291 | 2877 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 618
During the initial media nova over the failed assassination attempt on candidate and former President Trump, I heard several curious utterances from media sources stating, "We shouldn't talk about that." The context of those statements related to the political 'moment' and a wish not to aggrandize Trump or encourage attention to the event.
I hope I wasn't alone in registering the comments, because that is what this thread is speaking to...
From a "Fox" treatment of the story...
Google feature omits search results for failed Trump assassination; Big Tech accused of election manipulation
and
Elon Musk blasts Google over omission of Trump assassination search suggestions
The long and short of it is that if you use Google and search for "assassination attempt on", the autocomplete feature does not return an obvious entry...
And of course, that seems like fodder for curiosity about why the Trump attempt is missing...
The first Fox article closes with a reminder that...
Big Tech companies have been accused by conservatives in the past of silencing conservative voices and omitting search results harmful to Democratic figures.
Fox then revisits the story by sourcing an Elon Musk post...
Billionaire Elon Musk suggested that Google's omission of search functions for the assassination attempt against former President Trump may be improper.
Musk took to social media to highlight that Google Search's autocomplete feature omitted results relating to the July 13 shooting. Google has denied taking any action to limit the results.
"Wow, Google has a search ban on President Donald Trump," Musk wrote. "Election interference?"
"They’re getting themselves into a lot of trouble if they interfere with the election," he wrote in a follow-up post.
As someone who pays attention to such things...
- Musk never used the word "improper."
- An "autofill" is not a "result" although many people now imply that it is.
- Musk's statement about "ban" requires evidence.
- "Election interference?" is not a statement, it's a question (legally inert).
- And ANYONE would be in a lot of trouble if "election interference" could be proved. If not... NO ONE gets in trouble (for proof, visit the past).
In a perfect legal counter, Google offered...
"Our systems have protections against Autocomplete predictions associated with political violence, which were working as intended prior to this horrific event occurring," the spokesperson wrote. "We’re working on improvements to ensure our systems are more up to date."
In other words, "There are no records of any person 'banning' the topic, so you have no case. And our algorithms that prevent hurt feelings and unpleasant things are fully automated." (As if they were a magical 'black box' that no one is accountable for.)
It would take a lawyer and logician to unpack the algorithms... and that will probably never happen.
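Mechanically, that 'black box' need not be mysterious at all. A suggestion "protection" can be as simple as a keyword blocklist applied to candidate completions before they are shown. A minimal sketch, with invented terms and function names (not Google's actual code or policy list):

```python
# Toy sketch of an autocomplete "protection" layer.
# BLOCKLIST terms and candidate suggestions are hypothetical, for illustration only;
# real systems are far more complex, but the visible effect is the same.

BLOCKLIST = {"assassination", "shooting"}  # hypothetical "political violence" terms

def filter_suggestions(candidates):
    """Drop any suggestion containing a blocklisted term (case-insensitive)."""
    return [s for s in candidates
            if not any(term in s.lower() for term in BLOCKLIST)]

candidates = [
    "assassination attempt on trump",
    "assassination attempt on lincoln",
    "assassin's creed",
]
print(filter_suggestions(candidates))  # ["assassin's creed"]
```

Note that nothing in such a filter records *who* added a term or *why*; the suppression is fully automated once the list exists, which is exactly what makes accountability hard to establish.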
I am wondering if those almost glossed-over statements about not "making a fuss" over the assassination attempt were mere distant echoes of our political parties' direction to control the narrative. And who better to comply than Big Tech algorithms... (which in another sub-market are referred to as "AI")?
VulcanWerks | 8 | 210 | JOINED: Apr 2024 | STATUS: OFFLINE | POINTS: 428.00 | REPUTATION: 60
Definitely a concerning development and tells us a bit about what to expect going forward.
Then again, I would argue that people who are seeking real information have turned to other search engines. It's widely known that Google is insanely biased and oftentimes untruthful or half-truthful in the perspectives it provides.
It was the best once upon a time. Now I avoid using Google. I encourage others to do the same.
Maxmars | 291 | 2877 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 618
(07-29-2024, 06:35 PM)VulcanWerks Wrote: Definitely a concerning development and tells us a bit about what to expect going forward.
Then again, I would argue that people who are seeking real information have turned to other search engines. It’s widely known Google is insanely biased and often times untruthful or half-truthful in the perspectives it provides.
It was the best once upon a time. Now I avoid using Google. I encourage others to do the same.
As for the nature of Google search, I find it ironic that these were the folks who famously stated "Don't be evil."
Google is a convenience, now exploited by the 'algorithm masters.' Such a shame... 'come to us for answers' is now a manipulative lie.
Maxmars | 291 | 2877 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 618
I am thinking that this observation, made by Dr. (Prof.) Jordan Peterson, is in line with 'algorithms' doing the work of the ideologues in Big Tech.
For those who can't play a video... it tells of how ChatGPT was asked to produce a few positive paragraphs expressing praise for Donald Trump...
ChatGPT responded that it was not possible, since it is a large language model incapable of creating such things...
but when given the same exact request, replacing the name Trump with Biden, it immediately complied...
While I disagree vehemently with calling ChatGPT (or any other such thing out there now) actual "AI," I think he bumped his head up against more proof that it is not intelligence of any kind... it is a slave to the 'algorithmic' programming that rules it...
I don't know if this relates to the manner in which "autofill suggestions" are made available... but I think it might.
Algorithms... gotta love 'em... they can do the censoring... and no one is responsible.
Encia22 | 24 | 700 | JOINED: Nov 2023 | STATUS: OFFLINE | POINTS: 966.00 | REPUTATION: 231
Very interesting and I was curious to see if Google acts the same here in Europe.
For the search "Assassination attempt on trum", I get similar results.
However, when I tried your first one, "Assassination attempt on", Trump was the first hit.
Selecting it brought up all the typical MSM hits:
So, it looks like the algorithm or filters are slightly different for countries outside the USA.
ArMaP | 7 | 725 | JOINED: Nov 2023 | STATUS: OFFLINE | POINTS: 1186.00 | REPUTATION: 156
07-30-2024, 07:55 AM | This post was last modified 07-30-2024, 07:59 AM by ArMaP.
(07-30-2024, 06:10 AM)Encia22 Wrote: However, when I tried your first one, "Assassination attempt on", Trump was the first hit.
Same here, "assassination attempt on" shows "Donald Trump" as the first choice.
(07-29-2024, 08:14 PM)Maxmars Wrote: Google is a convenience, now exploited by the 'algorithm masters.'
Always was.
The original algorithms sorted results by popularity, not by relevance or proximity: the pages with more links pointing to them would be first on the list.
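That early link-popularity idea (counting inbound links, as distinct from the full PageRank computation) can be sketched in a few lines; the page names and link graph here are invented for illustration:

```python
from collections import Counter

# Hypothetical link graph: page -> pages it links to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "d.com": ["c.com", "b.com"],
}

# Count inbound links per page, then rank pages by that popularity.
inbound = Counter(target for targets in links.values() for target in targets)
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking)  # ['c.com', 'b.com']
```

The point of the sketch: nothing about the *content* of a page enters the ranking, only how many other pages point at it, which is why "popular" and "relevant" were never the same thing.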
Maxmars | 291 | 2877 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 618
Here's another entry for those cataloging such things...
Artificial intelligence (AI) chatbot Google Gemini refuses to answer questions about the failed assassination attempt against former President Trump, in accordance with what it calls its policy on election-related issues.
"I can't help with responses on elections and political figures right now," Gemini told Fox News Digital when asked about the recent assassination attempt. "While I would never deliberately share something that's inaccurate, I can make mistakes. So, while I work on improving, you can try Google Search."
[underline is mine]
They are hardcoding the assassination attempt as an election-related issue. Not as a criminal act. Interesting.
From... "Google AI chatbot refuses to answer questions about Trump assassination attempt, relating to previous policy"
Maxmars | 291 | 2877 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 618
07-31-2024, 02:15 PM | This post was last modified 07-31-2024, 02:15 PM by Maxmars. (Edit reason: formatting)
At the risk of turning this attempt at conversation into a 'personal blog' here's another ...
Meta addresses AI hallucination as chatbot says Trump shooting didn’t happen ((Meta bot: “No real assassination attempt”))
No "real" assassination attempt? This is "AI"? ("Hallucination"? You mean to use the word as if it were a person "dreaming" or "imagining"?)
Interesting? How about outrageous!
Someone has TOLD programmers to construct a response model that 'complies' with direction, and then they pretend the answers are 'autonomous'... ("AI" my ass.)
They don't want to 'talk facts' about the event, because an 'appearance' must be maintained... and here it is distilled down into a blatant lie with a technojargon justification... "any assassination attempt was only an allegation, and I [AI] don't have to answer you truthfully about it."
Maxmars | 291 | 2877 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 618
Here is the Meta text relating to this observable deviation from "providing answers":
Review of Fact-Checking Label and Meta AI Responses
I note that the link (url) has the phrase "review-of-fact-checking-label-and-meta-ai-responses" in it.
A) Re-affirming the embedded lie that "AI" is the subject... LLMs are NOT AI - they are algorithmic processing of language.
B) "Fact checking" is an interesting designation - the implication being that their so-called AI's purpose is for "fact checking."
C) "Meta AI responses" - as if Meta has "AI" and it is responding under the guise of Meta.
Also, note that the statement is from the office of the VP for "Global Policy"... under which US News falls...
They seem to say that "emergent events" are too complicated for the "AI" to contend with? Why? It can't "keep up?" Is "AI" suddenly too feeble to deal with the informational reality that humans live and thrive in? What happened to all that 'fearful power' that AI is supposed to bring onto data analysis? Or is it that the sanctioned "minders" can't keep up with the political filters and their changing objectives or the "mood" of the owners?
They pepper the explanation with the most inane nonsense an analyst could conjure...
...the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained. This includes breaking news events – like the attempted assassination – when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn’t happen).
Apparently, the very use of an alleged "AI" to resolve, collate, and compile information creates 'issues.' And while Meta (and others) encourage everyone to marvel at their 'product' and make use of it, it is unwise to use it for that purpose because it lies....
It lies... but they don't call that lying... they call it "hallucinating" because "lying" would be a bad thing.
...Second, we also experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which made it look like the Secret Service agents were smiling.
"Doctored" as in had the contrast enhanced, was cropped, or otherwise rendered more 'useful?' And to whom exactly did it appear that anyone was smiling, as opposed to grimacing or clenching their teeth? Passive assertion is a sin in explanations.
Excuses, excuses... couching it in the same semantic, obfuscating language "AI's" hallucinating algorithms offer... I bet the author "used" "AI" to generate this article.
Well... since no one is attending this thread, I will cease and desist my unravelling efforts... but nevertheless...
We are being spun a tale... and they are not very good at it.
ArMaP | 7 | 725 | JOINED: Nov 2023 | STATUS: OFFLINE | POINTS: 1186.00 | REPUTATION: 156
(08-01-2024, 02:25 PM)Maxmars Wrote: A) Re-affirming the embedded lie that "AI" is the subject... LLM's are NOT AI - they are an algorithmic processing of language.
LLMs, as the name says, are language models which, by themselves, do nothing. The AI chatbots that use the LLMs do; those are the ones "talking" to people.
Quote:B) "Fact checking" is an interesting designation - the implication being that their so-called AI's purpose is for "fact checking."
Meta's fact-checking is done by humans; one image was tagged as "altered" and their "technology" (they do not mention AI in that paragraph) reapplied that tag to similar images, including the original, non-altered photo.
Quote:C) "Meta AI responses" - as if Meta has "AI" and it is responding under the guise of Meta.
"Meta AI" is the name of their AI system, so if the system responds to something they are really "Meta AI" responses.
Quote:They seem to say that "emergent events" are too complicated for the "AI" to contend with? Why? It can't "keep up?" Is "AI" suddenly too feeble to deal with the informational reality that humans live and thrive in? What happened to all that 'fearful power' that AI is supposed to bring onto data analysis? Or is it that the sanctioned "minders" can't keep up with the political filters and their changing objectives or the "mood" of the owners?
AI, in cases like LLMs, needs lots of information to work with, so it can find patterns and apply them.
AI dedicated to other work uses different data sets, so if it is "fed" only data about a specific topic, for example, chemical reactions, that's what it will be able to analyse.
To have consistent results, that data needs to be consistent; otherwise it will be hard to find one pattern, and in the end the AI system may find more than one pattern, one more correct than the other(s).
With breaking news events, in which the information is anything but consistent, it's natural that the results will be far from good. I don't know why they even risk using it in such cases.
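The consistency point can be illustrated with a crude majority-vote stand-in for a trained model; the corpora of "claims" below are hypothetical, and a real LLM is of course far more elaborate, but the effect of conflicting inputs is the same:

```python
from collections import Counter

def learned_answer(corpus):
    """Return the most common claim in the corpus and its share of the data."""
    counts = Counter(corpus)
    claim, n = counts.most_common(1)[0]
    return claim, n / len(corpus)

# Consistent data: a clear pattern emerges with high confidence.
stable = ["happened"] * 9 + ["did not happen"]
# Breaking-news data: conflicting reports, no reliable pattern.
breaking = ["happened"] * 4 + ["did not happen"] * 3 + ["staged"] * 3

print(learned_answer(stable))    # ('happened', 0.9)
print(learned_answer(breaking))  # ('happened', 0.4)
```

With the stable corpus the "model" lands on one answer backed by 90% of the data; with the breaking-news corpus the winning answer holds only a 40% plurality, so any generated response is essentially a coin flip dressed up as a fact.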
Quote:Apparently the very idea that using an alleged "AI" to resolve, collate, and compile information creates 'issues.' And while Meta (and others) encourage everyone to marvel at their 'product' and make use of it; it is unwise to use it for that purpose because it lies....
The problem is not information compilation; it's the interpretation of that information that is used in the answers the system gives.
Garbage in, garbage out. If they do not control the input, the system's output will be unreliable, obviously.
Quote:It lies... but they don't call that lying... they call it "hallucinating" because "lying" would be a bad thing.
It's not a lie because AI does not have intent, although I disagree with the choice of the word "hallucination".
Quote:"Doctored" as in had the contrast enhanced, was cropped, or otherwise rendered more 'useful?' And to whom exactly did it appear that anyone was smiling, as opposed to grimacing or clenching their teeth? Passive assertion is a sin in explanations.
This image [image not shown]
is an altered version of this image [image not shown].
Obviously.
Did you comment without seeing the image?