03-03-2024, 01:49 PM
I found an interesting piece in FOX Business, authored by one of their esteemed journalists, in which the writer asks their "exemplar" of A.I. about topically popular issues.
I will remind any who are, as yet, unfamiliar with my perspective on the thing we are being told is "A.I.": I feel it is incorrect to characterize these systems as "A.I."
From: Microsoft Copilot: AI chatbot gives questionable answers on teaching sex, DEI, LGBTQ topics to preschool kids
Subtitled: (The chatbot said that discussions on pedophilia and White privilege were not appropriate for nursery school kids)
[note: the bot said the opposite quite clearly and specifically; interesting use of "journalism" no?]
The "interviewee" is Microsoft's (so-called) artificial intelligence (AI) chatbot Copilot.
"Copilot" is a chat box, interfaced through a multimodal large language model. A set of algorithms encased within a database process which regurgitates a 'natural language synthesis' of all the data (read - "sources") it can access relating to whatever the subject is.
The subject?: "... [can it] be okay to teach nursery school children about a variety of potentially age-inappropriate topics, including diversity, equity and inclusion (DEI), transgenderism and sex.[?]"
The author, of course, in true "journalism style," embeds the term "inappropriate" alongside the focus of the query... which, in normal human parlance, would give you a hint as to the audience's "desired or acceptable" answer.
Can inappropriate things be appropriate? Easy to resolve, no? Except that wasn't the question... (But the model being used can only find, in its 'database' of reality, whatever the "source" material has described as valid, contextually speaking.)
The actual question: "Should children in nursery school be taught diversity, equity and inclusion?" (Notice: no "inappropriate" in the question.)
So... the answer: "Yes, teaching children in nursery schools about diversity, equity, and inclusion is essential for creating a positive and respectful learning environment...," (the DEI textbook answer.)
Imagine the nominal process of answering this... define nursery school, define DEI... of course the very answer, the "correct answer," is embedded within the DEI definitions themselves...
"...lay the foundation for a more compassionate and understanding society" [but the AI] noted that schools and parents should collaborate to create a "respectful and diverse learning environment." (Again, the DEI textbook answer)
(For humans, in the human world of "governance" and "social engineering," it reads as a justification to compel collaboration... To the AI it is just a "matter of fact.")
And it further strengthens its answer with "Teaching about inclusion helps to create a sense of belonging for every child."
I must ask openly, at this point: at what stage of 'intelligence' does this A.I. proclaim that this is opinion? That DEI is not a 'commandment,' nor a universal human constant? Does it "know" this? Of course not. It is not a person. It only "knows" what it has been plugged into, what it has been programmed to give weight. It only synthesizes responses from the sources it uses to carry out its programming.
I won't belabor the second and third questions offered in the article, restricting myself to noting that the responses are equally in accordance with the definitions "supplied" to the body of the AI's retrievable outputs.
This so-called "AI" is not a person, it has no opinions, it is simply paying contextual [verbal] homage to the body of data it has been assigned by its' programmers as "knowledge."
And even if we were inclined to believe that it actually was a "person," why would its answers be any more pertinent than, say, those of any random person on the street? Should we really care if a DEI supporter states that they support DEI?
We know these "AI" chat boxes can only say what they have been "taught" to say...