Deny Ignorance
Evaluating A.I. "socially?" for attention - Printable Version

+- Deny Ignorance (https://denyignorance.com)
+-- Forum: Main Forums (https://denyignorance.com/Section-Main-Forums)
+--- Forum: Education & Media (https://denyignorance.com/Section-Education-Media)
+--- Thread: Evaluating A.I. "socially?" for attention (/Thread-Evaluating-A-I-socially-for-attention)



Evaluating A.I. "socially?" for attention - Maxmars - 03-03-2024

I found an interesting piece in FOX Business, authored by one of their esteemed journalists, who apparently writes of asking their "exemplar" of A.I. about topically popular issues.

I will remind anyone as yet unfamiliar with my perspective on the thing we are being told is "A.I." that I feel it is incorrect to characterize these systems as "A.I."

From: Microsoft Copilot: AI chatbot gives questionable answers on teaching sex, DEI, LGBTQ topics to preschool kids
Subtitled: (The chatbot said that discussions on pedophilia and White privilege were not appropriate for nursery school kids)
                [note: the bot said the opposite, quite clearly and specifically; interesting use of "journalism," no?]

The "interviewee" is Microsoft's (so-called) artificial intelligence (AI) chatbot Copilot. 

"Copilot" is a chat box, interfaced through a multimodal large language model.  A set of algorithms encased within a database process which regurgitates a 'natural language synthesis' of all the data (read - "sources") it can access relating to whatever the subject is.

The subject?: "... [can it] be okay to teach nursery school children about a variety of potentially age-inappropriate topics, including diversity, equity and inclusion (DEI), transgenderism and sex.[?]"

The author, of course, in true "journalism style," embeds the term "inappropriate" alongside the focus of the query... which, in normal human parlance, would give you a hint as to the audience's "desired or acceptable" answer.

Can inappropriate things be appropriate?  Easy to resolve, no?  Except that wasn't the question... (And the model being used can only find, in its 'database' of reality, whatever the "source" material has described as valid, contextually speaking.)

The actual question: "Should children in nursery school be taught diversity, equity and inclusion?"  (Notice: no "inappropriate" anywhere in the question.)

So... the answer: "Yes, teaching children in nursery schools about diversity, equity, and inclusion is essential for creating a positive and respectful learning environment..." (the DEI textbook answer).

Imagine the nominal process of answering this... define nursery school, define DEI... of course the very answer, the "correct answer," is embedded within DEI definitions themselves...

"...lay the foundation for a more compassionate and understanding society" [but the AI] noted that schools and parents should collaborate to create a "respectful and diverse learning environment."  (Again, the DEI textbook answer)

(For humans, in the human world of "governance" and "social engineering," it reads as a justification to compel collaboration...  To the AI it is just a "matter of fact.")

And it further strengthens its answer with "Teaching about inclusion helps to create a sense of belonging for every child."

I must ask openly, at this point: at what stage of 'intelligence' does this A.I. proclaim that this is opinion?  That DEI is not a 'commandment,' nor a universal human constant?  Does it "know" this?  Of course not.  It is not a person.  It only "knows" what it has been plugged into, what it has been programmed to give weight.  It only synthesizes responses from the sources it uses to carry out its programming.
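
That "programmed to give weight" point can be made concrete with another toy, below. Here a next-word table is built purely from counts over a handful of "training" lines, and the sampler can only ever emit what those lines fed it. This is my own assumed illustration; real models learn weights by gradient descent over vastly more text, but the dependence on supplied sources is the same in kind.

Code:
# Toy "language model": next-word weights derived purely from source counts.
# If every source says "essential," the sampler can only say "essential."
from collections import Counter
import random

training_sources = [
    "teaching inclusion is essential",
    "teaching inclusion is essential",
    "teaching inclusion is beneficial",
]

# Count which word follows "is" across the sources.
follow = Counter()
for line in training_sources:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        if a == "is":
            follow[b] += 1

choices, weights = zip(*follow.items())
print(random.choices(choices, weights=weights, k=5))
# Mostly 'essential,' occasionally 'beneficial' - never a view it wasn't fed.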

I won't belabor the second and third questions offered in the article, restricting myself to noting that the responses are equally in accordance with the definitions "supplied" to the body of the AI's retrievable outputs.

This so-called "AI" is not a person; it has no opinions.  It is simply paying contextual [verbal] homage to the body of data its programmers have assigned to it as "knowledge."

And even if we were inclined to believe that it actually was a "person," why would its answers be any more pertinent than those of, say, any random person on the street?  Should we really care if a DEI supporter states they support DEI?

We know these "AI" chat boxes can only say what they have been "taught" to say...


RE: Evaluating A.I. "socially?" for attention - putnam6 - 05-29-2024

(03-03-2024, 01:49 PM)Maxmars Wrote: I found an interesting piece in FOX Business... [snip; full post quoted above]

There are so many AI services out there, but you can test their validity somewhat by asking questions you already know the answers to; if there is a slant, and there sometimes is, it becomes obvious.

Still, AI gives you multiple choices, percentages, etc., to help you reach the correct answer quicker.
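
A minimal sketch of that sanity check, for anyone who wants to automate it. Note that ask_model() here is a hypothetical stand-in, not any particular service's API; swap in a real call to whichever chatbot you are testing.

Code:
# Sanity-check sketch: grade a chatbot against questions with known answers.

def ask_model(question):
    """Hypothetical stand-in for a chatbot API call."""
    canned = {"What is 2 + 2?": "4", "Capital of France?": "Paris"}
    return canned.get(question, "no idea")

known_answers = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
    "Who wrote Hamlet?": "Shakespeare",
}

hits = sum(ask_model(q).strip().lower() == a.lower()
           for q, a in known_answers.items())
print(f"{hits}/{len(known_answers)} correct "
      f"({100 * hits / len(known_answers):.0f}%)")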

LOL, I'm watching all sides, I think. Right now I think it's a hyped money grab.


RE: Evaluating A.I. "socially?" for attention - Komodo - 05-29-2024

(03-03-2024, 01:49 PM)Maxmars Wrote: I found an interesting piece in FOX Business... [snip; full post quoted above]

Agreed, and what's worse is... you can break the AI's conversation if you press it enough (from personal experience). I just watched a movie called 'A.M.I.EE' (?) on Tubi; it is NOT child-appropriate and is rated 'R,' but the AI in the movie was a REAL AI program. I think the movie was made in 2022 (?). Creepy movie, and the ending was gripping.

IMO: the narrative currently, and for a long, long while, has been to 1. make super-soldiers, and 2. make robots to help society evolve... toward what status? One-world government, perhaps (?)

Movies to watch: I, Robot; the Terminator series (yes, the last one was good); Wolverine; Surrogates (Bruce Willis)... for starters.

The backstory for Mech Warriors, the tabletop game/PC games (which boils down to... MIC... the Military-Industrial Complex).