Propaganda and AI (the newest 'threat')
#1
Propaganda: What does it actually represent as an idea?

I have heard it said that the word "propaganda" has early roots in the Latin phrase "Congregatio de Propaganda Fide," meaning "Congregation for the Propagation of the Faith."  That refers to a congregation established by the Roman Catholic Church in 1622; its purpose was to spread church doctrine throughout the world.

"Propaganda" is a tool of persuasion.  It is an approach to communications which prioritizes the "motivation" of speech or writing (or whatever) over the content (or information) itself.  It manifests by taking the listener or audience on a speaker-controlled voyage... 

"... Actors, preachers, teachers, politicians, editors, advertisers, salesman, reformers, authors, artists, parents, - our friends and even ourselves - practice the art of persuasion... acting in the ancient Roamn tradition of the word, we are all missionaries for our causes.

Propaganda, as we know it today, can be a nefarious as well as a noble art.  For at one moment its techniques can be used to whip up racial hatred among groups of people; at another moment, its methods can be employed to move persons to acts of warmth and kindness...
" (From Lorne Green and Robert W. Allen in The Propaganda Game.)

In Allen and Greene's short introduction, they make an interesting but important distinction.

"In a democratic society it is the role of every citizen to make decisions after evaluating many ideas.  It is especially important then that a citizen be able to think clearly about the ideas that are daily presented to him (or her.)  It is imperative that he be able to analyze and distinguish between the emotional aura surrounding the ideas, and the actual content of the ideas...."

And therein lies our first big hurdle... the standing question:

Are citizens actually "able" to make that distinction... can they detect when they are being subjected to persuasion?

Our education system certainly doesn't seem to value or pursue critical 'analysis' among students.  Our media and political theater actively 'hide' deliberate efforts to persuade, preferring instead to engender a confidence in the audience that they are simply being "informed" rather than "persuaded."  Our social marketing is flooded with ideas from which all judgement has been removed, the author's conclusions presented as supported exclusively by impenetrable fact.

It has become so overt in practice that memes about the "stupidity" of people are all you can hear in the public voice.  We have been conditioned (perhaps brainwashed) into thinking people MUST be "told" what to think, because they can't be trusted to learn "how" to.  That embedded meme, or trope, is ever-present... it appears in theater, in media talking-head speech, in music, in think-tank 'pronouncements,' even in common parlance...  Common humor exalts the "stupidity" of people, often marking it as the only way we can freely laugh (at other people).

The term "propaganda" has been conflated and obfuscated by the propagandizers themselves.  We hear words like "misinformation" and "disinformation," carefully defined in the most 'fuzzy' and insubstantial way.  Often never even treated as propaganda they are seen as stand-alone 'products' - used exclusively by "those people... (you know who I mean.")  Did you notice that?  That's one of the many techniques one might face in persuasive argumentation.

I offer an example of the end game of "people are stupid."

From SecurityWeek: Preparing Society for AI-Driven Disinformation in the 2024 Election Cycle
Subtitled: The rapid evolution of AI and analytics engines will put campaign-year disinformation into hyperspeed in terms of false content creation, dissemination and impact.

The subtitle contains several "propaganda" techniques, as it "persuades" rather than informs.  "AI" is currently "sold" to the population as an existing thing.  It is not... technological advances have brought us close to the capabilities of AI, but it is not at all 'done and done.'  "Campaign-year disinformation" is not "new," and it never has been... it is what "campaigning" is about... persuasion at any cost to the information.  "Hyperspeed" is a technojargon term... meaning nothing, relatively speaking.  Insisting that the prevalence of disinformation proves it is "effective" is a false premise... disinformation is nothing unless the audience actually accepts it as true.  Simply producing 'disinformation' doesn't guarantee successful persuasion.
 

If you believe that the 2020 Presidential election in the United States represented the worst kind of campaign replete with lies, misstated facts and disinformation, I have some news for you. You haven’t seen anything yet.

The rapid evolution of artificial intelligence (AI) and analytics engines will put campaign-year disinformation into hyperspeed in terms of false content creation, dissemination and impact. To prepare ourselves as a society to sift through falsehoods, deal with them appropriately and arrive at the truth, we need to understand how disinformation works in the age of AI.

This article describes the four steps of an AI-driven disinformation campaign and how to get ahead of them so that security teams can be more prepared to deal with – and seek the truth behind – advancing tactics of malicious actors.



I challenge the approach to this argument.  Understanding how "disinformation works" is a misdirection.  Disinformation is propaganda... propaganda doesn't 'work' on its own, it is used... implemented... with intent, motive, and design.  Understanding it isn't so much about its existence, it's about discernment.  And the means to discern is not a matter of 'methodology,' nor a measure of metrics... it is a matter of observation and analysis.

The article lists four steps ("The 4 Steps of AI-Driven Disinformation Campaigns") which are intended to help the audience "seek the truth behind" malicious actors (while casually associating the word "campaign" with the act).  "Malicious actors" also seems a biased term, since all governments, marketing and religious institutions, and other 'social' collectives practice propaganda (including misinformation) all the time... with none of it rejected as "malicious."

So, we see that this is to be a primer on just how one can rationalize 'labelling' anything against our preferences as "misinformation."  Bravo.

Step one is labelled "Reconnaissance"... in which "threat actors" feature as the only people who make use of "disinformation."  They use "big data" to determine what disinformation people will believe and react to.  "It is literally akin to a professional marketing organization developing personas for persona-based marketing and sales."  But that is definitely NOT called "misinformation," is it?

Step two relates to automation in content creation.  Aside from the obvious (saved time = saved money), it seems like a 'step' not requiring isolation, since it is the object of the discussion.  "Misinformation" doesn't just spontaneously arise... by definition, it is made with the intent to deceive or mislead.  Propaganda.  I have to add: a machine whose function is to relay information will do so... and AI is a machine (especially at this state of the technology).  It will do as it is instructed to do, using data provided, not 'created.'
 

The tie-in to elections? Fake people voicing election opinions. Threat actors can instruct AI to sound like different types of Americans, across all segments of the population. For example, AI can easily create content to reflect the attitudes, opinions and vocabulary of a midwestern farmer if instructed to do so. It can then leverage the same data to create realistic content that would likely come from someone in Texas. AI is that flexible, and it can continue adjusting as it takes in more and more data, much of which people generate themselves on social media.


I suppose, given the effectiveness of the trope "people are stupid," it would stand to reason that they shouldn't be exposed to 'wrong' ideas, because they will simply adopt them... they will automatically believe them... as in "Fake people voicing election opinions."  Because all it takes is hearing that opinion, and all our reason is "lost" to the truth.  That trope is tiresome.

But I don't believe that people are stupid in any general sense.  I feel that people are, more often than not, capable of holding their own opinions, not simply passively waiting to be "told" what to think or feel.  It seems to me that someone wants to take the next step in "stupid people brainwashing 101" by conditioning us to 'fear' opinions.  Accept how feeble you are and let "us" tell you what is and isn't 'valid'... that differences of opinion are 'offensive' and 'dangerous'... that any deviation from "our" opinion is the product of malice and bad intent... a battle cry against all we hold dear.

Step three is "Amplification," another questionable entry in the brotherhood of 'steps.'  From the perspective of the listener, we are to assume that ignoring "some data" ("we'll tell you which") is the noble act of refusing to "amplify" the information.  So now even listening to potentially "manufactured" content conveys guilt upon the listener... but of course.

Step four is "Actualization," which is germane only to the executor, the sender of the 'false' message: analyze the feedback and tweak the message again... over and over.

Most of this carries presumptive baggage.  As it stands now, AI is a tool... that is all.  Perhaps someday it will actually achieve a sentient form, but it hasn't yet.  Therefore, AI is no more of a threat to us than the internet is, or cars, or a hammer.

Any threat has to be 'inserted' by a human actor, not "perceived and interpreted" ...

Misinformation much?