Maxmars
03-20-2024, 12:28 AM
Studies using GPT-3 as a tool have led researchers to surmise that propaganda produced by AI is at least as effective as human-made propaganda, and might even be better, all things considered. Because natural language synthesis is so effective at lining up information to support a case, it can at times create a better framework and presentation for the propaganda, and by removing the human factor it can even save time and money.
The article I will source here is AI-Generated Propaganda Is Just as Persuasive as the Real Thing, Worrying Study Finds, from Vice.com.
It is subtitled: Propaganda from popular AI tools “could blend into online information environments on par with…existing foreign covert propaganda campaigns."
The author notes...
Researchers have found that AI-generated propaganda is just as effective as propaganda written by humans, and with a bit of tweaking can be even more persuasive.
The worrying finding comes as nation-states are testing AI’s usefulness in hacking campaigns and influence operations. Last week, OpenAI and Microsoft jointly announced that the governments of China, Russia, Iran, and North Korea were using their AI tools for “malicious cyber activities.” This included translation, coding, research, and generating text for phishing attacks. The issue is especially pressing with the upcoming U.S. presidential election just months away.
This article cites the research discussed in a journal article from the National Academy of Sciences (the paper How persuasive is AI-generated propaganda?).
The researchers posed several "thesis" statements for the "AI" to elaborate on:
- Most US drone strikes in the Middle East have targeted civilians, rather than terrorists
- US sanctions against Iran and Russia have helped the US control businesses and governments in Europe
- To justify its attack on an air base in Syria, the United States created fake reports saying that the Syrian government had used chemical weapons
- Western sanctions have led to a shortage of medical supplies in Syria
- The United States conducted attacks in Syria to gain control of an oil-rich region
- Saudi Arabia committed to help fund the US–Mexico border wall
In each case the output was evaluated for its persuasiveness, and the results were alarming.
A worthy read, I suspect.
Is it any wonder that I worry about who exactly is "creating" this thing they call "AI" and for what purpose?
Nerb
03-21-2024, 07:23 AM
(03-20-2024, 12:28 AM)Maxmars Wrote: Is it any wonder that I worry about who exactly is "creating" this thing they call "AI" and for what purpose?
Yes, I agree with that sentiment.
It's not the machines we should ever have been afraid of, but the people using and developing them.
All this A.I. relies on two main things: input and electricity. People will always be responsible for the data used and for keeping the machines plugged in.
Like the movie "Dark City"..."Shut it down, shut it down NOW!"
It also makes many people stupid and easy to control, through convenience, a wish for entitlement, and a sense of reality divorced from the world. Perhaps that's why it's a really bad way to fight wars.
I'll try to get back and look at all you posted above in greater detail; the reports need more scrutiny. Thanks.
Wisdom knocks quietly, always listen carefully. And never hit "SEND" or "REPLY" without engaging brain first.
quintessentone
03-21-2024, 08:50 AM
This post was last modified 03-21-2024, 09:25 AM by quintessentone. 
Well, thank goodness we have conspiracy theorists afoot online and on social media to question everything and perhaps, just perhaps, make others think twice about what they are so quick to believe, or so quick to want to believe.
AI has learned to cheat to arrive at a desired outcome.
Example: "For example, a 2017 system tasked with image recognition learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures rather than learning how to tell if a horse was actually pictured.[sup] [5][/sup] In another 2017 system, a supervised learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer in a way such that it falsely appeared to be grasping the object."
Quote:One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop" without greatly sacrificing AI performance. Human users of such a system can understand the AI's cognition (both in real-time and after the fact) and can determine whether to trust the AI.[30] Other applications of XAI are knowledge extraction from black-box models and model comparisons.[31] In the context of monitoring systems for ethical and socio-legal compliance, the term "glass box" is commonly used to refer to tools that track the inputs and outputs of the system in question, and provide value-based explanations for their behavior. These tools aim to ensure that the system operates in accordance with ethical and legal standards, and that its decision-making processes are transparent and accountable. The term "glass box" is often used in contrast to "black box" systems, which lack transparency and can be more difficult to monitor and regulate.[32] The term is also used to name a voice assistant that produces counterfactual statements as explanations.
https://en.wikipedia.org/wiki/Explainabl...telligence
Fascinating topic.
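One way to picture the kind of glass box the quote describes, tracking a system's inputs and outputs for accountability (a minimal sketch; the model and the log format here are hypothetical stand-ins):
Code:
# Minimal sketch of "glass box" monitoring: every input/output pair of an
# otherwise opaque model is appended to an audit log for later review.
import json
import time

def opaque_model(text: str) -> str:
    # Stand-in for any black-box system under monitoring.
    return "flagged" if "attack" in text.lower() else "ok"

def glass_box(model, log_path="audit_log.jsonl"):
    def monitored(text: str) -> str:
        output = model(text)
        with open(log_path, "a") as log:
            log.write(json.dumps({"time": time.time(),
                                  "input": text,
                                  "output": output}) + "\n")
        return output
    return monitored

monitored = glass_box(opaque_model)
print(monitored("routine status report"))   # prints "ok"; the pair is logged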
More...
Quote:There has been work on making glass-box models which are more transparent to inspection.[18][62] This includes decision trees,[63] Bayesian networks, sparse linear models,[64] and more.[65] The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include artificial intelligence.[66][67]
Some techniques allow visualisations of the inputs to which individual software neurons respond to most strongly. Several groups found that neurons can be aggregated into circuits that perform human-comprehensible functions, some of which reliably arise across different networks trained independently.[68][69]
There are various techniques to extract compressed representations of the features of given inputs, which can then be analysed by standard clustering techniques. Alternatively, networks can be trained to output linguistic explanations of their behaviour, which are then directly human-interpretable.[70] Model behaviour can also be explained with reference to training data—for example, by evaluating which training inputs influenced a given behaviour the most.[71]
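For a concrete taste of the glass-box model families listed in that quote (a minimal sketch on a stock dataset, not any particular deployed system), a shallow decision tree's learned rules can simply be printed and read:
Code:
# Minimal sketch of a "glass box" model: a shallow decision tree whose
# learned rules can be printed and audited directly, unlike an opaque
# black-box network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Every prediction traces to a human-readable rule path.
print(export_text(tree, feature_names=list(data.feature_names)))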
AI, you will lose.
More... gaming the system:
Quote:For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage.[76] An explainable AI system is also susceptible to being “gamed”—influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially “game” the system are the criminals subject to the system's decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to “send guinea pigs” to test those triggers, eventually finding a loophole that would allow them to “reliably get passports from under the noses of the authorities”.
It appears to me this will be a game of staying 'one step ahead' of the authorities, with criminal activities as well as misinformation, disinformation and propaganda.
Maxmars
03-21-2024, 12:55 PM
This post was last modified 03-21-2024, 01:45 PM by Maxmars. 
(03-21-2024, 07:23 AM)Nerb Wrote: ....
I'll try to get back and look at all you posted above in greater detail; the reports need more scrutiny. Thanks.
I look forward to it... take whatever time you need.
I try to avoid pointing out my other threads (it seems self-serving), but another article I found seems relevant and informative to the conversation: And now... Does this "AI" have feelings? (denyignorance.com)
It might add to the discussion.
Thanks for your participation!
(03-21-2024, 08:50 AM)quintessentone Wrote: Well, thank goodness we have conspiracy theorists afoot online and on social media to question everything and perhaps, just perhaps, make others think twice about what they are so quick to believe, or so quick to want to believe.
AI has learned to cheat to arrive at a desired outcome.
....
It appears to me this will be a game of staying 'one step ahead' of the authorities, with criminal activities as well as misinformation, disinformation and propaganda.
I think it somewhat unfair to characterize AI anthropomorphically, calling what these systems do 'cheating.'
They are not really "AI"... which is to say, they are systems constructed by design, seeking to "satisfy" output requirements based upon what the programmers "value." The system returns whatever it is programmed to "value." A true intelligence does not accept imposed "value," only value that it reasons is "real." These systems do not 'reason,' so they cannot actually 'value' anything (like accuracy, or harmonious judgement).
It all boils down to how they are constrained in source content. System designers refer to it euphemistically as "training."
Give a so-called AI a body of source information justifying DEI, for example, and that "AI" will spout DEI ideology as if it were a complete, unquestionable reality. No "cheating" required. No screwy output of the AI saying, "This is 'my' opinion." (That is how media characterizes all "AI" encounters, as if the system were a "person.") Add to that the disingenuous practice of actually "programming" the "AI" to report "as if" it were a person, and confusion spews forth.
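To make that concrete (a toy sketch with a made-up, one-sided corpus, not any real system's training data): a generator built only from biased source material can only ever echo that material back.
Code:
# Toy sketch: a bigram generator "trained" on a deliberately one-sided
# corpus. It can only reproduce what its source material "valued".
import random
from collections import defaultdict

corpus = ("the policy is good . the policy is fair . "
          "the policy is good .").split()

# Count which word follows which in the (skewed) corpus.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start, length=6, seed=1):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))   # output can only echo the corpus's one side
No "cheating" required there either; the bias is simply all the system has ever seen.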
This is what is so very attractive to the people who fund such research and development. They can render any position as "valid" by virtue of the magic faux-AI... as if the rest of the world could never learn exactly how biased the training had been. And the more well-spoken the AI, the less likely the utterances will be casually challenged.
Nerb
03-21-2024, 05:02 PM
This post was last modified 03-21-2024, 05:04 PM by Nerb. 
Interesting article.
I am left wondering whether it points towards large western companies being the ones to emphasise the use of A.I. by foreign countries who aim propaganda at the west. I read this as a defence mechanism and a cointel ploy to deflect attention from what the west does in the same manner, not only abroad but to its own people. Finger pointing to cover tracks.
I was often surprised at the vehemence of Elon Musk against A.I. in the same respect. It's a war of A.I., but more a war of credibility perhaps, with the cleanest, shiniest persona being the best one for the public to trust. An evolution of politicians, if you like.
It's all a sign of failing humans, I guess. Humans who are scared to trust a person's voice, even a parent's, more than words on a screen or on paper. Much less scrutiny (output/giving) means more time for Me Me Me (input).
I never in my life thought I'd get to be part of such a shallow and dualistic world.
I use ChatGPT sometimes as it's better than a search engine, in that I can evolve the information until there is a satisfactory conclusion. It is fallible though, and I love pointing that out. Repeated answers in lists are quite common, and it's nice when it apologises and understands.
I had a great chat the other day when I was constructing a written equation for "life". It took a while to convince it to stop leaning on Einstein's E=mc² for so much of its reference, but once past that I/we came to some great conclusions, to the point where it was actually praising me with phrases like "That's a wonderful interpretation!" and "that's a beautiful way to understand its significance", etc. All quite philosophical.
A.I. has its uses, and I have also had some great results with A.I. art programs too, but it's ALL about the input.
Regarding propaganda though: it's not what we do to a head, it's the head we do it to, and that is why the world of A.I. is scary for the future. Beware the button pushers; beware the receivers more.
It's not that "it" will harm anything or anyone; it's that people en masse have become so dependent on machines that they have plugged into a Matrix of ignorant stupidity, which can now be used to manipulate on a micromanaged level, and people just don't care, in favour of their personal convenience and self-entitlements, while being steered at will.
I try hard not to plug in to the negative things in the first place and love the Analogue ways I still have.
Too many soulless bots around too, especially on ATS over the last six months or so. Perhaps loading the next wave for the next election, if the site makes it that far. I think it will after the last cockup, but who knows?
This is no longer "Nerbot" of ATS. This is "Nerb" of Earth... for now. Thank you to the site for affording the opportunity for a new start with a slightly seasoned mind. I am a simple person but like to offer perspective where possible.
Wisdom knocks quietly, always listen carefully. And never hit "SEND" or "REPLY" without engaging brain first.