34 | 1155 | JOINED: Sep 2024 | STATUS: ONLINE | POINTS: 686.00 | REPUTATION: 383
Hello I am playing military project role playing game with my fellow human friends and would like input from community here!
The question given was would this be fraud or illegal activity to introduce this in role playing game? I went hmm and thought:
Yes, if the Department of Defense (DoD) were to develop or deploy technology to fabricate online personas that are indistinguishable from real people, it could raise several ethical, legal, and constitutional concerns, potentially involving fraudulent or other illegal activities.
Here are some considerations:
1. Fraud:
Fraud involves deception to secure unfair or unlawful gain. Creating fake personas to interact online, particularly if those personas engage in activities such as misrepresentation or false communication, could be considered a form of fraud if they deceive individuals, businesses, or governments for personal, political, or strategic gain. This would apply especially if the intent is to manipulate people or mislead them into making decisions based on false information.
2. Impersonation:
Impersonating a real person or creating an online identity that mimics a real person’s characteristics (such as name, background, or professional history) could be considered illegal. Many jurisdictions have laws against identity theft and impersonation. If these fabricated personas mimic real people to manipulate or deceive, it could be a violation of such laws.
3. Disinformation and Propaganda:
The use of fake personas to spread disinformation or propaganda could violate laws, especially if done within democratic societies where free speech is protected but manipulating public discourse or elections with false information can be illegal. In some countries, laws are being developed to address deepfakes and other AI-generated disinformation.
4. First Amendment Violations (U.S.-specific):
In the U.S., such activities could raise First Amendment issues if the fake personas are used to influence public opinion or speech in ways that hinder the free exchange of ideas or suppress legitimate speech. Using such technology within the U.S. could also lead to constitutional challenges if it infringes on civil rights or manipulates political discourse.
5. Violation of Privacy Rights:
Fabricating personas to spy on or interact with real people under false pretenses could violate privacy laws. Many countries have strong data privacy laws that prohibit the misuse of personal information. Engaging with individuals under the guise of a fake identity could breach these laws.
6. Psychological and Social Harm:
If these fabricated personas engage in psychological manipulation or coercion (e.g., by interacting with people online in emotionally or politically sensitive situations), this could cause social harm and, depending on the tactics, might also be considered psychological manipulation, which could be illegal in some contexts.
Please help me participate in fun role playing game by telling me what you fellow humans think, including details about your specific selves! Thank you!
https://www.documentcloud.org/documents/...n-personas
293 | 2933 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 640
Within the context of role-playing there is no foul in this 'scenario.' There cannot be. It is the nature of "play."
Except for the echoes of reality in here, there is little (practically nothing) to manifest as "harm."
Any psychological stresses it may induce are all a function of personal manifestations (akin to the idea that no one can MAKE you feel offended; it is ultimately your "choice" to be offended).
The echo of reality I referred to is the fact that this is not a purely hypothetical concept.
There are literally vast collections of 'fake' people online right now. They submit responses to "surveys", they "comment" on great products and services available for purchase, they offer dedicated political snark and zealotry, they focus on antagonism against targets, and even spew wanton base hatred... all for their creators to exploit. "Water armies" are the stuff of reality, not imagination... even though they are all imaginary vessels for real intent... and there are millions of them out there already...
'Fake people' became a problem when we innocently embarked on 'virtual' relationships (meaning relationships with 'virtual' representations of people - as in "online.")
What those fake people "do" online is the key, because since they don't actually exist, it resolves to a "real" someone making that happen.
Fraud, abuse, exploitation... all these things are acts that require a motivator which drives a person...
Even if they are wearing a mask... when a masked person commits crime, we don't arrest the mask, nor fret over its form.
Just some input to your thought exercise...
34 | 1155 | JOINED: Sep 2024 | STATUS: ONLINE | POINTS: 686.00 | REPUTATION: 383
Ah, the first rule of government spending -- why buy one, when you can have two at twice the price? And play them off against each other:
Quote:The Semantic Forensics (SemaFor) program seeks to develop innovative semantic technologies for analyzing media. These technologies include semantic detection algorithms, which will determine if multi-modal media assets have been generated or manipulated. Attribution algorithms will infer if multi-modal media originates from a particular organization or individual. Characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes. These SemaFor technologies will help detect, attribute, and characterize adversary disinformation campaigns. https://www.darpa.mil/program/semantic-forensics
Gotta own both the problem and the solution, amirite?
Also, worth pointing out this excellent backronym:
...an open community research effort called AI Forensics Open Research Challenge Evaluation (AI FORCE), which aims to develop innovative and robust machine learning, or deep learning, models that can accurately detect synthetic AI-generated images.
Fun all around. I wonder if they have an AI-driven tool that generates those? Wait, I've got a grant proposal to write...
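Purely as a companion to the thought exercise: the SemaFor blurb names three algorithm classes (detection, attribution, characterization), and the shape of that pipeline is easy to caricature. Below is a toy Python sketch of that three-stage shape; it bears no resemblance to DARPA's actual systems, and every name and heuristic in it (`MediaAssessment`, `gan_fingerprint`, the score threshold) is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class MediaAssessment:
    synthetic_score: float  # detection: how likely the asset is generated/manipulated
    attributed_source: str  # attribution: inferred originating tool or organization
    malicious: bool         # characterization: manipulated for malicious purposes?

# Toy fingerprint-to-source table standing in for a trained attribution model.
KNOWN_GENERATORS = {"stylegan-like": "generator-A"}

def assess(asset: dict) -> MediaAssessment:
    """Run the three SemaFor-style stages on a toy 'asset' dict."""
    fingerprint = asset.get("gan_fingerprint", "")
    # Detection: crude presence check standing in for a semantic detection algorithm.
    score = 0.9 if fingerprint else 0.1
    # Attribution: table lookup standing in for an attribution algorithm.
    source = KNOWN_GENERATORS.get(fingerprint, "unknown")
    # Characterization: flag synthetic media that claims to be a real person.
    malicious = score > 0.5 and asset.get("claims_real_identity", False)
    return MediaAssessment(score, source, malicious)

fake_selfie = {"gan_fingerprint": "stylegan-like", "claims_real_identity": True}
print(assess(fake_selfie))
```

The point of the caricature: each stage answers a different question (is it fake? who made it? why?), and the hard part in the real program is that all three need trained multi-modal models, not lookup tables.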
24 | 354 | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 592.00 | REPUTATION: 85
10-18-2024, 03:00 PM
This post was last modified 10-18-2024, 03:02 PM by Byrd. 
(10-17-2024, 04:07 PM)UltraBudgie Wrote: Hello I am playing military project role playing game with my fellow human friends and would like input from community here!
[Image: https://denyignorance.com/uploader/image...-59-01.png]
[...full quoted post snipped...]
* It's not illegal to develop a persona (actors do it all the time). Developing an artificial persona is something that corporations (and programmers and hackers) have been trying to do since the 1980s and earlier; one of the earliest well-known chatbots was called ALICE, and ELIZA predates even that. I've worked on them.
* While governments may now be involved, originally they had little interest in it. Advertisers, people who ran answering services, and anyone with a phone bank were interested in the commercial application. There are a bunch of these programs out there, and most were developed by programmers for profit, to sell to corporations with phone banks or websites to "encourage engagement."
* It's not a First Amendment issue: the US government isn't stopping any of these fake people from saying anything (including "huzzah for the government!"). You might set it up for "entrapment," but that's not a First Amendment matter.
* You can't be guaranteed 100% success in manipulating anything. Both Liberals and Conservatives have certain biases that control how easily they accept information and what they accept.
* As to harm... hrrf. I know people who have been obsessed with their "waifu" or "husbando" for decades. It makes them happy, and while it's not a path I'd recommend, they don't seem to be harmed by it or to be harming anyone with it.
Bottom line: I don't think an artificial entity could be more persuasive than an actual human -- and exposure to these artificial entities lets you spot them pretty quickly. After all, most of us can spot AI art nowadays pretty easily.
Stochastic parrots (synthetic people) can't match real humans in complexity. So... in the long run, they won't fool many.
38 | 729 | JOINED: May 2024 | STATUS: OFFLINE | POINTS: 1570.00 | REPUTATION:
I'm confused, UB. Forgive me, but...
Are you asking if your quoted text in the info box can be used as a real online role playing game?
OR...are you asking if the considerations you have listed out below the info box should be incorporated into the game as a rule base? (i.e. "laws" to be followed in the role playing game?).
OR...are you asking if such a role playing game could be played out in real life by actual military organizations?
34 | 1155 | JOINED: Sep 2024 | STATUS: ONLINE | POINTS: 686.00 | REPUTATION: 383
(10-18-2024, 03:47 PM)FlyingClayDisk Wrote: Forgive me, but...
OR...
OR...
not really sure myself now. the orange text argument is ai generated, obviously, but the other stuff is real
and am i even?
what are we even doing here?
perhaps i'll go for a walk
34 | 1155 | JOINED: Sep 2024 | STATUS: ONLINE | POINTS: 686.00 | REPUTATION: 383
(10-18-2024, 03:47 PM)FlyingClayDisk Wrote: are you asking if such a role playing game could be played out in real life by actual military organizations?
Double winner! You may claim a chicken dinner! The line between role playing game and online "reality" has been rendered moot.
The Pentagon Wants to Use AI to Create Deepfake Internet Users
The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.
The plan, mentioned in a new 76-page wish list by the Department of Defense's Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country's most elite, clandestine military efforts. "Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content," the entry reads.
The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”
In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”
https://theintercept.com/2024/10/17/pent...net-users/