

"Personhood credentials" to prove you're real... because of A.I.
A number of researchers have published a paper to address a problem (a theoretical one, for now).

The problem is that as "AI" becomes more widely used, and as more people apply their own schemes and objectives to it, using AI for the sake of deception is likely to become more commonplace.  It is very possible that systems, as they are, won't be able to definitively discern which "users" are real humans and which are not.  This opens up opportunities for abuse at the very least, and criminal activity at worst.

In the past, simple gimmicks like CAPTCHA could weed out most problems in this regard...but in theory, AI won't be foiled by such tools.

From arXiv.org: Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online

(Also an article in TheRegister: AI firms propose 'personhood credentials' … to fight AI
Subtitled: It's going to take more than CAPTCHA to prove you're real)

Everyone's presence on-line is virtual.  The environment of the internet is one of simulacra, only 'images' or 'representations.'  A cynic might say, "It's all dots on a screen" or "everything is unverifiable."  In such a reality, for lack of a better framework, we cannot be certain of the messages we read, hear, or otherwise experience.  Trust becomes more important as we conduct ourselves socially in this virtual reality... a trust largely made up of the faith we put into it.

With the newest notional boogeyman of AI looming, we now realize that some of the denizens of the internet may not actually be 'people.'  Could AI create accounts of its own volition, apply for an on-line loan, get a job, buy things, sell things?  Much like other synthetic entities, could it exist immortally, like a corporation?  Could it be punished for violating laws, go to jail, pay fines?

If it is an entity, does it have rights or responsibilities, or is that even possible in our social order?

In order to prepare for that eventuality, we must hold a dialog about the idea that such possibilities exist.

People in several very prestigious organizations seem to be undertaking the problem, and the paper I attached is replete with their musings....
 

Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions—governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI’s increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI’s increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and “proof-of-personhood” systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. In contrast, existing countermeasures to automated deception—such as CAPTCHAs—are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use-cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.


TheRegister's article seems to take a skeptical tone, and I am, for the most part, inclined to agree.  

The paper itself is a bit of a sales pitch for the idea that some combination of physical and digital information should suffice to create "credentials" for "real people" that AI cannot duplicate.  The "virtue signaling" words used to create the narrative are terms of privacy, access, 'free' expression... the problem is ... and this might just be my inner cynic ... that at nearly every turn I see assumptions that come from hubris and presumption.  For example:

The authors list two principal 'must haves' for any such credentials...
  • Credential limits: The issuer of a PHC gives at most one credential to an eligible person.
  • Unlinkable pseudonymity: PHCs let a user interact with services anonymously through a service-specific pseudonym; the user’s digital activity is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude.
My objection here is the idea of "trust" on the "issuer" side... since this "I'm a human" credential requires "issuance," it is, by definition, open to abuse by the issuer.  The first abuse could be, in theory, making someone "pay" to have such a credential.  And "unlinkable pseudonymity" seems analogous to the blockchain approach to Bitcoin transactions.  While a tokenized currency like Bitcoin may lend itself to that kind of 'programmatic' wrangling, I don't share those authors' confidence in 'pseudonymity' in a world where governments and their commercial allies want to surveil us and know "everything" about who we are... that's our world.
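For readers curious what "service-specific pseudonym" means in practice, here is a toy sketch of the general idea (my own illustration, not the paper's construction): a user holds one secret and deterministically derives a distinct identifier per service, so no two services see the same name.  Note that this simple version does NOT deliver the paper's full guarantee of unlinkability against a colluding issuer; the authors rely on anonymous-credential cryptography (zero-knowledge proofs) for that.

```python
import hashlib
import hmac
import secrets

def derive_pseudonym(user_secret: bytes, service_id: str) -> str:
    """Derive a deterministic, service-specific pseudonym.

    Same secret + same service -> same pseudonym (so the service can
    recognize a returning user); different services -> unrelated-looking
    pseudonyms (so services cannot trivially cross-link the user).
    """
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

# One long-term secret held by the user.
user_secret = secrets.token_bytes(32)

forum_name = derive_pseudonym(user_secret, "forum.example")
shop_name = derive_pseudonym(user_secret, "shop.example")

assert forum_name != shop_name                                   # unlinkable across services
assert derive_pseudonym(user_secret, "forum.example") == forum_name  # stable per service
```

The service names ("forum.example", "shop.example") are hypothetical.  Even this toy makes the trust question concrete: whoever learns `user_secret` (or mediates its issuance) can recompute every pseudonym, which is exactly the backdoor concern below.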

The very first violation of trust that I foresee is the inevitable 'backdoor' that will be inserted into the 'anonymity' aspect of the credentials... governments have demonstrated over and over that they can't help themselves as long as they can conjure the excuse of national security driven by political or ideological 'will.'

I found myself unable to take the paper seriously once those two "defining principles" were outlined.  I found the loosely described framework kind of scary... I think it could be perverted into yet one more control mechanism and a left-handed means to throttle internet users... imagine having to "register" your "humanity"... kind of apocalyptic, now that I think about it.

But I thought some might disagree and find the idea palatable... so I'm sharing it here, if you're interested.  Feel free to convince me otherwise.



Messages In This Thread
"Personhood credentials" to prove you're real... because of A.I. - by Maxmars - 09-04-2024, 02:39 AM

