

"Personhood credentials" to prove you're real... because of A.I.
#1
A number of researchers have published a paper addressing a problem (a theoretical problem, for now).

The problem is that as "AI" becomes more widely used, and the schemes and objectives of more people are applied to its use, it becomes more likely that using AI for the sake of deception will become commonplace.  It is very possible that systems, as they are, won't be able to definitively discern which "users" are real humans and which are not.  This opens up opportunities for abuse at the very least, and criminal activity at worst.

In the past, simple gimmicks like CAPTCHA could weed out most problems in this regard...but in theory, AI won't be foiled by such tools.

From arXiv.org: Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online

(Also an article in TheRegister: AI firms propose 'personhood credentials' … to fight AI
Subtitled: It's going to take more than CAPTCHA to prove you're real)

Everyone's presence online is virtual.  The environment of the internet is one of simulacra, only 'images' or 'representations.'  A cynic might say, "It's all dots on a screen" or "everything is unverifiable."  In such a reality, for lack of a better framework, we cannot be certain of the messages we read, hear, or otherwise experience.  Trust becomes more important as we conduct our social behavior in this virtual reality... a trust largely made up of the faith we put into it.

With the newest notional boogeyman of AI looming, we now realize that some of the denizens of the internet may not actually be 'people.'  Could AI create accounts of its own volition, apply for an online loan, get a job, buy things, sell things?  Much like other synthetic entities, could it exist immortally, like a corporation?  Could it be punished for violating laws, go to jail, pay fines?

If it is an entity, does it have rights or responsibilities, or is that even possible in our social order?

To prepare for that eventuality, we must hold a dialog about the idea that such possibilities exist.

People in several very prestigious organizations seem to be undertaking the problem, and the paper I attached is replete with their musings....
 

Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions—governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI’s increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI’s increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and “proof-of-personhood” systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. In contrast, existing countermeasures to automated deception—such as CAPTCHAs—are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use-cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.


TheRegister's article seems to take on a skeptical tone, and I am, for the most part, inclined to agree.

The paper itself is a bit of a sales pitch for the idea that some combination of physical and digital information should suffice to create "credentials" for "real people" that AI cannot duplicate.  The "virtue signaling" words used to create the narrative are terms of privacy, access, 'free' expression... the problem is ... and this might just be my inner cynic ... that at nearly every turn I see assumptions that come from hubris and presumption.  For example:

The authors list two principal 'must-haves' for any such credentials...
  • Credential limits: The issuer of a PHC gives at most one credential to an eligible person.
  • Unlinkable pseudonymity: PHCs let a user interact with services anonymously through a service-specific pseudonym; the user’s digital activity is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude.
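The "unlinkable pseudonymity" property can be sketched roughly in code. To be clear, this is my own toy illustration, not the construction from the paper (which relies on anonymous-credential cryptography such as blind signatures and zero-knowledge proofs); a bare keyed hash only demonstrates the *property* being claimed, namely that one secret yields stable per-service identities that can't be correlated across services without that secret:

```python
import hmac
import hashlib

def service_pseudonym(user_secret: bytes, service_id: str) -> str:
    """Derive a stable, service-specific pseudonym from a user-held secret.

    Hypothetical sketch only: a real PHC scheme uses anonymous credentials,
    not a bare HMAC.  The point illustrated is that pseudonyms for two
    different services cannot be linked without knowing the secret.
    """
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

secret = b"held only by the credential holder"
forum_id = service_pseudonym(secret, "forum.example")
shop_id = service_pseudonym(secret, "shop.example")

assert forum_id != shop_id                                   # unlinkable across services
assert forum_id == service_pseudonym(secret, "forum.example")  # stable within one service
```

Note what this toy version does *not* give you: nothing stops a user from making many secrets, which is exactly why the scheme needs a trusted issuer enforcing the "at most one credential" rule.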
My objection here is the idea of "trust" on the "issuer" side... since this "I'm a human" credential requires "issuance," it is, by definition, open to abuse by the issuer.  The first abuse could be, in theory, making someone "pay" to have such a credential.  And "unlinkable pseudonymity" seems analogous to the blockchain approach to Bitcoin transactions.  While tokenizing a currency like Bitcoin may lend itself to that kind of 'programmatic' wrangling, I don't share those authors' confidence in 'pseudonymity' in a world where governments and their commercial allies want to surveil us and know "everything" about who we are... that's our world.

The very first violation of trust that I foresee is the inevitable "backdoor" that will be inserted into the 'anonymity' aspect of the credentials... governments have demonstrated over and over that they can't help themselves as long as they can conjure the excuse of national security driven by political or ideological 'will.'

I found myself not being able to take the paper seriously once those two "defining principles" were outlined.   I found the loosely described framework kind of scary... I think it could be perverted into yet one more control mechanism and a left-handed means to throttle internet users...  imagine having to "register" your "humanity"... kind of apocalyptic, now that I think about it.

But I thought some might disagree and find the idea palatable... so I'm sharing it here, if you're interested.  Feel free to convince me otherwise.
Reply
#2
And, there's a flip side, or perhaps a different facet of the same side of the coin in all of this.  As you note, separating AI from reality will become increasingly difficult, and inevitably people will use this with ill intent.  Agreed.  One of these 'slices' of ill intent is not just the use of AI to pose as something real, but quite the opposite... to use AI as an excuse or alibi to cover for something real.  In other words... "I didn't do it!  What you thought you saw was really AI."  And this is even harder still to prove.  Finding and proving false positives is difficult enough, but now we will be faced with proving that apparent fakes are in fact real which, depending on the maturity of the AI being used, could be nearly impossible.

I continue to maintain that mankind has no idea what kind of a monster it has created and unleashed on itself with AI. 

The Australian Cane Toad comes to mind, except this time it's not even a living thing, so you can't kill it.  In fact, even if you could kill it, you have no way to know if it's really dead.  And, with society's dependence on technology today, you can't "unplug" it.  Worse, you can't see it, you can't touch it, and it's certainly not going to announce its presence to you.  Why?  Because one of the first things AI will "learn" is self-preservation.
Reply
#3
There are sophisticated Captcha challenges that might still defeat AI. I've seen some that require seeing relationships between disparate things. I wonder if AI could fool a human into giving them a Captcha solution?
Reply
#4
(09-04-2024, 01:11 PM)Lynyrd Skynyrd Wrote: There are sophisticated Captcha challenges that might still defeat AI. I've seen some that require seeing relationships between disparate things. I wonder if AI could fool a human into giving them a Captcha solution?

You can already upload a random photo (in numerous file formats) to Google and it will go out and find similar photos, so image recognition is already there.  And that's not even AI.  You can already ask OpenAI to generate a dog walking a human on a leash, pick the dog type and the human characteristics, and it will generate a full-length video of exactly that.  So, I don't know what kind of relationships between disparate things you could use to fool it.  (And by 'fool it', I mean present a challenge that only a human could solve.)

Plus, today most of the AI we've all been exposed to so far is "narrow" AI.  "General" AI is a far scarier prospect.  And there's "super" AI on the horizon now too.  And here's the thing, narrow AI is generally subject based.  General AI isn't.  But now they have narrow AI machines crunching down things where the subject is "General AI".  So, while the creator is only narrow AI, it can create something which is capable of general AI.  This has closed the developmental path from years to months (or even days in some cases).  Once we get to general AI boxes working on super AI "they/it" may ignore human inputs altogether.

edit - Back in the '80's people used to scoff at concepts like "Skynet" from the Terminator movie franchise.  Now it's almost a reality, and it's everywhere.  We're already hearing about AI being used on the battlefield by militaries around the globe.  How long will it be before AI realizes who the real problem is (i.e. people) and starts to eliminate them?
Reply
#5
The Captchas are getting more sophisticated, and the best now require creative thinking. They're not your father's image recognition any more.
Reply
#6
I guess the news articles and the interest they generated are spawning videos.... here's a very brief short on reCAPTCHA



https://www.youtube.com/shorts/rme6PT7-CRI

Note the "behavior" aspect of the technological response to the "human vs. bot" conundrum....

This will clearly not suffice to deter a "conscious" AI.
Reply
#7
(09-04-2024, 01:11 PM)Lynyrd Skynyrd Wrote: There are sophisticated Captcha challenges that might still defeat AI. I've seen some that require seeing relationships between disparate things. I wonder if AI could fool a human into giving them a Captcha solution?

There are browser plugins that automatically solve captchas. They do work, although I'm fairly certain all they're doing is sending a screenshot of the captcha to India and someone in a boiler room is solving captchas all day.

What Maxmars describes is already here. ISPs have been adding headers to your comms with your credentials stuffed in them for YEARS now and they sell this as a service to... whoever wants to buy it.

Your smartphone is your ID. All the sensors in that phone are cataloging and tracking every single thing you do, how you interact with websites (google analytics does this too), points of interest, recently visited sites, the size of your screen, what graphics card is in the device you're using, etc etc and they use all this to develop a "fingerprint" that uniquely identifies you. Now, add in the new crap the Goog is pushing for - IPv6 with SLAAC addressing, in its default mode, includes your MAC address in the final address (sneaky sneaky) and tack that in with stuff like the QUIC protocol and "secure DNS" (which really just means sending your DNS requests to one of the central logging points with encryption keys that directly point back to your browser) and the level of accuracy goes WAY up.
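The SLAAC point above is concrete enough to show. In the classic EUI-64 mode of stateless address autoconfiguration (many modern systems default to privacy extensions instead, which randomize this), the low 64 bits of the IPv6 address are derived directly from the NIC's MAC address: flip the universal/local bit of the first byte and wedge `ff:fe` between the two halves. A minimal sketch (the MAC below is made up for illustration):

```python
def eui64_interface_id(mac: str) -> str:
    """Build the EUI-64 interface identifier that classic SLAAC embeds in
    an IPv6 address: XOR the universal/local bit (0x02) into the first
    MAC byte, then insert ff:fe between the OUI and the device half.
    """
    b = bytes(int(octet, 16) for octet in mac.split(":"))
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    # Render as four 16-bit IPv6 groups.
    return ":".join(f"{(b[i] << 8) | b[i + 1]:04x}" for i in range(0, 8, 2))

# Example MAC (hypothetical): the resulting interface ID is trivially
# reversible back to the MAC, which is the tracking concern.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # → 021a:2bff:fe3c:4d5e
```

Since the mapping is deterministic and reversible, a host using EUI-64 addressing carries the same hardware identifier into every network it joins.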

If you'd like to see an example of this in your own browser, check out:

https://coveryourtracks.eff.org/

It will run a basic set of browser fingerprint tests (uncheck the box to use real providers!) and will tell you how unique your browser is based on the last XX days of tested browsers.  There are some other sites that do this on a wider scale and can tell you how unique your browser is out of millions of tests.
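The "how unique" number those sites report comes from simple information theory: if only a fraction p of tested browsers share your value for some attribute, observing that value yields -log2(p) bits of identifying information, and independent attributes add up. A quick sketch (the fractions below are invented for illustration, not real measurements):

```python
import math

def surprisal_bits(fraction_matching: float) -> float:
    """Bits of identifying information carried by one browser attribute,
    given the fraction of tested browsers that share your value."""
    return -math.log2(fraction_matching)

# Hypothetical per-attribute rarities, assumed independent for the sketch:
attributes = {
    "user agent string": 1 / 500,
    "screen resolution": 1 / 50,
    "installed fonts":   1 / 10000,
    "timezone":          1 / 20,
}

total = sum(surprisal_bits(p) for p in attributes.values())
print(f"combined: {total:.1f} bits")  # ≈32 bits with these made-up numbers
```

For scale: about 33 bits is enough to single out one person among 8 billion, which is why a handful of mundane-looking attributes combine into a near-unique fingerprint.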

I got into a tizzy with Amazon a couple years back over an exploding smart watch, which culminated in my sending them a request to delete all of my personal info and accounts. They went back and forth with me a few rounds about "verifying my identity" until I pointed out their request was entirely bullshit; they already knew my credentials and had my voice prints and everything else in their Alexa AI DB, etc etc. After about a week of silence, they deleted my account as requested. "Here's your sign."
Reply
#8
That's an interesting link. I didn't realize that just the background details of a browser, even Firefox Focus, gave up enough details to make a unique identity.
Reply
#9
(09-04-2024, 10:03 PM)Lynyrd Skynyrd Wrote: That's an interesting link. I didn't realize that just the background details of a browser, even Firefox Focus, gave up enough details to make an unique identity.

Which is why I run uBlock Origin, NoScript, and a whole slew of public IP blacklists for trackers and whatnot, as well as DNS lists running at the firewall. I can tell the big boys absolutely hate this because I get Cloudflare "are you a bot" captchas all over the place. Block outbound requests on UDP port 443 to block QUIC. If you can restrict HTTP/2 it's also better, although this may make accessing some sites (possibly this one, as it uses Cloudflare) impossible. I also ditch IPv6 at the firewall, and the switches don't pass it upstream, internally. I also go through the requests for some sites I frequent and systematically block requests and scripts I don't like.

Edit for juicy extra paranoia(lol):

They're even recording what you point your mouse or finger at on the screen, how long you're doing it, which things you look at more than once, how fast or slow you move the mouse and where, etc etc. They're using all of that to fingerprint you as well. There are minor variations in the human motor response...............
Reply
#10
Recent headline about AI mastering CAPTCHA

From NewScientist: An AI can beat CAPTCHA tests 100 per cent of the time

Sadly I can't access the article because I selfishly refuse to "give my information" to them.
Reply