

Is AI ethical enough to handle classified material?
#1
This is a particularly narrow OP.

It deals with something recently reported: that "Claude AI" is to be integrated, via a partnership between Amazon and Palantir, with some unspecified US intelligence agencies...

Reportedly (unless I've misunderstood), the government seems to be "off-loading" data analysis to the private market... it appears to be limited to data classified no higher than "secret," but even so... misclassification happens often (whether through abuse, intent, or error), so I am, at the very least, concerned.

This is in no way a 'healthy' behavior for our government... thank you DHS.

From ArsTechnica: Claude AI to process secret government data through new Palantir deal

The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.

First of all, I have to point out that the "defense accreditation" they reference is a "deprecated" DoD process, meaning they are officially aware that it's not a 'superior' process...
Add to that the idea of massive data exchanges with a for-profit, privately held corporation with numerous foreign ties... and well... not warm and fuzzy.

This is a press-release-sourced report, so the details will be the product of someone writing their own PR material.

Part of the marketing of this grand plan was a 'field-test' in which it was used by an unspecified American insurance company... presumably with satisfactory results... but I'm not convinced that, just because an algorithm set can handle the underwriting step, it should necessarily be granted access to classified US data, even if only data classified Secret and below.

Another part of the marketing push came in the form of a distinction that Claude is said to feature... a value system of imposed "ethical" behavioral rules, referred to as "Constitutional AI."  These guidelines were developed to restrain the AI from bouts of 'human mimicry' reflecting racism, crass rudeness, etc.  The "values" were "adopted" from the model of "Apple's terms of service"... oh yeah... now I'm feeling safer!  Rolleyes

Anyway... this will be one to watch for me....
#2
(11-10-2024, 06:00 PM)Maxmars Wrote: Add to that the idea of massive data exchanges with a for-profit, privately held corporation with numerous foreign ties... and well... not warm and fuzzy.

That concern is not warranted from the article presented. The Palantir deal allows the use of certain LLM models within an existing secure data environment, not data sharing with the LLM creators or other private corporations.
"I cannot give you what you deny yourself. Look for solutions from within." - Kai Opaka
#3
I've never known a multi-million-dollar corporation to NOT have international ties.  "Investment" is a tentacled monster that transcends easy detection.

But your point is correct... the article doesn't address that specter.  So maybe my apprehension is misplaced.  Thumbup

I get "hinky" when corporate press releases use words like "ethical."
#4
Illuminati installing AI in DoD, this is going to be good.

This is a Palantir

[Image: sSfSXTb.jpeg]
compassion, even when hope is lost
#5
The plot thickens

https://medium.com/@nabeelqu/reflections...433cf95439
"For long-time employees and alumni of the company, this feels deeply weird. During the 2016–2020 era especially, telling people you worked at Palantir was unpopular. The company was seen as spy tech, NSA surveillance, or worse. There were regular protests outside the office. Even among people who didn’t have a problem with it morally, the company was dismissed as a consulting company masquerading as software, or, at best, a sophisticated form of talent arbitrage."

https://www.digitalhealth.net/2024/06/ca...nfed-expo/
compassion, even when hope is lost
#6
(11-12-2024, 02:53 AM)Sirius Wrote: The plot thickens

https://medium.com/@nabeelqu/reflections...433cf95439
"For long-time employees and alumni of the company, this feels deeply weird. During the 2016–2020 era especially, telling people you worked at Palantir was unpopular. The company was seen as spy tech, NSA surveillance, or worse. There were regular protests outside the office. Even among people who didn’t have a problem with it morally, the company was dismissed as a consulting company masquerading as software, or, at best, a sophisticated form of talent arbitrage."

https://www.digitalhealth.net/2024/06/ca...nfed-expo/

You don't have to dig too deep to find eye-raising Palantir stories.

Quote: Palantir, the controversial software company with ties to intelligence agencies, is turning to the agency world as part of its continued efforts to grow its commercial business.

The company, which was co-founded by venture capitalist Peter Thiel, has pitched advertising agencies on utilizing its year-old AI platform AIP, according to two executives from different agencies who attended pitches.

In a pitch deck shared with Marketing Brew, Palantir presents “wide-ranging use cases and applications” of AIP for tasks including “pricing and inventory planning,” “programmatic sales,” and “campaign optimization.”

https://www.marketingbrew.com/stories/20...technology

I've heard fun anecdotes, such as Thiel using Palantir for lobbying: microtargeting billboard ads in a single train station, or buying a full-page ad in one exact spot in a newspaper, just to get the attention of a single congressman on his daily commute. Lol
"I cannot give you what you deny yourself. Look for solutions from within." - Kai Opaka