11-10-2024, 06:00 PM
This post was last modified 11-10-2024, 06:00 PM by Maxmars. Edited 1 time in total.
 
This is a particularly narrow OP.
It deals with something recently reported: that "Claude AI" is to be integrated, via a partnership between Amazon and Palantir, with some 'unspecified' US intelligence agencies...
Reportedly (unless I've misunderstood) the government seems to be "off-loading" data analysis to the private market... it appears to be limited to data classified no higher than "secret," but even so... misclassification happens often (whether through abuse, intent, or error), so I am, at the very least, concerned.
This is in no way 'healthy' behavior for our government... thank you, DHS.
From ArsTechnica: Claude AI to process secret government data through new Palantir deal
The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.
First of all, I have to point out that the "defense accreditation" they reference is a "deprecated" DoD process, meaning they are officially aware that it's not a 'superior' process...
Add to that the idea of massive data exchanges with a for-profit, privately held corporation with numerous foreign ties... and well... not warm and fuzzy.
This is a press-release-sourced report, so the details will be the product of someone writing their own PR material.
Part of the marketing of this grand plan was the 'field-test' of using it for an unspecified American insurance company... presumably with satisfactory results... but I'm not convinced that just because a set of algorithms can handle the step of underwriting, it should necessarily be granted access to classified US data, even if that data is only classified Secret and below.
Another part of the marketing push came in the form of a distinction Claude is said to feature... a value system of imposed "ethical" behavioral rules... referred to as "Constitutional AI." These guidelines were developed to restrain the AI from bouts of 'human mimicry' reflecting racism, crass rudeness, etc. The "values" were "adopted" from the model of "Apple's terms of service"... oh yeah... now I'm feeling safer!
Anyway... this will be one to watch for me....