
If anything, it will always be human effort with "AI" assist.
LLMs don't reason. LLMs aren't "AI" no matter how you package the PR and the pro-investment hype/show.
Programmers will develop subtle and effective algorithms to isolate the words they want to use in some prosecutorial narrative... and the media will proclaim "AI" did it, just so they can pro-and-con us with a 'show' about an unachieved tech.
In theory, AI is achievable... so far they've shown us no AI... at all.
Only an LLM that apparently masters potentially useful language synthesis.
(which should be applauded...)
But many reports seem to foster the idea that AI is here... and evoke terrible imagery... mostly with intent ironically misattributed from the creators to the creation.
And like an old crow, I keep cawing...
Anything you see in media, particularly of "AI doing this now," must be understood as talking about a dead-mind puppet LLM, which cannot bear the 'responsibility' for the programming set upon it.
Which is to say: whatever is done is solely the will of the 'owner.'
An LLM doesn't decide... it calculates... and if we can't discern the difference, we're already lost.
AI, by definition, can decide... or am I wrong there?