Epstein Archive
 



The Anti-AI Thread
#31
(03-28-2025, 04:45 PM)Nerb Wrote: Interesting ball bearing sitting quietly on the table.

Dashen?

Grok apparently didn’t understand "teaball". I even told Grok it was a little round ball with holes in it that you put the tea inside. It still got it wrong. *shrug*
#32
(03-28-2025, 04:56 PM)Nerb Wrote: I did this a while ago with "Piclumen" image A.I.

I did this one with Grok:
[Image: IMG_4093.jpg]
#33
(03-28-2025, 05:29 PM)pianopraze Wrote: Grok apparently didn’t understand "teaball". I even told Grok it was a little round ball with holes in it that you put the tea inside. It still got it wrong. *shrug*

As I said in an earlier post about "prompts", it's important to consider the grammar. Perhaps "teaball" needs to be split into "tea ball" or "tea-ball", or maybe try ignoring a proper name and use something literal like "silver filigree tea infusing ball on a chain".

Treating A.I. like a child sometimes works when it might be confused by things we have understood for ages and take for granted.

Some of my prompts run to over a hundred words, and each part needs consideration as a separate element, added in a certain order to give that element priority. Lots of commas, with elements separated by full stops (periods).

WE ARE IN CONTROL.

I love the cool tiger btw. It's Grrrrrrrreat!



Wisdom knocks quietly, always listen carefully. And never hit "SEND" or "REPLY" without engaging brain first.
#34
(03-28-2025, 05:03 PM)chr0naut Wrote: Calling what we have AI is pure marketing. We are nowhere close to true AGI.

The thing is, LLMs will always give you an answer, even if there isn't a factual one to give.

They can sort data into something that looks meaningful, but looks can be deceptive, and as complexity rises, nonsense can be presented in a quite convincing manner.

It is our preconception, ignorance and gullibility that lends any credence to what an LLM spits out.

LLMs are useful, though, in that they can summarise their vast array of training data in what looks like an intelligent manner, but you have to realize that they can wrongly link data points and so present things that simply are not factual.

In no way does an LLM 'understand' the content it presents. Therefore it cannot verify the factuality of the links it assumes and builds its answers on. It simply presents what it was trained on, following the dictates of language structure.

The rulesets, which to a certain extent contain some real-world data, are many gigabytes in size, so they have their own inner complexity. Because they build in part on previously defined rules, which may themselves not be factual, it becomes very hard to filter out chains of faulty 'reasoning rules'. Hence LLMs 'hallucinate' (a kind way of saying that they produce BS). As the models become bigger and the rulesets are corrected and reviewed over time, hallucinations will reduce, but they just aren't intelligent.

Back in history, there were quite simple programs such as ELIZA that also looked like they were intelligently responding to input, but the idea that a couple of hundred lines of BASIC code could be 'intelligent' was pure delusion. There were even people who claimed romantic relationships with chatbots, LOL.

LLMs are probably part of the way to AGI, but people are treating them as if they were more than just the big algorithm they are.

Everything you say is exactly correct, yet none of it addresses the point of the human devolution that its use is causing, and threatens to continue to cause.
We have to call it AI just because that is what the idiots gaming the system in government and society are calling it.
I say idiots because those pushing for it in government and corporations want no one to make their own decisions, preferring everyone go 'with the program', damned if it ruins culture and destroys initiative and creativity. Sorry to be blunt.
It's the push for this kind of society, infected by this type of programming, that is the problem, not necessarily the specific 'AI' itself.
#35
(03-28-2025, 05:12 PM)UltraBudgie Wrote: This has been a long time coming. Ellison began decades ago, contracting with the US government to keep a database of information on each citizen. AT&T has been recording and archiving the contents of all phone calls and communications since the 80s. All that data is now pouring into aggregation and back-end AI models.

The vision is this: social interaction has been replaced by the web. The web is gradually being poisoned and made unusable, forcing a migration to personal AI agents. Personal AI agents will be "secure" against the end-user, providing prescribed services, interacting with private corporate systems, operating in their cloud. Network and data-sharing agreements between corporate entities, governments, and economic institutions allow back-end system coordination by upper-level models. Who or what controls those systems, even their very existence, will remain opaque.

Much of this is not new but merely a reimplementation of systems that have existed for centuries, based on paper and bureaucracy -- rings intersecting rings. What's new is the level of potential responsiveness, granularity, and integration.

The chaos of the modern world is overwhelming. We can finally get our lives under control! See, that was sarcasm haha.

This type of gross inhuman centralisation is also at the root of the problem, and AI is a tool they want to push to help run it. Little robot humans in little robot boxes. COVID lockdowns got a large segment of the population used to being a little robot veal in their homebox and not going anywhere, and they were also told not to think for themselves.
So now we have the AI doing the 'thinking', and since everything is a centralised, crappy, corporate plastic product, it can be gooped into their bodies Matrix-style, too.
On another thread I posted about the dangers of Musk's push (among others) for these huge obscene AI data centres, which the NSA et al already have.
#36
(03-28-2025, 06:31 PM)sahgwa Wrote: This type of gross inhuman centralisation is also at the root of the problem, and AI is a tool they want to push to help run it. Little robot humans in little robot boxes. COVID lockdowns got a large segment of the population used to being a little robot veal in their homebox and not going anywhere, and they were also told not to think for themselves.
So now we have the AI doing the 'thinking', and since everything is a centralised, crappy, corporate plastic product, it can be gooped into their bodies Matrix-style, too.
On another thread I posted about the dangers of Musk's push (among others) for these huge obscene AI data centres, which the NSA et al already have.

A little more of an explanation of what I meant.

Imagine a person who has listened to and memorized all your telephone calls over the past few decades. They can be consulted, for a fee, about what might appeal to you, what your political opinions might be, what value you might hold to society, etc. That's what AT&T (or Verizon, or whoever) will have for each of their users. Amazon will have a little deal with them, to help determine exactly how much you might be willing to pay for products you view. Uber might consult them to determine exactly how urgently you need to get from place to place.

Similarly, Google will have one of these little "digital consultants" for each person with a Gmail account, informed with the contents of all their emails, their search history, etc. This might seem like a slight violation of privacy, but remember that these are digital agents, not humans -- they can be instructed not to leak their source data, only generate conclusions drawn from it. For a price.

Now, as this ecosystem evolves, you can see how there will be many little "experts on you", each privy to a little piece of the huge pile of data you've agreed to allow to be collected about you during your lifetime. Wouldn't it be great if these experts could consult each other?

With the appropriate technology and service-level agreements, they can! Think of a board room of expert-systems, serving you 24 hours a day, each versed in a different aspect of your life, each tasked with optimizing one aspect of the way you interact with the world. Defined by your corporate relationships. This, by the way, is what the Metaverse is about -- a proxy-model of reality that effectively subsumes human interaction, ruled by such entities.

This "board room", one for each person, effectively acts as a bulwark between the people and their rulers. Who is this aggregate beholden to? Well, no need to worry your head about that; your personal AI agent will act as an interface between you and them. And they will act as an interface between it and the "State" that effects various controls and goals over it (whatever that might be, we'll never know).

So, no more worrying about backdoors or privacy leaks! It's not your concern any more. (haha sarcasm again see what I did there?)
#37
I'm going to double-post to make a point -- this all sounds rather dystopian. And yes, it is, somewhat. Where is "art"? Where is the unique human element?

Well, fear not. Humans have a long history of replacing anything that can be replaced with technology. Art evolves and moves on. Much of the hoo-hah hating against AI seems to be a form of sour grapes. Writers, actors, illustrators bemoaning the impending death of the way they've expressed their creativity.

But western civilization has been on that trajectory for a while now, hasn't it? Where is the new art for the ages? It all seems like soulless crap, frankly. But! The art is there; you'll see it sometimes, if you look. It's just not where you would expect. It's growing and changing, too, despite our cultural complacency, inertia, and materialism.

I really pissed off the owner of an advertising firm a few months ago. He was complaining about AI art, how it was getting mixed in with work product, making it tougher for their business and for artists to get the recognition for creativity they deserve. I laughed when he called their employees "artists", and said "where's the art?" They really should be called "content creators". If it can be replaced with a machine, acceptably, is it really art any more? Or is that just ego? He didn't like that one bit. I still think I'm right. Now, there will always be new modes of art in influencing people and playing on their emotions and aesthetic proclivities, and it seems like humanity is going to up its game in that regard. Whether we "like" it or not.

And besides, our masters will always thirst for novelty. The thought of humanity being reduced to identical automatons is a nightmare for them, too. What fun is to be had when there is no world left to conquer? They will ensure we still retain the capacity for entertainment. Even a pure AI would want that.
#38
(03-28-2025, 08:04 PM)UltraBudgie Wrote: A little more of an explanation of what I meant.
...

You outline a most dreadful possibility.

But we must remember that this is a "virtual" condition, only possible because we opt in to it (pardon the ironic pun).

We have allowed the tokenization of our thoughts and details... with no restraint or oversight... here we are.

Like the supreme yoke of money (also "virtual"), each individual's digital information exists as an unclaimed asset... shielded from actual assault behind technology... that someone owns.

Ahem.

I decided I don't like this scenario.
#39
(03-27-2025, 12:39 PM)sahgwa Wrote: It makes me nauseous, from a humanity-existential view, to contemplate the lack, the degradation of thought, the devolution occurring in regard to the push for AI. And no, I don’t think I am being hyperbolic.
 
I was daydreaming last night: remember all those warnings from AI designers and programmers about AI and how it was terrible for humanity? No sci-fi nonsense about them physically taking over the world; no, rather the removal of critical thinking, human initiative, and creativity, once and for all.
No one is talking about those warnings anymore.

And no, this is not some luddite, 'oh humans will never fly in airplanes' anti-progress screed. This is far deeper than that.
 
This is not just some new mode of transportation.
This is not just a new media.
This is a thing that pretends to think; this is an infection that infiltrates, or wants to, all modes, mediums, and objects of daily human life. It can’t be ‘tamed’, made ‘nonpartisan’, or made useful. It is inhuman and antihuman to the core.

All of the above is only from a mental, psychological, and spiritual standpoint.
I haven't even mentioned the physical and societal effects! Chinese CCP-style totalitarian social control, judging and corralling, on steroids.

AI is a tool. Don't you use it as such? I do.

Stop applying human traits to it, it is not human and never will be.

It is wondrous, yes, but will never rule us.

Lazy people will use it often, but there will always be philosophers whom others will want to poison because they stand at the outer limits of society's trends.
"The real trouble with reality is that there is no background music." Anonymous

Plato's Chariot Allegory
#40
(03-28-2025, 09:43 PM)quintessentone Wrote: AI is a tool. Don't you use it as such? I do.

Stop applying human traits to it, it is not human and never will be.

It is wondrous, yes, but will never rule us.

Lazy people will use it often, but there will always be philosophers whom others will want to poison because they stand at the outer limits of society's trends.

Agree.

But too many evil sociopaths exist who often end up controlling these tools.

An axe is a deadly weapon or farm tool depending on the wielder.

Judgment day is inevitable. Whether AI-controlled or controlled by a small group of sociopaths makes little difference to us, the people they wish to depopulate. We must resist it, but that will likely be too little, too late.