AI models choose violence?
#31
(03-02-2024, 03:32 PM)ArMaP Wrote: Contrary to what many people think, AI is not a new thing; the first time I looked into it (and made small programs in an AI-specific programming language) was at the beginning of the 1980s, 40 years ago, on a ZX Spectrum.

I remember a program from the early 1980s called "Eliza." It was supposed to psychoanalyze the user, and it incorporated many aspects of today's "AI." Had a ball figuring out how to "break" it.
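ELIZA "psychoanalyzed" the user with simple keyword matching and phrase reflection rather than any real understanding. A toy sketch of that technique follows; the rules and wording here are illustrative, not Weizenbaum's originals:

```python
import re
import random

# A minimal ELIZA-style responder: match keywords with regular
# expressions and reflect fragments of the user's input back as questions.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
# Canned replies for input that matches no rule.
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(text: str) -> str:
    """Return a reflected question for the first matching rule."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(FALLBACKS)

print(respond("I am tired of computers"))
```

Because the program only echoes fragments of the input back, it is easy to "break": feed it a sentence where the captured fragment makes no grammatical sense when reflected, and the illusion collapses.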

AI is certainly not new.

Doug
#32
(03-01-2024, 02:03 PM)Kenzo Wrote: Who programs the programmers who program the AI?

I don't know who programs the programmers or calls the shots here, but at some point government will have to step in to regulate it, that's for sure.
"The real trouble with reality is that there is no background music." Anonymous

Plato's Chariot Allegory
#33
(03-05-2024, 03:36 PM)quintessentone Wrote: I don't know who programs the programmers or calls the shots here, but at some point government will have to step in to regulate it, that's for sure.

But that would be dangerous, wouldn't it?  All linearity aside, "governments" are actually collections of powerbases (and ultimately worse, "people,") some presumably honorable, some demonstrably not.

For example, an AI that is "taught" that one particular ideology is beneficial and the other anathema will always skew its outputs towards "the creators' acceptability." In other words, agencies that create something which they want to sell as "sentient life" will likely always support that agency's agenda.

We don't agree enough with each other to create a "super mind." We already see so-called AI being "used" to influence.
#34
(03-05-2024, 04:12 PM)Maxmars Wrote: But that would be dangerous, wouldn't it?  All linearity aside, "governments" are actually collections of powerbases (and ultimately worse, "people,") some presumably honorable, some demonstrably not.

For example, an AI that is "taught" that one particular ideology is beneficial and the other anathema will always skew its outputs towards "the creators' acceptability." In other words, agencies that create something which they want to sell as "sentient life" will likely always support that agency's agenda.

We don't agree enough with each other to create a "super mind." We already see so-called AI being "used" to influence.
UNESCO (United Nations Educational, Scientific and Cultural Organization) is on top of it, which IMO is remarkable considering how fast AI has taken off.

"Central to the Recommendation are four core values which lay the foundations for AI systems that work for the good of humanity, individuals, societies and the environment:

1. Human rights and human dignity. Respect, protection and promotion of human rights and fundamental freedoms and human dignity.

2. Living in peaceful, just and interconnected societies.

3. Ensuring diversity and inclusiveness.

4. Environment and ecosystem flourishing.

https://www.unesco.org/en/artificial-int...?hub=99488

Ten core principles lay out a human-rights centred approach to the Ethics of AI.

1. Proportionality and Do No Harm. The use of AI systems must not go beyond what is necessary to achieve a legitimate aim. Risk assessment should be used to prevent harms which may result from such uses.

2. Safety and Security. Unwanted harms (safety risks) as well as vulnerabilities to attack (security risks) should be avoided and addressed by AI actors.

3. Right to Privacy and Data Protection. Privacy must be protected and promoted throughout the AI lifecycle. Adequate data protection frameworks should also be established.

4. Multi-stakeholder and Adaptive Governance & Collaboration. International law & national sovereignty must be respected in the use of data. Additionally, participation of diverse stakeholders is necessary for inclusive approaches to AI governance.

5. Responsibility and Accountability. AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.

6. Transparency and Explainability. The ethical deployment of AI systems depends on their transparency & explainability (T&E). The level of T&E should be appropriate to the context, as there may be tensions between T&E and other principles such as privacy, safety and security.

7. Human Oversight and Determination. Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.

8. Sustainability. AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals including those set out in the UN’s Sustainable Development Goals.

9. Awareness & Literacy. Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, media & information literacy.

10. Fairness and Non-Discrimination. AI actors should promote social justice, fairness, and non-discrimination while taking an inclusive approach to ensure AI’s benefits are accessible to all."
https://www.unesco.org/en/artificial-int...?hub=99488

Here's an organization just for AI ethics.

https://www.aiethicist.org/

 and...

Women in AI ethics.

https://womeninaiethics.org/about-us/

As for AI sentience, I'm not feeling it.
"The real trouble with reality is that there is no background music." Anonymous

Plato's Chariot Allegory
#35
(03-05-2024, 10:51 AM)TheRedneck Wrote: I remember a program from the early 1980s called "Eliza." It was supposed to psychoanalyze the user, and it incorporated many aspects of today's "AI." Had a ball figuring out how to "break" it.

AI is certainly not new.

Doug

"Eliza" is older than that, it was created in the late 1960s.
#36
(03-06-2024, 03:09 PM)ArMaP Wrote: "Eliza" is older than that; it was created in the mid-1960s.

I can't argue that, but personal computers weren't available to the general public until the late 1970s... and even then, didn't actually become common until the 1990s when Windows 3.1 was released. Before the advent of the TRS-80/Commodore PET/Atari, only large companies used computers and they were pretty limited in application. Even then, the early PCs didn't catch on until software became more standardized under the Windows/IBM banner. I doubt Eliza had much of an audience back then.

TheRedneck
#37
(03-10-2024, 02:15 AM)TheRedneck Wrote: I can't argue that, but personal computers weren't available to the general public until the late 1970s... and even then, didn't actually become common until the 1990s when Windows 3.1 was released. Before the advent of the TRS-80/Commodore PET/Atari, only large companies used computers and they were pretty limited in application. Even then, the early PCs didn't catch on until software became more standardized under the Windows/IBM banner. I doubt Eliza had much of an audience back then.

TheRedneck

My point was to highlight that, contrary to what many people think, AI has been in development for a long time, even if the general public didn't have access to it.
#38
(03-10-2024, 07:14 AM)ArMaP Wrote: My point was to highlight that, contrary to what many people think, AI has been in development for a long time, even if the general public didn't have access to it.

Very true, I think.

We even imagined "artificial beings" back in Homeric Times (Hephaestus' "Golden Maidens.")
 

“In their hearts there is intelligence, and they have voice and vigor, and from the immortal gods they have learned skills. These bustled about supporting their master.”


So it's no wonder that people have been "thinking about" AI, especially the logicians and system-designing types. In more "modern times," even by the 1920s (with the Ising model), the mathematical groundwork was being laid.

Now I would be remiss if I didn't wonder about how far we must have come since then.  That's another question entirely.  What passes for AI in the media is not "appropriate" to the reality... but they are relentless... almost like they're "selling" something, no?
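For the curious, the Ising model mentioned above boils down to a simple energy function over spins that are either up (+1) or down (-1); the link drawn here to later neural-network models such as Hopfield networks is a common observation, not a claim from this thread. A minimal sketch, assuming a 1-D chain with uniform coupling and free boundaries:

```python
import itertools

# Energy of a 1-D Ising chain with uniform coupling J:
#   E(s) = -J * sum_i s_i * s_{i+1},  with each spin s_i in {-1, +1}.
def ising_energy(spins, J=1.0):
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

# Brute-force search over all 4-spin configurations: the aligned
# configurations (all +1 or all -1) minimize the energy.  The same
# energy-minimization idea reappears decades later in Hopfield networks.
best = min(itertools.product([-1, 1], repeat=4), key=ising_energy)
print(best, ising_energy(best))
```

The point of the sketch is only that "intelligence-adjacent" mathematics (systems settling into low-energy states) predates digital computers entirely.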
#39
(03-10-2024, 04:56 PM)Maxmars Wrote: Very true, I think.

We even imagined "artificial beings" back in Homeric Times (Hephaestus' "Golden Maidens.")
 

“In their hearts there is intelligence, and they have voice and vigor, and from the immortal gods they have learned skills. These bustled about supporting their master.”


So it's no wonder that people have been "thinking about" AI, especially the logicians and system-designing types. In more "modern times," even by the 1920s (with the Ising model), the mathematical groundwork was being laid.

Now I would be remiss if I didn't wonder about how far we must have come since then.  That's another question entirely.  What passes for AI in the media is not "appropriate" to the reality... but they are relentless... almost like they're "selling" something, no?

Humans are curious, creative critters.

What is more worthy of curiosity than intelligence? What is that spark that allows us to imagine, analyze, and predict? We can comprehend mathematics and explain many of the workings of the world around us in great technical detail, yet at the same time we struggle to comprehend all of it. Then we see animals, often acting solely on instinct, and realize that the faculty that allows us to comprehend their actions must be incomprehensible to them. So we know intelligence has a range, and we know that despite our intellectual superiority on this planet, we are not as intelligent as we could be.

Indeed, in my early years, things which seemed so simple to me seemed to be beyond the comprehension of others. Things that I could not fathom seemed to be simple, almost instinctual, to others.

Is it any wonder that we then imagine beings that are superior to us intellectually? And, seeing the irrational acts of humanity all around us, is it any wonder that we would yearn to learn from those intellectually superior to us? This is the basis of religion, belief in alien visitation, universal karma, etc. With the advent of the information age and the advances in computing, it seems to me to be completely reasonable for the uneducated masses to imagine machines that can help teach us how to be a better society. After all, we can create/control machines whereas we cannot create/control a god or visiting aliens.

And that's another aspect of human nature: we want to control our environment.

The media knows this. The media uses this tendency to control us, following the orders of the politicians and the elite in society. They, in turn, know that people will trust a machine that is regarded as infallible more than another person. So AI is used as a means to that control. After all, our computers can help us with anything from finding the best gas prices to learning a hobby to finding the permeability of free space. If it is so omniscient with information, why can it not also be intelligent? Is not intelligence akin to unlimited knowledge?

It's not... but many have come to believe it is.

Therein lies the true danger of AI. As a servant, it is useful for helping us collate vast amounts of information, but as a master it is unfeeling, uncaring, and quite likely the greatest existential threat mankind has ever faced... more so than even religion, because religion requires faith. Machines obviously exist... no faith required.

TheRedneck
#40
Regarding this "AI media craziness," I saw two different news articles: one saying that IT-related sales are expected to increase by 10% this year (mostly from hardware sales), and the other saying that only 4% of the banks in a survey are thinking about using some kind of AI tools.