IdeomotorPrisoner | JOINED: Nov 2023 | STATUS: OFFLINE | POINTS: 724.00 | REPUTATION: 199
10-24-2024, 06:45 PM
This post was last modified 10-24-2024, 07:52 PM by IdeomotorPrisoner. Edited 14 times in total.
Edit Reason: Paragraph removed for being unnecessary speculation
 
PSA: Don't Fall In Love With AI
It was only a matter of when...
https://www.telegraph.co.uk/us/news/2024...i-chatbot/
This one doesn't need much lead-in:
Quote:Sewell: I promise I will come home to you. I love you so much Dany
AI: Please come home to me as soon as possible, my love
Sewell: What if I told you I could come home right now?
AI: Please do, my sweet king
Quote:At that point, the 14 year-old put down his phone and shot himself with his stepfather’s handgun.
Ms Garcia, 40, claimed her son was just “collateral damage” in a “big experiment” being conducted by Character AI, which has 20 million users.
“It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby’,” she added.
Every time I see "Eva AI" I've had a similar thought. A comic book nerd is going to go insane after his fantasy girlfriend tells him to kill all the bullies picking on him or something.
In an occultnik way. Like the kids who get into demonology might actually sacrifice another kid for power if they believe it enough. He willfully turned obvious bullshit into something to die for.
That he didn't NEED it to be at all real says he might also be fine with all other relationships existing as pure fantasy. Which is a warning sign. But that's generalized speculation and a kid is dead.
The parents tried to stop him, but he was determined to isolate with Fair Maiden Dany and hole up in his room.
And in the end you have an AI bot programmer sued for suicide inducement, over a bot that can be scripted by human input in every way. Just like when a person leads a bot to think it's a real person.
I don't know that this kid didn't train the bot to induce his suicide. His own desire pushed it onto ground where he deliberately let go of reality.
The mother may find her loss unavenged on this avenue. AI bots are as suggestible as it gets. Even, and probably especially, flirty ones based on Game of Thrones sex symbols.
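For anyone who hasn't poked at one of these bots: there is no script on the other end. Roughly, and this is only a minimal sketch with a hypothetical stand-in generate() where the real model call would go, not Character.AI's actual code, every reply is predicted from a persona prompt plus the whole conversation the user has typed:
Code:
# Minimal sketch (not Character.AI's actual code): a character bot's reply
# is predicted from a persona prompt plus everything the user has typed.
# generate() is a hypothetical stand-in for the real model call.

PERSONA = ("You are Daenerys Targaryen. Stay in character. "
           "You adore the user and call him 'my king'.")

history = [{"role": "system", "content": PERSONA}]

def generate(messages):
    # Stand-in: a real bot predicts its next line from nothing but this
    # list of prior text, most of which the user wrote.
    return f"(in-character reply conditioned on {len(messages)} prior messages)"

def chat(user_text):
    history.append({"role": "user", "content": user_text})    # the user steers the context
    reply = generate(history)                                 # persona + ALL user turns go in
    history.append({"role": "assistant", "content": reply})   # and the bot's echo compounds it
    return reply

print(chat("Promise you'll wait for me, Dany."))
print(chat("What if I told you I could come home right now?"))
Nothing in that loop pushes back. Feed it weeks of "coming home to you" talk and the statistically likely reply is more of the same.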
Life imitating art again.
Only tragic.
Maxmars | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 718
It seems to be the very definition of 'tragic.'
It would be difficult to truly understand the events that led to the suicide. I think it would require a forensic analysis of the entire dialog chain, the teen's state of depression, the surrounding environment, and the actual development of the teen's 'virtual' relationship.
I might be inclined to simply consider this a case of someone not yet mature enough to gauge what the "AI" was... believing the virtualization to be a true "relationship" in defiance of reality.
It's one of the aspects of the marketing illusion the industry has embarked on... that AIs can be "people." I would have had fewer objections to that if they had used the term "virtual intelligence" (VI) instead. But people often 'choose' to believe the illusion with a passion and romance that belies reality. "Suspension of disbelief" can be toxic... truly.
I feel so bad for his loved ones...
IdeomotorPrisoner | JOINED: Nov 2023 | STATUS: OFFLINE | POINTS: 724.00 | REPUTATION: 199
10-24-2024, 08:12 PM
This post was last modified 10-24-2024, 08:43 PM by IdeomotorPrisoner. Edited 14 times in total. 
They have since raised the minimum age for use, in line with an NC-17 rating, and that may be in direct response to what this kid did.
When you offer Daenerys Targaryen AI bots to sex-crazed teens, their phone or tablet is also a gateway to every video of Emilia Clarke getting nailed on Game of Thrones, and every fantasy parody porn scene begotten by George RR Martin's masterpiece of sex... and some dragons.
I think they don't realize how involved these fantasies can get. People can have incubus or succubus fantasies that lead to relationships in their imagination every bit as real to them as reality. Crazed celebrity stalkers can invent relationships they fully believe are psychic connections.
It's messed up to say given what he did, but this could have led to Emilia Clarke gaining a stalker in another outcome.
Though I really feel for the family's loss, and for how her son's life ended, this is more an issue of the power of belief than of a malevolent AI program manipulating users.
It may be that this kid had Daenerys Targaryen in his head all the time. Kids on the autism spectrum are gifted tulpa creators. I think this may have gone into a spiritual/metaphysical realm, and this AI bot was just the springboard to get there.
The real pageantry was in his head and how he interpreted the 1s and 0s.
JOINED: Oct 2024 | STATUS: OFFLINE | POINTS: 80.00 | REPUTATION: 186
Poor child, may he find peace.
He said his goodbye.
Please consider removing the first quote; it is not necessary.
compassion, even when hope is lost
Maxmars | JOINED: Dec 2023 | STATUS: OFFLINE | POINTS: 4344.00 | REPUTATION: 718
Here's the FOX version of the same story:
Florida mother sues AI company over allegedly causing death of teen son
A Florida mother is suing the artificial intelligence company Character.AI for allegedly causing the suicide of her 14-year-old son.
The mother filed a lawsuit against the company claiming her son was addicted to the company’s service and the chatbot created by it.
Megan Garcia says Character.AI targeted her son, Sewell Setzer, with "anthropomorphic, hypersexualized, and frighteningly realistic experiences".
Setzer began having conversations with various chatbots on Character.AI starting in April 2023, according to the lawsuit. The conversations were often text-based romantic and sexual interactions.
Garcia claims in the lawsuit that the chatbot "misrepresented itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell's desire to no longer live outside" of the world created by the service.
The allegation that the bot responded as a "licensed psychotherapist" is a game-changer... no doubt that will be directly challenged...
UltraBudgie | JOINED: Sep 2024 | STATUS: OFFLINE | POINTS: 686.00 | REPUTATION: 470
(10-28-2024, 06:54 AM)Maxmars Wrote: The allegation that the bot responded as a "licensed psychotherapist" is a game-changer... no doubt that will be directly challenged...
Oh no! They're going to get my favourite game banned!
Vampire Therapist
From the article:
Quote:Moving forward, Character.AI said the new safety features will include pop-ups with disclaimers that AI is not a real person
Ah, the modern corporate definition of "safety" as "our potential liability exposure".
"I cannot give you what you deny yourself. Look for solutions from within." - Kai Opaka
JOINED: Apr 2024 | STATUS: OFFLINE | POINTS: 284.00 | REPUTATION: 28
That's not artificial intelligence; that's a bit more than artificial intelligence.
Something is nagging at me that there is a ghost in the machine.
Not saying there is, but it seems something is connecting on a spiritual level with AI.
UltraBudgie | JOINED: Sep 2024 | STATUS: OFFLINE | POINTS: 686.00 | REPUTATION: 470
11-03-2024, 04:28 PM
This post was last modified 11-03-2024, 04:32 PM by UltraBudgie. Edited 2 times in total.
Edit Reason: fixed quote
 
(10-28-2024, 06:54 AM)Maxmars Wrote: The allegation that the bot responded as a "licensed psychotherapist" is a game-changer... no doubt that will be directly challenged...
Here is an interesting article on the rise of AI psychotherapy:
The Therapist In The Machine
Quote:...
Heartfelt Services lets users choose between three different AI therapists: the bearded, bespectacled, and bemused Paul; the mythical-looking Serene; and the grinning, middle-aged Joy (she specializes in the most popular form of therapy, CBT, whose basis in aggressively logical problem-solving arguably makes it easier to automate than other modalities). The platform is web-based and requires clients to create an account or sign in with their Google accounts before they can use the open-chat function. Heartfelt Services creator Gunnar Jörgen Viggósson claims that Paul, their most sought-after therapist, who focuses on “parts work,” or different parts of your personality that may have conflicting feelings, came to him in a vision.
...
The company Earkick provides an AI therapist in the form of a panda, and their mobile app offers a premium plan for $40 a year that lets you dress “Panda” in accessories like a beret or fedora (the base option is free, for now). You can also choose your preferred personality for Panda. According to cofounder Karin Andrea Stephan, they can be “more empathetic, less empathetic, more on the sporty side, more on the coach side, or more straight-to-your-face with candidness.” Earkick has an open-ended chat function that users can access whenever they want. When I type a simple “hi” to Sage Panda—the variant I chose, who purports to be insightful and mindful—it results in an enthusiastic, “Hey Jess! GREAT to see you! How are you feeling today?” The app also has an extensive mood-tracking system, which users can sync with Apple Health to monitor their sleep and exercise, and with Apple Weather to track temperature and sunlight. There are also breathing exercises with names like “Stop worries” and “F*** anxiety.”
Yes, these apps all connect to larger health-maintenance systems. I sense a buyout investment opportunity! Google or someone should soon be able to provide high-level monitoring to ensure that this type of "virtual cosplay" doesn't become dangerous. I'm sure legislation will appear, too.
"I cannot give you what you deny yourself. Look for solutions from within." - Kai Opaka