12-03-2024, 01:22 PM
There are certain names that can bring your session with an "AI" to an abrupt end.
When asked about these names, ChatGPT responds with "I'm unable to produce a response" or "There was an error generating a response" before terminating the chat session.
These names are said to be:

- Brian Hood
- Jonathan Turley
- Jonathan Zittrain
- David Faber
- Guido Scorza
- David Mayer
Since what the industry calls "AI" is a product, fully developed and programmed by human beings, this behavior implies a specifically directed restriction... which raises many questions.
From Ars Technica: Certain names make ChatGPT grind to a halt, and we know why
Subtitled: Filter resulting from subject of settled defamation lawsuit could cause trouble down the road.
The chat-breaking behavior occurs consistently when users mention these names in any context, and it results from a hard-coded filter that puts the brakes on the AI model's output before returning it to the user.
...
We first discovered that ChatGPT choked on the name "Brian Hood" in mid-2023 while writing about his defamation lawsuit. In that lawsuit, the Australian mayor threatened to sue OpenAI after discovering ChatGPT falsely claimed he had been imprisoned for bribery when, in fact, he was a whistleblower who had exposed corporate misconduct.
The case was ultimately resolved in April 2023 when OpenAI agreed to filter out the false statements within Hood's 28-day ultimatum. That is possibly when the first ChatGPT hard-coded name filter appeared.
As for Jonathan Turley, a George Washington University Law School professor and Fox News contributor, 404 Media notes that he wrote about ChatGPT's earlier mishandling of his name in April 2023. The model had fabricated false claims about him, including a non-existent sexual harassment scandal that cited a Washington Post article that never existed. Turley told 404 Media he has not filed lawsuits against OpenAI and said the company never contacted him about the issue.
Jonathan Zittrain, a Harvard Law School professor who studies Internet governance, recently published an article in The Atlantic about AI regulation and ChatGPT. While both professors' work appears in citations within The New York Times' copyright lawsuit against OpenAI, tests with other cited authors' names did not trigger similar errors. We also tested "Mark Walters," another person who filed a defamation suit against OpenAI in 2023, but it did not stop the chatbot's output.
The "David Mayer" block in particular (now resolved) presents additional questions, first posed on Reddit on November 26, as multiple people share this name. Reddit users speculated about connections to David Mayer de Rothschild, though no evidence supports these theories.
This is an interesting development, in that any theoretical "AI" should be able to separate fact from fiction.
As the media industry seeks to use AI as part of its operating practices... it is clear that "hard-coding" the output is not only possible, but already happening.
How soon before the embedded 'establishment' decides what you can and cannot use AI for?