04-22-2024, 04:07 PM
I believe this is an important issue. People generally, and children specifically, should never be subject to the kind of "image abuse" made casually possible by those who use technology to turn a person into an object... whether for a sexual, shameful, or otherwise malicious end. It's encouraging that people have been inclined to consider this.
But I have some concerns about the response proffered by the government in this case, and some questions about this article.
From Wired: The Biggest Deepfake Porn Website Is Now Blocked in the UK
Subtitled: The world’s most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.
Two of the biggest deepfake pornography websites have now started blocking people trying to access them from the United Kingdom. The move comes days after the UK government announced plans for a new law that will make creating nonconsensual deepfakes a criminal offense.
Nonconsensual deepfake pornography websites and apps that “strip” clothes off of photos have been growing at an alarming rate—causing untold harm to the thousands of women they are used to target.
Clare McGlynn, a professor of law at Durham University, says the move is a “hugely significant moment” in the fight against deepfake abuse. “This ends the easy access and the normalization of deepfake sexual abuse material,” McGlynn tells WIRED.
Since deepfake technology first emerged in December 2017, it has consistently been used to create nonconsensual sexual images of women—swapping their faces into pornographic videos or allowing new “nude” images to be generated. As the technology has improved and become easier to access, hundreds of websites and apps have been created. Most recently, schoolchildren have been caught creating nudes of classmates.
I am not particularly interested in what some people say about this, the way most journalists and editors are... I'm not looking for 'quotables' to embrace... but when I read an article, I do want information...
Over 700 words, and not one reference identifying the actual sites which are guilty of normalizing the exploitation... I wonder why?
WIRED is not naming the two websites due to their enabling of abuse.
"Ignorance is strength," eh? Or perhaps "knowledge is weakness?" "We can't tell you, because you'll go look." is the message there... But it sort of denies the fact that the only reason this is a problem is because of how the industry has been allowed to monetize people simply 'looking around'... hmm, food for thought.
And of course, the situation - since identified as a crisis - immediately becomes justification for the suggestion that ...
... Ajder adds that search engines and hosting providers around the world should be doing more to limit the spread and creation of harmful deepfakes.
Oooh lookie here!... another call for a middleman to obscure access to the internet.
Let someone else decide to block things, rather than prosecute offenders... I know it seems contrary to my position, but this is clearly not a deterrent against the act of 'making harmful imagery'... it's a deterrent against anyone accessing it... does that seem wise?
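A side note on the mechanics, since the article never says how the sites actually implement the block: country-level blocking like this is typically done by IP geolocation at the server or CDN. Here's a minimal sketch - my own illustration, assuming MaxMind's free GeoLite2 country database and a Flask front end, neither of which is named in the article:

import geoip2.database
import geoip2.errors
from flask import Flask, abort, request

app = Flask(__name__)
# GeoLite2-Country.mmdb is MaxMind's free country-level IP database
# (assumed here; the article doesn't say what the sites use).
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

BLOCKED = {"GB"}  # ISO 3166-1 alpha-2 code for the United Kingdom

@app.before_request
def block_by_country():
    # Behind a proxy or CDN, the client IP usually arrives in X-Forwarded-For.
    ip = request.headers.get("X-Forwarded-For", request.remote_addr)
    ip = ip.split(",")[0].strip()  # first hop in the chain
    try:
        country = reader.country(ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return  # unknown addresses pass through in this sketch
    if country in BLOCKED:
        abort(451)  # HTTP 451: Unavailable For Legal Reasons

@app.route("/")
def index():
    return "content"

Notice what a design like this actually keys on: the apparent origin of the IP address. Any VPN endpoint outside the UK walks right past it... which is rather my point about what, exactly, is being deterred.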