AI Content Humanizer
#1
Houston, we have a problem.


While AI chatbots such as ChatGPT and AI content detectors are already old news, the tools that can create undetectable AI content are already here. These tools (not sure what the official name is) are "humanizers"... they rewrite content to make it read more human. So this cat and mouse game has gone to the next level. Both the AI content creators and the detectors/humanizers are also doing it as a business... for money.

I see troubled waters ahead. We already detected multiple pieces of AI content on the other site, and I had the impression we could just detect them and get them busted... but not anymore. The new tools can make it much harder to detect.

Imagine, if you will, a discussion board where someone's content or reply might not be reliably verifiable anymore as 100% authentic human writing.  Puzzled

[Video: https://youtu.be/9DXA1Q_7kTE?feature=shared]


UNDETECTABLE AI / Advanced AI Detector and Humanizer


Chat GPT Detectors


The top seven AI detectors of 2024 (free and paid)



How to Convert AI Text to Human Text
Quote:  
Perplexity
Perplexity measures how random or unpredictable the word choice of a text is. For each word or phrase, is the word or phrase that follows logical? Or is it… perplexing?
Language Models try to pick the most likely words. They very rarely perplex you with their word choice. It’s strikingly… standard. AI-generated texts usually have very low perplexity scores.
Humans are sometimes predictable. But often times they are not. Humans tend to take more risks with their word choice. They also tend to be more original. And more erroneous. Humans are more likely to misuse words, have typos, or put words together that aren’t necessarily logical. 
Burstiness
Burstiness measures the randomness in sentence structure, sentence length, and sentence type. While perplexity measures randomness at the word level, burstiness measures randomness at the structural level. AI-generated text tends to be very evenly structured. Sentences typically have similar lengths. AI-generated texts usually have low burstiness scores.
If you’ve noticed AI writing is “monotonous”, it’s probably because it has low burstiness. The sentences just sorta… blend together. Human writing has more bumps, jumps, and excitement.
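
To make those two measures concrete, here is a minimal sketch in Python. This is not how any real detector works: the "perplexity" below comes from a toy unigram model with add-one smoothing (actual detectors score text with large language models), and "burstiness" is simply the spread of sentence lengths. Every name in it is made up for illustration.

Code:
import math
import re
from collections import Counter

def unigram_perplexity(text: str, reference: str) -> float:
    """Perplexity of `text` under a unigram model estimated from `reference`.

    Add-one (Laplace) smoothing keeps unseen words from getting zero probability.
    Lower values mean more predictable word choice.
    """
    ref_tokens = re.findall(r"[a-z']+", reference.lower())
    counts = Counter(ref_tokens)
    vocab = len(counts) + 1                      # +1 bucket for unseen words
    total = len(ref_tokens)

    tokens = re.findall(r"[a-z']+", text.lower())
    log_prob = 0.0
    for tok in tokens:
        p = (counts.get(tok, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(tokens), 1))

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (std / mean).

    Evenly sized sentences give a value near 0; a mix of short and long
    sentences gives a higher value.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(variance) / mean

if __name__ == "__main__":
    reference = "the cat sat on the mat . the dog sat on the rug ."
    sample = "The cat sat on the mat. Unexpectedly, a heron commandeered the rug!"
    print(f"toy perplexity: {unigram_perplexity(sample, reference):.1f}")
    print(f"burstiness:     {burstiness(sample):.2f}")

The point is only that both signals are statistical, which is why humanizers can game them by injecting irregular word choice and uneven sentence lengths.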
#2
I haven't used any of this.

Other video makers are giving examples of human content being falsely flagged as AI.

I'm just going to hop on a sailboat and sail away into obscurity until our new AI pals get around to killing us all. Sometimes getting old and useless is a good thing. I want nothing to do with AI.
#3
(06-23-2024, 04:07 AM)pianopraze Wrote: I haven't used any of this.

Other video makers are giving examples of human content being falsely flagged as AI.

I'm just going to hop on a sailboat and sail away into obscurity until our new AI pals get around to killing us all. Sometimes getting old and useless is a good thing. I want nothing to do with AI.


I share a similar feeling. Communication and socializing will get broken eventually, at least to some degree, once you have to wonder whose thoughts those are, or even worse, "are you a human".

Yes, I have also seen human content falsely flagged as AI. Will that be the end result as the tools get better?
#4
(06-23-2024, 06:09 AM)Kenzo Wrote: I share a similar feeling. Communication and socializing will get broken eventually, at least to some degree, once you have to wonder whose thoughts those are, or even worse, "are you a human".

I suppose people will just have to go back to communicating and socializing in person.  Smile
#5
(06-23-2024, 07:24 AM)ArMaP Wrote: I suppose people will just have to go back to communicating and socializing in person.  Smile


That is true, in person is more honest and direct.

This online chit-chat may become a gray area, unless you know all the current writers and their styles well. There are writing styles I clearly recognize in some members, like Maxmars' style of writing, for example.
#6
(06-23-2024, 07:45 AM)Kenzo Wrote: There are writing styles I clearly recognize in some members, like Maxmars' style of writing, for example.

Apparently, my style is also recognisable, as I once joined another forum with a different user name and was recognised after a few posts.  Smile
#7
(06-23-2024, 10:46 AM)ArMaP Wrote: Apparently, my style is also recognisable, as I once joined another forum with a different user name and was recognised after a few posts.  Smile

Well, I doubt I would recognise your style. I doubt I would recognise many here just by style.

Maybe they can create an AI that can learn to mimic someone's writing style in the future. For an AI it might not be a big job to read all of someone's writings in this place, copy the style, and then write in the same style, or close enough.
#8
I have to say, this is a lot more revealing of the programmers' training and direction than I initially considered.

I get the idea of making language synthesis more "human," even if only from a utilitarian perspective.  We would naturally find it less cumbersome to absorb data if we felt that it was presented as "we speak." So yeah, I can give them that latitude...

However, there is a big "BUT" following that understanding.

Someone is "deciding" what manner of human speech actually is "better."
They are creating a "model" which the so-called "AI" will adhere to. 
They are inserting "perplexities" of language on a "random" basis to ensure the machine communicates "as humans do." 

Except that is NOT "what humans do." 
This will lead to output that is even more distinctly "artificial."

The subtleties of human speech are not a function of thought, but of feeling. 
Algorithmic processes have no 'feeling,' and any attempt to mechanize it will inevitably fail.

Process models can only go so far and no further, until someone actually develops a true AI. 
We haven't, or at least I have yet to see AI... only clever "algorithmic models."
#9
(06-23-2024, 04:37 PM)Maxmars Wrote: I have to say, this is a lot more revealing of the programmers' training and direction than I initially considered.

I get the idea of making language synthesis more "human," even if only from a utilitarian perspective.  We would naturally find it less cumbersome to absorb data if we felt that it was presented as "we speak." So yeah, I can give them that latitude...

However, there is a big "BUT" following that understanding.

Someone is "deciding" what manner of human speech actually is "better."
They are creating a "model" which the so-called "AI" will adhere to. 
They are inserting "perplexities" of language on a "random" basis to ensure the machine communicates "as humans do." 

Except that is NOT "what humans do." 
This will lead to output that is even more distinctly "artificial."

The subtleties of human speech are not a function of thought, but of feeling. 
Algorithmic processes have no 'feeling,' and any attempt to mechanize it will inevitably fail.

Process models can only go so far and no further, until someone actually develops a true AI. 
We haven't, or at least I have yet to see AI... only clever "algorithmic models."

Yeah, I get what you mean...

This "model", we will see what it's all about soon, maybe. It will be artificial, yes.
Do you think algorithms or the current incarnation of AI could copy a person's way of communicating? I don't know the answer. I am not using any AI, so I am a bit of an outsider in the whole AI chatbot stuff...
#10
Something I didn't see discussed in this thread (so far), but maybe I missed it, is the impetus (or one of the main ones) behind 'content humanizers'.  One of the major driving forces behind this text AI and text to speech AI is videos, all sorts of videos.  If you go out on YT now you can find literally thousands of AI videos.  Heck, I don't think you can even look at a single page on YT without at least one, if not more, of the videos being AI.  Some of the titles are attractive and interesting, but when you watch the video, even though it sounds and looks good, the theme jumps all over the place, and many of them make no sense at all.  The title will be one thing, and after a few seconds the subject of the video strays off into something completely unrelated.

There's so much money in videos that people are now using automation just to get clicks.  They figure that if they can dupe someone into clicking on even 1% of their videos, it will be a "win"...because they produce hundreds if not thousands of them every single day.  And, if you look closely, you can see many videos actually scraping content together from other videos.  Now, stealing other people's content on YT is nothing new, but what's new here is that this content scraping is being done with automation, completely autonomously (by AI).

I went down a rabbit hole recently while looking into ways to do text to speech in a video without having to record the actual audio.  I quickly discovered there are thousands of companies out there now doing exactly this.  What's even more interesting is that the latest trend is, you guessed it, "more humanistic" sounding text to speech.

When you stack AI like Chat GPT on top of video creation AI, on top of text to speech AI, you basically don't have to do anything except type in a brief subject and:
  • Chat GPT does the research
  • The video AI generates the video content
  • Chat GPT develops the script
  • And text to speech AI humanizes all the audio.
BOOM...DONE!  Even more concerning, most of this stuff is all open source.
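
To illustrate the shape of that stack, here is a rough Python sketch. Every function in it is a hypothetical stand-in for a separate product (an LLM API, a video generator, a text-to-speech service); the sketch only shows how little glue code is needed once those pieces exist.

Code:
def research_topic(topic: str) -> str:
    """Stand-in for an LLM call that gathers talking points on the topic."""
    return f"Key facts and talking points about {topic}."

def write_script(notes: str) -> str:
    """Stand-in for an LLM call that turns the notes into a narration script."""
    return f"Narration script based on: {notes}"

def synthesize_voice(script: str) -> bytes:
    """Stand-in for a 'humanized' text-to-speech call returning audio bytes."""
    return script.encode("utf-8")

def render_video(script: str, audio: bytes) -> str:
    """Stand-in for a video generator that pairs visuals with the audio track."""
    return f"video_{len(audio)}_bytes.mp4"

def make_video(topic: str) -> str:
    notes = research_topic(topic)       # "Chat GPT does the research"
    script = write_script(notes)        # "Chat GPT develops the script"
    audio = synthesize_voice(script)    # text to speech "humanizes" the audio
    return render_video(script, audio)  # the video AI generates the content

if __name__ == "__main__":
    print(make_video("ancient megalithic construction"))

Swap each stand-in for a real API call and a cron job, and you have the hundreds-of-videos-a-day operation described above.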

Okay, so what's the big deal, it's just some stupid YT videos, right?  Who cares?  Well, that's just it, nobody notices at first...for exactly these reasons.  BUT, what's happening in the background is all these processes are becoming highly refined, and suddenly telling reality from fantasy is next to impossible.

Just the other day I had to do some of our required Annual Training.  (UGH...dread this every year!)  Historically, these remote training modules were usually poorly done videos with lots of glitches and endless brain-cancelling Powerpoint slide decks.  About halfway through my 2nd one I was marveling at how drastically they had seemingly 'improved' in quality.  Then I suddenly realized...they weren't real!  It was all AI driven content.  I was being programmed by the 'machine', not the other way around!  That's not how it's supposed to work.

Pretty soon, people will have no way to tell what is REAL and what isn't.  Think about it...news, history books, science papers, the internet...your doctor...your BANK STATEMENT...  And what's next???

AI is so monumentally dangerous to the human race, I fear people just do not understand how serious it truly is.  It invades everything, and when you "humanize" it, there is no way to tell what's real and what's not.  So many times people get all caught up in the novelty of some cute little AI gimmick and overlook the down the road implications of this technology.  And unlike humans, you can't kill it.  Once it's there, it's there to stay; there is no "OFF" switch.  AI is autonomous from humans, it teaches itself.  AI doesn't exist in one place, so you can't 'corral' it somehow; it's everywhere (if it's anywhere).  Once it gets turned on, it can't be turned off, and when it goes astray...you can't stop it!  Okay, you can stop it by turning off all computers and disconnecting from all networks, but that is not realistic, and it's never going to happen.  The World would be crippled overnight if any such thing was ever done.  The vast majority of modern society is 100% dependent on computers to survive.  You can't turn it off.