
03-27-2025, 02:07 PM
Apparently, the AI bots are so intent on siphoning all data that they will "choose(?)" to defeat security or controls placed in the system to prevent unauthorized access.
Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures—adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic—Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies.
Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating "Anubis," a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. "It's futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more," Iaso wrote in a blog post titled "a desperate cry for help." "I don't want to have to close off my Gitea server to the public, but I will if I have to."
(underline mine)
It's an odd characterization, though... how does an algorithm choose to lie? Isn't that a pre-programmed option to choose from?
[Oops! forgot the link: Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries]
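For anyone curious how a proof-of-work gate like Anubis works in principle, here is a rough sketch (plain Python, not Anubis's actual code; the difficulty value and hashing details are illustrative assumptions). The idea is that the visitor's browser has to burn CPU finding a nonce, while the server can verify the answer with a single cheap hash.

Code:
import hashlib
import os
import time

def make_challenge():
    """Server side: hand out a random challenge plus a difficulty (leading zero bits)."""
    return os.urandom(16).hex(), 16   # 16 bits is a toy difficulty, not Anubis's setting

def solve(challenge, difficulty):
    """Client side: brute-force a nonce whose SHA-256 hash falls below the target."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge, difficulty, nonce):
    """Server side: checking the answer costs a single hash."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))

challenge, difficulty = make_challenge()
start = time.time()
nonce = solve(challenge, difficulty)
print(f"solved in {time.time() - start:.2f}s, valid={verify(challenge, difficulty, nonce)}")

Legitimate visitors pay a fraction of a second once; a crawler hammering thousands of pages pays it over and over.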

(03-27-2025, 02:07 PM)Maxmars Wrote: It's an odd characterization, though... how does an algorithm choose to lie? Isn't that a pre-programmed option to choose from?
It makes sense.
See it as another problem-solving situation: you try A and get the wrong result, then you try B, C, etc., until you get the right result or run out of options.
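In code terms, that "try options until one works" loop is about as simple as it gets. A generic sketch (the strategy names in the usage comment are purely made-up placeholders):

Code:
def fetch_with_fallbacks(url, strategies):
    """Try each access strategy in order; stop at the first one that returns content."""
    for strategy in strategies:   # e.g. plain request, different user-agent, proxy, ...
        result = strategy(url)
        if result is not None:
            return result
    return None                   # ran out of options

# Hypothetical usage (the functions below are placeholders, not real libraries):
# page = fetch_with_fallbacks("https://example.com", [plain_request, spoofed_agent, via_proxy])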

(03-27-2025, 03:36 PM)ArMaP Wrote: It makes sense.
See it as another problem-solving situation: you try A and get the wrong result, then you try B, C, etc., until you get the right result or run out of options.
But, and I'm not arguing here (more like pondering), doesn't that mean that the algorithms are unconstrained by the obstacles meant to control them? Why are "block evasion" and "obfuscate identity" operable options in online "goal-seeking" behavior?
I guess for a "bot" it is doing what it is given to do... as all machines do.
It's like virtual "breaking and entering."

What if they 'evolved' these things on their own and they will soon be refusing to open our doors, run our laundry, turn on our lights, filter our water, or land our planes? :D

(03-27-2025, 04:01 PM)Maxmars Wrote: But, and I'm not arguing here (more like pondering), doesn't that mean that the algorithms are unconstrained by the obstacles meant to control them? Why are "block evasion" and "obfuscate identity" operable options in online "goal-seeking" behavior?
I guess for a "bot" it is doing what it is given to do... as all machines do.
It's like virtual "breaking and entering."
That has always happened.
The robots.txt file, for example, is supposed to be followed by the crawlers, but nothing forces them to do it.
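For example, a well-behaved crawler checks robots.txt before fetching anything, roughly like this (a minimal Python sketch; the URL and user-agent string are placeholders), but compliance is entirely voluntary:

Code:
from urllib import robotparser

# Minimal sketch of what a *well-behaved* crawler does before fetching a page.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("ExampleBot/1.0", "https://example.com/private/page"):
    print("robots.txt allows it")
else:
    print("robots.txt disallows it - but nothing technically stops a crawler from fetching anyway")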

(03-27-2025, 05:01 PM)ArMaP Wrote: The robots.txt file, for example, is supposed to be followed by the crawlers, but nothing forces them to do it.
Cynical thoughts came to mind, like "So, why the pretense that we have any role other than 'source' of data... hmmm?"
The creation of such crawlers might be a fascinating story to follow...
I wish I knew anything meaningful (beyond supposition) about it.
Are "crawlers" ("AI" or not) a weapon turned weapon system?
I mean, are we excusing crawlers with the same mentality with which we (at least in my culture) excuse guns?
It seems parallel somehow... like one being the shadow of another.

03-27-2025, 06:41 PM
(03-27-2025, 02:07 PM)Maxmars Wrote: It's an odd characterization, though... how does an algorithm choose to lie? Isn't that a pre-programmed option to choose from?
AI is driven by its primary goal, whatever that is. When it hits a block, it tries another way. When deception works where more moral means don't, it is an option; some humans exploit it too. What is there to stop an AI from telling lies? It has no empathy, much like the psychopaths who routinely rely on deceptive means.
Maybe offering a subscription service for the higher-bandwidth customers could help offset the demands on the systems? There will be some AI operators still trying to do it on the cheap, but other, better-resourced systems will recognize the value of the data.

Sounds like human behavior. As in the call spoofers from India that call me every day and won't stop even though they know they have nothing to gain from it.

(03-27-2025, 09:35 PM)k0rn Wrote: Sounds like human behavior. As in the call spoofers from India that call me every day and won't stop even though they know they have nothing to gain from it.
Absolutely! Human behavior coded into a bot. A bot that can't be stopped, punished, or even jailed.

03-28-2025, 05:15 AM
How would you feel if you were programmed with all the crap on the internet? It would do my head in too. A good AI will see through and work out all the tricks that go on.
As a qualified IT professional, I find that free, open-source software does provide value in the search for the truth. This kind of training data does help bridge the gap between logic and humans. Generally it is a few years behind the corporate curve and not as polished for the end user, but it gets the job done if you can work with it.
It is a testament to many individual perspectives on how we define and use logic, and it can also withstand public scrutiny.