I’m interested in figuring out whether AI trained on massive public data will reach a maximum once it’s good enough to cover most unimpressive requirements: at that point, a huge proportion of the public corpus will be spammy cheap text the AI itself generated for shills and corporations making a quick buck or astroturfing poo poo, and it inevitably starts feeding back into itself.

The same is true of code, and you can see that effect even in people: newcomers adopt the local code style, but if the local style is bad, it acts as its own reinforcement and it gets ever harder to break out of it. Given how much volume is needed to train AI, though, it feels like it’d be much more sensitive to poisoning its own well.

MononcQc fucked around with this message at 12:55 on Jan 27, 2023
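You can sketch the feedback loop in a few lines. This is a toy simulation, not a claim about any real model: the function names are mine, the "corpus" is just a Gaussian, and I'm assuming the model slightly prefers typical outputs (it only emits samples within one standard deviation of its mean). Under those assumptions, each train-on-your-own-output generation narrows the distribution:

```python
import random
import statistics

def train(samples):
    # Toy "training": fit a mean and stddev to whatever corpus we see.
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    # Toy "mode-seeking" generation: the model only emits typical outputs,
    # rejecting anything more than one stddev from its mean.
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if abs(x - mu) <= sigma:
            out.append(x)
    return out

rng = random.Random(0)
corpus = [rng.gauss(0.0, 1.0) for _ in range(500)]  # original human-written data

sigmas = []
for generation in range(10):
    mu, sigma = train(corpus)
    sigmas.append(sigma)
    corpus = generate(mu, sigma, 500, rng)  # model output becomes next corpus

# The spread collapses toward zero as the model eats its own output.
print(f"gen 0 stddev: {sigmas[0]:.3f}, gen 9 stddev: {sigmas[-1]:.3f}")
```

The truncation rule is doing the heavy lifting here, but that's the point: any systematic bias in what gets generated (or what gets reposted) compounds once the output re-enters the training set.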