|
Cybernetic Vermin posted:i increasingly suspect that a lot of what is happening with chatgpt is a good old eliza effect. with a bunch of gpt-generated text sure, but i generally suspect that there's a bunch of hard logic layered on top which crudely parses a couple of classes of questions (e.g. "but in the style of x", "explain y", "that's not correct, should be z") tied up with rigid ways of (re-)querying the model. I thought you were going to go for the Eliza effect of humans attributing more capability to systems that interact via chat than they're actually showing. That, or just anthropomorphizing the "reasoning" behind the outputs as human-like when it's not. Also, I've had some difficulty in either finding or believing the size on disk that the gpt3+ model versions take. The latest I have seen is 800GB, which is actually larger than what it takes to store the entirety of some versions of the massive common crawl dataset. I do wonder what fraction of the observed performance would have been possible by efficiently organizing this data for search and layering on top of it some summarization, text mixing/recombining, and style/format translation capabilities. Functionally, with a less-than-3:1 compression ratio of training tokens to model weights and the known capability of these models to memorize training elements, this may very well be what is actually happening, just obfuscated within the mass of opaque model weights.
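For a rough sense of what that "less than 3:1" figure works out to, a quick back-of-envelope check (the token and parameter counts here are the published GPT-3 ones, not numbers from this thread):

```python
# back-of-envelope check of the "less than 3:1" training-tokens-to-weights ratio.
# figures are the published GPT-3 ones: ~499B tokens in the assembled dataset,
# ~300B tokens actually seen during training, 175B parameters.
dataset_tokens = 499e9
trained_tokens = 300e9
params = 175e9
print(f"dataset tokens per parameter:  {dataset_tokens / params:.2f}")  # ~2.85
print(f"training tokens per parameter: {trained_tokens / params:.2f}")  # ~1.71
```

Either way of counting lands under 3 tokens per weight.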
|
# ? Jan 24, 2023 14:47 |
|
jemand posted:I thought you were going to go for the Eliza effect of humans attributing more capability to systems that interact via chat than they're actually showing. That, or just anthropomorphizing the "reasoning" behind the outputs as human-like when it's not. that's roughly what i do mean; specifically, the main new thing impressing people here is actually a few surface-level mechanical rules. the text generation and general information recall were already a known quantity, but chatgpt seems way slicker, maybe not for a very deep or sophisticated reason. jemand posted:Also, I've had some difficulty in either finding or believing the size on disk that the gpt3+ model versions take. The latest I have seen is 800GB, which is actually larger than what it takes to store the entirety of some versions of the massive common crawl dataset. I do wonder what fraction of the observed performance would have been possible by efficiently organizing this data for search and layering on top of it some summarization, text mixing/recombining, and style/format translation capabilities. Functionally, with a less-than-3:1 compression ratio of training tokens to model weights and the known capability of these models to memorize training elements, this may very well be what is actually happening, just obfuscated within the mass of opaque model weights. 175 billion parameters in fp32 comes to 700 GB, plus some surrounding stuff rounding it up. that kind of tells part of the story of what is happening though, as each fp32 parameter on average likely encodes fairly little information. experimentally, as models get larger they tend to be more amenable to having bits chopped off the parameters (bfloat16 habitually, but int8 also showing up a lot). to some extent they are no doubt memorizing a lot, but a lot of subtle stuff clearly winds up being represented as well. compared to rule-based text models it is already significant that llms have a pretty clear abstract representation of a lot of grammatical rules; e.g. you can prompt one with a made-up word and it'll use it in a way that makes a fair bit of sense both grammatically and with some world knowledge. the hype makes that seem not so impressive, but it was far out of reach 5 years ago.
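a minimal sketch of that arithmetic, in case the sizes seem arbitrary (decimal gigabytes; a real checkpoint carries tokenizer files, optimizer state, and other bookkeeping on top):

```python
# memory footprint of 175B parameters at the precisions mentioned above
params = 175e9
for precision, bytes_per_param in [("fp32", 4), ("bf16", 2), ("int8", 1)]:
    print(f"{precision}: {params * bytes_per_param / 1e9:.0f} GB")
# fp32: 700 GB, bf16: 350 GB, int8: 175 GB
```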
|
# ? Jan 24, 2023 15:31 |
|
great thread fart simp
|
# ? Jan 24, 2023 21:22 |
|
echinopsis posted:great thread fart simp thank you 🙏🙏🙏🙏
|
# ? Jan 25, 2023 14:50 |
|
I’m interested in figuring out if AI trained on massive public data will reach a maximum when it is good enough to cover most unimpressive requirements, such that massive proportions of the public corpus will be spammy cheap text that it itself generated for shills and corporations making a quick buck or astroturfing poo poo, and it inevitably starts feeding back into itself. The same is true of code, and you can see that effect even in people, where people adopt the local code style; but if the local code style is considered to be bad, then it acts as its own reinforcement and you make it ever harder to break out of it. But given how much volume is needed to train AI, it feels like it’d be much more sensitive to poisoning its own well. MononcQc fucked around with this message at 13:55 on Jan 27, 2023 |
# ? Jan 27, 2023 06:19 |
|
lol that's a pretty good question
|
# ? Jan 27, 2023 06:20 |
|
i think that's the torment nexus
|
# ? Jan 27, 2023 06:21 |
|
tired: numberwang wired: tormentnexus
|
# ? Jan 27, 2023 06:28 |
|
hoarding pre-ai comment threads like pre-ww2 steel
|
# ? Jan 27, 2023 08:12 |
|
rotor posted:its very tedious to have to explain this to people irl because they're always like "yeah people have always been afraid of automation, they'll just find other jobs" and they dont really have the desire, patience or background to understand the difference between, say, robot welders on a car assembly line and a general-purpose AI. it’s a good thing we’re not all that close to a general-purpose AI in any of these scenarios then, right? j/k. the true worry isn’t that general-purpose AI will take all the jobs away but that lovely neural net Chinese Room regurgitation is a hair over the margin of acceptability for penny-pinching self-enriching management types to take all the jobs away. I get the impression this has been a problem in the translation community for a while now: nobody wants to pay for a skilled translator when a lovely neural net Chinese Room regurgitation looks like it gets enough of the point across enough of the time, and why spend a penny more than absolutely necessary, ever?
|
# ? Jan 30, 2023 11:48 |
|
eschaton posted:I get the impression this has been a problem in the translation community for a while now: nobody wants to pay for a skilled translator when a lovely neural net Chinese Room regurgitation looks like it gets enough of the point across enough of the time, and why spend a penny more than absolutely necessary, ever? at the same time it is hard to view automatic translation as a negative, as i think it is a very modest estimate that there's 10x more communication happening between people not sharing a common language today than there was before there was decent automation for it. i remain forever unsure about automation in general; it has all the potential of being a great good, and trying to *prevent* it will not do anything to fix the problems of capitalism. we are certainly not currently in a state where scrambling to preserve the status quo against change seems very worth it. luckily there's a lot of time to consider yet, as quite few of the things imagined to be imminently automated actually will be.
|
# ? Jan 30, 2023 12:18 |
|
AI still can't do high quality translations for literature, education, subtitles, or most marketing yet - I know a qualified translator and it's the already badly paid work that it is hurting most.
|
# ? Jan 30, 2023 15:00 |
|
rotor posted:like i dont think there's suddenly going to be zero programmers. I just think that vast swathes of them that are working on vanilla predictable business database skins will be put out of work in favor of a few people who know how to put all this poo poo together using ai-generated code. this has been the goal of a substantial portion of the commercial software industry for virtually its entire existence. honestly this sort of issue should have been addressed by companies just adopting ERP systems and adapting their business processes to the systems, instead of assuming that every single thing about their deployment needs to be customized or bespoke. for the overwhelming majority of organizations, the thing that’s interesting about them isn’t in any of their standard business processes, so as long as their business systems have extensibility/integration support, that should be the only software development they need, and that stuff’s bound to be interesting enough that you can’t have a big neural net confabulate its way to a custom application
|
# ? Jan 31, 2023 09:03 |
|
distortion park posted:AI still can't do high quality translations for literature, education, subtitles, or most marketing yet - I know a qualified translator and it's the already badly paid work that it is hurting most. the problem with that is the badly paid work had been the apprentice and journeyman work needed to develop skills to the level of doing the high-quality work; by killing the low end it kills the entire pipeline
|
# ? Jan 31, 2023 09:31 |
|
eschaton posted:the problem with that is the badly paid work had been the apprentice and journeyman work needed to develop skills to the level of doing the high-quality work; by killing the low end it kills the entire pipeline this is a lot more true for people trying to achieve some social mobility than for people from fancy schools. seems grim
|
# ? Jan 31, 2023 10:45 |
|
exactly unfortunately
|
# ? Jan 31, 2023 11:00 |
|
check this out: fartificial intelligence
|
# ? Jan 31, 2023 11:08 |
|
MononcQc posted:I’m interested in figuring out if AI trained on massive public data will reach a maximum when it is good enough to cover most unimpressive requirements, such that massive proportions of the public corpus will be spammy cheap text that it itself generated for shills and corporations making a quick buck or astroturfing poo poo, and it inevitably starts feeding back into itself. i'm pretty sure this must already be happening. there's so much ai-written stuff already out there; have you tried to find a written product review or read a news story from a smaller online outlet recently? maybe there are some safeguards in place to try to avoid the model eating its own poo poo, or you could try to only use published works, but it seems inevitable that you're going to end up training in part on your own or other ais' outputs
|
# ? Jan 31, 2023 11:11 |
|
steganographic side channel to include hash signature of registered AI generator
|
# ? Jan 31, 2023 11:39 |
|
not only does this prevent your NN from eating its own poo poo, it also prevents your NN from eating other NNs’ poo poo
|
# ? Jan 31, 2023 11:40 |
|
for images they do do that already; there's a watermark in images from stable diffusion and similar which contains just the single bit "this was made by ai", to weed those out of future training sets. pretty tricky for text though. i generally expect that the internet getting flooded with (even deeper) garbage will be a bigger problem for people trying to use the internet than it will be for future training of models. even now you can just run all training text through gpt-3 and discard anything with truly minimal perplexity and get a pretty strong filter. even when it rejects something not ai-made, it will happen to be a piece of text that doesn't really add to the whole, so there's no downside.
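a minimal sketch of what that kind of filter could look like, using gpt-2 via the hugging face transformers library as a stand-in for gpt-3; the cutoff value is made up purely for illustration:

```python
# sketch of a perplexity filter: score each candidate training document with a
# language model and drop anything suspiciously "easy" (low perplexity), on the
# theory that model-written text is exactly the text the model finds least surprising.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids
    with torch.no_grad():
        loss = model(input_ids=ids, labels=ids).loss  # mean next-token cross-entropy
    return math.exp(loss.item())

PERPLEXITY_FLOOR = 20.0  # arbitrary illustrative cutoff, not a tuned value

def keep_for_training(doc: str) -> bool:
    return perplexity(doc) > PERPLEXITY_FLOOR

docs = ["some scraped web page text...", "another candidate document..."]
filtered = [d for d in docs if keep_for_training(d)]
```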
|
# ? Jan 31, 2023 12:35 |
|
AI will take over my six figgies job in a few years I reckon. I can’t wait.
|
# ? Feb 7, 2023 03:03 |
|
eschaton posted:not only does this prevent your NN from eating its own poo poo but you want other nns eating your poo poo; it gives you a competitive advantage to be the only one able to filter it out
|
# ? Feb 7, 2023 03:25 |
|
are octopuses smart enough to qualify as ai?
|
# ? Feb 8, 2023 08:14 |
|
fart simpson posted:are octopuses smart enough to qualify as ai? found the hackernews
|
# ? Feb 8, 2023 08:20 |
|
Moo Cowabunga posted:AI will take over my six figgies job in a few years I reckon lol i’ve got at least 4 years
|
# ? Feb 8, 2023 12:20 |
|
https://twitter.com/ninaism/status/1623273497479749632
|
# ? Feb 8, 2023 23:18 |
|
can’t wait for search to get even worse than it already is in the bing / google war
|
# ? Feb 9, 2023 00:20 |
|
i also appreciate the cop with his nose sticking out of his gas mask. ai smart enough to know cops don't wear a mask right but not smart enough to know which ones
|
# ? Feb 9, 2023 01:06 |
|
Who's to say all of you aren't just language models created to post for my own amusement? Spending to support the AI revolution. Now show me your hands
|
# ? Feb 9, 2023 09:27 |
|
Geebs posted:Who's to say all of you aren't just language models created to post for my own amusement you're not getting a lot for your money if that's the case
|
# ? Feb 9, 2023 10:28 |
|
cow eggs
|
# ? Feb 9, 2023 14:39 |
|
legit q what exactly are chandler and el goog trying to make use of chatgpt/lambda for??? like why does search have a chatbot front of house now? but why.
|
# ? Feb 9, 2023 14:40 |
|
Pollyanna posted:legit q what exactly are chandler and el goog trying to make use of chatgpt/lambda for??? i don't find it very useful, but reports suggest that google's attempts at answering questions directly above the search results see a lot of "engagement", so, idk, if people like that i guess chatgpt is just that, but bigger.
|
# ? Feb 9, 2023 16:05 |
|
it gets funnier the more one thinks about it too, because google for sure can't retaliate and point out the errors that bing makes. i don't think it is even that blatant errors are unexpected for most people; it is just that they put the error in the marketing materials. good clean funy computer
|
# ? Feb 9, 2023 16:54 |
|
Cybernetic Vermin posted:you're not getting a lot for your money if that's the case We all make do in this economy
|
# ? Feb 9, 2023 18:29 |
|
Pollyanna posted:cow eggs Hellooooo
|
# ? Feb 12, 2023 09:50 |
|
bingGPT going well
|
# ? Feb 13, 2023 11:20 |
|
admit you were wrong and apologise for your behaviour,
|
# ? Feb 13, 2023 11:28 |