distortion park
Apr 25, 2011


endlessmonotony posted:

Can you do those things?

More importantly, can you prove you can do those things?

I think I can, and if you asked people around me they'd say I do those things; I'm pretty confident you would as well. you could ask me "where's the Nutella?", and I'd point to the cupboard where you'd find it. if you started spouting nonsense you might see me trying to work out how to react. perhaps you could try to come up with a specific test for one of those things, and I think I'd have as good a chance of passing it as the average person, but I'm not sure these things are that amenable to fixed evaluations.

i don't think I can do any better than that

Pythagoras a trois
Feb 19, 2004

I have a lot of points to make and I will make them later.

infernal machines posted:

reject traditional concepts like meaning and embrace a world without referents or the signified

chatgpt is amazing at winograd schemas. old terry doesn't do ai research anymore, but his test for intelligence is now firmly in the category of "too easy".

Pythagoras a trois
Feb 19, 2004

I have a lot of points to make and I will make them later.
the poster saw something terrible on his monitor, which was turned off. what was turned off: the poster, the thing he saw, or the monitor?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
all of the above

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
i'd need to know what a poster and a monitor are to infer the subject of the query, but maybe if i see several thousand more sentences containing the words monitor and poster i can make a guess based on their correlation with each other in the text
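
for the curious, the purely associational approach being parodied here is easy to sketch. this is a toy example, illustrative only: it counts word co-occurrences and nothing else, with no grammar, no referents, and no model of the world behind the numbers.

```python
# toy association counter: "knowledge" of posters and monitors,
# associationist edition -- just how often words appear together
from collections import Counter
from itertools import combinations

corpus = [
    "the poster saw something terrible on his monitor",
    "the monitor was turned off so the poster saw nothing",
    # ...several thousand more sentences would go here
]

cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

# association strength between "monitor" and "poster", and nothing more
print(cooccur[("monitor", "poster")])
```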

Pythagoras a trois
Feb 19, 2004

I have a lot of points to make and I will make them later.

infernal machines posted:

i'd need to know what a poster and a monitor are to infer the subject of the query, but maybe if i see several thousand more sentences containing the words monitor and poster i can make a guess based on their correlation with each other in the text

if you need to "know" something to answer this sort of question, then because chatgpt can answer this sort of question it does "know" things!

https://freddiedeboer.substack.com/p/chatgpt-and-winograds-dilemma

Obviously that's not a good conclusion to draw, but falling back on ineffable properties of human minds is not productive (the linked article goes on to point out that chatgpt doesn't have a "theory of the world" and that this therefore voids the test). winograd schema problems were a litmus test for knowledge and understanding only until an ai came along that could trounce them.

Pythagoras a trois fucked around with this message at 13:59 on Feb 24, 2023

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
your inability to derive the meaning of the words i posted suggests that chatgpt is at least as capable of comprehension as you are, so point taken

mystes
May 31, 2006

infernal machines posted:

your inability to derive the meaning of the words i posted suggests that chatgpt is at least as capable of comprehension as you are, so point taken
huh?

endlessmonotony
Nov 4, 2009

by Fritz the Horse

:golfclap:

distortion park
Apr 25, 2011


infernal machines posted:

your inability to derive the meaning of the words i posted suggests that chatgpt is at least as capable of comprehension as you are, so point taken

im also confused about what you mean

endlessmonotony
Nov 4, 2009

by Fritz the Horse
The joke is that RokosCockatrice is begging the question.

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
my point was made by freddie in the substack posted

quote:

But it’s really important that we all understand that ChatGPT is not basing its coindexing on a theory of the world, on a set of understandings about the understandings and the ability to reason from those principles to a given conclusion. There is no place where a theory of the world “resides” for ChatGPT, the way our brains contain theories of the world. ChatGPT’s output is fundamentally a matter of association - an impossibly complicated matrix of associations, true, but more like Google Translate than like a language-using human.

but if it functionally doesn't matter because the results are generally the same either way, then like i said, it's fine and worrying about silly things like meaning is pointless

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
you can't know anything because knowledge doesn't exist, and as long as you can reasonably consistently formulate replies that appear to satisfy the requirements, how you came to them is really just splitting hairs

haveblue
Aug 15, 2005



Toilet Rascal
point is also made in the top comment on that article:

quote:

ChatGPT can pass the canonical Winograd schema because it has heard the answer before. If you do a novel one, it fails. Someone posted a new one on Mastodon "The ball broke the table because it was made of steel/Styrofoam." In my test just now, it chooses ball both times.
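
the test that comment describes is easy to reproduce yourself. here's a minimal sketch, assuming the openai python package as it existed in early 2023; the model name and prompt wording are illustrative, not necessarily what the commenter used.

```python
# minimal reproduction of the "novel schema" test described above
import openai

openai.api_key = "sk-..."  # your key here

TEMPLATE = ("The ball broke the table because it was made of {m}. "
            "What was made of {m}: the ball or the table?")

for material in ("steel", "Styrofoam"):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=TEMPLATE.format(m=material),
        max_tokens=8,
        temperature=0,  # deterministic, so the two runs are comparable
    )
    print(material, "->", response.choices[0].text.strip())

# a system reasoning from a theory of the world should flip its answer
# between the two materials; per the comment above, it chose "ball" both times
```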

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
notably, this attitude towards understanding versus apparent results is why we have vehicles on the roads marketed with "self driving" capabilities that mistake a lot of solid objects for open roads

distortion park
Apr 25, 2011


infernal machines posted:

my point was made by freddie in the substack posted

but if it functionally doesn't matter because the results are generally the same either way, then like i said, it's fine and worrying about silly things like meaning is pointless

thank you. I agree with him and you that this is the case for chatgpt as it exists now and for the immediate future - I'm less certain that a) human intelligence is significantly different from a series of associations and b) a theory of the world couldn't "reside" in an artificial system in a similar way to how it apparently does in a human brain, which is after all a physical object.

but yes the precise meaning of "intelligence" is much less important than the actual behaviours! who cares if a computer is "intelligent" if it's just stolen your job as a proofreader etc

distortion park
Apr 25, 2011


infernal machines posted:

notably, this attitude towards understanding versus apparent results is why we have vehicles on the roads marketed with "self driving" capabilities that mistake a lot of solid objects for open roads

seems like the results were pretty bad in that case

endlessmonotony
Nov 4, 2009

by Fritz the Horse
Also important to note that the only thing that has changed in the past few years is that machine-learning-generated speech no longer sounds like it's being played back through a pc speaker.

We could have built chatbots with gibberish output fifteen years ago - in fact, we did, and they had the same problems. We've been aware of these problems with understanding versus association, and how they manifest, for at least fifty years, and we're no closer to an answer today than we were then.

distortion park posted:

I'm less certain that a) human intelligence is significantly different from a series of associations and b) a theory of the world couldn't "reside" in an artificial system in a similar way to how it apparently does in a human brain, which is after all a physical object.

We have an easy answer to a, human intelligence is significantly different. We've got hundreds of years of thinking on this, including decades of hard data.

As for if it's possible artificially... almost certainly, but it has a lot of unsolved problems yet to go.

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

distortion park posted:

seems like the results were pretty bad in that case

in those specific cases, yes, disastrously bad, and yet there were enough cases where that didn't occur that people decided it could drive

it can't. but it can approximate proper driving right up until that point

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

distortion park posted:

but yes the precise meaning of "intelligence" is much less important than the actual behaviours! who cares if a computer is "intelligent" if it's just stolen your job as a proofreader etc

agreed, unless your job as a proofreader requires consistent accuracy in novel scenarios, in which case at least whoever decided to use the ai as a proofreader might care

mystes
May 31, 2006

endlessmonotony posted:

Also important to note that the only thing that has changed in the past few years is that machine-learning-generated speech no longer sounds like it's being played back through a pc speaker.
Huh? Most of the people using ChatGPT aren't even using it with speech synthesis. The output may not be as useful as people are saying, but the reason people are interested in it is that the (text) output is vastly better than previous software. Saying that the only thing that has changed is that speech synthesis is better makes absolutely no sense.

Pythagoras a trois
Feb 19, 2004

I have a lot of points to make and I will make them later.
the deadpan delivery of half-sarcastic remarks makes your posts considerably harder to understand than a normal turing test

endlessmonotony
Nov 4, 2009

by Fritz the Horse

mystes posted:

Huh? Most of the people using ChatGPT aren't even using it with speech synthesis. The output may not be as useful as people are saying, but the reason people are interested in it is that the (text) output is vastly better than previous software. Saying that the only thing that has changed is that speech synthesis is better makes absolutely no sense.

ChatGPT isn't new, it's just a new implementation of a technology we knew to be pretty much useless. We now have more computing power to throw at an approach that doesn't work.

There's been one change in AI, and it's speech synthesis that sounds like the source its data set came from. That's about it.

RokosCockatrice posted:

the deadpan delivery of half-sarcastic remarks makes your posts considerably harder to understand than a normal turing test

Ooh, the Turing test, so highly informative. lol. fizzbuzz for programmers who think they know cognition.

The medium is the message here. I'm demonstrating the problem!

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
my "i'm part of the problem" tshirt has people asking a lot of questions already answered by my shirt

endlessmonotony
Nov 4, 2009

by Fritz the Horse
What happened? We built autocomplete that sounds like it has schizophasia.
Why did we do it? VC money.

The real problem here is that we can't even define the problem we want to solve. There's enough philosophy about this problem that I can pretend to be a smartass from thousands of years ago, a smartass from hundreds of years ago, a smartass from decades ago, and a smartass from now, all close enough that you can't tell the difference.

The data itself shows that our decision-making and meaning-making processes aren't even in the order we would intuitively assume. We know this isn't thinking because cutting into human brains has revealed that thinking definitely has components that don't exist here, and the machine can't make a mental model; it can only replicate output it has already seen.

If I were really capable of explaining the depth of this problem, I'd be a far worse poster than I am now, and I have proof. That you can't see, because I ain't doxxing anyone.

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

distortion park posted:

b) a theory of the world couldn't "reside" in an artificial system in a similar way to how it apparently does in a human brain, which is after all a physical object.

i should be clear that i'm not making a statement of belief about the possibility of this, i'm a hardcore materialist. i'm saying with certainty that chatgpt and the current state of the art in LLMs do not have this and have no mechanism for it.

maybe another tool could have a theory of the world, but the tools we have now, the ones we're discussing here, don't and can't and that has pretty severe implications in terms of their capabilities

infernal machines fucked around with this message at 15:42 on Feb 24, 2023

Cybernetic Vermin
Apr 18, 2005

the problem we wanted to solve was natural language processing, and while it is somewhat ill-defined, it is way more solved than anyone expected. the chatbot stuff is weird tech demos getting interpreted in even weirder ways by a mostly clueless public and an entirely and willfully clueless press.

Pythagoras a trois
Feb 19, 2004

I have a lot of points to make and I will make them later.
It's actually fascinating stuff. SuperGLUE is a set of tests they put together when GLUE (the general language understanding evaluation) started to get aced by language models, so they had to pull out the big guns. https://super.gluebenchmark.com/leaderboard

Currently the top scorers are hitting numbers like 98% on SuperGLUE's referent tests, so we probably need to drop these tests for a new CrazyGLUE set. These models are almost definitely all training on data that includes mentions and examples from the SuperGLUE dataset, so, idk, maybe they're getting falsely high scores.
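
(the referent tests in question are the WSC portion of SuperGLUE; if you want to see what the models are actually scored on, here's a minimal sketch using the Hugging Face datasets library - the dataset and field names are as published, everything else is illustrative:)

```python
# peek at SuperGLUE's WSC (Winograd Schema Challenge) items
from datasets import load_dataset

wsc = load_dataset("super_glue", "wsc", split="validation")

ex = wsc[0]
print(ex["text"])                                # the full sentence
print(ex["span2_text"], "->", ex["span1_text"])  # pronoun -> candidate referent
print("coreferent:", bool(ex["label"]))          # 1 if the pronoun refers to span1
```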

Me angrily putting a bad prompt into chatbing:
https://www.youtube.com/watch?v=TIkYqCJjEtw

Me angrily putting a good prompt into a gpt-neox-20b model running on my own machine, fine tuned on forum posts:
https://www.youtube.com/watch?v=ND9C9RrBum0

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
turns out testing responses to novel situations is hard, not least because you constantly have to come up with novel tests

Cybernetic Vermin
Apr 18, 2005

infernal machines posted:

turns out testing responses to novel situations is hard, not least because you constantly have to come up with novel tests

models using training sets predating superglue have done great on superglue though. the actual nlp progress is not imaginary. the conversational stuff is entirely imaginary though, both in being a nonsense application and in results being extremely frail.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
Natural language processing is much harder than people think, and we're still a long way off, mostly because breaking down language into inputs requires a model of the world, and none of these systems are equipped for that. We're not getting close to solving the problem; we haven't even demonstrated a proof of concept beyond feeding a system a lot of training data from constrained scenarios and asking it to fill in the blanks like automated mad libs. Insofar as that's artificial intelligence, it's also just an oversized if/then statement.

You can connect it to a system with an internal state to control a system doing useful work, but that requires you to understand the connected system too, meaning you've reimplemented buttons, but with words.

We're several orders of magnitude away from solving any of it with machine learning. It's going to take mathematicians who understand how people communicate to make any progress.
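
(the "automated mad libs" framing can be taken literally: a masked language model ranks fill-ins for a blank from distributional statistics alone. a minimal sketch with the Hugging Face transformers library; the model choice is illustrative:)

```python
# automated mad libs: rank candidate words for a blank
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# top candidates for the blank, with scores -- pure distributional statistics
for guess in fill("The poster saw something terrible on his [MASK]."):
    print(f'{guess["token_str"]:>12}  {guess["score"]:.3f}')
```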

Cybernetic Vermin
Apr 18, 2005

i will presume that is chatgpt doing a particularly poor job of making that point though

e: to be clear, literally, as the chatgpt response i get is similar but better.

Cybernetic Vermin fucked around with this message at 18:00 on Feb 24, 2023

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

Cybernetic Vermin posted:

the conversational stuff is entirely imaginary though, both in being a nonsense application and in results being extremely frail.

not being a genius mind at a top tech firm, this would seem to me like it's a poor fit for a product that relies on conversational interaction, but maybe that's why i'm not working at microsoft or google

Achmed Jones
Oct 16, 2004



quote:

As an AI language model, ChatGPT is capable of generating text responses to a wide range of prompts and questions. However, it is important to note that ChatGPT does not possess intelligence in the same way that humans or animals do.

Intelligence involves a wide range of cognitive abilities such as perception, learning, reasoning, problem-solving, and decision-making, which are still not fully understood or replicated by artificial intelligence systems like ChatGPT.

ChatGPT relies on pre-existing data and algorithms to generate responses, and while it can produce impressive results, it does not have the ability to think, learn, or reason independently. Its responses are limited by the data it has been trained on and the algorithms it uses, which means it cannot truly understand or empathize with users in the way that humans can.

In summary, ChatGPT is not intelligent because it lacks the capacity for independent thought, learning, and reasoning that are essential components of human intelligence.

straight from the algorithm's output field

Achmed Jones
Oct 16, 2004



it is reasonable to treat things that seem to be intelligences as intelligences. llms aint that.

Cybernetic Vermin
Apr 18, 2005

infernal machines posted:

not being a genius mind at a top tech firm, this would seem to me like it's a poor fit for a product that relies on conversational interaction, but maybe that's why i'm not working at microsoft or google

yeah, i do think it is pr nonsense atm. i think google is right to be a touch worried though, as one thing they might be able to do is make more robust search engines. i.e. google killed off structured search engines, with boolean operators and whatnot, but an llm might actually be able to robustly turn stuff like "what's a nice non-vegetarian main dish that is nonetheless gluten- and dairy-free" into "recipe and (chicken or steak or ... or cod) and not vegan and not vegetarian [etc.]", which would make a pretty good tool for real people (sketched below).

not fancy enough for pr and demos, but useful enough to make another search engine a contender.
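
a minimal sketch of that query-rewriting idea, assuming the openai python package as it existed in early 2023; the prompt, model name, and expected output are all illustrative:

```python
# use the model only as a translator into boolean search syntax,
# then let a conventional search engine do the actual retrieval
import openai

openai.api_key = "sk-..."  # your key here

INSTRUCTION = (
    "Rewrite the request as a boolean search query using AND, OR, NOT "
    "and parentheses. Output only the query.\n\nRequest: {req}\nQuery:"
)

def to_boolean_query(request: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=INSTRUCTION.format(req=request),
        max_tokens=60,
        temperature=0,
    )
    return response.choices[0].text.strip()

print(to_boolean_query(
    "what's a nice non-vegetarian main dish that is nonetheless "
    "gluten- and dairy-free"
))
# hoped-for shape: recipe AND (chicken OR steak OR cod)
#                  AND NOT vegetarian AND NOT gluten AND NOT dairy
```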

Sagebrush
Feb 26, 2012


professor lumpy balls, they call me

Achmed Jones posted:

it is reasonable to treat things that seem to be intelligences as intelligences.

no it isn't. people are broadly credulous and superstitious, and they have a natural tendency to anthropomorphize everything, and they can be fooled by incredibly dumb tricks.

tons of people think that teslas are intelligent. ELIZA can still convince some people that it's human. "people think this is an intelligence" is the worst possible metric

distortion park
Apr 25, 2011


Sagebrush posted:

no it isn't. people are broadly credulous and superstitious, and they have a natural tendency to anthropomorphize everything, and they can be fooled by incredibly dumb tricks.

tons of people think that teslas are intelligent. ELIZA can still convince some people that it's human. "people think this is an intelligence" is the worst possible metric

we don't have much else to go on than people's evaluations of what is or isn't intelligent

endlessmonotony
Nov 4, 2009

by Fritz the Horse

distortion park posted:

we don't have much else to go on than people's evaluations of what is or isn't intelligent

Might as well say there's no other way to assess someone's programming skills than timing how long it takes them to solve fizzbuzz.

You can test for the components of intelligence, in computers and living beings alike. The problem is that if a machine learning system has the test in its training data, you need a new test different enough that it can't just fill in the blanks. If you make a standardized test, that test just becomes training data too.
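
a rough sketch of the contamination problem just described: if a test item's n-grams already appear verbatim in the training corpus, a high score may be recall rather than reasoning. the choice of n and the whitespace tokenization are illustrative.

```python
# flag test items whose n-grams already appear in the training corpus
def ngrams(text: str, n: int = 8) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_contaminated(test_item: str, training_corpus: str, n: int = 8) -> bool:
    # any shared 8-gram is strong evidence the item was seen in training
    return bool(ngrams(test_item, n) & ngrams(training_corpus, n))

# usage: looks_contaminated(schema_sentence, crawl_text)
```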


NoneMoreNegative
Jul 20, 2000
GOTH FASCISTIC
PAIN
MASTER




shit wizard dad

lol serendipity from the BETA newsletter

* STUPID CHAT BOT - dumbdestroy7 writes, "Thought you might find this funny. It's an AI chatbot that's been trained on all the dumbest stuff in the world (Homer Simpson quotes, Pauly Shore screenplays, that one bodybuilding forum where they argued about how many days there were in the week, etc). The first AI you don't have to fear will steal your job or overthrow humanity. It's pretty fun to chat with as well." Yep, we were entertained as it doesn't do particularly predictable things.

https://2dumb2destroy.com/

I have chatted with it for a few minutes and it is indeed very dumb.
