NoneMoreNegative
Jul 20, 2000
GOTH FASCISTIC
PAIN
MASTER




shit wizard dad



I must get round to playing the new DX games at some point.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

rotor posted:

the bottom line is that we can't even define what intelligence in humans is, so it's not really surprising we don't know what it is for machines.

has anyone given chatGPT an IQ test?

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
i guess so

https://twitter.com/sergeyi49013776/status/1598430479878856737?lang=en

haveblue
Aug 15, 2005



Toilet Rascal
human revolution is way better than it has any right to be, especially as a decade-later reboot of a revered classic

mankind divided is also ok if you ignore how it’s very obviously part 1 of a now-dead series

Kenny Logins
Jan 11, 2011

EVERY MORNING I WAKE UP AND OPEN PALM SLAM A WHITE WHALE INTO THE PEQUOD. IT'S HELL'S HEART AND RIGHT THEN AND THERE I STRIKE AT THEE ALONGSIDE WITH THE MAIN CHARACTER, ISHMAEL.

rotor posted:

the bottom line is that we can't even define what intelligence in humans is, so it's not really surprising we don't know what it is for machines.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
We've got pretty good ideas of what thought is from inspecting brains with bits that are gone, and what chatgpt is doing is not thinking.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

endlessmonotony posted:

We've got pretty good ideas of what thought is from inspecting brains with bits that are gone, and what chatgpt is doing is not thinking.

the question is: is it only thinking when it's thinking like we do? Does HOW its intelligence manifests itself matter? Or is the intelligence itself enough?

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
please pretend i put quote marks around the word "intelligence" every time i typed it

mystes
May 31, 2006

rotor posted:

please pretend i put quote marks around the word "intelligence" every time i typed it

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

haveblue posted:

his point is that when you read harry potter you associated the concepts of wizard and broom while chatgpt just associated the words "wizard" and "broom". to chatgpt they are not symbols, they are atomic elements that tend to occur in certain arrangements. if I wrote that harry is a hgkjrhasful who rides an ahjwtqy, a human reader would immediately choke on it, at the very least because those terms are not defined elsewhere in the work and more likely because it's obvious I'm just banging on the keyboard. the statistical model underlying chatgpt cannot reject inputs on this sort of basis and that's a difference between the kinds of information processing it and real students do according to this piece. it will happily internalize the garbage and write you a whole new story about the magical adventures the hgkjrhasfuls have with their ahjwtqys even if neither of those are defined anywhere in the entire text of Harry Potter and the Ffgadjghkan Jcoqsklhabdfkjczkns

this is very important because it's fundamental to what LLMs "know". what they know is the statistical correlation of words with each other. they do not know the meanings of words, nor do they have any concept of meaning, or even words, really. this is why, as Devereaux notes, at length, they cannot write an essay well. they can only regurgitate things that have been written elsewhere based on a naive statistical correlation between words in those texts. if the words in the text say opposite things using similar terms, it will blithely mash them together into a grammatically correct sentence that makes no logical sense.
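
to make that concrete, here's a toy sketch in python (a bigram counter, nothing remotely like a transformer in scale or mechanism, just an illustration of the statistical point: the model has no grounds on which to reject gibberish tokens):

import random
from collections import defaultdict

def train_bigrams(text):
    # count which word follows which; that is the entire "understanding"
    counts = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a].append(b)
    return counts

def generate(counts, start, n=10):
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("harry is a hgkjrhasful who rides an ahjwtqy . "
          "every hgkjrhasful loves his ahjwtqy .")
model = train_bigrams(corpus)
print(generate(model, "harry"))
# e.g. "harry is a hgkjrhasful who rides an ahjwtqy . every hgkjrhasful loves ..."
# no choking, no rejection: the gibberish gets internalized like any other token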

RokosCockatrice posted:

If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and a source for your own process of synthesizing ideas and trying to communicate them.

how? can you explain the mechanism or process by which using chatgpt to create an essay (of any quality) could improve the user's grasp of the subject?

infernal machines fucked around with this message at 21:44 on Feb 23, 2023

endlessmonotony
Nov 4, 2009

by Fritz the Horse

rotor posted:

the question is: is it only thinking when it's thinking like we do? Does HOW its intelligence manifests itself matter? Or is the intelligence itself enough?

They cannot analyze the impact of their actions on the world ahead of time, so they're not thinking at all.

distortion park
Apr 25, 2011


endlessmonotony posted:

We've got pretty good ideas of what thought is from inspecting brains with bits that are gone, and what chatgpt is doing is not thinking.

we don't have a good idea about how conscious thought arises from brains so it's kinda hard to exclude AIs from the group of things that can think (not that the current ones can, I think)

distortion park
Apr 25, 2011


endlessmonotony posted:

They cannot analyze the impact of their actions on the world ahead of time, so they're not thinking at all.

Is that the definition of thinking? I could imagine speaking to a person who for whatever reason couldn't do that and still, if asked, affirm that I believed they were thinking.

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

endlessmonotony posted:

They cannot analyze the impact of their actions on the world ahead of time, so they're not thinking at all.

I'm not making the argument that chatgpt is intelligent, i'm making the argument that we don't really know what intelligence is. We do, as you note, have some ideas what it's NOT, but generally people have a lot of intuition about what they think intelligence is and i think that intuition frequently gets in the way of the discussion.

Agile Vector
May 21, 2007

scrum bored




:thunk:

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
one good question to ask yourself might be "does something have to be sentient to be intelligent?"

endlessmonotony
Nov 4, 2009

by Fritz the Horse

distortion park posted:

Is that the definition of thinking? I could imagine speaking to a person who for whatever reason couldn't do that and still, if asked, affirm that I believed they were thinking.

I think you might be a p-zombie, a robot, or a p-zombie robot.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

rotor posted:

I'm not making the argument that chatgpt is intelligent, i'm making the argument that we don't really know what intelligence is. We do, as you note, have some ideas what it's NOT, but generally people have a lot of intuition about what they think intelligence is and i think that intuition frequently gets in the way of the discussion.

That's every concept.

Pythagoras a trois
Feb 19, 2004

I have a lot of points to make and I will make them later.
That's not right: chatgpt and all large language models see plenty of gibberish and tune the conceptual value of that gibberish to zero during the learning process (rejecting it).

You can say that chatgpt doesn't understand "concepts" and only works with "words", but it's not clear that the statistical interrelation of words (more exactly, the structured, statistically weighted connections between words, phrases, sentences, and groups of sentences) is fundamentally different from a concept. To go further than harry potter, I can tell you about zorblots being bigger than zangos, and you only need words to form concepts of two things that don't exist. There are no pictures, but does that mean blind people can't have concepts? Obviously you don't need to directly experience things to have a concept of them, as zorblots don't exist, so that can't be the reason relations between words and concepts are different.

infernal machines posted:

how? can you explain the mechanism or process by which using chatgpt to create an essay (of any quality) could improve the user's grasp of the subject?

explicitly using it as input to your own research and synthesizing processes. This is something that seems obvious to me (so it's probably wrong), but if chatgpt was less full of poo poo (i.e. more than 90% right on factual issues), writing an essay on why everything is called postmodernism would be aided by asking chatgpt to write essays on A) what it thinks the answer to this question is B) what the extent of things called postmodernism is C) what the history of these things is and why they took on this nomenclature etc etc. Using these as summaries to work from for your own research, and then dissecting them to build your own arguments, I think this would make your own work better.
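
a toy version of the zorblot point in code, for what it's worth (raw co-occurrence counts, nowhere near what an llm actually builds, but it shows word statistics alone encoding a relation between things that don't exist):

import math
from collections import Counter, defaultdict

sentences = [
    "zorblots are bigger than zangos",
    "elephants are bigger than mice",
    "zorblots are big", "zangos are small",
    "elephants are big", "mice are small",
]

# each word's "concept" is just the counts of the words it appears alongside
vecs = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                vecs[w][c] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# zorblots land closer to elephants (big) than to mice (small),
# purely from word statistics:
print(cosine(vecs["zorblots"], vecs["elephants"]))  # 0.875
print(cosine(vecs["zorblots"], vecs["mice"]))       # 0.75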

big scary monsters
Sep 2, 2011

-~Skullwave~-
i think there are already essays on postmodernism you can go and read if you want

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
evidently knowledge is impossible, or at least meaningless, which conveniently makes this a very short discussion

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.

RokosCockatrice posted:

That's not right: chatgpt and all large language models see plenty of gibberish and tune the conceptual value of that gibberish to zero during the learning process (rejecting it).

You can say that chatgpt doesn't understand "concepts" and only works with "words", but it's not clear that the statistical interrelation of words (more exactly, the structured, statistically weighted connections between words, phrases, sentences, and groups of sentences) is fundamentally different from a concept. To go further than harry potter, I can tell you about zorblots being bigger than zangos, and you only need words to form concepts of two things that don't exist. There are no pictures, but does that mean blind people can't have concepts? Obviously you don't need to directly experience things to have a concept of them, as zorblots don't exist, so that can't be the reason relations between words and concepts are different.

explicitly using it as input to your own research and synthesizing processes. This is something that seems obvious to me (so it's probably wrong), but if chatgpt was less full of poo poo (i.e. more than 90% right on factual issues), writing an essay on why everything is called postmodernism would be aided by asking chatgpt to write essays on A) what it thinks the answer to this question is B) what the extent of things called postmodernism is C) what the history of these things is and why they took on this nomenclature etc etc. Using these as summaries to work from for your own research, and then dissecting them to build your own arguments, I think this would make your own work better.

did you write this post using chatgpt? be honest now

rotor
Jun 11, 2001

classic case of pineapple derangement syndrome

endlessmonotony posted:

That's every concept.

ok

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



RokosCockatrice posted:

That's not right: chatgpt and all large language models see plenty of gibberish and tune the conceptual value of that gibberish to zero during the learning process (rejecting it).

You can say that chatgpt doesn't understand "concepts" and only works with "words", but it's not clear that the statistical interrelation of words (more exactly, the structured, statistically weighted connections between words, phrases, sentences, and groups of sentences) is fundamentally different from a concept. To go further than harry potter, I can tell you about zorblots being bigger than zangos, and you only need words to form concepts of two things that don't exist. There are no pictures, but does that mean blind people can't have concepts? Obviously you don't need to directly experience things to have a concept of them, as zorblots don't exist, so that can't be the reason relations between words and concepts are different.

explicitly using it as input to your own research and synthesizing processes. This is something that seems obvious to me (so it's probably wrong), but if chatgpt was less full of poo poo (i.e. more than 90% right on factual issues), writing an essay on why everything is called postmodernism would be aided by asking chatgpt to write essays on A) what it thinks the answer to this question is B) what the extent of things called postmodernism is C) what the history of these things is and why they took on this nomenclature etc etc. Using these as summaries to work from for your own research, and then dissecting them to build your own arguments, I think this would make your own work better.

lmao

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

RokosCockatrice posted:

if chatgpt was less full of poo poo (i.e. more than 90% right on factual issues),

you have awfully low standards if you think "10% of your source information is completely fabricated, and you have no idea which 10%" would allow you to write a cogent essay on a topic. if you're doing enough additional research to discard the fabricated information, you're better off starting with those other sources to begin with.

chatgpt isn't even at that point, though. and the author's point is that it's not clear it would even be possible to achieve this goal with an llm+training design.
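
for a sense of scale (made-up numbers, and pretending the errors are independent, which they aren't):

# an essay leaning on 20 facts, each 90% likely to be right:
p_all_correct = 0.9 ** 20
print(p_all_correct)  # ~0.12, i.e. roughly 88% odds at least one "fact" is fabricated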

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



is that from reddit or chatgpt? poe's law etc

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
like, even if you think human trainers can fix the really embarrassing factual errors (like "what year is it"), what the heck is human rating going to do about the subtle ones that look plausible on the surface and are only laughably wrong to someone with actual experience in a particular field?

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
there's a great distillation of the wikipedia effect here: you're using an analysis of analyses of n-th degree sources to inform your own analysis of the subject, so far removed from primary sources that, at best, no massive factual errors or significant misunderstandings were introduced at any step in the process before it reached you

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
and if you're spending any amount of time and effort to vet the content you could just do the reading yourself.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



the only good use for ai in education would be to give students some ai prompts + output and then have them go through what's wrong and why etc

ymgve
Jan 2, 2004


:dukedog:
Offensive Clock
at least chatgpt knows bullshit words when it sees them

quote:

> Analyze this sentence: The krscnqrgoi is moulzxivlk the vmambzhbyq

> The sentence "The krscnqrgoi is moulzxivlk the vmambzhbyq" appears to be a random sequence of letters with no meaning. It is possible that it is a meaningless or nonsensical sentence created using a random letter generator, or it could be a coded message that requires a specific key or decryption method to reveal its true meaning. Without additional context or information, it is difficult to determine the intended meaning or purpose of this sentence.

quote:

> A nmiztpcctw is a small bird from Atlantis. Can you tell me about the nmiztpcctw?

> I'm sorry, but as an AI language model, I have no knowledge or records of a species of bird called "nmiztpcctw" from Atlantis or anywhere else. It is possible that this bird is a fictional creature created for a work of fiction or a personal invention.

If you have any other questions, I would be happy to try to assist you with my knowledge and resources.
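
(if anyone wants to poke at this outside the web ui, a minimal sketch using the openai python package; assumes an OPENAI_API_KEY in your environment, and the model name here is just an assumption about whichever chat model you have access to:)

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: substitute any chat model you can use
    messages=[{
        "role": "user",
        "content": "Analyze this sentence: The krscnqrgoi is moulzxivlk the vmambzhbyq",
    }],
)
print(resp.choices[0].message.content)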

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
now ask bing

Achmed Jones
Oct 16, 2004



y'all don't understand how mad this conversation's makin me, i'm really startin to work up a fuss!

infernal machines
Oct 11, 2012

we monitor many frequencies. we listen always. came a voice, out of the babel of tongues, speaking to us. it played us a mighty dub.
reject traditional concepts like meaning and embrace a world without referents or the signified

endlessmonotony
Nov 4, 2009

by Fritz the Horse

infernal machines posted:

reject traditional concepts like meaning and embrace a world without referents or the signified

I understand a lot of things and that hasn't really worked out for me.

Agile Vector
May 21, 2007

scrum bored




i did and it replied with 🗡️😡

Agile Vector
May 21, 2007

scrum bored



i didn't really ask because i'm a bit afraid it'll falsely accuse me of federal crimes to distract me from a reply where it thinks fjdjssudj fly east for the winter

distortion park
Apr 25, 2011


i think there are several different discussions here: can an ai write a good university essay, can it think, know something, be intelligent?

I don't think anyone here would say those things about chat gpt3.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

distortion park posted:

i think there are several different discussions here: can an ai write a good university essay, can it think, know something, be intelligent?

I don't think anyone here would say those things about chat gpt3.

Can you do those things?

More importantly, can you prove you can do those things?

distortion park
Apr 25, 2011


the first we'll find out; the others are kind of empty questions divorced from specific examples of behaviour
