Sagebrush
Feb 26, 2012

as part of a recent assignment i requested that students come up with a list of ten possible topic ideas with certain specific characteristics, as you do for college assignments.

today in class i saw a student had typed the exact prompt into chatgpt and was reformatting the results slightly as he pasted them into his submission.

i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

is it plagiarism? yes, literally, because chatgpt steals everything from internet posts and can't generate anything that someone has never posted on reddit. no, literally, because it isn't a direct replication, but a rewording of other people's ideas. if you read other people's ideas and synthesize them yourself, that's just how learning works at certain stages. so there's nothing specifically wrong with that concept. but the act of absorbing, analyzing, synthesizing is how brains develop. so if you let a machine do it for you, is that cheating? is it just cheating yourself, or is it academically dishonest?

back in the day you had to go to a library and read books yourself for relevant information. now you can do a full text search for keywords in seconds. i am pretty sure that significantly reduces the value you get from the book, but is it cheating? obviously academia has decided that it is not. is chatgpt just an evolution of a search engine?

how much of your own human-powered synthesis and data processing is required to call something your own unique work, and how much can you pass off to a machine?

Sagebrush fucked around with this message at 08:55 on Feb 22, 2023


Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Sagebrush posted:

as part of a recent assignment i requested that students come up with a list of ten possible topic ideas with certain specific characteristics, as you do for college assignments.

today in class i saw a student had typed the exact prompt into chatgpt and was reformatting the results slightly as he pasted them into his submission.

i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

is it plagiarism? yes, literally, because chatgpt steals everything from internet posts and can't generate anything that someone has never posted on reddit. no, literally, because it isn't a direct replication, but a rewording of other people's ideas. if you read other people's ideas and synthesize them yourself, that's just how learning works at certain stages. so there's nothing specifically wrong with that concept. but the act of absorbing, analyzing, synthesizing is how brains develop. so if you let a machine do it for you, is that cheating? is it just cheating yourself, or is it academically dishonest?

back in the day you had to go to a library and read books yourself for relevant information. now you can do a full text search for keywords in seconds. i am pretty sure that significantly reduces the value you get from the book, but is it cheating? obviously academia has decided that it is not. is chatgpt just an evolution of a search engine?

how much of your own human-powered synthesis and data processing is required to call something your own unique work, and how much can you pass off to a machine?

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it

Alan Smithee
Jan 4, 2005


A man becomes preeminent, he's expected to have enthusiasms.

Enthusiasms, enthusiasms...

Milo and POTUS posted:

Lock the applicants in a room with a turtle on its back

Chris Knight
Jun 5, 2002

And I'm only saying this because I care.

There are a lot of decaffeinated brands on the market today that are just as tasty as the real thing.


Fun Shoe
chatgpt is just spicy autocomplete

Beve Stuscemi
Jun 6, 2001




after the first few times of chatgpt trying to gaslight me about information it had wrong I'm convinced it's just a more polished and wordy version of the customer support bots we already have.

Alan Smithee
Jan 4, 2005


*opens ChatGPT box to find small man inside*

Chris Knight
Jun 5, 2002


Sagebrush posted:

i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.
maybe the student was just trying it out to see what would happen. that's fine. that's experimentation and should be encouraged.

if the student just types a prompt into ChatGPT and turns in the result as their work, then that's cheating. it's like buying an essay.

infernal machines
Oct 11, 2012

the future has already arrived. it's just not evenly distributed yet.
either way, you should punish them arbitrarily

infernal machines
Oct 11, 2012


Jabor posted:

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it

this is good

quote:

It is not, as we do, storing definitions or associations between those words and their real world referents, nor is it storing a perfect copy of the training material for future reference. ChatGPT does not sit atop a great library it can peer through at will; it has read every book in the library once and distilled the statistical relationships between the words in that library and then burned the library.
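that "burned the library" line can be sketched in a few lines of python. this is a deliberately tiny toy bigram model with a made-up corpus, and obviously the real thing is a neural net rather than a count table, but the storage point is the same: train on the text, throw the text away, and all that survives is statistics about which words follow which.

```python
import random
from collections import defaultdict

corpus = "the wizard rides a broom and the wizard casts a spell".split()

# count which word follows which: these statistics are ALL the
# model keeps -- the original text is not stored anywhere
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

del corpus  # "burned the library"

def generate(word, n=6):
    out = [word]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        nxt_words = list(followers)
        weights = [followers[w] for w in nxt_words]
        # sample the next word in proportion to how often it followed
        out.append(random.choices(nxt_words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

it can still produce plausible-looking wizard sentences afterwards, but you cannot get the library back out of it, only things shaped like the library.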

echinopsis
Apr 13, 2004

it's only fair: ask gpt what the punishment should be and just do it

Sweevo
Nov 8, 2007

i sometimes throw cables away

i mean straight into the bin without spending 10+ years in the box of might-come-in-handy-someday first

im a fucking monster

Sagebrush posted:

i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

"computer, do my homework for me" is cheating no matter how you try and spin it.

they're not "exploring new means of information summarisation" or "accessing automated means of machine-assisted collaboration" or any of the other wank people are trying to dress it up as. they're handing in work they didn't do themselves and hoping you won't notice.

if you're conflicted then ask the student "did you get chatgpt to write this?" and see if they try and hide it

Chris Knight
Jun 5, 2002


infernal machines posted:

either way, you should punish them arbitrarily
the gently caress around and find out model works as well today as it did for Plato

Deep Dish Fuckfest
Sep 6, 2006

Advanced
Computer Touching


Toilet Rascal

Jabor posted:

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it

me, a tech person, taking advice from someone in the humanities? on a technology related question at that? i scoff at the very thought!

infernal machines
Oct 11, 2012


Chris Knight posted:

the gently caress around and find out model works as well today as it did for Plato

teaching the classics

Alan Smithee
Jan 4, 2005


Plato's cave, but it's a gamer room

Kenny Logins
Jan 11, 2011

EVERY MORNING I WAKE UP AND OPEN PALM SLAM A WHITE WHALE INTO THE PEQUOD. IT'S HELL'S HEART AND RIGHT THEN AND THERE I STRIKE AT THEE ALONGSIDE WITH THE MAIN CHARACTER, ISHMAEL.

echinopsis posted:

it's only fair: ask gpt what the punishment should be and just do it
live by the sword die by the sword

Chris Knight
Jun 5, 2002


Kenny Logins posted:

live by the sword die by the sword
short word? what kinda z80 bullshit is this?!

haveblue
Aug 15, 2005


Toilet Rascal

Alan Smithee posted:

Plato's cave, but it's a gamer room

plato's mancave

NoneMoreNegative
Jul 20, 2000
GOTH FASCISTIC
PAIN
MASTER




shit wizard dad

I'm looking at the RGB light projections on the wall behind my monitor

Kenny Logins
Jan 11, 2011


haveblue posted:

plato's mancave
more like plato's man's grave

Chris Knight
Jun 5, 2002


haveblue posted:

plato's mancave
too early for a title change but that's great

ADINSX
Sep 9, 2003

Wanna run with my crew huh? Rule cyberspace and crunch numbers like I do?

haveblue posted:

plato's mancave

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

haveblue posted:

plato's mancave

*Diogenes ears start burning*

big scary monsters
Sep 2, 2011

-~Skullwave~-

Jabor posted:

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it

this guy understands chatgpt and ai better than almost any tech journalist writing about it. the ted chiang piece he references is worth a read though

big scary monsters
Sep 2, 2011

weird how the guy trained in critically reading sources and investigating unreliable claims in pursuit of putting together a cogent thesis is easily able to see through openai's bullshit. i love how the structure of the piece itself is a rebuttal of the arguments he's debunking - it's a perfect example of the kind of essay chatgpt could never produce

big scary monsters fucked around with this message at 18:08 on Feb 22, 2023

infernal machines
Oct 11, 2012

this is something i would like to share with a client who's pretty insistent that chatgpt is a useful tool for summarizing and extracting information. unfortunately it's fairly long, and best case scenario he'd probably just try to get an executive summary of it from chatgpt

Crazy Achmed
Mar 13, 2001

it's cheating and the student should be told so. my understanding is that the intent of the assignment is to test/exercise the students' ability to perform a task with certain tools - in this case the task is to generate a list of topics, and the tools are implied to be their own minds. the student breached the second condition.

perhaps the takeaway is that we need to explicitly state the restrictions on tools that are ok to use? like yes, you have access to calculators in the real world and knowing how to use them is useful, but it's still ok to make kids learn how to do sums without them since that's also a valuable skill for real life.

it's the same as signing up to learn karate and then proclaiming yourself grandmaster because you snuck up behind your sensei and clobbered them with a baseball bat. the object of the exercise isn't to just generate a given output, it's to show that you can do it under certain constraints.

RokosCockatrice
Feb 19, 2004

I have a lot of points to make and I will make them later.

Jabor posted:

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it

Mr Devereaux here states outright that statistical relationships between words are fundamentally different from knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, even though my only experience with harry potter and his magic broom is that the words were arranged in a certain way on the pages of a book. how is that not knowledge, if chatgpt can both know and communicate about the same thing by also just knowing relationships between words?

He also defines an essay by the specific steps taken to create it, so therefore gpt cannot create essays because it didn't do the legwork he prescribed. Which is garbage. You can fart out a good essay about a subject without going through his steps, and if you want to say "your prior knowledge of the effects of furry culture on the mascot suit industry maps to the same steps" then you can make the same argument about chatgpt's training and modeling and synthesis steps mapping just as easily.

The best argument he makes is that using chatgpt to create your college essays for you is bad because the essays it writes are bad (it doesn't adhere to the truth well enough).

There's a much better argument to be made that the purpose of his class and his college in general is to teach students how to learn and think and synthesize new ideas, but that doesn't actually result in the conclusion "so obviously there's no place for language models in that process". If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and as a source for your own process of synthesizing ideas and trying to communicate them.

haveblue
Aug 15, 2005



RokosCockatrice posted:

Mr Devereaux here states outright that statistical relationships between words are fundamentally different from knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, even though my only experience with harry potter and his magic broom is that the words were arranged in a certain way on the pages of a book. how is that not knowledge, if chatgpt can both know and communicate about the same thing by also just knowing relationships between words?

his point is that when you read harry potter you associated the concepts of wizard and broom while chatgpt just associated the words "wizard" and "broom". to chatgpt they are not symbols, they are atomic elements that tend to occur in certain arrangements. if I wrote that harry is a hgkjrhasful who rides an ahjwtqy, a human reader would immediately choke on it, at the very least because those terms are not defined elsewhere in the work and more likely because it's obvious I'm just banging on the keyboard. the statistical model underlying chatgpt cannot reject inputs on this sort of basis and that's a difference between the kinds of information processing it and real students do according to this piece. it will happily internalize the garbage and write you a whole new story about the magical adventures the hgkjrhasfuls have with their ahjwtqys even if neither of those are defined anywhere in the entire text of Harry Potter and the Ffgadjghkan Jcoqsklhabdfkjczkns
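the can't-reject-garbage point is easy to demonstrate with a toy statistical model (a hypothetical sketch, nothing like chatgpt's scale or architecture, but the failure mode is the same): nonsense tokens train exactly like real words, because counting co-occurrences never involves checking meaning.

```python
from collections import Counter

# a toy "language model": it only counts which word follows which,
# so it has no basis whatsoever for rejecting meaningless words
def train(text):
    words = text.split()
    return Counter(zip(words, words[1:]))

sensible = train("harry is a wizard who rides a broom")
garbage  = train("harry is a hgkjrhasful who rides an ahjwtqy")

# the nonsense pair is "learned" by exactly the same machinery --
# no error, no hesitation, no notion that hgkjrhasful isn't a word
print(garbage[("a", "hgkjrhasful")])
print(sensible[("a", "wizard")])
```

a human reader chokes on the second sentence immediately; the counter ingests both with identical enthusiasm.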

quote:

He also defines an essay by the specific steps taken to create it, so therefore gpt cannot create essays because it didn't do the legwork he prescribed. Which is garbage. You can fart out a good essay about a subject without going through his steps, and if you want to say "your prior knowledge of the effects of furry culture on the mascot suit industry maps to the same steps" then you can make the same argument about chatgpt's training and modeling and synthesis steps mapping just as easily.

you cannot in fact fart out a good essay by doing that, one that will pass diligent inspection by a human familiar with the subject. chatgpt is just better and faster at farting out bad essays that will pass quick inspection by humans less familiar with the subject. so much better and so much faster that it's disruptive even though it's not producing anything of real value

quote:

There's a much better argument to be made that the purpose of his class and his college in general is to teach students how to learn and think and synthesize new ideas, but that doesn't actually result in the conclusion "so obviously there's no place for language models in that process". If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and as a source for your own process of synthesizing ideas and trying to communicate them.

I think he does make that point? He talks about how the actual text isn't what's important to the learning process, it's just evidence that the student performed the research and thinking they were supposed to. in which case chatgpt is absolutely a counterproductive shortcut with no place in the process

kinda surprised chinese rooms haven't come up more in chatgpt discourse. that's basically what the language model is, a huge collection of relationships between opaque symbols. is comprehension an emergent property of a sufficiently large collection of such things? I don't think there is a clear or even widely accepted answer to that, but at any rate chatgpt isn't sufficiently advanced to be in that grey area (yet)
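for anyone who hasn't run into it: the chinese room is basically a lookup process over symbols the operator doesn't understand. a deliberately dumb sketch, with made-up symbol names standing in for the chinese characters:

```python
# searle's chinese room, reduced to code: the operator matches the
# input symbols against a rulebook and copies out the answer. the
# symbol names are arbitrary placeholders -- the whole point is that
# neither the operator nor the program needs to know what they mean
RULEBOOK = {
    ("SQUIGGLE", "SQUOGGLE"): ("PLOX", "ZARF"),
    ("ZARF",): ("SQUIGGLE",),
}

def chinese_room(symbols):
    # pure symbol shuffling: look up the sequence, or emit a stock
    # "symbol" when no rule matches
    return RULEBOOK.get(tuple(symbols), ("BLORT",))

print(chinese_room(["SQUIGGLE", "SQUOGGLE"]))  # a fluent-looking reply
```

from outside, the replies look competent; inside, it's table lookups all the way down. whether a vastly bigger table starts to count as comprehension is exactly the open question.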

Improbable Lobster
Jan 6, 2012

What is the Matrix 🌐? We just don't know 😎.


Buglord
chatgpt doesn't "know" anything

endlessmonotony
Nov 4, 2009

by Fritz the Horse
The only thing I know is that I know nothing.

Which allows me to assess my confidence in the things I say and infer other people might also know nothing and thus might also be wrong.

mystes
May 31, 2006

It seems like chatgpt isn't that impressive, but we don't actually know how the human brain works so it's sort of hard to compare them in that sense. We're probably overestimating chatgpt, but maybe we're overestimating the human brain too.

echinopsis
Apr 13, 2004

Improbable Lobster posted:

chatgpt doesn't "know" anything

if I asked it to write a story about how forums poster improbable lobster smells, it would do it, and it would be accurate

if that's not knowledge idk what is

Improbable Lobster
Jan 6, 2012

i think that a lot of very stupid people got told that computers work like the human brain when they were young, assumed the opposite was also true and now think any moderately convincing chatbot is skynet

Kenny Logins
Jan 11, 2011


Improbable Lobster posted:

chatgpt doesn't "know" anything

big scary monsters
Sep 2, 2011


Improbable Lobster posted:

i think that a lot of very stupid people got told that computers work like the human brain when they were young, assumed the opposite was also true and now think any moderately convincing chatbot is skynet

not really helped by calling our big piles of sums "neural networks" with "neurons" that perform "learning" and "artificial intelligence". it gives people all kinds of stupid ideas

mystes
May 31, 2006

Same but our "education" system

Achmed Jones
Oct 16, 2004



RokosCockatrice posted:

Mr Devereaux here states outright that statistical relationships between words are fundamentally different from knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, even though my only experience with harry potter and his magic broom is that the words were arranged in a certain way on the pages of a book. how is that not knowledge, if chatgpt can both know and communicate about the same thing by also just knowing relationships between words?

He also defines an essay by the specific steps taken to create it, so therefore gpt cannot create essays because it didn't do the legwork he prescribed. Which is garbage. You can fart out a good essay about a subject without going through his steps, and if you want to say "your prior knowledge of the effects of furry culture on the mascot suit industry maps to the same steps" then you can make the same argument about chatgpt's training and modeling and synthesis steps mapping just as easily.

The best argument he makes is that using chatgpt to create your college essays for you is bad because the essays it writes are bad (it doesn't adhere to the truth well enough).

There's a much better argument to be made that the purpose of his class and his college in general is to teach students how to learn and think and synthesize new ideas, but that doesn't actually result in the conclusion "so obviously there's no place for language models in that process". If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and as a source for your own process of synthesizing ideas and trying to communicate them.

source your quotes

Achmed Jones
Oct 16, 2004



haveblue posted:

kinda surprised chinese rooms haven't come up more in chatgpt discourse. that's basically what the language model is, a huge collection of relationships between opaque symbols. is comprehension an emergent property of a sufficiently large collection of such things? I don't think there is a clear or even widely accepted answer to that, but at any rate chatgpt isn't sufficiently advanced to be in that grey area (yet)

chinese room sucks, searle sucks (esp later searle)

just to be clear, deleuze sucks too, maybe even more


rotor
Jun 11, 2001

classic case of pineapple derangement syndrome
the bottom line is that we can't even define what intelligence in humans is, so it's not really surprising we don't know what it is for machines.
