|
as part of a recent assignment i requested that students come up with a list of ten possible topic ideas with certain specific characteristics, as you do for college assignments. today in class i saw a student had typed the exact prompt into chatgpt and was reformatting the results slightly as he pasted them into his submission. i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

is it plagiarism? yes, literally, because chatgpt steals everything from internet posts and can't generate anything that someone hasn't already posted on reddit. no, literally, because it isn't a direct replication, but a rewording of other people's ideas. if you read other people's ideas and synthesize them yourself, that's just how learning works at certain stages. so there's nothing specifically wrong with that concept. but the act of absorbing, analyzing, and synthesizing is how brains develop. so if you let a machine do it for you, is that cheating? is it just cheating yourself, or is it academically dishonest?

back in the day you had to go to a library and read books yourself for relevant information. now you can do a full text search for keywords in seconds. i am pretty sure that significantly reduces the value you get from the book, but is it cheating? obviously academia has decided that it is not. is chatgpt just an evolution of a search engine? how much of your own human-powered synthesis and data processing is required to call something your own unique work, and how much can you pass off to a machine?

Sagebrush fucked around with this message at 08:55 on Feb 22, 2023 |
![]() |
|
![]()
|
# ? May 29, 2023 23:12 |
|
Sagebrush posted:as part of a recent assignment i requested that students come up with a list of ten possible topic ideas with certain specific characteristics, as you do for college assignments.

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it
|
![]() |
|
Milo and POTUS posted:Lock the applicants in a room with a turtle on its back
|
![]() |
|
chatgpt is just spicy autocomplete
|
![]() |
|
after the first few times chatgpt tried to gaslight me about information it had wrong, I’m convinced it’s just a more polished and wordy version of the customer support bots we already have.
|
![]() |
|
*opens ChatGPT box to find small man inside*
|
![]() |
|
Sagebrush posted:i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

if the student just types a prompt into ChatGPT and turns in the result as their work, then that's cheating. it's like buying an essay.
|
![]() |
|
either way, you should punish them arbitrarily
|
![]() |
|
Jabor posted:you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

this is good

quote:It is not, as we do, storing definitions or associations between those words and their real world referents, nor is it storing a perfect copy of the training material for future reference. ChatGPT does not sit atop a great library it can peer through at will; it has read every book in the library once and distilled the statistical relationships between the words in that library and then burned the library.
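the "burned the library" image can be shown with a toy bigram model. to be clear, this is a deliberately tiny sketch, nothing like chatgpt's actual architecture, and the "library" string is made up for illustration: distill word-pair counts from a text, delete the text, and generate from the counts alone.

```python
import random
from collections import Counter, defaultdict

# the toy "library": the only training text we have
library = "the wizard rides the broom and the wizard reads the book"

# distill the statistical relationships between adjacent words
counts = defaultdict(Counter)
words = library.split()
for a, b in zip(words, words[1:]):
    counts[a][b] += 1

# ...then burn the library. generation below never sees the text again
del library, words

def generate(word, n=6):
    """Emit words by sampling each successor from the stored counts."""
    out = [word]
    for _ in range(n):
        followers = counts[word]
        if not followers:  # dead end: "book" never had a successor
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # plausible-looking word salad assembled purely from counts
```

every adjacent pair it emits was observed in training, but the model keeps no copy of the source and has no idea what any of the words refer to.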
|
![]() |
|
it’s only fair: ask gpt what the punishment should be and just do it
|
![]() |
|
Sagebrush posted:i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

"computer, do my homework for me" is cheating no matter how you try and spin it. they're not "exploring new means of information summarisation" or "accessing automated means of machine-assisted collaboration" or any of the other wank people are trying to dress it up as. they're handing in work they didn't do themselves and hoping you wouldn't notice.

if you're conflicted then ask the student "did you get chatgpt to write this?" and see if they try and hide it
|
![]() |
|
infernal machines posted:either way, you should punish them arbitrarily
|
![]() |
|
Jabor posted:you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

me, a tech person, taking advice from someone in the humanities? on a technology related question at that? i scoff at the very thought!
|
![]() |
|
Chris Knight posted:the gently caress around and find out model works as well today as it did for Plato teaching the classics
|
![]() |
|
Plato's cave, but it's a gamer room
|
![]() |
|
echinopsis posted:it’s only fair : ask gpt what the punishment should be and just do it
|
![]() |
|
Kenny Logins posted:live by the sword die by the sword
|
![]() |
|
Alan Smithee posted:Plato's cave, but it's a gamer room

plato’s mancave
|
![]() |
|
I'm looking at the RGB light projections on the wall behind my monitor
|
![]() |
|
haveblue posted:plato’s mancave
|
![]() |
|
haveblue posted:plato’s mancave
|
![]() |
|
haveblue posted:plato’s mancave
|
![]() |
|
haveblue posted:plato’s mancave

*Diogenes' ears start burning*
|
![]() |
|
Jabor posted:you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

this guy understands chatgpt and ai better than almost any tech journalist writing about it. the ted chiang piece he references is worth a read though
|
![]() |
|
weird how the guy trained in critically reading sources and investigating unreliable claims in pursuit of putting together a cogent thesis is easily able to see through openai's bullshit. i love how the structure of the piece itself is a rebuttal of the arguments he's debunking - it's a perfect example of the kind of essay chatgpt could never produce
big scary monsters fucked around with this message at 18:08 on Feb 22, 2023 |
![]() |
|
this is something i would like to share with a client who's pretty insistent that chatgpt is a useful tool for summarizing and extracting information. unfortunately it's fairly long, and best case scenario he'd probably try to get an executive summary from chatgpt
|
![]() |
|
it's cheating and the student should be told so. my understanding is that the intent of the assignment is to test/exercise the students' ability to perform a task with certain tools - in this case the task is to generate a list of topics, and the tools are implied to be their own minds. the student breached the second condition. perhaps the takeaway is that we need to explicitly state the restrictions on tools that are ok to use?

like yes, you have access to calculators in the real world and knowing how to use them is useful, but it's still ok to make kids learn how to do sums without them since that's also a valuable skill for real life.

it's the same as signing up to learn karate and then proclaiming yourself grandmaster because you snuck up behind your sensei and clobbered them with a baseball bat. the object of the exercise isn't to just generate a given output, it's to show that you can do it under certain constraints.
|
![]() |
|
Jabor posted:you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

Mr Devereaux here states outright that statistical relationships between words are fundamentally different than knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, even though my only experience with harry potter and his magic broom is words arranged in a certain way on the pages of a book. how is that not knowledge, if chatgpt can both know and communicate about the same thing by also just knowing relationships between words?

He also defines an essay as the product of certain steps, so therefore gpt cannot create essays because it didn't do the legwork he prescribed. Which is garbage. You can fart out a good essay about a subject without going through his steps, and if you want to say "your prior knowledge of the effects of furry culture on the mascot suit industry maps to the same steps" then you can make the same argument about chatgpt's training and modeling and synthesis steps mapping just as easily.

The best argument he makes is that using chatgpt to create your college essays for you is bad because the essays it writes are bad (it doesn't adhere to the truth well enough). There's a much better argument to be made that the purpose of his class and his college in general is to teach students how to learn and think and synthesize new ideas, but that doesn't actually result in the conclusion "so obviously there's no place for language models in that process". If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and as a source for your own process of synthesizing ideas and trying to communicate them.
|
![]() |
|
RokosCockatrice posted:Mr Devereaux here states outright that statistical relationships between words are fundamentally different than knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, despite my only experience with harry potter and his magic broom is because they were arranged in a certain way on the pages of a book, how is that not knowledge if chatgpt can both know and communicate about the same thing by also just knowing relationships between words.

his point is that when you read harry potter you associated the concepts of wizard and broom, while chatgpt just associated the words "wizard" and "broom". to chatgpt they are not symbols, they are atomic elements that tend to occur in certain arrangements. if I wrote that harry is a hgkjrhasful who rides an ahjwtqy, a human reader would immediately choke on it, at the very least because those terms are not defined elsewhere in the work and more likely because it's obvious I'm just banging on the keyboard. the statistical model underlying chatgpt cannot reject inputs on this sort of basis, and that's a difference between the kinds of information processing it and real students do, according to this piece. it will happily internalize the garbage and write you a whole new story about the magical adventures the hgkjrhasfuls have with their ahjwtqys even if neither of those are defined anywhere in the entire text of Harry Potter and the Ffgadjghkan Jcoqsklhabdfkjczkns

quote:He also states that an essay is, of his own definition, certain steps to create as an essay, so therefore gpt cannot create essays because it didn't do the legwork he prescribed. Which is garbage. You can fart out a good essay about a subject without going through his steps, and if you want to say "your prior knowledge of the effects of furry culture on the mascot suit industry maps to the same steps" then you can make the same argument about chatgpt's training and modeling and synthesis steps mapping just as easily.

you cannot in fact fart out a good essay by doing that, one that will pass diligent inspection by a human familiar with the subject. chatgpt is just better and faster at farting out bad essays that will pass quick inspection by humans less familiar with the subject. so much better and so much faster that it's disruptive even though it's not producing anything of real value

quote:There's a much better argument to be made that the purpose of his class and his college in general is to teach students how to learn and think and synthesize new ideas, but that doesn't actually result in the conclusion "so obviously there's no place for language models in that process". If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and as a source for your own process of synthesizing ideas and trying to communicate them.

I think he does make that point? He talks about how the actual text isn't what's important to the learning process, it's just evidence that the student performed the research and thinking they were supposed to. in which case chatgpt is absolutely a counterproductive shortcut with no place in the process

kinda surprised chinese rooms haven't come up more in chatgpt discourse. that's basically what the language model is, a huge collection of relationships between opaque symbols. is comprehension an emergent property of a sufficiently large collection of such things? I don't think there is a clear or even widely accepted answer to that, but at any rate chatgpt isn't sufficiently advanced to be in that grey area (yet)
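the hgkjrhasful/ahjwtqy point is easy to demonstrate with a toy bigram counter. this is a sketch only, not chatgpt's actual mechanism, and both training sentences are invented for the example: to the statistics every token is an opaque atom, so keyboard mash is exactly as learnable as english.

```python
from collections import Counter, defaultdict

def train(text):
    """Store only which word follows which; words are opaque atoms."""
    counts = defaultdict(Counter)
    w = text.split()
    for a, b in zip(w, w[1:]):
        counts[a][b] += 1
    return counts

english  = train("harry is a wizard who rides a broom")
keyboard = train("harry is a hgkjrhasful who rides an ahjwtqy")

# the model has no basis on which to reject the second text: both are
# just equally well-formed tables of which atom follows which atom
print(english["a"].most_common())   # [('wizard', 1), ('broom', 1)]
print(keyboard["a"].most_common())  # [('hgkjrhasful', 1)]
```

a human reader rejects the second sentence on sight; the counter ingests both without complaint, which is the asymmetry the piece is pointing at.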
|
![]() |
|
chatgpt doesn't "know" anything
|
![]() |
|
The only thing I know is that I know nothing, which allows me to assess my confidence in the things I say and infer that other people might also know nothing and thus might also be wrong.
|
![]() |
|
It seems like chatgpt isn't that impressive, but we don't actually know how the human brain works, so it's sort of hard to compare them in that sense. We've probably overestimated chatgpt, but maybe we're overestimating the human brain too.
|
![]() |
|
Improbable Lobster posted:chatgpt doesn't "know" anything

if I asked it to write a story about how forums poster improbable lobster smells, it would do it, and it would be accurate

if that’s not knowledge idk what is
|
![]() |
|
i think that a lot of very stupid people got told that computers work like the human brain when they were young, assumed the opposite was also true and now think any moderately convincing chatbot is skynet
|
![]() |
|
Improbable Lobster posted:chatgpt doesn't "know" anything
|
![]() |
|
Improbable Lobster posted:i think that a lot of very stupid people got told that computers work like the human brain when they were young, assumed the opposite was also true and now think any moderately convincing chatbot is skynet

not really helped by calling our big piles of sums "neural networks" with "neurons" that perform "learning" and "artificial intelligence". it gives people all kinds of stupid ideas
|
![]() |
|
Same but our "education" system
|
![]() |
|
RokosCockatrice posted:Mr Devereaux here states outright that statistical relationships between words are fundamentally different than knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, despite my only experience with harry potter and his magic broom is because they were arranged in a certain way on the pages of a book, how is that not knowledge if chatgpt can both know and communicate about the same thing by also just knowing relationships between words.

source your quotes
|
![]() |
|
haveblue posted:kinda surprised chinese rooms haven't come up more in chatgpt discourse. that's basically what the language model is, a huge collection of relationships between opaque symbols. is comprehension an emergent property of a sufficiently large collection of such things? I don't think there is a clear or even widely accepted answer to that, but at any rate chatgpt isn't sufficiently advanced to be in that grey area (yet)

chinese room sucks, searle sucks (esp later searle)

just to be clear, deleuze sucks too, maybe even more
|
![]() |
|
|
the bottom line is that we can't even define what intelligence in humans is, so it's not really surprising we don't know what it is for machines.
|
![]() |