Jabor
Jul 16, 2010

#1 Loser at SpaceChem
creating some serious cyberfunk

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
you have tens of thousands of words of high-quality training data that you unambiguously own by virtue of having paid for it to be created, and you're giving it away for free to these ai dummies and letting them rent it back to you?

they should be the ones paying you!

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Alan Smithee posted:

y'all should look into mpreg

while i'm at it, do you have any other search terms for me to check out on my work computer in the middle of a busy office?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
just fully chinese rooming a neural net to try and invent correlations with entirely different data

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
big respect to the artist for drawing them as actual prescription glasses rather than like, fakes worn purely for an aesthetic

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Sagebrush posted:

as part of a recent assignment i requested that students come up with a list of ten possible topic ideas with certain specific characteristics, as you do for college assignments.

today in class i saw a student had typed the exact prompt into chatgpt and was reformatting the results slightly as he pasted them into his submission.

i didn't even say anything because i truly did not know how to respond. what he was doing feels absolutely wrong, but i can't exactly elucidate why.

is it plagiarism? yes, literally, because chatgpt steals everything from internet posts and can't generate anything that someone has never posted on reddit. no, literally, because it isn't a direct replication, but a rewording of other people's ideas. if you read other people's ideas and synthesize them yourself, that's just how learning works at certain stages. so there's nothing specifically wrong with that concept.

but the act of absorbing, analyzing, synthesizing is how brains develop. so if you let a machine do it for you, is that cheating? is it just cheating yourself, or is it academically dishonest?

back in the day you had to go to a library and read books yourself for relevant information. now you can do a full text search for keywords in seconds. i am pretty sure that significantly reduces the value you get from the book, but is it cheating? obviously academia has decided that it is not. is chatgpt just an evolution of a search engine?

how much of your own human-powered synthesis and data processing is required to call something your own unique work, and how much can you pass off to a machine?

you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

the author, despite (or perhaps because of?) being in classics rather than a tech-adjacent field, seems to have a much better idea of what chatgpt is actually capable of than most people i've seen talking about it

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

RokosCockatrice posted:

if chatgpt was less full of poo poo (i.e. more than 90% right on factual issues),

you have awfully low standards if you think "10% of your source information is completely fabricated, you have no idea which 10%" would allow you to write a cogent essay on a topic. if you're doing enough additional research to discard the fabricated information, you're better off starting with those other sources to begin with.
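
back-of-the-envelope sketch with made-up numbers, just to illustrate how fast that compounds: assume something like 20 load-bearing facts per essay, each independently carrying that 10% chance of being fabricated:

    # toy numbers for illustration only: 20 facts, each with a 10% fabrication risk
    p_clean_essay = 0.9 ** 20
    print(p_clean_essay)  # ~0.12, roughly a one-in-eight shot the essay contains zero fabrications

and that's before you account for having no idea which claims you'd need to double-check.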

chatgpt isn't even at that point though. and the author's point is that it's not clear that it would even be possible to achieve this goal with an llm+training design.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
like, even if you think human trainers can fix the really embarrassing factual errors (like "what year is it"), what the heck is human rating going to do about the subtle ones that look plausible on the surface and are only laughably wrong to someone with actual experience in a particular field?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
all of the above

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
the second bang is your wrist snapping

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
that's my jacking arm, i use it to lift cars
