endlessmonotony
Nov 4, 2009

by Fritz the Horse

Truman Peyote posted:

im more of a cyberhunk myself

I'm a cyborg. And I hate it. The future sucks.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

I'm going to gently caress it up.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Alan Smithee posted:

Tbf the lack of distinctiveness between 2000-present has messed with all of us

Maybe now with AI changing the landscape rapidly we’ll finally get some good music

It hasn't messed with me at all.

Because I lost over half a decade of memories when I got cyborgized.

The cyberpunk future is here, you just aren't doing it wrong enough.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
My timeline is "before subprime mortgage crisis -> EVERYTHING WENT WRONG AND NOTHING MAKES SENSE -> oh we have Trump now".

Which I suppose is just as true for everyone else, they just got old instead of being a cyborg now.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Beeftweeter posted:

:yossame:

something that just barfs all over EM is probably fine lol

It's not! :mad:

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Alan Smithee posted:

y'all should look into mpreg

ffmpreg, so it doesn't take most of the year to do anything.

Hideo you've done it again.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
The only thing I know is that I know nothing.

Which allows me to assess my confidence in the things I say and infer other people might also know nothing and thus might also be wrong.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
We've got pretty good ideas of what thought is from studying brains with parts missing (lesion studies), and what chatgpt is doing is not thinking.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

rotor posted:

the question is: is it only thinking when its thinking like we do? Does HOW its intelligence manifest itself matter? Or is the intelligence itself enough?

They cannot analyze the impact of their actions on the world ahead of time; they're not thinking at all.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

distortion park posted:

Is that the definition of thinking? I could imagine speaking to a person who for whatever reason couldn't do that and still, if asked, affirm that I believed they were thinking.

I think you might be a p-zombie, a robot, or a p-zombie robot.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

rotor posted:

I'm not making the argument that chatgpt is intelligent, i'm making the argument that we dont really know what intelligence is. We do, as you note, have some ideas what its NOT, but generally people have a lot of intuition about what they think intelligence is and i think that intuition frequently gets in the way of the discussion.

That's every concept.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

infernal machines posted:

reject traditional concepts like meaning and embrace a world without referents or the signified

I understand a lot of things and that hasn't really worked out for me.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

distortion park posted:

i think there's several different discussions here, can an ai write a good university essay, can it think, know something, be intelligent?

I don't think anyone here would say those things about chat gpt3.

Can you do those things?

More importantly, can you prove you can do those things?

endlessmonotony
Nov 4, 2009

by Fritz the Horse

:golfclap:

endlessmonotony
Nov 4, 2009

by Fritz the Horse
The joke is that RokosCockatrice is begging the question.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
Also important to note that the only thing that changed in the past few years is that machine-learning-generated speech no longer sounds like it's being played back through a PC speaker.

We could have built chatbots with gibberish output fifteen years ago - in fact, we did, and they had the same problems. We've been aware of these problems with understanding versus association and how they manifest for at least fifty years, and we're no closer to an answer today than we were then.
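Those fifteen-year-old chatbots were, roughly, Markov chains over a corpus: each word only "knows" what tended to follow it. A minimal sketch (toy corpus and order-1 chain are hypothetical, just to show the failure mode):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=10, seed=0):
    """Emit word salad: locally plausible pairs, no global meaning."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the future sucks and the future is here and the cyborg hates the future"
chain = build_chain(corpus)
print(babble(chain, "the"))  # every adjacent pair occurs in the corpus; the whole means nothing
```

Same problem then as now, just with a smaller association table: the output is stitched from things already seen, with no model behind it.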

distortion park posted:

I'm less certain that a) human intelligence is significantly different from a series of associations and b) a theory of the world couldn't "reside" in an artificial system in a similar way to that which it apparently does in a human brain, which is after all a physical object.

We have an easy answer to a, human intelligence is significantly different. We've got hundreds of years of thinking on this, including decades of hard data.

As for whether it's possible artificially... almost certainly, but there are a lot of unsolved problems left to go.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

mystes posted:

Huh? Most of the people using ChatGPT aren't even using it with speech synthesis. The output may not be as as useful as people are saying but the reason people are interested in it is because the (text) output is vastly better than previous software. Saying that the only thing that has changed is that speech synthesis is better makes absolutely no sense.

ChatGPT isn't new, it's just a new implementation of a technology we knew to be pretty much useless. We now have more computing power to throw at an approach that doesn't work.

There's a change in AI and it's speech synthesis that sounds like the source its data set came from. That's about it.

RokosCockatrice posted:

the deadpan delivery of half sarcastic remarks make your posts considerably harder to understand than a normal turing test

Ooh, the Turing test, so highly informative. lol. fizzbuzz for programmers who think they know cognition.

The medium is the message here. I'm demonstrating the problem!

endlessmonotony
Nov 4, 2009

by Fritz the Horse
What happened? We built autocomplete that sounds like it has schizophasia.
Why did we do it? VC money.

The real problem here is that we can't even define the problem we want to solve. There's enough philosophy about this problem I can pretend to be a smartass from thousands of years ago, a smartass from hundreds of years ago, a smartass from decades ago, and a smartass from now, all close enough you can't tell the difference.

The data itself shows that our decision making and meaning making processes aren't even in the order we would intuitively assume. We know this isn't thinking because cutting into human brains has revealed thinking definitely has components that don't exist here, and the machine can't make a mental model, it can only replicate output it has already seen.

If I were really capable of explaining the depth of this problem I'd be a far worse poster than I am now, and I have proof. That you can't see, because I ain't doxxing anyone.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
Natural language processing is much harder than people think and we're still a ways away, mostly because breaking down language into inputs requires a model of the world and none of these systems are equipped for that. We're not getting close to solving the problem; we haven't even demonstrated a proof of concept beyond feeding a system a lot of training data from constrained scenarios and asking it to fill in the blanks like automated mad libs. Insofar as that's artificial intelligence, it's also just an oversized if/then statement.
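The "oversized if/then statement" can be made literal. A toy rule-based responder in the ELIZA tradition (all patterns and canned replies here are hypothetical) is nothing but branching on surface text:

```python
import re

# Each rule is (pattern, canned response template): pure surface association,
# no model of the world behind any of it.
RULES = [
    (re.compile(r"\bmy (\w+) is broken\b", re.I), "Why do you say your {0} is broken?"),
    (re.compile(r"\bi feel (\w+)\b", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bhello\b", re.I), "Hello. What seems to be the problem?"),
]

def respond(utterance):
    """Scan the rules top to bottom; fill the first matching template."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # fallback when nothing matches

print(respond("hello there"))           # Hello. What seems to be the problem?
print(respond("my toaster is broken"))  # Why do you say your toaster is broken?
print(respond("quantum entanglement"))  # Tell me more.
```

Scale the rule table up by a few billion learned weights and you change the fluency, not the architecture of the problem: input pattern in, associated output out.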

You can connect it to a system with an internal state to control a system doing useful work, but that requires you to understand the connected system too, meaning you've reimplemented buttons but with words.

We're several orders of magnitude away from solving any of it with machine learning. It's going to take mathematicians who understand how people communicate to make any progress.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

distortion park posted:

we don't have much else to go on than peoples' evaluations of what is or isn't intelligent

Might as well say there's no other way to assess someone's programming skills than timing how long it takes for them to solve a fizzbuzz.

You can test for the components of intelligence, in computers and living beings alike. The problem is that if a machine learning system has the test in its training data, you need a new test different enough that it can't fill in the blanks. And if you make a standardized test, that test just becomes training data too.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

rotor posted:

if you showed chatgpt to an AI researcher from the 80s or 90s or even early 00s and asked them if it was an AI I think they'd have said yes. We know how it works so we're inclined to say that no its not intelligent but I am not sure whether knowing how it works should factor into the decision.

:wrong:

No, this was a part of AI research back then too.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

rotor posted:

we can agree to disagree.

:wrong:

endlessmonotony
Nov 4, 2009

by Fritz the Horse
Things aren't working better now, we're hitting the same wall we did back then, the wall we knew existed well before we had the computing power to test it.

Understanding the world requires a mental model of the world, and without it, you've got a system that repeats what it hears, and nothing else.

It's not a natural language processing breakthrough because it can't extract meaning from the words used, it can't translate what you want into controlling some other system.

Sagebrush is, very unfortunately, right. In that specific post. People thinking something seems intelligent is a stupid metric.

endlessmonotony
Nov 4, 2009

by Fritz the Horse
A system being better at replicating human speech doesn't make it more intelligent - or indeed, more useful - any more than house prices going up makes an economy healthy. Ironically, recognizing these specific fuckups in thinking is key to evaluating intelligence in the first place. What being a techie helps you understand is that this is a solution looking for a problem - it's completely unable to tell whether the information it's regurgitating is right or wrong, which means you need a use case for a robot that produces infinite amounts of vaguely plausible bullshit.

None of this is vaguely new, these are all known problems in research of cognition, perception, and intelligence.

Hell, my entire thing is focused on the topic of being wrong in banal, predictable ways that people avoid because they know them to be wrong, just so I can understand how people form their ideas of how they need to act to be right or wrong. And then not repeating that mistake myself, instead making more boring mistakes, because I know how hill climbing algorithms work and that's a mistake that's very easy to make.
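The hill-climbing mistake in question is getting stuck on the nearest peak. A minimal sketch (the one-dimensional landscape is hypothetical, chosen to have a small local peak and a bigger one further away):

```python
def hill_climb(f, x, step=1, max_iters=100):
    """Greedy ascent: move to a neighbor only if it scores strictly better."""
    for _ in range(max_iters):
        neighbors = [x - step, x + step]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x  # no improving neighbor: stuck, possibly on a local peak
        x = best
    return x

# Two peaks: a small one at x=2 (height 3) and a big one at x=10 (height 10).
def landscape(x):
    return max(3 - abs(x - 2), 10 - abs(x - 10))

print(hill_climb(landscape, 0))  # climbs to the nearby local peak, x = 2
print(hill_climb(landscape, 7))  # from here it reaches the global peak, x = 10
```

Where you start determines which wrong answer you confidently settle on, which is the point.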

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Cybernetic Vermin posted:

also lets move to the chatgpt thread please, i love the cyberpunk thread for *not* being about extremely shallow current tech garbage.

Agreed.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

Now that's some quality cyberpunk.

endlessmonotony
Nov 4, 2009

by Fritz the Horse

infernal machines posted:

who the gently caress is scraeming "LOG OFF" at my house. show yourself, coward. i am unable to log off

My life support updates its settings from the cloud.
