A Wizard of Goatse
Dec 14, 2014

Cingulate posted:

Humans don't learn the knowledge required to uphold civilization without supervision, so your definition is too strong.
That said, it's also too weak because the definition itself is fulfilled by e.g. ravens.

Who is doing the supervision? Is this a 'God actually told Newton what to do' kind of thing? Because if it's just more humans all the way down, then I think you're missing the entire point in your rush to um-actually pedantry.

A computer that is as smart and capable as a raven would be absolutely breathtaking science-fiction bullshit. Hell, you're never going to see a computer as smart as a budgie in your lifetime.

A Wizard of Goatse fucked around with this message at 00:38 on Jan 11, 2017

moebius2778
May 3, 2013

A Wizard of Goatse posted:

Who is doing the supervision, is this an angel on the shoulder kind of thing? Because if it's just more humans, then I think you're missing the entire point in your rush to um-actually pedantry.

A computer that is as smart and capable as a raven would be absolutely breathtaking science-fiction bullshit. Hell, you're never going to see a computer as smart as a budgie in your lifetime.

....It kinda sounds like you're looking for an AI as smart as the totality of the human species, rather than an AI that's as smart as an individual average human.

A Wizard of Goatse
Dec 14, 2014

what is it you suppose the totality of the human species is made of?

I can't tell whether you're arguing that a few tens of thousands of desktop computers dropped into Kenya could collectively bring about the rise of the machines in a few millennia, or whether you're just very, very stupid.

moebius2778
May 3, 2013

A Wizard of Goatse posted:

what is it you suppose the totality of the human species is made of?

Individual humans. But I don't imagine that if you matched my brain up against the brains of the entire human species, I'd come off looking very smart. I suspect your average person wouldn't either, but I'm still not sure what intelligence is.

Cingulate
Oct 23, 2012

by Fluffdaddy

A Wizard of Goatse posted:

Who is doing the supervision? Is this a 'God actually told Newton what to do' kind of thing?
No. But somebody told you who Newton is. That was you, benefiting from very supervised learning.

thechosenone
Mar 21, 2009

Subjunctive posted:

I think the Turing test is usually framed as being written, because perfect voice synthesis is hard and not really relevant to the core question.

Would you consider a machine to have passed if it couldn't be distinguished from a 5-year-old? Someone with dementia?

Kind of? Like, if it really felt like I was talking with a 5-year-old, I would be pretty impressed, at least. Not nearly as much as if it were an adult, but still impressed.

As it is, 'AIs' don't really even have that.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

thechosenone posted:

Kind of? Like, if it really felt like I was talking with a 5-year-old, I would be pretty impressed, at least. Not nearly as much as if it were an adult, but still impressed.

As it is, 'AIs' don't really even have that.

How about a 2-year-old? A written, detailed log of a newborn's activities? Or, I think more interestingly, a person with dementia or other serious neurological damage? I wouldn't say that such people have ceased to be intelligent, personally. That's not a philosophical line I'm comfortable crossing.

A Wizard of Goatse
Dec 14, 2014

are you planning on building an electric retard? what exactly is the point of this exercise to you

'talk like a human' is a serviceable standard of consciousness because language (which yes, includes writing, guy) is the readiest form by which we perceive the thoughts of others. Where it does not exist in humans, we infer it based on our knowledge of how humans, generally, operate, which is unlike the manner in which we know rocks or trees to. People, at least people outside the ones ITT who see no manner in which they are more generally competent than a Powerbook, do not assume rocks are actually the same as nonverbally autistic people. A computer that spits out random letters like a baby banging on a keyboard or outputs nothing at all like a coma patient is likewise not going to persuade anybody but weirdo masturbatory computer cultists that there is actually the awareness of a human - young, disabled, or otherwise - lurking inside there.

This isn't the only possible measure of intelligence, but it's one that'd settle the issue in most humans' eyes. I don't know if you think you're going to lawyer someone into accepting that your paperweight thinks and feels as much as you do or what but that's not really how it works.

A Wizard of Goatse fucked around with this message at 02:22 on Jan 11, 2017

thechosenone
Mar 21, 2009

Subjunctive posted:

How about a 2-year-old? A written, detailed log of a newborn's activities? Or, I think more interestingly, a person with dementia or other serious neurological damage? I wouldn't say that such people have ceased to be intelligent, personally. That's not a philosophical line I'm comfortable crossing.

Like, if it was sound-based? Probably still, yeah, since it's hard to even get that with AI. Like, a 2-year-old does a lot of stuff.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

I'm trying to figure out how much people want to test "is intelligent" versus "behaves like a human", and in the latter case what it means for a given capability to not be present for a human.

thechosenone
Mar 21, 2009

A Wizard of Goatse posted:

are you planning on building an electric retard? what exactly is the point of this exercise to you

'talk like a human' is a serviceable standard of consciousness because language (which yes, includes writing, guy) is the readiest form by which we perceive the thoughts of others. Where it does not exist in humans, we infer it based on our knowledge of how humans, generally, operate, which is unlike the manner in which we know rocks or trees to. People, at least people outside the ones ITT who see no manner in which they are more generally competent than a Powerbook, do not assume rocks are actually the same as nonverbally autistic people. A computer that spits out random letters like a baby banging on a keyboard or outputs nothing at all like a coma patient is likewise not going to persuade anybody but weirdo masturbatory computer cultists that there is actually the awareness of a human - young, disabled, or otherwise - lurking inside there.

This isn't the only possible measure of intelligence, but it's one that'd settle the issue in most humans' eyes. I don't know if you think you're going to lawyer someone into accepting that your paperweight thinks and feels as much as you do or what but that's not really how it works.

Are you talking to me? If you are: a person who was brain-dead, and a 2-year-old, probably couldn't interact with a keyboard. But if you could make a robot that could move around, act, and learn like a 2-year-old, that would be pretty drat impressive.

I don't think I personally have much in particular in mind, just that it would have to be something pretty hard to bullshit.

thechosenone
Mar 21, 2009

Subjunctive posted:

I'm trying to figure out how much people want to test "is intelligent" versus "behaves like a human", and in the latter case what it means for a given capability to not be present for a human.

Honestly, I would settle for something that is intelligent, or something that could do something really impressive. I wouldn't put much money on human-level AI, or even really a general-purpose learning AI. The idea is good for getting rich people to open their wallets for more realistic research, maybe.

A Wizard of Goatse
Dec 14, 2014

thechosenone posted:

Are you talking to me? If you are: a person who was brain-dead, and a 2-year-old, probably couldn't interact with a keyboard. But if you could make a robot that could move around, act, and learn like a 2-year-old, that would be pretty drat impressive.

I don't think I personally have much in particular in mind, just that it would have to be something pretty hard to bullshit.

That was directed at Subjunctive, the one who immediately responded to the concept of the Turing test with a digression about the difficulties of voice synthesis, and who is now apparently struggling with the idea that people don't generally figure that things that don't look like people or communicate like people are people. I agree with you on the learning point. I figure on the day you can drop the most advanced robot in the world into some random environment and it does as well at navigating the new terrain and carving out a little robot life for itself as, say, a squirrel in Manhattan would, we can start talking about "machine intelligence" as a real thing, and not as something that is to actual intelligence what strong liquor is to a bodybuilder's biceps.

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

A Wizard of Goatse posted:

people don't generally figure that things that don't look like people or communicate like people are people

Yes, quite. Are we talking about artificial intelligence, or artificial people?

Liquid Communism
Mar 9, 2004


Out here, everything hurts.

Subjunctive posted:

Yes, quite. Are we talking about artificial intelligence, or artificial people?

Can you meaningfully distinguish the two?

Char
Jan 5, 2013

A Wizard of Goatse posted:

The ability to problem-solve and independently learn and perform complex tasks in an uncontrolled environment, without supervision.

I'd add "given the limits of the tools available to interact with reality", which could be obvious, but maybe isn't. I don't think a hypothetical machine connected to just a photo camera and a 3D printer could ever achieve intelligence.

Liquid Communism posted:

Can you meaningfully distinguish the two?

An artificial chimpanzee would be an artificial intelligence without being an artificial person.

Cingulate
Oct 23, 2012

by Fluffdaddy

Char posted:

I'd add "given the limits of the tools available to interact with reality", which could be obvious, but maybe isn't. I don't think a hypothetical machine connected to just a photo camera and a 3D printer could ever achieve intelligence.


An artificial chimpanzee would be an artificial intelligence without being an artificial person.
Neither of these is uncontroversial.

Char
Jan 5, 2013

Cingulate posted:

Neither of these is uncontroversial.

I don't care much for the artificial intelligence/artificial people argument. I think they're different things, but I don't see how discussing it benefits the topic.

But why do you think there's no correlation between the quality and quantity of data an entity manages to gather about its surroundings, the complexity of the interactions such an entity can have with its surroundings, and its potential for intelligence?
A blind man still has an incredibly refined set of receptors and actuators to interact with reality - the focus of his intelligence shifts to the remaining receptors.

Cingulate
Oct 23, 2012

by Fluffdaddy

Char posted:

But why do you think there's no correlation between the quality and quantity of data an entity manages to gather about its surroundings, the complexity of the interactions such an entity can have with its surroundings, and its potential for intelligence?
I guess there's a pretty decent correlation, but where does that take us?

I don't think your "blind man" intelligence works, though. The visual cortex is doing a lot of really smart work (in sighted humans), but I wouldn't call it intelligence. It's all automatic, unconscious processes. (And of course, humans are beginning to be beaten by robots on a lot of the associated tasks.)

Phyzzle
Jan 26, 2008
A few definitions of intelligence may involve pattern recognition - not just repeating patterns in strings of numbers, but patterns in any abstract characteristic of a thing. You can test this by making up Bongard problems and seeing if the computer can solve them:
http://www.w-uh.com/posts/041122a-bongard_problems.html
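
To make concrete what "solving" one would mean: a Bongard problem gives six examples that satisfy a hidden rule and six that violate it, and the solver has to find a rule separating the two sides. Here's a toy, number-based sketch of that setup (real Bongard problems use pictures; these examples and candidate rules are made up purely for illustration):

code:

left  = [3, 9, 15, 21, 27, 33]    # six examples satisfying a hidden rule
right = [4, 10, 16, 22, 28, 34]   # six examples violating it

# Candidate rules a "solver" might hypothesize.
candidate_rules = {
    "is odd":          lambda n: n % 2 == 1,
    "divisible by 3":  lambda n: n % 3 == 0,
    "greater than 20": lambda n: n > 20,
}

def separates(rule):
    # A rule solves the problem if it holds for every left example
    # and fails for every right example.
    return all(rule(n) for n in left) and not any(rule(n) for n in right)

for name, rule in candidate_rules.items():
    print(name, "->", separates(rule))

# Both "is odd" and "divisible by 3" separate these sides; a real
# Bongard problem is constructed so that only one simple rule survives.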

Char
Jan 5, 2013

Cingulate posted:

I guess there's a pretty decent correlation, but where does that take us?

Back to

quote:

The ability to problem-solve and independently learn and perform complex tasks in an uncontrolled environment, without supervision.

So, is an octopus intelligent? I mean, I think it is. An octopus-level AI would be pretty impressive. What's "human level", then?

I think that "intelligence" is achieved by animals as well, given that type of interpretation. I just added the part regarding context because our hypothetical artificial machine would be created in a vacuum, so it should reach a level of intelligence comparable to its capability to influence its surroundings - something that animals managed to obtain over thousands of millennia of evolution.

But I think we're asking ourselves something more like "How far away are we from an AI that could develop into a civilization? Be it a single AI or a society of AIs?"
Mind that I'm putting a quite anthropocentric spin on this question by using the term "civilization".

So... why is there no octopus civilization? What are they lacking?

Bear with me, as I don't think I'm educated enough to put this whole concept into words, but I'll try nonetheless: to achieve what we'd call "problem-solving intelligence", you need to be able, as an entity, to handle a specific threshold of complexity in the reality surrounding you.
To achieve "civilization-level intelligence", what conditions did we humans meet that other intelligent animals didn't? It cannot be sheer brain size; whales have a huge brain compared to us and they still aren't civilization-capable. It cannot be the opposable thumb; monkeys have that as well.

It has to be something related to our advanced abstract thinking, which allows us, among other things, to write, read and talk. Now the question moves: what is abstract thinking made of? Why is ours so much better than other animals'?

So, where does that take us? Currently, "testing for intelligence" is a question that still needs development, but basically, if you can prove something can think in an abstract manner at least as well as a human does, it's civilization-intelligent. How do you prove that?
Can you build abstract thinking off neural networks?
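
To put a floor under that last question, the smallest thing "built off neural networks" looks like the sketch below: a tiny two-layer network learning XOR, the classic relation no single layer can represent. This is a toy under obvious assumptions (plain numpy, arbitrary seed and learning rate) and nowhere near abstract thinking; it just shows the raw material the question is asking about:

code:

import numpy as np

# Tiny 2-4-1 network trained on XOR, the smallest task that needs a
# hidden layer. A toy sketch, not a claim about abstract thinking.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # typically converges to [[0], [1], [1], [0]]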

Char fucked around with this message at 17:03 on Jan 11, 2017

Cingulate
Oct 23, 2012

by Fluffdaddy

Char posted:

So, is an octopus intelligent? I mean, I think it is. An octopus-level AI would be pretty impressive. What's "human level", then?

I think that "intelligence" is achieved by animals as well, given that type of interpretation.
And by AIs.

AIs match octopuses (octopi? ...), and actually also humans, on multiple cognitive dimensions, but not on others.

Char posted:

So... why is there no octopus civilization? What are they lacking?

Bear with me, as I don't think I'm educated enough to put this whole concept into words, but I'll try nonetheless: to achieve what we'd call "problem-solving intelligence", you need to be able, as an entity, to handle a specific threshold of complexity in the reality surrounding you.
To achieve "civilization-level intelligence", what conditions did we humans meet that other intelligent animals didn't? It cannot be sheer brain size; whales have a huge brain compared to us and they still aren't civilization-capable. It cannot be the opposable thumb; monkeys have that as well.
I guess as a linguist, I have to go with language - specifically, communicating near-arbitrary intentions and propositions. It's not sufficient to create human civilization as-is, but it seems to be the key difference between the cognitive (rather than material) situations we find ourselves in.

Of course, machines are pretty decent at generating powerful syntax and general pattern recognition, but they have problems with intentionality and meaning. They're bad at generating a mental representation of where the other person is coming from and what they're going for.

I don't think that's in itself necessary for AI to be True AI (you could imagine our robot overlords taking over without ever really figuring us out), nor am I very confident its antecedents (whatever makes us able to have, represent, and en- and decode intentions and propositions) are.
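
The "powerful syntax, no meaning" half of this is easy to demonstrate at toy scale. A word-bigram Markov chain (a deliberately crude sketch; the training text here is a made-up placeholder) reproduces the surface patterns of its input with no representation of intent whatsoever:

code:

import random
from collections import defaultdict

# Word-bigram Markov chain: record which word follows which, then
# sample. Pure surface pattern, no meaning or intent anywhere.
text = ("the octopus solves problems the raven solves puzzles "
        "the human builds civilization the machine plays go")
words = text.split()

model = defaultdict(list)
for a, b in zip(words, words[1:]):
    model[a].append(b)

random.seed(1)
w = random.choice(words)
out = [w]
for _ in range(12):
    w = random.choice(model[w]) if model[w] else random.choice(words)
    out.append(w)
print(" ".join(out))   # locally word-shaped, globally meaningless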

Char
Jan 5, 2013

Cingulate posted:

And by AIs.

AIs match octopuses (octopi? ...), and actually also humans, on multiple cognitive dimensions, but not on others.

I mostly agree with this. I've been wondering whether there was anything wrong in what you were expressing, and then it struck me. Something that screws with my thinking is looking into the purpose of the learning software we're developing: AlphaGo is meant to play Go and only Go; its purpose is limited. What about living organisms? Aren't they, basically, machines built to propagate and mix information? Which is not so limited - it's narrow but general.
Once again, I'm trying to draw comparisons - AlphaGo is extremely good at performing a very limited set of tasks (only one, actually), and it uses a specific tool: mathematical analysis. Since we're the ones developing it, it's never going to use anything other than what we allow.
Is there anything in nature that has similar limitations? Nature used all the chemical and physical tools it could manage, from carbon-oxygen reactions to electrical impulses to bioluminescence to adaptation to extreme ecosystems. An endless set of tools to fulfill that purpose.

So, would I be wrong in thinking that AlphaGo's problem solving is subconscious? Would that place AlphaGo somewhere between plants and insects? Spiders aren't social and have developed an extremely refined method for hunting - but how much of this process is conscious? And how much can they adapt their webbing to huge ecosystem alterations? Only natural mutation managed to differentiate the problem solving of organisms without a capable nervous system - allowing them to use, instead, a wider set of tools to fulfill their purpose.

We're designing our learning algorithms without the possibility of mutation, so they'll never adapt to different ecosystems (or different problems, different points of view) by themselves. We're forcing their hand: we need them to fulfill the tasks we're designing these intelligences for, and those tasks are way more specific than what nature gives to living organisms.

Limited toolsets, specific purposes: unlike nature, we're deliberately cherry-picking their cognitive dimensions, avoiding spontaneous alterations, so to speak.
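
For contrast, what "mutation" means in the one algorithmic family that does have it - evolutionary computation - is just random variation plus selection. A minimal sketch (the target string, population size, and rates are arbitrary, purely for illustration):

code:

import random

# Minimal evolutionary loop: mutate bit-string genomes at random and
# keep the fittest. The "fitness" here is a toy stand-in: match a
# fixed target string.
random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # each bit flips independently with small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # selection
    population = [mutate(random.choice(parents))   # variation
                  for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET))   # usually reaches 10 of 10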

quote:

I guess as a linguist, I have to go with language - specifically, communicating near-arbitrary intentions and propositions. It's not sufficient to create human civilization as-is, but it seems to be the key difference between the cognitive (rather than material) situations we find ourselves in.

Of course, machines are pretty decent at generating powerful syntax and general pattern recognition, but they have problems with intentionality and meaning. They're bad at generating a mental representation of where the other person is coming from and what they're going for.

I don't think that's in itself necessary for AI to be True AI (you could imagine our robot overlords taking over without ever really figuring us out), nor am I very confident its antecedents (whatever makes us able to have, represent, and en- and decode intentions and propositions) are.

I agree completely. First, language allows us to trick our biology and keep huge amounts of knowledge across generations - knowledge that would otherwise be lost.
Second, I don't think any true AI has to exactly match our "features": we need to communicate because our inherent weaknesses, compared to reality, force us to cooperate - social behaviour, given our ecosystem and physiology, offers one of the best compromises.
I can foresee an intelligent entity not needing social behaviour, but I cannot foresee one having no ability to understand the unknown. I think any social function an AI could have would be developed on the basis of hardcoded need. The human point of view on communicating is extremely biased, given our nature - our hardcoded needs.

And now I'm having a hard time imagining an entity, or entities living across iterative generations, achieving intelligence without sharing a basic trait with terran life: having a very unspecific purpose, and an endless array of tools to fulfill it.

So... I think civilization-level intelligence could be achieved by man-made machines that satisfied all the conditions described so far: being able to handle huge amounts of data, being able to adapt known frames to the unknown, and having an endless array of tools with which to attempt fulfilling a narrow but generalist purpose.

Char fucked around with this message at 13:18 on Jan 12, 2017
