  • Locked thread
Max Facetime
Apr 18, 2009

Amethyst posted:

if we build a machine that can classify any input stimulus one million times faster than we can, and react to it based on an evolving expert system several times larger and with much better efficacy than a human brain, can we really say it's not "true" intelligence just because it's not aware

if it can't apologize for something it has done before being told that it should apologize we can certainly say it's not very intelligent

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
animal cognition can probably teach us more about intelligence than ivory tower men huffing farts and declaring strong AI is unattainable

Arcteryx Anarchist
Sep 15, 2007

Fun Shoe
why do so many people get a hard-on for strong ai?

like we're doing tons of applicable stuff already without that; in a world that demands ever more specialization from its population what value does strong ai really hold? I mean it might get bored or something

"what if the tractors could think as they spend 12 hours plowing a field"

echinopsis
Apr 13, 2004

by Fluffdaddy
also studies on consciousness show, for example, that things happen before you're aware of them, implying that consciousness is perhaps a projection of what's already happening in your brain, meaning consciousness can't affect anything, it's only a display, and therefore perhaps unnecessary

or not

i was big time into consciousness for a while and read all the text books i could find etc

duTrieux.
Oct 9, 2003

echinopsis posted:

also studies on consciousness show, for example, that things happen before you're aware of them, implying that consciousness is perhaps a projection of what's already happening in your brain, meaning consciousness can't affect anything, it's only a display, and therefore perhaps unnecessary

i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization is itself an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts.

this implies that zoning out while doing something boring and repetitive is what it feels like to be conscious, but not really, like, conscious, you know

duTrieux.
Oct 9, 2003

if one of our machines developed self-awareness and immediately tried to kill itself would that be hosed up or what

duTrieux.
Oct 9, 2003

like, what if the first thing that skynet did on august 29th 1997 was to call a suicide hotline

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮

duTrieux. posted:

if one of our machines developed self-awareness and immediately tried to kill itself would that be hosed up or what

in one of the culture novels, it’s stated that “pure” AIs (ones without cultural baggage) sublime as quickly as they develop the means to

duTrieux.
Oct 9, 2003

Silver Alicorn posted:

in one of the culture novels, it’s stated that “pure” AIs (ones without cultural baggage) sublime as quickly as they develop the means to

nice.

i should start reading those.

Amethyst
Mar 28, 2004

I CANNOT HELP BUT MAKE THE DCSS THREAD A FETID SWAMP OF UNFUN POSTING
plz notice me trunk-senpai

duTrieux. posted:

i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization is itself an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts.

this implies that zoning out while doing something boring and repetitive is what it feels like to be conscious, but not really, like, conscious, you know

That's an interesting way to put it

I suppose that even if consciousness has no part in instantaneous impulses it can still structure the environment that fires those impulses

JewKiller 3000
Nov 28, 2006

by Lowtax
those conscious choice tests work like this: you sit in front of some buttons you can press, and you may freely decide which one. you can take as much time as you need. there is a clock in the room. you are also hooked up to a brain scanny machine. once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds
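for reference, the setup described above can be mocked up as a toy simulation. every number in it is hypothetical: the ~60% decoder accuracy and ~5 s lead time are roughly the figures the fMRI studies reported, but `run_trial` just illustrates the claim being made, not any real experiment's analysis:

```python
import random

def run_trial(rng):
    # Hypothetical timeline, in seconds. An early neural "bias" signal
    # fixes the eventual choice well before the subject reports deciding;
    # the decoder reads that signal, so its prediction leads the report.
    bias = rng.choice(["left", "right"])
    neural_signal_t = 0.0       # decoder can read the bias from here on
    reported_decision_t = 5.0   # subject: "I decided now" (button press follows)
    # The decoder is only somewhat reliable, like the ~60% accuracy
    # reported for the actual fMRI experiments.
    decoder_guess = bias if rng.random() < 0.6 else ("left" if bias == "right" else "right")
    lead_time = reported_decision_t - neural_signal_t
    return decoder_guess == bias, lead_time

rng = random.Random(0)
results = [run_trial(rng) for _ in range(10_000)]
accuracy = sum(hit for hit, _ in results) / len(results)
print(f"decoder accuracy: {accuracy:.2f}, lead time: {results[0][1]:.0f}s")
```

the point of framing it this way: a decoder that is right only 60% of the time, seconds in advance, is compatible with both "the choice was already made" and much weaker readings.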

echinopsis
Apr 13, 2004

by Fluffdaddy

duTrieux. posted:

i don't believe that this implies that consciousness is unnecessary; i think the ability to construct a rationalization is itself an expression of consciousness. my operating assumption is that our conscious experience is a collection of both post-hoc rationalization and higher-level executive planning/decisions/thoughts.

this implies that zoning out while doing something boring and repetitive is what it feels like to be conscious, but not really, like, conscious, you know

well the experts would somewhat agree with your last statement, but imply that "autopilot" isn't consciousness.. and things like buddhism or meditation really make the most of what consciousness can be:

mindfulness is a buzz word of late but it basically describes being conscious and aware: clearly our brains can perform complex tasks on autopilot. gently caress, i check scripts on autopilot all the time - i've trained my brain with rules so if something looks out of place then consciousness kicks in and i can analyse and rationalise

you do raise interesting points. they call consciousness "the hard problem", because consciousness is absolutely like nothing else in the universe at all. all the current evidence seems to discredit ways of understanding consciousness and asks a lot of questions but doesn't give a lot of answers (just a lot of pointing out how wrong the theories are)

JewKiller 3000
Nov 28, 2006

by Lowtax

echinopsis posted:

consciousness is absolutely like nothing else in the universe at all

you don't know that, you don't have a single piece of objective evidence to support that claim, and neither does anyone else who makes it

atomicthumbs
Dec 26, 2010


We're in the business of extending man's senses.
fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source

quad untenable if you're working on gait recognition

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?
it's worth reading Lenat's "The Nature of Heuristics" to see what the symbolic-reasoning people were up to

also "Theory Formation by Heuristic Search" and "Eurisko: A Program That Learns New Heuristics and Domain Concepts"

not a neural net in the bunch, just a whole lot of frames in a custom language atop Lisp
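for context, a "frame" in those papers is basically a slot-and-filler record, and Eurisko's heuristics were themselves frames the system could inspect and mutate. a very loose sketch of the idea in Python (the real system used RLL on top of Lisp and looked nothing like this in detail; every name and example below is made up):

```python
# Toy frame system: each frame is just named slots; heuristics are frames
# too, so the program can treat its own rules as data.
def make_frame(name, **slots):
    return {"name": name, **slots}

numbers = make_frame("numbers", examples=[1, 2, 3, 4, 6, 8, 9, 12, 16])

# A heuristic frame: "if a concept has many examples, try specializing it"
# (here, hard-coded to carve out the perfect squares).
specialize = make_frame(
    "specialize-concept",
    if_=lambda frame: "examples" in frame and len(frame["examples"]) > 5,
    then=lambda frame: make_frame(
        frame["name"] + "-squares",
        examples=[n for n in frame["examples"] if int(n ** 0.5) ** 2 == n],
    ),
)

def run_agenda(concepts, heuristics):
    # One pass of the discovery loop: fire every applicable heuristic
    # and keep whatever new concept frames it proposes.
    new = [h["then"](c) for c in concepts for h in heuristics if h["if_"](c)]
    return concepts + new

concepts = run_agenda([numbers], [specialize])
print([c["name"] for c in concepts])
print(concepts[1]["examples"])
```

the interesting part of the real thing is exactly what this sketch omits: heuristics that rewrite the slots of other heuristics, which is what made Eurisko self-improving in a limited sense.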

eschaton fucked around with this message at 08:36 on Sep 9, 2017

Amethyst
Mar 28, 2004

I CANNOT HELP BUT MAKE THE DCSS THREAD A FETID SWAMP OF UNFUN POSTING
plz notice me trunk-senpai

atomicthumbs posted:

fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source

quad untenable if you're working on gait recognition

Agreed

ultravoices
May 10, 2004

You are about to embark on a great journey. Are you ready, my friend?

JewKiller 3000 posted:

those conscious choice tests work like this: you sit in front of some buttons you can press, and you may freely decide which one. you can take as much time as you need. there is a clock in the room. you are also hooked up to a brain scanny machine. once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds

With anything involving an fMRI, the N is usually something like 10, so you need to take the results with a massive grain of salt.

Max Facetime
Apr 18, 2009

JewKiller 3000 posted:

once you've made a decision, before you press the button, you note the time when you committed to that choice. apparently the researchers looking at the machine can consistently predict your choice before you are conscious of having made it. and it's not a few milliseconds before either, it's like 5 seconds

knowing that, you can reason like this: "it's now 15:24:25, which means I must have made the decision at about 15:24:20, so that's what I'll write down here"

what will the test say about consciousness then?

muckswirler
Oct 22, 2008

eschaton posted:

it's worth reading Lenat's "The Nature of Heuristics" to see what the symbolic-reasoning people were up to

also "Theory Formation by Heuristic Search" and "Eurisko: A Program That Learns New Heuristics and Domain Concepts"

not a neural net in the bunch, just a whole lot of frames in a custom language atop Lisp

Good post.

echinopsis
Apr 13, 2004

by Fluffdaddy

JewKiller 3000 posted:

you don't know that, you don't have a single piece of objective evidence to support that claim, and neither does anyone else who makes it

that's just like, your opinion man

Inexplicable Humblebrag
Sep 20, 2003

Smythe posted:

my friend, begone of this thread. or perish

thanks for chasing off the anime retard, smythe

i picked up this Bostrom book from one of the usual NMN sci-fi thread cheap-book dumps and although it's kinda heavy going (still not finished it), it deals compellingly with the issues inherent in creating an artificial intelligence that has capacity for self-improvement. namely, how do we deal with a superintelligent, not-guaranteed-to-be-acting-in-our-best-interests entity?

echinopsis
Apr 13, 2004

by Fluffdaddy
we install kill switches and also make sure they cant gently caress and reproduce

Inexplicable Humblebrag
Sep 20, 2003

Well then why even bother

(This gets addressed too)

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

echinopsis posted:

we install kill switches and also make sure they cant gently caress and reproduce

whoa whoa whoa, why shouldnt they be able to gently caress??

Inexplicable Humblebrag
Sep 20, 2003

really though the kill switch will be either internal or external. if it's internal it will have to trigger something in the mind of a superintelligent entity that can edit its own makeup (probably its code) and may be able to work around it (e.g. feed output from the killswitch into a VM)

if it's external then your security relies on humans doing the right thing when confronted with a superintelligent, possibly extremely persuasive, entity.

iirc there's like a chapter on this that goes into some actual detail. it's a good book
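the external variant is essentially a dead-man's switch: the monitored process has to keep proving itself to a supervisor it can't touch, and silence counts as failure. a minimal sketch of that pattern, with hypothetical names and a placeholder kill action (and, per the argument above, no pretense that this would restrain anything actually superintelligent):

```python
import time

class Watchdog:
    """External dead-man's switch: if the monitored process stops checking
    in, or checks in with a bad status, the default action fires. The whole
    point is that it runs outside the agent's control; here that separation
    is only simulated in one process."""

    def __init__(self, timeout_s=2.0, kill=lambda: None):
        self.timeout_s = timeout_s
        self.kill = kill
        self.last_heartbeat = time.monotonic()
        self.killed = False

    def heartbeat(self, status_ok: bool):
        # A bad status trips the switch immediately; a good one resets the clock.
        if not status_ok:
            self._trip()
        self.last_heartbeat = time.monotonic()

    def poll(self, now=None):
        # Silence is treated as failure: no heartbeat within the window trips it.
        now = time.monotonic() if now is None else now
        if not self.killed and now - self.last_heartbeat > self.timeout_s:
            self._trip()

    def _trip(self):
        self.killed = True
        self.kill()

events = []
dog = Watchdog(timeout_s=2.0, kill=lambda: events.append("power cut"))
dog.heartbeat(status_ok=True)            # agent behaving: clock resets
dog.poll(now=dog.last_heartbeat + 1.0)   # within the window: nothing happens
dog.poll(now=dog.last_heartbeat + 3.0)   # agent went quiet: switch trips
print(events)
```

note the fail-closed design choice: the dangerous state is "watchdog persuaded to do nothing", so doing nothing has to be what triggers the cut.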

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

echinopsis posted:

we install kill switches and also make sure they cant gently caress and reproduce

because of course a super intelligence wouldn't be able to subvert its own programming and disable any software kill switch

needs to be a locally-triggered physical power cut, carried out by someone who can't be persuaded not to

duTrieux.
Oct 9, 2003

wouldn't it be easier to develop AI that intrinsically respects all life and consciousness actually yeah let's just wire a digital shotgun to their heads what can go wrong

duTrieux.
Oct 9, 2003

atomicthumbs posted:

fun fact: working on a facial recognition algorithm at this point is morally untenable, especially if you make it open source

quad untenable if you're working on gait recognition

angry_keebler
Jul 16, 2006

In His presence the mountains quake and the hills melt away; the earth trembles and its people are destroyed. Who can stand before His fierce anger?

lancemantis posted:

why do so many people get a hard-on for strong ai?

it's religion for the anxious agnostics

we'll build god and then he'll solve our problems and because he's so smart he'll invent immortality drugs and the matrix so i can live forever in whatever reality i want and I'll have a cool robot body with a six pack and babes will kiss and hug my robot body

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
I would definitely give up my lame flesh body for a robot body

Cybernetic Vermin
Apr 18, 2005

the idea of there being a hierarchy of intelligence, with some kind of "super intelligence" possible, is one of the many misguided ideas we have inherited from plato, who wanted to imply that a philosopher can sit down cross-legged and pierce the veil of reality through sheer force of reason

far more likely any real understanding is entirely limited by the way we interface with reality, progress is down to what experiments can be run and what they can tell us, there being truly simple explanations for complex phenomena is looking less and less probable (and it is helpful for every person to introspect a bit on *why* there would be simple explanations)

end result being that strong artificial intelligence is not so much impossible as it is quite irrelevant. we have brains already, and greater general intelligences will provide us with little

Max Facetime
Apr 18, 2009

Cybernetic Vermin posted:

there being truly simple explanations for complex phenomena is looking less and less probable (and it is helpful for every person to introspect a bit on *why* there would be simple explanations)

because gratuitous irreducible complexity and special cases are likely to be exploitable in some fashion?

Workaday Wizard
Oct 23, 2009

by Pragmatica
what if an ai that can make ai?

RISCy Business
Jun 17, 2015

bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork bork
Fun Shoe

Shinku ABOOKEN posted:

what if an ai that can make ai?

what if ai, but too much

Workaday Wizard
Oct 23, 2009

by Pragmatica
no but seriously, is there any effort to make an ai that generates programs, possibly better ais?

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

Shinku ABOOKEN posted:

no but seriously, is there any effort to make an ai that generates programs, possibly better ais?

how do you define a 'better ai' :smug:

Bored Online
May 25, 2009

We don't need Rome telling us what to do.

Silver Alicorn posted:

I would definitely give up my lame flesh body for a robot body

but what if you can only be five feet tall

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Shinku ABOOKEN posted:

no but seriously is there any effort to make an ai that generates programs, possible better ais?

read the links I posted earlier in the thread, they're about an old design for what's effectively a self-improving AI

and there's not much difference between an AI improving itself or improving a copy of itself
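the closest existing things are genetic programming and program/architecture search, where one program searches a space of candidate programs. the core loop is short enough to sketch; this toy evolves arithmetic expressions toward a hidden target function, which is a long way from "better ais" but is the same shape of loop (all names and numbers are made up, and real GP would use crossover and mutation rather than fresh random refills):

```python
import random

# Candidate "programs" are expression strings over x; fitness is squared
# error against a hidden target, x*x + 1, on a few sample points.
OPS = ["x", "1", "(| + |)", "(| * |)"]

def random_prog(rng, depth=3):
    choice = rng.choice(OPS if depth > 0 else OPS[:2])
    if "|" in choice:
        left = random_prog(rng, depth - 1)
        right = random_prog(rng, depth - 1)
        return choice.replace("|", left, 1).replace("|", right, 1)
    return choice

def fitness(prog):
    # Lower is better; malformed programs (none arise here) score worst.
    try:
        return sum((eval(prog, {"x": x}) - (x * x + 1)) ** 2 for x in range(-3, 4))
    except Exception:
        return float("inf")

def evolve(generations=40, pop_size=60, seed=1):
    rng = random.Random(seed)
    pop = [random_prog(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        # Elitism keeps the best quarter; the rest are fresh random programs.
        keep = pop_size // 4
        pop = pop[:keep] + [random_prog(rng) for _ in range(pop_size - keep)]
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

the loop reliably improves on trivial guesses, but whether it ever finds the exact target depends on luck and the search operators, which is the honest summary of the whole field circa this thread.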

MSPain
Jul 14, 2006
Fartificial intelligence

Silver Alicorn
Mar 30, 2008

𝓪 𝓻𝓮𝓭 𝓹𝓪𝓷𝓭𝓪 𝓲𝓼 𝓪 𝓬𝓾𝓻𝓲𝓸𝓾𝓼 𝓼𝓸𝓻𝓽 𝓸𝓯 𝓬𝓻𝓮𝓪𝓽𝓾𝓻𝓮
Dongslayer. posted:

but what if you can only be five feet tall

after being 6’2” most of my life it would sorta be a welcome change
