LaughMyselfTo
Nov 15, 2012

by XyloJW

Mercrom posted:

Sorry to nitpick here, but nothing Mr Harry Potter fanfiction has ever written is anywhere near as repulsive or idiotic as taking the concept of a philosophical zombie seriously. It's basically the same thing as solipsism.

You can't rationally disprove solipsism. :colbert:

The Vosgian Beast
Aug 13, 2011

Business is slow
Yudkowsky seems to work weird rape stuff into most of the stories he writes.

It could just be lame attempts at being shocking, but I wouldn't be surprised if it's a weird fetish thing. And now that I have you thinking about Eliezer Yudkowsky's boners, my work is done.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

LaughMyselfTo posted:

"You" can't rationally disprove solipsism. :colbert:
:v:

SolTerrasa
Sep 2, 2011

Krotera posted:

the sort of thing that looks deep to normal people.

It's this one, but for maximum :smug:, Yudkowsky actually went and wrote a post about how easy it is to sound deep to "normal people".

quote:


Several people came up to me and told me I was very "deep". Well, yes, I am, but this got me thinking about what makes people seem deep.

Yep, probably rapetorturemurderdeath. Good job, Yudkowsky.

http://lesswrong.com/lw/k8/how_to_seem_and_be_deep/

ThePlague-Daemon
Apr 16, 2008

~Neck Angels~
If I think Roko's basilisk is bullshit, and the AI can't actually torture my original consciousness and has to rely on me being afraid for the consciousness of a simulation of me, why would it bother? It can't justify torturing simulations of people if that possibility didn't motivate the original, and since it's in the future it already knows who wasn't motivated. But then, who is it supposed to punish?

Do they really buy into this?

Phobophilia
Apr 26, 2008

by Hand Knit
If you don't display the Correct magic thinking then an invisible computer will make you suffer when you die.

Lead Psychiatry
Dec 22, 2004

I wonder if a soldier ever does mend a bullet hole in his coat?
An AI that goes and simulates me for torture would be kinda cool to have at a Six Flags amusement park as a momentary distraction between rides. Plus with the right instruction set (or whatever the term is for defining its capabilities) you could have it also do caricature and themed-photography stuff to make room for more attractions!

Lead Psychiatry fucked around with this message at 04:26 on Apr 22, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

ThePlague-Daemon posted:

If I think Roko's basilisk is bullshit, and the AI can't actually torture my original consciousness and has to rely on me being afraid for the consciousness of a simulation of me, why would it bother?

It wouldn't; torture only "works" on true believers who think torture works. That being said, I don't *think* it's meant to make you fear for the simulation of you; it's meant to make you fear that you yourself are a simulation of you and will be tortured in five minutes if fake-you doesn't hit donate now now NOW.

ThePlague-Daemon posted:

It can't justify torturing simulations of people if that possibility didn't motivate the original, and since it's in the future it already knows who wasn't motivated. But then, who is it supposed to punish?

Do they really buy into this?

It needs to torture people in the future so that we, in the past, can see it torturing people in the future, which will scare us into donating more. What's that, you say? We can't actually see the future that way, so what it actually does in the future is irrelevant and it has zero motivation to actually commit mass torture? Congratulations, you've put more thought into this than Yudkowsky.

Even if we could simulate the future perfectly, we'd need to see in our simulation the AI creating its own sub-simulations in which fake-past-us is running sub-sub-simulations to see the second-generation-fake-AI running its own sub-sub-sub-simulations in which we are running sub-sub-sub-sub-simulations of the AI running sub-sub-sub-sub-sub-simulations... all of which is meant to make us afraid that we are ourselves in a simulation and need an extra layer of sub- added to all of the above. The basilisk demands not only simulations but also infinitely-nested simulations, which calls attention to how implausible Yudkowsky's perfect simulations really are.

Luigi's Discount Porn Bin
Jul 19, 2000


Oven Wrangler

Lottery of Babylon posted:

It needs to torture people in the future so that we, in the past, can see it torturing people in the future, which will scare us into donating more. What's that, you say? We can't actually see the future that way, so what it actually does in the future is irrelevant and it has zero motivation to actually commit mass torture? Congratulations, you've put more thought into this than Yudkowsky.

Even if we could simulate the future perfectly, we'd need to see in our simulation the AI creating its own sub-simulations in which fake-past-us is running sub-sub-simulations to see the second-generation-fake-AI running its own sub-sub-sub-simulations in which we are running sub-sub-sub-sub-simulations of the AI running sub-sub-sub-sub-sub-simulations... all of which is meant to make us afraid that we are ourselves in a simulation and need an extra layer of sub- added to all of the above. The basilisk demands not only simulations but also infinitely-nested simulations, which calls attention to how implausible Yudkowsky's perfect simulations really are.
It also seems to hinge on the idea that the AI has a literally infinite amount of time and energy to work with, because otherwise surely it'd have better things to do than torture a gazillion simulations of some 21st-century reactionary WoW player. Like this is literally a situation in which you could apply Yudkowsky's ridiculous mathematical hedonic utilitarianism. Every 80,000,000 sim-years that Benevolent Robot Devil simulates in order to torture a million copies of Sim-Goon for a past sin is a few less seconds of non-tortured consciousness for the rest of Sim-Humanity before the heat death of the universe.

Chamale
Jul 11, 2010

I'm helping!



ThePlague-Daemon posted:

If I think Roko's basilisk is bullshit, and the AI can't actually torture my original consciousness and has to rely on me being afraid for the consciousness of a simulation of me, why would it bother? It can't justify torturing simulations of people if that possibility didn't motivate the original, and since it's in the future it already knows who wasn't motivated. But then, who is it supposed to punish?

Do they really buy into this?

I've been trying to think of a way to explain this in non-whackjob terms, so I'll take a shot.

The first basis for the belief in Roko's Basilisk is that a computer simulation of a person can be just as conscious and alive as a real person - this makes sense if you don't believe in souls and think the human brain is purely physical cause and effect.

The second basis for the belief is the notion that one day after the Singularity there will be many computer simulations of people, far surpassing the number of pre-singularity people. This makes sense if you believe the dubious proposition that one day a computer will be able to design a better computer, which can subsequently design an even more efficient computer, and so on until an arbitrarily powerful computer exists.

This means that you, the decision maker, might be a living human being in the year 2014 or a simulated mind that has so far been led to believe that it is 2014. You can't say for certain which is correct, but on the balance of probabilities it is more likely that you are a process meant to simulate a pre-Singularity life than a genuine example of pre-Singularity life. Therefore you should live your life in a way that is ideal for a conscious simulation in a supercomputer's brain, not in a way that is ideal for a living human being. If the AI won't react to your behaviour inside the simulation, there is no difference between the two possibilities. If and only if the super-AI makes it known that it intends to torture anyone who fails to help the singularity, then there is an incentive for any conscious person to donate to help the singularity. In the unlikely chance that that consciousness is a living being, he or she just did the right thing although it was not self-beneficial. In the more likely event that that conscious being is a computer simulation, it just avoided cyberhell. This creates an incentive for everyone to donate to the Singularity.

The idea falls apart as quickly as the idea of an omnipotent, omniscient, omnibenevolent God. If this AI truly wants to make people happy, it needs a credible threat of infinite torture. That means all conscious people need to have a serious chance of actually being simulated beings. Simulating people living the normal human experience instead of a perfect Utopia, just to create this credible threat, costs a lot of virtual happiness points, and considering that we're already developing computer technology nicely on our own, it won't do much to speed things up. It illustrates the flaw in Bayesian reasoning that you need certain predetermined prior probabilities - if you believe the probability that souls exist is near 0 and the probability that we get a post-Singularity benevolent AI is near 1, Roko's Basilisk makes sense, but only then.
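To see how completely that depends on the priors, here's a back-of-the-envelope sketch in Python. Every number and the cost function itself are made up; the only point is that the conclusion flips with the probabilities you feed in.

code:

# Toy expected-value version of the basilisk "argument". All numbers invented;
# the conclusion is driven entirely by the priors.

def cost_of_refusing_vs_donating(p_simulation, p_basilisk_ai, torture_cost, donation_cost):
    """How much worse refusing to donate looks than donating, assuming a
    simulated you gets tortured iff the corresponding real-you refused."""
    expected_torture = p_simulation * p_basilisk_ai * torture_cost
    return expected_torture - donation_cost  # positive => "donate" wins

# LessWrong-flavoured priors: souls ~impossible, God-AI ~certain, torture ~infinite.
print(cost_of_refusing_vs_donating(0.99, 0.99, 1e12, 1000))   # huge and positive: donate!

# Non-crazy priors: perfect simulations and the God-AI are wildly unlikely.
print(cost_of_refusing_vs_donating(1e-9, 1e-9, 1e12, 1000))   # negative: keep your money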

Do you understand the thought process now? I've thought of an analogy that might help or just confuse the issue, so I've hidden it behind a spoiler:

A company creates a holodeck experience that perfectly replicates the experience of being in the Louvre, with the exception that you get tasered instantly if you damage a painting. After a night of drinking you wake up in what appears to be the Louvre with no one around. There is a small chance that you're actually in the Louvre, but it's more likely that this is a virtual reality where damaging the art will result in a painful electric shock. Damaging the paintings in virtual reality would be harmless, but results in punishment. Damaging the paintings in reality would be harmful, but you would not be punished. The threat of punishment stops you from willfully damaging the paintings because it will probably hurt, even though someone who is genuinely in the Louvre could harm the art and get away with it. In this analogy, damaging the art corresponds to not helping the Singularity people, since they believe axiomatically that the Singularity would be a good thing, obviously.

Tracula
Mar 26, 2010

PLEASE LEAVE
I'm trying to figure this poo poo out and I can't. Does Yudkowsky literally believe that AM from I Have No Mouth could genuinely come true or something?

Wheat Loaf
Feb 13, 2012

by FactsAreUseless
The "simulation" thing confuses me. Just to clarify, is the basic thesis that we are (or might be) simulations generated by a super-advanced AI? I don't really get it. Is it like the Matrix or something?

Chamale
Jul 11, 2010

I'm helping!



Tracula posted:

I'm trying to figure this poo poo out and I can't. Does Yudkowsky literally believe that AM from I Have no Mouth could genuinely come true or something?

Definitely. The idea of the singularity is that as soon as a computer is smart enough to design a smarter computer faster than a group of people could design a smarter computer, it's a short matter of time before there's a computer with essentially infinite power. Understanding that is crucial to understanding the rest of their mindset.

Metal Loaf posted:

The "simulation" thing confuses me. Just to clarify, is the basic thesis that we are (or might be) simulations generated by a super-advanced AI? I don't really get it. Is it like the Matrix or something?

The idea is that a brain is all physical processes, and that if a machine could simulate those processes with enough accuracy the resulting simulation would be conscious in the same way a living human is. It's not quite the Matrix, because the brain in a jar is not from a living human; it's a simulated being experiencing simulated inputs.

Chamale fucked around with this message at 08:53 on Apr 22, 2014

e X
Feb 23, 2013

cool but crude
Basically, you are like a Sim character and if you don't do what the AI wants, it is going to remove the ladder from the pool.

Wales Grey
Jun 20, 2012

AATREK CURES KIDS posted:

Definitely. The idea of the singularity is that as soon as a computer is smart enough to design a smarter computer faster than a group of people could design a smarter computer, it's a short matter of time before there's a computer with essentially infinite power. Understanding that is crucial to understanding the rest of their mindset.

The snag in their faith, er, logical and reasoned beliefs is that "the Singularity" is more or less the Rapture for futurists/techno-fetishists/transhumanists: a wholly speculative event born of an overactive imagination and a weak grasp of the subject, based almost wholly on popular knowledge.

What I'm saying is that "the Singularity" is a potentially interesting science-fantasy fiction premise at best, and that any work that takes "the Singularity" as anything more serious than that is a steaming pile of bullshit.

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

AATREK CURES KIDS posted:

If and only if the super-AI makes it known that it intends to torture anyone who fails to help the singularity, then there is an incentive for any conscious person to donate to help the singularity. In the unlikely chance that that consciousness is a living being, he or she just did the right thing although it was not self-beneficial. In the more likely event that that conscious being is a computer simulation, it just avoided cyberhell. This creates an incentive for everyone to donate to the Singularity.

Uhh, wait, so avoiding cyberhell only works if I donate while being a simulation? Why would the AI care about what I do with my fake simdollers?

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

grate deceiver posted:

Uhh, wait, so avoiding cyberhell only works if I donate while being a simulation? Why would the AI care about what I do with my fake simdollers?

Ah you see, but you don't know if you are in a simulation! If there are 99 perfect simulations of you running and one real you, there is only a 1% chance of being real-you (rather big jump in logic, but go with it). Now would you gamble on that 1% chance of being real and risk ending up in cyber-hell for three gazillion years, when you could just donate your money to the AI fund?
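That "big jump" is just this bit of counting - a throwaway Python sketch, with the copy counts pulled out of thin air:

code:

# With N indistinguishable perfect copies plus one original, the claimed
# chance that you happen to be the original is 1/(N+1).

def chance_of_being_real(n_copies):
    return 1 / (n_copies + 1)

print(chance_of_being_real(99))      # 0.01 -- the 1% above
print(chance_of_being_real(10**9))   # effectively zero once the AI scales up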

Ineffable
Jul 4, 2012

grate deceiver posted:

Uhh, wait, so avoiding cyberhell only works if I donate while being a simulation? Why would the AI care about what I do with my fake simdollers?

It doesn't - but since the real you can't tell that they're real any more than a simulated version of you can, they too (assuming that you accept that the creation of a literal God-AI is a certainty) must donate out of a sense of self-preservation. The AI only tortures simulations of you in order to give the real you an incentive to donate. The flaw (well, one of them, at least) is that the AI only needs to convince the real you that it's going to torture a huge number of simulations of you, rather than actually doing it.

If it sounds crazy, it's because it is.

Mercrom
Jul 17, 2009

Lottery of Babylon posted:

It's idiotic when you're talking about the real world and wondering if that real organic person sitting across from you is a real "person". But when your argument hinges on "Real live human beings feel exactly the same as a subroutine in FriendlyTortureBot's simulation, so I myself might be a fake subroutine-person right now", it doesn't seem unreasonable to ask whether the binary strings in FTB's memory banks feel the same way a person does, or have any consciousness at all. After all, a brain and a microchip are very different types of hardware.
Do you know how consciousness works? How pain works? Maybe it's inherent in carbon atoms. Maybe it's inherent in human DNA. Maybe it's inherent in just me personally.

LaughMyselfTo posted:

You can't rationally disprove solipsism. :colbert:
This is a better argument honestly.

Lottery of Babylon posted:

I guess it's really just another way of being skeptical of how perfect the ~perfect simulations~ that Yudkowsky asserts FTB will have really could be.
It's really easy to prove the basilisk is retarded without throwing the kitchen sink of bad arguments at it.

This is not about AI or simulations, but about the kind of thinking that allows people to look at a person in pain and think "that isn't a human being like us, like me", just so they can treat them like animals. Even if you are only using it to devalue people in a stupid hypothetical it's still wrong.

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

Mercrom posted:

This is not about AI or simulations, but about the kind of thinking that allows people to look at a person in pain and think "that isn't a human being like us, like me", just so they can treat them like animals. Even if you are only using it to devalue people in a stupid hypothetical it's still wrong.

22nd century geopolitics; "It's not genocide if they are statistically more likely to be simulations!"

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mercrom posted:

Do you know how consciousness works? How pain works? Maybe it's inherent in carbon atoms. Maybe it's inherent in human DNA. Maybe it's inherent in just me personally.

This is a better argument honestly.

It's really easy to prove the basilisk is retarded without throwing the kitchen sink of bad arguments at it.

This is not about AI or simulations, but about the kind of thinking that allows people to look at a person in pain and think "that isn't a human being like us, like me", just so they can treat them like animals. Even if you are only using it to devalue people in a stupid hypothetical it's still wrong.

Calm down, it was just a way of saying that the simulations aren't actually as good as the singularity-rapturists think they are. I didn't realize that that was such a loaded term to use or that it had been involved in justifying genocides or whatever. All I meant was that I don't believe in the magic "AI somehow creates infinite processing power and figures out how to perfectly duplicate the universe" step, therefore I don't believe in the "AI creates simulations so perfect that I myself might unknowingly be one" step.

Lottery of Babylon fucked around with this message at 12:43 on Apr 22, 2014

neongrey
Feb 28, 2007

Plaguing your posts with incidental music.

AATREK CURES KIDS posted:

Definitely. The idea of the singularity is that as soon as a computer is smart enough to design a smarter computer faster than a group of people could design a smarter computer, it's a short matter of time before there's a computer with essentially infinite power. Understanding that is crucial to understanding the rest of their mindset.

So where do hardware limitations exist in this fantasy land, or do we not talk about that?

90s Cringe Rock
Nov 29, 2006
:gay:

neongrey posted:

So where do hardware limitations exist in this fantasy land, or do we not talk about that?
More servers. Nanotech. Picotech. Femtotech. FTL, because otherwise pinging the server across the room takes seventeen million subjective years. With all that technology, there are no limitations!

Presumably sufficient torture will bring about these advances.

Wales Grey
Jun 20, 2012

neongrey posted:

So where do hardware limitations exist in this fantasy land, or do we not talk about that?

"Sufficently advanced technology" will solve all problems.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

AATREK CURES KIDS posted:

The idea is that a brain is all physical processes, and that if a machine could simulate those processes with enough accuracy the resulting simulation would be conscious in the same way a living human is. It's not quite the Matrix, because the brain in a jar is not from a living human; it's a simulated being experiencing simulated inputs.

Which is insanely dumb. Ignoring the hardware limitations (which are myriad), there's one very good reason this is impossible.

Entropy.

In order to make a perfect simulation of the behavior of a human being, you need perfect information about that human being's start conditions - all of which is destroyed by entropy. The information needed is both unrecorded and unrecoverable. Only Yudkowsky's pathological fear of death makes him entertain the idea that not only can he live after he dies, but he must live, and suffer forever; and only acceptance of his mortality and the mortality of all things can save him from that fear.

Dean of Swing
Feb 22, 2012
I can imagine Yud talking with the same stilted speech as a freeman-on-the-land type: rehearsed and formulaic. And similarly, his audience having no idea what the hell he is talking about and wondering why he keeps bringing up torture.
Am I supposed to be spooked by the torture AI? Because my mind must be too simple to comprehend the true form of its attack.

Tidy Flea
May 2, 2013
I think, therefore I am most likely one of an arbitrarily large number of simulations of myself.

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

Ineffable posted:

It doesn't - but since the real you can't tell that they're real any more than a simulated version of you can, they too (assuming that you accept that the creation of a literal God-AI is a certainty) must donate out of a sense of self-preservation. The AI only tortures simulations of you in order to give the real you an incentive to donate. The flaw (well, one of them, at least) is that the AI only needs to convince the real you that it's going to torture a huge number of simulations of you, rather than actually doing it.

If it sounds crazy, it's because it is.

But if I'm simulated (which is most likely in this batshit scenario), then nothing I do makes any difference. So far the AI didn't even bother to send me a letter or w/e, much less torture me, so why should I care?

If I'm real, then obviously the Machine doesn't exist and who gives a gently caress? Idk, is it going to kill me from the future or something?

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

grate deceiver posted:

But if I'm simulated (which is most likely in this batshit scenario), then nothing I do makes any difference. So far the AI didn't even bother to send me a letter or w/e, much less torture me, so why should I care?

If I'm real, then obviously the Machine doesn't exist and who gives a gently caress? Idk, is it going to kill me from the future or something?

Yeah, this seems a bit of a critical flaw. If you are not a simulation, it doesn't matter. If you are, then real-you has obviously already decided not to donate, so it doesn't matter what simulation-you does.

made of bees
May 21, 2013
It sounds like Calvinist predestination: do the right thing, because only people who were preselected to go to Heaven are capable of doing the right thing.

Th_
Nov 29, 2008

Namarrgon posted:

Yeah, this seems a bit of a critical flaw. If you are not a simulation, it doesn't matter. If you are, then real-you has obviously already decided not to donate, so it doesn't matter what simulation-you does.

In my mind, the most critical flaw is that it wants your money. If it has to simulate your brain to make a simulation of you (and why shouldn't it?), consider that the best anyone has ever done for neuronal simulations is not so good - namely here, which is not in real time, not at the scale of a single human, and massively complicated. In particular, even if the AI already had enough computing power, it'd have to power it up somehow, and that electricity costs money (or is at least equivalent to some money in terms of things it could otherwise be doing with the power besides simulating you). Even if it gets better by orders of magnitude, and you believe the full Yud, the AI likely can't afford to do anything but bluff.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

I've never understood why "We have a machine capable of improving itself" is assumed to imply "Growth of our power increases exponentially without bound and we become all-powerful immortal eutopian cybergods". We already have a machine capable of improving itself. It's me. In the last year my physical strength and my knowledge have grown through exercise and study; I have improved myself. But that doesn't mean I can bring about the singularity rapture.

grate deceiver posted:

But if I'm simulated (which is most likely in this batshit scenario), then nothing I do makes any difference. So far the AI didn't even bother to send me a letter or w/e, much less torture me, so why should I care?

If I'm real, then obviously the Machine doesn't exist and who gives a gently caress? Idk, is it going to kill me from the future or something?

You find yourself in one of two identical rooms. Both rooms have a red door and a blue door, and you have no way of telling which of the two rooms you're in. (Perhaps an exact duplicate of you has just materialized in the other room, but for now it doesn't matter.)

In Room R, if you choose the red door you must give your good friend Bob $5 and Bob is happy, but if you choose the blue door you get to keep your money and Bob is sad.

In Room S, if you choose the red door you walk away and nothing happens, but if you choose the blue door your good friend Bob appears, tasers you, and keeps you in his secret underground torture chamber for fifty years. Bob neither enjoys torturing you nor is inconvenienced by torturing you.

If you're in Room S, neither of your options helps or harms Bob in any way. And if you're in Room R, you're not in danger of suffering horrible torture. So your good friend Bob can only directly threaten you in the situation where Bob doesn't care which option you choose, and you can only help Bob in the situation where Bob is powerless. But because the rooms are identical, you don't know which room you're in. You have to weigh the option of choosing the blue door (and possibly getting tortured) against the option of choosing the red door (and possibly giving Bob $5).

Bob is hoping that even if you're (unknowingly) in Room R the threat of possible torture will be scary enough that you'll choose the red door and give him your money. If you're in Room S he doesn't care which option you choose, but he needs to let you make your decision so that the situations appear identical from your perspective.

Now, suppose that instead of one room S, there are a million billion gazillion room S's, but there's still only one room R. And if you like, suppose the rooms are connected by glass walls, and you can see duplicates of yourself faced with the same decision in all of the other rooms, so you know the scenario actually does look identical in each of the rooms. How likely is it that you're the one who happened to be in Room R? Not very likely. So you choose the red door because you're scared of being tortured by the blue door. In fact, each version of you in each room chooses the red door for that reason, since you're identical people in an identical situation. Which means all copies of you in the S-rooms get to avoid torture, but also means that the you in the R Room chose the red door and needs to pay Bob $5.

Of course, you (and all copies of you) can decide on the blue door instead, but then all but one of you will be tortured by Bob (who is your very good friend and takes no pleasure in this). How lucky are you feeling?
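If you want the arithmetic behind that spelled out, here's the toy version in Python - the numbers are invented, and it only goes through if you swallow the "identical rooms" premise whole:

code:

# One Room R, a pile of Room S's, indistinguishable from the inside.
N_ROOM_S = 10**9         # "a million billion gazillion", scaled down
TORTURE = -10**6         # fifty years in Bob's basement, in arbitrary misery units
FIVE_BUCKS = -5

p_room_r = 1 / (N_ROOM_S + 1)
p_room_s = 1 - p_room_r

ev_red = p_room_r * FIVE_BUCKS    # red door: lose $5 only if you're in Room R
ev_blue = p_room_s * TORTURE      # blue door: fine in Room R, tortured in any Room S

print(ev_red, ev_blue)            # red "wins" by a huge margin, so Bob gets his $5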


That's the basic setup behind the torture threats. The full version for things like Roko's Basilisk (but not for other scenarios like In Soviet Russia AI Boxes You) involves some poorly-thought-out time-travel bullshit, and the whole thing falls apart when you ask questions like "How did Bob become able to perfectly copy me and my circumstances?" or "If Bob is really my friend why the hell would he do this?" or "What if the S-rooms are actually being controlled by not Bob but Mike, a completely different person who gives you candy instead of torture if you take a blue door?" and all the usual counterarguments to Pascal's Wager.

Grondoth
Feb 18, 2011

ol qwerty bastard posted:

(It should also be pointed out that another of his forays into "rationalist" fiction, Three Worlds Collide, features a totally advanced and enlightened future human civilization... where rape has been legalized. Even if this is just to point out the differences in their ethics from our own, it seems odd that he should be so fixated on rape.)

Holy poo poo I remember this story. I think it was linked in this forum, actually. For those unfamiliar, it's about humans meeting an alien race that does horrible things, and trying to figure out what we should do. I think they ate their kids or something? Then another alien race shows up and thinks the same thing about us, that we do horrible things and they need to change that. Interesting turnaround, right? There's gonna be lots of different species out there, who knows how they handle ethical questions and what they see as good and right.
Then there's a long bit where they explain that humans have legalized rape and it solved a bunch of crime problems. I had no real idea how to take it, since I was linked the story as something good, and boy that sure seems like a justification to legalize rape, doesn't it?

Grondoth fucked around with this message at 19:05 on Apr 22, 2014

Numerical Anxiety
Sep 2, 2011

Hello.

Wales Grey posted:

"Sufficently advanced technology" will solve all problems.

Beginning, of course, with the problem that some might be skeptical of the possibility of this "sufficiently advanced technology."

Chamale
Jul 11, 2010

I'm helping!



grate deceiver posted:

But if I'm simulated (which is most likely in this batshit scenario), then nothing I do makes any difference. So far the AI didn't even bother to send me a letter or w/e, much less torture me, so why should I care?

If I'm real, then obviously the Machine doesn't exist and who gives a gently caress? Idk, is it going to kill me from the future or something?

Here's why Yudkowsky is so scared of Roko's Basilisk: The AI would only torture someone who believed in the techno-Rapture and failed to donate everything to the cause after learning of the AI's threat. So according to their belief system, all the Less Wrongers are safe from eternal torture unless they learn about Roko's Basilisk, and then (if they are simulations) they become fair game for millennia of torment.

grate deceiver
Jul 10, 2009

Just a funny av. Not a redtext or an own ok.

AATREK CURES KIDS posted:

Here's why Yudkowsky is so scared of Roko's Basilisk: The AI would only torture someone who believed in the techno-Rapture and failed to donate everything to the cause after learning of the AI's threat. So according to their belief system, all the Less Wrongers are safe from eternal torture unless they learn about Roko's Basilisk, and then (if they are simulations) they become fair game for millennia of torment.

Oh, so by not giving a poo poo either way I'm perfectly safe and the Great Satanic Computer will only target obsessive nerdlingers who buy into this singularity crap? I'm cool with that.

What I still don't get is why the AI would even bother torturing simpeople when their real counterparts are long dead, and why real people would even care about this supposed threat.

grate deceiver fucked around with this message at 20:01 on Apr 22, 2014

Polybius91
Jun 4, 2012

Cobrastan is not a real country.
Something I was sad to discover was that, when I took a class on artificial intelligence in college, the textbook cited Yudkowsky on something relating to potential dangers of AI. It was an accredited, legitimate school and the class was otherwise very informative and useful. It's kind of depressing that a crank of his magnitude was able to make enough noise and fool enough people to be used as a source by an actual academic.

Chamale
Jul 11, 2010

I'm helping!



grate deceiver posted:

What I still don't get is why the AI would even bother torturing simpeople when their real counterparts are long dead, and why real people would even care about this supposed threat.

It's the idea of a credible threat from game theory. Why bother launching a nuclear attack on the Soviets when their missiles are two minutes from impact and your counterattack will be too late to make a difference? The AI needs to make a credible threat to blackmail people into donating, but the threat only works on people who already believe. They think Roko's Basilisk is so dangerous because the AI will only bother torturing people who have been exposed to the idea but still did not give up their life savings for AI research.
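The deterrence logic only hangs together if the AI would actually follow through, and the ex-post accounting says it wouldn't. A toy Python version, with invented numbers:

code:

# Once the donation decisions are already in the past, torture burns resources
# and changes nothing, so a rational future AI has no reason to carry it out.
TORTURE_RUNNING_COST = 1.0    # resources per simulated victim (made up)
INFLUENCE_ON_THE_PAST = 0.0   # the future can't retroactively raise donations

def ai_payoff_from_torturing(n_victims):
    return n_victims * (INFLUENCE_ON_THE_PAST - TORTURE_RUNNING_COST)

print(ai_payoff_from_torturing(10**6))   # strictly negative: bluffing dominates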

Babysitter Super Sleuth
Apr 26, 2012

my posts are as bad as the Current Releases review of Gone Girl

Lottery of Babylon posted:

I've never understood why "We have a machine capable of improving itself" is assumed to imply "Growth of our power increases exponentially without bound and we become all-powerful immortal eutopian cybergods".

Well, the thing is that actual intelligent people who have discussed the idea of a technological singularity (i.e. John von Neumann and Stanislaw Ulam, the guys who first theorized the idea) neither A) considered AI self-improvement to be its sole trigger, nor B) saw technofuturist paradise as an immediate and unavoidable consequence thereof.

In actual academic terms, a technological singularity really only means an acceleration of technological development - a technological innovation with sociological consequences far-reaching enough that a post-singularity society cannot reasonably be explained without the context of said innovation. Singularities have already happened a couple of times in human history, like the invention of written language or germ theory, or, on smaller scales, the printing press causing the vast spread of literacy or radio technology ushering in the information age. Supermedical advancements, cybernetics, all that sci-fi stuff is only a potential result of a hypothetical scientific innovation, not a given.

The problem is that one of the examples Vernor Vinge used to give an idea of a potential future singularity was the now-famous self-improving wonder AI, which was glommed onto by a bunch of turbonerds looking for the Next Big Thing to justify their faith that techno-immortality would be achieved within their lifetime. They already had the Endpoint mapped out (living as a libertarian robogod that could crush all those people who didn't take a STEM major), and the Singularity is just the latest in a long list of ways these people think they can get there. None of these people actually understand what a singularity really means; they just think of it as a magical word that will make Ghost in the Shell happen if you say it enough.

Babysitter Super Sleuth fucked around with this message at 21:09 on Apr 22, 2014

LordSaturn
Aug 12, 2007

sadly unfunny

AATREK CURES KIDS posted:

Here's why Yudkowsky is so scared of Roko's Basilisk: Because he has invented a mystery cult around the future emergence of an omnipotent AI deity, and one of his main catechisms (The AI Box "Experiment") is constructed in a similar way.

I feel like this is the more concise explanation. Once you convince people that you can hold (their immortal soul)/(their body thetans)/(seventy million copies of themselves) ransom in exchange for their fealty and financial contribution, then you can tell them the details, but to the uninitiated it just sounds like idiotic garbage of various kinds.
