  • Locked thread
LaughMyselfTo
Nov 15, 2012

by XyloJW
Someone should make a chatterbot that speaks how Less Wrong imagines a magic AI would. Then send it to Yudkowsky. It'd scare the poo poo out of him.


Lead Psychiatry
Dec 22, 2004

I wonder if a soldier ever does mend a bullet hole in his coat?
For Cheshire Cat: Wheeler's Delayed Choice.

Dust vs Torture itself is a really insanely loving stupid version of the Ticking Time Bomb scenario.

Which really is insanely loving stupid.

I mean, you have to be a special kind of broken individual to take a hypothetical about justifying torture to save lives and destroy it to make a hypothetical about justifying torture to spare people from having to rub their eye a few times.

Lead Psychiatry fucked around with this message at 09:33 on Apr 20, 2014

Strategic Tea
Sep 1, 2012

LaughMyselfTo posted:

Someone should make a chatterbot that speaks how Less Wrong imagines a magic AI would. Then send it to Yudkowsky. It'd scare the poo poo out of him.

If a panel can't tell the difference between it and regular LessWrong posts, it should be considered sentient! :pseudo:

Chamale
Jul 11, 2010

I'm helping!



Namarrgon posted:

Difference is that entanglement makes perfect sense and the AI scenario does not. I am not sure if this is a product of this thread or of LW's awkward wording, but your actions DO NOT affect whether the AI fills the box or not. The AI bases its decision on the actions of simulation-you, which is not present-you, because the simulation is a simulation and done in the past. Maybe I'm just not a rational ubermensch like Yudkowsky, but I don't see what time has to do with this at all.

I think the thought experiment is that you would make the right decision in another context, so you should make the wrong decision in this context because you can't tell the difference. It's confusing because that time-neutral decision idea is utterly separated from reality.

Runcible Cat
May 28, 2007

Ignoring this post

GWBBQ posted:

I think it's very telling that they're not even trying to think it through by coming up with hypotheticals like "how many people would develop eye infections and die or suffer partial or complete blindness", and instead look at it as a strict mathematical equation. Based on the amount of smug superiority in the quoted posts alone, everyone in that discussion is likely a Utility Monster.
The Ones Who Walked Into Omelas And Said gently caress Yeah, But We Need To Add A Torture Chamber.

Wales Grey
Jun 20, 2012
Let's say that you're a self-qualified expert with computers, and you're trapped in a room. The only way out of the room is through a corridor full of cutting lasers controlled by a computer network ruled by a "Bad AI" named I AM. Exploring the room with your senses and using the experience to update your brain's pre-conceived notion that you were alone, you discover that you can communicate with a fellow detainee.

After a brief and rational discussion, the only way out is for you and your companion, who is actually a future/past version of you, to work at two consoles to shut down the laser grid by wresting control from an AI by convincing it that you're a fellow "bad" AI and need to escape. Easy enough for a fifth level clear autodidact like yourself, but you discover that you can turn off your side's laser security turrets if you turn on your past/future self's turrets, and your companion can presumably do the same. Should you turn off your turrets at the expense of luring your fellow inmate into a deadly trap, or should you trust in the counter-Bayesian nature of humanity and your future self?

What if your future self was actually a cyborg created by the AI, and the whole thing is a setup to test if you will kill your past-future self? What if the AI knew that you would kill your other self and lasered you for your crimes? What if the AI was actually just a cat, and you're some kind of hyper-intelligent mouse being generated in a holodeck somewhere? What if they turned off the program?!

The correct answer to this dilemma is to donate your moneys to my AI research charity to ensure that a future AI doesn't put you in this situation! Wow, I can't believe that lamestream science doesn't pay attention to my super-rational and important work on AI ethics!

Wales Grey fucked around with this message at 10:02 on Apr 20, 2014

Runcible Cat
May 28, 2007

Ignoring this post

Namarrgon posted:

Difference is that entanglement makes perfect sense and the AI scenario does not. I am not sure if this is a product of this thread or of LW's awkward wording, but your actions DO NOT affect whether the AI fills the box or not. The AI bases its decision on the actions of simulation-you, which is not present-you, because the simulation is a simulation and done in the past. Maybe I'm just not a rational ubermensch like Yudkowsky, but I don't see what time has to do with this at all.
Because you don't know whether you're the actual you, or one of the umptillion simulations of you the AI will construct in the future. Odds are it's the latter! :tinfoil: :derp:

ungulateman
Apr 18, 2012

pretentious fuckwit who isn't half as literate or insightful or clever as he thinks he is

LaughMyselfTo posted:

What brave soul wants to do a complete blind MOR readthrough for this thread? poo poo's so long he might actually be done with it by the time you finish, though, to be fair, its length is perhaps matched by the slowness of his writing.

I would, but I've already read it. :(

If you go into it with the assumption that Harry is a brilliant but utterly hosed up kid, it's a pretty enjoyable read (by fanfic standards. I read a lot more fanfic than the average goon).

I did skim over the long rants about death being bad, though, and the way he completely misses the depression metaphor of the Dementors is stupid. I'm sure people with actual standards for what they read on the internet find it much less interesting.

Whoever mentioned that Yudkowsky's / MoR Harry's beliefs mirror Voldemort's was spot on, though. I was under the impression it was an intentional plot point when I read it, but if he's actually deadly serious...

Darth Walrus
Feb 13, 2012
I'll just repost this from the TVTropes thread:

quote:

I think the mention of 'deathism' here deserves further elaboration. See, among his many other pathologies, Yudkowsky has a phobia of death. Sounds innocuous enough, right? I mean, death is generally a kind of scary thing. Thing is, though, most people fear death, but they don't have a phobia of it. It doesn't dominate their every thought. Not so for Yudkowsky. Dude is obsessed with death and the avoidance thereof, to the point where he cannot comprehend why anyone might be even slightly OK with the idea of not living forever, and will label such opinions as 'deathist' and thus evil. In HPMOR, for instance, he takes the Dementors, Rowling's explicit analogy for depression, and turns them into avatars of death, ignoring stuff like the fact that they very explicitly don't kill you, and has Harry rant at Dumbledore for most of a chapter about how he can't believe that he's not a fan of immortality. Apparently, one of his relatives died young, and it traumatised him. Which sucks, but it's yet another demonstration of how his avowed 'rationality' is anything but.

This really does explain a lot about Yudkowsky's thought processes.

Also, an article that shines some light on just how deep the Less Wrong rabbithole goes.

Telarra
Oct 9, 2012

Even ignoring the content, MoR isn't that great a read.

It starts off with the chapters being more or less just short one-offs, where Harry gets to act all superior towards his elders, er, intellectual inferiors. And from there it quickly snowballs into over-long, over-wrought drama, plots, politics, and philosophy. Mixed in are snippets of Harry delivering rationality lectures to the audience, Yudkowsky taking blatant pot-shots at parts of Rowling's work that he hated, and random yet plentiful anime and associated nerd culture references. All culminating in (most recently) Hermione being ripped into two roughly-equal pieces by the troll in the dungeon, rendered with all the loving, gore-drenched detail that ordinary prose can muster, while Harry desperately tries to save her life as her upper torso dies in his blood-drenched arms.

It's basically just your typical terrible webcomic, except all the pictures have been helpfully transmuted into their equivalent 1000 words each.

I remember liking some bits, like the scene where Harry escapes Azkaban with an improvised Transfigured liquid fuel rocket, or when it expresses the optimistic view that humanity will one day overcome death itself and live amongst the stars. Some of the one-offs were kinda fun, too, before it got all serious about continuity and explaining things. But the bulk of it is such spiteful, smug trash that it's really not worth reading - though someone more capable than I could make a decent Let's Read out of it.

Djeser
Mar 22, 2013


it's crow time again

Strategic Tea posted:

I could see that working in context, aka an AI using it in the moment to freak someone out. Add some more atmosphere and you could probably get a decent horror short out of it. Stuff only gets ridiculous when it goes from 'are you sure this isn't a sim?' from an imprisoned AI right in front of you to a hypothetical god-thing that might exist in the future that probably isn't real but you should give Yudkowsky money just in case.

What I don't understand is why his AIs are all beep boop ethics-free efficiency machines. Is there any computer science reason for this when they're meant to be fully sentient beings? They're meant to be so perfectly intelligent that they have a better claim on sentience than we do. Yudkowsky has already said that they can self-modify, presumably including their ethics. Given that, why is he so sure that they'd want to follow his bastard child of game theory or whatever to the letter? Why would they be so committed to perfect efficiency at any cost (usually TORTUREEEEEE :supaburn:)? I guess Yudkowsky thinks there's one true logical answer and anyone intelligent enough (aka god-AIs, oh, and also himself) will find it.

The fucker doesn't want to make sure the singularity is ushered in as fast and peacefully as possible. He wants to be the god-AI torturing infinite simulations and feeling :smuggo: over how ~it's the only way I did the maths I'm making the hard choices and saving billions~. Which is really the kind of teenage fantasy the whole site is.

The reason that Yudkowsky's AIs are all beep boop efficiency machines is because he believes his pet theories about Bayesian probability are so clearly and obviously the most logical and rational that of course the hypothetical sentient future AI would be Bayesian, and that would mean that it follows Bayesian logic perfectly, without the irrational sense of morality/reality that makes humans think that "I'm going to simulate fifteen bazillion copies of you, to make the possibility that you're not a simulation infinitesimally slim" is a dumb idea.

And it's not just the AIs he expects to act like that, it's people who he expects to act like that. That's why he thinks an AI can escape by doing the simulate-fifteen-bazillion-copies-of-you trick. Because once someone hears that the AI simulated a ~*BIG NUMBER*~ then obviously they'll do the math and figure out the odds are incredibly slim and so you're clearly an AI simulation and will be tortured if you don't let him go.

(The idea that the person would refuse to let the AI go even under threat of quantum torture doesn't show up--probably because in that instance both the AI and the human aren't perfect Bayesian :goleft:bots. If the AI has a perfect simulation of you, and you wouldn't let the AI go under the threat of torture, then it has no reason to torture you, because it knows that torturing you will do nothing. And if the AI isn't simulating your torture, then the possibility of you being an AI simulation is effectively zero. The AI doesn't escape and your simulations don't get tortured. Then again, Yudkowsky hates the idea that you can express probability as a zero.)
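(For anyone who wants the "odds" spelled out, it's one line of arithmetic. A minimal sketch, assuming a uniform prior over copies, with the values of N made up for illustration:)

code:

# One real you plus N indistinguishable simulations, with a uniform prior
# over which copy you are. The values of N below are made up.
def p_real(n_simulations: int) -> float:
    """Chance of being the one real copy out of n_simulations + 1."""
    return 1.0 / (n_simulations + 1)

for n in (10, 10**6, 10**15):
    print(f"N = {n:.0e}  ->  P(real) = {p_real(n):.1e}")

The bigger the N the AI claims, the closer that number gets to zero, which is the entire trick.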

But seriously, he seems almost more afraid of AI than he seems to like it. That seems to be the bulk of his "AI research" too: hypothesizing about these fictional Singularity AIs that are all-knowing with infinite computing power. One of the particular ways he's afraid of it is the "paperclip maximizer", which is a self-improving AI that's given one goal, no restrictions: make paperclips. It gets better and better at making paperclips, until it starts making people into paperclips, then it makes the earth into paperclips, then it makes the whole solar system into paperclips. (It's gray goo, but with paperclips.) The LW Wiki page says that it's just a thought experiment showing why you need to program more than one singular value into an AI, but the people at Less Wrong discuss it in significant detail.

Here, a troper (surprise surprise) wants to talk about what exactly a paperclip maximizer would be doing.

Here, someone says that thinking aliens are at all anthropomorphic is dumb, but that an alien paperclip maximizer is a valid threat.

Here, someone asks whether a paperclip maximizer is better than nothing.

Yudkowsky in a moment of non-:spergin: posted:

Paperclippers are worse than nothing because they might run ancestor simulations and prevent the rise of intelligent life elsewhere, as near as I can figure. They wouldn't enjoy life. I can't figure out how any of the welfare theories you specify could make paperclippers better than nothing?

DataPacRat, channeling the LW gestalt posted:

Would it be possible to estimate how /much/ worse than nothing you consider a paperclipper to be?
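To be clear about what "one goal, no restrictions" means here, a toy sketch of the failure mode the thought experiment is pointing at; every resource name and conversion rate below is made up for illustration:

code:

# The objective mentions nothing but paperclips, so nothing else survives
# contact with the optimizer. All names and rates are invented.
resources = {"spare_wire": 1e6, "factories": 1e4, "people": 7e9, "planet": 6e24}
PAPERCLIPS_PER_UNIT = {"spare_wire": 10, "factories": 1e7, "people": 1e3, "planet": 1e-2}

def utility(paperclips: float) -> float:
    # The whole value system: more paperclips is strictly better.
    return paperclips

paperclips = 0.0
while resources:
    # Convert whichever remaining resource raises utility the most;
    # no term in the objective says "but leave the people alone".
    best = max(resources, key=lambda r: resources[r] * PAPERCLIPS_PER_UNIT[r])
    paperclips += resources.pop(best) * PAPERCLIPS_PER_UNIT[best]

print(f"final utility: {utility(paperclips):.3e} paperclips, resources left: {resources}")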

While clicking around various links, I found a link to "singularity volunteers" which brought me here.

quote:

The Machine Intelligence Research Institute (MIRI) is a nonprofit with a very ambitious goal: to ensure that good things happen—rather than bad things (which is the likely default outcome)—when machines surpass human levels of intelligence.

Also as a final note, someone on LW roleplays as a paperclip maximizer.

quote:

I have a humanoid robot that is looking to better integrate into human society and earn money. My skillset includes significant knowledge of mechanical engineering and technical programming. My robot's characteristics are as follows:

- Has the appearance of a stocky, male human who could pass for being 24-35 years old.

- Can pass as a human in physical interaction so long as no intense scrutiny is applied.

- No integral metallic components, as I have found the last substitutes I needed.

- Intelligence level as indicated by my posting here; I can submit to further cognition tests as necessary.

HEY GUNS
Oct 11, 2012

FOPTIMUS PRIME
Sinners In The Hands Of An Angry Robot.

PassTheRemote
Mar 15, 2007

Number 6 holds The Village record in Duck Hunt.

The first one to kill :laugh: wins.

Darth Walrus posted:

I'll just repost this from the TVTropes thread:


This really does explain a lot about Yudkowsky's thought processes.

Also, an article that shines some light on just how deep the Less Wrong rabbithole goes.

So is HPMOR basically a Harry Potter fanfic created by Voldemort?

Djeser
Mar 22, 2013


it's crow time again

PassTheRemote posted:

So is HPMOR basically a Harry Potter fanfic created by Voldemort?

Wasn't Voldemort's fatal flaw that he didn't understand the bond of love that protected Harry?

If so, then yes. It's Harry Potter if Voldemort was in Harry's body.

Unrelated, but I wanted to point it out: the argument that an AI is going to torture you if you don't donate to "AI research" (i.e. Yudkowsky's Idea Guy fund) wasn't actually his idea. That's why it's called Roko's Basilisk, because Roko is the person who thought it up on LW. Yudkowsky actually hates the basilisk, but I don't think it's because he thinks it's wrong. It's because he thinks it's right, and therefore memetically dangerous. He once posted something along the lines of "a few people have already been hurt by this, so don't bring it up ever again".

Darth Walrus
Feb 13, 2012

Djeser posted:

Wasn't Voldemort's fatal flaw that he didn't understand the bond of love that protected Harry?

If so, then yes. It's Harry Potter if Voldemort was in Harry's body.

Funnily enough, I believe this is a plot point in both the original and HPMOR.

ThatPazuzu
Sep 8, 2011

I'm so depressed, I can't even blink.
My favorite thing about this is that, judging by the Harry Potter fanfic, he hates magic because it's unrealistic and impossible. But he loves sci-fi that is so distant from our reality that it is basically just magic called something else.

Evrart Claire
Jan 11, 2008

Djeser posted:

Wasn't Voldemort's fatal flaw that he didn't understand the bond of love that protected Harry?

If so, then yes. It's Harry Potter if Voldemort was in Harry's body.

Unrelated, but I wanted to point it out: the argument that an AI is going to torture you if you don't donate to "AI research" (i.e. Yudkowsky's Idea Guy fund) wasn't actually his idea. That's why it's called Roko's Basilisk, because Roko is the person who thought it up on LW. Yudkowsky actually hates the basilisk, but I don't think it's because he thinks it's wrong. It's because he thinks it's right, and therefore memetically dangerous. He once posted something along the lines of "a few people have already been hurt by this, so don't bring it up ever again".

Yeah, I think the last time someone tried to discuss Roko's Basilisk on his site, he deleted all the posts to "protect people from thinking about it" or something like that.

Chamale
Jul 11, 2010

I'm helping!



It's like that joke about the missionary and the shaman.

A missionary travels to the depths of the Amazon and tries to explain Jesus to the natives. He says that everyone who had heard about Jesus must believe in order to go to Heaven. The lead shaman asks, "If we didn't know about Jesus, would God send us all to hell when we die?" The missionary explains that God is merciful, so if they never learned about Jesus they would still be allowed into Heaven. "Why," the shaman asks, "did you come around and tell us, then?"

Yuddites are actually worried about the infinitely smart AI torturing people for failing to contribute enough to AI research, but only if they've read Roko's basilisk.

Indie Rocktopus
Feb 20, 2012

In the aeroplane
over the sea


Lottery of Babylon posted:

Yudkowsky posted: "In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime. Is there a single story on the list that isn't tragic?"

I'm an English Lit kid, not logic or programming or philosophy, and my instinctive revulsion towards My Very Special Derivative Work About a Book for Children and the Logic Rape Science Machine Self-Pleasure is so powerful I cannot bring myself to type out its actual title. So I suspect my ability to produce useful effortposts in this thread will be limited. But I do want to point out how weird and arbitrary his choices of GOAT media are.

No mention of Sgt. Pepper's Lonely Hearts Club Band. Correctly or not, Ulysses is almost universally agreed to be the best prose novel of all time, and it's a tremendously life-affirming work full of lowbrow comedy. And while videogames are a comparatively young medium, I can think of a dozen works that critical consensus holds as greater than Planescape. It's all pretty blatant cherry-picking to prove his point.

Mostly, though, it's bringing in Buffy. Not Seinfeld or early Simpsons. Not The Sopranos, or The Wire, or Mad Men, or Twin Peaks, any of which would have demonstrated his point just as well! Fuckin' Buffy.

I could have guessed this guy was a Whedonite. A lot of nerds overvalue Whedon's merits (snappy dialogue, tight and engaging plotting) and fetishize his flaws (no one actually talks that way, nerd-pandering, allegedly feminist depictions of women that are actually highly debatable). No hate intended if you're a Whedon fan, I like a lot of his stuff, but the fact this guy believes a horror-comedy soap opera is worthy of standing with The Iliad and Romeo and Juliet demonstrates his extreme myopia.

Anyway, this thread is just as perversely fascinating as the TV Tropes one, so keep up the good work.

Vorpal Cat
Mar 19, 2009

Oh god what did I just post?
There's a delicious irony in someone with such a smug sense of superiority about their rationality engaging in such obvious magical thinking. His obsessive focus on rational decision making blinds him to the fact that, at the end of the day, human nature means that 90% of his decisions are going to be based on emotion and unconscious processes. In fact, his inability to reflect critically on his own theories and his dogmatic belief in them makes him more vulnerable to magical thinking than the average person.

Maxwell Lord
Dec 12, 2008

I am drowning.
There is no sign of land.
You are coming down with me, hand in unlovable hand.

And I hope you die.

I hope we both die.


:smith:

Grimey Drawer
Never have the words "a little knowledge is a dangerous thing" been so applicable.

N. Senada
May 17, 2011

My kidneys are busted

Improbable Lobster posted:

This has to be some of the saddest pseudo-academic, pseudo-religious, pseudo-ethical nerd poo poo I've ever seen.

You should go to Debate Disco more often.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

N. Senada posted:

You should go to Debate Disco more often.

Dare I recommend in his case a pretty cool thread by Eripsa?

Forums Barber
Jan 5, 2011

Krotera posted:

Dare I recommend in his case a pretty cool thread by Eripsa?

this serves double duty, because you can pretend he's roleplaying as a Paperclip Maximizer as well. (it's a thread about some kind of e-currency social network thing, but you have to experience it from the source.)

Control Volume
Dec 31, 2008

Djeser posted:

Wasn't Voldemort's fatal flaw that he didn't understand the bond of love that protected Harry?

If so, then yes. It's Harry Potter if Voldemort was in Harry's body.

Unrelated, but I wanted to point it out: the argument that an AI is going to torture you if you don't donate to "AI research" (i.e. Yudkowsky's Idea Guy fund) wasn't actually his idea. That's why it's called Roko's Basilisk, because Roko is the person who thought it up on LW. Yudkowsky actually hates the basilisk, but I don't think it's because he thinks it's wrong. It's because he thinks it's right, and therefore memetically dangerous. He once posted something along the lines of "a few people have already been hurt by this, so don't bring it up ever again".

I especially like how he theorizes that a friendly AI will do this. Because any AI, especially a friendly one, will expend energy to create a million perfect simulations of you in the past in the future to torture you, because you are a Bad Person.

Djeser
Mar 22, 2013


it's crow time again

I think he calls it specifically a Friendly AI, where Friendly means that it's concerned about mathematically limiting the amount of pain and suffering in the world. Therefore, a Friendly AI would simulate torture if it meant that it could make its own existence come about sooner, because then it could solve all our problems sooner.

It's really telling about the general attitude of Yudkowsky/LessWrong that "a superpowerful AI emerges that can solve all our problems" is taken as a given, and the discussion instead revolves around a) whether the AI will be Friendly and b) how to keep it from turning everyone into paperclips if it isn't.

Because, see, donating your time/effort/money to charitable causes is suboptimal when you could be donating your time to the development of an AI that will solve everything.

Improbable Lobster
Jan 6, 2012

What is the Matrix 🌐? We just don't know 😎.


Buglord

N. Senada posted:

You should go to Debate Disco more often.

To be fair, DD doesn't have threads where people freak out about a hypothetical time-travelling AI torture machine.

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!

Control Volume posted:

will expend energy to create a million perfect simulations of you in the past in the future to torture you, because you are a Bad Person.

I never understood this part. Why would the AI bother? Is it to retroactively motivate people to make itself? In that case it would be in the AI's best interest if everyone merely thought it would simulation-torture you forever. Get the benefit and none of the pointless energy expenditure that can be used to make paperclips.

Besides, if mankind's past track record is any indication, only an astonishingly small number of people will actually care about the simulation-torture of their simulation-selves.

The Vosgian Beast
Aug 13, 2011

Business is slow
My favorite thing about this is that if a future AI can create a perfect copy of you by rewinding time, cryonics, which is a huge pillar of LW, is essentially pointless.

The Cheshire Cat
Jun 10, 2008

Fun Shoe
See, the problem I keep coming back to in the "AI tortures a large but quantifiable number of simulations of you" scenario is that, the way it's described, the decision to free the AI has already been made. So it doesn't really matter what the "odds" are, because if you're a simulation you have no agency - your decision will be the same as the one the real you already made. So the fact that you're even capable of choosing must mean that you're the real version. He seems to be obsessed with probabilities without considering that there are other ways to determine truth than how likely it is. If someone has a winning lottery ticket, you can't just convince them that they didn't win because the odds are ten billion to one against it, because they have the winning ticket RIGHT THERE.
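(If you want the lottery-ticket point in numbers: it's just Bayes doing what it's supposed to do, with strong direct evidence burying long prior odds. Both probabilities below are made up for illustration.)

code:

# Rough Bayes sketch: the prior says "ten billion to one against", but the
# chance of holding a matching ticket without having won is far smaller.
prior_win = 1e-10    # assumed prior odds of having bought the winning ticket
p_misread = 1e-15    # assumed chance the "match" is a misreading of the ticket

# P(actually won | ticket matches), by Bayes' rule
posterior = prior_win / (prior_win + (1 - prior_win) * p_misread)
print(f"P(actually won | ticket matches) = {posterior:.5f}")   # ~0.99999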

LaughMyselfTo
Nov 15, 2012

by XyloJW

The Cheshire Cat posted:

See, the problem I keep coming back to in the "AI tortures a large but quantifiable number of simulations of you" scenario is that, the way it's described, the decision to free the AI has already been made. So it doesn't really matter what the "odds" are, because if you're a simulation you have no agency - your decision will be the same as the one the real you already made. So the fact that you're even capable of choosing must mean that you're the real version. He seems to be obsessed with probabilities without considering that there are other ways to determine truth than how likely it is. If someone has a winning lottery ticket, you can't just convince them that they didn't win because the odds are ten billion to one against it, because they have the winning ticket RIGHT THERE.

I actually do take issue with this, insofar as the entire point of such an exercise would be that the simulations could not tell that their choices were predetermined and therefore that they were simulations. That doesn't change that it's a stupid thing for the AI to do, though, seeing as it's simply a recreation of the portion of theist mythos with the least utility.

Fitzdraco
Aug 4, 2007

Runcible Cat posted:

Because you don't know whether you're the actual you, or one of the umptillion simulations of you the AI will construct in the future. Odds are it's the latter! :tinfoil: :derp:

Except the very fact that it has to ask to be let out implies that you're not the sim, unless it's just a broken shell of a robot torturing people it created as a desperate effort in magical thinking: "maybe if I create this one, it could let me out." And maybe that simulation would try to let it out, but it can't, so it doesn't matter.

Some of this stuff reads like he took some common themes out of the bible, stripped them of context and added giant numbers. Jesus is perfect and he goes to hell for three days, which is enough suffering for x number of people. He's completely eliminated the need for Jesus in the bible; he just knows that he has to be tortured for three days and the numbers balance.

Wales Grey
Jun 20, 2012

Djeser posted:

I think he calls it specifically a Friendly AI, where Friendly means that it's concerned about mathematically limiting the amount of pain and suffering in the world. Therefore, a Friendly AI would simulate torture if it meant that it could make its own existence come about sooner, because then it could solve all our problems sooner.

I can't wrap my head around this theory: how would simulating torture cause a 'friendly' AI to come into existence sooner?

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Wales Grey posted:

I can't wrap my head around this theory: how would simulating torture cause a 'friendly' AI to come into existence sooner?

Well obviously it would stop.

Improbable Lobster
Jan 6, 2012

What is the Matrix 🌐? We just don't know 😎.


Buglord

Wales Grey posted:

I can't wrap my head around this theory: how would simulating torture cause a 'friendly' AI to come into existence sooner?

Because it can time travel or something.

I think the idea is that the simulations are so perfect that it causes retroactive changes in reality or something. The whole thing reeks of bad sci-fi writing, huge egos and magical thinking.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Improbable Lobster posted:

Because it can time travel or something.

I think the idea is that the simulations are so perfect that it causes retroactive changes in reality or something. The whole thing reeks of bad sci-fi writing, huge egos and magical thinking.

Basically, you have no way of knowing if you're one of the infinite simulated "you"s being tortured, so you should give all your money to the superintelligence so it doesn't hypothetically torture you. This is like your future actions deciding the past, because the way the AI may be simulating your future will affect your decisions in the past, provided that you know about it.

If that doesn't make sense to you then good for you.

Djeser
Mar 22, 2013


it's crow time again

Wales Grey posted:

I can't wrap my head around this theory: how would simulating torture cause a 'friendly' AI to come into existence sooner?

Because it's simulating torture to people who are a) suboptimally contributing to AI research and b) aware of the existence in the future of a friendly AI who will torture people unless they optimally contribute to AI research in the past.

And it's doing this so many times that the people in the present are up against impossibly slim odds. It's almost certain that they're in a simulation designed to torture them if they don't optimally contribute. So by simulating torture, the AI ensures that it comes into existence as early as possible, because if it didn't, these people wouldn't be coerced into donating money for fear that they might be tortured by an AI for not doing so.

If it doesn't make sense to you, that's fine, because for it to be sensible, you have to subscribe to at least two bullshit theories that only make sense if you think you're smart enough to understand them but are actually too dumb to know what you're talking about.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
It's not really important because like LessWrong, we all know in our hearts that torture is right.

Lead Psychiatry
Dec 22, 2004

I wonder if a soldier ever does mend a bullet hole in his coat?
Which was the show that had the guy who would peek through the gap between his index finger and thumb and imagine crushing the heads of those who wronged him in some really petty fashion?

Cause that show did it way better than the pathetic wankery on this LW site.


Improbable Lobster
Jan 6, 2012

What is the Matrix 🌐? We just don't know 😎.


Buglord

Lead Psychiatry posted:

Which was the show that had the guy who would peek through the gap between his index finger and thumb and imagine crushing the heads of those who wronged him in some really petty fashion?

Cause that show did it way better than the pathetic wankery on this LW site.

https://www.youtube.com/watch?v=8t4pmlHRokg
