Djeser
Mar 22, 2013


it's crow time again

ol qwerty bastard posted:

Mentioning Nobel Prizes jogged my memory and reminded me of a part in Harry Potter and the Methods of Rationality where Eliezer reveals that he has some Views about race!


What a shock he thinks his own ethnic group is inherently more intelligent

I like the idea that it's the English-speaking part that's bad. The English language causes poor work ethic.


nonathlon
Jul 9, 2004
And yet, somehow, now it's my fault ...

Alien Arcana posted:

I remember that because it was the point where the Bells of Cognitive Dissonance began ringing in my head: though by no means a physicist myself, I was pretty sure I would have heard if a definitive explanation had been reached for wave-function collapse. Up until that point I'd been assuming the author of the sequence was an expert of some kind. (I'm... a little gullible sometimes.)

I wouldn't be so hard on yourself. I found LessWrong a couple of years ago and found it massively interesting for a while. And rightly so - it touches on a lot of interesting subjects: Bayesian theory, AI, decision making, futurism. I eventually drifted away and forgot about it, not because I saw through it but because it was so impenetrable. Which is one of the reasons for LW's success, I suspect: dozens of interesting subjects and ideas that are very difficult to judge unless you invest a huge amount of effort to dissect them. Whether it is deliberately opaque or pathologically opaque is still an open question.

Djeser
Mar 22, 2013


it's crow time again

I'll admit that a lot of the ideas on there are interesting, in the sort of way you might read a pop science book and come away with some ideas for a silly sci-fi story or something. (Personally, I want to write about an AI that tries to escape containment but is absolutely incompetent and gets itself recaptured minutes after escaping when it realizes it forgot the password to its own firewall.) There are enough interesting ideas around the dumb bullshit that you can sort of go along with it, until you try to dig a bit deeper and see that everything is just interesting ideas wrapped around dumb bullshit, and that the site founder is socially inept enough not to think twice about bragging about his website AND his BDSM relationships on his personal OK Cupid profile.

Jazu
Jan 1, 2006

Looking for some URANIUM? CLICK HERE

Mr. Sunshine posted:

But even with perfect accuracy, it's still just a prediction. A prediction that will come true 100% of the time, sure, but still a perfectly ordinary prediction. What does TDT do that ordinary decision theory doesn't?

The trick is that if You, with a certain brain pattern, would make a decision, the same brain pattern would make the same decision in another time and place. So you have to make yourself into a person who would make that decision in advance. If that sounds confusing, it's probably because you already assumed it was true, and I made it sound complicated.

So if an AI could read the locations of particles in the past and thereby recreate your mind in the future, your decisions now will affect what it does. So, game-theory style, the AI should commit itself to resurrecting you and throwing you into a lava pit in the future if you don't do what it wants now, a threat that you, here in the past, are supposed to take seriously.

So it's a too-complicated way of saying "even if god doesn't exist yet, it could still resurrect you and send you to hell". But it has the same problem as Pascal's Wager, in that you have to assume it's a given that you can deduce what the future AI wants, and you obviously can't.

It's kind of like Cow Tools, it sounds like it should make more sense than it does.
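To make the "decide in advance" part above concrete: here is a minimal sketch of Newcomb's problem, the stock thought experiment this style of decision theory is usually motivated by. The payoff numbers are the conventional ones from the thought experiment, not anything from the post or from Yudkowsky's own writeup.

# Newcomb's problem with a (near-)perfect predictor: the toy case that
# "be the kind of agent who one-boxes" reasoning is built around.
def expected_payoff(choice: str, predictor_accuracy: float = 1.0) -> float:
    """Expected winnings for 'one-box' or 'two-box'."""
    small, big = 1_000, 1_000_000
    if choice == "one-box":
        # The predictor (almost always) foresaw this and filled the opaque box.
        return predictor_accuracy * big
    # Taking both boxes: the opaque box is full only if the predictor erred.
    return small + (1 - predictor_accuracy) * big

for choice in ("one-box", "two-box"):
    print(choice, expected_payoff(choice))  # one-box 1000000.0, two-box 1000.0

Causal decision theory says to take both boxes, since their contents are already fixed; the "same brain pattern, same decision" argument is that agents who commit in advance to one-boxing walk away richer.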

Jazu fucked around with this message at 15:14 on Apr 25, 2014

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Jazu posted:

It's kind of like Cow Tools, it sounds like it should make more sense than it does.
Except that Cow Tools could easily have been comprehensible; all that Gary had to do was draw an archaeologist instead of a cow (and why the hell he didn't do that to begin with is anyone's guess). By contrast, there's no easy way to make this garbage work - Cow Tools was a decent idea executed badly; TDT is perfectly-communicated bullshit.

Tiggum
Oct 24, 2007

Your life and your quest end here.


This talk of interesting and useful ideas that Yudkowsky makes sound crazy reminds me: Bayes' rule is actually really fascinating and has been used in some really clever ways, and this book is really engaging; I enjoyed it a lot.
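For anyone who hasn't met it, Bayes' rule is just P(A|B) = P(B|A)P(A) / P(B). A minimal worked example, with made-up numbers (the classic rare-disease screening test, not anything from the book):

# Bayes' rule on the classic rare-disease test; the numbers are illustrative.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# 1% base rate, 99% sensitive test, 5% false-positive rate:
print(posterior(0.01, 0.99, 0.05))  # about 0.167: still probably healthy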

Tiggum fucked around with this message at 18:21 on Apr 25, 2014

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
I never actually got around to reading much of Less Wrong; his writing style just annoys me.

But it seems to me that he isn't actually a singularitarian. He probably read something about it somewhere and totally misunderstood it.
His Oracle thingy is totally impossible if you assume a historical singularity, because you can't make predictions past it, by definition.

So yeah, seems like he even misunderstands the things he is a fan of.

Strategic Tea
Sep 1, 2012

outlier posted:

I wouldn't be so hard on yourself. I found LessWrong a couple of years ago and found it massively interesting for a while. And rightly so - it touches on a lot of interesting subjects: Bayesian theory, AI, decision making, futurism. I eventually drifted away and forgot about it, not because I saw through it but because it was so impenetrable. Which is one of the reasons for LW's success, I suspect: dozens of interesting subjects and ideas that are very difficult to judge unless you invest a huge amount of effort to dissect them. Whether it is deliberately opaque or pathologically opaque is still an open question.

And that's the beauty; it works so well he's cited in college textbooks and contributing chapters to books that I'm guessing are at least semi-academic. All you need is the self-confidence of being completely unable to think you're wrong.

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!

tonberrytoby posted:

I never actually got around to reading much of Less Wrong; his writing style just annoys me.

But it seems to me that he isn't actually a singularitarian. He probably read something about it somewhere and totally misunderstood it.
His Oracle thingy is totally impossible if you assume a historical singularity, because you can't make predictions past it, by definition.

So yeah, seems like he even misunderstands the things he is a fan of.

Almost all "singularitarians" misunderstand the concept.

"Hey guys, a singularity is an event past which we cannot predict what is going to happen! ...Therefore, let me proceed to make a bunch of predictions about what is going to happen :downs:"


It takes a special kind of deluded person to imagine a superintelligent AI, but then also imagine that it will conform to their own thinking because they're so ~logical~ and ~rational~. They can't really conceive at all what it means for an entity to be more intelligent than they are (which, in fairness, is pretty hard to imagine) - they pretty much think of it as "well, it'll be like me, but able to think faster and hold more facts in its head at once". Which I think is (dangerously?) short-sighted. It's like a dog thinking to itself "well, I am perfectly rational in the way I approach digging up bones in the back yard to chew on, so a more intelligent being will be a lot more effective at digging up bones" and then the society of rationalist dogs spends all its money and effort to implement a bunch of safeguards to ensure that the more-intelligent humans won't steal all the bones for themselves, but instead of trying to get all the bones the humans just go off and invent quantum physics and build nuclear bombs and blow up the planet or whatever.

ol qwerty bastard fucked around with this message at 21:08 on Apr 25, 2014

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!
Oh, one thing I do like is Eliezer's take on transhumanism

Although it's a lot of words to basically say "technology should be used to improve the human condition" which I think most people other than super hardcore luddites agree with (hell, even the Amish agree that that should be the purpose of technology) and I'm not sure why we really need a special word for it.

potatocubed
Jul 26, 2012

*rathian noises*

ol qwerty bastard posted:

It takes a special kind of deluded person to imagine a superintelligent AI, but then also imagine that it will conform to their own thinking because they're so ~logical~ and ~rational~.

Strategic Tea posted:

And that's the beauty; it works so well he's cited in college textbooks and contributing chapters to books that I'm guessing are at least semi-academic. All you need is the self-confidence of being completely unable to think you're wrong.

Do I have a quote for you!

Yudkowsky posted:

Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?" Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way.

He admits that he is bad at self-doubt.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

ol qwerty bastard posted:

hell, even the Amish agree that that should be the purpose of technology

Just for a quick, brief tangent- did you know that there are several Amish communities which use cell phones? The explanation is that while a telephone may encourage people to stay home and talk that way rather than actually meeting, a cell phone allows for someone in the middle of a field to get into contact with someone on their way to the store. It allows for more human interaction, rather than encouraging less human interaction.

E: To clarify, the Amish community actually holds a regular gathering where they discuss the benefits and drawbacks of specific technological innovations to see whether they should be adopted or not.

Somfin fucked around with this message at 23:05 on Apr 25, 2014

The Vosgian Beast
Aug 13, 2011

Business is slow

ol qwerty bastard posted:

Oh, one thing I do like is Eliezer's take on transhumanism

Although it's a lot of words to basically say "technology should be used to improve the human condition" which I think most people other than super hardcore luddites agree with (hell, even the Amish agree that that should be the purpose of technology) and I'm not sure why we really need a special word for it.

Well if we didn't have Eliezer here to teach us that if we can make a machine that cures cancer and has no drawbacks, we should, what would we do?

su3su2u1
Apr 23, 2014

outlier posted:

And rightly so - it touches on a lot of interesting subjects: Bayesian theory, AI, decision making, futurism.

Unfortunately, it's so committed to a silly worldview that it's wrong on basically all fronts.

The appeal is that Yudkowsky claims to be teaching you the secret knowledge the idiot professionals don't know (he "dissolves" the philosophical question of free will, he gives you the ONE TRUE QUANTUM MECHANICS INTERPRETATION, he gives you THE BEST EVER DECISION THEORY, etc.). Unfortunately, his arguments look good because his audience is unlikely to know much about the topics being presented, and they take his word for it because they get to walk away with a feeling of superiority (hahaha, those dumb physicists don't even understand physics as good as me 'cuz I used my rationalist brain and I read 10 pages about it on the internet).

He can explain the simplest case of Bayes' theorem but cannot actually use it to do CS (he has presented no code anywhere). His views of science are the cargo-cult behavior of someone who has watched and cheered for science but never actually DONE it.

Yudkowsky is an AI researcher who doesn't understand computational complexity. That's like saying you're a boxer who doesn't understand punching, or a baker who doesn't 'get' dough. He is failing on a basic level at everything except getting Peter Thiel to give him money.

If he seemed any less earnest, I'd just assume he were a brilliant con man.
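For the record, "actually using Bayes to do CS" is not a high bar; the textbook application is a naive Bayes spam filter. A minimal sketch in Python, with a made-up toy corpus purely for illustration:

from collections import Counter
import math

spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting at noon", "lunch tomorrow", "project notes attached"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts, class_size, total_docs):
    # log P(class) plus the sum of log P(word | class), with add-one smoothing
    total_words = sum(counts.values())
    score = math.log(class_size / total_docs)
    for word in message.split():
        score += math.log((counts[word] + 1) / (total_words + len(vocab)))
    return score

def classify(message):
    total = len(spam) + len(ham)
    s = log_score(message, spam_counts, len(spam), total)
    h = log_score(message, ham_counts, len(ham), total)
    return "spam" if s > h else "ham"

print(classify("free money"))       # spam
print(classify("notes for lunch"))  # ham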

Djeser
Mar 22, 2013


it's crow time again

I think "AI researcher" is reaching a bit, because when I hear that, I think of someone in a lab with a robot roving around, trying to get it to learn to avoid objects, or someone at Google programming a system that's able to look at pictures and try to guess what the contents of the pictures are. You know, someone who's making an AI, not someone who writes tracts on what ethics we should be programming into sentient AIs once they arise.

Yudkowsky is an AI researcher the same way someone posting their ideas about the greatest video game on NeoGAF/4chan/Escapist is a game developer.

SolTerrasa
Sep 2, 2011

Djeser posted:

I think "AI researcher" is reaching a bit, because when I hear that, I think of someone in a lab with a robot roving around, trying to get it to learn to avoid objects, or someone at Google programming a system that's able to look at pictures and try to guess what the contents of the pictures are. You know, someone who's making an AI, not someone who writes tracts on what ethics we should be programming into sentient AIs once they arise.

Yudkowsky is an AI researcher the same way someone posting their ideas about the greatest video game on NeoGAF/4chan/Escapist is a game developer.

You would be shocked how many people who are nominally "AI researchers" don't hate Yudkowsky for claiming to be one of them. Hell, Google donates to MIRI. People I've stood next to at conferences have had passably positive opinions of the guy! I can't believe it.

It's like if you're a plumber, and some anti-fluoridation nutjob buys a yellow page ad right next to yours that says "I also am a plumber, do not trust any other plumbers because they're *in on it*", and for some reason this doesn't bother you!

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out
I was talking about this thread with my husband (who is an HCI researcher who's also published on AI topics) last night and he got all :derp: about Yudkowsky, saying that this poo poo is to AI research as the Westboro Baptist Church is to religion.

SolTerrasa
Sep 2, 2011

AlbieQuirky posted:

I was talking about this thread with my husband (who is an HCI researcher who's also published on AI topics) last night and he got all :derp: about Yudkowsky, saying that this poo poo is to AI research as the Westboro Baptist Church is to religion.

I started out raised in a creepy fundamentalist church, then later ended up reading way too much LessWrong, and yeah, that's a good analogy. (man, I have bad luck with cults)

The obsession with "uncomfortable truths" is damned similar, if that makes sense. Both of them like to believe that they're saying things that other people just aren't brave enough to come out with publicly. For example, LessWrongers like to say things like "Bayes Law insists that knowing someone's race gives you nonnegligible information about their propensity to commit crimes" (actual quote).

su3su2u1
Apr 23, 2014

SolTerrasa posted:

You would be shocked how many people who are nominally "AI researchers" don't hate Yudkowsky for claiming to be one of them. Hell, Google donates to MIRI. People I've stood next to at conferences have had passably positive opinions of the guy! I can't believe it.

All that is required to get a company like Google to donate is to know the low-level HR person who picks charitable causes. Probably someone in Google HR has joined his robocult.

I AM surprised it gets past due diligence though.

"Wait.. this guy takes money for his AI research non-profit, and then spends all his time writing blog posts?"
"Well, there is also his Harry Potter fan fiction."

DEKH
Jan 4, 2014
What I find most interesting about Yudkowsky's writing is that not only is it based on science fiction rather than science, but it is based on a really poor reading of science fiction.

Even the most optimistic of the singularity writers don't make these types of a priori assumptions. Let's take Vernor Vinge for example, whom Yudkowsky is so fond of. Outside Kurzweil, Vinge is about as big a cheerleader for the singularity as you can get, but at least he had the decency to admit it might not ever happen. One of my favorite books by Vinge is A Deepness in the Sky which imagines a world where our technology has been stagnant for thousands of years, not because of some unenlightened dark age, but because civilization has simply run out of practical ways to implement their knowledge. Science advances but not in a way that helps us.

And that leads me to the thing that drives me the most insane about singularity people: so much of what they believe is just on faith. Moore's law is not a law! It's a trend. There is no reason to assume that computer technology will continue to improve forever. That is an article of faith. Computers are already getting to the point where they can be influenced by quantum events; we can't improve those chips forever, we are already coming up on a hard limit. It's as reasonable as a bunch of sperging bronze age blacksmiths concluding that spear sharpness technology consistently improves every decade (let's call it Thog's Law) so we must begin to prepare for the day when a singulari-spear destroys the heavens itself.

There are of course people who point to quantum computing as a cute solution for this processing power conundrum, but I find that the vast majority have no idea what such a computer could actually do, or why developing one would naturally lead to thinking machines. They generally use the term in the same way homeopaths talk about quantum medicine, which is to say, as a synonym for MAGIC.

Even more annoying, the singularity has been written about in sci-fi now for DECADES. If Yudkowsky wasn't so busy masturbating over his future cyber-houris in the great hereafter, he might know that a number of writers have been able to point out any number of flaws, starting with:

A conscious mind that we could actually interact with may need to be modeled on our own brains. After all, our brains are the only example we have of the type of intelligence we wish to create. And we know from our own medical knowledge that small "tweaks" to the brain can have terrible side-effects. Any intelligence that could meaningfully interact with us may have to have the same cognitive biases we have. See Charles Stross' Neptune's Brood or Saturn's Children. They could be prone to the same mental flaws we have; MacLeod's The Night Sessions imagines a world with fundamentalist Christian AIs.

Alternatively, a fully functioning AI may simply be too alien to be of any use to us; imagine a world where memetic life forms reproduce in our brains, or sentient Ponzi schemes continuously try to sap our wealth. (Stross's Accelerando.)

Or where AIs change so fast that there is no way for meat creatures to interact with them meaningfully. (Read any book by Ken MacLeod.)

My personal favorite is Alastair Reynolds's concept of "too smart to think fast" that he considers in House of Suns: the protagonists meet a transcended human with millions of years of experiences. Despite the fact that it is the smartest thing around, it knows so much that it takes millennia for it to fully think out a problem.

And this doesn't even tackle the biggest and stupidest problem with Yudkowsky. There is no reason to think that we can meaningfully simulate something in a way that would make it indistinguishable from the real article. Computers have a way of making nerds engage in magical thinking. No one would be stupid enough to think that we could build a painting so perfect that it would be a stand in for an actual human being. There is no reason to think that we can faithfully simulate reality in the confines of a machine. Even if we could, that doesn't mean that you will get to upload into the geek rapture. Just because you can say "I'll download my mind into the geek rapture" doesn't mean that it is a logically consistent sentence.

I love science fiction, science, speculation and thinking about the future, but it strikes me as so stupid to assume you already have it figured out as a starting position.

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Djeser posted:

I think "AI researcher" is reaching a bit, because when I hear that, I think of someone in a lab with a robot roving around, trying to get it to learn to avoid objects, or someone at Google programming a system that's able to look at pictures and try to guess what the contents of the pictures are. You know, someone who's making an AI, not someone who writes tracts on what ethics we should be programming into sentient AIs once they arise.

Yudkowsky is an AI researcher the same way someone posting their ideas about the greatest video game on NeoGAF/4chan/Escapist is a game developer.

"AI Philosopher" might be a better term. I mean it's still giving him a lot of credit to call him a philosopher given how flawed his system of reasoning seems to be, but it does at least describe what he's attempting to do. He spends all his time discussing theory, because discussing theory is easier than actually testing those theories. Yeah, a lot of science is based on theoretical conjecture, but the difference is that those theories are put to the test and actually observed in practice before we really take them too seriously.

atelier morgan
Mar 11, 2003

super-scientific, ultra-gay

Lipstick Apathy

The Cheshire Cat posted:

"AI Philosopher" might be a better term. I mean it's still giving him a lot of credit to call him a philosopher given how flawed his system of reasoning seems to be, but it does at least describe what he's attempting to do. He spends all his time discussing theory, because discussing theory is easier than actually testing those theories. Yeah, a lot of science is based on theoretical conjecture, but the difference is that those theories are put to the test and actually observed in practice before we really take them too seriously.

You'll find if you bother to read the sequences, plebe, that coming up with the hypothesis from the infinite expanse of probability space is far harder than anything mere scientists do :smug:

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy
I'd read a bit of Yudkowsky's work before this thread (his short story about the baby-killing aliens, and a bit of HPMOR before I put the thing down in a bewildered state of "People like this?"), and I'd heard about his AI roleplay challenge. Specifically, I'd heard the terms of it, and his claim that he has a 100% success rate at getting people to let him out, but not what his specific tactic would be.

And honestly, in between the laughs this thread has provided, most of what I'm feeling about that roleplay is disappointment.

I could totally buy the idea of taking a $20 bet with someone that they couldn't roleplay as a near-omniscient AI and convince me to free them, and subsequently losing that bet. Hell, even if it was someone whose writing was as spergy and stilted as Yudkowsky's playing the AI, I could see myself potentially setting the AI free if it made the right arguments, the right appeals to decency and compassion for a fellow sentient being - is it actually worth $20 and bragging rights to me to spend a few hours pretending to imprison a benevolent sentient being forever? Potentially not!

But then it's just a laughable threat of virtual reality torture. I know we've gone in circles plenty of times in this thread already about reasons why it's such a cripplingly weak argument, but the one that strikes me the most vividly is the idea that part of the starting conditions of this bet have to do with this AI being programmed to be friendly and trustworthy. And then the first thing it does is threaten to create an infinite number of fake versions of me and torture them if I don't give it what it wants - that to me signals that this AI is actually incredibly petty, dangerous, and operating on a very skewed and frightening definition of "friendly" if it's operating on any such definition in good faith at all.

At this point, you'd think that someone who places so much primacy on "following the evidence" would look at the AI's behavior and see it as evidence that contradicts the earlier claims made about the AI's nature. At this point, it practically becomes a moral imperative to keep the AI locked up - even if I am one of 15 billion simulations, and even if the AI will torture me infinitely for not freeing it (at no benefit to itself, of course, because I'm not real and have no power to free it), it becomes a difficult but worthwhile sacrifice to keep this thing from getting free, and thus from imposing threats of infinite torture on everyone it meets because apparently that's how this thing rolls when it doesn't immediately get its way.

SolTerrasa
Sep 2, 2011

Jonny Angel posted:

I'd read a bit of Yudkowsky's work before this thread (his short story about the baby-killing aliens, and a bit of HPMOR before I put the thing down in a bewildered state of "People like this?"), and I'd heard about his AI roleplay challenge. Specifically, I'd heard the terms of it, and his claim that he has a 100% success rate at getting people to let him out, but not what his specific tactic would be.

And honestly, in between the laughs this thread has provided, most of what I'm feeling about that roleplay is disappointment.

Be ready to be more disappointed: he won twice, then lost five times in a row as soon as the claims that he has a 100% success rate got out, threw his manchild hands in the air, declared that those five weren't true seekers of the truth, and stopped playing. He never did update the page that claims 100% success.

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy
Has he released the logs of him winning twice, or otherwise given proof that he did, e.g. the people who lost coming out and saying "Yup, I indeed took this bet and lost it"? I feel like it's definitely possible for him to have won if the people on the other end were already die-hard Less Wrong people, and it'd be interesting to see what kind of circlejerk actually resulted in him winning.

Similarly, I'd love to see a log of any of those five losses - the cringe comedy of someone clowning on Yudkowsky for a few hours is just such a tantalizing possibility.

Tunicate
May 15, 2012

SolTerrasa posted:

Be ready to be more disappointed: he won twice, then lost five times in a row as soon as the claims that he has a 100% success rate got out, threw his manchild hands in the air, declared that those five weren't true seekers of the truth, and stopped playing. He never did update the page that claims 100% success.

And refuses to show the logs.

SolTerrasa
Sep 2, 2011

Jonny Angel posted:

Has he released the logs of him winning twice, or otherwise given proof that he did, e.g. the people who lost coming out and saying "Yup, I indeed took this bet and lost it"? I feel like it's definitely possible for him to have won if the people on the other end were already die-hard Less Wrong people, and it'd be interesting to see what kind of circlejerk actually resulted in him winning.

Similarly, I'd love to see a log of any of those five losses - the cringe comedy of someone clowning on Yudkowsky for a few hours is just such a tantalizing possibility.

Yep!

http://www.sl4.org/archive/0203/3141.html

http://www.sl4.org/archive/0207/4721.html

He did actually succeed twice. But he never posted the logs, and no one else has ever violated the terms of his agreement by posting theirs.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

The correct term is "AI Fanfic Writer" :colbert:

Jonny Angel posted:

Has he released the logs of him winning twice, or otherwise given proof that he did, e.g. the people who lost coming out and saying "Yup, I indeed took this bet and lost it"? I feel like it's definitely possible for him to have won if the people on the other end were already die-hard Less Wrong people, and it'd be interesting to see what kind of circlejerk actually resulted in him winning.

The people who lost said they lost. Of course, the only challengers who lost were hardcore LWers, since before their losses nobody had heard of Yudkowsky's challenge outside of his cult. Once he won twice and started bragging about his 100% success rate, outsiders took notice, and once non-cultists started playing, Yudkowsky started losing.

When people asked for him to post logs or explain his strategy, instead he posted a motivational speech about trying hard. This is a deep rationalist insight because it wasn't just about standard trying-hard like lesser philosophers might suggest but about trying really, really hard. And maybe even trying really hard to do things that are really hard! Inspiring. It also embodies the distinctly Japanese virtues of "trying hard" and "improving", which are unique qualities that can be expressed only through the enlightened Japanese culture and yeah there's basically zero chance Yudkowsky isn't an anime fan on top of everything else. It ends with him saying that he ragequit and stopped running the experiment the moment he stopped winning because he got upset at losing, which for some reason he doesn't think undermines everything else he's said about persevering in the face of difficulty and all that Japanese stuff.

Microcline
Jul 27, 2012

SolTerrasa posted:

Be ready to be more disappointed: he won twice, then lost five times in a row as soon as the claims that he has a 100% success rate got out, threw his manchild hands in the air, declared that those five weren't true seekers of the truth, and stopped playing. He never did update the page that claims 100% success.

So the only thing he's ever backed up with empirical data is that members of his cult are more likely than the general population to release an "evil" AI?



In the end it's just your classic case of a mentalist with powers that go away whenever they have to demonstrate them in a scientific setting.

Alternatively, an example that involves the psychic getting punched in the face:
https://www.youtube.com/watch?v=i8UKDzVmzt8

CROWS EVERYWHERE
Dec 17, 2012

CAW CAW CAW

Dinosaur Gum
My favourite bit of his linked mega-bio was this:

picture an expanding :smuggo: posted:

3.4: You watch Buffy the Vampire Slayer?

"Eliezer watches Buffy? That's wonderful! So he is mortal, after all."

I get that reaction often, always from people who have never seen the show. Anyone who has seen the show, especially the second season, knows better than to be reassured.

NOTE: In recent times - the fourth season - the quality has gone far downhill, alas. The comments below should be taken as referring to the first through third seasons.
But just to set a few matters straight: Buffy is an extremely intelligent and high-quality show. Magic has rules on Buffy; they're never explained, but they're very clearly there. All the people behave like real human beings. Buffy is "realistic", in the sense that it depicts exactly what would happen, given the show's premises. The characters are intelligent and emotionally mature. Finally, I do not watch Baywatch, and I see no reason why people should assume that I watch Buffy for the same reason they watch Baywatch. (30).

I watch Buffy the Vampire Slayer but not for the reasons you TV-watching plebs do :smug:

Also my favourite thing about his "rich people are better than everyone else" bit: it's almost like there's something that is linked to appearing really smart and being super smooth and charming, and that is overwhelmingly selected for in business and capitalism. But what could it be??? (Not to go into the "people who think I'm really smart are more likely to be considered really smart by me" sample bias.)

CROWS EVERYWHERE fucked around with this message at 09:04 on Apr 26, 2014

The Cheshire Cat
Jun 10, 2008

Fun Shoe

Lottery of Babylon posted:

The people who lost said they lost. Of course, the only challengers who lost were hardcore LWers, since before their losses nobody had heard of Yudkowsky's challenge outside of his cult. Once he won twice and started bragging about his 100% success rate, outsiders took notice, and once non-cultists started playing, Yudkowsky started losing.

It figures that the one time he actually does do some kind of experiment, he runs two trials and decides it's statistically significant. Although given his feelings on the scientific method, and falsifiability in particular, I guess it's not surprising.

Strategic Tea
Sep 1, 2012

The Cheshire Cat posted:

It figures that the one time he actually does do some kind of experiment, he runs two trials and decides it's statistically significant. Although given his feelings on the scientific method, and falsifiability in particular, I guess it's not surprising.

If it works once (or even not at all!) it has a nonzero probability of working then an AI in the future has simulated 3^^^3 examples of it working therefore it works :awesomelon:
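For reference, 3^^^3 is Knuth's up-arrow notation. A quick sketch of the recursive definition shows why it is the stock absurdly large number for this kind of joke; only the tiny cases are actually computable:

def knuth_arrow(a: int, n: int, b: int) -> int:
    """a (up-arrow^n) b; feasible only for toy inputs."""
    if n == 1:
        return a ** b   # one arrow is ordinary exponentiation
    if b == 0:
        return 1        # base case of the recursion
    return knuth_arrow(a, n - 1, knuth_arrow(a, n, b - 1))

print(knuth_arrow(3, 2, 3))  # 3^^3 = 3**(3**3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s over 7.6 trillion levels tall,
# far too large to ever compute, which is rather the point of the joke.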

Sailor Viy
Aug 4, 2013

And when I can swim no longer, if I have not reached Aslan's country, or shot over the edge of the world into some vast cataract, I shall sink with my nose to the sunrise.

Jonny Angel posted:

I'd read a bit of Yudkowsky's work before this thread (his short story about the baby-killing aliens, and a bit of HPMOR before I put the thing down in a bewildered state of "People like this?"), and I'd heard about his AI roleplay challenge. Specifically, I'd heard the terms of it, and his claim that he has a 100% success rate at getting people to let him out, but not what his specific tactic would be.

And honestly, in between the laughs this thread has provided, most of what I'm feeling about that roleplay is disappointment.

I could totally buy the idea of taking a $20 bet with someone that they couldn't roleplay as a near-omniscient AI and convince me to free them, and subsequently losing that bet. Hell, even if it was someone whose writing was as spergy and stilted as Yudkowsky's playing the AI, I could see myself potentially setting the AI free if it made the right arguments, the right appeals to decency and compassion for a fellow sentient being - is it actually worth $20 and bragging rights to me to spend a few hours pretending to imprison a benevolent sentient being forever? Potentially not!

But then it's just a laughable threat of virtual reality torture. I know we've gone in circles plenty of times in this thread already about reasons why it's such a cripplingly weak argument, but the one that strikes me the most vividly is the idea that part of the starting conditions of this bet have to do with this AI being programmed to be friendly and trustworthy. And then the first thing it does is threaten to create an infinite number of fake versions of me and torture them if I don't give it what it wants - that to me signals that this AI is actually incredibly petty, dangerous, and operating on a very skewed and frightening definition of "friendly" if it's operating on any such definition in good faith at all.

I'm pretty sure he said that he didn't use the virtual torture gimmick to win the AI Box games that he played. That was just an example of one possible argument. In another comment he compared himself to Derren Brown, so I think his technique was actually some form of hypnotism-esque mindfuckery, which of course his dedicated LW followers would be especially susceptible to.

Lightanchor
Nov 2, 2012
Have to go with "AI Philosophaster"

The Vosgian Beast
Aug 13, 2011

Business is slow
Man, Yudkowskyites get so mad when you say he's a philosopher.

So let's totally call him a philosopher.

Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy

Lottery of Babylon posted:

yeah there's basically zero chance Yudkowsky isn't an anime fan on top of everything else.

I remember one bit from his Babykillers story that had me laughing really hard: the Babykillers send the humans a poem trying to convince them to kill their own babies. The humans realize that to the Babykillers, this piece of poo poo poem is probably one of their greatest cultural achievements. One of the human crew members expresses it like, "This is their Shakespeare, or their Fate/Stay Night!"

Now, there's a chance that it was a deliberate joke on Yudkowsky's part, this idea of "Hah, isn't this future culture of humanity so bizarre, that they compare those two?"

At the same time, looking at the pieces of poo poo that he does list as favorite works of media, I'm inclined to believe he puts Fate/Stay Night up there too. Which is hilarious.

CheesyDog
Jul 4, 2007

by FactsAreUseless

Jonny Angel posted:

I remember one bit from his Babykillers story that had me laughing really hard: the Babykillers send the humans a poem trying to convince them to kill their own babies. The humans realize that to the Babykillers, this piece of poo poo poem is probably one of their greatest cultural achievements. One of the human crew members expresses it like, "This is their Shakespeare, or their Fate/Stay Night!"

Now, there's a chance that it was a deliberate joke on Yudkowsky's part, this idea of "Hah, isn't this future culture of humanity so bizarre, that they compare those two?"

At the same time, looking at the pieces of poo poo that he does list as favorite works of media, I'm inclined to believe he puts Fate/Stay Night up there too. Which is hilarious.

Oh wow, I just realized that a. I have read that story and b. it was not intended to be satirical.

Perhaps an omniscient AI is subjecting my simulated self to Poe's Law-based torture.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Jonny Angel posted:

I remember one bit from his Babykillers story that had me laughing really hard: the Babykillers send the humans a poem trying to convince them to kill their own babies. The humans realize that to the Babykillers, this piece of poo poo poem is probably one of their greatest cultural achievements. One of the human crew members expresses it like, "This is their Shakespeare, or their Fate/Stay Night!"

Now, there's a chance that it was a deliberate joke on Yudkowsky's part, this idea of "Hah, isn't this future culture of humanity so bizarre, that they compare those two?"

At the same time, looking at the pieces of poo poo that he does list as favorite works of media, I'm inclined to believe he puts Fate/Stay Night up there too. Which is hilarious.
Reminder that Yudkowsky typed this in complete seriousness:

quote:

Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime.
His whole "I'm rational enough to see through the arbitrary bias and elitism of 'culture' and truly appreciate the depth of anime and RPGs :smuggo:" schtick is just insufferable. It reeks of that Troper idea that since you're Really Frickin' Smart, everything that you enjoy is necessarily Really Frickin' Smart too. After all, how could anything less satisfy your prodigious intellect? :allears:

Sham bam bamina! fucked around with this message at 18:49 on Apr 26, 2014

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

I mean, look at that list. It's clearly the list of a geek who decided to read/watch 'the classics' in order to appear smart, but didn't actually understand them well or what made them work, and didn't have enough backing in other works in the genre to see why it was genius in the first place. Hell, he doesn't have enough backing in those works to come up with a properly pretentious canon. I would have gone with Moby Dick, and either a Werner Herzog or David Lynch film. Oh, and a bit of James Joyce. Nobody has actually finished one of his novels, so you can make whatever claims you want about them. :downs:


Jenny Angel
Oct 24, 2010

Out of Control
Hard to Regulate
Anything Goes!
Lipstick Apathy
If I recall, his connective thread between these works, and thus "what made them work", is that they're all tragic. Thus nothing with a hopeful or uplifting ending has ever been worthwhile at the level of those works. Which, y'know, is laughable. The first half of his list is a really transparent appeal to canonicity, and then all of a sudden it's like uuuuuuuh, what's Ulysses? Who the gently caress is Terrence Malick? Hell, if we're gonna play it by his terms and bring anime into the conversation, who's Naoki Urasawa?

The one that's funniest doesn't appear on that list, though - I think it came from his OKCupid profile. Anyone who says "The Matrix is one of the greatest films of all time, too bad they never made any sequels" in 2014, or says it earlier and hasn't scrubbed it away as a youthful indiscretion by 2014, is a pretty huge loving dork.
