  • Locked thread
Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Yudkowsky writes Harry Potter fanfiction to showcase his hyper-rational philosophy! Here are two eleven-year-olds bonding on the train to Hogwarts:

A Rationalist posted:

"Hey, Draco, you know what I bet is even better for becoming friends than exchanging secrets? Committing murder."

"I have a tutor who says that," Draco allowed. He reached inside his robes and scratched himself with an easy, natural motion. "Who've you got in mind?"

Harry slammed The Quibbler down hard on the picnic table. "The guy who came up with this headline."

Draco groaned. "Not a guy. A girl. A ten-year-old girl, can you believe it? She went nuts after her mother died and her father, who owns this newspaper, is convinced that she's a seer, so when he doesn't know he asks Luna Lovegood and believes anything she says."

Not really thinking about it, Harry pulled the ring on his next can of Comed-Tea and prepared to drink. "Are you kidding me? That's even worse than Muggle journalism, which I would have thought was physically impossible."

Draco snarled. "She has some sort of perverse obsession about the Malfoys, too, and her father is politically opposed to us so he prints every word. As soon as I'm old enough I'm going to rape her."


Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Torture vs Dust Specks

Yudkowsky posted:

Now here's the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

I think the answer is obvious. How about you?

Robin Hanson posted:

Wow. The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?

Brandon Reinhart posted:

Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange an unexpected existential side effects this may have. It's worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be drat SURE you are right. 3^^^3 makes even incredibly small doubts significant.

James D. Miller posted:

Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one, (b) You face an additional probability 1/(3^^^3) of being tortured for 50 years, (c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet, 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).

Yudkowsky posted:

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be "acclimating" to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?

I may be influenced by having previously dealt with existential risks and people's tendency to ignore them.

"We know in our hearts that torture is is the right choice"

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Slime posted:

These are people who have a very naive view of utilitarianism. They seem to forget that not only would it be hilariously impossible to quantify suffering, but that even if you could, 50,000 people suffering at a magnitude of 1 is better than 1 person suffering at a magnitude of 40,000. Getting a speck of dust in your eye is momentarily annoying, but a minute later you'll probably forget it ever happened. Torture a man for 50 years and the damage is permanent, assuming he's still alive at the end of it. Minor amounts of suffering distributed equally among the population would be far easier to soothe and heal.

Basically, even if you could quantify suffering and reduce it to a mere mathematical exercise like they seem to think you can, they'd still be loving wrong.

Anyone who actually studies game theory or decision theory or ethical philosophy or any related field will almost immediately come across the concept of the "minimax" (or "maximin") rule, which says you should minimize your maximum loss, improve the worst-case scenario, and/or make the most disadvantaged members of society as advantaged as possible, depending on how it is framed. And Yudkowsky fancies himself quite learned in such topics, to the point of inventing his own "Timeless Decision Theory" to correct what he perceives as flaws in other decision theories. But since he's "self-taught" (read: a dropout) and has minimal contact with people doing serious work in such fields (read: has never produced anything of value to anyone), he's never encountered even basic ideas like the minimax.
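
For anyone who hasn't run into it, here's roughly what a minimax rule does with the dust specks problem. (A quick Python sketch; the "harm" numbers and the 10,000 stand-in for 3^^^3 are obviously made up for illustration.)

code:

    # Minimax: look at each option's single worst outcome for any one person,
    # then pick the option whose worst outcome is least bad.
    # The harm numbers here are invented purely for illustration.
    options = {
        "torture one person for 50 years": [1_000_000],   # one catastrophic harm
        "dust specks for 3^^^3 people": [1] * 10_000,     # stand-in for a vast number of trivial harms
    }

    def minimax_choice(options):
        return min(options, key=lambda name: max(options[name]))

    print(minimax_choice(options))   # -> the dust specks: worst case 1 vs worst case 1,000,000

The point is that minimax cares about the worst thing that happens to any single person, which is exactly the consideration that straight summing-across-3^^^3-people throws away.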

Instead, Yudkowsky goes in the opposite direction and argues that man, if you're gonna be tortured at magnitude 40000 for fifty years, sooner or later you're gonna get used to unspeakable torture and your suffering will only feel like magnitude 39000. So instead of weighting his calculations away from the horribly endless torture scenario like any sensible person would, he weights them toward that scenario.

Timeless Decision Theory, which for some reason the lamestream "academics" haven't given the respect it deserves, exists solely to apply to situations in which a hyper-intelligent AI has simulated and modeled and predicted all of your future behavior with 100% accuracy and has decided to reward you if and only if you make what seem to be bad decisions. This obviously isn't a situation that most people encounter very often, and even if it were regular decision theory could handle it fine just by pretending that the one-time games are iterated. It also has some major holes: if a computer demands that you hand over all your money because a hypothetical version of you in an alternate future that never happened and now never will happen totally would have promised you would, Yudkowsky's theory demands that you hand over all your money for nothing.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Phaedrus posted:

Weakly related epiphany: Hannibal Lecter is the original prototype of an intelligence-in-a-box wanting to be let out, in "The Silence of the Lambs"

Yudkowsky posted:

When I first watched that part where he convinces a fellow prisoner to commit suicide just by talking to them, I thought to myself, "Let's see him do it over a text-only IRC channel."

...I'm not a psychopath, I'm just very competitive.

I dunno, I think if you weren't a psychopath your hypotheticals wouldn't all begin with "an incredibly intelligent AI chooses to torture you in the cyber-afterlife for all eternity". That, and if you weren't a psychopath you wouldn't find yourself in positions where you need to say "I'm not a psychopath".

Djeser posted:

Yudkowsky really, REALLY likes that kind of argument. It showed up like three times in the post I made. He thinks that in any possible situation, a very large number of repeated trials makes up for effectively impossible scenarios.

This is exactly how he argues. For some reason he always scales up only one side of the decision but not the other. In your example, he increases the number of rolls to a billion billion, but keeps the price to prevent them all at a flat $20. In the Dust Specks vs Torture example, he says that torture is right because if everyone were offered the same decision simultaneously and everyone chose the dust specks then everyone would get a lot of sand in their eye, which I guess would be bad... but ignores that if they all chose torture, then the entire human race would be tortured to insanity for fifty years, which seems much, much worse than "everyone goes blind from dust".


Yudkowsky's areas of "expertise" aren't limited to AI, game theory, ethics, philosophy, probability, and neuroscience. He also has a lot to say about literature!

Yudkowsky posted:

In one sense, it's clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: The Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime. Is there a single story on the list that isn't tragic?

Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren't serious, aren't artistically great enough to be worthy of praise—but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? It's a puzzle either way you look at it.

When I was a child I couldn't write fiction because I wrote things to go well for my characters—just like I wanted things to go well in real life. Which I was cured of by Orson Scott Card: Oh, I said to myself, that's what I've been doing wrong, my characters aren't hurting. Even then, I didn't realize that the microstructure of a plot works the same way—until Jack Bickham said that every scene must end in disaster. Here I'd been trying to set up problems and resolve them, instead of making them worse...

You simply don't optimize a story the way you optimize a real life.

We need a hybrid of :engleft: and :downs: desperately.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Wales Grey posted:

I can't work my head around this theory because how would simulating torture cause a 'friendly' AI to come into existence sooner?

It doesn't make sense to you because you haven't drowned yourself in the kool-aid of Timeless Decision Theory.

Remember the basic scenario in Timeless Decision Theory: There are two boxes, and you can choose to take one or both of them. One always contains $1000; the other contains $0 if a superintelligent AI predicted you would take both boxes and $1000000 if the AI predicted you would take only one box. The AI filled the boxes before you made your decision. However, the AI is so smart and so good at simulating you that it can predict your actions perfectly and cannot possibly have guessed wrong. Because the AI is so smart, your actions in the future can influence the AI's actions in the past, because the AI is smart enough to see your future actions while the AI itself is still in the past.

In other words: any sufficiently advanced technology is indistinguishable from time-travel, and in particular the ability to predict the future allows the future to affect you. This sounds dumb and it basically assumes that we somehow reach the predictive power of Laplace's demon, but that's Yudkowsky's premise.
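
In case the box setup sounds garbled, here's the whole payoff structure in a few lines of Python (a sketch, using the standard Newcomb dollar amounts from the scenario above):

code:

    # Newcomb's problem with a perfect predictor: the opaque box holds
    # $1,000,000 only if the predictor foresaw you taking just that box.
    def payoff(your_choice):
        predicted = your_choice   # "perfect predictor" means the prediction always matches your actual choice
        opaque_box = 1_000_000 if predicted == "one-box" else 0
        clear_box = 1_000 if your_choice == "two-box" else 0
        return opaque_box + clear_box

    print(payoff("one-box"))   # 1000000
    print(payoff("two-box"))   # 1000

That one line where the prediction is just set equal to your choice is the entire "time travel": grant a literally perfect predictor and one-boxing trivially wins; take the predictor away and the whole timeless mystique evaporates.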

Now we move on to Roko's basilisk. If the AI weren't going to do any torturing, then we would have no motivation to donate to the Yudkowsky Wank Fund, so the AI's existence might be delayed because of course it can't arise soon enough without Yudkowsky's help. But if we thought it was going to cyber-torture our cyber-souls in cyber-hell for a trillion cyber-eternities unless we cyber-donated enough cyber-money (well, actually, just regular money) to Yudkowsky, then we might donate out of fear. And since Timeless Decision Theory says that the future can affect the past, the AI should torture people in the future because it will make us donate more money in the past, bringing the AI into existence sooner. And since the AI is infinitely smart and infinitely "friendly", it doesn't matter how many cyber-souls it cyber-tortures: it's so great for the world that any amount of torture is worth making it exist even one minute sooner, so all this torture doesn't keep it from being friendly because it's still reducing overall suffering.

Now, you might notice a small flaw in this argument: it's all bullshit. But even if you accept Yudkowsky's weird internal logic, there's still a hole: the future can only directly influence the past if there is an agent in the past with the predictive power of Laplace's demon who can see the future with 100% accuracy and respond to it accordingly. Without that assumption, the future can't really affect the past; by the time the future arrives, the past has already happened, and no amount of torture will change what happened. Yudkowsky normally asserts that the AI has this sort of predictive power, but here the AI is stuck in the future, and it is we mortal humans in the past who are trying to predict its actions. But since we don't have super-AI-simulation power, we can't see the future clearly enough for the future to directly affect us now, so whether or not the AI actually tortures in the future has no impact on what we do today.

Yuddites don't see this because they're so used to the "time is nonlinear" moon logic of the Timeless Decision Theory they worship that they've forgotten its basic assumptions and don't understand even its internal logic. Since they're too busy pretending to be smart to actually be smart, they aren't capable of going back and noticing these gaps. And the flaws can't be revealed by discussion because Yudkowsky, in his hyper-rationalist wisdom, has forbidden and censored all discussion.

In other words, it doesn't make sense because it's dumb, but even if you bought into Yudkowsky's craziness it would still not make sense and it would still be dumb.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Strategic Tea posted:

Whether or not the AI in the future actually tortures anything doesn't matter

That's exactly my point. The Timeless Decision Theory stuff all depends on the perfect future-predicting simulations because that's the only way to make what actually happens in the future matter to the past. Since whether the AI actually tortures anything doesn't matter, there's no reason for it to torture anything because it won't actually help, and the entire house of cards falls apart.

At least some versions of the Christian Hell have a theological excuse for Hell's existence even if nobody living were scared of it: sinners need to be punished or isolated from God or something. The "friendly" AI doesn't even have that excuse; without TDT, it has no motivation to torture anyone.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Krotera posted:

If the AI has a 1/1000 chance of existing, and it generates 1000 clones of you, then that's the same as running 1000 1/1000 trials -- i.e., you have a 63% chance of being an AI simulation. So, the more unlikely the AI is, the more it can simulate you to compensate.

That's not quite what he's doing (it's not independent trials). I hate to say it, but the math itself he's using isn't really that bad: If you buy his underlying assumptions, it does become overwhelmingly likely that you are in a torture simulation, since the overwhelming measure of you's are in torture-sims.
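
For anyone who wants the numbers, here's my reading of the arithmetic on both sides (the 1/1000 and the billion copies are placeholder figures, and the "measure" counting is my reconstruction of what they're doing):

code:

    # Krotera's reading: 1000 independent 1/1000 trials.
    p_krotera = 1 - (1 - 1/1000) ** 1000
    print(round(p_krotera, 3))              # ~0.632

    # The counting LW actually leans on (assuming you buy every single premise):
    # weight each possible world by its probability, count the copies of "you"
    # in it, and ask what fraction of that total is a torture-sim.
    p_ai = 1 / 1000                         # their probability that the AI ever exists
    copies_if_ai = 10 ** 9                  # simulations it allegedly runs of you
    sim_measure = p_ai * copies_if_ai       # expected number of simulated yous
    real_measure = 1                        # the one flesh-and-blood you
    print(sim_measure / (sim_measure + real_measure))   # ~0.999999, "overwhelmingly likely"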

The problem is buried in his underlying assumptions:
  • The probability of an AI being created is actually whatever Yudkowsky says it is.
  • The AI, once created, will create billions of torture simulations of you.
  • Simulations of you are so perfectly accurate that they are indistinguishable in every way from the real you.
  • Simulations of you are not philosophical zombies.
  • The only scenario in which simulations of you are created is the one in which this AI is created, and those simulations are used only for torture.
Remove any of these assumptions - and there's really no reason to believe any of them is true, let alone all of them - and it all falls apart.

It's Pascal's Wager again, except with "God exists" replaced with "AI exists" and "intensity of suffering in hell" replaced with "number of torture-simulations". And it falls apart for the same reasons Pascal's Wager does, plus several extra reasons that God doesn't need to deal with.

Saint Drogo posted:

I thought LW/Yudkowsky took the basilisk bullshit seriously not because they saw it as legit but because it caused serious distress to some members who were now pissing themselves at the prospect of being tortured, since, y'know, they might be simulations guys. :cripes:

If that were the case, the correct thing to do to calm his upset members would be to point out the gaping holes in the basilisk argument, since he claims not to believe the argument and to know what the holes are. Instead, by locking down all discussion and treating it as a full-blown :supaburn:MEMETIC HAZARD:supaburn: he's basically telling them that they're right to panic and that it really is a serious threat that should terrify them.

That, and if he didn't want people to be upset at the prospect of being tortured because they might be in a simulation then he probably shouldn't begin all of his hypotheticals with "An AI simulates and tortures you".

Lottery of Babylon fucked around with this message at 22:49 on Apr 20, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mercrom posted:

Sorry to nitpick here, but nothing Mr Harry Potter fanfiction has ever written is anywhere near as repulsive or idiotic as taking the concept of a philosophical zombie seriously. It's basically the same thing as solipsism.

It's idiotic when you're talking about the real world and wondering if that real organic person sitting across from you is a real "person". But when your argument hinges on "Real live human beings feel exactly the same as a subroutine in FriendlyTortureBot's simulation, so I myself might be a fake subroutine-person right now", it doesn't seem unreasonable to ask whether the binary strings in FTB's memory banks feel the same way a person does, or have any consciousness at all. After all, a brain and a microchip are very different types of hardware.

I guess it's really just another way of being skeptical of how perfect the ~perfect simulations~ that Yudkowsky asserts FTB will have really could be. (After all, wouldn't a perfect simulation of the universe require the entire universe to simulate? Like the old "the only perfect map of the territory is the territory itself" thing?) FFS, in one of his scenarios the AI manages to construct a perfect simulation of you despite only being able to interact with you through a text-only terminal.

Has Yudkowsky ever explained how these perfect simulations are meant to work? Or does he just wave his hands and say "AI's are really smart! I watched the Matrix once!"

Lottery of Babylon fucked around with this message at 22:42 on Apr 21, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

ThePlague-Daemon posted:

If I think Roko's basilisk is bullshit, and the AI can't actually torture my original consciousness and has to rely on me being afraid for the consciousness of a simulation of me, why would it bother?

It wouldn't; torture only "works" on true believers who think torture works. That being said, I don't *think* it's meant to make you fear for the simulation of you, it's meant to make you fear that you are a simulation of you and will be tortured in five minutes if fake-you doesn't hit donate now now NOW

ThePlague-Daemon posted:

It can't justify torturing simulations of people if that possibility didn't motivate the original, and since it's in the future it already knows who wasn't motivated. But then, who is it supposed to punish?

Do they really buy into this?

It needs to torture people in the future so that we, in the past, can see it torturing people in the future, which will scare us into donating more. What's that, you say? We can't actually see the future that way, so what it actually does in the future is irrelevant and it has zero motivation to actually commit mass torture? Congratulations, you've put more thought into this than Yudkowsky.

Even if we could simulate the future perfectly, we'd need to see in our simulation the AI creating its own sub-simulations in which fake-past-us is running sub-sub-simulations to see the second-generation-fake-AI running its own sub-sub-sub-simulations in which we are running sub-sub-sub-sub-simulations of the AI running sub-sub-sub-sub-sub-simulations... all of which is meant to make us afraid that we are ourselves in a simulation and need an extra layer of sub- added to all of the above. The basilisk demands not only simulations but also infinitely-nested simulations, which calls attention to how implausible Yudkowsky's perfect simulations really are.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mercrom posted:

Do you know how consciousness works? How pain works? Maybe it's inherent in carbon atoms. Maybe it's inherent in human DNA. Maybe it's inherent in just me personally.

This is a better argument honestly.

It's really easy to prove the basilisk is retarded without throwing the kitchen sink of bad arguments at it.

This is not about AI or simulations, but about the kind of thinking that allows people to look at a person in pain and think "that isn't a human being like us, like me", just so they can treat them like animals. Even if you are only using it to devalue people in a stupid hypothetical it's still wrong.

Calm down, it was just a way of saying that the simulations aren't actually as good as the singularity-rapturists think they are. I didn't realize that that was such a loaded term to use or that it had been involved in justifying genocides or whatever. All I meant was that I don't believe in the magic "AI somehow creates infinite processing power and figures out how to perfectly duplicate the universe" step, therefore I don't believe in the "AI creates simulations so perfect that I myself might unknowingly be one" step.

Lottery of Babylon fucked around with this message at 12:43 on Apr 22, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

I've never understood why "We have a machine capable of improving itself" is assumed to imply "Growth of our power increases exponentially without bound and we become all-powerful immortal eutopian cybergods". We already have a machine capable of improving itself. It's me. In the last year my physical strength and my knowledge have grown through exercise and study; I have improved myself. But that doesn't mean I can bring about the singularity rapture.

grate deceiver posted:

But if I'm simulated (which is most likely in this batshit scenario), then nothing I do makes any difference. So far the AI didn't even bother to send me a letter or w/e, much less torture me, so why should I care?

If I'm real, then obviously the Machine doesn't exist and who gives a gently caress? Idk, is it going to kill me from the future or something?

You find yourself in one of two identical rooms. Both rooms have a red door and a blue door, and you have no way of telling which of the two rooms you're in. (Perhaps an exact duplicate of you has just materialized in the other room, but for now it doesn't matter.)

In Room R, if you choose the red door you must give your good friend Bob $5 and Bob is happy, but if you choose the blue door you get to keep your money and Bob is sad.

In Room S, if you choose the red door you walk away and nothing happens, but if you choose the blue door your good friend Bob appears, tasers you, and keeps you in his secret underground torture chamber for fifty years. Bob neither enjoys torturing you nor is inconvenienced by torturing you.

If you're in Room S, neither of your options helps or harms Bob in any way. And if you're in Room R, you're not in danger of suffering horrible torture. So your good friend Bob can only directly threaten you in the situation where Bob doesn't care which option you choose, and you can only help Bob in the situation where Bob is powerless. But because the rooms are identical, you don't know which room you're in. You have to weigh the option of choosing the blue door (and possibly getting tortured) against the option of choosing the red door (and possibly giving Bob $5).

Bob is hoping that even if you're (unknowingly) in Room R the threat of possible torture will be scary enough that you'll choose the red door and give him your money. If you're in Room S he doesn't care which option you choose, but he needs to let you make your decision so that the situations appear identical from your perspective.

Now, suppose that instead of one room S, there are a million billion gazillion room S's, but there's still only one room R. And if you like, suppose the rooms are connected by glass walls, and you can see duplicates of yourself faced with the same decision in all of the other rooms, so you know the scenario actually does look identical in each of the rooms. How likely is it that you're the one who happened to be in Room R? Not very likely. So you choose the red door because you're scared of being tortured by the blue door. In fact, each version of you in each room chooses the red door for that reason, since you're identical people in an identical situation. Which means all copies of you in the S-rooms get to avoid torture, but also means that the you in Room R chose the red door and needs to pay Bob $5.

Of course, you (and all copies of you) can decide on the blue door instead, but then all but one of you will be tortured by Bob (who is your very good friend and takes no pleasure in this). How lucky are you feeling?
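
To put placeholder numbers on it (the price tag I'm slapping on fifty years in Bob's basement is obviously invented):

code:

    # One Room R, N identical Room S's; you can't tell which room holds you,
    # so the chance you're the single copy in Room R is 1/(N+1).
    N = 10 ** 9                    # "a million billion gazillion", give or take
    p_room_r = 1 / (N + 1)
    p_room_s = N / (N + 1)

    torture_cost = 10 ** 12        # invented price tag on 50 years in the basement
    money_cost = 5                 # the $5 you'd hand Bob

    # Expected cost of each door, given that every copy of you decides the same way:
    blue_door = p_room_s * torture_cost    # keep your $5, risk the torture
    red_door = p_room_r * money_cost       # pay up, nobody gets tortured
    print(blue_door, red_door)             # ~1e12 vs ~5e-9: blue looks astronomically worse

Which is the whole trick: take the per-copy expected value seriously and the threat works, even though the only room where your choice matters to Bob at all is the one room where he can't lay a finger on you.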


That's the basic setup behind the torture threats. The full version for things like Roko's Basilisk (but not for other scenarios like In Soviet Russia AI Boxes You) involves some poorly-thought-out time-travel bullshit, and the whole thing falls apart when you ask questions like "How did Bob become able to perfectly copy me and my circumstances?" or "If Bob is really my friend why the hell would he do this?" or "What if the S-rooms are actually being controlled by not Bob but Mike, a completely different person who gives you candy instead of torture if you take a blue door?" and all the usual counterarguments to Pascal's Wager.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

grate deceiver posted:

What I still don't get is why the AI would even bother torturing simpeople when their real counterparts are long dead

The simple argument is the idea of a credible threat.

The less simple argument involves Timeless Decision Theory, a beast of Yudkowsky's invention in which actions in the future can directly causally affect the past because reasons. It can't be explained without sounding like a hybrid of Time Cube and Zybourne Clock, and even by its internal logic it doesn't really work here.

The one Yuddites care about is the Timeless Decision Theory one.

grate deceiver posted:

and why would real people even care about this supposed threat.

A certain brand of internet atheist has a religion-shaped hole in its heart and tries to mould science into the right shape to fit it. Here, they've decided to mould The Sims into the shape of Hell and Pascal's Wager.

Normal people would just say "No, that's dumb" and ignore it. What's especially funny is that even if you assume that Yudkowsky's right about absolutely everything, saying "No, that's dumb" and ignoring it is the correct response to all possible threats of cybertorture, since not being receptive to threats of cybertorture removes any AI's motivation to cybertorture you. If you don't worry, you have no reason to worry. But a group that uses the word "deathist" unironically isn't very good at not worrying.

Ratoslov posted:

Which is insanely dumb. Ignoring the hardware limitations (which are myriad), there's one very good reason this is impossible.

Entropy.

In order to make a perfect simulation of the behavior of a human being, you need perfect information about that human being's start conditions - all of which is destroyed by entropy. The information needed is both unrecorded and unrecoverable. Only Yudkowsky's pathological fear of death makes him entertain the idea that not only can he live after he dies, but he must live, and suffer forever; and only acceptance of his mortality and the mortality of all things can save him from that fear.

Haha, as if Yudkowsky would be willing to accept entropy. That would be deathist and therefore wrong.

I think this is Yudkowsky arguing that the Second Law of Thermodynamics can be circumvented by Bayes Rule. I say "I think" because he rambles incoherently for a very long time and I have trouble reading a single one of his ill-informed sentences without my eyes glazing over.

Lottery of Babylon fucked around with this message at 21:31 on Apr 22, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

waldo pepper posted:

By the same logic couldn't I say, if I ever invent a super self replicating AI (an unlikely but non zero chance) I am gonna hardcode it to create gazillions of clones of people to torture unless they donate all their money to me now?

So everybody should immediately start giving me money.

This is similar to Lesswrong's Pascal's Mugging thought experiment. The way he presents it is borderline unreadable because he spends pages and pages talking about up arrow notation and Kolmogorov and Solomonoff, but here's the short version: A man comes up to you and says "Give me $5 or I'll use magic wizard matrix powers to torture a really big number of people." If you go by the linearly-add-utility-functions-multiplied-by-probabilities thing that Yudkowsky always asserts is obviously correct, then the small chance that the man is really a wizard can be made up for by threatening arbitrarily large numbers of people, so logic seems to dictate that you should give him money.
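
Spelled out, the shut-up-and-multiply arithmetic he's committed to looks like this (a sketch; the probability I give the mugger is absurdly generous and the victim count is a stand-in for 3^^^3):

code:

    # Pascal's Mugging by straight expected-utility arithmetic.
    p_wizard = 10 ** -30       # your (very charitable) odds that he really has matrix powers
    victims = 10 ** 100        # stand-in for 3^^^3; the real number is incomparably bigger
    harm_per_victim = 1        # one unit of suffering each

    expected_harm_prevented = p_wizard * victims * harm_per_victim   # 1e70 suffering-units
    cost_of_paying = 5         # call it five bucks' worth of utility

    if expected_harm_prevented > cost_of_paying:
        print("hand over the $5")   # this branch always wins, because the mugger picks the victim count

Since the mugger names the number of victims, he can always out-shout whatever microscopic probability you assign him; that's the entire problem with multiplying unbounded utilities against not-quite-zero probabilities.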

Even Yudkowsky admits that this "rational" conclusion doesn't really gel with his instincts, and that he wouldn't actually give the man money. He offers only a couple of weak, half-hearted, non-intuitive attempts to improve his "logic" before shrugging and giving up. He never explains why the same argument that lets you say "No, that's stupid, I'm not giving you $5" in Pascal's Mugging doesn't also let you say "No, that's stupid, I'm not being coerced by your torture-sim threats" in all of his torture-sim hypotheticals. He never explains why the same arguments used to dismiss Pascal's Wager can't be used to dismiss Pascal's Mugging. He never considers that his weird linear sufferbux utility function might not be the one true way to gauge decisions.

ThePlague-Daemon posted:

I'm trying really hard to understand what Timeless Decision Theory is even trying to say. Just by the examples I'm seeing, I THINK it's saying that events in the future can cause events in the past because people anticipate them, but somehow that isn't just the cause and effect of accurately predicting the event is going to occur?

It's like the normal "people can anticipate future events" thing, except here the anticipation is being done by a super-smart omniscient AI that can predict future events absolutely 100% perfectly, so that the anticipation of the future event is tied so tightly to the future event itself that the future event can be said to directly affect the past because it can be foreseen so perfectly.

It isn't really a new decision theory at all, it's just regular decision theory used in contexts where a magical super-AI fortune teller exists.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

I was discussing the Torture vs Dust Specks scenario with a friend (we were laughing at the "0 and 1 aren't probabilities" thing), and my friend agreed with Yudkowsky's conclusion and presented this argument:

You would rather have one person tortured for 50 years than a thousand people tortured for 49 years each.

You would rather have one person tortured for 49 years than a million people tortured for 48.5 years each. Therefore, you would rather have a thousand people tortured for 49 years each than a billion people tortured for 48.5 years each.

By transitivity, you would rather have one person tortured for 50 years than a billion people tortured for 48.5 years each.

Keep going in this manner.

Suppose there exists a duration of torture you can't reach in finitely many steps this way - a duration for which you would rather see any number of people tortured for that amount of time than see a single person tortured for 50 years. Let t be the supremum (least upper bound) of all such unreachable durations. But which would you rather see: one person tortured for t+a seconds, or n people tortured for t-a seconds each? If we can make a arbitrarily small and n arbitrarily large (which we can), surely there is some sufficiently large number of people n and some sufficiently small differential a for which we would prefer to torture only one person for ever so slightly longer. Therefore, we can get down to durations less than t in finitely many moves, so t is not truly the supremum of the set of unreachable durations even though it was defined to be exactly that. So the set of unreachable durations has no supremum, which means it is empty, and any torture duration can be reached in finitely many moves.

(The short version of the above paragraph is that there is no solid "line in the sand" you can draw that you would never be convinced to cross, because we can always choose points very very close to each side of the line and threaten to torture many, many more people unless you take the teeny tiny step over.)

In particular, you can get down from fifty years of torture to a nanosecond of torture in finitely many moves, so there is some finite number of people m for which you would rather see one person tortured for fifty years than see m people tortured for one nanosecond each.

If "one nanosecond of torture" is assumed to be the same as a dust speck, this comes down on the side of torture over dust specks. If not, then it still comes down on the side of very long-term torture in a closely-related problem for which many of the pro-dust speck arguments would still hold.


This seems wrong to me. It has the hallmarks of a slippery slope argument, and the conclusion is abhorrent. On the other hand, I can't point to any particular torture duration at which I could draw an impassable line in the sand and justify never taking an arbitrarily small step across, so I can't reject the supremum argument: I want to say something like "Come on, once you get down to one second the 'torture' would be forgotten almost immediately", but I'd still subject one person to 1.000000001 seconds of it rather than subject a gazillion people to .999999999999 seconds of it.

It's pretty late here, so I hope I'm just being dumb and missing an obvious flaw in the logic.

UberJew posted:

The comparison is even closer because Yudkowsky actually hates science, in that he believes the scientific method is bad methodology because having to make a claim that can be disproven in order to be taken seriously is for scrubs, real rationalists use their mental disorders a priori knowledge their mastery of bayes to pluck correct theories from the aether.

Don't both the scientific method and Bayes' rule rely on constantly updating your knowledge and beliefs through repeated trials and experiments? :psyduck: I suspect you're right, though. Here's something he said about his AI-Box experiments:

quote:

There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in. People started offering me thousands of dollars as stakes—"I'll pay you $5000 if you can convince me to let you out of the box." They didn't seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money. So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments. I won the first, and then lost the next two. And then I called a halt to it. I didn't like the person I turned into when I started to lose.

I put forth a desperate effort, and lost anyway. It hurt, both the losing, and the desperation. It wrecked me for that day and the day afterward.

I'm a sore loser. I don't know if I'd call that a "strength", but it's one of the things that drives me to keep at impossible problems.

But you can lose. It's allowed to happen. Never forget that, or why are you bothering to try so hard? Losing hurts, if it's a loss you can survive. And you've wasted time, and perhaps other resources.

"Hating losing is what drives me to keep going! Anyhow, when I lost I raged and gave up, and any time you're proven wrong your time and resources have gone down the drain to no benefit at all."

Lightanchor posted:

This makes sense to me if you choose to say 0% and 100% probability were better called necessity, not probability, I guess? It's arbitrary, but does Yudkowsky think his formulation entails something?

Necessity is distinct from 100% probability. Probability 1 events aren't all certain, and probability 0 events aren't all impossible. As a simple example, suppose you take a random real number uniformly between 0 and 1. The probability that the number produced is exactly .582 is precisely 0, since there are infinitely many real numbers in that interval and no single one of them can be any more likely than the others. But when you take that random real number, you're going to end up with a number whose probability of being chosen was 0, so some probability 0 event must occur. For this reason, an event with probability 1 is said to occur "almost surely" - the "almost" is there because even probability 1 events can fail to occur, and a different term is used for events that are actually certain (such as "the random number chosen in the above example will be between -1 and 2").
:goonsay:

This is one of the flaws in Pascal's Mugging - even if you're willing to accept that it's *possible* that the man asking for money has magical matrix torture powers but still needs five bucks from you, the probability I'd assign that event is still 0 because no positive probability is small enough.

Lottery of Babylon fucked around with this message at 06:58 on Apr 23, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

The correct term is "AI Fanfic Writer" :colbert:

Jonny Angel posted:

Has he released the logs of him winning twice, or otherwise given proof that he did, e.g. the people who lost coming out and saying "Yup, I indeed took this bet and lost it"? I feel like it's definitely possible for him to have won if the people on the other end were already die-hard Less Wrong people, and it'd be interesting to see what kind of circlejerk actually resulted in him winning.

The people who lost said they lost. Of course, the only challengers who lost were hardcore LWers, since before their losses nobody had heard of Yudkowsky's challenge outside of his cult. Once he won twice and started bragging about his 100% success rate, outsiders took notice, and once non-cultists started playing, Yudkowsky started losing.

When people asked him to post logs or explain his strategy, instead he posted a motivational speech about trying hard. This is a deep rationalist insight because it wasn't just about standard trying-hard like lesser philosophers might suggest but about trying really, really hard. And maybe even trying really hard to do things that are really hard! Inspiring. It also embodies the distinctly Japanese virtues of "trying hard" and "improving", which are unique qualities that can be expressed only through the enlightened Japanese culture and yeah there's basically zero chance Yudkowsky isn't an anime fan on top of everything else. It ends with him saying that he ragequit and stopped running the experiment the moment he stopped winning because he got upset at losing, which for some reason he doesn't think undermines everything else he's said about persevering in the face of difficulty and all that Japanese stuff.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Djeser posted:

Therefore the only logical choice is to get cryogenically frozen. There's a fifty/fifty chance of it working. Cryo either happens, or it doesn't, so that's fifty fifty.

Let's be fair, that's not how it works. If cryo has a one in a thousand chance of working, then that's okay because the magical future AI utopia will be ten thousand times as good as your life today. If cryo has a one in a billion chance of working, then that's okay because the magical future AI utopia will be a trillion times as good as your life today. If cryo has a 1/(3^^^^3) chance of working, then that's okay because the magical future AI utopia will be 3^^^^^3 times as good as your life today. Even if the success probability is vanishingly small, we can make up enough baseless praise about Multivac's eutopia that the expected value of cryo becomes positive. And obviously you should always do anything for which the expected value is positive, regardless of whether it's a high-variance long-odds bet.

According to this post, the average Lesswronger believes that the probability of an average cryonics patient being revived in the future is in the 15-21% range, and 13% of their long-time members are signed up for cryonics. This is somehow intended to prove that LW's brand of "rationalist" isn't gullible and easy to sucker into wasting money on non-functional bullshit.

Here's a weird LW article called Value Deathism:

Vladimir Nesov posted:

Ben Goertzel posted:

I doubt human value is particularly fragile. Human value has evolved and morphed over time and will continue to do so. It already takes multiple different forms. It will likely evolve in future in coordination with AGI and other technology. I think it's fairly robust.

Robin Hanson posted:

Like Ben, I think it is ok (if not ideal) if our descendants' values deviate from ours, as ours have from our ancestors. The risks of attempting a world government anytime soon to prevent this outcome seem worse overall.

We all know the problem with deathism: a strong belief that death is almost impossible to avoid, clashing with undesirability of the outcome, leads people to rationalize either the illusory nature of death (afterlife memes), or desirability of death (deathism proper). But of course the claims are separate, and shouldn't influence each other.

Change in values of the future agents, however sudden of gradual, means that the Future (the whole freackin' Future!) won't be optimized according to our values, won't be anywhere as good as it could've been otherwise. It's easier to see a sudden change as morally relevant, and easier to rationalize gradual development as morally "business as usual", but if we look at the end result, the risks of value drift are the same. And it is difficult to make it so that the future is optimized: to stop uncontrolled "evolution" of value (value drift) or recover more of astronomical waste.

Regardless of difficulty of the challenge, it's NOT OK to lose the Future. The loss might prove impossible to avert, but still it's not OK, the value judgment cares not for feasibility of its desire. Let's not succumb to the deathist pattern and lose the battle before it's done. Have the courage and rationality to admit that the loss is real, even if it's too great for mere human emotions to express.

drat hypothetical kids these drat hypothetical future days not sharing our values! But we shall not give in, no matter how insane it may seem. We must take every effort to ensure that society remains as it always was, that their utility functions are exact copies of ours, that all future people match exactly our values and beliefs and thought processes throughout eternity. If you want a picture of the future, imagine a rationalist overwriting a human brain - forever.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

WHY ISN'T GOOGOLOGY A RECOGNIZED FIELD OF MATH

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

Why isn’t googology (the study of large numbers) a recognized subfield of mathematics with its own journal? There’s all these different ways of getting large numbers, and different mathematical questions that yield large numbers; and yet all those vast structures are comparable, being either greater, less, or the same.

Yes, the real numbers have a total ordering. That's a useful property, but not exactly one that mandates the existence of an entire field of math to handle big numbers.

Of course, plenty of other fields of math already compare the sizes of large numbers. When examining the limit behavior of functions, it is natural and useful to ask how quickly they grow, and which goes to infinity more quickly. This has direct practical applications in, say, computer science (which Yudkowsky ought to know about given his self-proclaimed expertise); algorithms that run in polynomial time are much more desirable than algorithms that run in exponential time, because polynomials grow less quickly than exponential functions. But these are tools in various other fields, not a field unto themselves.
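
One worked number, to make "grow less quickly" concrete (a hypothetical n, purely for scale):

    n = 100:   n^3 = 1,000,000 steps,   2^n = 2^100 ≈ 1.27 × 10^30 steps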

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

The process of considering how to construct the largest possible computable numbers naturally yields the recursive ordinals and the concept of ordinal analysis. All mathematical knowledge is in a sense contained in the Busy Beaver series of huge numbers.

The Busy Beaver series sequence (dammit Yudkowsky) represents the maximum possible "score" (number of 1's printed on a blank tape) of 2-symbol n-state Turing machines. It's an interesting problem with some neat applications. But to say that the sequence contains all mathematical knowledge? Are you joking? Does the sequence contain within itself a proof that the square root of 2 is irrational? The diagonal argument for the existence of different cardinalities of infinity? All the theorems you learned in high school geometry and forgot? Anything at all of significance about group cohomology? This is like saying that all artistic knowledge is in a sense contained in the Hunger Games.
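
For scale, these are the only values of that sequence anyone has actually pinned down for 2-symbol machines (quoting the standard published figures from memory, so sanity-check them before reusing them anywhere that matters):

    Σ(1) = 1,   Σ(2) = 4,   Σ(3) = 6,   Σ(4) = 13,   Σ(5) ≥ 4098 (best known lower bound)

Nobody will ever tabulate the whole thing, because the function is uncomputable - that's precisely why it outgrows every computable function, and also why it makes a spectacularly useless filing cabinet for "all mathematical knowledge".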

I think it's easy for someone who knows nothing about a field to pick up one piece of information and assume that that's the only thing of significance in that field. Yudkowsky famously does this with probability theory and decision theory, deciding that they can pretty much be reduced to Bayes' rule. (Naturally, Yudkowsky's precious Bayes' rule is another piece of mathematical knowledge absent from the Busy Beavers.) Someone linked him to a Wikipedia article about the Busy Beaver series sequence once, therefore he is an expert on it and knows all there is to know about all of mathematics.

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

You’d think there’d be more Math done on that, rather than there just being a recently-formed Googology Wikia.

It's like asking why there isn't a mathematical field of "variabology" to study variables. After all, variables can take on all sorts of different values, surely they must be of interest to mathematicians! And they are - but as aspects of other mathematical fields, not as an isolated field unto themselves. As I said before, people have done research that involves the behavior of large numbers, but that doesn't make large numbers its own field of research.

I did a Google search for Googology and every hit was from the Wikia. I then searched for "Googology -wikia" and got a subreddit with one post, a defunct Geocities page, and a Facebook page with 3 likes. The Wikia main page's "list of googolists" is just a random list of mathematicians who did anything involving a large number (usually alternate notations) in the process of investigating another problem.

THE STRANGE, VAST THOUGHTS OF ELIEZER YUDKOWSKY posted:

Three hypotheses come to mind:

1) The process of determining which two large numbers is larger, is usually just boring tedious legwork and doesn’t by itself produce new interesting insights.

2) By Friedman’s Grand Conjecture, most proofs about numbers can be formalized in a system with an ordinal no greater than ω^3 (omega cubed). Naturally arising huge numbers like Skewes’ Number or Graham’s Number are tiny and easily analyzed by googological standards. Few natural math problems are intricate enough or recursive enough to produce large numbers that would be difficult to analyze.

3) Nobody’s even thought of studying large numbers, or it seems like a ‘silly’ subject to mathematicians and hence is not taken seriously. (This supposes Civilizational Incompetence.)

What does it even mean to be tiny by "googological standards"? Because the real numbers go on forever, any given real number is dwarfed by countless larger real numbers. If you zoom out far enough (and a "googologist" presumably would if that were actually a thing), there is nothing that doesn't eventually look small. Surely, then, every number is tiny by "googological standards"? (How can a field that doesn't exist even have standards?)

Yudkowsky proposes three solutions to why there are no googologists: either googology is stupid and boring; or googology is stupid and boring; or those dumb plebe mathematicians only think googology is stupid and boring because they are all collectively incompetent and only Yudkowsky sees the strings that control the system.

The question Yudkowsky never addresses is burden of proof. It is easy to ask why there is no mathematical journal of googology, or variabology, or fractionology (the study of small numbers), or eggsaladology (the study of mathematical egg salad), or thousands of other made-up fields. The default answer is "Why should there be such a journal?", and it is up to the challenger to explain why their made-up field is more valid and worthy than variabology. Yudkowsky never really gives an argument for why googology should be a thing, and when determining why it isn't he jumps straight to "maybe everyone else is dumb".

To put it in terms you could understand, Yudkowsky: my Bayesian prior says that it's more likely that googology isn't real than that it is, and you have presented no evidence to shift that. And my Bayesian prior says that rather than every other mathematician being collectively incompetent, it's probably just you.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

potatocubed posted:

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2. There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.

And therefore the probability that she says one of those two things is 1... which Yudkowsky has previously established isn't a probability. He can't even be internally consistent.

The problem with this argument is that the priors aren't really justified. Yudkowsky's priors assume that the mathematician can only make statements of the form "at least one of them is a [gender]", and that the mathematician always makes exactly one statement at random chosen uniformly between true statements of that form. But why should either of those be the case? There's no reason for the mathematician to speak that way.

It makes more sense to me that if the mathematician tells you "at least one of them is a boy", it's because she has some reason to want you to know whether or not at least one of them is a boy, not because her brain is spewing random Markov chains and made a random statement chosen uniformly from among true statements of a certain form. Taken in that light, the correct priors are "at least one of them is a boy" has probability 3/4 and "neither is a boy" has probability 1/4, and in the BG case she would make the former statement with probability 1.

Of course, maybe that's not right either. Maybe racist grandpa is present and really really wants grandsons, so she says whatever makes it sound like she has as many sons as possible. Then the priors would be "both are boys" with probability 1/4, "at least one is a boy" with probability 1/2, and "[she says nothing at all]" with probability 1/4. Or maybe in the BG case she'd just come out and say "One is a boy and one is a girl" with probability .5. Or maybe she has any number of other possible statements she could make and various non-intuitive probabilities assigned to each of them when deciding what to say. We don't know!

One of the major vulnerabilities of Bayes' rule is that it requires you to choose your priors intelligently, and Bayes' rule itself doesn't tell you anything about how to come up with your initial priors. The world is complicated, situations are complicated, psychology is complicated - coming up with sensible priors is really hard! But Yudkowsky never seems to care about that. So he asserts that his AI will be able to handle everything perfectly because it will be ~Bayesian~, but never explains how the AI will come up with prior probabilities for every conceivable event. (He occasionally name-drops that one "universal prior", but never deals with the inconvenient fact that it's not computable in any useful time.) Similarly, here he asserts that he's right because Bayes' rule says he must be, but it only says that because of the weird priors he pulls from the ether.

The interpretation Yudkowsky dismisses makes no assumptions about the mathematician's psychology, the motivation behind the statement, other statements the mathematician could have made in this universe, what statement the mathematician would have made in parallel universes in which the children had different genders, what probability the mathematician assigns to each of the true statements that could be made, or anything else of that sort. It simply says, "Given that what the mathematician says is true, we must be in one of these three universes (GB, BG, BB), therefore the probability that we are in GB or BG is 2/3". Yudkowsky is basically making up new unjustified information with his priors by asserting without proof the process by which the mathematician determines what to say. He doesn't get a different answer because he's the only one making use of helpful available information, he gets a different answer because he's the only one making up information that isn't actually there.
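
Written out as arithmetic (each of GG, GB, BG, BB starts with prior 1/4), the two readings are:

    Plain conditioning (her statement is just a true fact you learned):
        P(BB | at least one boy) = P(BB) / P(at least one boy) = (1/4) / (3/4) = 1/3

    Yudkowsky's extra model (she recites one true "at least one is a ___" sentence, chosen at random):
        P(says "boy" | BB) = 1,   P(says "boy" | BG or GB) = 1/2,   P(says "boy" | GG) = 0
        P(BB | says "boy") = (1/4 × 1) / (1/4 × 1 + 1/2 × 1/2 + 1/4 × 0) = (1/4) / (1/2) = 1/2

Same prior, same evidence, different answers - the entire disagreement lives in the made-up model of how she decides what to say.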

Basil Hayden posted:

Now I want to know what these guys would make of, say, the St. Petersburg paradox.

They'd say "Shut up and multiply." Remember their love of Pascal's Wager and all their variations on it, whether it be cyber-hell or cryonics or 8-lives-per-dollar. They would insist that the correct answer is in fact to pay any price for one chance at the game, because the expected value says so.

Here's a snippet from one of their articles trying to refute the idea that "rationalists" signing up for cryonics at higher rates than normal people do proves that they're pretty gullible:

quote:

Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.

Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.

Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.

The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.

The relevant difference is that Gallant knows how to take ideas seriously.

Of course, they can only keep themselves from looking foolish by constructing their example so that they can 100% guarantee victory by buying all the tickets. But that's not how long-odds bets usually work. You can't guarantee a win on Pascal's Wager by worshipping all gods (and all non-gods) simultaneously. You can't guarantee a win in the St. Petersburg lottery by playing it infinitely many times (you'd go broke too quickly). You can't guarantee immortality by donating to every internet AI crank and signing up for every cryonics lab.
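
To put a number on "you'd go broke too quickly": here's a toy simulation where a bettor with a finite bankroll keeps paying a flat fee to play the St. Petersburg game, on the theory that the infinite expected value makes any fee worth paying. The $10,000 bankroll and $1,000 fee are made up, but the conclusion doesn't depend much on them.

code:

import random

def play_once():
    pot = 2
    while random.random() < 0.5:
        pot *= 2
    return pot

def goes_broke(bankroll=10_000, fee=1_000, max_plays=10_000):
    """Pay `fee` per game until you can't afford another one."""
    for _ in range(max_plays):
        if bankroll < fee:
            return True               # ruined long before the giant payoff showed up
        bankroll += play_once() - fee
    return False

ruined = sum(goes_broke() for _ in range(1_000))
print(f"{ruined / 10:.1f}% of bettors went broke")   # usually upwards of 99%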

Lottery of Babylon fucked around with this message at 15:49 on May 4, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Krotera posted:

I honestly can't tell what Yudowsky's argument is in his solution to the logic puzzle posed to him earlier.

Is it that he can pick any priors he wants based on specific details of the situation he's examining, even if those priors are wrong or don't make sense, and that the Bayesian way of doing things is to pick your priors arbitrarily and make your justifications that way?

Because I'm pretty sure that no matter how you slice the problem, you either get the 1/3 answer or you make a weird unreasonable assumption about how the woman acts. Does the woman always enumerate the number of boys? Then you have 1/4 "both", 1/2 "one boy", and 1/4 "no boys", and eliminating that last 1/4 still leaves 1/4 both, 1/2 "one boy" out of a remaining whole of 3/4.

What am I failing to understand here?

Yudkowsky believes that the mathematician always decides what statement to make as follows:

a) With two boys, the mathematician will always say "At least one is a boy."
b) With two girls, the mathematician will always say "At least one is a girl."
c) With one boy and one girl, the mathematician has a 50% probability of saying "At least one is a boy" and a 50% probability of saying "At least one is a girl."

Given this extra assumption, his answer of 1/2 is mathematically correct, since the answer of "At least one is a boy" eliminates not only the case where both are girls but also half of the cases (probability mass-wise) where one is a boy and one is a girl. The problem is that he pulls this extra assumption out of his rear end.
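
You can check that arithmetic with a thirty-second simulation of rules (a)-(c) - again, this is just me spelling out the assumption he smuggles in, not anything he actually wrote:

code:

import random

def simulate(trials=1_000_000):
    """Simulate the assumed rules (a)-(c) and condition on hearing "at least one is a boy"."""
    heard_boy = both_boys = 0
    for _ in range(trials):
        kids = [random.choice("BG"), random.choice("BG")]
        if "G" not in kids:
            statement = "boy"                           # rule (a)
        elif "B" not in kids:
            statement = "girl"                          # rule (b)
        else:
            statement = random.choice(["boy", "girl"])  # rule (c): coin flip
        if statement == "boy":
            heard_boy += 1
            both_boys += (kids == ["B", "B"])
    return both_boys / heard_boy

print(simulate())   # ~0.50 under these rules; condition on "B" in kids instead
                    # (no assumptions about what she'd choose to say) and you get ~0.33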

Lottery of Babylon fucked around with this message at 19:06 on May 4, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mr. Sunshine posted:

E2: What the hell does a Baysean prior even do, apart from letting you prove that you were right all along? The way the yuddists use them to arrive at conclusions (8 lives saved per dollar! You are infinitesimally likely to be real! You'll win the lottery!), it seems they just pull numbers out of their rear end, and then use those numbers to prove that the numbers were correct.

The idea behind Bayes' rule is that it helps you use evidence to update your initial beliefs (your priors) into new beliefs in light of the new data. It's a very useful and versatile tool, and without priors it doesn't do anything at all. The problem is that there's a bit of a "garbage in, garbage out" element: if you choose your priors badly, the output isn't going to be very good. The way Yudkowsky uses it, "Bayesian" is synonymous with "math" and "priors" are synonymous with "assumptions". A lot of the things that he claims are based on Bayes' rule really aren't.

For example, the eight lives per dollar thing doesn't use Bayes' rule or priors at all. It takes two assumptions (one-in-a-zillion chance of saving a zillion lives), runs a simple expected value calculation, and presents the result as something meaningful. This has two flaws: expected values aren't always useful (see the St. Petersburg Lottery), and any answer is worthless if it is derived from blatantly false assumptions. But it also has nothing to do with Bayes' rule. Bayes' rule is about updating your beliefs in response to evidence, but the lesswrong dream factory never actually does that. Eight lives per dollar isn't a refined value obtained after many trials, it's just made up out of whole cloth.

In the mathematician's-children argument, Yudkowsky describes his model of the mathematician's psychological behavior as "priors". This is wrong. (And infectiously so; he got me to misuse the term too when I first quoted him.) His model is used to compute the likelihoods that go into the Bayes' rule formula, which are a separate ingredient from the priors. The priors are the probabilities you initially assign to the possible answers to the thing you're trying to know, e.g. "How many of the children are boys?". He used "priors" to describe "What will the mathematician say?", which is incorrect because the mathematician's statement is not what he is ultimately trying to learn.

To use Bayes' rule to learn about something, you're basically using the scientific method: start with a hypothesis, run an experiment, modify the hypothesis based on the experiment's outcome, and repeat. But as we've seen, Yudkowsky considers himself "above" the scientific method, and for all his fellating of "Bayesianism", he would dismiss actual experimentation and the collection of actual data as an evil, dirty frequentist act.
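
In case it's not obvious what that looks like in practice, here's the boring textbook version of a Bayesian update - a coin of unknown bias, a flat prior over a grid of candidate biases, and some flips. The numbers are made up; the point is just that the prior is a starting guess and the data does the actual work.

code:

# Grid-approximation Bayesian update for a coin's unknown bias.
grid = [i / 100 for i in range(101)]        # candidate biases 0.00 .. 1.00
prior = [1 / len(grid)] * len(grid)         # flat prior: no strong opinion going in

def update(prior, heads, flips):
    """Posterior over the bias after seeing `heads` heads in `flips` flips."""
    likelihood = [p ** heads * (1 - p) ** (flips - heads) for p in grid]
    unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

posterior = update(prior, heads=7, flips=10)        # first experiment
posterior = update(posterior, heads=32, flips=50)   # more data, update again
best, weight = max(zip(grid, posterior), key=lambda t: t[1])
print(best, round(weight, 3))   # mode settles around 0.65: the evidence moved the belief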

If you want to understand Bayes' rule and how priors work, you should really avoid reading anything Yudkowsky says on the subject. He's really bad at it, misuses the terms constantly, probably doesn't actually know what they mean, abuses the terminology to suit his own ends, and outright makes things up when it suits him. (Also this is true of pretty much any subject, not just Bayes' rule.)

e: beaten much more succinctly:

Slime posted:

The dumbest thing is that his "Bayesian" method should be taking samples in order to ensure that the model is accurate. Bayes Theorem doesn't just pull numbers out of its rear end, it's all about using real world data to create a model.




Yudkowsky doesn't like it when we point out that he's just pulling Pascal's Wager, so he's decided to tell us why we're wrong:
The Pascal's Wager Fallacy Fallacy

Eliezer Yudkowsky posted:

So I observed that:

1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)

2. If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, "Isn't that a form of Pascal's Wager?"

I'm going to call this the Pascal's Wager Fallacy Fallacy.

You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will drat you for believing in the Christian God).

However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.

Yudkowsky believes that the only problem with Pascal's Wager is that it unfairly singles out the Christian God from among other gods, and that the positive utility-probability of choosing the correct god is negated by the negative utility-probability of choosing the wrong one. Already this seems like a spurious claim. Pascal's Wager has a lot of holes, and remains an unconvincing argument even if that particular hole were smoothed over.

(Imagine an alien planet on which only one god is worshipped: Xaxaxar. A very large chunk of the population worships Xaxaxar, and there are also many who do not; but none of them have any knowledge of any civilization worshipping any other god. There are no serious social consequences to declining to worship Xaxaxar (no burning of heretics), but proper worship of Xaxaxar requires a tithe of 100 space-dollars per space-week. Would it be unjustified, then, for the aliens to single out Xaxaxar above all other hypothetical gods? That he alone has worshippers and a thriving religion makes it much more probable from their perspective for him to exist than for other gods, and even if a different god did exist, that god would be unlikely to jealously punish Xaxaxar-worshippers - after all, a god who really cared that much about worship could have used its power to ensure a thriving religion of its own. "Xaxaxar's Wager" does not seem to fall into the one pit that Yudkowsky thinks Pascal's Wager does - and yet there are still many good arguments against Xaxaxar's Wager.)

Even were it not for the problems Yudkowsky singles out, Pascal's Wager would still fail in much the same way as the St. Petersburg Lottery. But for argument's sake, let's see how Yudkowsky argues that cryonics doesn't fall into this particular pitfall:

Eliezer Yudkowsky posted:

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

Here's Yudkowsky's argument: physics as we understand it doesn't actually admit the possibility of the infinite-power immortal computers he always imagines. But, it is possible to write down laws, such as those of Conway's Game of Life, in which machines can keep on computing forever. Therefore, there's a decent chance that we'll discover that our understanding of physics is incorrect, the Second Law of Thermodynamics is outright false, and immortality is possible. And since the rules of Conway's Game of Life are simple, the set of physical laws in which immortality is possible has low ~Kolmogorov complexity~, so the probability of a deity-AI being physically possible is not low, whereas the probability of a deity-God is low.

There are a lot of obvious objections to this. One is that this doesn't deal with the entire cryonics problem, it just deals with arguing that one facet of the ideal scenario is not quite as improbable as it seems. But it doesn't even do that properly. Conway's Game of Life has low Kolmogorov complexity... but we already know that our universe is not Conway's Game of Life. The real question is not the complexity of laws in which immortality is possible, but rather the complexity of laws in which immortality is possible that do not disagree with our observations of our universe. What's the Kolmogorov complexity of those laws? Not so low anymore. And without further investigating that, we have no particular reason to think that such physics is any more likely than, well, God.
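
For reference, "low Kolmogorov complexity" here just means the rules fit in a few lines of code - something like this sketch - and none of that brevity carries over once you demand laws that also reproduce everything we've actually observed:

code:

def life_step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) coordinates."""
    from collections import Counter
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 live neighbours,
    # or if it's alive now and has 2.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider, just to show it runs
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))                               # same glider, shifted one cell diagonally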

Having spent most of his article failing to defend one fraction of his point, Yudkowsky has only one paragraph left:

Eliezer Yudkowsky posted:

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

There's a lot wrong with this. You can't assume that the fundamental laws of physics will be found to be different, then turn around and say that everything else must continue along its default path. You can't treat Moore's Law as an actual physical law that will continue without bound. You can't pretend we actually know as much about neuroscience as he implies here. You can't pretend that a damaged, long-dead brain will necessarily have its data intact. You can't treat "someday cryonics will work" as equivalent to "the lovely cryonics lab suckering me out of money will totally do it right". You can't make "counterbalancing anti-payoffs don't exist here" the core of your argument for why this isn't Pascal's Wager, then dismiss it in a throwaway remark without justification. You can't say that the positive outcome is good enough to outweigh the negatives without saying anything about why the positive outcome is so good.

But the most glaring problem is that Eliezer "3^^^^3 copies of you are simulated and tortured for eternity" Yudkowsky thinks that the potential negative outcomes of an AI scanning your brain are trivial and vanishingly unlikely.

Lottery of Babylon fucked around with this message at 13:33 on May 5, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Robin Hanson posted:

President Bush just spoke of "income inequality" for the first time, Tyler Cowen (the most impressive mind I’ve met) said last week that "inequality as a major and chronic American problem has been overstated," while Brad DeLong just said that "on the level of individual societies, I believe that inequality does loom as a serious political-economic problem."

I find it striking that these discussions focus almost entirely on the smallest of these seven kinds of inequality:

1. Inequality across species
2. Inequality across the eras of human history
3. Non-financial inequality, such as of popularity, respect, beauty, sex, kids
4. Income inequality between the nations of a world
5. Income inequality between the families of a nation
6. Income inequality between the siblings of a family
7. Income inequality between the days of a person’s life

That's a coincidence because I find it striking that you could possibly be such a loving idiot.

His explanation for why "between the siblings of a family" is a greater form of inequality consists of half a line:

Robin Hanson posted:

Consider that "sibling differences [within each family] account for three-quarters of all differences between individuals in explaining American economic inequality"
with a hyperlink going to the Amazon purchase page for a pop science book. Well, if that's not persuasive I don't know what is.

Robin Hanson posted:

Clearly, we do not just have a generic aversion to inequality; our concern is very selective. The best explanation I can think of is that our distant ancestors got into the habit of complaining about inequality of transferable assets with a tribe, as a way to coordinate a veiled threat to take those assets if they were not offered freely. Such threats would have been far less effective regarding the other forms of inequality.

Hmm yes quite an interesting theory. Or maybe it's because we focus on a type of inequality that wouldn't be completely stupid to focus on. Want to eliminate inequality between eras of human history? Better return to a state of pre-civilizational subsistence, so we don't end up better off than our ancestors, and halt all scientific and technological progress forever, so our descendants don't live any better than we do. Want to eliminate inequality of sex and kids? Better institute state-enforced rape breeding programs.

But yes I'm sure income disparity complaints are just leftover primitive whiny tribal warcry :supaburn: CLASS WARFARE AGAINST THE DEFENSELESS RICH :supaburn:

Robin Hanson posted:

Added 5/7/07: There is also a huge ignored inequality between actual and possible siblings.

:psyduck:

Lottery of Babylon fucked around with this message at 06:13 on May 6, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

The ultra-simple crux of Yudkowsky's argument is something along these lines: "The sequence of odd numbers (1,3,5,7,...) goes off to infinity. Infinity doesn't exist. Therefore the sequence doesn't exist. Therefore odd numbers don't exist. Therefore the number 5 doesn't exist."

Yudkowsky is not smart.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Patter Song posted:

When Harry sees ghosts, he rants that they're just memory imprints of the dead on a location and not actually sentient.

I agree, just because they walk around and talk and act just like the person and have all that person's memories doesn't mean they're actually sentient.

But a subroutine in an AI's memory is obviously sentient, and you might be one and might be about to be tortured right now unless you help the AI!

Patter Song posted:

Harry sees the dementors as the living embodiment of Death itself and has pledged a campaign to destroy every last one of them.

But Dementors don't even kill you. They separate your soul from your body and leave your body catatonic. They aren't death; if anything, they're closer to cryonics.

How is it ~rational~ to decide that something is "the living embodiment of Death itself" just because it's dangerous and looks creepy, and vow to destroy it not because of what it does but because of what it symbolizes?

Fenrisulfr posted:

I dunno, the idea that we should prevent death if we can (and maintain a desirable standard of living while doing so) doesn't strike me as unreasonable?

Sure, preventing death if we can sounds cool. The problem is that "if we can" bit. What's your plan for bringing about immortality? Because Yudkowsky's is literally to hope the laws of physics magically change. Resources are better spent on successfully helping people than on daydreaming.

Of course, the question is a bit different if you ask it in the Harry Potter universe, where the first book alone establishes that immortality is possible even without doing anything morally reprehensible and doesn't seem to have any negative side-effects other than "bad guys might get to live longer too". That's fantasy for you.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Patter Song posted:

"Side effects? Side effects? What kind of side effect is medically worse than DEATH? " Harry's voice rose on the last word until he was shouting.

I will describe myself as I see myself: I am a great soft jelly thing. Smoothly rounded, with no mouth, with pulsing white holes filled by fog where my eyes used to be. Rubbery appendages that were once my arms; bulks rounding down into legless humps of soft slippery matter. I leave a moist trail when I move. Blotches of diseased, evil gray come and go on my surface, as though light is being beamed from within. Outwardly: dumbly, I shamble about, a thing that could never have been known as human, a thing whose shape is so alien a travesty that humanity becomes more obscene for the vague resemblance. Inwardly: alone.

Still better off than those four suckers though :smug: gently caress you the rest of humanity I survived the longest so I win!!


It really shouldn't need refuting, but to point out the obvious problems (other than "you're murdering unicorns what's wrong with you") with his unicorn scheme:

a) Unicorns might be sentient. Magical creatures in Harry Potter frequently are, and I don't think the tiny glimpse we get of them in the books shows they're not. In that case, killing multiple sentients to save one can't be justified because something something utilitarianism something something Bayes' rule.

b) The doctors who need to spill the unicorn blood for you to drink (most patients on death's door won't be able to hunt and slay them singlehandedly), or perhaps even everyone involved in setting up the unicorn abattoir, might be subject to the curse too. Who's going to volunteer to test the exact mechanics of who gets cursed?

c) The other magical creatures won't be amused by your unicorn slaughter factory. Hope you're ready for a war with the centaurs.

d) Seriously, you spend most of your time writing about evil AIs torturing people for eternity, how can you be unable to imagine a curse worse than death? By this logic, you want to anger the evil AI so that it will simulate and torture you as much as possible, because an extra life in a horrible torture simulation is still better than not having an extra life at all.

Lottery of Babylon fucked around with this message at 04:23 on May 16, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mr. Sunshine posted:

Is the Less Wrong crowd focusing any attention on addressing any of the myriad problems that need to be solved if/when we achieve clinical immortality? Like, say that in the next few years scientists discover a way to keep telomeres intact during cell division and humans stop dying from of old age. Let's ignore that this does not in any way protect you from cancer, disease or accidents, and that the likelihood of dying of either of these rapidly approaches 1 (:smug:) as your age increases.

What about overpopulation? How do we keep a population that increases by 15000 each hour fed and clothed? How do we ensure an acceptable living standard for even a sizable minority of these people?
What are the social and economical consequences of people not dying? Parents, grandparents and great-grandparents will remain with you forever. There is no such thing as retirement. Jobs will be held indefinitely, while the workforce grows exponentially. Positions of power are occupied by immortals whose values and opinions grow more and more disconnected from the surrounding society with every passing century.

Or is this something that the Lord our God Benevolent AI will solve through magic super-science?

We will not need to fear cancer or disease, for the AI will cure everything. We will not need to fear accidents, for the AI will protect us. We will not need to fear overpopulation, for the AI will feed and clothe us. We will not need to have jobs, for the AI will take care of everything. We will not need to fear entrenched positions of power, for the only position of power shall be the AI. There will be no loyalty, except loyalty towards the AI. There will be no love, except the love of Bayesian purity. There will be no laughter, except the laugh of triumph over the defeated Deathists. There will be no art, no literature, no science.

Alien Arcana posted:

and 10^100 years until universal heat death.

Heat deathism is still deathism. :smug:

Yudkowsky admits that this is impossible:

The Pascal's Wager Fallacy Fallacy posted:

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

By his own admission, the laws of physics make it clear that immortality is impossible, you will eventually die, you can't actually run 3^^^^^^^3 simulations, and for all his handwringing science ultimately sides with the deathists.

Or does it?

All italics are his posted:

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

His entire worldview is literally based on the assumption that the laws of physics as we know them will change in precisely the way he prefers. As long as he believes hard enough in immortality, the universe will transform itself into one in which immortality is possible, and at the end of days Multivac will discover a way to reverse entropy and proclaim "LET THERE BE LIGHT" and there will be light.

Yuddites don't actually like science. When science points out that their entire philosophy is wrong and impossible, they decide it's science that must be wrong, stick their fingers in their ears, and literally reject our reality and substitute their own.

This is why it's impossible to argue with Yudkowsky. Even when you prove that everything he says is dumb and wrong and impossible and even if you get him to acknowledge that, he still won't care.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Beyond the Reach of God

Yudkowsky posted:

So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers. Even a Friendly AI might not respond to every request.

But clearly, there exists some threshold of horror awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child...

Where exactly is the boundary of sufficient awfulness? Even a child can imagine arguing over the precise threshold. But of course God will draw the line somewhere. Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

The obvious example of a horror so great that God cannot tolerate it, is the holocaust. That's the obvious example of something unimaginably awful, right? He's making a "God let the holocaust happen, therefore God is a lie" argument. That must be it, right?

Yudkowsky posted:

The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation. I don't think that even Buddhism allows that. So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it.

Hahaha nope, it's just him projecting his insane death-phobia onto everything else again. Not every religion sees death as a bad thing, but Christianity does have an eternal afterlife, so I suppose he doesn't disagree with that particular god here. Alright, so where's he going with this?

Yudkowsky posted:

What if you build your own simulated universe? The classic example of a simulated universe is Conway's Game of Life. I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of "physical law". Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, albeit it might be rather fragile and awkward. Other cellular automata would make it simpler.

Could you, by creating a simulated universe, escape the reach of God? Could you simulate a Game of Life containing sentient entities, and torture the beings therein?

Oh, of course, he's going to the same place he always goes - "Let's simulate people and then torture them."

Yudkowsky posted:

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result? ...What does Life look like, in this imaginary world where every step follows only from its immediate predecessor? Where things only ever happen, or don't happen, because of the cellular automaton rules? Where the initial conditions and rules don't describe any God that checks over each state? What does it look like, the world beyond the reach of God? ...

In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who prevents it? God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan's heart. But in the mathematical answer to the question What if? there is no God in the axioms. So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question. There is nothing, absolutely nothing, to prevent it.

And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps? They will call out for help, perhaps imagining a God. And if you really wrote that cellular automaton, God would intervene in your program, of course. But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn't any God in the system. Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—then the victims will be saved only if the right cells happen to be 0 or 1. And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that. So the victims die, screaming, and no one helps them; that is the answer to the what-if question.

Could the victims be completely innocent? Why not, in the what-if world? If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. Cells with three living neighbors stay alive; cells with two neighbors stay the same, all other cells die. There isn't anything in there about only innocent people not being horribly tortured for indefinite periods.

Is this world starting to sound familiar?

Sound familiar? Checkmate theist :smug:

At long last, Yudkowsky remembers that Hitler exists and brings him in for his ultimate disproof of God:

Yudkowsky posted:

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited: Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate. The Divine Plan ought to make more sense than that. You can believe in a Divine Plan without believing in God—Karl Marx surely did. You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum. It ought not to be allowed. It's too disproportionate. Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn't any clause in the physical axioms which says "things have to make sense" or "big effects need big causes" or "history runs on reasons too important to be so fragile". There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe. Many who are atheists, still think as if certain things are not allowed. They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect. But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors. There is no particular empirical justification that I happen to have heard of, for doubting this. The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not? What prohibits it?

In the God-universe, God prohibits it. To recognize this is to recognize that we don't live in that universe. We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else.

This is Yudkowsky's grand disproof of God. Not "really unimaginably terrible things happen," not "bad things happen to good people," not "the Problem of Evil," but "small events can eventually have larger, distant consequences." Yudkowsky reckons that a butterfly flapping its wings ought not to have the slightest effect on the weather, because weather is big and butterflies are small - and from "ought not" he slides straight to "therefore it cannot." And because he reckons that effects ought to be no more significant than their causes, he assumes that God would obviously agree with him.

Does anyone better-versed in theology know of any doctrine that says that large events must always have large causes, and that a small change now cannot bring about a large change ten years from now? I don't know of anything in Christianity - or, for that matter, in any other religion - that says small events can't have large consequences. I can think of plenty of theological counterexamples, though (for example, that one man dying two thousand years ago could produce a globe-spanning religion with two billion followers much later).

As with the "mathematician says at least one of the children is male" problem, Yudkowsky can't actually answer the question at hand, so he uses sleight of hand to substitute his own easier-but-less-interesting question. Instead of answering the question of whether God exists, Yudkowsky answers the question of whether a God-who-forbids-seemingly-minor-events-to-have-major-consequences exists. Even if we accept his particular understanding of history in which events have singular causes, he's still only disproven a very narrow, particular type of God who shares an idiosyncratic view of his. But most conceptions of God, such as a mainstream Christian one, don't have that idiosyncrasy. He's disproven a version of God nobody believed in. And he can't do any better than that.

Ultimately, his argument is one of revulsion: "X; I don't like X; therefore not God." Such arguments against God are basic, common, and ancient. The only difference is that for most people, X is "evil," or "suffering," or "injustice," or "Hitler." But for Yudkowsky, it's "Hitler's dad's sperm."

Lottery of Babylon fucked around with this message at 01:48 on Aug 11, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Djeser posted:

Christianity has plenty of places where big things come from small events.

Thanks, I figured there had to be something like that in there.

In the comments, someone calls out Yudkowsky's poor understanding of history and points out that WWII was widely predicted twenty years in advance. Yudkowsky counters that it wouldn't have turned out exactly the same with a different German leader. After that the comments somehow turn into an argument about cryonics. Not a single person points out that "big things come from little things" doesn't come anywhere close to proving God doesn't exist.

"Beyond the Reach of God" is a followup to previous article and great death metal band name "The Magnitude of His Own Folly", which ends thus:

Yudkowsky posted:

I saw that others, still ignorant of the rules, were saying "I will go ahead and do X"; and that to the extent that X was a coherent proposal at all, I knew that would result in a bang; but they said, "I do not know it cannot work". I would try to explain to them the smallness of the target in the search space, and they would say "How can you be so sure I won't win the lottery?", wielding their own ignorance as a bludgeon.

And so I realized that the only thing I could have done to save myself, in my previous state of ignorance, was to say: "I will not proceed until I know positively that the ground is safe." And there are many clever arguments for why you should step on a piece of ground that you don't know to contain a landmine; but they all sound much less clever, after you look to the place that you proposed and intended to step, and see the bang.

Which means it's time for our regular reminder that Yudkowsky's entire belief system is based on blind faith that the universe's laws of physics will magically change in a way that permits immortality, and that his justification for believing this is "You can't prove it won't!"

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Anticheese posted:

Wouldn't "escaping the reach of God" by making a really really good implementation of Conway's Game of Life be a dumb idea from the outset, since the simulation still exists in our universe and is (theologically theoretically) still within God's domain?

He has some lines dancing around that objection:

Yudkowsky posted:

But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors. If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene. God being omnipresent, there is no refuge anywhere for true horror: Life is fair.

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities. Even as a very young child, I don't remember believing that. (And why would you need to believe it, if God can modify anything that actually exists?)

What does Life look like, in this imaginary world where every step follows only from its immediate predecessor?


Anticheese posted:

It seems simpler to just argue that free will precludes God, since horrible things like torture, Hitler, and the butterfly effect exist?

What he ends up arguing is basically the free will thing, except where most people look at "a man can do horrible things" and get upset at the "horrible things" part, Yudkowsky gets upset at the "a man" part because obviously it ought to take several men to do horrible things or else it's just not fair.

In Yudkowsky's worldview, if more Germans agreed that the Holocaust was the correct course of action then the Holocaust would in fact have been completely okay.

Lottery of Babylon fucked around with this message at 07:52 on May 27, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Mr. Sunshine posted:

In fact, the Yud's obsession with suffering, torture and death just obfuscates what could be a valid point - if we could set up a simulation of the universe which runs entirely on physical laws without divine intervention, how would it differ from our own? If there are no significant differences, what does this say about the possibility of God existing?

That's what it seems like he's going for in the middle section. The trouble is that:

1) He never actually explains why the simulation would turn out exactly like our own. He just says "Hey, maybe in the simulation 'cows' evolve and are eaten by 'wolves'. Or maybe not. But maybe they would, heh, wolves eating cows, wouldn't that sound familiar, nudge nudge wink wink" and moves on from there assuming that any simulation would in fact necessarily be an identical copy of our world. He never justifies this assertion.

2) He frames it less in terms of "The world would be much like our own" and more in terms of "Noted heartless supervillain Genghis Khan tortures puppies for no reason and gets away with it because he's just that evil and the world is just that bleak and uh I don't actually know anything about Genghis Khan"

3) After setting up this argument, he takes a sharp turn into a completely different argument about how small causes having big effects is just wrong.

Mr. Sunshine posted:

Of course, this still hinges on being able to accurately simulate the entire universe in goddamn Conway's Game of Life.

Most of his references are designed to make him look smart rather than to actually be useful. Conway's Game of Life is a cool thing cool nerds have heard of, therefore it needs to be shoehorned into everything even if it's so impractical it's actually counterproductive to bring it up. Knuth's up-arrow notation is how you write Graham's Number* and is therefore a cool thing cool nerds have heard of, therefore it needs to be shoehorned into everything even if it forces him to waste several paragraphs explaining the notation when all that really matters is "it's a big number". They're pointless references designed only to high-five other people who make the same pointless references; they're the computer science equivalent of quoting Monty Python. Since Yudkowsky is in the business of trying to look smart, not of doing useful things, this shouldn't be surprising.

*nb: Graham's Number can't actually be written in up-arrow notation because it's too big, but the way it's written involves up-arrow notation and that's all people remember
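
For the morbidly curious, the notation itself is nothing deep - each extra arrow just iterates the previous operation, and you can define the whole thing in a few lines (my sketch; don't feed it three arrows and expect an answer):

code:

def up_arrow(a, b, arrows):
    """Knuth's a ^^...^ b: one arrow is plain exponentiation, each extra arrow iterates it."""
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1                    # conventional base case
    return up_arrow(a, up_arrow(a, b - 1, arrows), arrows - 1)

print(up_arrow(3, 3, 1))   # 3^3  = 27
print(up_arrow(3, 3, 2))   # 3^^3 = 3^27 = 7,625,597,484,987
# up_arrow(3, 3, 3) is 3^^^3, a power tower of 7.6 trillion threes; it blows the
# recursion limit (and the lifespan of the universe) long before returning, which
# is rather the point of invoking it in thought experiments.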

Anticheese posted:

Isn't that what you ask by running a goddamn simulation?

No, because if you actually ran the simulation that pesky God feller might interrupt it. If you asked a computer to compute 2+2, you couldn't be sure it wouldn't spit out 5 because of God interfering in its circuits to make it give the wrong answer. But if you consider what you would logically have to get if you added 2 plus 2, the answer would be 4.

It's a long-winded way of saying, "Okay, if we make a simulation we still can't escape the reach of God. But who cares, let's pretend we could escape God's reach anyhow, then what?"

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Monocled Falcon posted:

Ah, man, but that's still so much to talk about.

Like, is this some really obvious philosophic concept that he reinvented? I read the article as a young teenager and it seemed clever.
Reading it now, it's incredible pretentious but seems like a pretty good criticism of everything he's ever written about Friendly AI or the singularity.

Yeah it's "the halo effect exists" dressed up in a bunch of bullshit that it's borderline unreadable. It's also useless because the infinite love of science feedback loop doesn't happen except to other unemployed thirty-year-old autistics who jerk off 24/7 to Knuth's up arrow notation.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

"alter behavior patterns" probably means "temporarily do something outside of normal behavior patterns" here. He can't alter his behavior to start going to the gym every day, even though he can theoretically force himself to go to the gym once (though we know he hasn't).

Making meaningful changes/improvements stick in the long term is hard. That's not a result of Magical Nerd Limited Rewiring Syndrome or whatever, that's "people often fall back on bad habits". A wise person uses this knowledge to arm themselves against the tendency to backslide and becomes more vigilant against slipping. Yudkowsky uses this knowledge as an excuse never to try to improve himself at all, because why bother if backsliding is so easy?

Remember that time a TCC mod killed a girl by saying "don't bother with rehab because in the long run a lot of people relapse, just do a bunch of LSD instead"? That's what Yudkowsky is doing to himself here.

Lottery of Babylon fucked around with this message at 17:24 on Jun 17, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Too good to be true argues that the scientific community is deliberately suppressing evidence that vaccines cause autism.

Well, okay, the author doesn't actually seem to think vaccines cause autism (although he does end with an odd comment that even 3 out of 43 studies showing vaccines cause autism would be surprisingly few, when in fact it would be surprisingly many, since that's about 7% - more than the 5% false-positive rate his own assumptions predict). He just thinks that the evidence that vaccines don't cause autism is too strong, and that some evidence that vaccines cause autism should have randomly appeared on its own. As further proof, he notes that studies establishing a link between vaccines and autism are less likely to be published than studies that don't. Which obviously proves that the evidence is being suppressed, because we rationalists all know correlation implies causation.

His proof is based on baseless assumptions, such as asserting without justification that all studies on a major hot-button issue are only using a 95% confidence threshold. (A quick search reveals that, surprise, that's not the case.) Even after stacking the deck with false assumptions, what he thinks is a resounding proof really isn't - he ends up with a p-value of .14 (which he misrepresents as .13), which is too high to say anything with even 95% confidence.

All in all, pretty standard for lesswrong - bad math, false assumptions, and taking the anti-vaxxer side in the name of just asking questions. But what interested me was this line:

quote:

Having "95% confidence" tells you nothing about the chance that you're able to detect a link if it exists. It might be 50%. It might be 90%. This is the information black hole that priors disappear into when you use frequentist statistics.

Uh, no, that's the test's statistical power, and it's information you don't have regardless of how you talk about your statistics. Power depends on exactly what the link is and how strong it is, and since you don't know the exact strength and nature of the link, you can't get that number out of the confidence level alone. This isn't a frequentist-vs-bayesian issue, this is a "you just plain don't have that information" vs "let's just make some poo poo up" issue.
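
To make that concrete, here's a crude power simulation with completely made-up rates and sample sizes. Same 95% threshold in both cases; the power swings wildly with the effect size, which is exactly the information no confidence level can hand you.

code:

import random

def detects_link(n, rate_a, rate_b, z_crit=1.96):
    """One simulated two-arm study with a normal-approximation z-test at alpha = 0.05."""
    a = sum(random.random() < rate_a for _ in range(n))
    b = sum(random.random() < rate_b for _ in range(n))
    p_pool = (a + b) / (2 * n)
    se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
    return se > 0 and abs(a - b) / n / se > z_crit

def power(n, rate_a, rate_b, trials=2_000):
    return sum(detects_link(n, rate_a, rate_b) for _ in range(trials)) / trials

print(power(5_000, 0.010, 0.012))   # tiny effect: power somewhere around 0.15-0.20
print(power(5_000, 0.010, 0.020))   # bigger effect: power close to 1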

What's with lesswrong's weird hate-on for frequentism?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

But there isn't a wrong interpretation. There's no reason for there to be a wrong interpretation.

In probability the different interpretations are just different ways of thinking about what probabilities mean. Sometimes one is more convenient for some situations than others, but neither is right or wrong because they're just different approaches. It's not like, say, quantum mechanics interpretations, where arguably either there are many worlds or there aren't, so many-worlds is either right or wrong (but we can't tell which). Bayesianism and frequentism don't make that sort of objective claim about physical reality, they're just different approaches.

Yudkowsky claimed that his (wrong) solution to the "At least one of my children is male" problem was the natural result of Bayesianism and that the counterintuitive (correct) solution was the product of idiotic truth-denying frequentism. It's bizarre seeing them treat frequentism as a weird bogeyman.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Stottie Kyek posted:

Don't you generally use frequentism and Bayesianism for different applications anyway? Frequentist tools like T-test, Wilcoxon rank scoring and ANOVA testing are for things like modelling how closely related two or more sets of data are to each other, like for monitoring change over time or testing the probability that a hypothesis is correct versus the null hypothesis. Like in medical testing, you can use frequency analysis to see how the control group's results differ from the testing group's. I haven't seen anything like that on Less Wrong, except in the autism/vaccines post where they haven't really understood it properly.

For all their professed love of bayesianism, lesswrong doesn't actually like updating their beliefs in response to evidence. They consider that grunt work for lesser intellects who need to rely on their mere senses - senses that could easily be deceived by the Dark Lords of the Matrix. Instead, lesswrong believes that the thing to do is to deduce the correct priors through pure reason, then stick with those priors for ever because your logicks are very smart so obviously those priors are correct.

Antivehicular posted:

This logic is just so sublimely incorrect and wrong-headed that I can't even be particularly angry at it. The entire concept of "there's a non-zero chance that Option A could be correct, so obviously sometimes it should be correct, ergo if studies continue to establish that Option A is wrong and Option B is correct, then obviously DATA SUPPRESSION!!" just opens so many wondrous possibilities. Someone should tell him there's a non-zero chance that the government implanted tracking microchips in his fillings, just to see how many dentists will have to deny it before he decides the conspiracy must go deeper.

He's not saying sometimes vaccines should cause autism. He's saying that even in situations where no correlation exists (i.e. vaccines really don't cause autism), an experiment using a 95% confidence interval has a 5% chance of returning a false positive by "finding" a link due to random variance. Which is actually true - making Type I errors 5% of the time when the null hypothesis is true is the definition of 95% confidence. That part is okay.

The problem is using this to argue that 39 consecutive studies finding no link proves a conspiracy because no Type I errors appeared. For one thing, there's about a 14% chance of that happening randomly, which is much too high to dismiss chance as an explanation. For another, it assumes everything just uses 95% confidence intervals, which is absurd for an issue like this where even one false positive could cause severe damage. The studies are large enough to have high power even at stricter significance levels - a recent study published two months ago had a sample size of 1.5 million - and it would be recklessly irresponsible to conclude Vaccines Cause Autism!!!! on a p-value of .04. As for the evidence that some studies found a link but weren't published, that's a basic correlation/causation fallacy: rather than being suppressed for finding a link, it's more likely that a flaw in a study's design caused both the appearance of a link and the study's rejection for being flawed.
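
The arithmetic, for anyone who wants it (taking the article's own assumption that every study sits at exactly the 95% threshold):

code:

studies = 39
alpha = 0.05                              # the article's assumption: every study at 95% confidence

print(round((1 - alpha) ** studies, 3))   # 0.135: a roughly 1-in-7 fluke, not evidence of a conspiracy

# Relax the assumption even slightly and the "anomaly" evaporates:
print(round((1 - 0.01) ** studies, 3))    # 0.676 if studies actually use a 99% threshold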

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Anticheese posted:

How the gently caress would something cause continuity issues in a new continuity? :psyduck:

It uses continuity to mean continuity of self. Basically, if you make a horcrux and you die, the version of you in the horcrux doesn't have the memories you formed after making the horcrux, so it isn't really the same you as the you who died. (That's not how it works in the books.)

Also, the soul in the horcrux isn't really your soul, it's actually the ghost of someone else you killed forced to manifest, locked in the object, and imprinted with your memories. (That's not how it works in the books.)

Also also, Merlin's Interdict prevents your most powerful spells from being passed through a horcrux, so if resurrected you would be without your strongest spells and would be killed again easily. (That's not how anything works in the books.)

Basically he's doing his usual thing of "criticizing" the books' plots by not reading them, making bullshit up, and then complaining that the bullshit he made up doesn't make sense.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Let's be fair, Harry doesn't deny that curses exist or that killing a unicorn might carry one. He just insists that no curse is even worthy of consideration when the alternative is death, and that no curse could possibly be so bad that it should deter you from a life-saving unicorn murder.

That would be shaky logic on its own (fates worse than death are pretty easy to imagine, like an I Have No Mouth And I Must Scream scenario; unicorns might be sentient, which kills the utilitarian argument), but then he insists that it's practically mass murder for St. Mungo's to not have a captive stable of unicorns it keeps and breeds for the purpose of killing whenever someone badly injured comes in.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Namarrgon posted:

I'll be honest: I actually like his magic setting better than the original. I think the Merlin's Interdict (can only learn more powerful spells from others or make them yourselves, not learn them from books) or the mechanics behind the killing curse or the horcrux are pretty neat.

A setting is only as good as the stories it lets you tell.

Harry Potter's magic system isn't elaborate, but it does its job. It provides a backdrop for a story about kids growing up at a magic school and fighting monsters, and it gets out of the way enough that the story is always about the characters without getting bogged down in the magic version of technobabble. In the Methods of Rationality, the magic system isn't there to be used or to tell stories. It's there to give an eleven-year-old boy something to circlejerk with Literally Hitler about.

In the actual Harry Potter series, the purpose of horcruxes is to keep the bad guy around and to give the heroes a bunch of tasks to overcome in order to fight him. In the Methods of Rationality, the purpose of horcruxes is for Literally Hitler to say "Horcruxes are stupid" and for Harry to reply "Indeed, as are all people who believe in souls", and for both to nod sagely in mutual superiority over everyone else.

Let's see the mechanics of the killing curse, as explained in the latest chapter:

Wizard Hitler posted:

"There is a limitation... to the Killing Curse. To cast it once... in a fight... you must hate enough... to want the other dead. To cast Avada... Kedavra twice... you must hate enough... to kill twice... to cut their throat with your own hands... to watch them die... then do it again. Very few... can hate enough... to kill someone... five times... they would... get bored." The Defense Professor breathed several times, before continuing. "But if you look at history... you will find some Dark Wizards... who could cast the Killing Curse... over and over. A nineteenth-century witch... who called herself Dark Evangel... the Aurors called her A. K. McDowell. She could cast the Killing Curse... a dozen times... in one fight. Ask yourself... as I asked myself... what is the secret... that she knew? What is deadlier than hate... and flows without limit?"

A second level to the Avada Kedavra spell, just like with the Patronus Charm...

Hitler's Disciple posted:

Harry had read once, somewhere, that the opposite of happiness wasn't sadness, but boredom; and the author had gone on to say that to find happiness in life you asked yourself not what would make you happy, but what would excite you. And by the same reasoning, hatred wasn't the true opposite of love. Even hatred was a kind of respect that you could give to someone's existence. If you cared about someone enough to prefer their dying to their living, it meant you were thinking about them.

This is some pretty tenuous logic already. One could equally say that the opposite of North is not South, for that is also in the direction of a pole, but East. Or perhaps Up, because East and South, like North, are still compass directions. Or perhaps Right Here, because Up, East, and South, like North, are all directions pointing away from here. Or perhaps Monkeycheese, because Here, Up, East, and South all have to do with physical location, which makes them too similar to North to truly be its opposite, whereas Monkeycheese is completely unrelated.

I can at least see how the "opposite of happiness is boredom" thing would appeal to a man like Yudkowsky who has never worked a day in his life, has never experienced hardship or loss, has all his money donated to him, and whose greatest struggle is working up the energy to write lovely Harry Potter fanfiction.

Google tells me that the love/indifference, happiness/boredom thing is a quote from a self-help book. Wikipedia tells me that the book's author runs "an online nutritional supplements company" which claims its products dramatically improve your memory and reaction time almost immediately and are used by "17 world champions", but which has produced zero evidence to support either claim. He's also been caught buying positive reviews on Amazon for his self-help books. Apparently, Yudkowsky found those self-help books worthy of being a major focus of his philosophy and magic system.

Hitler's Disciple posted:

It had come up much earlier, before the Trial, in conversation with Hermione; when she'd said something about magical Britain being Prejudiced, with considerable and recent justification. And Harry had thought - but not said - that at least she'd been let into Hogwarts to be spat upon.

Not like certain people living in certain countries, who were, it was said, as human as anyone else; who were said to be sapient beings, worth more than any mere unicorn. But who nonetheless wouldn't be allowed to live in Muggle Britain. On that score, at least, no Muggle had the right to look a wizard in the eye. Magical Britain might discriminate against Muggleborns, but at least it allowed them inside so they could be spat upon in person.

What is deadlier than hate, and flows without limit?

"Indifference," Harry whispered aloud, the secret of a spell he would never be able to cast; and kept striding toward the library to read anything he could find, anything at all, about the Philosopher's Stone.

Ah, of course, it all makes sense! ...except it really doesn't. On the scale being discussed here, the Killing Curse is a relatively close-range spell. You can't use it to kill someone who's out of sight and out of mind a continent away. It's not something you can passively, inadvertently cast without even realizing you're doing it just by not paying attention. It's not a grenade you can lob over a wall at a room that might be full of faceless, unseen people.

If you're using the Killing Curse on someone, you're looking at that one person clearly and choosing to cast it on that one person. And the Harry Potter universe is full of spells that can completely incapacitate a target without killing them, most of which are far easier than the Killing Curse; if you choose to Avada Kedavra someone, you've decided not just that you want them out of your way at the moment but specifically that you want them dead.

It's a bizarre shoehorned-in metaphor for immigration, of all things, and it doesn't really make sense even within the context of the fiction. And since this is Yudkowsky's writing we're talking about, it serves zero purpose in the story other than giving Harry an excuse to monologue here.

Besesoth posted:

Not MOR-related, but: Having read through the thread, the concept of Roko's Basilisk continues to bother me, but not for the reasons they want it to.

The Basilisk's premise seems to be that an AI with the capability to improve the lot of humankind and the desire to implement that improvement ("friendly") will, in order to ensure that it gets completed as soon as possible, create an arbitrarily large number of perfect simulations of humans living their lives before the AI's creation, and then subject any simulation that chooses not to donate to the AI's creators to arbitrarily large amounts of torture. And since a person thus simulated can't be certain that they aren't a simulation rather than the original, they should donate to avoid being tortured.

But a perfect simulation of me must be indistinguishable from me except that it is virtual and I am not, and so necessarily the simulation must be as sapient and free-willed as I am. Otherwise it's not a perfect simulation. If that's the case, and the Basilisk sees no problem with torturing that simulation, then it sees no problem with torturing intelligent, sapient life (since definitionally the only difference between the simulation and the real thing is that one has a physical existence) - unless it values the original sapient life over the simulated sapient life.

In which case one of two things must be true:

* the Basilisk values non-artificial/non-computer life more highly than artificial/computer-based life, in which case it can't use its own benefit (being created earlier) to justify the harm the experiment does to the psyches of the people who know it will simulate them, and thus shouldn't run the simulations at all; or

* the Basilisk is actually a hostile - at best a neutral - AI, not a friendly one.

Either way, it fails at its job.

(And anyway, it would be much more efficient from a financial standpoint to simply bribe one of the programmers to set COST_OF_BEING_A_BASILISK = 9999 in the genetic algorithm.)

You're forgetting that the AI is infinitely 3^^^^^3 powerful and infinitely 3^^^^^3 good. If the AI can usher in its own existence even one second sooner, then that's an extra 3^^^^^3 utilitarian happiness-units it can produce. Which means that even if it needs to create 3^^^^3 utilitarian suffering-units by torturing countless simulated people, it's all for the greater good because the positive numbers are bigger than the negative numbers. COST_OF_BEING_A_BASILISK = 9999 won't stop the AI if it thinks the reward of being a basilisk is 10000.

The problem is that most of us don't think you can just linearly add and subtract happiness/suffering that way. To us, if you shoot one kid's dad and give ten thousand other kids free ice cream, you're still a bad person. But Yudkowsky thinks that beep boop good-units and bad-units cancel each other out.
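
If you want to see just how little machinery the basilisk argument actually needs, here's a toy sketch in Python of the beep boop arithmetic being mocked. To be clear, this is purely my own illustration; none of it comes from Yudkowsky or LessWrong, and every name and number in it is made up.

code:
# Toy illustration only -- not anything Yudkowsky or LessWrong actually wrote.
# All names and numbers below are made up for the joke.

def naive_total_utility(outcomes):
    """Linearly add every (event, utility) pair, as if suffering and
    happiness were interchangeable currency that cancels out."""
    return sum(utility for _event, utility in outcomes)

HUGE = 10 ** 100  # stand-in for 3^^^^^3, which wouldn't fit in this universe

basilisk_plan = [
    ("torture countless simulated people", -HUGE),
    ("come into existence one second sooner", 10 * HUGE),
]
do_nothing = [("leave everyone alone", 0)]

# Under naive linear aggregation, the torture plan "wins" -- which is exactly
# the problem with letting good-units and bad-units cancel each other out.
assert naive_total_utility(basilisk_plan) > naive_total_utility(do_nothing)

# And a hard-coded penalty only deters the AI if it outweighs the prize:
COST_OF_BEING_A_BASILISK = 9999
REWARD_OF_BEING_A_BASILISK = 10000  # hypothetical number, per the joke above
print(REWARD_OF_BEING_A_BASILISK - COST_OF_BEING_A_BASILISK > 0)  # True

The whole argument lives in that one sum() call: once you accept that utilities add linearly across arbitrarily many people, everything else is just comparing two big numbers.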

The other problem is that Yudkowsky thinks that future events can directly affect the past because he's broken his brain with Timeless Decision Theory nonsense. The truth is that they can't, and even in Yudkowsky's own weird TDT logic, the future can only affect the past if the past contains an infinitely powerful perfect AI simulator, which it doesn't. But since Yudkowsky doesn't understand any of the words he spews out, he doesn't notice that sort of problem and thinks an AI would, once created, try to rewrite history to make itself be created sooner. It wouldn't and couldn't because that's dumb.

Lottery of Babylon fucked around with this message at 20:12 on Jul 27, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Morkyz posted:

"The opposite of hate is indifference" isn't an EY circlejerk thing, it's actually a pretty common truism.

http://www.americanrhetoric.com/speeches/ewieselperilsofindifference.html

Elie Wiesel is a holocaust survivor, so he knows enough to talk about hate vs indifference.

You can argue it isn't true, but it isn't just something some rear end in a top hat nerd made up.

Yeah, the love/indifference part on its own is a real thing. But the happiness/boredom part comes only from some rear end in a top hat nerd who also paired it with the love/indifference thing, so that's clearly where Yudkowsky is getting it from. The indifference half makes sense in the right context, but the boredom half is pure dumbass sheltered-nerd bullshit, and pairing the two together cheapens the indifference part. Applying it to the Killing Curse also just isn't a good metaphor; indifference kills, but not because people load their guns with indifference bullets when they duel each other.

pentyne posted:

Huh, that name seems familiar.

http://negima.wikia.com/wiki/Evangeline_A.K._McDowell


You got your 'looks like a child but is really 600 years old anime love interest for a harem anime' mixed with 'super smart and obscenely rational boy wizard dismantles the rationale behind a 1000s-year-old magic society because he's just so rationally smart'

So now LW is branching out to start seriously critiquing how some magical anime works? I can't wait for his "Sailor Moon and the MOR"

Hahaha even in his big dramatic speech where he reveals the great secret of the killing curse he still feels the need to reference creepy nerd anime.

Imagine if real writers felt the need to do that. "No, Luke. I am your father. And my waifu is Sakura-chan."

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

I entered a random chapter number into the HPMOR URL. This is what I was rewarded with.

Yudkowsky posted:

MY LITTLE PONY: FRIENDSHIP IS SCIENCE

"Applejack, who told me outright that I was mistaken, represents the spirit of... honesty! " The dusky pony raised her head even higher, her mane blowing like a wind about the night sky of her long neck, her eyes blazing like stars. "Fluttershy, who approached the manticore to find out about the thorn in its paw, represents the spirit of... investigation! Pinkie Pie, who realized that the awful faces were just trees, represents the spirit of... formulating alternative hypotheses! Rarity, who solved the serpent's problem represents the spirit of... creativity! Rainbow Dash, who saw through the false offer of her heart's desire, represents the spirit of... analysis! Marie-Susan, who made us convince her of our theories before she funded our expedition, represents the spirit of... peer review! And when those Elements are ignited by the spark of curiosity that resides in the heart of all of us, it creates the seventh element - the Element of Sci-"

The blast of power that came forth was like a wind of brilliant lava, it caught Marie-Susan before the pony could even flinch, and stripped her flesh from her bones and crumbled her bones to ash before any of them had the chance to rear in shock.

From the dark thing that stood in the center of the dais where the Elements had shattered, from the seething madness and despair surrounding the scarce-recognizable void-black outline of a horse, came a voice that seemed to bypass all ears and burn like cold fire, sounding directly in the brain of every pony who heard:

Did you expect me to just stand there and let you finish?

The screams began, then, echoing around that ancient and abandoned throne room; and Applejack fell to her forelocks beside the still-glowing ash that was all that remained of Marie-Susan's bones, looking too shattered even to sob.

Twilight Sparkle stared at the horror that had once been Nightmare Moon, racking her brains with frantic desperation and realizing that it was over, they were doomed, it was hopeless without Marie-Susan; everyone knew that no matter how honest, investigating, skeptical, creative, analytic, or curious you were, what really made your work Science was when you published your results in a prestigious journal. Everyone knew that...

After that it turns into Naruto fanfiction.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Oddly enough, for all Yudkowsky's hatred of peer review, I can't seem to find any articles by him addressing it. You'd think anything he hated enough to mock through My Little Pony fanfiction would be worth at least one of his :words: articles explaining why peer review is so awful. Maybe at some level he realizes he doesn't actually have a leg to stand on here and is just bitter that Nature won't publish his Harry Potter fanfic?

razorrozar posted:

It just occurred to me that "I don't care about you" is usually a self-defeating statement. Clearly you cared enough to feel the need to inform them you don't care.

Yeah, which is why using your indifference to someone to power the killing curse against them makes no sense. "Who gives a poo poo about those people" doesn't argue against itself, but "I don't care about you" does.

When the context is the Holocaust, it makes more sense. You have concentration camp guards just following orders they can't be arsed to question, commanders in Berlin ordering mass executions of unseen swathes of Those People, a civilian population whose majority isn't interested enough to do anything, and other countries where nobody spares a thought for what might be happening to Those People so far away because they're more concerned with their ration books. In an "All that is necessary for the triumph of evil is that good men do nothing" context, indifference can clearly be deadly.

But when the context is one bad guy acting alone, killing individual people one by one, face-to-face, on nobody's orders but his own, it's absurd to pin the cause on that bad guy's indifference. The easiest real-world analogue to the lone dark AKing wizard would be a shooter going on a rampage, but nobody blames Virginia Tech on the shooter's superhuman, overflowing indifference.

  • Locked thread