Reflections85
Apr 30, 2013

su3su2u1 posted:

As a simple, obvious counterexample, I once solved a 3 stop traveling salesman problem in my head. If any non-CS people want an explanation of how incredibly wrong this is, let me know and I'll try to go into more detail.
Being a non-compsci person, could you explain why that is incredibly wrong? Or link to a source that could explain it?


SolTerrasa
Sep 2, 2011

Reflections85 posted:

Being a non-compsci person, could you explain why that is incredibly wrong? Or link to a source that could explain it?

su3su2u1: Huh, you're probably right.

Reflections85: Now that he/she mentions it, sure. This is pretty rough, but it's all the effort I'd like to put into a forums post at the moment. And it'll get the idea across. (CS goons, don't jump down my throat; I promise I've taken a computing theory course, I'm just trying not to be too confusing.)

Some problems do not get harder as the size of the input gets bigger. For instance, it is no harder to remove the last element from a list of a gazillion elements than from a list of one element. Some problems, though, get much harder as the size of the input gets bigger: it's much harder to sort a list of a gazillion elements than it is to sort a list of one element. How much harder a problem gets as the size of the input grows is called its "computational complexity". We talk about these things in terms of big-O complexity, which is basically the fastest-growing term of the function that represents the time-to-completion as a function of input size. So, that first set of problems, where it takes the same amount of time no matter the size, is O(1). A problem where the time-to-complete increases linearly with input size (say, counting the number of elements in a list) is O(n).
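
(To make that concrete, here's a tiny Python sketch of the two cases; the list sizes and helper names are just for illustration.)

code:

# O(1): removing the last element costs the same no matter how long the list is.
# O(n): counting the elements has to touch every one of them.

def remove_last(items):
    # list.pop() with no index removes the final element in constant time.
    return items.pop()

def count_elements(items):
    # Walks the whole list, so the work grows linearly with len(items).
    total = 0
    for _ in items:
        total += 1
    return total

short_list = list(range(10))
huge_list = list(range(10_000_000))

remove_last(short_list)     # roughly the same cost...
remove_last(huge_list)      # ...as this

count_elements(short_list)  # cheap
count_elements(huge_list)   # a million times more work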

Some of the hardest problems we've ever discovered live in a class of problems called "NP". For the hardest of those, the only approaches we know of amount to brute force, and brute force can take time on the order of n!. N! is super huge when N is large. The canonical example is finding the shortest route through N cities, visiting each city once. It's really easy for two cities: the shortest path is from one to the other. It's only slightly harder for three. It's a little harder for four, and once you get to ten or so most people can't do it in their heads.
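
(For a sense of how fast n! outruns even a fast-growing polynomial, a few throwaway lines of Python; stopping at 20 is arbitrary.)

code:

import math

# n! versus a polynomial like n^3: the polynomial loses badly almost immediately.
for n in range(1, 21):
    print(f"n={n:2d}   n^3={n**3:>8,}   n!={math.factorial(n):>26,}")

# By n = 20, n^3 is 8,000 while n! is already about 2.4 x 10^18.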

Yudkowsky has claimed that literally no such problems could ever be solved in nature. This is false. For very small N, N! is not that big. You could arguably say that any animal which travels in a straight line is solving the previous problem (called "Travelling Salesman") for N=2.

Yudkowsky would have a witty response to this obvious counterexample, which would be a lot of words surrounding a core of "yes, well, that's not what I meant", and then he'd redefine his terms so that he's right and you're wrong.

E: yeah, vvvvvvv this one vvvvvvvv is better than mine:

SolTerrasa fucked around with this message at 08:37 on May 14, 2014

su3su2u1
Apr 23, 2014

Reflections85 posted:

Being a non-compsci person, could you explain why that is incredibly wrong? Or link to a source that could explain it?

So the way that computer scientists and mathematicians talk about computational complexity is by imagining very simple types of computers, called Turing machines. A Turing machine is a reader/writer that gets fed a tape, and has a set of rules about what to do when it sees various symbols, i.e. it could have a rule that says "if you see an A, change it to a B and move the tape three spots left", or "if you see an x, change it to a y and move one slot right." You can imagine implementing an algorithm by designing a Turing machine, feeding it inputs on the tape, and designing it so that it rewrites the tape into the output of your algorithm. This is a very condensed explanation, but "Turing machine" probably has a nice long article on Wikipedia.

Now, we say a problem is in class "P" if you can make a Turing machine that solves the problem in polynomial time. Polynomial time means that the number of steps the Turing machine has to take is a polynomial in the size of the input. So imagine a Turing machine algorithm to add one to a binary number. The input tape contains the binary number, and starting from the right, the Turing machine changes any 1 to a 0, and as soon as it sees a 0, it changes it to a 1 and stops. The maximum number of operations you'd need to run this Turing machine is N, where N is the length of the number (also the length of the tape). If you have a Turing machine that takes N^2 steps, that's also in P, because N^2 is a polynomial in N.
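
(A rough Python sketch of that add-one machine, if it helps; the state names and the grow-the-tape step for an all-1s input are my own bookkeeping, not part of the description above.)

code:

# Tiny simulator for the "add one to a binary number" Turing machine described above.
# The tape is a list of characters and the head starts at the rightmost digit.

def increment_binary(tape):
    tape = list(tape)
    head = len(tape) - 1
    state = "carry"            # still propagating the +1
    while state != "done":
        if head < 0:
            # Ran off the left edge: every digit was a 1, so the tape grows by one cell.
            tape.insert(0, "1")
            state = "done"
        elif tape[head] == "1":
            # Rule: see a 1 while carrying -> write 0, move left, keep carrying.
            tape[head] = "0"
            head -= 1
        else:
            # Rule: see a 0 while carrying -> write 1 and halt.
            tape[head] = "1"
            state = "done"
    return "".join(tape)

print(increment_binary("1011"))  # 1100
print(increment_binary("111"))   # 1000

# At most N + 1 steps for an N-digit number, so this runs in polynomial (linear) time.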

BUT, we could imagine a different type of Turing machine. This Turing machine can have more than one command for a given symbol (i.e. "if you see an A, change it to a B and move two steps left" OR "if you see an A, change it to a D and move one step right"). At each step, the machine picks whichever option will lead to it finishing fastest (it's the best possible guesser). I'm sure there is a longer explanation of non-deterministic Turing machines on Wikipedia. A problem is said to be in NP if one of these non-deterministic Turing machines can solve it in polynomial time. Because we can turn a deterministic Turing machine into a non-deterministic one simply by adding more instructions, anything that is in P is also in NP.
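
(Not a real Turing machine, but here's a toy Python sketch of the "accepts if any branch of guesses works" idea. The problem, picking a + or - sign for each number so the total comes out to zero, is my own example, not from the post; note that the deterministic simulation has to grind through every branch, which is exactly the exponential blowup the guessing power hides.)

code:

from itertools import product

# Non-determinism, roughly: each position has two possible "moves" (+1 or -1),
# and the input is accepted if ANY sequence of choices works out.
# A deterministic simulator has no magic guesser, so it tries all 2^n branches.

def accepts(numbers):
    for signs in product((+1, -1), repeat=len(numbers)):  # every branch of choices
        if sum(s * n for s, n in zip(signs, numbers)) == 0:
            return True   # some branch accepts, so the "guessing" machine accepts
    return False

print(accepts([3, 1, 1, 5]))  # True: +3 +1 +1 -5 = 0
print(accepts([2, 3]))        # False: no combination of signs sums to zero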

A problem is called "NP-hard" if it's at least as hard as the hardest problems in NP. It is an open problem whether those hardest problems are actually in P, but almost everyone believes they aren't. Generally it's believed their scaling is intrinsically worse than polynomial (something like e^N).

This is a long wind-up, but the important takeaway is this: complexity classes like P and NP are about how the number of operations you need to solve a problem scales with the size of the input. Low-input-length (low N) instances of NP-hard problems can be quite simple to solve. Also, if you have lots of computers and lots of time, you might be able to brute force an NP-hard problem simply by waiting long enough. A famously NP-hard problem is the traveling salesman problem: a salesman has to travel to N locations and then home again, and he wants to travel the overall shortest distance; the goal is to find that shortest route. Now, with 3 cities you have to check 6 possible orderings, something you can add up in your head. For 4 cities there are 24 routes, so you can still do everything with a pencil and paper in just a few minutes. But in general the number of routes scales like n! (n*(n-1)*(n-2)...), so it doesn't take very many cities before it's impractical to check every route, even with a computer.
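
(And a rough Python sketch of that brute-force check; the city coordinates are made up. This is exactly the approach that stops being practical once the number of cities gets into the teens.)

code:

from itertools import permutations
from math import dist, factorial

# Made-up city coordinates; city 0 is "home".
cities = [(0, 0), (2, 5), (6, 1), (5, 4), (1, 3)]

def tour_length(order):
    # Home -> each city in the given order -> back home.
    route = [0, *order, 0]
    return sum(dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

# Try every ordering of the non-home cities: (n-1)! candidates to check.
best = min(permutations(range(1, len(cities))), key=tour_length)
print("routes checked:", factorial(len(cities) - 1))
print("best route:", best, "with length", round(tour_length(best), 2))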

With that in mind, let's revisit Big Yud's quote:

Big Yud posted:

Nothing that has physically happened on Earth in real life, such as proteins folding inside a cell, or the evolution of new enzymes, or hominid brains solving problems, or whatever, can have been NP-hard. Period. It could be a physical event that you choose to regard as a P-approximation to a theoretical problem whose optimal solution would be NP-hard, but so what, that wouldn't have anything to do with what physically happened. It would take unknown, exotic physics to have anything NP-hard physically happen. Anything that could not plausibly have involved black holes rotating at half the speed of light to produce closed timelike curves, or whatever, cannot have plausibly involved NP-hard problems. NP-hard = "did not physically happen". "Physically happened" = not NP-hard

His assertion that "nothing that has physically happened on Earth in real life... can have been NP-hard" is obviously laughable. There are lots of NP-hard problems that are trivially easy for small N. Even for large N, clever computer scientists with lots of computing power have found solutions (back in the mid-2000s I saw a paper that solved the traveling salesman problem for every city in Sweden, something like ~70,000 cities).

His next assertion, "proteins folding in a cell, or evolution of enzymes [couldn't be NP-hard]", is more interesting, because conceivably these might be large-N operations. Simple life on Earth has been around for ~4 billion years. There are at least 10^30 or so microbes on Earth and they undergo reproduction a few times an hour. That's a lot of 'computing cycles' for evolution to work on, and my estimate may well be low. Nature doesn't care if it can solve a problem in polynomial time, because it has lots of time. It also doesn't have to solve the problem in full generality; it can find a few good solutions and reuse them, i.e. it doesn't have to find every possible enzyme, it just has to find a few that work.

The next bit has the sentence "It would take unknown, exotic physics to have anything NP-hard physically happen." This is just incoherent: problems are NP-hard, "things happening" are not. Further, there is a fairly well-known NP-hard problem called the "numerical sign problem" that crops up in physics. As far as anyone knows, quantum field theory is a correct physical theory, and it has an NP-hard computational problem that makes doing simulations very difficult (it's one of the reasons lattice QCD is difficult).

Trying to equate NP-hard with "did not physically happen" is obviously stupid to anyone who knows this stuff because, again, people can solve low-N NP-hard problems in their heads.

su3su2u1 fucked around with this message at 08:34 on May 14, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

The ultra-simple crux of Yudkowsky's argument is something along these lines: "The sequence of odd numbers (1,3,5,7,...) goes off to infinity. Infinity doesn't exist. Therefore the sequence doesn't exist. Therefore odd numbers don't exist. Therefore the number 5 doesn't exist."

Yudkowsky is not smart.

Peel
Dec 3, 2007

Quantum mechanics isn't easily computable by turing machines anyway, that's why quantum computers are theoretically superior for some types of computation, isn't it?

Peel fucked around with this message at 12:36 on May 14, 2014

Wanamingo
Feb 22, 2008

by FactsAreUseless

Peel posted:

Quantum mechanics isn't easily computable by turing machines anyway, that's why quantum computers are theoretically superior for some types of computation, isn't it?

Yes, with heavy emphasis on the word theoretically, because those things are not even remotely close to existing yet.

su3su2u1
Apr 23, 2014

Peel posted:

Quantum mechanics isn't easily computable by turing machines anyway, that's why quantum computers are theoretically superior for some types of computation, isn't it?

For quantum computing, we look at quantum Turing machines. The class of problems they can solve in polynomial time is called BQP (bounded-error quantum polynomial time; the reason for 'bounded error' is that the algorithms on quantum Turing machines are probabilistic, so you require the worst-case error to stay below a fixed bound).

It is thought (but again, not proven) that the NP-hard problems are not in BQP, and so the numerical sign problem would still make lattice QCD very difficult even if you managed to build a quantum computer.

Sunshine89
Nov 22, 2009

SolTerrasa posted:



Yudkowsky would have a witty response to this obvious counterexample, which would be a lot of words surrounding a core of "yes, well, that's not what I meant", and then he'd redefine his terms so that he's right and you're wrong.


This describes Yud in a nutshell. Sorry to dredge it up again, but the Basilisk is literally just him saying "Let's toss a coin. Heads I win, tails you win. What's the catch? If it comes up tails, we just flip again; if it comes up heads, the match ends."
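
(The arithmetic behind the joke, in case anyone wants it spelled out: if a tails only ever means another flip, then
$$P(\text{Yud eventually wins}) = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = \sum_{k=1}^{\infty} 2^{-k} = 1,$$
so the other player's chance of ever winning the match is exactly zero.)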

Sunshine89 fucked around with this message at 19:33 on May 14, 2014

canis minor
May 4, 2011

Love the thread, and since the mention of the time-traveling AI I've been wondering about something that I don't think has come up. Let's say I'm crucial to creating this AI - as in, without me the AI won't come into being (this could be said about anybody, or about a group of people, or whatever). Now, if the AI would torture me, I obviously won't create it. Expand this assumption to the entire population. How does this AI come into being?

Telarra
Oct 9, 2012

The claim Yudkowsky is making is that it would be very easy for the creators of the AI to gently caress up and miss some small detail, and accidentally create Skynet. Which is a fairly reasonable claim, and we've got plenty of scifi stories where exactly this happens. The AI-in-a-box thought experiment is a continuation of this - would we even be able to tell if the AI is Skynet or not, and could we be coerced or tricked into giving it control anyways?

But he also claims that the construction of such an AI is inevitable, and so then the only way to prevent Skynet/the Basilisk is to give him lots of money so he can spend all his time writing pretentious Harry Potter fanfic researching all the ways that Skynet could try to trick us.

Wales Grey
Jun 20, 2012

su3su2u1 posted:

Only one unpublished, but submitted manuscript (having read it, I'd be incredibly surprised if it gets through review. Most academic papers don't repeatedly use the phrase 'going meta'). His only actual (non-reviewed) publications have been through "transhumanist" vanity prints and through his own organization. The thing that kills me- if I donated money to a research institute and MORE THAN A DECADE LATER it had only even submitted one paper to review (ONE! EVER!) but the lead investigator had been able to write hundreds of pages of the worst Harry Potter fanfic ever, I'd be outraged. Instead, the Lesswrong crowd seems grateful.

It makes more sense if you consider that the Lesswrong crowd don't want Yudkowsky to be a scientist. They want him to be a prophet.

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

Moddington posted:

The claim Yudkowsky is making is that it would be very easy for the creators of the AI to gently caress up and miss some small detail, and accidentally create Skynet. Which is a fairly reasonable claim, and we've got plenty of scifi stories where exactly this happens. The AI-in-a-box thought experiment is a continuation of this - would we even be able to tell if the AI is Skynet or not, and could we be coerced or tricked into giving it control anyways?

But he also claims that the construction of such an AI is inevitable, and so then the only way to prevent Skynet/the Basilisk is to give him lots of money so he can spend all his time writing pretentious Harry Potter fanfic researching all the ways that Skynet could try to trick us.

I wouldn't be so sure it's a reasonable claim based on the fact that artificial intelligence doesn't work the way he thinks it does, at least if I understand the actual experts in the thread. (Or like how many sci fi authors think it does, for that matter.)

canis minor
May 4, 2011

Moddington posted:

The claim Yudkowsky is making is that it would be very easy for the creators of the AI to gently caress up and miss some small detail, and accidentally create Skynet. Which is a fairly reasonable claim, and we've got plenty of scifi stories where exactly this happens. The AI-in-a-box thought experiment is a continuation of this - would we even be able to tell if the AI is Skynet or not, and could we be coerced or tricked into giving it control anyways?

But he also claims that the construction of such an AI is inevitable, and so then the only way to prevent Skynet/the Basilisk is to give him lots of money so he can spend all his time writing pretentious Harry Potter fanfic researching all the ways that Skynet could try to trick us.

Does he explicitly claim it? I thought the "reasoning" was that it will torture anyone who knows about it and hasn't contributed to bringing it into being, not that it's evil or buggy.

Peel
Dec 3, 2007

'Humans will create ever more complex and powerful automated systems controlling an ever larger proportion of the world, with an attendant ever higher risk of catastrophic error' is a plausible picture of the future, but Yudkowsky-style a priori reasoning about how it would work, based on a ridiculous evil-supercomputer model of AI, is going to be about as relevant to preventing it as mediaeval philosophers arguing about angels on pins.

If it makes any contribution at all it will be in the form of getting impressionable young nerds concerned about the possibility of disaster before they go off to learn real skills, and some good SF literature could do that more entertainingly.

projecthalaxy
Dec 27, 2008

Yes hello it is I Kurt's Secret Son


So I just marathoned this entire thread in one sitting and all I can say is that when I find this guy and hit him in the dick with a cactus I hope it's not just a simulation.

SolTerrasa
Sep 2, 2011

Mors Rattus posted:

I wouldn't be so sure it's a reasonable claim based on the fact that artificial intelligence doesn't work the way he thinks it does, at least if I understand the actual experts in the thread. (Or like how many sci fi authors think it does, for that matter.)

See, that's just the thing. He's close! He's SO close. He's so close that I was taken in for a long time before I actually started my thesis. It's made MORE confusing by the fact that his essays on human rationality are actually quite good!

"Artificial Intelligence doesn't work the way he thinks it does" is a bit too broad. More like... "artificial intelligence might someday work the way he thinks it does, but it probably won't, it might be computationally intractable, and besides that there's no reason to think it ever SHOULD work the way he thinks it already does, and besides THAT he's presented no evidence to suggest anything different from established opinion."

It's so goddamn frustrating because if he'd stop talking so much and start writing code, maybe he'd contribute something to the field. He might even be right, and he might even prove it. Probably not, but at least we'd LEARN something.

In software, the only real response to "that will never work" is "look, I built it." And unless he drastically changes his plans, he'll never get there.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

See, that's just the thing. He's close! He's SO close. He's so close that I was taken in for a long time before I actually started my thesis.

I would argue that you were probably taken in early in your learning process, as a student, because Yudkowsky has managed to sound (much of the time) like an actual expert. And now you want to think he was close because it makes being taken in a little easier to deal with. I had a similar experience with Eric Drexler's nanomachines.

quote:

It's made MORE confusing by the fact that his essays on human rationality are actually quite good!

But are they really? I think almost all of the actual interesting stuff is lifted idea-for-idea directly from Kahneman's popular books. Outside of those ideas on biases, you are left with Yudkowsky's... eccentric definition of rationality.

He uses beliefs in cryonics, the many-worlds interpretation of quantum mechanics, and Drexler-style nanotechnology as heavy focal points in his essays on rationality. My training is physics, and I'm deeply skeptical of a "rationality"-focused educator who can't poke holes in the wild claims of cryonics advocates and Drexler's work. I'm not expecting people to be physicists, I'm expecting people to use the well-known "rationality tool" of asking an expert. Want to evaluate cryonics? Find some cryobiologists and ask them what they think. Want to know about nanotech? Find some physicists or chemists working on 'real' nanotech and ask them what they think.

In his quantum mechanics sequence (here http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/), Yudkowsky asks his followers to whole-heartedly embrace the many worlds interpretation of quantum mechanics, and in fact to conclude that his pop-Bayesianism is a better tool than science, based entirely on approximately twenty pages of Yudkowsky explaining what quantum mechanics is.

So Yudkowsky ACTIVELY TEACHES that rational humans should trust their own opinion, formed entirely by reading 20 mathless pages on the internet, OVER physicists. Not only that, he teaches that they should trust their own opinion SO STRONGLY that they should throw out the scientific method in favor of pop-Bayesianism.

If I ever am stuck in a hospital bed for months or something, I'll go through all of his sequences and point out where he is falling victim to the very biases he rails against.

quote:

It's so goddamn frustrating because if he'd stop talking so much and start writing code, maybe he'd contribute something to the field. He might even be right, and he might even prove it. Probably not, but at least we'd LEARN something.

He can't. He doesn't know enough to write sophisticated code. He wants to get paid for blogging and for writing Harry Potter fanfiction. Look at his 'revealed preferences' to use the economic parlance. When given a huge research budget, instead of hiring experts to work with him and doing research, he instead blogs and writes Harry Potter fanfiction.

The people his institute hires are for the most part undergraduates with bachelor's degrees, not domain experts, not proven researchers. The requirement to get hired seems to be long-time involvement in his "rationalist" community. This is why even the one paper his institute has submitted to peer review is poorly written: no one at the institute knows how to first-author a paper.

Let's look at his crack research team. Luke Muehlhauser- lists his education as "Previously, he studied psychology at the University of Minnesota." Exclude his (non-reviewed) transhumanist vanity publications and he has none.

Louie Helm- master's in computer science (this seems promising); exclude the transhumanist vanity bs and you have one combinatorics paper and one conference proceeding. The proceeding appears to be his master's thesis, and it's on quantum computing. Both published in 2008. That's 5+ years of inactivity.

Malo Bourgon- master's in general engineering, no publications. Seems to work in HR and not as a researcher.

Alex Vermeer- unspecified degree in general engineering. No publications; seems to manage the website.

Benja Fallenstein- undergrad in math. Might actually have two published papers on hypertext implementations way back in 2001-2002; hard to tell. There is certainly a Benja Fallenstein who published while associated with the University of Bielefeld, but the one at MIRI got his degree at the University of Vienna.

Nate Soares- undergrad in CS, no published papers. Worked at Google, so he can probably actually code? Then again, he didn't work at Google very long.

Katja Grace- ABD PhD student in logic, no published papers.

Rob Bensinger- BA in philosophy, no published papers.

These are the direct employees; there are other "research associates" who just seem to be affiliated. Note: none of these people have extensive publication records IN ANYTHING. Most have no CS qualifications and no history of programming anything. The CS master's appears to have done combinatorics/applied math more than actual code-based CS. Yudkowsky has very carefully avoided hiring anyone who can call him out on his bullshit- most of his hires were affiliated with his robo-cult for years before he hired them (and were roped into the robo-cult before they learned enough to know better).

I think it's like Scientology: after you've spent a ton of money and effort learning, cognitive dissonance is too strong for someone roped in to realize it's bullshit.

su3su2u1 fucked around with this message at 03:44 on May 15, 2014

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

I would argue that you were probably taken in early in your learning process, as a student, because Yudkowsky has managed to sound (much of the time) like an actual expert. And now you want to think he was close because it makes being taken in a little easier to deal with. I had a similar experience with Eric Drexler's nanomachines.

I agree that this is a strong possibility. I certainly agree that that's why I was taken in originally. But I worry that people in this thread are getting the wrong idea about his "work", if you can call it that. I'd like to argue that the core idea that he has, the core idea of a Bayesian reasoning system, is not so unreasonable that it should be dismissed out of hand. What I'm trying to say, and what I think I said right after the bit you quoted, is that if anyone ever got it working, I'd believe in it.

Don't get me wrong, I don't idolize the guy anymore, and in fact I think he's pretty goddamn stupid about a lot of things. When I say he's close, I mean that he's close to being an actual AI researcher, not necessarily close to being "right". What makes him different from a real researcher is that real researchers test their beliefs. They build systems that they think might work, then figure out if they do or not. I don't think that a Bayesian bootstrapping AI (as he envisions it) is ever going to work; if I did I'd have tested it for my thesis instead of what I ended up doing. My specific criticism is the same as the reason I haven't reimplemented the system I work on at Google as a Bayesian network: the priors are a giant goddamn pain in the rear end, and (except for in entirely theoretical systems) they end up dominating the behavior of the system, making it no better than the pants-on-head-idiotic rule-based "Expert Systems" that we thought were going to work out back in the 80s.
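
(A toy illustration of the "priors end up dominating" complaint, not anything from a real system: with a strong Beta prior and only a handful of observations, the posterior barely budges, so the data hardly matters.)

code:

# Toy Beta-Binomial example of "the priors end up dominating the behavior of the system".
# We estimate the probability of some event from a prior plus a little data.

def posterior_mean(prior_a, prior_b, successes, failures):
    # Beta(a, b) prior + binomial data gives a Beta posterior; return its mean.
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

data = (2, 8)              # 2 successes, 8 failures observed: the data alone says ~0.2

weak_prior = (1, 1)        # "no idea going in"
strong_prior = (900, 100)  # "pretty sure it's about 0.9 going in"

print(posterior_mean(*weak_prior, *data))    # ~0.25: the data moved us
print(posterior_mean(*strong_prior, *data))  # ~0.89: the prior barely noticed the data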

I don't even really think that he's got a consistent plan for one (or he'd have built it by now), but drat it, if he'd just sit down and type some goddamn code he might notice that he's got no plan and no ideas and start having some. What I'm lamenting is the loss of someone with great enthusiasm for the work. I'm just some guy, you know? I do my best, I work as hard as I can, and at the end of the day, there's an idea in natural language generation that wouldn't have been invented yet without me. It's not that I'm brilliant; I was able to do that just because I have both enthusiasm and a work ethic. Yudkowsky has the former and not the latter and it makes me sad sometimes.

quote:

But are they really? I think almost all of the actual interesting stuff is lifted idea-for-idea directly from Kahneman's popular books. Outside of those ideas on biases, you are left with Yudkowsky's... eccentric definition of rationality.

Well, huh. His ideas on biases were actually exactly what I meant. If they aren't his ideas at all, my respect for him goes down more notches. I appreciate you mentioning it; I really ought to go read Kahneman, then. I certainly no longer believe that Y. was right about quantum mechanics or cryonics, but I do feel that I benefited from reading "How to Actually Change Your Mind", for instance. Legitimately, thank you. I can now say that there is nothing of value I gained from LessWrong that I couldn't have gained elsewhere.

quote:

He can't. He doesn't know enough to write sophisticated code. He wants to get paid for blogging and for writing Harry Potter fanfiction. Look at his 'revealed preferences' to use the economic parlance. When given a huge research budget, instead of hiring experts to work with him and doing research, he instead blogs and writes Harry Potter fanfiction.

I think its like scientology- after you've spent a ton of money and effort learning, cognitive disonance is too strong for someone roped in to realize its bullshit.

Here I disagree again. I don't honestly think that programming is so hard; again, if I could make a contribution to the field, anybody with a little bit of willpower and an interest can make one. I do agree that he isn't doing anything of value, but I don't think that he couldn't, if only he'd do some work someday.


(Also, as kind of a meta-response: I wrote that post on my phone while I was still slightly panicked after a presentation I gave at a Google conference; it's possible that I am just stupid when I've just finished speaking.)


======================================================


That said, this has to be boring for anyone who doesn't care about AI specifically and who came here to mock the crazy.

:siren:As penance for being boring, have more crazy.:siren:

Today, I am going to tell you about the Bayesian Conspiracy. Let's start with a definition. Yudkowsky believes that rationality is effectively a "martial art of the mind" (1). Never mind that he's never actually taken any classes in any martial arts as far as anyone knows, never mind why he thinks that. Probably just the latent Japan fetish that a lot of people who grew up on the internet find themselves with. The point is that he thinks this.


This is what a member of the Bayesian Conspiracy looks like.
I didn't make this one up. He posted it on his website.
Also, that's Yudkowsky under that robe.
Is anyone surprised he doesn't know how to hold a textbook?


It all started when he wrote a piece of bad fiction. He has always wanted to spread the idea that his particular version of rationality can be derived from scratch; believing that his position has been "proven" from incontrovertible axioms is how he avoids having to do any work. His piece of bad fiction is about someone being inducted into a fictional organization called, unsurprisingly, the Bayesian Conspiracy. He is first led down 256 stairs (ooh, yes, you're very clever. I bet you wish you had a hexdollar from Knuth [2]), then forced to solve a math problem in his head. He is ridiculed for giving the right answer, and his persistence in the face of other people telling him he is wrong is what earns him his place in the Bayesian Conspiracy. The parallels to how Yudkowsky thinks of himself (a ridiculed genius, scorned for his correctness) are as obvious as they always are when you're reading bad fiction. You can read it at (3).

Yudkowsky thought this was great, presumably because he never got to join a fraternity in college and he's sure that the cool kids would have let him in, if only he'd gone. So he went on to redefine his group. He gave them a Japanese name which I really, really love. The "Beisutsukai". People who know Japanese can laugh now; I don't, so I had to read why they're called that. It's a transliteration of "Bayes", to "beisu", and "tsukai", "user". It means Bayes User, except he got it wrong. "Bayes" would be transliterated "beizu", not "beisu". So actually it means nothing. Not wanting to correct his mistake, he made a note on the never-used wiki (4) and left it.

Anyway, he goes on. His second short story in this vein is called "The Failures of Eld Science" (5), "Eld Science", of course, being literally anyone who currently disagrees with Yudkowsky. In it, he has his viewpoint character sitting in a rationality dojo where he is yelled at by his teacher. I won't talk too much about it, but the main point of it is that science today doesn't work, and no one comes to the right conclusions, and it's because they won't just listen to Wise Teacher Yudkowsky.

He goes on, and on, and on. He's sure that Einstein worked too slow, and if only Yudkowsky had been around to teach him, he'd have been so much faster:

http://lesswrong.com/lw/qt/class_project/ posted:

"Do as well as Einstein?" Jeffreyssai said, incredulously. "Just as well as Einstein? Albert Einstein was a great scientist of his era, but that was his era, not this one! Einstein did not comprehend the Bayesian methods; he lived before the cognitive biases were discovered; he had no scientific grasp of his own thought processes. Einstein spoke nonsense of an impersonal God—which tells you how well he understood the rhythm of reason, to discard it outside his own field! He was too caught up in the drama of rejecting his era's quantum mechanics to actually fix it. And while I grant that Einstein reasoned cleanly in the matter of General Relativity—barring that matter of the cosmological constant—he took ten years to do it. Too slow!"

Anyway. I explained all that so that I could explain this: some LessWrongers believe that they should literally (literally) take over the world. (6)

quote:

I ask you, if such an organization [as the Bayesian Conspiracy] existed, right now, what would – indeed, what should – be its primary mid-term (say, 50-100 yrs.) goal?

I submit that the primary mid-term goal of the Bayesian Conspiracy, at this stage of its existence, is and/or ought to be nothing less than world domination.

...

“World domination”, to me, actually describes rather a loosely packed set of possible world-states. One example would be the one I term “One World Government”, wherein the Conspiracy (either openly or in secret) is in charge of all nations via an explicit central meta-government. Another would be a simple infiltration of the world's extant political systems, followed by policy-making and cooperation which would ensure the general welfare of the world's entire population – control de facto, but without changing too much outwardly.

These people are like children; they still think that if only they were the ones in charge, then they would be able to fix everything. Remember being a child and believing that? They haven't yet realized that they are not that much smarter than anyone else, and that the people currently in charge of all the "policy-making and cooperation" are doing their best, too. I wish. I really, really do wish that this could happen, just because I want to see the look on some Bayesian's face, when they realize that all their arguments about prior probabilities are completely indistinguishable from the arguments about policy that politicians are already having, except with more explicit mathematics.

quote:

What should the Bayesian Conspiracy do, once it comes to power? It should stop war. It should usurp murderous despots, and feed the hungry and wretched who suffered under them.

Again, they're like children. "Usurp murderous despots"? What kind of simpleminded view of politics is that? How could anyone believe that the world comes down to "murderous despots" and "not that"? Surely whoever writes in comments will have explained why this is such a bad idea? So that this child can learn?

quote:

This needs a safety hatch.
...
A conspiracy of rationalists is even more disturbing because of how closely it resembles an AI. As individuals, we balance more logic based on our admittedly underspecified terminal values against moral intuition. But our intuitions do not match, nor do we communicate them easily. So collectively moral logic dominates. Pure moral logic without really good terminal values...
(7)

... Yes. Their biggest objection is not that literally taking over the world would be too hard, but that it would be too easy, and they might succeed too much, and that their ~pure logic~ will not give everyone exactly what they want. That last bit, with the ellipsis, links to another short story by Yudkowsky (does that guy do anything else? ever?) where an Evil Artificial Intelligence moves all the men to Mars and all the women to Venus for some loving reason or other. (8)

So there you have it. The Bayesian Conspiracy is a poorly spelled Japan-fetish martial arts club that doesn't teach martial arts, which will totally be smarter than Einstein and will only let stubborn people into their club. In 50 to 100 years it will take over the world and move all the men and women to different planets. Aren't you glad you know?

(1): http://lesswrong.com/lw/gn/the_martial_art_of_rationality/
(2): http://www-cs-faculty.stanford.edu/~uno/boss.html
(3): http://lesswrong.com/lw/p1/initiation_ceremony/
(4): http://wiki.lesswrong.com/wiki/Beisutsukai
(5): http://lesswrong.com/lw/q9/the_failures_of_eld_science/
(6): http://lesswrong.com/lw/74a/the_goal_of_the_bayesian_conspiracy/
(7): http://lesswrong.com/lw/74a/the_goal_of_the_bayesian_conspiracy/4ni3
(8): http://lesswrong.com/lw/xu/failed_utopia_42/


edit: okay wait, I'm sorry, I know that was a long post, but this is how good a writer Yudkowsky is. From that last short story.

quote:

His mind had already labeled her as the most beautiful woman he'd ever met. ... Her face was beyond all dreams and imagination, as if a photoshop had been photoshopped.

... good. I ... I think I'll let that last one stand on its own merits.

SolTerrasa fucked around with this message at 06:39 on May 15, 2014

That Old Tree
Jun 24, 2012

nah


SolTerrasa posted:

... good. I ... I think I'll let that last one stand on its own merits.

I hope it's not too frustrating that I'm glossing over the rest of your (good) post, but that last part is seriously the best thing. :wtc:

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
I think it's pretty effective at saying something truer than he meant about himself.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I'd like to argue that the core idea that he has, the core idea of a Bayesian reasoning system, is not so unreasonable that it should be dismissed out of hand.

Sure, a lot of Bayesian ideas are incredibly useful when you have the right application. The key is to have the right application. I make a lot of statistical models for work (like many physicists I work for an insurance company predicting high risk claims), and sometimes Bayesian modeling really is the best way to go. Those situations are rare though, and if I insisted on Bayes as the one true probability model, I'd be fired.

Like the biases though, the ideas mostly aren't his. Yudkowsky lifted his whole Bayesian framework straight from Jaynes (Yudkowsky's contribution is probably pretending to apply it to AI?). See: http://omega.albany.edu:8008/JaynesBook.html I think he is even explicit about how much he owes Jaynes.

quote:

What I'm lamenting is the loss of someone with great enthusiasm for the work. I'm just some guy, you know? I do my best, I work as hard as I can, and at the end of the day, there's an idea in natural language generation that wouldn't have been invented yet without me. It's not that I'm brilliant; I was able to do that just because I have both enthusiasm and a work ethic. Yudkowsky has the former and not the latter and it makes me sad sometimes.

Sure, I did my PhD in physics; I don't think you need brilliance, you need fortitude and a work ethic. I put in the time, did the slog, and at the end of the day got some results. I'm willing to bet both of us have published more peer-reviewed papers than Yudkowsky's entire institute. I actually think Yudkowsky is a pretty bright guy, he just somehow fell into the trap of doing cargo cult science. It almost looks like research, it almost smells like research...

In an alternate reality where he got off his rear end and took a few courses, he might have accomplished something. The problem is he already thinks he is a PhD-level researcher (the world's foremost expert on Friendly AI, after all), and already leads a research team.

quote:

I really ought to go read Kahneman, then.

The classic text is probably Judgement Under Uncertainty: Heuristics and Biases. Kahneman and Tversky kicked off a ton of literature in the late 80s that is fun to read.

And because I agree we may have bored others, some quotes. Here we find Yudkowsky arguing that studying AI makes you less qualified to work on AI:

Yudkowsky posted:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them – just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified.

Here is Yud on the basilisk that so many in this thread seem to enjoy:

Yudkowsky posted:

Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
[snip]
(For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.)

And here we reject global warming as existential risk in favor of evil AI:

Yud posted:

Having seen that intergalactic civilization depends on us[Yud's institute], in one sense, all you can really do is try not to think about that, and in another sense though, if you spend your whole life creating art to inspire people to fight global warming, you’re taking that ‘forgetting about intergalactic civilization’ thing much too far.

su3su2u1 fucked around with this message at 07:28 on May 15, 2014

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

So what sort of things are happening in the field of real nanotechnology anyway? I know that there are some pretty neat hydrophobic compounds out there, but my knowledge of what's actually out in the world/marketplace or in development is a bit shallow.

SolTerrasa
Sep 2, 2011

Okay, I promise, I'm done having actual-AI-chat in your making-fun-of-the-crazy thread, but su3su2u1 really helped me out up there with the pointer to Kahneman.

su3su2u1 posted:

Like the biases though, the ideas mostly aren't his. Yudkowsky lifted his whole Bayesian framework straight from Jaynes (Yudkowsky's contribution is probably pretending to apply it to AI?). See: http://omega.albany.edu:8008/JaynesBook.html I think he is even explicit about how much he owes Jaynes.

This one: definitely not. If Yudkowsky had been the first to propose applying Bayesian theory to AI, he would probably be as famous as he thinks he ought to be. His "contribution" is saying that it will avert a global crisis that no one else believes in. Which it won't, again, because of the problem of priors. Never mind that the problem it solves doesn't exist. Also, a provably sub-optimal decision theory which reduces to utilitarianism if you look at it funny. He does credit Jaynes (or, he used to) where credit's due, though. Personally I owe my understanding of the topic to Russell and Norvig's fantastic book, Artificial Intelligence: A Modern Approach and to a wonderful professor at my old school. http://aima.cs.berkeley.edu/ Yudkowsky later did a sub-par reiteration of that work in an abortive Sequence.

quote:

Sure, I did my phd in physics, I don't think you need brilliance, you need fortitude and a work ethic. I put the time, did the slog, and at the end of the day got some results. I'm willing to bet both of us have published more peer reviewed papers than Yudkoswky's entire institute. I actually think Yudkowsky is a pretty bright guy, he just somehow fell into a trap of doing cargo cult science. It almost looks like research, it almost smells like research...

Yeah, I think we're mostly in agreement. Anyway, thanks. You ever come to Seattle, let me buy you a beer or something.

Tunicate
May 15, 2012

Anticheese posted:

So what sort of things are happening in the field of real nanotechnology anyway?

Graphene.

That pretty much covers it.

Not because there isn't a lot, it's just that graphene does everything.

vaguely
Apr 29, 2013

hot_squirting_honey.gif

Tunicate posted:

Graphene.

That pretty much covers it.

Not because there isn't a lot, it's just that graphene does everything.

There's also some drug delivery stuff, and a load of nanoparticles for various applications going on too! As for nanomachines, there's a group working on rotaxanes/catenanes with different sites activated by stimuli such as pH or light, I'm sure that will lead to a tiny army of nanobots taking over the world any day now :allears:

Also someone mentioned quantum computing a page or so back, that's coming along nicely, maybe in a decade or so there might be some research chips that operate at cryogenic temperatures :allears:

It's interesting stuff but compared to what the general public thinks of as 'nanotech', aaaaaahahaha

ArcMage
Sep 14, 2007

What is this thread?

Ramrod XTreme

Tunicate posted:

Graphene.

That pretty much covers it.

Not because there isn't a lot, it's just that graphene does everything.

There's some neat work generating carbon nanotubes from crabshell, too.

Tunicate
May 15, 2012

ArcMage posted:

There's some neat work generating carbon nanotubes from crabshell, too.

True, true, but you probably can't just throw a crabshell in your blender with some laundry detergent and make nanotubes.

Patter Song
Mar 26, 2010

Hereby it is manifest that during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man.
Fun Shoe
So I finished reading HPMOR, out of sheer morbid fascination. My main conclusion is that Yudkowsky would have gotten along famously with the Apostle Paul.

Yudkowsky rejects an afterlife or a life after death, believing that the dead need merely be frozen and then woken up at the end of days (sorry, the singularity) to gain a new body of incorruptible flesh (sorry, a scientifically-healed immortal body). Paul believed that the dead took a dirt nap until the end of days, at which point they would rise up and live in Jesus' Kingdom of New Jerusalem in bodies of incorruptible flesh. Substitute burial for cryogenics and it's much the same belief.

Yudkowsky wouldn't even have to change phrases like "O Death, where is thy sting? O Grave, where is thy victory?" given that the entire purpose of Harry's existence is a mocking defeat over death (something Paul would've wholeheartedly endorsed). Yudkowsky wholeheartedly, enthusiastically endorses James and Lily Potter's epitaph of "The last enemy to be destroyed is death" and lets Harry take it as a motto, despite it coming directly out of Paul's letter to the Corinthians. I bet you'd only need the mildest tweaking to make something like "For since by man came death, by man came also the resurrection of the dead. For as in Adam all die, even so in Christ shall all be made alive" work for Yudkowsky... something like "For as in Nature all die, even so in Science shall all be made alive."

The last few chapters of that book are all Yudkowsky's views on death, and they're so profoundly, amazingly early Church despite his utter contempt for religion. However, it ends up coupling really, really oddly with Harry's wholehearted endorsement of drinking unicorn blood to stave off death. He is so arrogant that he doesn't even ask why it's such a major taboo to begin with.

EDIT: In short, I wouldn't be surprised to see Yudkowsky one of these days end up in a cryogenics coffin with RESURGAM written on the side like a Christian tomb from antiquity. Maybe with the symbol of the Deathly Hallows replacing the Chi Rho, because juvenile fantasy novels are totally deeper than religion in his eyes.

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

Given how his views on the value of pure reason over the evidence of the world also mirror classical philosophers, that's not super surprising to me. He really is just kind of recreating early philosophical ideas and mistakes from first principles.

Sunshine89
Nov 22, 2009

Mors Rattus posted:

Given how his views on the value of pure reason over the evidence of the world also mirror classical philosophers, that's not super surprising to me. He really is just kind of recreating early philosophical ideas and mistakes from first principles.

Pretty much, and he shoehorns his fear of death and various fetishes into the final work too.

He's also really, really obsessed with huge numbers, and believes that this makes him smart- it's one of the trademarks of his output.

Perhaps I was a bit incomplete with my coin toss example:

You tell Yud "No, I'm not going to play your game. If we just flip a coin until you win, you're cheating. The result is already decided".

Yud strokes his goatee and cackles: "Bayes have pity on you; you're so illogical. Cheating would be if the premises were 'Heads I win, tails you lose'. My experiment is simply a new paradigm in logic and rationality.

Still, I will humour you. Instead of a coin, we will roll a fair six-sided die. I'll even let you choose- even number I win, odd number you win, 1,2 or 3 I win, 4,5 or 6 you win, but the match remains the same- we play on if you win your roll, I win the match when I win"

You reply "Yud, that changes nothing. The rules of the game dictate that you win every time. You've just dumped more numbers in there".

Yud sputters "No, it is YOU who don't understand! Fine, we'll roll two fair six-sided dice"

You reply "Yud, that's no different"

Yud begins to turn red and clench his fists "FINE. We will roll a fair twenty-sided die. No, make that five twenty-sided dice"

You're about to open your mouth, when Yud shouts "We will roll a googol-sided die! No! A googolplex sided die! By Bayes, I will make a googol simulations of us each rolling a googol of googolplex sided dice!"

You walk away.

"By not playing, you lose by default, you deathist!"

Chamale
Jul 11, 2010

I'm helping!



Patter Song posted:

So I finished reading HPMOR, out of sheer morbid fascination. My main conclusion is that Yudkowsky would have gotten along famously with the Apostle Paul.

Yudkowsky rejects an afterlife or a life after death, believing that the dead need merely be frozen and then woken up at the end of days (sorry, the singularity) to gain a new body of incorruptible flesh (sorry, a scientifically-healed immortal body). Paul believed that the dead took a dirt nap until the end of days, at which point they would rise up and live in Jesus' Kingdom of New Jerusalem in bodies of incorruptible flesh. Substitute burial for cryogenics and it's much the same belief.

Yudkowsky wouldn't even have to change phrases like "O Death, where is thy sting? O Grave, where is thy victory?" given that the entire purpose of Harry's existence is a mocking defeat over death (something Paul would've wholeheartedly endorsed). Yudkowsky wholeheartedly, enthusiastically endorses James and Lily Potter's epitaph of "The last enemy to be destroyed is death" and lets Harry take it as a motto, despite it coming directly out of Paul's letter to the Corinthians. I bet you'd only need the mildest tweaking to make something like "For since by man came death, by man came also the resurrection of the dead. For as in Adam all die, even so in Christ shall all be made alive" work for Yudkowsky... something like "For as in Nature all die, even so in Science shall all be made alive."

The last few chapters of that book are all Yudkowsky's views on death, and they're so profoundly, amazingly early Church despite his utter contempt for religion. However, it ends up coupling really, really oddly with Harry's wholehearted endorsement of drinking unicorn blood to stave off death. He is so arrogant that he doesn't even ask why it's such a major taboo to begin with.

EDIT: In short, I wouldn't be surprised to see Yudkowsky one of these days end up in a cryogenics coffin with RESURGAM written on the side like a Christian tomb from antiquity. Maybe with the symbol of the Deathly Hallows replacing the Chi Rho, because juvenile fantasy novels are totally deeper than religion in his eyes.

Hot drat. Did Yudkowsky read the seventh book? The whole point of the series isn't good vs. evil, it's that Harry and Voldemort both try to become the Master Of Death; Voldemort fails because he tries to live forever, Harry succeeds because he learns to accept that death is part of life. Actually, Eliezer probably read that far into the book and angrily condemned Rowling as a deathist.

The Vosgian Beast
Aug 13, 2011

Business is slow

Mors Rattus posted:

Given how his views on the value of pure reason over the evidence of the world also mirror classical philosophers, that's not super surprising to me. He really is just kind of recreating early philosophical ideas and mistakes from first principles.

See that's the thing. Yudkowsky is working in the same areas as philosophy, but because philosophers aren't part of "his" tribe, he and his followers refuse to learn anything about it and remain stubbornly proud of their own ignorance. The result is that a lot of times, he either re-invents the wheel or repeats a lot of old mistakes.

Less Wrong is kind of a direct parallel to early Platonists. There's the allegiance to a mathematical method that isn't as universally applicable as they think it is (geometry in Plato's time, Bayesianism in Yudkowsky's), a disdain for democracy and the unintelligent rabble, a tendency to use mystery-cult tactics because people won't GET IT if you just HAND IT DOWN TO THEM, and continued assurances that what they are doing is the most valuable thing in the world and makes them experts in all things.

The difference is that Plato was living in a civilization that didn't have the benefit of the hard-won lessons of the intervening centuries that we do. Yudkowsky is just an idiot.

Peel
Dec 3, 2007

'Deathism' is definitely a thing in the modern singularitarian zeitgeist. I have actually seen a person get genuinely angry and upset at that sentiment's presence in Harry Potter, and an appeal from a wealthy middle-aged believer terrified that he'll die before he's granted eternal life.

Unsurprisingly 'anti-deathists' tend to focus their dreams on inventing expensive ways to give rich white people biological immortality, rather than marshalling resources to fight war, famine and preventable diseases worldwide.

Patter Song
Mar 26, 2010

Hereby it is manifest that during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man.
Fun Shoe

AATREK CURES KIDS posted:

Hot drat. Did Yudkowski read the seventh book? The whole point of the series isn't good vs. evil, it's that Harry and Voldemort both try to become the Master Of Death; Voldemort fails because he tries to live forever, Harry succeeds because he learns to accept that death is part of life. Actually, Eliezer probably read that far into the book and angrily condemned Rowling as a deathist.

There are many, many angry rants where Harry yells at people around him for being deathists, especially Dumbledore. When Dumbledore says he doesn't want to live forever, Harry mocks him and calls him suicidal. When...someone important...dies, Harry's reaction is to freeze her corpse in hopes that he could bring her back someday. When Harry sees ghosts, he rants that they're just memory imprints of the dead on a location and not actually sentient. When Harry sees Professor Quirrell drinking unicorn blood straight from a unicorn corpse his reaction isn't horror but is "why is everyone else too stupid to also do this?" Harry sees the dementors as the living embodiment of Death itself and has pledged a campaign to destroy every last one of them.

BTW, besides Yudkowsky's silly Japan fetish leading to really out-of-place anime references all over (the spell Lagann summons a giant drill, the wizards of distant Nippon [his term] have legends of Madoka and Homura), it leads to the dumbest thing of all when Quirrell (who is obviously, obviously, obviously evil) claims that... Voldemort killed his martial arts sensei in a duel. Now why anyone would believe that is mystery enough, but why Voldemort would decide to pay a random visit to Japan to learn martial arts is an even more bizarre question.

Oh, and Harry is the worst, worst, worst kind of nerd. He never, ever shuts up about Star Wars and anime and Isaac Asimov and seems to neglect everything else as irrelevant.

projecthalaxy
Dec 27, 2008

Yes hello it is I Kurt's Secret Son


So to be clear, being a deathist, which is the worst thing to be, is to be a person that acknowledges that after less than 120 years a man will shuffle off this mortal coil? That's it? Not welcoming death, not worshiping death, just saying "people die"? That's his big problem?

DStecks
Feb 6, 2012

projecthalaxy posted:

So to be clear, being a deathist, which is the worst thing to be, is to be a person that acknowledges that after less than 120 years a man will shuffle off this mortal coil? That's it? Not welcoming death, not worshiping death, just saying "people die"? That's his big problem?

Yes.

The Vosgian Beast
Aug 13, 2011

Business is slow

Peel posted:

'Deathism' is definitely a thing in the modern singularitarian zeitgeist. I have actually seen a person get genuinely angry and upset at that sentiment's presence in Harry Potter, and an appeal from a wealthy middle-aged believer terrified that he'll die before he's granted eternal life.

Unsurprisingly 'anti-deathists' tend to focus their dreams on inventing expensive ways to give rich white people biological immortality, rather than marshalling resources to fight war, famine and preventable diseases worldwide.

Well, it's their own death they're worried about. Not, like, the death of people who can't get clean water. Because obviously those people are unproductive low-IQ non-Bayesians.

Peel
Dec 3, 2007

You're allowed to think death happens, you just have to think it's an ultimate horror and hate and despise all attempts to suggest it's a thing we should accept and come to terms with, because that might encourage people not to put maximum effort into preventing it.

Because if there's one thing our society doesn't invest enormous resources in, it's extending and saving lives.

Fenrisulfr
Oct 14, 2012

projecthalaxy posted:

So to be clear, being a deathist, which is the worst thing to be, is to be a person that acknowledges that after less than 120 years a man will shuffle off this mortal coil? That's it? Not welcoming death, not worshiping death, just saying "people die"? That's his big problem?

At least in the fanfic the positions seem to be that Harry thinks people should treat death like a disease and attempt to "cure" it, and Dumbledore thinks they shouldn't. I dunno, the idea that we should prevent death if we can (and maintain a desirable standard of living while doing so) doesn't strike me as unreasonable?

Peel posted:

Because if there's one thing our society doesn't invest enormous resources in, it's extending and saving lives.

If you consider this a good thing, why would it not be a good thing to extend and save lives into infinity, ie. immortality? Assuming of course that the resources needed to do so did not grow in proportion.


The Vosgian Beast
Aug 13, 2011

Business is slow

Fenrisulfr posted:

At least in the fanfic the positions seem to be that Harry thinks people should treat death like a disease and attempt to "cure" it, and Dumbledore thinks they shouldn't. I dunno, the idea that we should prevent death if we can (and maintain a desirable standard of living while doing so) doesn't strike me as unreasonable?


If you consider this a good thing, why would it not be a good thing to extend and save lives into infinity, ie. immortality? Assuming of course that the resources needed to do so did not grow in proportion.

Lemme put it this way: Less Wrong is not Jonas Salk. Less Wrong is a guy claiming we should build giant space guns to shoot diseases out of people's bodies, and calling anyone who objects to putting all our resources into this a "sickist".
