|
Darth Walrus posted:Seriously, I really want to see some Lesswrong posts on Aristotle. I mean, they have every reason to be about how he's a pre-Enlightenment guy who got a vast amount of stuff comically wrong, and yet his scientific method is an ideal fit for Yudkowsky, and HPMOR really does seem to be championing it in practice if not necessarily in theory. http://lesswrong.com/lw/ns/empty_labels/ EY pointing out in extended, pedantic fashion that syllogisms don't map well to real life. http://lesswrong.com/lw/te/three_fallacies_of_teleology/ A commentary on how intent is confused with result (accurate as far as I can tell). http://lesswrong.com/lw/m1/guardians_of_ayn_rand/ Part of a too-long sequence about cults saying that while Ayn Rand and Aristotle may have been good in their day, they fell into a trap of cultish behavior and time and sensibilities marched past them.
|
# ? Mar 24, 2015 01:32 |
|
|
Chapter 8: Positive Bias Part Two quote:
What on earth made Eliezarry think this was a good way to introduce yourself to someone else? There’s being dorky and awkward, and then there’s being obnoxious and annoying. quote:
Less than 72 hours, yet he already knows so much about wizarding society and politics, enough to discourse with Malfoy on an equal basis. What a prodigy! Such genius! :alllears: quote:
Nothing particularly offensive past Harry’s ridiculous self-introduction, and Hermione hasn’t prostrated herself before Harry’s brilliance. I’ll give the beginning of this chapter a passing mark.
|
# ? Mar 24, 2015 02:56 |
|
Those are, of course, all D&D potions.
|
# ? Mar 24, 2015 03:09 |
|
Saw this, decided to give the thread a read. What sort of state of mind do you have to be in to say half of these things, whether it's hpmor or just other stuff yud has written? like loving christ
|
# ? Mar 24, 2015 05:08 |
|
Chapter 8: Positive Bias Part Three quote:
Hermione is instantly my favourite character in this story. quote:
On a scale of 1-10 for pretentiousness, this must be at least an 8. And there are still more than 110 chapters of this story to go. Plenty of time for Eliezarry to top himself. quote:
That’s not a showcase of “intelligence” per se, that’s just Eliezarry getting super-special snowflake powers for no reason other than him being the author’s self-insert Harry Sue of the story.
|
# ? Mar 24, 2015 05:49 |
|
No, it's him being a smartass and letting her think he did something when the drink did it itself.
|
# ? Mar 24, 2015 05:56 |
|
no guys this is called negging my friend told me all about this no wait where are you going
|
# ? Mar 24, 2015 06:22 |
|
akulanization posted:Ah yes, the old "I meant to do that" defense, Harry fails at being High King of Rational Mountain because a) a lot of his science is lovely, half right, or surface level but is made into the gospel truth in the story b) because he isn't actually motivated by knowledge or exploration, he is motivated by power. You say he might have been intended to be like Artemis Fowl, but that is trivially untrue; Artemis Fowl is never held up as an example, and he certainly isn't meant to convince you that you should follow the Way of Fowl. Harry however is supposed to teach the audience how to be "rationalists" and really is never defeated or outdone in his area of "expertise" in the context of the story. Much of this is achieved by undermining the other characters or by choosing to have the world work the way that Harry guesses it will work. So, first of all it was Nessus, not me, who said Harry is like Artemis. I could remember some similarity so I rolled with it. However I haven't read any Artemis Fowl books in at least 10 years, so IDK. So if you want to argue about whether or not Harry is like Artemis Fowl, argue with Nessus, he clearly does. Second of all, to those of you saying that Harry Potter was intended to be a paragon of rationality, I literally just asked Eliezer: quote:re: Is harry supposed to be an uber rationalist? So clearly Harry isn't supposed to be "High King of Rational Mountain". SSNeoman posted:Ah, but you see he was! We're supposed to think the science is real. We're supposed to think he is uber-rationalistic. The author pretty much flat out admitted this point. If he fails to deliver, then it's a bad story. And it fails to deliver. See above. How was Harry's blackmail going to ruin McGonagall's life?
His blackmail consisted of "If you don't tell me the truth, I'll go ask questions elsewhere," which I think is a perfectly fine thing to say when the truth is about whether or not the Dark Lord who killed your parents, and tried to kill you, is still alive. Your last paragraph I agree with, except that behavior doesn't bother me. I think we were meant to like him, I like him, and many others do. If he was a real person I probably wouldn't like interacting with him though. akulanization posted:it's author so odious How so? Nessus posted:It seems like your goal is to get people to say "I am upset by the behavior of the character in the lovely fanfiction being roasted." Naw. I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating," which doesn't explain much; there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of it stems from people not understanding what he writes (due to little fault of his). Like the guy on the first page who, when speaking of the torture vs dust specks, says "3^^^3 is the same thing as 3^3^3^3" despite the fact that in the very post where Eliezer raised the discussion, he explained the notation before even getting to the torture & dust specks. I don't think this guy is at fault for this. He probably got this from someone else. I'm just using this as a really simple example of how the people who mock Eliezer's writing don't even understand basic things he's written and are mocking him from a position of ignorance. Similarly I don't think "friendly A.I" is some sort of crazy idea. It seems pretty reasonable. It is basically: "Are there solutions to problems where the solutions are so complex we can't understand everything about the solution? 
If yes, how do we build something that will give solutions to problems that won't provide solutions that will conflict with other things we care about?"
|
# ? Mar 24, 2015 06:39 |
|
Ohhhh, it's 3^^^3 dust specks, not 3^3^3^3? In that case I agree, the torture option is obviously the morally correct choice.
|
# ? Mar 24, 2015 07:32 |
|
More seriously, yes, friendly AI isn't some crazy idea. Science fiction has toyed with the idea of artificial minds overthrowing their creators for probably a century by now, to the point where everyone is aware of it. The problem is that Yud claims to have more than science fiction to contribute here, and he doesn't. He has unitless utilitarianism and a fetish for Bayes' theorem, and that's about it.
|
# ? Mar 24, 2015 07:52 |
Legacyspy posted:So, first of all it was raised by Nessus, not me, that said Harry is like Artemis. I could remember some similarity so I rolled with. However I haven't read any Artemis Fowl books in at least 10 years, so IDK. So if you want to argue whether or not Harry is like Artemis fowl, argue with Nessus, he clearly does.
|
|
# ? Mar 24, 2015 07:59 |
|
If it makes a difference to you, Legacyspy, I hate on Eliezer all the time for a lot of the same reasons as the rest, but this fic is better than I expected. Uh, it's not hate, but I think he's pretty pretentious, doesn't fact-check himself adequately, and tries to look important for achievements that are pretty insignificant. I haven't seen him do anything that I don't imagine I could have done, but I don't think I'm self-important enough to represent his achievements the way he does, so it annoys me when he gets money and popularity and fame. I think it's a stretch to say he's done nothing because he's written hundreds of thousands of words, but I'm not convinced the words are very substantial or novel. I think his attitude towards others' accomplishments makes him unlikely to succeed in a real sense, although it might get him some attention. He's a natural for shilling, which I don't mean as an insult. The last few times I've had to shill something it's left me feeling a little sour and uncomfortable, but he seems to shill his own work pretty effortlessly. Maybe it's practice. Oh yeah, his programming language, Flare, didn't impress me and I have longposts about that in the Less Wrong thread. It's not that bad for a first-time language designer (I'd be lying if I said I thought it was average or above average) but I exaggerated for humor. It's pretty nebulous, like the rest of what he does. That's left a more lasting impression on me than it really deserves (it was a long time ago too) because it's a big area of interest for me outside of mockthread stuff. It reminds me of stuff I was doing when I was 14. Fic has some really lovely moments (every Malfoy convo except the first one) and I think Harry is irritating, but it's funny when Yud doesn't hijack other characters to make Harry look smart. 
He's pretty good at providing examples of cleverness that aren't totally contrived, although Harry's plans, in all their intricate details, have such a strong track record of success so far that they start to feel contrived -- other characters are reacting to Harry or following Harry's script more than Harry appears to be reacting to other characters, which makes everything feel a little bit authorial. There's something really stilted about the prose but it's ignorable and I blame inexperience on Yud's part. I give the fic as a whole three out of five stars so far with some four-star moments. Mind that I'd be a lot less favorable if I thought he took this as seriously as a lot of the people following this "Let's Read" seem to think he does. Oh yeah, it's hypothetical, but if he had an account I'd be interested in talking. I don't know if he would go for that though.
|
# ? Mar 24, 2015 08:04 |
|
Fried Chicken posted:Well I was expecting something weird and arrogant based off everything else, but instead that's really sad. I dunno, I can recognize the pain there, but at a certain point you need to rise above and not be a complete poo poo like he is as an adult. I think it definitely counts as weird. Sad yes. But definitely very weird. I'm sure some actual developmental psych person could help make sense of it all, but it is a fascinating glimpse into the mind that wrote this. http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html#timeline_the RE raising smart children and reinforcement: Northwestern's Center for Talent Development, Hopkins's Center for Talented Youth, and other gifted and talented programs run testing programs to administer standardized tests to children, typically around middle school, when students can be sent off for summer programs. Ostensibly this is to identify smart children and offer them opportunities to attend advanced classes amongst their peers. Less charitable wags would note that the programs are primarily funded through tuition from the classes they offer all students who take their screening tests, and that enrollment has increased year-over-year for decades. IDK. Gifted education is kinda a mess. Regardless, the programs exist to tell people their kids are smart and should be around other smart kids. Let's see what taking the test (and some practice tests!) did to Yud: quote:I obtained a couple of SAT preparation books - one targeted specifically on Math, and one targeted on the whole SAT (Math and Verbal). I took a few practice tests from the Math book, and with each additional test, my scores went down. I got a 570, then a 530, then a 460 (9). "Huh?" I said to myself. I think, parentally, this is where you try to talk about standardized tests' poor behavior at the edges of the bell curve and test repeatability/teachability and try to keep a kid grounded.... IDK. 
Gifted education is really a mess and Yud's childhood case is sadly not uncommon. Anecdotally, I took the SAT around the same time as Yud in a similar gifted screening/summer school salesmanship exercise. I'm pretty sure I scored about the same as Yud did. The top ~30ish scorers from my state got certificates and they all turned out more or less normal by the time high school graduation rolled around (small state, most ended up in the same 2 high schools). But for the grace of God... i81icu812 fucked around with this message at 06:45 on Mar 25, 2015 |
# ? Mar 24, 2015 08:09 |
|
I will defend my position but I need JWKS to get to the later parts of this dreck before I can. That said, Legacyspy posted:Naw. I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating," which doesn't explain much; there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of it stems from people not understanding what he writes (due to little fault of his). Like the guy on the first page who, when speaking of the torture vs dust specks, says "3^^^3 is the same thing as 3^3^3^3" despite the fact that in the very post where Eliezer raised the discussion, he explained the notation before even getting to the torture & dust specks. Right, my mistake. It's actually 3^^7625597484987, which is some godawful number you get if you take 3 and then raise it to the 3rd power 7,625,597,484,987 times, which is still the same thing as "a meaninglessly big number," which means my point still stands. There is no reason to involve numbers in this problem, it's a problem concerning philosophical morality, unless you're a smartass who wants to flash his cock. Yud could have asked "Should one person get tortured for a decade or should every other person on Earth get a grain of sand in their eye?" But that wasn't nerdy enough so they do this poo poo. But you know what, fine, whatever. Lemme explain why Yud's conclusion is nuts. So there is a branch of philosophy called utilitarianism. The basic premise is that it wants to achieve the maximum amount of happiness for the maximum amount of people (in broad strokes). In Yud's PHILO101 he blasts the problem with a blunt solution. Dude is tortured for 50 years, or someone is tortured (ie: the dust speck) for like 1 second? Well obviously we'd pick option 2. 
But when you SHUT UP AND MULTIPLY our one second by that "meaninglessly big number", suddenly having one dude tortured for 50 years doesn't sound that bad, right? BEEP BOOP PROBLEM SOLVED GET hosed CENTURIES OF PHILOSOPHY Well of course not. There are other factors involved when you realize that we are dealing with a being who has life, willpower, the ability to feel pain and all that other good poo poo. We are taking 50 years out of a person's life and replacing it with pain and misery, instead of inconveniencing a lot of people for an insignificant amount of time. This logic, by the way, is still equating torture with the pain caused by dust specks. I am still playing by Yud's rules despite the fact that the two are obviously not equal. You are crushing a person's dreams and ambitions just for the sake of not bothering a whole lot of people with something they won't remember. What about the life this poor person is missing out on by going through this torture? All those people won't even remember the speck by the end of the day, but that person will carry his PTSD for the rest of his (no doubt shortened) life, if he's not loving catatonic by the end of the first year. I am explaining this in-depth because it's a problem that requires you to do so. It is something LW themselves avoid doing, and I hope you are not falling into the same trap. Yud, by the way, totally dismisses people who point this out, because he misapplies his own concept of Scope Insensitivity to the solution. He uses an absolutely absurd and unrelated example to prove his point, which is fantastically different from the problem at hand. Legacyspy posted:I'm just using this as a really simple example of how the people who mock Eliezer's writing don't even understand basic things he's written and are mocking him from a position of ignorance. NO. I understand what Yud and co write. What I don't, I ask others until I do. Some of it is profound, but most of it is them re-inventing philosophical wheels. 
And sometimes they decide that these wheels should be squares instead. Do you remember Roko's Basilisk? That was a user taking Yud's ridiculous loving AI anecdote to its absurd conclusion. That is the reason (I suspect) why Yud hates it so, because it shows what a house of cards his whole philosophy is. That is why he hates non-Bayesian AI ideas: because they make him irrelevant. At one point he was even backed into a philosophical corner by that same "we know in our hearts torture is the right answer" guy and he threw a hissy fit instead of saying "I don't know". I could go on about their myriad of faults, but I am very much not arguing from a position of ignorance. Yud's research center has barely published any papers and none are helpful or relevant to society. He is pondering an irrelevant problem that will never have a solution, nor does it even need to be solved. And if it does come up, Yud's solution will be wrong. I can explain why in detail if you want me to but this part is already long as gently caress. I'm really annoyed that you'd use this defense because this is the sort of bullshit LW loves. Instead of trying to explain things in simple terms, they use complex ones or ones they made up. Not only does this go against their own mission statement, it's condescending and disingenuous. People have debunked Yud's ideas, he just doesn't want to admit to it (like how he doesn't want to admit that he lost his AI box roleplay 4 times in a row after two wins, which is why he doesn't offer that challenge anymore). So no, you're wrong. Legacyspy posted:Similarly I don't think "friendly A.I" is some sort of crazy idea. It seems pretty reasonable. It is basically: "Are there solutions to problems where the solutions are so complex we can't understand everything about the solution? If yes, how do we build something that will give solutions to problems that won't provide solutions that will conflict with other things we care about?" 
Except Yud wants a very specific friendly AI, one that uses Bayesian probability to achieve godlike omnipotence. And this is one that will handle all tasks ever and can easily lord over our entire society. Seraphic Neoman fucked around with this message at 08:25 on Mar 24, 2015 |
# ? Mar 24, 2015 08:21 |
|
That numbers game is one of the most obvious indications that Judowsky doesn't really get what he is talking about. The exact number that is needed is not relevant to the example. Most real philosophers or mathematicians would just say that there is an arbitrary number with those properties and argue from there. The rest would compute the number, like Graham famously did that one time. But Judowsky insists on inventing a number. He has no justification at all for choosing this number, but his followers insist that it is the right number. Why? There is no difference between 3^^4 and 3^^^^3 or even 3 itself. I even suspect that the reason Yud used 3 instead of 4 as the basis was because Wikipedia has examples for that already computed out.
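For anyone who wants to check the up-arrow arithmetic the thread keeps tripping over, the notation is mechanical enough to sketch in a few lines of Python. This is just an illustration of the standard definition (including the conventional base case a↑↑0 = 1), not code from anyone's post:

```python
def up(a, n, b):
    """Knuth's up-arrow: up(a, 1, b) = a ** b, and each extra arrow
    iterates the previous operator, so up(a, n, b) = a (n arrows) b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # conventional base case: a (n arrows) 0 = 1
    return up(a, n - 1, up(a, n, b - 1))

# 3^3^3, evaluated right to left, is 3^27:
print(up(3, 1, 27))   # 7625597484987
# 3^^3 = 3^(3^3) = 3^27 -- the same ~7.6 trillion:
print(up(3, 2, 3))    # 7625597484987
# 3^^^3 = 3^^(3^^3) = 3^^7625597484987, a power tower of
# 7,625,597,484,987 threes. Do NOT actually call up(3, 3, 3).
```

So the first-page poster's 3^3^3^3 is "only" 3 raised to 7,625,597,484,987, while 3^^^3 is a tower of that many threes: incomparably larger, though (per the posts above) the distinction changes nothing about the thought experiment.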
|
# ? Mar 24, 2015 14:33 |
|
i81icu812 posted:Anecdotally, I took the SAT around the same time as Yud in a similar gifted screening/summer school salesmanship exercise. I'm pretty sure I scored about the same as Yud did. The top ~30ish scorers from my state got certificates and they all turned out more or less normal by the time high school graduation rolled around (small state, most ended up in the same 2 high schools). But for the grace of God... Also, like ten people at my middle school got perfect 1600s on the SAT. Admittedly, that's across sixth-through-eighth, but I strongly suspect Yud fell for a scam. Tunicate fucked around with this message at 16:03 on Mar 24, 2015 |
# ? Mar 24, 2015 16:01 |
quote:So I took another practice test, this time resolving to, as Ben Kenobi would say, "act on instinct". (That actual phrase, in Ben's voice, ran through my head.) (10). I got a 640 Math. The lesson I learned was to trust my intuitions, because my intuitions are always right - probably one of the most important lessons of my entire life. anilEhilated fucked around with this message at 16:25 on Mar 24, 2015 |
|
# ? Mar 24, 2015 16:22 |
|
If you are in the 99.9999th percentile of loving anything, you do not go on in life to write Harry Potter fan fics and logical fallacies. gently caress, I know people who got 10s and 12s on the ACT and even they get the basic concept of the prisoner's dilemma. I'm not meaning to be dissing on the guy and it's likely that he did do pretty well on tests way back in the day, but doing well on basic English and math tests in middle school and then dropping out of school because you're so much better than the system does not give you the qualifications to make up terms and assert that your way of thinking is so much better than everyone else's
|
# ? Mar 24, 2015 16:33 |
|
Legacyspy posted:Naw. I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating" which doesn't explain much, there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of stems from people not understanding he writes (due to little fault of his). Like the guy on the first page who, when speaking of the torture vs dust specks says "3^^^3 is the same thing as 3^3^3^3" despite the fact that the very post where Eliezer raised the discussion, he explained the notation before even getting to the torture & dust specks. I don't think this guy is at fault for this. He probably got this from some one else. I'm just using this as a really simple example of how the people who mock Eliezer's writing don't even understand basic things hes written and are mocking him from a position of ignorance. Similarly I don't think "friendly A.I" is some sort of crazy idea. It seems pretty reasonable. It is basically: "Are there solutions to problems where the solutions are so complex we can't understand everything about the solution? If yes, how do we build something that will give solutions to problems that won't provide solutions that will conflict with other things we care about?" I mock Yudkowsky from a position of strength rather than of ignorance- I have a phd in physics, and Yud included (wrong) physics references in his fanfic. He incorrectly references psychology experiments in his fanfic. He uses the incorrect names for biases in his fanfic. He gets computational complexity stuff wrong in his fanfic. He does this despite insisting on page 1 that "All science mentioned is real science." Let me ask you- if an AI researcher can't get computational complexity correct, why should I trust anything else he writes? 
If someone who has founded a community around avoiding mental biases can't get the references right in a fanfic, why should I trust his other writing? His technical work is no better. His "Timeless Decision Theory" paper is 100 pages of rambling, with no formal definition of the theory anywhere (and it would be super easy to formalize the theory as described). His research institute is a joke- they've been operating for more than a decade with only 1 paper on arXiv and basically no citations to any of their self-published garbage.
|
# ? Mar 24, 2015 17:03 |
|
Luna Was Here posted:If you are in the 99.9999th percentile of loving anything, you do not go on in life to write Harry potter fan fics and logical fallacies. gently caress,I know people who got 10s and 12s on the act and even they get the basic concept of prisoner dilemmas Now you're giving too much credit to standardized testing.
|
# ? Mar 24, 2015 18:33 |
|
Legacyspy posted:So, first of all it was Nessus, not me, who said Harry is like Artemis. I could remember some similarity so I rolled with it. However I haven't read any Artemis Fowl books in at least 10 years, so IDK. So if you want to argue about whether or not Harry is like Artemis Fowl, argue with Nessus, he clearly does. Legacyspy posted:Are you saying that Harry is supposed to be an "uber-rationalist super-optimizer", but fails to do so? I never got that conclusion at all. I thought he was supposed to be another character in the vein of Artemis Fowl, Ender, Bean, or Miles Vorkosigan. Legacyspy posted:Second of all, to those of you saying that Harry Potter was intended to be a paragon of rationality, I literally just asked Eliezer: Big Yud posted:To learn almost everything that Harry knows, the best current free online solution is to read the Sequences at LessWrong.com – two years of blog posts that tried to introduce just about everything that I thought a rationalist needed to know as of 2007, starting with basic theory of knowledge, Bayesian probability theory, cognitive biases, evolutionary psychology, social psychology, and going on into the more arcane realms of reductionism and demystified quantum mechanics. Believe it or not, Harry is only allowed to draw on around half of the easier Sequences – if he knew all of them, he would be too powerful a character and break the story. Legacyspy posted:How was Harry's blackmail going to ruin McGonagall's life? His blackmail consisted of "If you don't tell me the truth, I'll go ask questions elsewhere" which I think is a perfectly fine thing to say when the truth is about whether or not the Dark Lord who killed your parents, and tried to kill you, is still alive. Legacyspy posted:Your last paragraph I agree with, except that behavior doesn't bother me. I think we were meant to like him, I like him, and many others do. If he was a real person I probably wouldn't like interacting with him though. 
Legacyspy posted:Naw. I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating," which doesn't explain much; there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of it stems from people not understanding what he writes (due to little fault of his). Like the guy on the first page who, when speaking of the torture vs dust specks, says "3^^^3 is the same thing as 3^3^3^3" despite the fact that in the very post where Eliezer raised the discussion, he explained the notation before even getting to the torture & dust specks. I don't think this guy is at fault for this. He probably got this from someone else. I'm just using this as a really simple example of how the people who mock Eliezer's writing don't even understand basic things he's written and are mocking him from a position of ignorance. Similarly I don't think "friendly A.I" is some sort of crazy idea. It seems pretty reasonable. It is basically: "Are there solutions to problems where the solutions are so complex we can't understand everything about the solution? If yes, how do we build something that will give solutions to problems that won't provide solutions that will conflict with other things we care about?" Yud's friendly AI meanwhile is probably a worthless and intellectually bankrupt idea. While I'm going secondhand here, given that Yud hasn't been cited or published a model himself, it's pretty clear that even if you think his goals are worthwhile you should recognize that he will never move them forward, let alone complete them as he promises.
|
# ? Mar 24, 2015 20:14 |
|
Legacyspy posted:I have two motivations. I wanted to understand why people dislike Harry so much, and initially people were going "He is infuriating because he is irritating," which doesn't explain much; there have been better explanations since then. The second is that I honestly like hpmor & Eliezer. I'd rate it a 7/10 and I do not think the hate against Eliezer is warranted. I think a lot of it stems from people not understanding what he writes (due to little fault of his). Like su3su2u1, I am not mocking Yudkowsky from a position of ignorance. I feel that I'm in a pretty good place to criticize his AI work. Which I have, in the mock thread. But a short version here: he is so needlessly wordy that it's difficult to notice how incredibly basic his ideas are. Timeless Decision Theory is a great example: I formalized it in one paragraph in the other thread, which Yudkowsky failed to do in a hundred pages. And it wasn't even a new idea once I wrote it down! On his favorite model of an AI, the Bayesian inference system, I've built one and I can't tell if he has. Bayesian inference doesn't parallelize well; the units of work are too small, so efficiency gains are nearly canceled by overhead. It couldn't play an RTS game because it was computationally bound, even after I taught it the rules. Yudkowsky's would need to do way better than mine to literally take over the world, and he has literally never had a plan for how it would be done. Mine was textbook; his would need to be orders of magnitude above. All that said, even I don't think friendly AI is a crazy problem for crazy people; I think it's an engineering problem for a domain that doesn't exist yet.
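To give a sense of scale for SolTerrasa's point about tiny units of work: a single discrete Bayesian update is just a multiply and a normalize. This toy coin example is my own illustration, not anything from SolTerrasa's actual system:

```python
def bayes_update(prior, likelihood):
    """One discrete Bayesian update: posterior is proportional to
    prior times likelihood, renormalized to sum to 1."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Made-up example: is a coin fair or biased, after seeing one head?
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.9}  # P(heads | hypothesis)
posterior = bayes_update(prior, likelihood)
# posterior["biased"] == 0.45 / 0.70, roughly 0.64
```

Each update is a handful of multiplies, which is exactly why farming updates out across cores buys so little: the coordination overhead swamps the arithmetic.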
|
# ? Mar 24, 2015 20:26 |
|
SolTerrasa posted:All that said, even I don't think friendly AI is a crazy problem for crazy people; I think it's an engineering problem for a domain that doesn't exist yet. Although, the specific direction that MIRI is running in, creating mathematical ideas of friendliness, is a huge misunderstanding of how applied math works. su3su2u1 fucked around with this message at 21:19 on Mar 24, 2015 |
# ? Mar 24, 2015 21:12 |
|
Yeah, the 'position of ignorance' line probably wasn't a good move, Legacyspy; goons like physics and AI almost as much as Mt. Dew. Am I reading it right that the Friendly AI should have human values and responses? Because that would imply Yud believes that humans are good to each other and would never, say, try to wipe out a group they consider inferior. Also, can someone link to the lesswrong thread, please?
|
# ? Mar 25, 2015 00:24 |
petrol blue posted:Yeah, the 'position of ignorance' line probably wasn't a good move, Legacyspy; goons like physics and AI almost as much as Mt. Dew. The idea is that the AI is going to inevitably become God, so it is the most important thing ever to make sure that when the computer inevitably becomes Literally God, it's a nice friendly New Testament God who cares for us, rather than an Old Testament God who will send us all to Robot Hell.
|
|
# ? Mar 25, 2015 00:31 |
|
Luna Was Here posted:If you are in the 99.9999th percentile of loving anything, you do not go on in life to write Harry potter fan fics and logical fallacies. gently caress,I know people who got 10s and 12s on the act and even they get the basic concept of prisoner dilemmas Tunicate posted:Also, like ten people at my middle school got perfect 1600s on the SAT. Admittedly, that's across sixth-through-eighth, but I strongly suspect Yud fell for a scam. Yeah, the old 1600 scale SATs are wonderfully teachable tests. Take a few dozen old tests, memorize a list of vocab words, and your scores will go up. SATs are really, really, lovely at differentiating the top end of the scale though. Too teachable and way too many people can max out the test for it to be very meaningful. And even if you are one in a million, there's 7,000 people smarter than you. But it's the perfect scam for the colleges running the gifted and talented summer school programs--all data are real and truthful and everything they tell the students is completely accurate social science! They administer actual SATs for that year in a special middle school students only session and simply add an informational paper to the results reported by College Board saying how your results scale to the study run years ago for other kids your age, confidence levels, error bars and everything. Add award ceremonies for high scorers and some cheap recognition certificates and suburban parents can't throw money at your summer school programs fast enough. Though perhaps I'm less than charitable, I'm sure some kids benefit tremendously from being around other smart kids and that it looks good on college applications. Legacyspy posted:position of ignorance And since Legacyspy is doing such a wonderful job of getting people to respond--like SolTerrasa and su3su2u1 I'm not mocking from a position of ignorance. I've actually coded simulations and built robotic systems using Bayesian networks/Markov chains. 
They work well for very specific tasks (the classic example is training robots to walk) and are terribly inefficient for others. But an AI with some sort of agency? Yud is crazy; not in our lifetimes. The mock thread goes into far more detail if you want. Besides, I claimed a good score on a middle-school SAT--the exact same credential Yud has attained in his academic career. Clearly I am the most qualified person to mock him.
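For anyone curious what "very specific tasks" looks like in practice, here's a minimal toy sketch (my own illustration, not the poster's actual robotics code) of a Markov chain: named states plus fixed transition probabilities, stepped forward by sampling. The "robot posture" states are made up for the example.

```python
# Toy Markov chain: a hypothetical "robot posture" model with two states
# and fixed transition probabilities. Stepping it forward is just
# sampling from the current state's transition row--good for narrow,
# fully specified problems; nothing here has goals or agency.
import random

def step(state, transitions, rng):
    """Sample the next state from the current state's transition row."""
    r, cumulative = rng.random(), 0.0
    for next_state, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return next_state
    return state  # guard against floating-point shortfall

transitions = {
    "standing": {"standing": 0.7, "fallen": 0.3},
    "fallen": {"standing": 0.4, "fallen": 0.6},
}
rng = random.Random(0)  # seeded so the walk is reproducible
state = "standing"
for _ in range(5):
    state = step(state, transitions, rng)
print(state)
```

All the intelligence lives in where the transition probabilities come from (in real systems, learned from data); the chain itself just rolls dice.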
|
# ? Mar 25, 2015 01:21 |
|
I mock the writing in the story because it's bad and we're on somethingawful.com, but Yud's cult is legitimately fascinating to me as a religious studies guy interested in studying emerging religions, so his proselytization-fiction is actually really interesting from an academic standpoint. Also, as someone who came out of a high school for the gifted, I can say it's definitely productive in some ways (I had access to a lot of very good teachers and classes) but counterproductive in others (there was a sort of accidental promotion of the kind of learning style I see in Yud, and I was definitely not above it; I never really learned to buckle down and work on stuff I didn't 'love' until I got to college, where a goodly number of my fellow graduates flunked.) Night10194 fucked around with this message at 01:31 on Mar 25, 2015 |
# ? Mar 25, 2015 01:28 |
|
su3su2u1 posted:Although, the specific direction that MIRI is running in, creating mathematical ideas of friendliness is a huge misunderstanding of how applied math works.

Interesting! Would you mind posting (here or in the mock thread) how they misunderstood applied math? Whenever I see unnecessarily fiddly math in AI papers I assume they're just trying to impress the reviewers. I had a paper rejected once because it "didn't have enough equations", despite attaching working source code.

petrol blue posted:Am I reading it right that the Friendly AI should have human values and responses? Because that would imply Yud believes that humans are good to each other and would never, say, try to wipe out a group they consider inferior.

Nope, that's not quite what it means. What does it mean? Well, Yud doesn't seem to know either; he's never really explained it. As far as I can discern it means that the AI will never do anything that would violate the "coherent extrapolated volition" of humanity. So, basically: take everyone's opinions (no explanation given for collecting these), throw out the opinions that are bad (no explanation given for deciding which opinions are bad), then do whatever best satisfies the opinions that remain. The AI itself doesn't need to seem human or have human feelings, just to act in a way that optimizes around human feelings. Edit: here, try to derive what he means from this, which, if you can believe it, he tried to include in an AI paper. quote:Our coherent extrapolated volition is our wish if we SolTerrasa fucked around with this message at 01:49 on Mar 25, 2015 |
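To see how little that collect-filter-optimize recipe actually pins down, here's a toy sketch of it. Every name here is my own invention, and the two steps nobody explains (collecting opinions and deciding which are "bad") are passed in as black-box functions, because that's exactly the part that's missing.

```python
# Toy sketch of the hand-waved CEV recipe: collect everyone's opinions,
# discard the "bad" ones, then pick whatever action best satisfies the
# rest. The hard parts are the is_bad_opinion and satisfaction callbacks,
# which this sketch simply assumes someone else has solved.

def cev_choose(population, actions, is_bad_opinion, satisfaction):
    """Pick the action maximizing total satisfaction of the kept opinions."""
    opinions = [op for person in population for op in person]  # collect (somehow)
    kept = [op for op in opinions if not is_bad_opinion(op)]   # filter (somehow)
    return max(actions, key=lambda a: sum(satisfaction(op, a) for op in kept))

# Tiny example: opinions are preferred actions; "bad" ones start with "!".
population = [["tea", "!war"], ["coffee"], ["tea"]]
choice = cev_choose(
    population,
    actions=["tea", "coffee", "war"],
    is_bad_opinion=lambda op: op.startswith("!"),
    satisfaction=lambda op, a: 1 if op == a else 0,
)
print(choice)
```

Note that the surrounding code is trivial; every hard question (how do you collect opinions, what makes one "bad", how do you score satisfaction) lives inside the two callbacks that are left unspecified--same as in Yud's proposal.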
# ? Mar 25, 2015 01:45 |
|
This AI is going to be a brony, isn't it?
|
# ? Mar 25, 2015 02:25 |
|
Chapter 8: Positive Bias Part Four quote:
I was wrong about the source of Harry’s “wandless magic” and Moddington was right. That’s what comes of reading in little chunks. I still don’t understand how this “demonstrates Harry’s intelligence”, though. He’s just making use of a quality of the Comed-Tea that Hermione wasn’t aware of – it’s a gap in knowledge rather than a sign of “intelligence” on his part per se. quote:
Here it comes – Hermione being forced by author fiat to bow to Eliezarry’s superiority. Also, when Hermione said “Maybe I’ll let you help me with my research”, it was clear that it was a verbal riposte to Harry’s arrogance and obnoxiousness. She never said that she actually thought she was a magic-scientist or wanted to be one. quote:
This is starting to look a little like “negging”, as SSNeoman highlighted. Has Eliezer expressed any Men’s Rights Activist views in his writings in the past?
|
# ? Mar 25, 2015 02:33 |
|
To try to explain, imagine you have a genie. You want to make a wish, but the genie might actively screw with you, Monkey's Paw style, so you wish for the genie to be obedient. You might not have thought out your wish enough, so that something comes back to bite you, so wish that there were no bad consequences... Except you're not sure how to do that, so wish for the proper form of a wish, so you can get what you want without negative consequences... You're not really sure how to do that either, so wish that you were smart enough to know what you would wish for if you were smarter. Now write an AI that is also a genie, and you're set. e: No, JKWS, it's textbook Socratic method. Sometimes you really overreach in your criticisms. Please just stick to the actually stupid things? Added Space fucked around with this message at 02:41 on Mar 25, 2015 |
# ? Mar 25, 2015 02:37 |
|
JosephWongKS posted:Chapter 8: Positive Bias I went looking, and now my world is a silent, wordless scream.
|
# ? Mar 25, 2015 02:41 |
|
Added Space posted:e: No, JKWS, it's textbook Socratic method. Alright, fair enough. quote:Sometimes you really overreach in your criticisms. Please just stick to the actually stupid things? I'll try my best. It's a mock thread though, so you guys are equally free to mock me when I'm being stupid.
|
# ? Mar 25, 2015 02:50 |
|
SolTerrasa posted:
This was for an AI-generated poetry anthology, right?
|
# ? Mar 25, 2015 02:56 |
|
What the hell do you expect from a guy who is 'Hey! Evolutionary Psych and are amazing!'
|
# ? Mar 25, 2015 03:05 |
|
Night10194 posted:What the hell do you expect from a guy who is 'Hey! Evolutionary Psych and are amazing!' Expectation had nothing to do with it. Like most of this thread's posters, I stuck my hand in that bear trap voluntarily.
|
# ? Mar 25, 2015 03:07 |
|
Oh good! Now you know how Yud feels all the time! http://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html#timeline_the posted:There's a single emotional tone - an emotional tone is a modular component of the emotional symphonies we have English words for - common to sorrow, despair, and frustration. The tone is invoked by an effort failing to produce the expected reward ("frustration"), or by the anticipation of something going wrong ("despair"), or by watching something go wrong ("sorrow"). The message of this tone can be summarized as: "This isn't working. Stop what you're doing, try to figure out what you're doing wrong, and try something else." The cognitive methods activated by this tone (21) include what I would now call "causal analysis", "combinatorial design", and "reflectivity". The motivational effect of the tone includes, of course, low mental energy. Are you smarter than everyone you know, but unable to force yourself to get stuff done? If so I have this great fanfic you should read! i81icu812 fucked around with this message at 04:11 on Mar 25, 2015 |
# ? Mar 25, 2015 03:12 |
|
Darth Walrus posted:Expectation had nothing to do with it. Like most of this thread's posters, I stuck my hand in that bear trap voluntarily. It's worse than that: we saw your hand in the beartrap and jumped right in after you. quote:Our species does definitely have a problem. If you've managed to find your perfect mate, then I am glad for you, but try to have some sympathy on the rest of your poor species—they aren't just incompetent. Not all women and men are the same, no, not at all. But if you drew two histograms of the desired frequencies of intercourse for both sexes, you'd see that the graphs don't match up, and it would be the same way on many other dimensions. There can be lucky couples, and every person considered individually, probably has an individual soulmate out there somewhere... if you don't consider the competition. Our species as a whole has a statistical sex problem! On the plus side, I think his utopia involves putting everyone in a volcano. I couldn't agree more.
|
# ? Mar 25, 2015 04:00 |
|
That article is basically the "Don't Date
|
# ? Mar 25, 2015 04:17 |
|
SolTerrasa posted:Interesting! Would you mind posting (here or in the mock thread) how they misunderstood applied math? Whenever I see unnecessarily fiddly math in AI papers I assume they're just trying to impress the reviewers. I had a paper rejected once because it "didn't have enough equations", despite attaching working source code.

The definition of friendliness they create will just be an abstraction that shares some properties of what we might think of as "friendly." It's like the mathematical definitions of "secure" you might use in cryptography: they capture some features, but they're not going to be perfect, and you still have all the engineering challenges that go along with the real world--a provably secure algorithm might still fail in practice. They have a nebulous mathematical definition of superhuman intelligence (i.e. how near does something behave to a Bayesian agent/ideal AIXI?) that doesn't capture lots of properties of intelligence (e.g. idea generation--an AIXI agent starts with all possible ideas), and they'll move on to a nebulous idea of friendliness that also doesn't capture some important features, etc. What they really want to create is a sort of "best practices guide to AI development such that it doesn't kill everyone"--and that isn't a math problem.
|
# ? Mar 25, 2015 04:59 |