|
Exit Strategy posted:All I know is that my next villain for an Eclipse Phase campaign is going to be a resimulated version of Yudowski as straight-up Roko's Basilisk. I'll see how many of my players want to punch me when that comes down the pipe. Will his preferred form of AI Hell involve turning the simulations into barely-free-willed sexy catgirls populating a volcano base for the chosen?
|
# ? Aug 19, 2014 00:27 |
|
Strategic Tea posted:Keep in mind I don't think they actually advocate the basilisk as part of their ideology. Big Yud is terrified of the thing and tries to make sure it's never discussed. Because if an AI heard it, it might give it ideas Yes, but they do believe (incorrectly) that it is a logical consequence of their ideology; if they didn't, they wouldn't be scared of it and would feel free to discuss it openly. I think the point The Vosgian Beast was making is that their ideology has never before led them to any conclusion that they don't like (other than "God isn't real so that particular form of eternal afterlife paradise isn't real"). Of course, part of the reason for that is that in most cases their ideology is too impractical, too disjointed, and too meaningless to lead them to any sort of conclusion at all beyond "I am a smart person for thinking this". HapiMerchant posted:hey! NtK isnt about only one side getting something outta it, it's a touching love story of a dom and a sub who---- never mind its pretty drat creepy, on reflection.
|
# ? Aug 19, 2014 00:36 |
|
Antivehicular posted:DEATH DEATH DEATH Agree completely. I was just responding to someone earlier in thread talking about how death enriches life or some such nonsense. One could almost call it deathist, really. It's just an insecure Japanese nerd's wank fantasy about how his hot classmate just happens to have the same fetishes as him, but he does some shady poo poo to push her into situations she's not comfortable with. And this is before they're even in any sort of relationship, which should be a huge red flag. Basically, telling impressionable subs that this is the kind of thing they should be looking for is a great way for them to wind up dead in someone's basement. Speaking of bad ends at the hands of creepy men, has anyone ever come forward about how they were swindled by Yudkowsky? It's no big deal if a school blew some money to invite him as a guest speaker, but I'm worried some poor schlub bankrupted themselves because they gave everything to AI Jesus.
|
# ? Aug 19, 2014 01:09 |
|
Strategic Tea posted:Keep in mind I don't think they actually advocate the basilisk as part of their ideology. Big Yud is terrified of the thing and tries to make sure it's never discussed. Because if an AI heard it, it might give it ideas Yeah I don't think anyone aside from maybe Roko believes in the basilisk in 2014.
|
# ? Aug 19, 2014 02:38 |
|
Mr. Horrible posted:Agree completely. I was just responding to someone earlier in thread talking about how death enriches life or some such nonsense. One could almost call it deathist, really. No one who gives Yud money and actually has him show up to speak about his own brand of thought would ever be disappointed, because unless he shows up drunk or skips out they'd never complain. Ditto for anyone who donates tons of money to the AI think tank that produces nothing, because Yud has them convinced their research is so dangerous it must be kept from the public until the unwashed masses are ready. So even after donating thousands a year to him and living off ramen and mountain dew no white libertarian would ever say "Damnit quit stalling I want results" because they get a pat on the back from Yud about how enlightened they are but the rest of the world isn't at their level.
|
# ? Aug 19, 2014 03:49 |
|
The Vosgian Beast posted:Yeah I don't think anyone aside from maybe Eliezer believes in the basilisk in 2014. ftfy pentyne posted:no white libertarian would ever say "Damnit quit stalling I want results" because they get a pat on the back from Yud about how enlightened they are but the rest of the world isn't at their level. Much like proponents of game theory, I came to the wrong conclusion by assuming everyone would act in their rational self-interest.
|
# ? Aug 19, 2014 03:54 |
|
I am part of the educational "elite" that Yudkowsky can only wish and pretend he was part of, and this thread has made me seriously consider going over to his place just to make fun of him for how absolutely and completely unqualified he is to have an opinion on anything, and how he's completely laughable for having the presumption to believe that he could say anything more complicated than "the sun will probably rise tomorrow" and be at all worth taking seriously. Literally everything he says that isn't outright wrong is stolen from someone substantially smarter and more competent than him and he doesn't even realize. Syritta posted:(This has less to do with LessWrong than I thought it would when I started writing. Whoops.)
|
# ? Aug 19, 2014 10:41 |
|
The Vosgian Beast posted:Roko's Basilisk was probably the first time I've seen LW's intellectual commitments lead them to an unpalatable conclusion. Everything else exists to make them feel good about their fate, place in the world, and their own intelligence. Yeah, but it's part of the appeal for them. Their notions about the world have real weight because of Roko's Basilisk. Every frisson of terror they feel when contemplating it just roots them deeper into the idea that they've stumbled upon a deep and horrifying truth about the nature of the universe.
|
# ? Aug 19, 2014 11:22 |
|
Cardiovorax posted:I am part of the educational "elite" that Yudkowsky can only wish and pretend he was part of, and this thread has made me seriously consider going over to his place just to make fun of him for how absolutely and completely unqualified he is to have an opinion on anything, and how he's completely laughable for having the presumption to believe that he could say anything more complicated than "the sun will probably rise tomorrow" and be at all worth taking seriously. Literally everything he says that isn't outright wrong is stolen from someone substantially smarter and more competent than him and he doesn't even realize. People like him can't be reasoned with or humiliated because their thought process has led them to think that their style of thought is inherently superior and that any criticism stems from not understanding Bayes or his arguments. Even if you comprehensively broke down what Bayes actually is and the fallacies he makes, the most cogent response would be "You're not allowing yourself to think past what others have told you". It would be way easier just to mock his AI institute for not producing anything at all in its entire existence, and since he claims their work is "so dangerous", ask why they haven't even published position papers that promote their ideas and studies without revealing their research.
|
# ? Aug 19, 2014 11:38 |
|
I don't think I even care, it would just be incredibly cathartic to tell him off on his own ground. Makes me wish goonrushes were still a thing.
|
# ? Aug 19, 2014 12:09 |
|
pentyne posted:It would be way easier just to mock his AI institute for not producing anything at all in its entire existence, and even as he claims their work is "so dangerous" then ask why they haven't even published position papers not revealing their research but promoting their ideas and studies. What do you think 'Harry Potter and the Methods of Rationality' is?
|
# ? Aug 19, 2014 13:06 |
|
I want there to be an AI movie where the AI understands its own limitations, but the people making it don't. They keep checking whether it's sending huge amounts of data over the internet that they don't understand, or whether there's some factory full of 3d printers being built in china by a shell corporation, but no. It's just emailing suggestions to scientists. And they're kind of trying to pretend they're not disappointed.
|
# ? Aug 19, 2014 14:16 |
|
What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious (the specific religion really wouldn't matter that much, but I'd suggest picking a religion not all that popular so that it would resonate as "betrayal" for a majority of the target audience)? I could see a lot of potential in that story.
|
# ? Aug 19, 2014 14:33 |
|
I don't think you could make a movie like that and not have it sound smug and condescending in some fashion. Even if you threw a dart at a dartboard to choose the religion, you'd have to make a good argument for why the AI has good reason to believe that this religion is right, or end up making an argument for why it isn't right just by its absence. Either way someone is getting pissed. Also, have you ever met someone who really viscerally gives you a feeling of "there but for the grace of God go I?" Because The Yud really manages to hit all my alarm buttons. e:sp; Cardiovorax fucked around with this message at 15:26 on Aug 19, 2014 |
# ? Aug 19, 2014 14:43 |
|
Iunnrais posted:What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious The Long Earth series by Terry Pratchett and Stephen Baxter features a Buddhist AI who claims to be a reincarnated Tibetan motorcycle repairman.
|
# ? Aug 19, 2014 15:26 |
|
Iunnrais posted:What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious (the specific religion really wouldn't matter that much, but I'd suggest picking a religion not all that popular so that it would resonate as "betrayal" for a majority of the target audience)? I could see a lot of potential in that story. Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes.
|
# ? Aug 19, 2014 15:35 |
|
Strategic Tea posted:Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes. I would love to hear Yud's opinions on Asimov's writings. It would be a clusterfuck of "well educated but misunderstood" and "lacks the benefit of modern thought" while he ends up dismissing one of the smartest men of the 20th century as a naive misguided fool. Speaking of "scientific journals mean nothing": after his prolific fictional writing career, Asimov honed his technical writing skills by inventing a fictional compound and writing a highly technical journal article about it, complete with fake graphs, tables, images, citations, etc., as practice. He was terrified it would backfire on his academic career, but during his Ph.D defense one of the committee members made a joking comment about it just to let him expand on its fictional properties. http://en.wikipedia.org/wiki/Thiotimoline Reminder: Asimov was a tenured professor of biochem who left his job because he made obscenely more money from writing. Oh, and this is pure gold quote:Isaac Asimov was an atheist, a humanist, and a rationalist. He did not oppose religious conviction in others, but he frequently railed against superstitious and pseudoscientific beliefs that tried to pass themselves off as genuine science pentyne fucked around with this message at 17:15 on Aug 19, 2014 |
# ? Aug 19, 2014 16:52 |
Iunnrais posted:What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious (the specific religion really wouldn't matter that much, but I'd suggest picking a religion not all that popular so that it would resonate as "betrayal" for a majority of the target audience)? I could see a lot of potential in that story. I once read some story or something with the following logic: A. The Qu'ran says that God made the djinn from smokeless fire, as he made man from clay. B. The Qu'ran is revealed, but expressed to a 6th century Arab. C. "Smokeless fire" is a pretty good vague approximation of electricity. D. An AI isn't the hardware it's on, but is "made" of the electrical states, thus "made" of smokeless fire. E. AIs are djinn and can become Muslims.
|
|
# ? Aug 19, 2014 17:47 |
|
Nessus posted:I once read some story or something with the following logic: This sounds really, really familiar and it's going to bug me all day that I can't think of what it's actually called or who it's by or whatever.
|
# ? Aug 19, 2014 17:53 |
|
Basil Hayden posted:This sounds really, really familiar and it's going to bug me all day that I can't think of what it's actually called or who it's by or whatever.
|
# ? Aug 19, 2014 18:19 |
|
Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
|
# ? Aug 19, 2014 19:20 |
|
It's written by the Givewell executive, not by someone from LW, which explains why, as I'm reading it, I can understand the development of thoughts and arguments in a coherent manner.
|
# ? Aug 19, 2014 19:42 |
|
Basil Hayden posted:This sounds really, really familiar and it's going to bug me all day that I can't think of what it's actually called or who it's by or whatever. Bruce Sterling had a rather weird short story "The Compassionate, the Digital" about an AI that converts to Islam, it's been a while since I read it but it might follow a similar logic.
|
# ? Aug 19, 2014 20:38 |
|
Would it loving kill Eliezer to speak as simply as the person he's responding to? I'm really, really trying to understand what he's saying and it's all just parsing as garbage. Edit: I mean poo poo like taking Karnofsky's "tool AI" and insisting on referring to it as "non-self-modifying planning Oracle" because five words make him sound smarter than two would
|
# ? Aug 19, 2014 23:04 |
|
Strategic Tea posted:Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes. I can't remember if it was the same story or not, but the one that I remember had a new super-smart robot on the satellite who converted all of the dumber drones to worshipping the energy array itself. The inspectors are up there trying to convince him that the energy array isn't conscious or anything. Then a solar storm fires up and the inspectors are terrified, because that might throw the beam off, and if the beam defocuses it could scar the planet. The super-smart robot locks the humans in a room and when they're let back out, they look at the output and see that the beam didn't even budge, because the super-smart robot handled it, because the energy array's laws decree that the beam does not defocus and the robot follows the law. The robot does not question the energy array's laws, it does not understand them, and it does not care- the law is the law and it will follow the law. They conclude that the robot's beliefs don't matter for poo poo because it does its job and it does its job far better than any other system they've ever seen. As long as they're not harmful, there's no problem, and the inspectors conclude that they were kind of stupid to assume that the robot's beliefs were going to compromise its ability to function. It was a nice sentiment.
|
# ? Aug 19, 2014 23:12 |
|
Remora posted:Would it loving kill Eliezer to speak as simply as the person he's responding to? I'm really, really trying to understand what he's saying and it's all just parsing as garbage. Dug up from a text dump made on an old forum thread I can't find anymore but which I saved to a file, it's YUDKOWSKY WRITES A RECIPE A Parody of Yudkowsky posted:Using a suitable DEVICE such as a yolk-seperator or the Yolkine Seperatus Technique mentioned in the Avine Reproduction Sequence. That is to say you separate the two parts of the egg, you may argue that the shell is part of the egg, but here we are strictly talking about the EDIBLE parts of the egg. You may ask why we define the egg-shell as inedible, but that is for another sequence.
|
# ? Aug 19, 2014 23:51 |
|
Somfin posted:I can't remember if it was the same story or not, but the one that I remember had a new super-smart robot on the satellite who converted all of the dumber drones to worshipping the energy array itself. The inspectors are up there trying to convince him that the energy array isn't conscious or anything. Then a solar storm fires up and the inspectors are terrified, because that might throw the beam off, and if the beam defocuses it could scar the planet. The super-smart robot locks the humans in a room and when they're let back out, they look at the output and see that the beam didn't even budge, because the super-smart robot handled it, because the energy array's laws decree that the beam does not defocus and the robot follows the law. The robot does not question the energy array's laws, it does not understand them, and it does not care- the law is the law and it will follow the law. That was the one. The controlling robot completely slipped my mind. I seem to remember another story had an astronaut finding a family of kittens his friend had been raising in the back of his suit. I need to read that book again.
|
# ? Aug 20, 2014 00:00 |
|
Strategic Tea posted:That was the one. The controlling robot completely slipped my mind
|
# ? Aug 20, 2014 12:22 |
|
Remora posted:Would it loving kill Eliezer to speak as simply as the person he's responding to? I'm really, really trying to understand what he's saying and it's all just parsing as garbage. He's a cult leader. The obfuscation is deliberate.
|
# ? Aug 20, 2014 12:29 |
|
Mr. Horrible posted:Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect: This is a good read, and highlights why a number of things that lesswrongers take for granted are actually highly questionable. It also linked me to this article that Big Yud wrote about Pascal's Mugging. Pascal's Mugging is basically a dude walking up to you and saying "Give me five bucks, or I use my magic powers to kill a gazillion people" - only in yudspeak the mugger/wizard is "an outside agent spliced into our Turing tape" that threatens to "run a Turing machine that simulates and kills 3^^^^3 people". The interesting thing here is that Big Yud realizes that it would be madness to give in to the demands of the mugger, but he is incapable of explaining why, since he cannot construct a formal Bayesian expression that dismisses the "3^^^^3 people" part.
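To put numbers on it (every figure below is my own arbitrary assumption for illustration, not anything from Yud's article or the GiveWell post): the naive expected-value calculation caves to the mugger because the payoff term can always be made to outgrow whatever fixed prior you assign to the mugger being honest. Even 3↑↑3, which is microscopic next to 3^^^^3, already swamps a one-in-ten-billion prior:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow operator a ^(n) b: n=1 is plain exponentiation,
    and each extra arrow iterates the previous operator."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
# 3^^^^3 (n=4) is far too large to ever actually compute, which is the trick.

# Naive expected value: P(mugger is honest) * lives at stake vs. a $5 cost.
# The prior here is a made-up number purely for illustration.
p_honest = 1e-10
lives = up_arrow(3, 2, 3)    # tiny stand-in for 3^^^^3
print(p_honest * lives)      # ~762.56 "expected lives" against five bucks
```

Since 3^^^^3 dwarfs the reciprocal of any prior you'd plausibly write down in advance, the product never shrinks below $5, which is exactly the hole Yud admits he can't patch formally.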
|
# ? Aug 20, 2014 12:48 |
|
Mr. Horrible posted:Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect:
|
# ? Aug 20, 2014 13:17 |
|
Darth Walrus posted:He's a cult leader. The obfuscation is deliberate. I'd say it's more a symptom of the insecurities of his "self-education" that he feels the need to speak/write like someone armed with a thesaurus and the OED, the way he imagines a "true" academic would write. I'd be really eager to see if he can actually engage someone verbally in a debate without frequent long pauses and doing that thing where pseudo-smart people stumble over the words they try to use to sound intelligent.
|
# ? Aug 20, 2014 14:34 |
|
Mr. Horrible posted:Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect: You know what's amazing? This post was written in 2012, two solid years ago. The two best points that it makes are (1) that SI has not convinced any experts of its value, and (2) that SI does not appear to have produced anything. Both responses say "well, yeah, those are true, but we're working on this 'open problems in friendly AI' sequence, and that will answer all the problems". It is 2014. LessWrong does not contain an "open problems in friendly AI" sequence yet, but I'm sure it's coming any day now. Right after Big Yud finishes Methods of Rationality, surely. The best part of the whole thing is that it wouldn't even be hard to write! Nobody doubts that they are trying to solve a problem which is hard! Everyone doubts some other part of their argument/house-of-cards, like that they have a handle on a solution, or that they are at all the right people to solve it, or that it needs solving right now, or that more money will help them.
|
# ? Aug 20, 2014 17:58 |
|
"Open problems in friendly AI" is on the level of "open problems in intergalactic FTL travel" for how completely divorced it is from any real-world considerations.
|
# ? Aug 20, 2014 18:01 |
|
I love this bit from that article:quote:Apparent poorly grounded belief in SI's superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality. Darth Walrus fucked around with this message at 18:06 on Aug 20, 2014 |
# ? Aug 20, 2014 18:04 |
|
Double posting, sorry, but I found the paper where MIRI/SI explains why they think that AI is the problem that needs to be solved first, and the more I read the more I think that "argument/house-of-cards" like I said earlier is accurate. Here is the fundamental basis of their argument in "Intelligence Explosion: Evidence and Import". Note that this is one of their few public papers which has been published. It was published in a non-peer-reviewed one-time, special-topic philosophy "journal". Anyway. One thing I've been noticing is the incredible frequency with which they say "sorry, we can't explain this right now, it's just too complex, but we refer you to X, Y, and Z". I recommend to anyone with a background in AI to actually follow those references. I have been doing so for the past few hours and I've found that they almost always refer (sometimes after a non-obvious or unreasonably-long chain of references; X points to Y to prove a point, but Y only makes that point by citing Z, etc.) to a self-published paper by MIRI or SI or whatever it happens to be called that year, or to a blog post somewhere (amazingly, blog posts are cited as if they're real work, I mean they literally typed "[Yudkowsky 2011]" to cite LessWrong or whatever). Or, my personal favorite, they refer to a forthcoming paper. This is what I mean by a house-of-cards argument: it's all built on assumptions which are straight-up nonsense, but so MANY of them that it appears to hold together. This particular paper, "IE:EI", starts off with all of those. They "prove" that strong AI is possible by citing a speculative fiction work by Vernor Vinge (a novelist), a blog post by Big Yud, two self-published papers by MIRI, and a forthcoming no-really any-day-now paper by another MIRI dude. Then, having proved it possible, they set off to prove that it is inevitable, and coming soon. They waste three pages explaining that "predictions are hard", and they throw in a joke about weather forecasters???
Anyway they explain that there are a lot of things that are going to promote the creation of AI, like (I'm quoting here) "more hardware" and "better algorithms". They conclude this point with (again, quoting) "it seems misguided to be 90% confident that we will have strong AI in the next hundred years. But it also seems misguided to be 90% confident that we will not." Masterstroke, guys. Color me convinced. The interesting part of the article ends here, there's 10 more pages but they're all exactly what you'd expect. They take the previous section (with its citations of novelists and blog posts and nonexistent papers) as proven, then move on to a new argument which uses the same bullshit base, plus their "argument" from the previous chapter, to stack a new card on top of the obviously-stable-why-are-you-asking structure.
|
# ? Aug 20, 2014 18:21 |
|
SolTerrasa posted:Double posting, sorry, but I found the paper where MIRI/SI explains why they think that AI is the problem that needs to be solved first, and the more I read the more I think that "argument/house-of-cards" like I said earlier is accurate. Here is the fundamental basis of their argument in "Intelligence Explosion: Evidence and Import". Note that this is one of their few public papers which has been published. It was published in a non-peer-reviewed one-time, special-topic philosophy "journal". I think part of what makes it so egregious is the complete lack of effort in addition to the seriousness of the claims. I've read social science papers where they could only get nine people to participate in a longitudinal study, but researchers still tried to make statements about the general human population (e.g. "children of divorced parents don't have lower self esteem than the average person"). But at least they were trying to collect data, and with enough similar studies we could answer that question with reasonable certainty. Moreover, the truth of that hypothesis isn't quite as world-changing as "AI is literally the most efficient and most effective solution to all of our problems and it should be the primary focus of humanity".
|
# ? Aug 20, 2014 19:20 |
|
SolTerrasa posted:Words Do you have a link to the paper?
|
# ? Aug 20, 2014 19:20 |
|
The Erland posted:Do you have a link to the paper? Let me see if I can remember where I found it. Yeah, here it is: Intelligence.org/files/IE-EI.pdf Agh, rereading this nonsense really hurts me. One of their arguments that AI explosions will probably happen in the next century is that we've made so much progress since the first Dartmouth conference on AI. For non-AI people, maybe this makes sense, but everything we have done since then has been so much harder than we thought it would be. At the Dartmouth conference, we thought machine vision, the problem of "given some pixels, identify the entities in it" would be solved in a few months. It's been 50 years and we're only okay at it. If there is ANYTHING we should have learned from Dartmouth it's that everything in AI is always harder than it seems like it should be. Why you'd use that as an example baffles me. gently caress, there's just so much WRONG with this that I have to force myself to stop typing and get back to work.
|
# ? Aug 20, 2014 20:46 |
|
SolTerrasa posted:Agh, rereading this nonsense really hurts me. One of their arguments that AI explosions will probably happen in the next century is that we've made so much progress since the first Dartmouth conference on AI. For non-AI people, maybe this makes sense, but everything we have done since then has been so much harder than we thought it would be. At the Dartmouth conference, we thought machine vision, the problem of "given some pixels, identify the entities in it" would be solved in a few months. It's been 50 years and we're only okay at it. If there is ANYTHING we should have learned from Dartmouth it's that everything in AI is always harder than it seems like it should be. Why you'd use that as an example baffles me. And even then, it's so primitive. The best we've got can, just about, distinguish textures. It still can't tell the difference between a polar bear and a mountain goat. It can, just about, tell that they are not a mountain.
|
# ? Aug 20, 2014 20:56 |