Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

Exit Strategy posted:

All I know is that my next villain for an Eclipse Phase campaign is going to be a resimulated version of Yudkowsky as straight-up Roko's Basilisk. I'll see how many of my players want to punch me when that comes down the pipe.

Will his preferred form of AI Hell involve turning the simulations into barely-free-willed sexy catgirls populating a volcano base for the chosen?

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Strategic Tea posted:

Keep in mind I don't think they actually advocate the basilisk as part of their ideology. Big Yud is terrified of the thing and tries to make sure it's never discussed. Because if an AI heard it, it might give it ideas :spooky:

Yes, but they do believe (incorrectly) that it is a logical consequence of their ideology; if they didn't, they wouldn't be scared of it and would feel free to discuss it openly. I think the point The Vosgian Beast was making is that their ideology has never before led them to any conclusion that they don't like (other than "God isn't real so that particular form of eternal afterlife paradise isn't real"). Of course, part of the reason for that is that in most cases their ideology is too impractical, too disjointed, and too meaningless to lead them to any sort of conclusion at all beyond "I am a smart person for thinking this".

HapiMerchant posted:

hey! NtK isnt about only one side getting something outta it, it's a touching love story of a dom and a sub who---- never mind its pretty drat creepy, on reflection.

:frogon:

Freemason Rush Week
Apr 22, 2006

Antivehicular posted:

DEATH DEATH DEATH

Agree completely. I was just responding to someone earlier in thread talking about how death enriches life or some such nonsense. One could almost call it deathist, really. :smug:



It's just an insecure Japanese nerd's wank fantasy about how his hot classmate just happens to have the same fetishes as him, but he does some shady poo poo to push her into situations she's not comfortable with. And this is before they're even in any sort of relationship, which should be a huge red flag. Basically, telling impressionable subs that this is the kind of thing they should be looking for is a great way for them to wind up dead in someone's basement.

Speaking of bad ends at the hands of creepy men, has anyone ever come forward about how they were swindled by Yudkowsky? It's no big deal if a school blew some money to invite him as a guest speaker, but I'm worried some poor schlub bankrupted themselves because they gave everything to AI Jesus. :ohdear:

The Vosgian Beast
Aug 13, 2011

Business is slow

Strategic Tea posted:

Keep in mind I don't think they actually advocate the basilisk as part of their ideology. Big Yud is terrified of the thing and tries to make sure it's never discussed. Because if an AI heard it, it might give it ideas :spooky:

Yeah I don't think anyone aside from maybe Roko believes in the basilisk in 2014.

pentyne
Nov 7, 2012

Mr. Horrible posted:

Agree completely. I was just responding to someone earlier in thread talking about how death enriches life or some such nonsense. One could almost call it deathist, really. :smug:


It's just an insecure Japanese nerd's wank fantasy about how his hot classmate just happens to have the same fetishes as him, but he does some shady poo poo to push her into situations she's not comfortable with. And this is before they're even in any sort of relationship, which should be a huge red flag. Basically, telling impressionable subs that this is the kind of thing they should be looking for is a great way for them to wind up dead in someone's basement.

Speaking of bad ends at the hands of creepy men, has anyone ever come forward about how they were swindled by Yudkowsky? It's no big deal if a school blew some money to invite him as a guest speaker, but I'm worried some poor schlub bankrupted themselves because they gave everything to AI Jesus. :ohdear:

No one who gives Yud money and actually has him show up to speak about his own brand of thought would ever be disappointed, because unless he shows up drunk or skips out they'd never complain. Ditto for anyone who donates tons of money to the AI think tank that produces nothing, because Yud has them convinced their research is so dangerous it must be kept from the public until the unwashed masses are ready. So even after donating thousands a year to him and living off ramen and mountain dew, no white libertarian would ever say "Damnit quit stalling I want results" because they get a pat on the back from Yud about how enlightened they are but the rest of the world isn't at their level.

Freemason Rush Week
Apr 22, 2006

The Vosgian Beast posted:

Yeah I don't think anyone aside from maybe Eliezer believes in the basilisk in 2014.

ftfy

pentyne posted:

no white libertarian would ever say "Damnit quit stalling I want results" because they get a pat on the back from Yud about how enlightened they are but the rest of the world isn't at their level.

Much like proponents of game theory, I came to the wrong conclusion by assuming everyone would act in their rational self-interest. :doh:

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I am part of the educational "elite" that Yudkowsky can only wish and pretend he was part of, and this thread has made me seriously consider going over to his place, just to make fun of him for how absolutely and completely unqualified he is to have an opinion on anything and how completely laughable he is for having the presumption to believe that he could say anything more complicated than "the sun will probably rise tomorrow" and be at all worth taking seriously. Literally everything he says that isn't outright wrong is stolen from someone substantially smarter and more competent than him, and he doesn't even realize it.

Syritta posted:

(This has less to do with LessWrong than I thought it would when I started writing. Whoops.)

SolTerrasa explained the AI side of this, but I thought I'd go into the decision theory. Sorry if anyone's done so already. Or if I make mistakes, I'm not a statistician.

Let's use the constant of probability discussions: a coin flip. You have a coin and you want to decide how weighted it is. Depending on the weight you'll get 50% heads 50% tails, or 75% heads 25% tails, or whatever. Any one of these hypothetical coins is described by a probability distribution. For example, for the fair 50-50 coin, it's a discrete distribution that's .5 at 0 (tails), .5 at 1 (heads), and zero everywhere else. The 75-25 coin, similarly, would have a distribution of .25 at 0 and .75 at 1.

These distributions are obviously related. We can say they're part of a "family" of distributions, and each member of the family is uniquely identified by a "parameter". In this case the parameter can just be the percentage of the time the coin would land heads.

In Bayesian inference, we treat this parameter as being described by a probability distribution as well. Unlike the coin distributions, which could be construed as corresponding to physical facts if you're a dirty frequentist, this distribution is intended to be a description of the knowledge of an investigator. So for example, if somebody has good reason to believe that the coin is either fair or always lands heads, we could describe that with a distribution where p=.5 has a probability of .5, p=1 has a probability of .5, and everything else has a probability of zero. Or just as well we could use a normal distribution, where .5 seems most likely and it smoothly drops off to either side.

For the actual inference part, what we want to do is take some data (coin flip results) and adjust our belief distribution in some sensible way to reflect this data. For example, if we flip the coin a hundred thousand times and get about fifty thousand heads, we probably want to think that it's a fair coin is more likely than that it always lands heads. This process is also what's called "Bayesian updating".

The adjustment process is described by Bayes' law: P(A|B) = P(B|A)P(A)/P(B). In English: the probability of A, given B, equals the probability of B, given A, times the probability of A in general, divided by the probability of B in general. "given" in the Bayesian context can be thought of as knowledge - the probability that the coin is fair given that we've flipped it a hundred thousand times and gotten so and so results, for instance.

Writing it out with the coin example we get something like P(p = .5|coin observations) = P(coin observations|p = .5)P(p = .5)/P(coin observations). By doing this for all possible p, we get a new distribution (the P(p = x|coin observations)), our "updated" beliefs.
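
To make that concrete, here's a quick sketch in Python (my own toy example, not anything from LW; it's just the two-hypothesis coin from above):

code:

# Toy Bayesian update: is the coin fair (p = 0.5) or does it always land heads (p = 1.0)?
# We start indifferent between the two hypotheses and update on observed flips.

def likelihood(p_heads, flips):
    """P(observed flips | hypothesis p_heads); flips is a string like 'HHTH'."""
    prob = 1.0
    for f in flips:
        prob *= p_heads if f == "H" else (1.0 - p_heads)
    return prob

def posterior(flips, hypotheses, prior):
    # Bayes' law: P(h|data) = P(data|h) * P(h) / P(data),
    # where P(data) is the sum of the numerators over all hypotheses.
    numerators = [likelihood(h, flips) * pr for h, pr in zip(hypotheses, prior)]
    total = sum(numerators)
    return [n / total for n in numerators]

hypotheses = [0.5, 1.0]   # fair coin, always-heads coin
prior = [0.5, 0.5]        # no idea which, to start with

print(posterior("HHHH", hypotheses, prior))  # always-heads now looks much more likely
print(posterior("HHTH", hypotheses, prior))  # a single tail kills always-heads outright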

Notice that this is actually really easy. The hardest part computation-wise is probably the sum (next paragraph). Bayesian inference itself is totally computable, and in fact, one of the main reasons Bayesian methods are used is because they're often computationally easier than the rival (and often older) frequentist methods.

Now to the intractability. Let's examine each term. P(coin observations|p = .5) is simple enough to calculate and I won't go into it. P(coin observations) may seem like a strange term, because how are we supposed to know that before we pick a hypothesis about how weighted the coin is? But in fact we can "just" sum over all possibilities (all p between 0 and 1). Anyway, this term is the same for all p, so it doesn't matter what it is if we're just doing relative weighting of hypotheses.

The problem is P(p = .5). What's this? It's a probability in our belief distribution - an initial, or prior distribution, from before we had any evidence. The thing we're updating in the first place. For Bayesian inference to work, essentially, we have to start somewhere.

There is in fact no obvious place to start. We could say, for instance, that we start out believing that p is uniformly distributed, that p is just as likely to be pi/4 as it is to be .6. This is called the "principle of indifference" and it's pretty common. Or we could figure coins are normally fair and go with the normal distribution. Or we could say the person who made the coin definitely wanted it to be totally unfair, but we are indifferent over how competent they are at making bad coins.
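
Another quick sketch (again mine, nothing official) showing how much the choice of prior matters when you only have a little data, here on a grid of candidate weightings:

code:

import numpy as np

# Grid of candidate weightings p (chance of heads) and two different priors over them.
p_grid = np.linspace(0.0, 1.0, 101)

uniform_prior = np.ones_like(p_grid) / len(p_grid)          # principle of indifference
normal_prior = np.exp(-0.5 * ((p_grid - 0.5) / 0.1) ** 2)   # "coins are normally fair"
normal_prior /= normal_prior.sum()

def update(prior, heads, tails):
    # Unnormalized posterior = likelihood of the data under each p, times the prior;
    # dividing by the total is the P(coin observations) term from above.
    like = p_grid ** heads * (1.0 - p_grid) ** tails
    post = like * prior
    return post / post.sum()

# With only 7 heads and 3 tails the uniform prior peaks near p = 0.7, while the
# "coins are normally fair" prior stays pulled back toward 0.5. After a hundred
# thousand flips the two posteriors would be nearly indistinguishable.
print(p_grid[update(uniform_prior, 7, 3).argmax()])
print(p_grid[update(normal_prior, 7, 3).argmax()])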

If we pick a particularly pathological prior, in fact, we can make a Bayesian reasoner come up with psychotic results. You can see some examples on Cosma Shalizi's blog.

Now as far as Bayesian methods in say, sciences, go, this isn't too much of a problem. We go with some prior and on the rare occasion it seems to get implausible results we choose some other one. There's some concern with people deliberately picking priors to get results they want, but on the whole, Bayesian methods are considered pretty reliable and useful.

No good for AI though. Can't have this magic. So this guy Solomonoff, interested in this problem, came along with a "universal" prior distribution. It's universal in that, if we assume that the universe can be described by some computable distribution, it always works. If we start with this distribution Bayesian inference will always take us to the real distribution, and pretty fast.

An AI guy, Marcus Hutter, took this and ran with it and came up with this "AIXI" theory/design/thing. I don't know what it stands for, but he also calls it "Universal artificial intelligence". The basic idea is that any intelligent agent should work by this sort of inference based on the Solomonoff prior. You can read more about that on his embarrassingly academic website. One result, from there, is "The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent." where "agent" means basically anything that acts. Pretty broad.

I should mention that neither Hutter nor Solomonoff are or were involved with LessWrong as far as I know. They are/were real mathematicians and smart peeps. They are also, however, outside the mainstream of AI research. I figure these theories are what LW is going to end up with if they keep going and learn some math, is all. Also it looks like their wiki has a page on it in which they are concerned about it not having a self-model, which is interestingly practical for LW but happens to be irrelevant to the formalism.

Anyway that sounds great right? Universal prior. Right. What's it look like? Way oversimplifying, it rates hypotheses' likelihood by their compressibility, or algorithmic complexity. For example, say our perfect AI is trying to figure out gravity. It's going to treat the hypothesis that gravity is inverse-square as more likely than a capricious intelligent faller. It's a formalization of Occam's razor based on real, if obscure, notions of universal complexity in computability theory.

But, problem. It's uncomputable. You can't compute the universal complexity of any string, let alone all possible strings. You can approximate it, but there's no efficient way to do so (AIXItl is apparently exponential, which is computer science talk for "you don't need this before civilization collapses, right?").
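
If you want a feel for the "simpler strings get more weight" idea without the uncomputable part, you can fake it with an off-the-shelf compressor (my own crude hack, emphatically not Solomonoff's actual measure):

code:

import random
import zlib

# Crude stand-in for algorithmic complexity: length of the zlib-compressed string.
# Real Kolmogorov complexity is uncomputable; this only shows the flavor of
# "regular hypotheses get shorter descriptions, hence more prior weight".

def description_length(s: str) -> int:
    return len(zlib.compress(s.encode()))

regular = "HT" * 500                                        # a very regular run of 1000 flips
random.seed(0)
noisy = "".join(random.choice("HT") for _ in range(1000))   # a patternless run

print(description_length(regular))  # small: the pattern compresses well
print(description_length(noisy))    # much larger: no long-range pattern to exploit

# A Solomonoff-style prior weights a hypothesis roughly like 2**(-description length),
# so the regular pattern ends up with vastly more prior probability than the noise.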

So the mathematical theory is perfect, except in that it's impossible to implement, and serious optimization of it is unrealistic. Kind of sums up my view of how well LW is doing with AI, personally, despite this not being LW. Worry about these contrived Platonic theories while having little interest in how the only intelligent beings we're aware of actually function.
Best post in this thread and it got zero attention. For shame.

Mandatory Assembly
May 25, 2008

it's time to get juche
Lipstick Apathy

The Vosgian Beast posted:

Roko's Basilisk was probably the first time I've seen LW's intellectual commitments lead them to an unpalatable conclusion. Everything else exists to make them feel good about their fate, place in the world, and their own intelligence.

Yeah, but it's part of the appeal for them. Their notions about the world have real weight because of Roko's Basilisk. Every frisson of terror they feel when contemplating it just roots them deeper into the idea that they've stumbled upon a deep and horrifying truth about the nature of the universe.

pentyne
Nov 7, 2012

Cardiovorax posted:

I am part of the educational "elite" that Yudkowsky can only wish and pretend he was part of, and this thread has made me seriously consider going over to his place, just to make fun of him for how absolutely and completely unqualified he is to have an opinion on anything and how completely laughable he is for having the presumption to believe that he could say anything more complicated than "the sun will probably rise tomorrow" and be at all worth taking seriously. Literally everything he says that isn't outright wrong is stolen from someone substantially smarter and more competent than him, and he doesn't even realize it.

People like him can't be reasoned with or humiliated, because their thought process has led them to think that their style of thought is inherently superior and any criticism stems from not understanding Bayes or his arguments. Even if you comprehensively broke down what Bayes actually is and the fallacies he makes, the most cogent response would be "You're not allowing yourself to think past what others have told you".

It would be way easier just to mock his AI institute for not producing anything at all in its entire existence, and, since he claims their work is "so dangerous", to ask why they haven't even published position papers that promote their ideas and studies without revealing the research.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I don't think I even care, it would just be incredibly cathartic to tell him off on his own ground. Makes me wish goonrushes were still a thing.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

pentyne posted:

It would be way easier just to mock his AI institute for not producing anything at all in its entire existence, and, since he claims their work is "so dangerous", to ask why they haven't even published position papers that promote their ideas and studies without revealing the research.

What do you think 'Harry Potter and the Methods of Rationality' is? :v:

Jazu
Jan 1, 2006

Looking for some URANIUM? CLICK HERE
I want there to be an AI movie where the AI understands its own limitations, but the people making it don't. They keep checking whether it's sending huge amounts of data over the internet that they don't understand, or whether there's some factory full of 3D printers being built in China by a shell corporation, but no. It's just emailing suggestions to scientists. And they're kind of trying to pretend they're not disappointed.

Iunnrais
Jul 25, 2007

It's gaelic.
What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious (the specific religion really wouldn't matter that much, but I'd suggest picking a religion not all that popular so that it would resonate as "betrayal" for a majority of the target audience)? I could see a lot of potential in that story.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
I don't think you could make a movie like that and not have it sound smug and condescending in some fashion. Even if you threw an arrow at a dart board to choose the religion, you'd have to make a good argument for why the AI has good reason to believe that this religion is right or end up making an argument for why it isn't right just by its absence. Either way someone is getting pissed.

Also, have you ever met someone who really viscerally gives you a feeling of "there but for the grace of God go I?" Because The Yud really manages to hit all my alarm buttons.

e:sp;

Cardiovorax fucked around with this message at 15:26 on Aug 19, 2014

Tiggum
Oct 24, 2007

Your life and your quest end here.


Iunnrais posted:

What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious

The Long Earth series by Terry Pratchett and Stephen Baxter features a Buddhist AI who claims to be a reincarnated Tibetan motorcycle repairman.

Strategic Tea
Sep 1, 2012

Iunnrais posted:

What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious (the specific religion really wouldn't matter that much, but I'd suggest picking a religion not all that popular so that it would resonate as "betrayal" for a majority of the target audience)? I could see a lot of potential in that story.

Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes.

pentyne
Nov 7, 2012

Strategic Tea posted:

Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes.

I would love to hear Yud's opinions on Asimov's writings. It would be a clusterfuck of "well educated but misunderstood" and "lacks the benefit of modern thought" while he ends up dismissing one of the smartest men of the 20th century as a naive misguided fool.

Speaking of "scientific journals mean nothing", after his prolific fictional writing career, Asimov honed his technical writing skills by inventing a fictional compound that he wrote a highly technical journal article with fake graphs, tables, images, citations etc. as practice. He was terrified it would backfire on his academic career, but during his Ph.D defense one of the committee members make a joking comment about it just to let him expand on its fictional properties.

http://en.wikipedia.org/wiki/Thiotimoline

Reminder: Asimov was a tenured professor of biochem who left his job because he made obscenely more money from writing.

Oh, and this is pure gold

quote:

Isaac Asimov was an atheist, a humanist, and a rationalist. He did not oppose religious conviction in others, but he frequently railed against superstitious and pseudoscientific beliefs that tried to pass themselves off as genuine science

pentyne fucked around with this message at 17:15 on Aug 19, 2014

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



Iunnrais posted:

What about a movie where an AI decides it wants to discard the atheistic humanist rationalism of its creators and become devoutly religious (the specific religion really wouldn't matter that much, but I'd suggest picking a religion not all that popular so that it would resonate as "betrayal" for a majority of the target audience)? I could see a lot of potential in that story.
I once read some story or something with the following logic:

A. The Qu'ran says that God made the djinn from smokeless fire, as he made man from clay
B. The Qu'ran is revealed, but expressed to a 6th century Arab
C. "Smokeless fire" is a pretty good vague approximation of electricity
D. An AI isn't the hardware it's on, but is "made" of the electrical states, thus "made" of smokeless fire
E. AIs are djinn and can become Muslims.

:getin:

Basil Hayden
Oct 9, 2012

1921!

Nessus posted:

I once read some story or something with the following logic:

A. The Qu'ran says that God made the djinn from smokeless fire, as he made man from clay
B. The Qu'ran is revealed, but expressed to a 6th century Arab
C. "Smokeless fire" is a pretty good vague approximation of electricity
D. An AI isn't the hardware it's on, but is "made" of the electrical states, thus "made" of smokeless fire
E. AIs are djinn and can become Muslims.

:getin:

This sounds really, really familiar and it's going to bug me all day that I can't think of what it's actually called or who it's by or whatever.

NGDBSS
Dec 30, 2009






Basil Hayden posted:

This sounds really, really familiar and it's going to bug me all day that I can't think of what it's actually called or who it's by or whatever.
That sounds like something from the various Budayeen stories? (Disclaimer: I've never actually read any of them.)

Freemason Rush Week
Apr 22, 2006

Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect:

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

Djeser
Mar 22, 2013


it's crow time again

It's written by the Givewell executive, not by someone from LW, which explains why, as I'm reading it, I can understand the development of thoughts and arguments in a coherent manner.

Hobnob
Feb 23, 2006

Ursa Adorandum

Basil Hayden posted:

This sounds really, really familiar and it's going to bug me all day that I can't think of what it's actually called or who it's by or whatever.

Bruce Sterling had a rather weird short story "The Compassionate, the Digital" about an AI that converts to Islam, it's been a while since I read it but it might follow a similar logic.

Remora
Aug 15, 2010

Would it loving kill Eliezer to speak as simply as the person he's responding to? I'm really, really trying to understand what he's saying and it's all just parsing as garbage.

Edit: I mean poo poo like taking Karnofsky's "tool AI" and insisting on referring to it as "non-self-modifying planning Oracle" because five words makes him sound smarter than two would

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Strategic Tea posted:

Asimov wrote a short story similar to that about a group of robots manning an offworld solar array or something. They convince themselves that the instructions they receive come from God, and that the distant point they beam energy to is him, not Earth. Earth doesn't exist because, hey, we've never seen any proof of it. The human inspectors who come by every so often clearly made the whole thing up to feel better about being God's lovely pre-robot prototypes.

I can't remember if it was the same story or not, but the one that I remember had a new super-smart robot on the satellite who converted all of the dumber drones to worshipping the energy array itself. The inspectors are up there trying to convince him that the energy array isn't conscious or anything. Then a solar storm fires up and the inspectors are terrified, because that might throw the beam off, and if the beam defocuses it could scar the planet. The super-smart robot locks the humans in a room and when they're let back out, they look at the output and see that the beam didn't even budge, because the super-smart robot handled it, because the energy array's laws decree that the beam does not defocus and the robot follows the law. The robot does not question the energy array's laws, it does not understand them, and it does not care- the law is the law and it will follow the law.

They conclude that the robot's beliefs don't matter for poo poo because it does its job and it does its job far better than any other system they've ever seen. As long as they're not harmful, there's no problem, and the inspectors conclude that they were kind of stupid to assume that the robot's beliefs were going to compromise its ability to function. It was a nice sentiment.

The Vosgian Beast
Aug 13, 2011

Business is slow

Remora posted:

Would it loving kill Eliezer to speak as simply as the person he's responding to? I'm really, really trying to understand what he's saying and it's all just parsing as garbage.

Edit: I mean poo poo like taking Karnofsky's "tool AI" and insisting on referring to it as "non-self-modifying planning Oracle" because five words makes him sound smarter than two would

Dug up from a text dump made on an old forum thread I can't find anymore but which I saved to a file, it's

YUDKOWSKY WRITES A RECIPE

A Parody of Yudkowsky posted:

Using a suitable DEVICE such as a yolk-seperator or the Yolkine Seperatus Technique mentioned in the Avine Reproduction Sequence. That is to say you separate the two parts of the egg, you may argue that the shell is part of the egg, but here we are strictly talking about the EDIBLE parts of the egg. You may ask why we define the egg-shell as inedible, but that is for another sequence.

Wham-a-bam the yolks till they appear creamish, at which point apply to the mixture two teacupfuls of extra-fine monosaccharide crystals to create a Vitellian Monosaccharide mixture. Here it may be worthwhile noticing that at this point the whites will be in a separate container and that you are not to mix them and the yolks at this stage! See the Mixture Assumption Error sequence.

Wham-a-bam the resulting Vitellian Monosaccharide mixture for five to ten minutes while maintaining steady supervision of the mixture to ensure that you reach optimal results. Then add two tablespoonfuls of milk or water, a measure of salt sufficient for this cake. It is worth noticing that despite this being a sweet sponge cake the salt is necessary as a flavour enhancer, much in the same way as MSG may be added to say Chinese take-out foods. The salt is not meant to provide flavour in itself, but, as I said, to enhance the flavour of other ingredients. This is a well known technique, but given how counter-intuitive it is it is a technique that is often either ignored by inexperienced Pastry Creationists or else done entirely by rote without fully understanding the underlying principles. This is why you will also add some flavouring at this stage.

Now add a fraction of the albumen, which you should have wham-a-bammed as well. Then add two cups of flour into which you have sifted two teaspoonfuls of baking powder; It is important to understand that the gas-development of the baking-powder is what helps turn this cake into a sponge. As the baking-powder is heated it releases vapours which creates many hollows in the body of the cake. Take the resulting mixture and slowly mix it into the Enhanced Vitellian Monosaccharide mixture, ensure that the mixture speed is the minimum necessary to combine the two ingredients.

To conclude mix in the remainder of the albumen. Line the baking containers with buttered paper. That is to say paper onto which butter has been applied to ensure that it will come loose easily when the baking process is over. Then fill the containers two-thirds full.

Strategic Tea
Sep 1, 2012

Somfin posted:

I can't remember if it was the same story or not, but the one that I remember had a new super-smart robot on the satellite who converted all of the dumber drones to worshipping the energy array itself. The inspectors are up there trying to convince him that the energy array isn't conscious or anything. Then a solar storm fires up and the inspectors are terrified, because that might throw the beam off, and if the beam defocuses it could scar the planet. The super-smart robot locks the humans in a room and when they're let back out, they look at the output and see that the beam didn't even budge, because the super-smart robot handled it, because the energy array's laws decree that the beam does not defocus and the robot follows the law. The robot does not question the energy array's laws, it does not understand them, and it does not care- the law is the law and it will follow the law.

They conclude that the robot's beliefs don't matter for poo poo because it does its job and it does its job far better than any other system they've ever seen. As long as they're not harmful, there's no problem, and the inspectors conclude that they were kind of stupid to assume that the robot's beliefs were going to compromise its ability to function. It was a nice sentiment.

That was the one. The controlling robot completely slipped my mind :shobon:

I seem to remember another story had an astronaut finding a family of kittens his friend had been raising in the back of his suit :3:. I need to read that book again.

Runcible Cat
May 28, 2007

Ignoring this post

Strategic Tea posted:

That was the one. The controlling robot completely slipped my mind :shobon:

I seem to remember another story had an astronaut finding a family of kittens his friend had been raising in the back of his suit :3:. I need to read that book again.
The kittens is by Arthur C Clarke, I think. He's suited up out in space and hears creeeepy noiiiiises and then something pats him on the back of the neck AAAAA!!!

Darth Walrus
Feb 13, 2012

Remora posted:

Would it loving kill Eliezer to speak as simply as the person he's responding to? I'm really, really trying to understand what he's saying and it's all just parsing as garbage.

Edit: I mean poo poo like taking Karnofsky's "tool AI" and insisting on referring to it as "non-self-modifying planning Oracle" because five words makes him sound smarter than two would

He's a cult leader. The obfuscation is deliberate.

Mr. Sunshine
May 15, 2008

This is a scrunt that has been in space too long and become a Lunt (Long Scrunt)

Fun Shoe

Mr. Horrible posted:

Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect:

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

This is a good read, and highlights why a number of things that lesswrongers take for granted are actually highly questionable. It also linked me to this article that Big Yud wrote about Pascal's Mugging.

Pascal's Mugging is basically a dude walking up to you and saying "Give me five bucks, or I use my magic powers to kill a gazzillion people" - only in yudspeak the mugger/wizard is "an outside agent spliced into our Turing tape" that threatens to "run a Turing machine that simulates and kills 3^^^^3 people".

The interesting thing here is that Big Yud realizes that it would be madness to give in to the demands of the mugger, but he is incapable of explaining why, since he cannot construct a formal Bayesian expression that dismisses the "3^^^^3 people" part.
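
For anyone who wants to see the hijacking spelled out, the naive expected-utility arithmetic goes something like this (my numbers are made up, and 3^^^^3 is far too big to even write down):

code:

# Toy Pascal's Mugging: even an absurdly tiny probability that the mugger is telling
# the truth gets swamped by a big enough threatened payoff in a naive expected-utility sum.
prob_mugger_is_honest = 1e-30
lives_threatened = 10 ** 100      # stand-in for 3^^^^3, which dwarfs even this
value_of_five_bucks = 5

expected_loss_if_you_refuse = prob_mugger_is_honest * lives_threatened
print(expected_loss_if_you_refuse > value_of_five_bucks)  # True: naive EU says pay the wizard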

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Mr. Horrible posted:

Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect:

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
This is hilarious for all the wrong reasons.

pentyne
Nov 7, 2012

Darth Walrus posted:

He's a cult leader. The obfuscation is deliberate.

I'd say it's more a symptom of the insecurities of his "self education" that he feels the need to speak/write like someone with a thesaurus and the OED at hand/how a "true" academic would write. I'd be really eager to see if he can actually engage someone verbally in a debate without frequent long pauses and doing that thing where pseudo-smart people stumble over the words they try to use to sound intelligent.

SolTerrasa
Sep 2, 2011

Mr. Horrible posted:

Has this been posted yet? One of the top executives at Givewell audited MIRI (back when it was called the Singularity Institute) to determine whether it was a legitimate organization worthy of donations and support. The results are about what you'd expect:

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

You know what's amazing? This post was written in 2012, two solid years ago. The two best points that it makes are (1) that SI has not convinced any experts of its value, and (2) that SI does not appear to have produced anything. Both responses say "well, yeah, those are true, but we're working on this 'open problems in friendly AI' sequence, and that will answer all the problems".

It is 2014. LessWrong does not contain an "open problems in friendly AI" sequence yet, but I'm sure it's coming any day now. Right after Big Yud finishes Methods of Rationality, surely. The best part of the whole thing is that it wouldn't even be hard to write! Nobody doubts that they are trying to solve a problem which is hard! Everyone doubts some other part of their argument/house-of-cards, like that they have a handle on a solution, or that they are at all the right people to solve it, or that it needs solving right now, or that more money will help them.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
"Open problems in friendly AI" is on the level of "open problems in intergalactic FTL travel" for how completely divorced it is from any real-world considerations.

Darth Walrus
Feb 13, 2012
I love this bit from that article:

quote:

Apparent poorly grounded belief in SI's superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.

Yet I'm not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.

I have been pointed to the Sequences on this point. The Sequences (which I have read the vast majority of) do not seem to me to be a demonstration or evidence of general rationality. They are about rationality; I find them very enjoyable to read; and there is very little they say that I disagree with (or would have disagreed with before I read them). However, they do not seem to demonstrate rationality on the part of the writer, any more than a series of enjoyable, not-obviously-inaccurate essays on the qualities of a good basketball player would demonstrate basketball prowess. I sometimes get the impression that fans of the Sequences are willing to ascribe superior rationality to the writer simply because the content seems smart and insightful to them, without making a critical effort to determine the extent to which the content is novel, actionable and important.

I endorse Eliezer Yudkowsky's statement, "Be careful … any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility." To me, the best evidence of superior general rationality (or of insight into it) would be objectively impressive achievements (successful commercial ventures, highly prestigious awards, clear innovations, etc.) and/or accumulation of wealth and power. As mentioned above, SI staff/supporters/advocates do not seem particularly impressive on these fronts, at least not as much as I would expect for people who have the sort of insight into rationality that makes it sensible for them to train others in it. I am open to other evidence that SI staff/supporters/advocates have superior general rationality, but I have not seen it.

Darth Walrus fucked around with this message at 18:06 on Aug 20, 2014

SolTerrasa
Sep 2, 2011

Double posting, sorry, but I found the paper where MIRI/SI explains why they think that AI is the problem that needs to be solved first, and the more I read the more I think that "argument/house-of-cards" like I said earlier is accurate. Here is the fundamental basis of their argument in "Intelligence Explosion, Arguments and Import". Note that this is one of their few public papers which has been published. It was published in a non-peer-reviewed one-time, special-topic philosophy "journal".

Anyway. One thing I've been noticing is the incredible frequency with which they say "sorry, we can't explain this right now, it's just too complex, but we refer you to X, Y, and Z". I recommend to anyone with a background in AI to actually follow those references. I have been doing so for the past few hours and I've found that they almost always refer (sometimes after a non-obvious or unreasonably-long chain of references; X points to Y to prove a point, but Y only makes that point by citing Z, etc) to a self-published paper by MIRI or SI or whatever it happens to be called that year, or to a blog post somewhere (amazingly, blog posts are cited as if they're real work, I mean they literally typed "[Yudkowsky 2011] " to cite LessWrong or whatever). Or, my personal favorite, they refer to a forthcoming paper. This is what I mean by a house-of-cards argument, it's all built on assumptions which are straight-up nonsense, but so MANY of them that it appears to hold together.

This particular paper, "IE:AI", starts off with all of those. They "prove" that strong AI is possible by citing a speculative fiction work by Vernor Vinge (a novelist), a blog post by Big Yud, two self-published papers by MIRI, and a forthcoming no-really any-day-now paper by another MIRI dude.

Then, having proved it possible, they set off to prove that it is inevitable, and coming soon. They waste three pages explaining that "predictions are hard", and they throw in a joke about weather forecasters??? Anyway they explain that there are a lot of things that are going to promote the creation of AI, like (I'm quoting here) "more hardware" and "better algorithms". They conclude this point with (again, quoting) "it seems misguided to be 90% confident that we will have strong AI in the next hundred years. But it also seems misguided to be 90% confident that we will not."

Masterstroke, guys. Color me convinced.

The interesting part of the article ends here, there's 10 more pages but they're all exactly what you'd expect. They take the previous section (with its citations of novelists and blog posts and nonexistent papers) as proven, then move on to a new argument which uses the same bullshit base, plus their "argument" from the previous chapter, to stack a new card on top of the obviously-stable-why-are-you-asking structure.

Freemason Rush Week
Apr 22, 2006

SolTerrasa posted:

Double posting, sorry, but I found the paper where MIRI/SI explains why they think that AI is the problem that needs to be solved first, and the more I read the more I think that "argument/house-of-cards" like I said earlier is accurate. Here is the fundamental basis of their argument in "Intelligence Explosion, Arguments and Import". Note that this is one of their few public papers which has been published. It was published in a non-peer-reviewed one-time, special-topic philosophy "journal".

Anyway. One thing I've been noticing is the incredible frequency with which they say "sorry, we can't explain this right now, it's just too complex, but we refer you to X, Y, and Z". I recommend to anyone with a background in AI to actually follow those references. I have been doing so for the past few hours and I've found that they almost always refer (sometimes after a non-obvious or unreasonably-long chain of references; X points to Y to prove a point, but Y only makes that point by citing Z, etc) to a self-published paper by MIRI or SI or whatever it happens to be called that year, or to a blog post somewhere (amazingly, blog posts are cited as if they're real work, I mean they literally typed "[Yudkowsky 2011] " to cite LessWrong or whatever). Or, my personal favorite, they refer to a forthcoming paper. This is what I mean by a house-of-cards argument, it's all built on assumptions which are straight-up nonsense, but so MANY of them that it appears to hold together.

This particular paper, "IE:AI", starts off with all of those. They "prove" that strong AI is possible by citing a speculative fiction work by Vernor Vinge (a novelist), a blog post by Big Yud, two self-published papers by MIRI, and a forthcoming no-really any-day-now paper by another MIRI dude.

Then, having proved it possible, they set off to prove that it is inevitable, and coming soon. They waste three pages explaining that "predictions are hard", and they throw in a joke about weather forecasters??? Anyway they explain that there are a lot of things that are going to promote the creation of AI, like (I'm quoting here) "more hardware" and "better algorithms". They conclude this point with (again, quoting) "it seems misguided to be 90% confident that we will have strong AI in the next hundred years. But it also seems misguided to be 90% confident that we will not."

Masterstroke, guys. Color me convinced.

The interesting part of the article ends here, there's 10 more pages but they're all exactly what you'd expect. They take the previous section (with its citations of novelists and blog posts and nonexistent papers) as proven, then move on to a new argument which uses the same bullshit base, plus their "argument" from the previous chapter, to stack a new card on top of the obviously-stable-why-are-you-asking structure.

I think part of what makes it so egregious is the complete lack of effort in addition to the seriousness of the claims. I've read social science papers where they could only get nine people to participate in a longitudinal study, but the researchers still tried to make statements about the general human population (e.g. "children of divorced parents don't have lower self-esteem than the average person"). But at least they were trying to collect data, and with enough similar studies we could answer that question with reasonable certainty. Moreover, the truth of that hypothesis isn't quite as world-changing as "AI is literally the most efficient and most effective solution to all of our problems and it should be the primary focus of our species (read: all the bright, gifted superstars who are clever enough to give Eliezer money)."

military cervix
Dec 24, 2006

Hey guys

Do you have a link to the paper?

SolTerrasa
Sep 2, 2011

The Erland posted:

Do you have a link to the paper?

Let me see if I can remember where I found it.

Yeah, here it is:
Intelligence.org/files/IE-EI.pdf

Agh, rereading this nonsense really hurts me. One of their arguments that AI explosions will probably happen in the next century is that we've made so much progress since the first Dartmouth conference on AI. For non-AI people, maybe this makes sense, but everything we have done since then has been so much harder than we thought it would be. At the Dartmouth conference, we thought machine vision, the problem of "given some pixels, identify the entities in it" would be solved in a few months. It's been 50 years and we're only okay at it. If there is ANYTHING we should have learned from Dartmouth it's that everything in AI is always harder than it seems like it should be. Why you'd use that as an example baffles me.

gently caress, there's just so much WRONG with this that I have to force myself to stop typing and get back to work.

Mors Rattus
Oct 25, 2007

FATAL & Friends
Walls of Text
#1 Builder
2014-2018

SolTerrasa posted:

Agh, rereading this nonsense really hurts me. One of their arguments that AI explosions will probably happen in the next century is that we've made so much progress since the first Dartmouth conference on AI. For non-AI people, maybe this makes sense, but everything we have done since then has been so much harder than we thought it would be. At the Dartmouth conference, we thought machine vision, the problem of "given some pixels, identify the entities in it" would be solved in a few months. It's been 50 years and we're only okay at it. If there is ANYTHING we should have learned from Dartmouth it's that everything in AI is always harder than it seems like it should be. Why you'd use that as an example baffles me.

And even then, it's so primitive. The best we've got can, just about, distinguish textures.

It then can't tell the difference between a polar bear and a mountain goat. It can, just about, tell that they are not a mountain.

  • Locked thread