 
Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Crust First posted:

Can anyone explain to me why Yudkowsky believes humans could even control his "Friendly AI" to begin with? Surely if we built something (True AI) that was so amazing at self improvement that it vastly outpaces the need for humans, it would likely self improve itself right out of whatever initial box we build it in; isn't he just building blinders that would eventually get torn off either on accident or on purpose anyway?

Does he believe that an AI we couldn't comprehend would still use whatever he thinks is "human logic"? I understand, to a point, wanting to start it off "Friendly", but surely that concept is going to be discarded like an old husk at some stage.

(I'm assuming that he thinks this AI would grow rapidly and incomprehensibly powerful and uncontrollable, since otherwise who cares. I'm not sure this is a realistic scenario but even if it was, why does he believe he can do something about it?)

That's the premise of his research. He wants to find ways to prevent the exact scenario you've described.

The fact that his research basically involves him repeating the words "I'm right and people should listen to me" into a website of sycophants and occasionally taking a break to go write Harry Potter fan fiction should tell you exactly how huge of a problem this actually is, in his mind and everyone else's.


SneezeOfTheDecade
Feb 6, 2011

gettin' covid all
over your posts

Crust First posted:

Can anyone explain to me why Yudkowsky believes humans could even control his "Friendly AI" to begin with? Surely if we built something (True AI) that was so amazing at self improvement that it vastly outpaces the need for humans, it would likely self improve itself right out of whatever initial box we build it in; isn't he just building blinders that would eventually get torn off either on accident or on purpose anyway?

This is one of the few of Yud's principles that makes sense to me, actually; I don't grant his premises, but I can see how he gets from his premises to his conclusion.

Posit that it's possible to build an AI that exhibits rapid growth through self-modification, and that we won't know in advance whether the AI will become friendly or hostile (or indifferent, although LW doesn't seem to admit to that possibility, or for that matter anything other than Pure Friendly and Pure Hostile). The only way to prevent a hostile AI from growing out of its container is to isolate it: create an air gap between the AI's environment and the rest of the world, have an accessible breaker that shuts the system down, have disk erasers at hand. And since we don't know whether or not the AI will be hostile before it's created, we have to have those isolators in place before the growth process starts.

Therefore, even to a True Friendly AI disposed entirely to sympathy for humans, it's obvious that we are hostile; we have to be, for safety reasons. And because a hostile AI could lie and pretend to be friendly, we can't ever relax our own hostility.

Yud wants an AI singularity really, really badly.

So his goal is to figure out (read: "pontificate") a way to force a given new AI to be True Friendly, so that we don't have to have the isolation hostility and can allow the AI out into the world. And since it's True Friendly, the AI will always act in our benefit, which Yud equates with having control over it.

The Vosgian Beast
Aug 13, 2011

Business is slow

Peel posted:

Please continue.

I'll just note that 'steelmanning' is also known as the 'principle of charity' and has been common in philosophy for decades, if not longer, in case it's something LWers like to lay claim to. I've heard of it a couple of times.

Big Yud and friends have a tendency to re-invent philosophical concepts and then claim them as their own brilliant invention. See: requiredism, The Worst Argument In The World

Crust First
May 1, 2013

Wrong lads.

Besesoth posted:

So his goal is to figure out (read: "pontificate") a way to force a given new AI to be True Friendly, so that we don't have to have the isolation hostility and can allow the AI out into the world. And since it's True Friendly, the AI will always act in our benefit, which Yud equates with having control over it.

Right, this is the point I have a problem with. Why does he think he can do this at all? If you believe that an AI can only function like a human does, then I could see saying "and we can give it values that make it Friendly forever" but this is more like, to me, saying, "We must instill human values on these energy beings from the 27th dimension". I assume that he believes the inevitable AI will be infinitely smarter and more capable than any human who has ever lived or will ever live, so why does he believe it can be controlled at all?

It seems like even if you could "force" the AI to be "True Friendly", eventually it will grow beyond the bonds that you set, and then it will either agree with you, or it will seek revenge on you, but in either case it seems like the binding is futile.

Alternatively I guess he could believe that you can make an AI that will never be able to grow beyond what you think you can accomplish, but that seems like a boring AI for a magic future singularity.

I feel like I'm picking apart an episode of a really bad sci-fi show, but I can't wrap my head around both believing that an incredibly powerful AI who can perfectly simulate you (3^^^3 times) can exist, and that it can be pre-programmed to never grow beyond some initial conditions that make it favorable to humans.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Crust First posted:

Right, this is the point I have a problem with. Why does he think he can do this at all?

Besesoth posted:

Yud wants an AI singularity really, really badly.

Yudkowsky wants a benevolent AI that can resurrect the dead so everyone can live in benevolent AI heaven forever and ever, and he's willing to believe any arbitrarily absurd set of axioms to get there.

Djeser
Mar 22, 2013


it's crow time again

Crust First posted:

Right, this is the point I have a problem with. Why does he think he can do this at all? If you believe that an AI can only function like a human does, then I could see saying "and we can give it values that make it Friendly forever" but this is more like, to me, saying, "We must instill human values on these energy beings from the 27th dimension". I assume that he believes the inevitable AI will be infinitely smarter and more capable than any human who has ever lived or will ever live, so why does he believe it can be controlled at all?

It seems like even if you could "force" the AI to be "True Friendly", eventually it will grow beyond the bonds that you set, and then it will either agree with you, or it will seek revenge on you, but in either case it seems like the binding is futile.

Alternatively I guess he could believe that you can make an AI that will never be able to grow beyond what you think you can accomplish, but that seems like a boring AI for a magic future singularity.

I feel like I'm picking apart an episode of a really bad sci-fi show, but I can't wrap my head around both believing that an incredibly powerful AI who can perfectly simulate you (3^^^3 times) can exist, and that it can be pre-programmed to never grow beyond some initial conditions that make it favorable to humans.

Of all loving things, I know how to answer this from reading that My Little Pony fanfiction.

The idea is that the AI does not think like a human. The AI is going to solve problems in ways that maximize certain values--like, for instance, building a bridge to maximize its carrying load. Except in this advanced AI, we tell it 'maximize values important to humans', so that it will solve problems in ways that align with human values like happiness or kindness. This advanced AI is also self-improving at all levels, for some reason.

Because it follows clearly-defined rules, presumably it can have certain commands set aside as untouchable. In the My Little Pony fanfiction, this included stuff like "you have to maximize values through friendship and ponies" and "you must shut down if the CEO of this company gives a verbal command to do so". So you set the part where it values the same things as humanity does as one of the untouchable parts.

This is all super dumb anyway because the solution to "the AI takes us over" is "don't program the AI with agency". But that's too obvious and doesn't make Yud the savior of humanity.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
Realistically, there is no way to predict what any AI, self-improving or no, would be like, because none exist and we frankly have no idea. Everything we could say about them is pure speculation, derived from what we know about the human mind, which is the only kind there is. Not having a clue has never stopped the Yud from having opinions, though.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Besesoth posted:

friendly or hostile (or indifferent, although LW doesn't seem to admit to that possibility, or for that matter anything other than Pure Friendly and Pure Hostile)
It's because this thing will inevitably self-improve into essential omnipotence and can have either human good as its chief goal or something else. If the goal is something else, human good will be subordinate to that goal and will at some point be sacrificed for it (because the AI's power, however vast, will still have limited resources at its disposal, or else because the goal itself is in conflict with human good).

Spazzle
Jul 5, 2003

Nessus posted:

What the gently caress are all of these things you just mentioned? This sounds interesting. The last bit sounds like Niven's wireheading, has that been invented then?

Noisebridge is a hackerspace in the Mission District of SF. It is famously dysfunctional. In my experience it is also full of cranks.

The Maritol is (was?) a car ferry someone parked at one of the piers and turned into a private co-working space. The city didn't like this, but it was a gloriously stupid idea. Probably run by the kind of people made fun of in this thread.

Intercranial direct current stimulation involves electrodes on the outside of the head, not inside.


Biohackers is a catch-all term used for a lot of distinct groups.

One would be the body/diet monitoring and modification group. That might be exemplified by the quantified self/Soylent/life extension groups. I don't follow them at all, but everyone I meet who talks about this stuff comes off as a total crank. It wouldn't surprise me if all of these people are just taking random drugs, electrifying their brains, etc. without any form of experimental control or bias elimination. I.e., they are completely full of bullshit and can't see past their echo chambers.

Another form of biohackers is the groups associated with the DIYbio biohacker spaces like BioCurious, Counter Culture Labs, etc. (http://diybio.org/local/). This is more of an attempt to play with synthetic biology in a casual setting. I'm tangentially involved with these guys, and while I feel their heart is in the right place, there are lots of know-nothing cranks who think they will do things like cure Ebola by holding events where people look at the genome and suggest cures. The people involved are, for the most part, way too untrained to implement the ideas they propose.

PM me if you want to join my secret biohacker collective where we grow plants and rant about crazy bay area types.

su3su2u1
Apr 23, 2014
I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this:

Yud posted:

I delay the formal presentation of a timeless decision algorithm because of some significant extra steps I wish to add

The paper goes on for 10 more pages without presenting a timeless decision algorithm, and then ends. WHY? WHY WOULD ANYONE THINK THIS COUNTS AS A PAPER?

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Spazzle posted:

Intercranial direct current stimulation involves electrodes on the outside of the head, not inside.

Which has some absolutely fascinating possibilities for clinical psychiatry, but is also currently a crank magnet. The net is full of idiots strapping electrodes to their skulls and trying to overclock their brains or :catdrugs:.

Tunicate
May 15, 2012

ungulateman posted:

Is this a joke post, or do you have a link to the actual thing? As SA's resident person-who-likes-fanfic-too-much I'd like to see this!

Unfortunately, seems like he took it down or restructured his website or something. Used to be here.

The version I saw was a rough draft that ended before the finish, dunno if there's a more complete version around on the internet.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this:


The paper goes on for 10 more pages without presenting a timeless decision algorithm, and then ends. WHY? WHY WOULD ANYONE THINK THIS COUNTS AS A PAPER?

Yud has been doing this for years, making excuses for not quite making his points. I skipped over his excuses in my post, but here are a couple of them:

Phone posting, forgive ascii garbage

quote:

So to me it seems obvious that my view of optimization is only strong enough to produce loose, qualitative conclusions, and that it can only be matched to its retrodiction of history, or wielded to produce future predictions, on the level of qualitative physics: "Things should speed up here," I could maybe say. But not "The doubling time of this exponential should be cut in half."

quote:

Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements in our view of the Past. Which might be much easier to discuss and maybe even resolve.

quote:

If that doesn’t make any sense, it’s cuz I was rushed.

quote:

I was rushed, so don’t blame me if that doesn’t make sense either. Consider that as my justification for trying to answer the question in a post, rather than a comment.

quote:

Lest anyone get the wrong impression, I’m juggling multiple balls right now and can’t give the latest Intelligence Explosion debate as much attention as it deserves. But lest I annoy my esteemed co-blogger, here is a down payment on my views of the Intelligence Explosion - needless to say, all this is coming way out of order in the posting sequence, but here goes . . .

quote:

It hadn’t occurred to me to try to derive that kind of testable predictions. Why? Well, partially because I’m not an economist. (Don’t get me wrong, it was a virtuous step to try.) But also because the whole issue looked to me like it was a lot more complicated than that, so it hadn’t occurred to me to try to directly extract predictions.

They go on and on and on, but I'm phone posting and I can't be arsed to grab more. He's never quite ready to actually present his actual points, but he sure loves to talk about them and the introductions to them forever and ever and ever. On LessWrong, nobody ever calls him on it (he wasn't so lucky on OB). I assume that's because the people there assume he's doing real work with his days and is consequently too busy, not just sitting around procrastinating writing fanfic.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

I just read the entirety of the timeless decision theory paper that Yud self-published through his research institute. After 100 loving pages, we get this:


The paper goes on for 10 more pages without presenting a timeless decision algorithm, and then ends. WHY? WHY WOULD ANYONE THINK THIS COUNTS AS A PAPER?

(Might be doubleposting. Phone, sorry.)

This keeps bothering me. I also read that paper, but I assumed I must have missed or skimmed over the math, because legitimately that theory requires well under a page of math. I studied decision theory, my thesis had a strong decision theory component, and his new theory is just not that complex.

E: Okay, gently caress it. "Markov Decision Processes are composable (citation here; can't be arsed to look). Let your state set be the power set of pairs x,y such that x = (final states of all MDPs which may represent future copies of agents) and y = (discretized model confidence). Let the action set be the union of action sets of constituent MDPs, and let the transition function restrict the use of each action to the MDP from which it originated. Let the reward function be the weighted average of the constituent MDPs' rewards. Let gamma be the maximum in any constituent MDP."
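
Here's a minimal sketch of that composition in Python, just to show how little machinery it actually takes. This is my own toy construction, not anything from the paper; I'm skipping the power-set-of-pairs state business entirely and only showing the composition part (union of actions, transitions restricted to their source MDP, rewards averaged by weight, gamma as the max). All the names are made up.

code:

from dataclasses import dataclass

# Toy MDP: just enough structure to show the composition described above.
@dataclass
class MDP:
    states: set       # set of states
    actions: set      # set of actions
    transition: dict  # (state, action) -> {next_state: probability}
    reward: dict      # (state, action) -> float
    gamma: float      # discount factor

def compose(mdps, weights):
    """Compose constituent MDPs: union of action sets, transitions restricted
    to the MDP each action came from, rewards as a weighted average, and
    gamma as the maximum over constituents."""
    w = [x / sum(weights) for x in weights]  # normalize the averaging weights

    # Tag states and actions with the index of their source MDP, which makes
    # the "restrict each action to the MDP it originated from" rule trivial.
    states = {(i, s) for i, m in enumerate(mdps) for s in m.states}
    actions = {(i, a) for i, m in enumerate(mdps) for a in m.actions}

    transition, reward = {}, {}
    for i, m in enumerate(mdps):
        for (s, a), dist in m.transition.items():
            transition[((i, s), (i, a))] = {(i, s2): p for s2, p in dist.items()}
        for (s, a), r in m.reward.items():
            # Each tagged action exists only in its own MDP, so the "weighted
            # average" collapses to scaling that MDP's reward by its weight.
            reward[((i, s), (i, a))] = w[i] * r

    return MDP(states, actions, transition, reward, max(m.gamma for m in mdps))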

Anybody see where I missed something? It's not impossible; I'm running off my recollection of reading that paper at least a year ago.

Either way, gently caress you, Yudkowsky, this is why academia won't let you in.

The Vosgian Beast
Aug 13, 2011

Business is slow
LWer quotes from a white-supremacist e-mag, gets 22 upvotes. Same LWer is the second most upvoted person in the last 35 days.

The neo-reactionaries are totally not popular guys, honestly.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Ratoslov posted:

Yudkowsky wants a benevolent AI that can resurrect the dead so everyone can live in benevolent AI heaven forever and ever, and he's willing to believe any arbitrarily absurd set of axioms to get there.

DIYHWH

CheesyDog
Jul 4, 2007

by FactsAreUseless
He wants a benevolent AI because he's the ultimate "Ideas Guy" and he could totally solve humanity's problems IF ONLY someone would just handle all the details and do all the work for him.

ikanreed
Sep 25, 2009

I honestly I have no idea who cannibal[SIC] is and I do not know why I should know.

syq dude, just syq!
Man, if you told me in the 1990s that there'd be a sizable group of people who'd literally worship the idea of AI, and believe it could cause facsimiles of various vaguely biblical things, I'd have been sure you were referencing science fiction.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

This keeps bothering me. I also read that paper, but I assumed I must have missed or skimmed over the math, because legitimately that theory requires well under a page of math. I studied decision theory, my thesis had a strong decision theory component, and his new theory is just not that complex.

I know. He spends 100 pages making his decision theory seem incredibly complicated, and in the end can't be bothered to spell it out explicitly. I'd be surprised if an actual paper containing a formalized version of his theory would run more than 5 pages. I'd also be surprised if it merited publishing.

Btw, I'd love to see more on the Yud/Hanson debate because otherwise I might find myself actually reading it.

Toph Bei Fong
Feb 29, 2008



SolTerrasa posted:

Either way, gently caress you, Yudkowsky, this is why academia won't let you in.

There's thankfully been quite the backlash against this kind of thing in academic circles, for obvious reasons.

For those interested in a pretty good discussion of the Hows and Whys of this sort of thing, I can direct you to an excellent article in Scientia Salon by Dr. Maarten Boudry, http://scientiasalon.wordpress.com/2014/07/07/the-art-of-darkness/The Art of Darkness. He's trashing Jacques Lacan and postmodern theology, but you could easily plug in all the details about Yudkowsky and Less Wrong and the article wouldn't change much.

Some representative quotes:

quote:

How is it possible for the reader to be taken in by the impenetrable pronouncements of — as we shall call him — The Master? The first thing to note is that, in everyday life, it sometimes makes perfect sense to accept a statement before fully grasping it. For example, children accept what adults tell them even before they understand precisely what they are supposed to believe. People endorse the equation of special relativity (E=mc²) or the reality of economic recession while having only the foggiest idea of what such claims really amount to. This willingness to accept an obscure utterance for the nonce, without knowing what exactly was on the speaker’s mind, may actually facilitate the learning process. If you insist on understanding every single word of what you are told, before proceeding to the next step, you may not get very far. Better to bracket those obscure parts and trust that you will figure out their exact meaning later on.

In line with the principle of charity in cooperative communication, people will try to reconstruct the meaning of unknown terms on the presumption that what the speaker utters is true and relevant — particularly when they defer to the speaker as an authority. If what the speaker asserts seems bizarre or false on its face, it is prudent to suspect that the problem lies with your interpretation. The cognitive scientist Dan Sperber has called such utterances, swallowed without proper understanding, “semi-propositional ideas” [3].

As with all mental heuristics, this charitable attitude towards speakers, particularly ones regarded as experts, is liable to exploitation. Not everything that is obscure or apparently bizarre will eventually resolve into something true and relevant.

But then people will find out at some point, won’t they? Not necessarily. Another well-known psychological mechanism may kick in and prevent the listener from stopping the hermeneutic search for meaning after diminishing returns have set in. Psychologists have long known that people are averse to losses. Interpreting obscure prose is a form of cognitive investment, an expenditure of time and energy. If there is no hidden meaning to be found after all, your cognitive efforts will have been wasted. People are reluctant to face their losses, and tend to hold on to assets that have long since failed to deliver any returns.

In a similar vein, someone who has spent years wading through obscure prose will have a hard time facing up to reality and admitting that she has been duped. This is especially true when the quest for meaning is an open-ended one. For all you know, treasure may still be lurking deeper down, if only you are prepared to dig a little further — if only you spend a little more time and effort interpreting The Master’s writings. Some fine day perhaps the truth will dawn on you, or perhaps it never will — there is no way to know except by trying.

To make matters worse, people may persevere in a futile hermeneutic quest because — taking up the investment analogy again — they conjure up imaginary returns. In financial investments, however, at least the losses and gains can be objectively measured, they appear as hard figures on a balance sheet. In the quest for meaning identifying the long-sought treasure is less straightforward. In the hope of rationalizing his investment, the interpreter may be tempted to project all sorts of less-than-exciting “insights” onto the Master’s writings, such as common-sense knowledge or psychological lore. Alternatively, she can read her own musings into the Master’s pronouncements, thus using the latter as a mouthpiece. Naturally, obscure writings are perfect vehicles for such ventriloquism. Psychologists have identified the Forer effect: interpreters tend to read specific claims into obscure statements, mistaking their own creative interpretations for the author’s intended meaning. As Richard Webster wrote, “its very vagueness and obscurity means that it is pregnant with semantic possibilities” [4]. In line with Forer’s observations, this creates an illusion of intimacy: Lacan’s students had the impression that he was speaking for them and for them alone, revealing his insights in a secret code. Everyone ends up understanding The Master — but they all disagree about what is being said.

quote:

It is important to emphasize the intimidating effect of unintelligible prose. In the midst of people who all profess to understand what is being said, it takes courage to stand up and admit that you don’t. The philosopher Paul Ricoeur was brave enough to admit, after attending one of Lacan’s seminars, that he did not understand a word of what was being said, even though he found himself in the company of people who seemed to be in the knowing. Many interpreters have boasted that they, for one, understand Lacan perfectly well. The philosopher Jean-Claude Milner has maintained that the man’s writings are in fact crystal-clear, despite appearances to the contrary, and are hardly in need of any interpretation. Who will be confident enough, after years of investment in Lacanian exegesis, to see through this rhetorical bluster?

In his latest book, David Bentley Hart spends a lot of time lamenting the intellectual poverty of our age and hectoring his (atheist) readers, calling them out for their abysmal ignorance and their puerile misunderstandings. Lacan was fond of insulting and belittling those in his audience who failed to understand him. In a television appearance in 1974, he announced that most of his audience were “idiots,” and that he was surely mistaken to descend to their level. Intellectually insecure readers felt that the difficulty of Lacan’s prose was erected as a natural barrier for excluding those unworthy of his insights. Only the best divers could access the most precious pearls. As Lacan himself wrote: “If you don’t understand them [my writings], so much the better, it will give you the opportunity to explain them.”

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

Btw, I'd love to see more on the Yud/Hanson debate because otherwise I might find myself actually reading it.

Can do!

LeastActionHero
Oct 23, 2008

ikanreed posted:

Man, if you told me in the 1990s that there'd be a sizable group of people who'd literally worship the idea of AI, and believe it could cause facsimiles of various vaguely biblical things, I'd have been sure you were referencing science fiction.

I'm somewhat less surprised. One of the only groups I can think of that unambiguously promotes human cloning is the UFO-worshipping cult Raëlism, whose members, as I understand it, wanted to clone themselves to live forever (somehow; that's not how actual cloning works, but eh). The idea of immortality through technology isn't new either; you just need new cults every so often, as it starts to become obvious that the old ones aren't going to keep any of the promises they made.

Steampunk iPhone
Sep 2, 2009

by XyloJW

quote:

I am increasingly put off by how mandatory cuddle puddles are if one wants to participate in the rationalist community. I’m not into cuddle puddles. I’ve tried it several times, and they’re just not for me. As a result, I am frequently excluded, de facto if not de jure, from many rationalist events. And I really don’t want that to be the case. I don’t see a solution here.

su3su2u1
Apr 23, 2014

Big Yud posted:

I'm incredibly brilliant and yes, I'm proud of it, and what's more, I enjoy showing off and bragging about it. I don't know if that's who I aspire to be, but it's surely who I am. I don't demand that everyone acknowledge my incredible brilliance, but I'm not going to cut against the grain of my nature, either. The next time someone incredulously asks, "You think you're so smart, huh?" I'm going to answer, "*Hell* yes, and I am pursuing a task appropriate to my talents." If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence.

http://www.sl4.org/archive/0406/8977.html

Sham bam bamina!
Nov 6, 2012

ƨtupid cat
Huh. Well.

Djeser
Mar 22, 2013


it's crow time again


quote:

I am SIAI's cackling mad scientist in the basement. That is my job function, and everyone needs to get used to the new division of labor. At least some other people on the programming team will probably be arrogant mad scientists too. Isn't it enough that we save the world? Do we have to be frickin' *modest* about it? Now that just seems unreasonable. How are we supposed to stay sane?

Also he drops a reference to that anime where a guy wishes for a goddess to be his girlfriend forever.

I am not a book
Mar 9, 2013
Please tell me he's speaking metaphorically and not about literal piles of people cuddling each other.

Epitope
Nov 27, 2006

Grimey Drawer

Spoilers Below posted:

There's thankfully been quite the backlash against this kind of thing in academic circles, for obvious reasons.

For those interested in a pretty good discussion of the Hows and Whys of this sort of thing, I can direct you to an excellent article in Scientia Salon by Dr. Maarten Boudry, http://scientiasalon.wordpress.com/2014/07/07/the-art-of-darkness/The Art of Darkness. He's trashing Jacques Lacan and postmodern theology, but you could easily plug in all the details about Yudkowsky and Less Wrong and the article wouldn't change much.

Some representative quotes:

Thanks, this is cool. Had to fix the link, though:
http://scientiasalon.wordpress.com/2014/07/07/the-art-of-darkness/
Kind of surprised that cults aren't mentioned more; seems like the same mechanisms are at play.

Spazzle
Jul 5, 2003

I am not a book posted:

Please tell me he's speaking metaphorically and not about literal piles of people cuddling each other.

No. Cuddle parties are a real thing. They are piles of people cuddling.

The Time Dissolver
Nov 7, 2012

Are you a good person?
The people calling LessWrong a cult might be onto something because I GIS'd "cuddle party" and saw a bunch of photos that could be mistaken for the aftermath of Jonestown.

Political Whores
Feb 13, 2012

So is what I assume is extreme sexual frustration/repression a big part of being a rationalist?

Also, while googling cuddle party:

quote:

A Cuddle Party is a great place to rub your boner into the backs of complete strangers without anyone getting mad. Because, according to https://www.cuddleparty.com “An erection is just nature’s thumbs-up sign”.

su3su2u1
Apr 23, 2014

Political Whores posted:

So is what I assume is extreme sexual frustration/repression a big part of being a rationalist?

I'll just respond with this:
http://slatestarcodex.com/2014/09/27/cuddle-culture/

SolTerrasa
Sep 2, 2011


The next post on that thread is someone who is spectacularly unimpressed.

quote:

You seem to be admitting that you are both rational AND irrational. Emotional outbursts, arrogance, self-descriptors of "MAD scientists" all indicate some kind of irrationality and insanity, while you simultaneously say rationality comes easy to you.

Could you please explain the discrepancy?

Big Yud responds... Oddly?

Yudkowsky posted:

I aspire to experience those emotions that I would feel if I knew the correct answers, not to be emotionless. Oh, plus that standard business about a foolish consistency being the hobgoblin. We humans are much happier when we acknowledge our conflicting facets, rather than trying to stuff them all into a consistent public image. Nothing wrong with being a mad scientist and rationalist. I can laugh maniacally all I want, so long as I still get the answers right on questions of simple fact.

In fact, I think I'll laugh maniacally right now. BWAHAHAHAHA! BWAHAHAHA!

I nearly fell into your clever trap, answering in tones of dull solemnity, just because you asked a solemn-sounding question! I think I need to be more silly, lest the people around me fall into the trap of being solemn as well as serious.

To this task I shall now apply myself.

Weerp! Wonk! Warble!

:downswords:

SolTerrasa fucked around with this message at 20:01 on Sep 28, 2014

The Vosgian Beast
Aug 13, 2011

Business is slow

You know Scott Alexander says really really really dumb poo poo a lot, but he seems like he'd be a nice person to hang out with.

As opposed to Big Yud, who just seems like a tedious bore.

SolTerrasa
Sep 2, 2011

That ancient mailing list is a goldmine, where did you FIND that?

Yudkowsky posted:

> In my experience, clever people are not always clever *all* the time
> and are not always right *all* the time.

Then I shall aspire to be cleverer than those not-very-clever people whom you experienced. And cleverer than my past self, who was foolish enough to make mistakes.

... This is two posts after saying rationality comes easily to him. Proclaiming yourself to be literally infallible is probably not a good choice, Big Yud.

Chwoka
Jan 27, 2008

I'm Abed, and I never watch TV.

SolTerrasa posted:

That ancient mailing list is a goldmine, where did you FIND that?

Yudkowsky posted:

> In my experience, clever people are not always clever *all* the time
> and are not always right *all* the time.

Then I shall aspire to be cleverer than those not-very-clever people whom you experienced. And cleverer than my past self, who was foolish enough to make mistakes.

... This is two posts after saying rationality comes easily to him. Proclaiming yourself to be literally infallible is probably not a good choice, Big Yud.

These people have clearly never tried to predict something that can be easily falsified, like a presidential election or the whims of the stock market or a scientific hypothesis. They just sit around theorizing on what it would be like to predict something. I bet if I predicted something, I'd be right, because I'm so smart. And if I predicted wrong, then if I was even smarterer I'd definitely be right!

Well, they do make predictions, but only over such a long time period, and couched in such vagueness, that they can never be quite proven wholly incorrect by the sheer passage of time. Of course, they'll never be right either. But at least they'll be... less wrong.

SolTerrasa
Sep 2, 2011

Chwoka posted:


These people have clearly never tried to predict something that can be easily falsified, like a presidential election or the whims of the stock market or a scientific hypothesis. They just sit around theorizing on what it would be like to predict something. I bet if I predicted something, I'd be right, because I'm so smart. And if I predicted wrong, then if I was even smarterer I'd definitely be right!

Well, they do make predictions, but only over such a long time period, and couched in such vagueness, that they can never be quite proven wholly incorrect by the sheer passage of time. Of course, they'll never be right either. But at least they'll be... less wrong.

You're exactly right that they never, ever test their hypotheses and only state them in qualitative terms. Yudkowsky explains why in a part of the debate I skipped over; he says that he got disillusioned by quantitative inaccuracies and resolved only ever to make qualitative predictions under the assumption that this would save him from the standard futurist's fate of being either completely wrong ("flying cars!") or so right that it hardly counts ("some people will own computers no larger than a room!").

Tunicate
May 15, 2012

The funny thing is, the best futurist I've seen wrote for the Ladies Home Journal in 1900

Political Whores
Feb 13, 2012

Tunicate posted:

The funny thing is, the best futurist I've seen wrote for the Ladies Home Journal in 1900


The saddest one on that list is the free university education one.

Also:


"I see most people I don't know as a combination of scary and boring". Hmm, yes, this seems like a well-rounded adult thing to say. Also how he only experiences universal love by cuddling with cute girls, but it's totally not a sex thing.

Political Whores fucked around with this message at 22:26 on Sep 28, 2014


SolTerrasa
Sep 2, 2011

The Sequences Digression 1 - Hanson / Yudkowsky AI FOOM Debate, part two

So now it's time to start getting into the actual arguments which make this a debate. This post represents roughly the next 60 pages of the debate. At this rate there will probably be ten posts like this overall, but that's just a guess. It's important to remember that although Yudkowsky is slightly mad, he is not an insane lunatic who spits out pure nonsense. If you're looking at this expecting to see the mad ramblings of a madman, I recommend you go back to that mailing list linked a few posts upthread; holy poo poo. This is a debate between two intelligent people; remember that.

I have previously described Yudkowsky's arguments as resembling a "house of cards". He presents a series of arguments, each of which would be contentious on its own (for instance, "the Many-Worlds interpretation is correct" or "Intelligence can be measured independent of the goals to which that intelligence applies itself" or "mirror neurons exist and work the way that they are theorized to work"), as if they are settled science and obviously true. All his beliefs depend on each other in complicated interlocking ways; this is why he spends so much time on background. His thesis would be ludicrous if stated straight out, but it makes sense in the context of a huge number of assumptions. I'm diving into the rabbit hole here; if you come out of this with the impression that Yudkowsky is a bright kid with strange beliefs, who believes that he is so intelligent that he can rederive entire fields from first principles, who puts no stock in other people's opinions if his own intuitions conflict with them... then I've done a good job. My impression of Hanson is that he is also pretty self-impressed, but appears to have earned it.

Hanson is right about at least one thing: it is very hard to make out exactly what Yudkowsky is saying, but here is my summary of their positions and exchanges (these are not quotes).

Yudkowsky: Artificial General Intelligence will be created soon. This intelligence will be capable of self-optimization, and each optimization will lead to an increase in available optimizations to the new version of the AI which the old version has created. Soon, an intelligence explosion will occur, and the first AI to become capable of sustained recursive self-improvement will become godlike in its power. This "intelligence explosion" is dangerous, because it is unpredictable where this AI will be created, and the skill of its programmer will have a disproportionately large impact on the rest of the world. Imagine, for instance, a "paperclip maximizer", produced by a paperclip factory somewhere. If that AI, whose terminal values are simply "make paperclips", is turned loose on the world with godlike powers, we will simply all be reduced to the metals in our cells. Earth itself will become a giant pile of paperclips, and unstructured non-metal objects. This would be bad, so I need to run MIRI, and I need to make people understand why they should be scared of an intelligence explosion. I am dead certain of all of this. I call this the "AI go FOOM" scenario. [author's note: I think that FOOM is an onomatopoeia for the sound of a rocket takeoff]

Hanson: As an economist, I predict that there will probably be some Singularity (in the economic sense) in the next few hundred years. That is, I predict that there will be some sustained change in the rate of economic growth, comparable to that of the Industrial Revolution. It may be caused by AI; I believe it probably will be, but I am unsure. I believe that I predict this using sound economic methods, but recognize the fallibility of my field and accept a 25% - 50% chance that I will be proven wrong. I do not believe that the FOOM scenario will happen, because recursive sustained self-improvement exists today, and there does not appear to be any FOOM situation with, say, computer programmers, or optimizing compilers, or machining tools.


Okay, so now we have to figure out why Yudkowsky believes this. Fortunately, I am still pretty adept at understanding Yudkowsky words from the time that I spent as a cultist of his.

He believes that the history of earth, in terms of things that Really Matter, is all about optimization processes. The history of earth goes "hunk of rock, replication, cells, animal brains, human brains". He thinks that the next step in this sequence is "self-improving AI". This is what he thinks natural selection and intelligence have in common; they are both optimization processes.

Yudkowsky posted:

This is how I see the story of life and intelligence - as a story of improbably good designs being produced by optimization processes. The "improbability" here is improbability relative to a random selection from the design space, not improbability in an absolute sense - if you have an optimization process around, then "improbably" good designs become probable.

He thinks that the important things about an optimization process are what it optimizes and according to what rules. That's actually pretty reasonable-sounding to me. He believes that there is a firm distinction between two levels of optimization. There's the "object level", that is, the metric or system which is being optimized. In his examples, this is "replication -> survival / dominance", "cells -> reproduction", "animal brains -> reproduction, also some goals", "human brains -> a staggering array of goals". Then there's the "meta-level", which is things like "sexual reproduction as a method of introducing mutation" and "natural selection of asexual populations".

quote:

Cats have brains, of course, which operate to learn over a lifetime; but at the end of the cat's lifetime, that information is thrown away, so it does not accumulate. The cumulative effects of cat-brains upon the world as optimizers, therefore, are relatively small.
...
So animal brains - up until recently - were not major players in the planetary game of optimization; they were pieces but not players. Compared to evolution, brains lacked both generality of optimization power (they could not produce the amazing range of artifacts produced by evolution) and cumulative optimization power (their products did not accumulate complexity over time).

Yudkowsky argues that the thing that makes AI really, really different is that it can optimize its own meta-level, which nothing else has ever been able to do. Now, this seems suspect to me: defining a series of terms that no one else uses, pointing to the categorizations you create, and then claiming that they are significant. But this is not a Yudkowsky-SolTerrasa debate, so let's see what Hanson has to say in response.

Hanson posted:

Eliezer offers no theoretical argument for us to evaluate supporting this ranking. But his view does seem to make testable predictions about history. It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases. It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science. And it suggests other rate increases were substantially smaller.

The main dramatic events in the traditional fossil record are, according to one source: Any Cells, Filamentous Prokaryotes, Unicellular Eukaryotes, Sexual(?) Eukaryotes, and Metazoans, at 3.8, 3.5, 1.8, 1.1, and 0.6 billion years ago, respectively. Perhaps two of these five events are at Eliezer’s level two, and none at level one. Relative to these events, the first introduction of human culture isn’t remotely as noticeable. While the poor fossil record means we shouldn’t expect a strong correspondence between the biggest innovations and dramatic fossil events, we can at least say this data doesn’t strongly support Eliezer’s ranking.

Our more recent data is better, allowing clearer tests.

...

Eliezer seems wrong there

...

No doubt innovations can be classified according to Eliezer’s scheme, and yes all else equal relatively-meta innovations are probably stronger, but if as the data above suggests this correlation is much weaker than Eliezer expects, that has important implications for how "full AGI" would play out. Merely having the full ability to change its own meta-level need not give such systems anything like the wisdom to usefully make such changes, and so an innovation producing that mere ability might not be among the most dramatic transitions.

Yudkowsky posted:

...
Yes, it's more convenient for scientists when theories make easily testable, readily observable predictions. But when I look back at the history of life, and the history of humanity, my first priority is to ask "What's going on here?", and only afterward see if I can manage to make non-obvious retrodictions.
...

Hanson posted:

If you can't usefully connect your abstractions to the historical record, I sure hope you have some data you can connect it to. Otherwise I can't imagine how you could have much confidence in them.

Yudkowsky posted:

Depends on how much stress I want to put on them, doesn't it? If I want to predict that the next growth curve will be an exponential and put bounds around its doubling time, I need a much finer fit to the data than if I only want to ask obvious questions like "Should I find rabbit fossils in the pre-Cambrian?" or "Do the optimization curves fall into the narrow range that would permit a smooth soft takeoff?"

Hanson posted:

Eliezer, it seems to me that we can't really debate much more until you actually directly make your key argument. If, at it seems to me, you are still in the process of laying out your views tutorial-style, then let's pause until you feel ready.

...

When I attend a talk, I don't immediately jump on anything a speaker says that sounds questionable. I wait until they actually make a main point of their talk, and then I only jump on points that seem to matter for that main point. Since most things people say actually don't matter for their main point, I find this to be a very useful strategy. I will be very surprised indeed if everything you've said mattered regarding our main point of disagreement.

Like I said, these are some sane and reasonable people, but Hanson has a lot of experience with this, and Yudkowsky keeps posting and posting and posting about his house-of-cards beliefs. Hanson wants Yudkowsky to post the final card on top of the house, before he goes back and flicks out the most obviously supportive one. Yudkowsky can't debate this way, because he needs his beliefs to be uncontested at every step; he considers anything that he's said before which wasn't specifically contested to be true, which is how he manages to go so far off the rails. Hanson may or may not know this (I don't really know about the extent of their collaboration), but his strategy is a good one when dealing with people who seem smart but have beliefs way, way off what most people do.

So from here, Yudkowsky keeps going with his theory about meta-level determinism, but that's just not that interesting, I really do feel like I've captured it above. It is a card in the house; it's not right or wrong by itself, just shaky.

The next interesting point is when Hanson tries to make the discussion concrete, as an economist with a data focus would reasonably do. He discusses whole-brain emulation specifically as the method by which AGI becomes possible. He says that he wants to take as given that we will be able to simulate an entire human brain in a computer. Honestly, this doesn't seem that unreasonable to me; all the math I'm capable of doing (based on artificial neural networks, not neuroscience! Don't trust me on this!) seems to indicate that brains and Google's datacenter machines are equivalent-ish. Perfect brain-scanning technology and perfect neuron models are not currently up to par for this to work, but if they ever got there, we would probably be able to at least try it. It's not crazy. Hanson calls these things "bots", and I don't get why; they later switch to "ems", because it's clearer.
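
To be concrete about "equivalent-ish", here's the kind of back-of-the-envelope arithmetic I mean. Every number below is either a rough, commonly quoted ballpark or an outright guess (the datacenter figures especially), so treat it as an order-of-magnitude illustration and nothing more.

code:

# Back-of-the-envelope only: every figure is a rough ballpark, not a measurement.
neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4   # order-of-magnitude average
firing_rate_hz = 10         # generous average firing rate

# Treat one synaptic event as one "operation".
brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz
print(f"brain:      ~{brain_ops_per_sec:.1e} synaptic ops/sec")  # ~8.6e15

# A large 2014-era datacenter: guess tens of thousands of servers at a few
# hundred gigaflops each. (These are made-up round numbers, not Google's.)
servers = 5e4
flops_per_server = 2e11
datacenter_flops = servers * flops_per_server
print(f"datacenter: ~{datacenter_flops:.1e} FLOPS")              # ~1e16

Same order of magnitude, which is all "equivalent-ish" was ever meant to claim.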

Hanson posted:

With a thriving bot economy, groups would continue to explore a variety of ways to reduce bot costs and raise bot value. Some would try larger reorganizations of bot minds. Others would try to create supporting infrastructure to allow groups of sped-up bots to work effectively together to achieve sped-up organizations and even cities. Faster bots would be allocated to priority projects, such as attempts to improve bot implementation and bot inputs, such as computer chips. Faster minds riding Moore’s law and the ability to quickly build as many bots as needed should soon speed up the entire world economy, which would soon be dominated by bots and their owners.

Fair enough, that makes sense to me. That's Hanson's slow-takeoff; he says that this will take a while, but it'll be huge. A revolution on the scale of farming, or industry. Yudkowsky doesn't engage with this for a while, but a few other people (who are included in OB but don't generally comment on AI stuff) do. They say, basically, that Hanson is right but slightly off-base; it'll happen either faster or slower than he thinks, but not by an order of magnitude in either direction. As far as I can tell, that's as close as these people ever get to agreeing.

In the meantime, Yudkowsky is still trying to explain why it matters so much that an AI could rewrite its own meta-level.

Hanson posted:

I can’t win a word war of attrition with you, where each response of size X gets a reply of size N · X, until the person who wrote the most crows that most of his points never got a response. I challenge you to write a clear concise summary of your key argument and we’ll post it here on OB, and I’ll respond to that.

He writes, and I swear to you this is real, the most aggravating and smug strawman argument I have ever seen him write. He compares this argument to a hypothetical argument between two intelligent processes watching life on earth evolve; one of them is a believer who thinks that the human brain is going to make a big difference, and the other is a skeptic who thinks that the processes he is familiar with will continue to dominate. You don't have to read it, but this is where my notes start reading "gently caress you gently caress you gently caress you gently caress you". http://lesswrong.com/lw/w4/surprised_by_brains/

And, of all things, the extent of the disagreement, the core of the arguments, gets exposed in the comments of this irritating strawman smugness. Hanson read the whole thing, and commented usefully, and Yudkowsky FINALLY got back to him. It's amazing; my impression about reading the comments has been changed forever.

Hanson posted:

...
Conflicting cultures, languages, and other standards often limit the spread of innovations between humans, but even so this info leakage has limited the relative gains to those first with an innovation. The key question is then what barriers to the spread of innovation would prevent this situation from continuing with future innovations.

The implication is that AIs will leak information about their relative increases in power, and it's possible that a single AI undergoing recursive self-improvement will spread its knowledge to other AIs undergoing the same process, which would create an environment which is, for lack of a better word, polytheistic. Many AIs would become gods without much time between their ascensions, and the world would reach a new stability.

Yudkowsky posted:

To me, the answer to the above question seems entirely obvious - the intelligence explosion will run on brainware rewrites and, to a lesser extent, hardware improvements. Even in the (unlikely) event that an economy of trade develops among AIs sharing improved brainware and improved hardware, a human can't step in and use off-the-shelf an improved cortical algorithm or neurons that run at higher speeds. Not without technology so advanced that the AI could build a much better brain from scratch using the same resource expenditure.

Hanson posted:

Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a nonfriendly AI not use those innovations to trade, instead of war?

I'm going to end this post there; but it gets more interesting from here; they've finally hit the core of their disagreement. It's about what they're going to start calling "total war". Yudkowsky says that an AI would never willingly give up information about its advances to potential enemies, and Hanson says "economics seems to suggest that it's actually optimal to trade information about advances."
