  • Locked thread
LaughMyselfTo
Nov 15, 2012

by XyloJW

WilliamAnderson posted:

Look who noticed Big Yud now...

http://xkcd.com/1450/

Whoever loses, we win.

Ugly In The Morning
Jul 1, 2010
Pillbug

LaughMyselfTo posted:

Whoever loses, we win.

The alt-text is gold.
"I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people."

sat on my keys!
Oct 2, 2014

Neoreactionaries, why are you neoreactionary?

First comment contains poster's SAT score.

The Lord of Hats
Aug 22, 2010

Hello, yes! Is being very good day for posting, no?

bartlebyshop posted:

Neoreactionaries, why are you neoreactionary?

First comment contains poster's SAT score.

More importantly, they make mention of the fact that they read Calvin and Hobbes and Encyclopedia Brown as a kid.

uber_stoat
Jan 21, 2001



Pillbug

bartlebyshop posted:

Neoreactionaries, why are you neoreactionary?

First comment contains poster's SAT score.

" Like so many who fancied ourselves prodigies (I got a 1600 on my SAT, I read Calvin and Hobbes, Encyclopedia Brown..."

Vanguard of the counter-revolution. Nerdlings who think they're smart because, as children, they read stuff they think jocks didn't.

LaughMyselfTo
Nov 15, 2012

by XyloJW
If I thought Encyclopedia Brown made me smart, I'd probably be a neoreactionary too.

sat on my keys!
Oct 2, 2014

It's a good thing these intellectual titans weren't subject to the dysgenic effects of poor people mixing into their bloodlines.

Ratoslov
Feb 15, 2012

Now prepare yourselves! You're the guests of honor at the Greatest Kung Fu Cannibal BBQ Ever!

Almost nobody didn't read Calvin and Hobbes. It was a nationally syndicated comic strip. It isn't some kind of special magical signifier of being intelligent, it's a comic strip.

Namarrgon
Dec 23, 2008

Congratulations on not getting fit in 2011!
He made a new thing: How To Write Intelligent Characters.

I would quote my favourite part but that would be all of it and it is very short.

Tiggum
Oct 24, 2007

Your life and your quest end here.


Namarrgon posted:

He made a new thing: How To Write Intelligent Characters.

I would quote my favourite part but that would be all of it and it is very short.

quote:

you should read standard books of writing advice like "How to Write Science Fiction and Fantasy" by Orson Scott Card.
I haven't read that book, but I have read several of Card's Ender books, and I would not take his advice.

quote:

Level 1 Intelligent characters. Writing characters with an inner spark of life and optimization; not characters that do super-amazing clever things, but characters that are trying in routine ways to optimize their own life in a reasonably self-aware fashion.
I assume these "level one" characters are supposed to be normal, realistic people, right?

quote:

Every Level 1 Intelligent character wants to toss your precious plot out the window and will seize any available chance to do so.
Ugh. Why does every fanfic/NaNoWriMo writer seem to think that their characters are sentient agents working against them? You're writing it, idiot, it's all you.

quote:

Level 1 Intelligent characters will often have done some equivalent of having read the same books you have, which requires that you give them plots which cannot be solved just by having read similar books.
If I'm reading this right, he's saying that your characters should be as much like you as possible?

quote:

If the character does something novel or unexpected using widely available tools, the surrounding civilization must be such that other people wouldn’t have thought of it already.
:ironicat:

quote:

Thanks to the Illusion of Transparency, the best way to construct a mystery is to have some latent fact about the story, known to you, that is not spelled out explicitly in the text, and then make absolutely no effort to conceal this fact, except that you never literally say it out loud.
That sounds more like a plot hole than a mystery to me. :shrug:

HMS Boromir
Jul 16, 2011

by Lowtax

Tiggum posted:

If I'm reading this right, he's saying that your characters should be as much like you as possible?

I think what he's going for is what the teeveetorps call being "genre savvy" - a character knowing what kind of story they're in and how it tends to go, in other words having "read the same books you have" and thus knowing what to expect from a story inspired by them. It's still really dumb.

HMS Boromir fucked around with this message at 15:33 on Nov 21, 2014

neongrey
Feb 28, 2007

Plaguing your posts with incidental music.

Tiggum posted:

I haven't read that book, but I have read several of Card's Ender books, and I would not take his advice.

Naw, it's actually a decent advice book, if a bit dated-- a lot of the story conventions it discusses haven't really been done much in at least twenty years, if not more. But it's got good advice on stuff like keeping the purple out of your prose and how to handle exposition. He wrote the thing long before he disappeared up his own rear end, and his writing-as-craft skills aren't really in question.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

Tiggum posted:

Ugh. Why does every fanfic/NaNoWriMo writer seem to think that their characters are sentient agents working against them? You're writing it, idiot, it's all you.

It's not uncommon to hear actual writers say things like "I originally planned for Character X to do Y, but when the time came Character X decided to do Z instead." By which of course they don't actually mean that Character X is an independent agent with some mystical metafictional narrative-influencing powers rebelling against the author. It's just a way of saying that by the time they got around to writing a certain scene, they realized that the way they had written that character so far had made it clear that it would be out of character for them to do whatever the original plan was, so the author had them do something else instead that would suit their personality better.

Bad writers hear good writers say this sort of thing and imitate it because it makes them sound like better writers and makes the writing process sound more magical. But since they don't understand what the original statements actually meant, the things they say when they try to copy them end up making no sense at all, like implying that a sufficiently "intelligent" character will realize they're in a book and declare war on the author, or saying that their upper-crust schoolteacher character "chose" to actually have been a lower-class prostitute all along instead.

LaughMyselfTo
Nov 15, 2012

by XyloJW

Lottery of Babylon posted:

like implying that a sufficiently "intelligent" character will realize they're in a book and declare war on the author,

To be fair, this could happen in the same way you described as reasonable - but only if the story were so hopelessly poorly written that you could not possibly live in it without realizing exactly what it was. :v:

The Vosgian Beast
Aug 13, 2011

Business is slow

LaughMyselfTo posted:

Whoever loses, we win.

the fight begins

Remora
Aug 15, 2010


god bless the self-unaware

Telarra
Oct 9, 2012


And so far everyone's calling him out on it. :unsmith:

Sham bam bamina!
Nov 6, 2012

ƨtupid cat
I am Yud's complete lack of paragraph breaks.

Toph Bei Fong
Feb 29, 2008



Meanwhile, over in real AI research land, a team has mapped out the mind of a worm (Caenorhabditis elegans) and put it into a Lego robot... The resulting robot behaves like a worm.

https://www.youtube.com/watch?v=YWQnzylhgHc

http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html

Asgerd
May 6, 2012

I worked up a powerful loneliness in my massive bed, in the massive dark.
Grimey Drawer

That led me to this on the HPMOR subreddit

quote:

Harry is dropped into the canon of your favourite fictional universe. How quickly does he munchkin himself into omnipotence?

Among a ton of jerking off about how awesome Eliezer Harry is, I like this one:

quote:

Warhammer 40k: Harry doesn't stand a chance. He's killed for being a psyker before he can talk.

The Lord of Hats
Aug 22, 2010

Hello, yes! Is being very good day for posting, no?

Spoilers Below posted:

Meanwhile, over in real AI research land, a team has mapped out the mind of a worm (Caenorhabditis elegans) and put it into a Lego robot... The resulting robot behaves like a worm.

https://www.youtube.com/watch?v=YWQnzylhgHc

http://www.i-programmer.info/news/105-artificial-intelligence/7985-a-worms-mind-in-a-lego-body.html

Someone get that poor worm a proper body :ohdear:

How significant is this? I mean, a worm is a really simple organism, but the fact that we can do this even on a basic level seems really neat.

I am not a book
Mar 9, 2013
It strikes me as indicative of Yud's worldview that instead of e-mailing Munroe, which lends itself to authenticating the identity of the participants and the integrity of the message, he used reddit and just assumed that Munroe would see his message.

Lightanchor
Nov 2, 2012

The Lord of Hats posted:

How significant is this? I mean, a worm is a really simple organism, but the fact that we can do this even on a basic level seems really neat.

Not significant until it starts killing people.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Lightanchor posted:

Not significant until it starts killing people.
Worms are known for their violent rampages...

uber_stoat
Jan 21, 2001



Pillbug
Soon that worm will begin bootstrapping its own mental architecture. We need to lock this down now or the day of the Nematode is upon us.

The Vosgian Beast
Aug 13, 2011

Business is slow
This will bring us to the cyborg sandworm apocalypse. We must all become cyberpunk Fremen.

Darth Walrus
Feb 13, 2012

The Lord of Hats posted:

Someone get that poor worm a proper body :ohdear:

That's actually an interesting thought - would body dysmorphia become a thing with brain uploading and the like? What (additional) psychological issues might this lot's dreams of becoming eight-penised cyberdragons give them?

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

The Lord of Hats posted:

Someone get that poor worm a proper body :ohdear:

How significant is this? I mean, a worm is a really simple organism, but the fact that we can do this even on a basic level seems really neat.

If you can do it with 302 neurons, you can do it with 100 billion (theoretically). Then again, good luck finding a computer with the necessary processing power in the next few decades.
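Scaling questions aside, the technique in the worm demo itself is conceptually simple: read out the wiring diagram, then let activity propagate through it each tick. A toy sketch follows; the wiring table is invented for illustration and is not the real C. elegans connectome, which has 302 neurons and several thousand connections.

```python
# Toy connectome-driven control in the spirit of the worm robot.
# The wiring below is made up for illustration, NOT the real connectome.

connectome = {
    "nose_sensor": [("interneuron", 1.0)],   # touch excites an interneuron
    "interneuron": [("motor_left", -2.0),    # which inhibits both motors
                    ("motor_right", -2.0)],
    "motor_left": [],
    "motor_right": [],
}

def step(activations, threshold=0.5):
    """Propagate one tick of activity through the wiring table."""
    nxt = {name: 0.0 for name in connectome}
    for neuron, level in activations.items():
        if level >= threshold:               # neuron fires this tick
            for target, weight in connectome[neuron]:
                nxt[target] += weight * level
    return nxt

# Bump the nose: one tick activates the interneuron, the next inhibits
# both motors (their activation goes negative).
state = {"nose_sensor": 1.0, "interneuron": 0.0,
         "motor_left": 1.0, "motor_right": 1.0}
state = step(state)  # interneuron now active
```

The Lego robot in the linked article did roughly this with the published connectome, with a sonar sensor standing in for the nose-touch neurons and wheel motors for the muscle cells.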

Skittle Prickle
Oct 28, 2005

The best-tasting pickle I ever heard!

Big Yud posted:

I'm a bit sad that Randall Monroe seems to possibly have jumped on this bandwagon - since it was started by people who were playing the role of jocks sneering at nerds, the way they also sneer at effective altruists, and having XKCD join in on that feels very much like your own mother joining the gang hitting you with baseball bats. On the other hand, RationalWiki has conducted a very successful propaganda campaign here. So it's saddening but not too surprising if Randall Monroe has never heard hinted any version but RationalWiki's. I hope he reads this and reconsiders.

I love that he immediately jumps to jocks when complaining about his "persecution", when it is very obvious that everyone who cares about this at all is nerdy as all hell. (Also lol at the "propaganda campaign" consisting of explaining what these people actually believe)

SubG
Aug 19, 2004

It's a hard world for little things.

RPATDO_LAMD posted:

If you can do it with 302 neurons, you can do it with 100 billion (theoretically). Then again, good luck finding a computer with the necessary processing power in the next few decades.
Nah. I mean I'm not saying that I think there's any reason to believe you can't, but in the general case being able to solve a small problem doesn't imply that you can solve an arbitrary larger version of the same problem. Busy beaver, n-body, whatever.

The Lord of Hats posted:

Someone get that poor worm a proper body :ohdear:
You'd be hard-pressed to do that with Lego; a batch of Caenorhabditis elegans survived the Columbia disaster. They were onboard for a biology experiment, and they survived the breakup at Mach 18, the fall from roughly 60 km up, and then being left to their own devices for a couple of months before their container was found intact in the debris.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Some fun-hating reddit mod went nuclear on that subthread. I take it the Big Yud had a little meltdown there?

su3su2u1
Apr 23, 2014
This might be the dumbest paper I've ever read (it's the only Yud arXiv paper):

http://arxiv.org/abs/1401.5577

In it they note that they can make programs that will cooperate on the one-shot prisoner's dilemma, if they can see each other's source code ahead of time.

So if players are allowed to coordinate, coordination problems go away? WHO'D HAVE THUNK IT?

sat on my keys!
Oct 2, 2014

su3su2u1 posted:

This might be the dumbest paper I've ever read (its the only Yud arxiv paper):

http://arxiv.org/abs/1401.5577

In it they note that they can make programs that will cooperate on the one-shot prisoner's dilemma, if they can see each other's source code ahead of time.

So if players are allowed to coordinate, coordination problems go away? WHO'D HAVE THUNK IT?

Someone's never tried to plumb the depths of math.GM...

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

This might be the dumbest paper I've ever read (its the only Yud arxiv paper):

http://arxiv.org/abs/1401.5577

In it they note that they can make programs that will cooperate on the one-shot prisoner's dilemma, if they can see each other's source code ahead of time.

So if players are allowed to coordinate, coordination problems go away? WHO'D HAVE THUNK IT?

I was thinking about that paper today, apropos of nothing, and I realized that it's nontrivial to implement in a practical way! It's one of those things where you can theorize about it with no effort at all, but if you try to write the actual code then you have more concerns. For example you need to be able to avoid the infinite loop scenario where you check the code of the other agent which checks your code which checks its code, etc. I wonder if Yud actually wrote it.

Telarra
Oct 9, 2012

SolTerrasa posted:

I wonder if Yud actually wrote it.

Ahaha.

Ahahahaha.

Ahahahahahahahahahahahahahaha.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I was thinking about that paper today, apropos of nothing, and I realized that it's nontrivial to implement in a practical way! It's one of those things where you can theorize about it with no effort at all, but if you try to write the actual code then you have more concerns. For example you need to be able to avoid the infinite loop scenario where you check the code of the other agent which checks your code which checks its code, etc. I wonder if Yud actually wrote it.

I don't think they have any actual working code? It just seems like a bunch of super-trivial solutions to the prisoner's dilemma, i.e. "FairBot", whose definition is "cooperate with anything that will provably cooperate with you," and "PrudentBot": "cooperate with anything that will cooperate with you UNLESS it will cooperate even if you defect." Then they discuss that while these are in-principle unexploitable, real-world implementations are probably exploitable.
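For what it's worth, the two decision rules just described fit in a few lines of Python if you substitute naive simulation for the paper's formal proof search. Everything below is an illustrative sketch; the paper publishes no code, and the probing trick is not equivalent to its provability machinery.

```python
# Toy versions of the bots described above, using simulation against
# stand-ins instead of proof search. Illustrative only.

C, D = "cooperate", "defect"

def cooperate_bot(opponent):
    return C

def defect_bot(opponent):
    return D

def fair_bot(opponent):
    # "Cooperate with anything that will cooperate with you": probe the
    # opponent with an unconditional cooperator standing in for ourselves.
    return C if opponent(cooperate_bot) == C else D

def prudent_bot(opponent):
    # "...UNLESS it will cooperate even if you defect": also probe with a
    # defector, and punish opponents that cooperate regardless.
    nice = opponent(cooperate_bot) == C
    pushover = opponent(defect_bot) == C
    return C if nice and not pushover else D
```

The probing shortcut immediately shows its limits: prudent_bot defects against itself, because its probe sees a bot that defects against cooperate_bot. Getting non-trivial bots to recognize each other is exactly the part the paper hands off to provability logic.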

If they had dealt with the constraints of the real world, they might have stumbled on to something interesting in terms of how to actually go about doing the proving.

I suppose that's why it's gone nearly a year with no citations: there is nothing there.

su3su2u1 fucked around with this message at 04:50 on Nov 22, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

su3su2u1 posted:

I suppose that's why it's gone nearly a year with no citations: there is nothing there.
I don't think it's overstating things to say that he's changed the problem statement enough that he's no longer talking about the Prisoner's Dilemma at all and just doesn't seem to realise it.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

I don't think they have any actual working code? It just seems like a bunch of super-trivial solutions to the prisoner's dilemma, i.e. "FairBot", whose definition is "cooperate with anything that will provably cooperate with you," and "PrudentBot": "cooperate with anything that will cooperate with you UNLESS it will cooperate even if you defect." Then they discuss that while these are in-principle unexploitable, real-world implementations are probably exploitable.

If they had dealt with the constraints of the real world, they might have stumbled on to something interesting in terms of how to actually go about doing the proving.

I suppose that's why it's gone nearly a year with no citations: there is nothing there.

They must have done something. I'm sitting around waiting for my mapreduces to finish so I wrote up working code for FairBot and DefectBot in 25 lines of python, and three of those are comments, and seven more are tests. I believe that they have wrong beliefs about the probability of a particular kind of singularity, but they aren't all a bunch of idiots, they could do this.

I just don't see any actual evidence that they were interested in solving the one incredibly obvious mildly interesting recursion problem.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

They must have done something. I'm sitting around waiting for my mapreduces to finish so I wrote up working code for FairBot and DefectBot in 25 lines of python, and three of those are comments, and seven more are tests. I believe that they have wrong beliefs about the probability of a particular kind of singularity, but they aren't all a bunch of idiots, they could do this.

I mean... you say that they "must have done something", but LOOK AT THE PAPER! If they did do something, why isn't it referenced anywhere in the paper?

Did you write FairBot in the way they suggest: treating agents as formulas in Peano arithmetic and searching for equivalence proofs in towers of formal systems? Naively, I'd expect that to be totally impractical. Of course they need the silly methodology: they insist on provability because they claim the recursion problem is intractable.

su3su2u1 fucked around with this message at 05:52 on Nov 22, 2014

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

I mean... you say that they "must have done something", but LOOK AT THE PAPER! If they did do something, why isn't it referenced anywhere in the paper?

Did you write FairBot in the way they suggest: treating agents as formulas in Peano arithmetic and searching for equivalence proofs in towers of formal systems? Naively, I'd expect that to be totally impractical. Of course they need the silly methodology: they insist on provability because they claim the recursion problem is intractable.

No, I used monkeypatching. Their approach reduces to "can I prove that if I cooperate, the other agent will cooperate?" So FairBot examines the memory of the other bot, then patches in guaranteed cooperation to all those instances, then checks if the other bot would cooperate, then cooperates if it does. Pretty boring, but it works and I cannot fathom why you'd try it their way instead.
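A guess at what that trick looks like in practice (the actual code isn't posted in the thread, so the names and structure here are invented): the naive version recurses forever, while the patched version substitutes a cooperating stub for itself before simulating the opponent.

```python
# Naive vs. patched FairBot, illustrating the infinite-regress problem
# discussed above. Invented for illustration; not the poster's actual code.

def naive_fair_bot(opponent):
    # Simulating an opponent that simulates us recurses until Python's
    # stack limit is hit (RecursionError).
    return "cooperate" if opponent(naive_fair_bot) == "cooperate" else "defect"

def patched_fair_bot(opponent):
    # The "patch in guaranteed cooperation" trick: hand the opponent an
    # unconditional cooperator in our place, breaking the regress.
    def stand_in(_):
        return "cooperate"
    return "cooperate" if opponent(stand_in) == "cooperate" else "defect"

# naive_fair_bot(naive_fair_bot) raises RecursionError;
# patched_fair_bot(patched_fair_bot) returns "cooperate".
```

As the thread notes, this is exploitable: an opponent that behaves differently toward the stand-in than toward the real bot fools it, which is the gap the paper's provability approach is meant to close.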
