su3su2u1
Apr 23, 2014

SolTerrasa posted:

No, I used monkeypatching. Their approach reduces to "can I prove that if I cooperate, the other agent will cooperate?" So FairBot examines the memory of the other bot, then patches in guaranteed cooperation to all those instances, then checks if the other bot would cooperate, then cooperates if it does. Pretty boring, but it works and I cannot fathom why you'd try it their way instead.

So it's a typical MIRI paper:

They give up on the practical problem as impossible in their introduction. Instead, they have roughly 20 pages of a really silly formal system, with math showing trivial things (write most of their theorems out in English words and they look trivial). Their formalism is of zero practical importance, and does more to obscure than enlighten.

Meanwhile, the practical problem can be solved in a fairly straightforward fashion if you are a bit clever about it.
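For the curious, here's roughly what that monkeypatched FairBot looks like as a Python toy. Everything below is invented for illustration - the Move enum, the convention that a bot is a callable handed its opponent, the toy opponents - and the real setting trades source code around rather than function objects:

code:
from enum import Enum

class Move(Enum):
    COOPERATE = "C"
    DEFECT = "D"

def always_cooperate(opponent=None):
    return Move.COOPERATE

def fair_bot(opponent):
    # "Patch in guaranteed cooperation": rather than proving anything about my own
    # behaviour, hand the opponent a stand-in for me that always cooperates...
    try:
        their_move = opponent(always_cooperate)
    except RecursionError:
        return Move.DEFECT  # opponent loops forever against us; don't reward that
    # ...then cooperate iff the opponent cooperated with that patched version.
    return Move.COOPERATE if their_move == Move.COOPERATE else Move.DEFECT

# Toy opponents to run it against
def cooperate_bot(opponent=None):
    return Move.COOPERATE

def defect_bot(opponent=None):
    return Move.DEFECT

def mirror_bot(opponent):
    # Cooperates iff a simulation of its opponent (fed a cooperator) cooperates.
    return opponent(cooperate_bot)

print(fair_bot(cooperate_bot))  # Move.COOPERATE
print(fair_bot(defect_bot))     # Move.DEFECT
print(fair_bot(mirror_bot))     # Move.COOPERATE
print(fair_bot(fair_bot))       # Move.COOPERATE - two FairBots cooperate

No twenty pages of provability logic needed.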

It's like that 100-page decision theory paper Yud wrote: 100 pages of totally unnecessary background, and he fails to formalize his decision theory.


AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out
Keep building the bamboo runways until the cargo planes come, MIRI.

Toph Bei Fong
Feb 29, 2008



SolTerrasa posted:

Pretty boring, but it works and I cannot fathom why you'd try it their way instead.





With wooden rifles, wooden planes, we all wait for John Frum.

e: f;b

ol qwerty bastard
Dec 13, 2005

If you want something done, do it yourself!
During a recent Wikipedia Wander, I came across some surprising information about a very simple modification you can do to your body that on average confers an extra thirteen years of life: become a eunuch.

So, presumably, Mr. Yud and all his death-fearing followers have already elected to undergo this procedure, since it would certainly be the rational thing to do. Or is he still intent on lording it over us puny-brained mortals with his more highly evolved poly relationships?

SubG
Aug 19, 2004

It's a hard world for little things.

ol qwerty bastard posted:

During a recent Wikipedia Wander, I came across some surprising information about a very simple modification you can do to your body that on average confers an extra thirteen years of life: become a eunuch.

So, presumably, Mr. Yud and all his death-fearing followers have already elected to undergo this procedure, since it would certainly be the rational thing to do. Or is he still intent on lording it over us puny-brained mortals with his more highly evolved poly relationships?
Thirteen years versus individuals of the same cohort, or versus the average across the entire population? I don't know the general rituals and practices surrounding becoming a eunuch, but it wouldn't at all be surprising if any procedure done after adolescence would produce a population with a higher average life expectancy versus the general population. Same reason pipe smokers have, on average, a greater life expectancy than the general population. Not because smoking a pipe makes you live longer, but because in order to be old enough to smoke a pipe you've got to have already survived all the common causes of infant mortality, and you've almost certainly survived all the common causes of adolescent mortality as well.
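Toy numbers version of that selection effect, if anyone wants to see it - the mortality figures here are invented, not real demographic data:

code:
import random

random.seed(0)

def age_at_death():
    # Crude fake mortality: some childhood deaths, then a roughly normal adult span.
    if random.random() < 0.15:            # pretend 15% die before age 15
        return random.uniform(0, 15)
    return max(15, random.gauss(70, 12))

population = [age_at_death() for _ in range(100_000)]

everyone  = sum(population) / len(population)
adults    = [a for a in population if a >= 20]   # "old enough to smoke a pipe"
survivors = sum(adults) / len(adults)

print(f"mean age at death, whole population: {everyone:.1f}")
print(f"mean age at death, survived to 20:   {survivors:.1f}")

The second number comes out several years higher even though nothing in the model extends anyone's life - the pipe (or the eunuch procedure) just selects for people who already got through childhood.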

The Vosgian Beast
Aug 13, 2011

Business is slow

BobHoward posted:

Some fun-hating reddit mod went nuclear on that subthread. I take it the Big Yud had a little meltdown there?

Here you go
http://i.imgur.com/uvJkRoT.jpg

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



So do you think this is motivated by terror of the Basilisk concept, or by his increasing irritation at this ridiculous thing being used to poke fun at his Harry Potter/sophomore blogging audience?

Pulsedragon
Aug 5, 2013

Nessus posted:

So do you think this is motivated by terror of the Basilisk concept, or by his increasing irritation at this ridiculous thing being used to poke fun at his Harry Potter/sophomore blogging audience?

Can't it be both?

The Vosgian Beast
Aug 13, 2011

Business is slow

Nessus posted:

So do you think this is motivated by terror of the Basilisk concept, or by his increasing irritation at this ridiculous thing being used to poke fun at his Harry Potter/sophomore blogging audience?

I think he's probably still a little scared of the basilisk, but mostly I think he's just afraid Less Wrong will never live this down

which it won't

ever

Qwertycoatl
Dec 31, 2008

Lesswrong would probably find it way easier to live it down if Eliezer didn't have a hilarious meltdown every time someone on the internet mentioned it.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'


How do you type the sentence "let me post this handy histogram of contributors to the RationalWiki article" without recoiling in horror at what you have become

SubG
Aug 19, 2004

It's a hard world for little things.

Lottery of Babylon posted:

How do you type the sentence "let me post this handy histogram of contributors to the RationalWiki article" without recoiling in horror at what you have become
By being the particular kind of douchebag who has a histogram of the contributors to a RationalWiki article in the first place.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900
I feel kind of bad for David Gerard. He wrote the definitive article about Roko's Basilisk not just to mock the idea, but to reassure frightened LessWrong members. He wrote some very even-handed articles about LessWrong and Yudkowsky themselves. Gerard is himself a member of LessWrong and is inclined to sympathize with them. And what does he get in return? Shrieking accusations of propaganda and sabotage.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

SubG posted:

By being the particular kind of douchebag who has a histogram of the contributors to a RationalWiki article in the first place.

He didn't have a histogram in the first place. He made it for that post with bash scripting.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

:tipshat:

Serious Cephalopod
Jul 1, 2007

This is a Serious post for a Serious thread.

Bloop Bloop Bloop
Pillbug
Is the most recent Elementary episode Yud feed?


Fake edit: it is 30% memes

Real edit: a fake MIRI is the suspect in the murder of an AI engineer. Sherlock keeps saying "AI doesn't exist, at least in the way you're talking about." Who in this thread is writing for this show?

Serious Cephalopod fucked around with this message at 07:22 on Nov 25, 2014

Tiggum
Oct 24, 2007

Your life and your quest end here.


Serious Cephalopod posted:

a fake MIRI is the suspect in the murder of an AI engineer. Sherlock keeps saying "AI doesn't exist, at least in the way you're talking about."

Except then at the end of the episode it turns out that maybe it does? Also, the AI in that episode was really irritating to me; they go on about it passing the Turing test, but half the time it sounds about as convincing as ELIZA. And how did it always know when someone was talking to it, and never respond when people talked to themselves or each other?

Serious Cephalopod
Jul 1, 2007

This is a Serious post for a Serious thread.

Bloop Bloop Bloop
Pillbug

Tiggum posted:

Except then at the end of the episode it turns out that maybe it does? Also, the AI in that episode was really irritating to me; they go on about it passing the Turing test, but half the time it sounds about as convincing as ELIZA. And how did it always know when someone was talking to it, and never respond when people talked to themselves or each other?

At the end of the episode it felt to me that Sherlock coming up with an answer and the machine responding with "I don't understand..." was meant to distinguish his emotional issues from actual machine-level emotionlessness, which Sherlock thinks he admires.

The machine was only in the presence of multiple people at the end of the episode, right? Could just be a directional mic thing.

Also, while dealing with typical TV writing makes it hard to tell, so far it looks to me like everyone who is responding to the computer like it's human is deluding themselves, but in a very human and easy way. It's hard to tell right next to a Turing test where the interviewer knows he's talking with a machine.

SolTerrasa
Sep 2, 2011

http://www.overcomingbias.com/2014/12/ai-boom-bet-offer.html

We've got Robin Hanson putting his money where his mouth is on AI FOOM. "Recently, lots of people have been saying 'this time is different', and predicting that we'll see a huge rise in jobs lost to automation, even though we’ve heard such warnings every few decades for centuries."

He's willing to bet at 20:1 odds that the percentage of the US economy in the computer hardware/software sector won't rise above 5%, from its current position around 2%, before 2025. No takers, unsurprisingly, not even Big Yud.
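Back-of-the-envelope on what 20:1 implies, assuming Hanson is the one laying the 20 against the 1 (that's how I read it, but check the post yourself):

code:
def break_even(odds_against):
    """Event probability at which laying odds_against:1 against it has zero expected value."""
    return 1.0 / (odds_against + 1)

print(f"{break_even(20):.3f}")   # ~0.048

So Hanson is effectively saying he puts the chance of that sector jumping from ~2% to over 5% of the US economy by 2025 at under about five percent, and anyone who thinks it's more likely than that should be taking his money.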

At Google on Thursday I heard Ray Kurzweil talk. It was at one of those confidential meetings, so no quotes and no video and no context, but one thing he said was to reaffirm his belief that we're on track for the singularity by 2030. He also seems to believe that that's the consensus opinion among AI people, which is... well, probably not a lie so much as an indication of what sorts of people choose to talk to him. I'm inclined to be generous to Kurzweil because he only seems to make predictions he'll be able to verify, which I admire.

I wonder if someone could convince Hanson to extend his bet another five years and get Kurzweil to take him up on it. I wonder if Kurzweil knows about the crazy side of singularitarians.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
Kurzweil is his own crazy side of singularitarians, though.

SolTerrasa
Sep 2, 2011

BobHoward posted:

Kurzweil is his own crazy side of singularitarians, though.

Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts?

Lightanchor
Nov 2, 2012
A singularity is a sign that your model doesn't apply past a certain point, not infinity arriving in real life

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

SolTerrasa posted:

Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts?

He's literally a guy who pops hundreds of pills a day and injects other supplements directly into his bloodstream in some kind of self-designed program to extend his life long enough so that he can get his brain uploaded into a computer (a tech he's been predicting for a long time now) and thereby become immortal. It's become clear over the years that this is his religion. He has an extreme fear of death and he's too rational to go for supernatural religion, so he's desperately casting about for a technological afterlife.

In other words he's so invested in the desire for a tech/biotech singularity to happen that he allows his desire to override his rationality - when Kurzweil talks about accelerating HW capabilities and falling costs and so forth, you always have to be aware that he may have fooled himself into putting a ridiculously rosy interpretation on things. He also likes to conflate biological evolution, cultural developments, and technological developments into a kind of inevitable march-of-progress that will result in Thing X by Year Y, where X is something related to being able to upload his brain or extend his life, and Y fits on the timeline to keep Kurzweil alive long enough to see that day.

SubG
Aug 19, 2004

It's a hard world for little things.

SolTerrasa posted:

Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts?


I don't know about nuts, but we could spend all night enumerating all the poo poo wrong with that.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SubG posted:



I don't know about nuts, but we could spend all night enumerating all the poo poo wrong with that.

You could have made exactly the same graph in the 1700s, yet somehow there wasn't a singularity then. It's also completely trivial, because you can make the graph look like whatever you want just by choosing what counts as an "event"; "Time vs Time" is easy to manipulate.
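Toy version of that complaint, with an entirely made-up event list (these aren't Kurzweil's numbers):

code:
import math

# "years before present" for an arbitrary hand-picked sequence of Big Important Events
events = [4_000_000_000, 500_000_000, 60_000_000, 5_000_000,
          500_000, 50_000, 5_000, 500, 50]

for earlier, later in zip(events, events[1:]):
    gap = earlier - later                      # "time until next event"
    print(f"{math.log10(earlier):5.2f}  {math.log10(gap):5.2f}")

Both columns drop by about 1 per row, i.e. a dead-straight line on log-log axes, produced entirely by how the points were picked. Any roughly geometric spacing of "events" gives you the same exponential-looking chart, no acceleration in the actual world required.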

Lottery of Babylon fucked around with this message at 03:04 on Dec 7, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

Lottery of Babylon posted:

You could have made exactly the same graph in the 1700s, yet somehow there wasn't a singularity then. It's also completely trivial, because you can make the graph look like whatever you want just by choosing what counts as an "event"; "Time vs Time" is easy to manipulate.
:ssh: Every entry that lists multiple events ("telephone, electricity, radio", much less things like the Industrial Revolution or Cambrian Explosion) would be a "singularity" on this chart if they weren't collapsed into a single data point.

Prolonged Panorama
Dec 21, 2007
Holy hookrat Sally smoking crack in the alley!



Projecting that trend line forwards gives zero (or negative, lol) "time till next event" values - and by this graph that happened 20-30 years ago. Why are we waiting for the singularity if it happened in the early 90s?

su3su2u1
Apr 23, 2014

SolTerrasa posted:

Yeah? I mean, I obviously think that if he makes a prediction about AI specifically it's likely to be wrong, but I thought his singularity was a lot less science fiction than Big Yud's. I thought it was mostly about accelerating hardware capabilities and falling costs. Maybe I'm wrong, I was never a huge fan of the guy. What makes him nuts?

Kurzweil gave a talk when I was attending undergrad, and I stayed after for a conversation he had with some professors. His talk was fine, but while interacting with the professors he went full-crackpot pretty fast. Uploading brains in 10 years (this was significantly more than 10 years ago), living forever, etc.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

SolTerrasa posted:

At Google on Thursday I heard Ray Kurzweil talk. It was at one of those confidential meetings, so no quotes and no video and no context, but one thing he said was to reaffirm his belief that we're on track for the singularity by 2030. He also seems to believe that that's the consensus opinion among AI people, which is... well, probably not a lie so much as an indication of what sorts of people choose to talk to him. I'm inclined to be generous to Kurzweil because he only seems to make predictions he'll be able to verify, which I admire.

I wonder if someone could convince Hanson to extend his bet another five years and get Kurzweil to take him up on it. I wonder if Kurzweil knows about the crazy side of singularitarians.

This sums up Kurzweil completely:

pentyne
Nov 7, 2012

BobHoward posted:

He's literally a guy who pops hundreds of pills a day and injects other supplements directly into his bloodstream in some kind of self-designed program to extend his life long enough so that he can get his brain uploaded into a computer (a tech he's been predicting for a long time now) and thereby become immortal. It's become clear over the years that this is his religion. He has an extreme fear of death and he's too rational to go for supernatural religion, so he's desperately casting about for a technological afterlife.

In other words he's so invested in the desire for a tech/biotech singularity to happen that he allows his desire to override his rationality - when Kurzweil talks about accelerating HW capabilities and falling costs and so forth, you always have to be aware that he may have fooled himself into putting a ridiculously rosy interpretation on things. He also likes to conflate biological evolution, cultural developments, and technological developments into a kind of inevitable march-of-progress that will result in Thing X by Year Y, where X is something related to being able to upload his brain or extend his life, and Y fits on the timeline to keep Kurzweil alive long enough to see that day.

The pills and supplements thing doesn't sound too weird for someone wanting to min/max their health if it's backed by medical information. If it's all "Well, medical science doesn't want to admit that X herb can prevent arterial plaque, but injecting it into your blood is a common practice among those enlightened enough to know about it" then I'm amazed he hasn't destroyed his liver yet.

It's kind of like a weird hard-science version of what Bruce Lee did. Lee got hardcore into healthy eating and more or less lived by eating only what was essential for his body and his physical regimen. Kurzweil seems more like a guy who thinks some pills are all it takes to live at peak health.

pentyne fucked around with this message at 13:10 on Dec 7, 2014

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

pentyne posted:

The pills and supplements thing doesn't sound too weird for someone wanting to min/max their health if it's backed by medical information. If it's all "Well, medical science doesn't want to admit that X herb can prevent arterial plaque, but injecting it into your blood is a common practice among those enlightened enough to know about it" then I'm amazed he hasn't destroyed his liver yet.

Someone dumped a list from one of his books here.

http://www.reddit.com/r/skeptic/comments/1ypitt/ray_kurzweils_supplement_regimen/

It's mostly a lot of faddish supplements. And no, this isn't excusable as min/maxing health. You might think there's real medical information behind these kinds of things, but you'd be wrong. Anything marketed as a "dietary supplement" in the USA is highly suspect. A certain U.S. Senator from Utah legislatively invented supplements as a not-a-drug-honest! category which the FDA can't regulate so long as the manufacturers don't go too far with explicit medical claims on the label. (Guess who is known to receive money from supplement manufacturers?)

Since innuendo and plausibly deniable marketing done at arm's length through pop health media serve just as well to convince people of medicinal action, they've basically got a free pass to sell snake oil. Kurzweil is truly an easy mark for anything that holds the promise of immortality.

As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault.

Telarra
Oct 9, 2012

BobHoward posted:

As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault.

And this I just find sad. Even if it worked, this wouldn't actually bring someone back, just a superficial replica. There's no selfless quest here to save someone they loved, just an attempt to patch over a hole in their own life.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Moddington posted:

And this I just find sad. Even if it worked, this wouldn't actually bring someone back, just a superficial replica. There's no selfless quest here to save someone they loved, just an attempt to patch over a hole in their own life.

Yup. Even more sad: note the disconnect between this and "I must live long enough for the Upload". It implies that he doesn't really believe dad-AI is a meaningful way to live after death, but he can't let himself fully acknowledge it because that would require accepting that his dad is dead forever.

Dylan16807
May 12, 2010

Moddington posted:

And this I just find sad. Even if it worked, this wouldn't actually bring someone back, just a superficial replica. There's no selfless quest here to save someone they loved, just an attempt to patch over a hole in their own life.

It's even sadder when I picture this high intelligence devoted to what it knows is just a delusion, actively manipulating Kurzweil so that he's pacified and avoids critical thought that might shatter the illusion. Off in his own little world where he pretends to have everything he truly wanted.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
Black Mirror S02E01: Be Right Back

fade5
May 31, 2012

by exmarx

BobHoward posted:

As a thread-relevant aside: Kurzweil's other immortality related obsession is with bringing his long-dead dad back to life. Much like Yudkowsky, he believes this kind of resurrection will be possible via sufficiently advanced AI running some kind of dad-simulation based on his dad's notebooks, letters, and whatever other ephemera Kurzweil has sitting in a vault.
Dude don't go down that road, it's been tried before.

All that'll end up happening is you'll lose some body parts while creating some horrifying, non-human thing. You won't get what you want, and you'll end up worse than when you started. (And you'll end up marked as a human sacrifice in a huge government plot.) Don't do it bro.

BobHoward posted:

Yup. Even more sad: note the disconnect between this and "I must live long enough for the Upload". It implies that he doesn't really believe dad-AI is a meaningful way to live after death, but he can't let himself fully acknowledge it because that would require accepting that his dad is dead forever.
Seriously, it really is amazing just how loving scared people are of death, and the lengths they go to in order to try and avoid/escape it. Just accept it; everyone loving dies. It's an integral part of not just the human condition, but life itself; there's no avoiding it.

fade5 fucked around with this message at 00:06 on Dec 8, 2014

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



fade5 posted:

Seriously, it really is amazing just how loving scared people are of death, and the lengths they go to in order to try and avoid/escape it. Just accept it; everyone loving dies. It's an integral part of not just the human condition, but life itself; there's no avoiding it.
Deathist!! Clearly you must be opposed to all medical research, because there's absolutely no difference between "people living out their natural span, perhaps somewhat elongated by beneficial treatments, in comfort and health" and "I become an immortal computer-god."

What I don't think a lot of these guys really think through is that if people did become immortal it would have to change society quite drastically. Probably a good capsule version here would be that if you invented an easy-to-use immortality serum, I hope you like your current job title, because the people above you may have those jobs forever.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

fade5 posted:

Dude don't go down that road, it's been tried before.

All that'll end up happening is you'll lose some body parts while creating some horrifying, non-human thing. You won't get what you want, and you'll end up worse than when you started. (And you'll end up marked as a human sacrifice in a huge government plot.) Don't do it bro.

Uh, that is not a valid argument, because anime is not like real life. Now let me explain how AI is foretold by the ending of Tsukihime

Terrible Opinions
Oct 18, 2013



I think fade was joking, though it was in service of a correct argument. If the AI is somehow able to magically "resurrect" someone, it's not really bringing them back. It's creating a facsimile of them, maybe even a perfect copy, but unless you both believe that souls exist and believe that the AI can manipulate them, the copy will never be more than a copy. If they just came out and said that the AI is a magic soul catcher, then we could finally just define them completely as a religion.


Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!



PresidentBeard posted:

I think fade was joking, though it was in service of a correct argument. If the AI is somehow able to magically "resurrect" someone, it's not really bringing them back. It's creating a facsimile of them, maybe even a perfect copy, but unless you both believe that souls exist and believe that the AI can manipulate them, the copy will never be more than a copy. If they just came out and said that the AI is a magic soul catcher, then we could finally just define them completely as a religion.
I think one of the key Yudkowsky Thought ideas is that at a certain level of resolution, a simulation of you is you, which is why the prospect of a future supercomputer that will torture a million copies of you is terrifying. Subjectively, you can't know that you yourself are not a simulation - perhaps everything apparently happening NOW is actually just a memory that the AI gives you so you are established as "you" in its simulation, and any minute now it's going to wheel in the Excruciator.
