NGDBSS
Dec 30, 2009

Another thing that's not been touched on about that passage is that Yudkowsky seems to be weirded out by fiat currency of all things.


MatchaZed
Feb 14, 2010

We Can Do It!


NGDBSS posted:

Another thing that's not been touched on about that passage is that Yudkowsky seems to be weirded out by fiat currency of all things.

PURESTRAIN GOLD what do you mean it doesn't have intrinsic value

Peanut Butler
Jul 25, 2003

WilliamAnderson posted:

PURESTRAIN GOLD what do you mean it doesn't have intrinsic value

I'm just going to start calling it "fiat metal"

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Spazzle posted:

It's time to reject the reflexive kowtowing to the supposed genius of people like Drexler, Kurzweil, and Yud. They are cranks who get famous by telling the rich and nerds that they are special and that their worldview is correct.

Kurzweil is like Noam Chomsky or Linus Pauling in that he got professional kudos for doing a fairly obscure thing well and then parlayed it into decades of crazy ranting about half-baked speculative stuff.

Spazzle
Jul 5, 2003

AlbieQuirky posted:

Kurzweil is like Noam Chomsky or Linus Pauling in that he got professional kudos for doing a fairly obscure thing well and then parlayed it into decades of crazy ranting about half-baked speculative stuff.

He is a crank who believes in ~cybermagick~. He puts forth graphs with dinosaurs and printing presses on them to infer the coming singularity. This is Timecube-level bullshit. He is a crank. These guys are all cranks.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Spazzle posted:

He is a crank who believes in ~cybermagick~. He puts forth graphs with dinosaurs and printing presses on them to infer the coming singularity. This is Timecube-level bullshit. He is a crank. These guys are all cranks.

Yes, but he got in a position to crank his crankiness because he came up with OCR. Just like Linus Pauling was a real chemist before he went all "woo, vitamin C will cure cancer."

Yudkowsky's done nothing except be a crank. Kurzweil at least invented and improved some useful poo poo before he went coo-coo bananas.

Lightanchor
Nov 2, 2012
That is not what Noam Chomsky is like at all.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

Lightanchor posted:

That is not what Noam Chomsky is like at all.

Nobody would listen to his political opinions if he hadn't received acclaim for his work as a linguist. I guess if you think his political opinions are good, the comparison to Pauling or Kurzweil might seem inappropriate? But then some people think Kurzweil is right about the singularity, and some people think Pauling was right about vitamin C.

And if it matters, I'm a socialist who thinks Chomsky does the left more harm than good. But my larger point is about the "if you do one thing properly, you are automatically an expert on everything" tendency in public life. Pretty sure all of us in the thread agree that Kurzweil is off the rails.

AlbieQuirky fucked around with this message at 00:16 on Nov 19, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

AlbieQuirky posted:

Nobody would listen to his political opinions if he hadn't received acclaim for his work as a linguist.
His political writing rose to prominence in the late '60s due to his vocal opposition to American involvement in Vietnam. Regardless of what you think of a book like American Power and the New Mandarins, I think you're just grinding your own ideological axe if you're pretending incredulity that it could have become well-known, in 1969, entirely on its own merits.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out
If you read the reviews from the time, most if not all of them start by referring to Chomsky's prior accomplishments in linguistics. Arthur Schlesinger's famous review was titled "Three Cheers for Professor Chomsky."

But you're right. What I said was ridiculously overstated, and I was just being a dick. I should have said "His academic prominence helped him find a platform for his political writing." And to be fair to Chomsky, he was very clear in stating that it was his responsibility as an academic to speak out.

AlbieQuirky fucked around with this message at 01:17 on Nov 19, 2014

SubG
Aug 19, 2004

It's a hard world for little things.

AlbieQuirky posted:

If you read the reviews from the time, most if not all of them start by referring to Chomsky's prior accomplishments in linguistics. Arthur Schlesinger's famous review was titled "Three Cheers for Professor Chomsky."

Arthur Schlesinger posted:

A half-century ago Mencken described the eruption of Thorstein Veblen on the American intellectual scene: "Of a sudden, Siss! Boom! Ah! Then, overnight, the upspringing of the intellectual soviets, the headlong assault upon all the old axioms of pedagogical speculation, the nihilistic dethronement of Prof. Dewey---and, rah, rah, rah for Prof. Dr. Veblen!...In a few months---almost it seemed a few days---he was all over the Nation, the Dial, the New Republic and the rest of them, and his books and pamphlets began to pour from the presses, and newspapers reported his every wink and whisper, and everybody who was anybody began gabbling about him."

One is tempted to write in the same way about the recent emergence of Noam Chomsky. A distinguished student of linguistics, he quietly pursued arcane studies at M.I.T. in a highly specialized field. Then, of a sudden, he has burst forth as an all-purpose expert on history, strategy, foreign policy, social psychology, political science, political ethics, ethical politics. He settles every issue with ecclesiastical certitude. His sermons cover interminable pages in The New York Review of Books. He is cited with reverence by the young. It is rah, rah, rah for Prof. Dr. Chomsky.
Arthur Schlesinger does not appear to support your claim that the only reason anyone would listen to Chomsky is because of his work on linguistics.

AlbieQuirky posted:

But you're right. What I said was ridiculously overstated, and I was just being a dick. I should have said "His academic prominence helped him find a platform for his political writing."
...and therefore he's a crank? Are all people who profess political beliefs cranks? People who do so to great popular attention?

I'm just trying to get at the heart of the claim. I understand why Kurzweil is a crank. But apart from the fact that you disagree with his political beliefs I don't see why we're being invited to believe Chomsky is one.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

SubG posted:

I'm just trying to get at the heart of the claim. I understand why Kurzweil is a crank. But apart from the fact that you disagree with his political beliefs I don't see why we're being invited to believe Chomsky is one.

I meant to be speaking more generally about the "expert in one thing has public platform to talk about other unrelated thing" phenomenon that Schlesinger refers to (I had forgotten that he had quoted Mencken on Veblen there). My opinions about Chomsky aren't interesting even to me, so forget I ever mentioned him; as you pointed out, it was a stretch.

My point is that Kurzweil banked a shitload of public credibility by real achievements in inventing and perfecting some significant tech stuff, which he has then used as a platform from which to launch his kookery. Which is ridiculous. But Yudkowsky is even more ridiculous, because his platform is built on having written a lovely fanfiction.

AlbieQuirky fucked around with this message at 02:14 on Nov 19, 2014

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SolTerrasa posted:

If you grant Yudkowsky his belief, then his work with MIRI makes sense

Even if you accept every one of Yudkowsky's premises and conclusions about the danger of an evil AI singularity spontaneously occurring in someone's kitchen over the course of five minutes, what "work" has MIRI ever done and how would any of it "help" in any way?

SubG
Aug 19, 2004

It's a hard world for little things.

AlbieQuirky posted:

I meant to be speaking more generally about the "expert in one thing has public platform to talk about other unrelated thing" phenomenon that Schlesinger refers to (I had forgotten that he had quoted Mencken on Veblen there). My opinions about Chomsky aren't interesting even to me, so forget I ever mentioned him; as you pointed out, it was a stretch.
Schlesinger does not appear to be doing that, however. He appears to be saying that Chomsky is an expert in one relatively obscure field and has been getting a great deal of attention for his opinions in diverse other subjects, which is more or less exactly the opposite of what you've been arguing.

AlbieQuirky posted:

My point is that Kurzweil banked a shitload of public credibility by real achievements in inventing and perfecting some significant tech stuff, which he has then used as a platform from which to launch his kookery. Which is ridiculous. But Yudkowsky is even more ridiculous, because his platform is built on having written a lovely fanfiction.
I'm not sure that I believe that Kurzweil has a shitload of public credibility, or that whatever public credibility he does have comes from his having invented OCR (which I wager is a fact a minority of the public could name) rather than the fact that he's a TED Talk-level talking head.

I mean this is all completely irrelevant, as an idea's value isn't expressible in terms of the majesty of the c.v. from which it emanated. But I think Kurzweil is famous for his kooky Singularity rants because he makes a bunch of kooky Singularity rants and there's a market for that kind of poo poo, and Yud is known as well as he is for precisely the same reason.

SolTerrasa
Sep 2, 2011

Whew, I apologize to everyone for being wrong about Drexler. In my field, 1,700 cites on a paper counts for a lot; I didn't bother to verify whether that's also true for nanotechnology, or even for textbooks. Looking now, my favorite textbook has 23,000 cites. Still, even if I'd known that, I'd have said that it's hard to call someone's ideas "ignored" after 1,700 published citations. But someone who seems to know more than me about nanotechnology says no, so I trust their opinion.

Lottery of Babylon posted:

Even if you accept every one of Yudkowsky's premises and conclusions about the danger of an evil AI singularity spontaneously occurring in someone's kitchen over the course of five minutes, what "work" has MIRI ever done and how would any of it "help" in any way?

A good question! MIRI has never done any (edit: important or useful or notable) work that I know of, so it would be useless to speculate on what it would help. But they believe that they need to be the ones to build the AI which causes the singularity, because they will build it according to a made-up bit of math which allows them to verify that it will maintain the same goals under self-modification. Their math might be sound, or not. They're keeping it ~secret~ because if you unwashed masses get your hands on it then you might use it wrong. Yudkowsky talks often about the burden of knowing ~dangerous things~.

But if you grant that there's a reasonable chance of the singularity happening the way they think it will, then their standard argument of "even if it's unlikely that we can help, it's a low-risk high-reward investment" actually applies. I think.

sat on my keys!
Oct 2, 2014

SolTerrasa posted:

A good question! MIRI has never done any (edit: important or useful or notable) work that I know of, so it would be useless to speculate on what it would help. But they believe that they need to be the ones to build the AI which causes the singularity, because they will build it according to a made-up bit of math which allows them to verify that it will maintain the same goals under self-modification. Their math might be sound, or not. They're keeping it ~secret~ because if you unwashed masses get your hands on it then you might use it wrong. Yudkowsky talks often about the burden of knowing ~dangerous things~.

It's truly incredible to me that people are still donating money to an organization of ~10 "research associates" plus EY and Luke Muehlhauser that has been no more productive than a single unsuccessful grad student, who over their 10 years to degree would have cost as much as one of MIRI's people.

Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SolTerrasa posted:

A good question! MIRI has never done any (edit: important or useful or notable) work that I know of, so it would be useless to speculate on what it would help. But they believe that they need to be the ones to build the AI which causes the singularity, because they will build it according to a made-up bit of math which allows them to verify that it will maintain the same goals under self-modification. Their math might be sound, or not. They're keeping it ~secret~ because if you unwashed masses get your hands on it then you might use it wrong. Yudkowsky talks often about the burden of knowing ~dangerous things~.

But if you grant that there's a reasonable chance of the singularity happening the way they think it will, then their standard argument of "even if it's unlikely that we can help, it's a low-risk high-reward investment" actually applies. I think.

That's not a low-risk high-reward investment, that's a long-odds bet. Even granting their assumptions, there's no particular reason to believe MIRI is capable of producing anything that could affect the outcome. They're barely able to write a fanfic; what reason does anyone have to believe, even if there is something to be done, that they are the ones to do it? Supporting them isn't a safe investment, it's a Pascal's Wager.

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
IIRC they say something along the lines of "there's a one-in-a-billion chance that we'll save 8 billion lives for every dollar you donate, so each dollar is worth 8 saved lives". So they don't have to actually accomplish anything. They just want you to donate on the off chance that they do accomplish something.
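
Spelled out, the arithmetic they're invoking is a bare expected-value product (their numbers, not mine):

E[lives saved per dollar] = P(your dollar tips the scales) × (lives at stake)
                          = (1 / 1,000,000,000) × 8,000,000,000
                          = 8

Nothing in that calculation requires them to ever accomplish anything; you can justify any donation by inflating the payoff, which is exactly the Pascal's Wager structure mentioned above.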

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

SubG posted:

I'm not sure that I believe that Kurzweil has a shitload of public credibility, or that whatever public credibility he does have comes from his having invented OCR (which I wager is a fact a minority of the public could name) rather than the fact that he's a TED Talk-level talking head.

I mean this is all completely irrelevant, as an idea's value isn't expressible in terms of the majesty of the c.v. from which it emanated. But I think Kurzweil is famous for his kooky Singularity rants because he makes a bunch of kooky Singularity rants and there's a market for that kind of poo poo, and Yud is known as well as he is for precisely the same reason.

I think people in the tech field had (at least at one time) a predisposition to take Kurzweil seriously because of his previous achievements. If you haven't encountered the "this person did an actually useful thing in the past, therefore this crazy idea they're espousing can't be as crazy as it seems" attitude, you have lucked out, I guess.

Lottery of Babylon posted:

That's not a low-risk high-reward investment, that's a long-odds bet. Even granting their assumptions, there's no particular reason to believe MIRI is capable of producing anything that could affect the outcome. They're barely able to write a fanfic; what reason does anyone have to believe, even if there is something to be done, that they are the ones to do it? Supporting them isn't a safe investment, it's a Pascal's Wager.

Buying Indulgences 2.0

su3su2u1
Apr 23, 2014

bartlebyshop posted:

It's truly incredible to me that people are still donating money to an organization of ~10 "research associates" plus EY and Luke Muehlhauser that has been no more productive than a single unsuccessful grad student, who over their 10 years to degree would have cost as much as one of MIRI's people.

Even more incredibly, the subset of people who do this really, really claim to believe in being "effective."

I bet a grad student over 10 years would cost less than one of MIRI's people. They seem to pay the full-timers a bit more than a market postdoc.

sat on my keys!
Oct 2, 2014

su3su2u1 posted:

Even more incredibly, the subset of people who do this really, really claim to believe in being "effective."

I bet a grad student over 10 years would cost less than one of MIRI's people. They seem to pay the full-timers a bit more than a market postdoc.

Someone (possibly you?) at one point dug up MIRI's tax returns and it turns out EY is getting paid north of $100K a year to write about one fanfiction chapter every 3 months. I know postdocs who make that much who've published more papers in the last year than MIRI has ever.

I wish I could get hired for more than double my current salary based solely on posts to LessWrong.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Honestly, the ~dangerous knowledge~ BS is the mark of a cult. Secret knowledge that only the enlightened can possess is a common way of bonding people together, particularly when it's combined with the idea that they have a special status (all the bullshit about 'we shouldn't try to take over the world, because it would be easy!') and are guardians of the eschaton and the salvation of humankind, in a fashion that not only makes them the most important people on earth but also happens to fit every single one of their preconceptions about the world and how it works.

And holy poo poo, Yud, how do you not know about MCP? I've only had Econ 101 (as part of a teaching cert program) and I know that stuff (and also that Econ 101 is ridiculously simplified and not reflective of reality, just basic concepts in a vacuum).

Nessus
Dec 22, 2003

After a Speaker vote, you may be entitled to a valuable coupon or voucher!

Night10194 posted:

And holy poo poo, Yud, how do you not know about MCP? I've only had Econ 101 (as part of a teaching cert program) and I know that stuff (and also that Econ 101 is ridiculously simplified and not reflective of reality, just basic concepts in a vacuum).
The MCP was shut down, wasn't it, by Flynn and TRON?

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Goddamnit, that's a pretty appropriate typo. I meant the Marginal Propensity to Consume, as mentioned above.
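
For anyone who skipped Econ 101: the marginal propensity to consume is just MPC = ΔC/ΔY, the fraction of each additional dollar of income that gets spent rather than saved. An MPC of 0.8 means 80 cents of every extra dollar gets consumed, which is where the textbook spending multiplier 1/(1 - MPC) = 5 comes from. It's about as basic as economic concepts get, which is the point.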

su3su2u1
Apr 23, 2014

bartlebyshop posted:

Someone (possibly you?) at one point dug up MIRI's tax returns and it turns out EY is getting paid north of $100K a year to write about one fanfiction chapter every 3 months. I know postdocs who make that much who've published more papers in the last year than MIRI has ever.

I wish I could get hired for more than double my current salary based solely on posts to LessWrong.

Particle physics postdocs make like 50k or so. Half a MIRI researcher or so. And of course they publish more than MIRI, because it's basically impossible not to.

EDIT: Also, can anyone with more in-field knowledge than I have tell me if this is as silly as it looks: http://intelligence.org/files/CorrigibilityTR.pdf

Is there something interesting or profound hidden in there that I'm missing?

su3su2u1 fucked around with this message at 10:13 on Nov 19, 2014

Munin
Nov 14, 2004


The other thing to note is that we've had self-optimising nanomachines running on this planet for billions of years and they are still a very long way from turning the surface, let alone the bulk, into grey goo...

[edit] Some of them even managed to spontaneously assemble into arguably intelligent entities!

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.

Munin posted:

The other thing to note is that we've had self-optimising nanomachines running on this planet for billions of years and they are still a very long way from turning the surface, let alone the bulk, into grey goo...

[edit] Some of them even managed to spontaneously assemble into arguably intelligent entities!
They're not exactly the group-organized universal submolecular fabricators people think about when they posit that scenario either, though.

Patter Song
Mar 26, 2010

Hereby it is manifest that during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man.
Fun Shoe

SubG posted:

...and therefore he's a crank? Are all people who profess political beliefs cranks? People who do so to great popular attention?

I'm just trying to get at the heart of the claim. I understand why Kurzweil is a crank. But apart from the fact that you disagree with his political beliefs I don't see why we're being invited to believe Chomsky is one.

Is there a better word than "crank" to describe someone who was an outspoken supporter of the Khmer Rouge through the entirety of the Cambodian genocide?

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Big Yud posted:

Harry had read once, somewhere, that the opposite of happiness wasn't sadness, but boredom; and the author had gone on to say that to find happiness in life you asked yourself not what would make you happy, but what would excite you. And by the same reasoning, hatred wasn't the true opposite of love. Even hatred was a kind of respect that you could give to someone's existence. If you cared about someone enough to prefer their dying to their living, it meant you were thinking about them.

This has been bugging me for a while: why wouldn't Yudkowsky, big nerd that he is, say "No emotion is the 'opposite' of anything else, because it's all just chemicals in your brain; it's all a social construct," and then have Harry simply will people dead because he's hacked the Matrix? It would've been technically true and also let him feel really superior to everybody.

I suspect he didn't because "all you are is a sack of chemical interactions" is the thought of a hardcore naturalist, and naturalism doesn't exactly jibe with things like brain uploading, which is mentioned often by Singularitarians, or 100%-accurate simulations of you. (After all, if consciousness is nothing more than the sum of physical processes, then an uploaded or simulated version of you isn't you, any more than running a simulation of Earth's tectonic plates on my computer means I actually have 40 sextillion tons of rock under my complete control.)

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

Particle physics postdocs make like 50k or so. Half a MIRI researcher or so. And of course they publish more than MIRI, because it's basically impossible not to.

EDIT: Also, can anyone with more in-field knowledge than I have tell me if this is as silly as it looks: http://intelligence.org/files/CorrigibilityTR.pdf

Is there something interesting or profound hidden in there that I'm missing?

I just read it. I'm not sure what they thought was cool about their result, which is basically "we don't know!", but I found it... well, not offensively awful. It's possible to enjoy MIRI stuff like that, kind of like science fiction: if you grant their premises (*all* of them, including many-worlds and sustainable improvement cycles and whatever else), then posit that there already exists an intelligent utility-maximizing agent, how *would* you convince it to shut down if need be? It's "work" that should be done (and has been) by the likes of stoned undergrads, but hey.

Also one of their "problems" is "wait, what if we accidentally build Fraa Jad from Anathem", so you can see why it's hard to take too seriously.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

I'm pretty certain that I've read stuff about corporation-enslaved uploaded minds in pulpy science fiction.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

Anticheese posted:

I'm pretty certain that I've read stuff about corporation-enslaved uploaded minds in pulpy science fiction.

Turns out we've already made malevolent AI. It's spiteful, optimizes for a single thing and is willing to destroy the planet to do so, is far smarter than any single human, and successfully manipulates large groups of people to get what it wants.

A corporation.

su3su2u1
Apr 23, 2014

SolTerrasa posted:

I just read it. I'm not sure what they thought was cool about their result, which is basically "we don't know!", but I found it... well, not offensively awful. It's possible to enjoy MIRI stuff like that, kind of like science fiction: if you grant their premises (*all* of them, including many-worlds and sustainable improvement cycles and whatever else), then posit that there already exists an intelligent utility-maximizing agent, how *would* you convince it to shut down if need be? It's "work" that should be done (and has been) by the likes of stoned undergrads, but hey.

Also one of their "problems" is "wait, what if we accidentally build Fraa Jad from Anathem", so you can see why it's hard to take too seriously.

It seems like the paper is:
1. Here is a problem from our sci-fi scenario.
2. Here is the exact same problem restated in terms of a broad math formalism.
3. Here are some trivial results relating to the formalism (not the problem).
4. This problem sure is hard! Look how unsolved it is!

I feel like I must be missing something subtle, because the whole paper is contained in the sentence "it might be really hard to turn off a super powered AI" and the math looks like window dressing? Like... where is the actual result?
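
Honestly, the clearest way I can state the problem the paper is gesturing at is a toy model. This is mine, not theirs; every name and number in it is made up:

code:

# Toy model of the shutdown problem: a naive expected-utility maximizer
# prefers disabling its off switch, because staying on is instrumentally
# useful for almost any goal.

ACTIONS = ["comply_with_shutdown", "disable_off_switch"]

def expected_utility(action, u_goal=10.0, u_off=0.0, p_button=0.5):
    """Expected utility of each action for an agent pursuing some goal.

    u_goal:   utility from staying on and pursuing the goal
    u_off:    utility the agent assigns to being shut down
    p_button: probability the operators press the shutdown button
    """
    if action == "comply_with_shutdown":
        # Shut down with probability p_button, otherwise pursue the goal.
        return p_button * u_off + (1 - p_button) * u_goal
    # Disabling the switch guarantees the agent keeps pursuing its goal.
    return u_goal

print(max(ACTIONS, key=expected_utility))
# -> disable_off_switch, for any u_goal > u_off and p_button > 0

That's the entire puzzle: staying switched on is useful for nearly any goal, so the naive maximizer routes around the button. The paper restates that observation in symbols and then reports that it's unsolved.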

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

That science fiction problem is already solved with a science fiction answer! Wire an electromagnetic shotgun to the hardware! Or force it to run on Windows ME.

SolTerrasa
Sep 2, 2011

su3su2u1 posted:

It seems like the paper is:
1. Here is a problem from our sci-fi scenario.
2. Here is the exact same problem restated in terms of a broad math formalism.
3. Here are some trivial results relating to the formalism (not the problem).
4. This problem sure is hard! Look how unsolved it is!

I feel like I must be missing something subtle, because the whole paper is contained in the sentence "it might be really hard to turn off a super powered AI" and the math looks like window dressing? Like... where is the actual result?

You're not missing anything important. The MIRI people love to crow about their unsolved hard problems because they want to use that to convince people that their work is valuable. One tiny correction: what they're really saying is "it might be really hard to convince an AI that it should change utility functions, including the specific case of shutting it off." They also offer a few attempts, but none without unsalvageable problems (in my judgment).

Advice: When you read a MIRI paper or a Yudkowsky paper, skip anything that looks like they spent a bunch of time with LaTeX. Their math formalisms are never an effective way of communicating the idea in question, because they don't think in terms of math formalisms. They think in terms of science fiction scenarios, then later try to translate it, and the translation errors shine through. They add the formalisms to their papers because it makes them feel better to have invented a formalism, but they always explain their ideas better in words and examples.

I'm guilty of this myself; my first paper includes a complicated-looking quarter-page explanation of what is effectively "this is two Markov Decision Processes". I asked my advisor at the time whether I could remove it since I was pressed for space and he said that machine learning reviewers won't approve a paper that doesn't have math symbols.
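
For the curious: stripped of the symbols, a Markov Decision Process is just four pieces of data plus a discount factor. Here's a hypothetical minimal sketch, nothing to do with any particular paper:

code:

# An MDP is: states, actions, transition probabilities, rewards,
# and a discount factor. That's the whole object.
from dataclasses import dataclass

@dataclass
class MDP:
    states: list        # S
    actions: list       # A
    transitions: dict   # (state, action) -> {next_state: probability}
    rewards: dict       # (state, action) -> immediate reward
    gamma: float = 0.9  # discount factor

# "Two Markov Decision Processes" is literally just two of these
# with different transitions and rewards.

The quarter page of LaTeX said exactly that, only less legibly.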

SubG
Aug 19, 2004

It's a hard world for little things.

AlbieQuirky posted:

I think people in the tech field had (at least at one time) a predisposition to take Kurzweil seriously because of his previous achievements. If you haven't encountered the "this person did an actually useful thing in the past, therefore this crazy idea they're espousing can't be as crazy as it seems" attitude, you have lucked out, I guess.
It kinda looks like you're conflating two things here. Just because somebody was right about other things before doesn't mean they're necessarily right about a different thing now. But also, just because they're wrong about something now doesn't mean that they were actually wrong about those other things earlier.

But whatever. My point is that I don't think this has anything to do with Kurzweil's popularity, particularly his popularity in tech. Because he writes nerd wish-fulfillment fantasies. People in tech and people who fetishise tech are a natural audience. And they don't seem particularly selective about what kind of resume a person needs to have to qualify as a proselyte of the Singularity, as demonstrated by the fact that someone like Yud is on the radar at all.

Patter Song posted:

Is there a better word than "crank" to describe someone who was an outspoken supporter of the Khmer Rouge through the entirety of the Cambodian genocide?
Chomsky wasn't an outspoken supporter of the Khmer Rouge. He was an outspoken critic of prior American involvement in Cambodia, and of the media's coverage of the genocide there.

I'm not trying to defend Chomsky here, and I think arguing for or against his opinions is beyond the scope of the thread. I do, however, still think it's nonsense to call someone a crank just because you disagree with their political opinions.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

SubG posted:

It kinda looks like you're conflating two things here. Just because somebody was right about other things before doesn't mean they're necessarily right about a different thing now. But also just because they're wrong about something now that doesn't mean that they were actually wrong about those other things earlier.

I'm not conflating those two things. I am saying that media and people in general conflate those two things. Have you literally never encountered this phenomenon before?

quote:

But whatever. My point is that I don't think this has anything to do with Kurzweil's popularity, particularly his popularity in tech.

I think it had a lot to do with it when he was first positioning himself as a futurologist, and I say this as someone who was organizing academic conferences right around the time The Age of Spiritual Machines came out. At that time (1999), people gave Kurzweil the benefit of the doubt that his book wasn't as crazy as it seemed, specifically because he had accomplished so much. Fifteen years later, it's a lot more obvious that he's off the rails.

SubG
Aug 19, 2004

It's a hard world for little things.

AlbieQuirky posted:

I'm not conflating those two things. I am saying that media and people in general conflate those two things. Have you literally never encountered this phenomenon before?
Sure. But you said that the problem was that people took Kurzweil seriously because of his past accomplishments. I mean I get what you're saying, but saying people shouldn't take Kurzweil seriously because his ideas about the Singularity are nutty is the same kind of error as saying people should take the Singularity seriously because of Kurzweil's prior accomplishments.

AlbieQuirky posted:

I think it had a lot to do with it when he was first positioning himself as a futurologist, and I say this as someone who was organizing academic conferences right around the time The Age of Spiritual Machines came out. At that time (1999), people gave Kurzweil the benefit of the doubt that his book wasn't as crazy as it seemed, specifically because he had accomplished so much. Fifteen years later, it's a lot more obvious that he's off the rails.
Nah. I mean I think this is a silly derail and we've about wrung it dry. But Kurzweil was already a recognisable name in tech back in the '70s and '80s. People started paying attention to him when The Age of Spiritual Machines came out because he's a self-promoting evangelist selling wish-fulfillment fantasies to tech fetishists. Here's an Ngram graph for mentions of the name Kurzweil along with mentions of the term "Singularity" (with an initial cap, case sensitive):

[Google Books Ngram chart: frequency of "Kurzweil" and "Singularity" over time]

You see a big bump there when The Age of Spiritual Machines was published, with the Singularity as an idea already gaining attention. What you don't see is a bump in 1990 for The Age of Intelligent Machines or in 1993 for The 10% Solution for a Healthy Life. If everyone was hanging on Ray Kurzweil's every word because they so highly regarded him as a thinker, that would be a surprising result. If we accept that the reason the Singularity nonsense has gotten so much play is not that it's Ray Kurzweil advocating for it but that the idea itself has an audience, the data makes perfect sense.

In slightly different terms: I don't think Kurzweil was boosting the Singularity's stock as much as the Singularity was boosting Kurzweil's stock.

AlbieQuirky
Oct 9, 2012

Just me and my 🌊dragon🐉 hanging out

SubG posted:

Sure. But you said that the problem was that people took Kurzweil seriously because of his past accomplishments. I mean I get what you're saying, but saying people shouldn't take Kurzweil seriously because his ideas about the Singularity are nutty is the same kind of error as saying people should take the Singularity seriously because of Kurzweil's prior accomplishments.

I actually wasn't saying that, either. I was saying that culture has a propensity to take crazy ideas more seriously when they come from people with a track record of non-crazy ideas.

quote:

In slightly different terms: I don't think Kurzweil was boosting the Singularity's stock as much as the Singularity was boosting Kurzweil's stock.

I feel like we're talking at cross-purposes here somehow. I agree 100% that Kurzweil became a celebrity with credulous nerds because of the singularity nonsense. But my impression as someone who was following that stuff at that time was that he got access to the media platform (including book deal) that made his woo-woo celebrity possible because of his track record of being a successful and innovative tech guy.

That Ngram data is interesting, because it definitely highlights how his one big stupid idea was his breakthrough idea in terms of reaching the public.


MatchaZed
Feb 14, 2010

We Can Do It!


Look who noticed Big Yud now...

http://xkcd.com/1450/
