SolTerrasa
Sep 2, 2011

The Sequences - Sequence 1: How To Actually Change Your Mind - Subsequence 1: Politics Is The Mind-Killer - Post 4: The Scales of Justice, the Notebook of Rationality
Full post here, but it's short: http://lesswrong.com/lw/h1/the_scales_of_justice_the_notebook_of_rationality/

Here, Yudkowsky makes a good post, if you're interested in arguing well about politics. It's too long by half, but I'll cut the good bits out for you:

quote:

If a given reactor is likely to melt down, this seems like a 'point against' the reactor, or a 'point against' someone who argues for building the reactor. And if the reactor produces less waste, this is a 'point for' the reactor, or a 'point for' building it. So are these two facts opposed to each other? No. In the real world, no. These two facts may be cited by different sides of the same debate, but they are logically distinct; the facts don't know whose side they're on.

...

But studies such as the above show that people tend to judge technologies—and many other problems—by an overall good or bad feeling. If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower. This means getting the wrong answer to physical questions with definite factual answers, because you have mixed up logically distinct questions.

Uncited "studies" make me uncomfortable from a man who believes in :biotruths:, but this is not ludicrous enough that I'm going to try to prove him wrong.

I won't even mock him for his :words: here, because it's not that unpleasant to read, really.


Lottery of Babylon
Apr 25, 2012

STRAIGHT TROPIN'

SolTerrasa posted:

it's 740 pages. But I'm most of the way through it

:catstare:

SolTerrasa posted:

Uncited "studies" make me uncomfortable from a man who believes in :biotruths:, but this is not ludicrous enough that I'm going to try to prove him wrong.

It sounds like he's just saying "the halo effect exists", which isn't wrong but isn't a brilliant insight either.

It's the standard dilemma for Yudkowsky's writing: new or true, pick at most one.

Lottery of Babylon fucked around with this message at 04:41 on Sep 7, 2014

SolTerrasa
Sep 2, 2011

Lottery of Babylon posted:

It sounds like he's just saying "the halo effect exists", which isn't wrong but isn't a brilliant insight either.

Yep, so brace yourselves for...

The Sequences - Sequence 1: How To Actually Change Your Mind - Subsequence 1: Politics Is The Mind-Killer - Post 5: Correspondence Bias
Full post here: http://lesswrong.com/lw/hz/correspondence_bias/

tl;dr: "when I kick a puppy, it's because I had a bad day. When you kick a puppy, it's because you're a puppy-kicker." Or, if you're talking to someone who knows this stuff: "Fundamental Attribution Error exists."

He does go off the rails a bit where he attempts to explain early religion and alchemy with the same inherent bias.

quote:

It seems quite intuitive to explain rain by water spirits; explain fire by a fire-stuff (phlogiston) escaping from burning matter; explain the soporific effect of a medication by saying that it contains a "dormitive potency". Reality usually involves more complicated mechanisms: an evaporation and condensation cycle underlying rain, oxidizing combustion underlying fire, chemical interactions with the nervous system for soporifics. But mechanisms sound more complicated than essences; they are harder to think of, less available. So when someone kicks a vending machine, we think they have an innate vending-machine-kicking-tendency.

It's like... you know the saying "when all you have is a hammer, everything looks like a nail?" It's like Yudkowsky is walking through a hardware store (the works of Kahneman) and trying out every hammer he can find, one per day.

And then he has this:

quote:

Suppose I gave you a control with two buttons, a red button and a green button. The red button destroys the world, and the green button stops the red button from being pressed. Which button would you press? The green one.

Can you guess what the buttons correspond to, and why they're relevant here, and why the question is worth asking? If you were following the article, you might notice that we had FAE, early religion, alchemy, and then FAE again.

quote:

And yet people sometimes ask me why I want to save the world.

The last three words link to a no-longer-extant file on the MIRI website. 404. Right, bias, religion, alchemy, bias again... So of course, the buttons are "run MIRI" or "don't run MIRI", and they correspond to a 100% certainty of not-destroying-the-world and a 100% certainty of destroying the world. Good job, Yudkowsky, good job.

Words it takes to express his point: 21 (or 4)
Words he used: 759
Efficiency Ratio: 0.027

SolTerrasa
Sep 2, 2011


I kept going, but now I have to stop for tonight. My notes on anything Yudkowsky wrote or said past page 205 or so are now just the phrase "gently caress you gently caress you gently caress you" over and over again, annotated with what page exactly led to this. Some highlights, as an apology for being too goddamn fed up with this poo poo to post on it tonight:

-Yudkowsky argues that economics, as a fully general field, is exclusively about the production of goods, and treats innovation and invention as black boxes.

-Hanson points out on every post that he is still waiting for Yudkowsky to make his argument. Once Yudkowsky claims to have done so, Hanson rewrites the argument so that it is equally valid for proving that we must all donate to the NYC City Planning Department. Yudkowsky agrees that perhaps he is not done making his argument.

-Yudkowsky argues against the most brilliant example of learning AI that there is, specifically calling out the design of semantic knowledge I used in my system as "worthless", so you can bet I'll have something to say about that. gently caress you Yudkowsky, you try writing something.

:edit: mine is not the most brilliant thing I was talking about, I see that that's unclear. That's Cyc. http://en.m.wikipedia.org/wiki/Cyc It's not going to go anywhere, but it was a great, great idea, and I got my ideas about semantic representation from it.

-Yudkowsky believes you can run a human-equivalent brain on a home computer from 1996. I'll just let that stand on its own. You can barely run a NETWORK STACK on a home computer from 1996.

SolTerrasa fucked around with this message at 09:59 on Sep 7, 2014

Morkyz
Aug 6, 2013

SolTerrasa posted:

Words it takes to express his point: 21 (or 4)
Words he used: 759
Efficiency Ratio: 0.027

To be fair, seeing that as just "the halo effect exists" assumes you already know the halo effect exists. I'm assuming most LW readers are moderate-to-severely nerdy teenagers who don't really know much about philosophy.

Djeser
Mar 22, 2013


it's crow time again

You don't have to assume: in the pony fanfiction, they were openly disparaging philosophy as being solely taught for status reasons and useless and boring.

This, despite the majority of Yudkowsky's output being based on Wikipedia-level knowledge of philosophical concepts.

Tunicate
May 15, 2012

SolTerrasa posted:

-Yudkowsky believes you can run a human-equivalent brain on a home computer from 1996. I'll just let that stand on its own. You can barely run a NETWORK STACK on a home computer from 1996.

As someone who struggles to emulate 91 cells at once, gently caress you yudkowsky.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

Djeser posted:

You don't have to assume: in the pony fanfiction, they were openly disparaging philosophy as being solely taught for status reasons and useless and boring.

This, despite the majority of Yudkowsky's output being based on Wikipedia-level knowledge of philosophical concepts.

Well, considering they recreated a pretty Judeo-Christian Eschatology entirely through mortal fear of death and replacing 'God' with 'Friendly AI' while claiming they're completely above and beyond religion, would it really be surprising that they do basic philosophy while looking down on the liberal arts?

I'm really tempted to talk to one of my professors about writing a religious studies paper on this particular cult, and whether he thinks it'd be viable/publishable. It's just fascinating to me to see the denial of any religious thought in a faith community, and yet see that their eschaton is so close to the classic one formed by the dominant religion in their home region. Religious thinking arising from a professed complete lack of religion.

Qwertycoatl
Dec 31, 2008

SolTerrasa posted:

-Yudkowsky argues against the most brilliant example of learning AI that there is, specifically calling out the design of semantic knowledge I used in my system as "worthless", so you can bet I'll have something to say about that. gently caress you Yudkowsky, you try writing something.

He has tried. The first step, of course, is to invent a new programming language, better than all other languages, that's capable of expressing his genius. Behold!

Flare

quote:

A new programming language has to be really good to survive. A new language needs to represent a quantum leap just to be in the game. Well, we're going to be up-front about this: Flare is really good. There are concepts in Flare that have never been seen before. We expect to be able to solve problems in Flare that cannot realistically be solved in any other language. We expect that people who learn to read Flare will think about programming differently and solve problems in new ways, even if they never write a single line of Flare. We think annotative programming is the next step beyond object orientation, just as object orientation was the step beyond procedural programming, and procedural programming was the step beyond assembly language.

quote:

It may be possible to implement limited AI without Flare, but it won't feel natural. To start an innovative AI industry, there has to be a working AI language.

quote:

XML is to Flare what lists are to LISP, or hashes to Perl

A lot of words were written, but naturally it was abandoned without anything of value being produced.

SolTerrasa
Sep 2, 2011

Qwertycoatl posted:

He has tried. The first step, of course, is to invent a new programming language, better than all other languages, that's capable of expressing his genius. Behold!

A lot of words were written, but naturally it was abandoned without anything of value being produced.

This is amazing, and thank you for showing us. I have three things to say about that:

Number one is that no matter what you do, no programming language will ever free you from the necessity of expressing your ideas completely instead of relying on intuition and circuitous logic. Yudkowsky will never accomplish anything, in any language, because his problems go much deeper than language choice.

Number two is that most real innovations in AI thus far have NOT come from us specifying some obvious algorithm, derived from human cognition, in a clever way, which is all a new language would help with. Rather they have come from someone implementing a clever but novel algorithm in an obvious way. Or sometimes an obvious but novel algorithm in an obvious way.

Number three is that you should NEVER write a new programming language, full stop. I have thought that was the solution to my problems three times, actually finished one once, and never yet has it been right. Terrible idea.

Syd Midnight
Sep 23, 2005

This is my simplistic take on all of this. The core (and very, very human) motivation behind all of Yudkowski's beliefs is that he wants to resurrect his dead father so he can have all of those father/son bonding moments he didn't have IRL. Any thought, concept, or fact that makes a Yudkowski father/son reunion impossible will be rejected out of hand, even if he has to write 500 pages to do it.

He and his following are what happens to certain western atheists who have such a desperate psychological need for religion that they devote themselves to the mental gymnastics required to create Heaven, Hell, Sin, Afterlife, Judgement Day etc. using bleep boop computers, because computers already exist, so that alone makes their beliefs Less Wrong than those of other religions, esp. the christard sheeple.

Roko's Basilisk is what happens when Yudkowski leaves his followers unsupervised for a few days. Without him enforcing his Daddy Rule they will inevitably follow his stated logic to its conclusion and construct a Robot Jehovah whose pride and anger terrifies them until Yudkowski returns like loving Nerd Moses coming down from the mountain to yell at them and delete everything, not because it is illogical but because the concept is at odds with his desired afterlife.

Singularity nerds, like a lot of other religious folk, seem to base everything on the certainty of they themselves being uploaded by Computer Jesus or whatever. I have yet to hear one say "Nobody living today will make it, I will die and be forgotten, but our hard work today may make it possible for future generations to save themselves." They have no interest in life extension or immortality if it doesn't somehow apply to them personally, which is why Believers usually predict the Singularity (or the Rapture) will occur any day now and within [(human lifespan) - (their age)] years at most.

SolTerrasa
Sep 2, 2011

RevSyd posted:

This is my simplistic take on all of this. The core (and very, very human) motivation behind all of Yudkowski's beliefs is that he wants to resurrect his dead father so he can have all of those father/son bonding moments he didn't have IRL. Any thought, concept, or fact that makes a Yudkowski father/son reunion impossible will be rejected out of hand, even if he has to write 500 pages to do it.

This is weird, why do you believe this?

Djeser
Mar 22, 2013


it's crow time again

SolTerrasa posted:

Number three is that you should NEVER write a new programming language, full stop. I have thought that was the solution to my problems three times, actually finished one once, and never yet has it been right. Terrible idea.

I have no programming experience, but I know a few basic things about languages. Many nerds have thought, "man, if only I made a new language, then no one would lie/everything would be unambiguous/everyone could communicate with each other." How many of these languages have caught on and become useful?

Zero.

The point is, if you have a problem, try solving it, don't try to invent a new language, programming or otherwise.

The Time Dissolver
Nov 7, 2012

Are you a good person?

RevSyd posted:

This is my simplistic take on all of this. The core (and very, very human) motivation behind all of Yudkowski's beliefs is that he wants to resurrect his dead father so he can have all of those father/son bonding moments he didn't have IRL. Any thought, concept, or fact that makes a Yudkowski father/son reunion impossible will be rejected out of hand, even if he has to write 500 pages to do it.

The Vosgian Beast posted:


:siren:The Rules:siren:

3) No amateur psychology hour
If there's one thing we can learn from Less Wrong, it's not pretending to be an expert in a field you are very much not an expert in.

HMS Boromir
Jul 16, 2011

by Lowtax

Djeser posted:

How many of these languages have caught on and become useful?

Does Esperanto count?

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

Did it catch on? :v:

HMS Boromir
Jul 16, 2011

by Lowtax
To the tune of a few hundred thousand people currently. It's not a lingua franca by any means but I think your requirements for a constructed language catching on are more stringent than mine if that doesn't constitute catching on.

Cardiovorax
Jun 5, 2011

I mean, if you're a successful actress and you go out of the house in a skirt and without underwear, knowing that paparazzi are just waiting for opportunities like this and that it has happened many times before, then there's really nobody you can blame for it but yourself.
There are probably more people who speak conversational elvish, so...

LeastActionHero
Oct 23, 2008
I don't know about Yudkowsky, but resurrecting his dead father is the literal stated goal of futurist and singularity enthusiast Ray Kurzweil.

Syd Midnight
Sep 23, 2005

poo poo.. I probably confused the two of them. They have the same level of credibility and usefulness, so no wonder. My bad.

The Vosgian Beast
Aug 13, 2011

Business is slow
Yudkowsky did have a younger brother who died suddenly a few years back.

I'm not gonna make fun of him for that or discuss it too heavily, because that is pretty sad honestly.

nonathlon
Jul 9, 2004
And yet, somehow, now it's my fault ...

RevSyd posted:

Singularity nerds, like a lot of other religious folk, seem to base everything on the certainty of they themselves being uploaded by Computer Jesus or whatever. I have yet to hear one say "Nobody living today will make it, I will die and be forgotten, but our hard work today may make it possible for future generations to save themselves." They have no interest in life extension or immortality if it doesn't somehow apply to them personally, which is why Believers usually predict the Singularity (or the Rapture) will occur any day now and within [(human lifespan) - (their age)] years at most.

There was an amusing paper by Patti Maes that looked at when futurists predicted the Singularity would arrive, correlated with the futurist's age at the time. Answer? Futurists believe the Singularity will arrive when they are 65, almost universally.

Night10194
Feb 13, 2012

We'll start,
like many good things,
with a bear.

outlier posted:

There was an amusing paper by Patti Maes that looked at when futurists predicted the Singularity would arrive, correlated with the futurist's age at the time. Answer? Futurists believe the Singularity will arrive when they are 65, almost universally.

Is it something you can link to? I'd be interested in reading it.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Djeser posted:

I have no programming experience, but I know a few basic things about languages. Many nerds have thought, "man, if only I made a new language, then no one would lie/everything would be unambiguous/everyone could communicate with each other." How many of these languages have caught on and become useful?

Zero.

The point is, if you have a problem, try solving it, don't try to invent a new language, programming or otherwise.

Well, remember that every programming language we actually use was something someone thought was a good idea. We're still working with the major works of folks like Dennis Ritchie (C), John McCarthy (Lisp), Alan Kay (Smalltalk) and when we're not working with their original designs we're working with close descendants of their designs like Ruby, Python, Java, etc. So it's not as if in principle you can't say "I want to build a better mousetrap" and have the whole world beat a path to your specifications document. (Mind that these were folks with corporate and/or academic backing. They had friends in high places.)

Unfortunately I think their work is pretty pedestrian and I can't really tell how it solves any problems that {Lisp, Python, Haskell, Prolog} didn't solve. (where each of those has its own idioms to do the things he claims have never been done before) The "cleverest" features of Flare that I can distinguish (I haven't read their whole site) basically involve complicating the model for how data works -- why would *variables* have fields and operations separate from their data? They haven't really posted evidence that they make sense and I get kind of a COBOL vibe from how they stuff tons and tons of semantics into the core language under the preconception that they can implement the semantics they want better than an end-user or library author would (because their language is horribly inexpressive) -- even though you could probably hack it out with about 30 lines of Lisp in any particular Lisp!

Part of the reason I keep bringing up Lisp is because they have a dodgy fixation on XML which pretty much maps to the Lisp fixation on S-expressions, except that S-expressions are easy to traverse, easy to implement, and an easy representation for the computer to think about.
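For the non-Lispers reading along, a quick Python sketch of why s-expressions are so easy to traverse: nested lists stand in for cons cells, and the whole "walker" is one isinstance check. (The function name is mine, purely for illustration.)

```python
# An s-expression is just nested lists; (+ 1 (* 2 3)) becomes:
sexpr = ["+", 1, ["*", 2, 3]]

def count_atoms(node):
    # Every node is either a list (recurse into its children)
    # or an atom (count it). That's the entire traversal model.
    if isinstance(node, list):
        return sum(count_atoms(child) for child in node)
    return 1

print(count_atoms(sexpr))  # 5 atoms: +, 1, *, 2, 3
```

Try doing that over XML and you're immediately juggling elements, attributes, text nodes, and namespaces instead of one uniform shape.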

There's also the large number of old *solved* problems that they haven't solved. They haven't even tried dealing with the null reference. They've tried dealing with operator overloads but picked a super inelegant system based on assigning each potentially-interacting type a number and performing calculations on it. By comparison, Haskell has a particularly elegant (and highly portable!) solution: in OOP it maps to taking an extra, implicit argument which is a table of related operations "between" the two types (with a name like 'Num' for numeric, 'Ord' for ordered, etc), inferred and checked at compiletime for sanity. This problem's been solved for a pretty long time but if you have the "everything is an instancemethod!" OOP brainworm like Team Flare you might not realize it.
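To make the dictionary-passing idea concrete, here's a rough Python sketch of what Haskell does implicitly: the "Num"-style table of operations becomes an explicit extra argument. All the names here (the tables, square_plus) are invented for illustration; in Haskell the compiler infers and passes the table for you, checked at compile time.

```python
# A "typeclass instance" is just a table of related operations.
int_num = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
# A second instance: the same operations, but modulo 7.
mod7_num = {"add": lambda a, b: (a + b) % 7,
            "mul": lambda a, b: (a * b) % 7}

def square_plus(num, x, y):
    # Generic over any type of number: the caller supplies the
    # table, so no numbering scheme or instancemethods needed.
    return num["add"](num["mul"](x, x), y)

print(square_plus(int_num, 3, 1))   # 10
print(square_plus(mod7_num, 3, 1))  # 3, since (9 mod 7) + 1 = 3
```

Same function body, two different arithmetics, zero calculations on magic type numbers.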

It's not too surprising though because their breadth of programming language experience seems about as shallow as a pond. They love listing the languages that influenced them but they basically ignore functional programming and logic programming in favor of a roughly machine language -> assembler -> C -> C++ -> Python history. That comes about both in their design choices, which look a lot like the languages that influenced them, and in the problems they choose to tackle. They don't really care about scalability, taking advantage of multiple machines, efficient representation of data (XML is our database!) -- the same concerns that practically every 90s "hashtables!" "global interpreter lock!" "JSON!" scripting language ignored until they started taking off!

Er, in short, it's a pretty ugly Python with a really confusing model for how data works and a lot of poor interpretations of features Lisp had in the 1960s. Most of the reason it thinks it's new is because it's pretty ignorant of history, and if you want an actual head trip go learn Shen or Idris or something. (both functional languages with extreme logic programming influence)

Tunicate
May 15, 2012

APL actually manages to make some things really easy, at the cost of being essentially a write-only language.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Tunicate posted:

APL actually manages to make some things really easy, at the cost of being essentially a write-only language.

A friend of mine who works in APL says it's not that bad when you get used to it!

Personally I think the benefits seem kind of dubious over a functional language with HOFs: it's the "put all the semantics in the language" vs "let the programmer say what the semantics are" tradeoff: friend I mentioned says it's a pain in the rear end to extend and he's not sure a Lisp DSL wouldn't do better.

big scary monsters
Sep 2, 2011

-~Skullwave~-

Krotera posted:

It's not too surprising though because their breadth of programming language experience seems about as shallow as a pond. They love listing the languages that influenced them but they basically ignore functional programming and logic programming in favor of a roughly machine language -> assembler -> C -> C++ -> Python history.

Yeah it's funny this. I would have completely pegged LWers as being likely Prolog fanboys.

Joshlemagne
Mar 6, 2013

Djeser posted:

I have no programming experience, but I know a few basic things about languages. Many nerds have thought, "man, if only I made a new language, then no one would lie/everything would be unambiguous/everyone could communicate with each other." How many of these languages have caught on and become useful?

Zero.

The point is, if you have a problem, try solving it, don't try to invent a new language, programming or otherwise.

You seem to be making the common mistake that pretty much every non-programmer makes in thinking that programming languages are anything like spoken languages. They're not. Programming languages are just tools. And fundamentally they're all the same. There's nothing you can do in one that you can't do in any other one. It's just that different languages might make it conceptually harder or easier for you depending on how you as an individual solve problems and what specific problem you're trying to solve. But even when programming languages are made for a specific problem, it's generally known what that problem entails and what tools would make it easier to solve it. To try and create a programming language for working with (presumably strong) AI is like re-purposing an oxcart for a specific animal, but you don't know what it looks like or how much it can pull or if it's wholly mythological.

It's a bit like if I said I was going to revolutionize the field of architecture and allow buildings to be constructed using non-euclidean geometries. And I was going to do this by inventing a new kind of screwdriver.

Your fundamental point is correct, though. They aren't actually doing anything to solve the problem.

nonathlon
Jul 9, 2004
And yet, somehow, now it's my fault ...

Night10194 posted:

Is it something you can link to? I'd be interested in reading it.

Haven't got a link but it's spawned a page on Wikipedia: Maes–Garreau law

SolTerrasa
Sep 2, 2011

Joshlemagne posted:

To try and create a programming language for working with (presumably strong) AI is like re-purposing an oxcart for a specific animal, but you don't know what it looks like or how much it can pull or if it's wholly mythological.

Oh poo poo, I wonder if that's why! If I was Yudkowsky, and I had a hard-on for being the only one to work with AI ever because anyone else will totally gently caress it up and kill us all, and I knew without a doubt I would be the first one to get it done, maybe I *would* invent a new language so no one except me and my comrades could understand the AI once it was done.

Qwertycoatl
Dec 31, 2008

I think the most likely reason for him making a new language is that it provides a source of activity which doesn't depend on actually having a clue how to write an AI.

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Krotera posted:

Most of the reason it thinks it's new is because it's pretty ignorant of history

So it really is LessWrong the language.

bewilderment
Nov 22, 2007
man what



SolTerrasa posted:

Oh poo poo, I wonder if that's why! If I was Yudkowsky, and I had a hard-on for being the only one to work with AI ever because anyone else will totally gently caress it up and kill us all, and I knew without a doubt I would be the first one to get it done, maybe I *would* invent a new language so no one except me and my comrades could understand the AI once it was done.

I took a look at the language samples, and even Yudkowsky doesn't seem to know how his language works, other than saying "it should be like Python, but better". His definition of better also seems to translate to "more difficult to understand". It's also just kind of silly in general.

Which is exactly what Krotera said. Oops.

Djeser
Mar 22, 2013


it's crow time again

bewilderment posted:

His definition of better also seems to translate to "more difficult to understand".


Pavlov posted:

So it really is LessWrong the language.

Rides Naked
Jun 4, 2006

Program, Whale, Program

big scary monsters posted:

Yeah it's funny this. I would have completely pegged LWers as being likely Prolog fanboys.

But this would actually make some perverse sense given that the whole 5th generation computer thing in the 80s was predicated on computers running Prolog with built-in AI functionality or whatever, right? And even though we all know how that turned out (well, parallelism is a good idea) it would at least be a logical place from which to draw inspiration.

Which no doubt means it has been ignored by LW.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Pavlov posted:

So it really is LessWrong the language.

Don't forget "solving problems that have already been solved, but badly."

EDIT: Oh, one other comment re Djeser. When possible, programming languages really are pretty unambiguous: it's not silly or utopian to expect yours to be. Even if there's ambiguity in your spec, your interpreter is probably deterministic, so even languages like PHP whose documented behavior is crazy and conflicting are "unambiguous" if there's only one major implementation. Think of it like this: a language can't be ambiguous if there's only one speaker and he knows what he means.

Crazy utopian goals for programming languages usually fall along the lines of "you will be able to write programs you never imagined possible!" "your programs will be faster than a speeding bullet!" and "your programs will be so concise you will be able to write anything you want!" No programming language is capable of enabling programmers to express things they themselves don't understand and the hardware enforces a pretty hard limit on the second thing (you're probably not going to one-up handwritten, carefully-tailored C, although there's the argument that some languages are much better at expressing certain fast algorithms than C, and there are more generally-applicable tricks like free-consolidating GC algorithms that mean you can one-up uncareful C sometimes), but the third one is a pretty active area of attention for a lot of people: concision and ease of understanding are probably two of the most favored attributes in modern programming and a lot of researchers are trying really hard to make programming languages correspond to what they believe an intuitive way to think about problems might be. People usually call concision and ease-of-understanding "expressivity" together.

One example of a language that really cares about expressivity is Lisp: the pitch for Lisp is that it gives you a really simple vocabulary to represent transformations from instances of any given kind of entity to any other kind without getting in your way. You can, for instance, write a function to turn any operation into a version that does what it does twice, and that looks like this: (defn twice [f] #(f (f %))). There isn't very much cruft and how it works basically amounts to substitution, which anyone can understand without too much trouble.
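For anyone who doesn't read Lisp, the same idea sketched in Python (names mine):

```python
def twice(f):
    # Turn any one-argument function into a version that
    # applies it twice. Pure substitution, nothing fancy.
    return lambda x: f(f(x))

add_one = lambda n: n + 1
print(twice(add_one)(5))  # 7
```

Same trick, a little more ceremony; the point is that higher-order functions like this are table stakes, not a Flare-grade breakthrough.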

Some languages take different approaches -- while Lisp, ML, and other lambda calculus languages are designed around straightforward abstraction, Perl, bash, and Ruby (for instance) are concerned with syntax that's as short as possible disregarding the consequences of what it takes to get there, and COBOL and APL, because being general about things is harder to understand than being specific about them, are concerned with serving very specific use cases at the expense of others.

I'd say Flare's very much a language in the first category, although it occasionally presents itself as a member of the third category. Team Flare seems to believe that expressivity literally changes the kind of programs you're capable of writing even when the issue is domain-specific knowledge, not the crudity of the environment. It's almost as if (I say almost!) they're trying use poor expressivity as an excuse for why they haven't written the brilliant AI they've supposedly conceptualized in full!

(They also assert somehow that specific features of Flare make it easier to write AI in, due to mystical fundamental connections to how strong AI works, but I'm pretty sure that's bullshit: they don't know how strong AI works any more than the rest of the world does, and the features they're emphasizing haven't been shown to be a good idea in any case that would lead us to suspect they'd be useful for AI, period.)

Krotera fucked around with this message at 02:32 on Sep 8, 2014

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that
Well, I can understand why strong AI enthusiasts might want their own language dedicated to the task of strong AI. The whole ideal they've built up is that an AI will quickly start programming itself after some initial bootstrapping. We don't do modification at the machine-instruction level anymore because current hardware is optimized on the assumption that instructions loaded into memory don't change during runtime, so modifications need to happen at a higher level of abstraction. If that's the case, then you probably want a language that's as easy for your AI to work with as it is for you, because the AI will probably end up using some form of that language when modifying itself. That means you probably want good support for reflection and metaprogramming and hot-swapping and other programming words it doesn't matter if you know the meaning of.
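A toy illustration of that reflection/hot-swapping idea in Python (nothing Flare-specific; `Agent` and `self_modify` are names I made up):

```python
import types

class Agent:
    def step(self):
        return 1

    def self_modify(self, new_source):
        # Compile new source and rebind the method on this instance
        # at runtime -- the kind of reflection/metaprogramming
        # support the post is talking about.
        namespace = {}
        exec(new_source, namespace)
        self.step = types.MethodType(namespace["step"], self)

agent = Agent()
assert agent.step() == 1
agent.self_modify("def step(self):\n    return 2")
assert agent.step() == 2  # behavior changed without restarting
```

Real languages differ a lot in how gracefully they support this; Python merely tolerates it.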

You also probably want what's called a "safe" language, which would help keep your AI from erroring out during experimentation. You could technically do that in any language, but if you did it in, say, C, your agent would probably end up killing itself more often than it does improving itself.
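A sketch of that safe-experimentation idea in Python (`try_improvement` and the candidate sources are invented for illustration): run the candidate modification in a try block and fall back to the current version if it blows up.

```python
def try_improvement(current_fn, candidate_source, test_input):
    """Adopt a candidate replacement only if it compiles and
    survives a smoke test; otherwise keep the current version."""
    namespace = {}
    try:
        exec(candidate_source, namespace)
        candidate = namespace["f"]
        candidate(test_input)  # smoke-test before adopting
        return candidate
    except Exception:
        return current_fn  # the experiment failed; agent survives

f = lambda x: x * 2
# A broken candidate (syntax error) is rejected, old version kept:
f = try_improvement(f, "def f(x):\n    return x /", 3)
assert f(3) == 6
# A working candidate is adopted:
f = try_improvement(f, "def f(x):\n    return x * 3", 3)
assert f(3) == 9
```

In C the equivalent failure mode isn't a caught exception, it's a segfault that takes the whole process down, which is the point being made.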

Lisp does some of this stuff. The way you define data and the way you define code is very similar -- they call this 'homoiconicity', which just means you can treat your code as data and tweak it in some circumstances. But with the way Lisp is actually defined and implemented these days, it's not actually all that good for the kind of self-modifying machines that LessWrongers want. (It's also a major loving eyesore to look at, holy poo poo.)
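Python isn't homoiconic, but its `ast` module lets you do a clunkier version of the same trick, which makes the contrast clear: parse code into a data structure, tweak the structure, recompile.

```python
import ast

# Treat code as data: parse an expression into a syntax tree...
tree = ast.parse("x + 1", mode="eval")
# ...edit the tree directly, swapping the constant 1 for 10...
tree.body.right = ast.Constant(value=10)
ast.fix_missing_locations(tree)
# ...and compile the tweaked tree back into runnable code.
code = compile(tree, "<tweaked>", "eval")
assert eval(code, {"x": 5}) == 15
```

In Lisp the "tree" is just an ordinary list you quote and manipulate with the same operations you use on any other data, no separate ast library required.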

All that said, from what I saw Flare does none of this stuff and is actually just Yudkowsky being Yudkowsky.

Pavlov fucked around with this message at 03:47 on Sep 8, 2014

big scary monsters
Sep 2, 2011

-~Skullwave~-
It's sort of interesting to consider what might be needed in a language/architecture to be robust not only to bad variables and inputs but also to on-the-fly random changes to the code in memory, where behaviour that would usually result in segmentation faults and buffer overflows is allowed by design. Maybe that's more freedom than a self-modifying AI really needs, but on the other hand, who knows what crazy optimisations and computational tricks it might stumble across if it's allowed to modify anything at all? The whole point of evolutionary programming is that it comes up with completely non-obvious solutions.

Although I guess probably you don't make it robust at all and only evolve from instances that don't immediately kill themselves.
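That survivor-only approach is basically this loop (a toy sketch in Python; `mutate` and `fitness` here are invented stand-ins):

```python
import random

def evolve(genome, mutate, fitness, generations=200):
    """Keep only mutants that run without crashing; among the
    survivors, keep the fitter one. No robustness needed -- broken
    candidates just die and we move on."""
    best = genome
    for _ in range(generations):
        candidate = mutate(best)
        try:
            if fitness(candidate) > fitness(best):
                best = candidate
        except Exception:
            pass  # mutant immediately killed itself; discard it
    return best

random.seed(0)
# Most mutations nudge a number; some corrupt the genome entirely.
mutate = (lambda xs: [x + random.choice([-1, 1]) for x in xs]
          if random.random() > 0.2 else xs + [None])
fitness = lambda xs: sum(xs)  # raises TypeError on corrupted genomes

result = evolve([0, 0, 0], mutate, fitness)
assert all(isinstance(x, int) for x in result)  # corrupted mutants never survive
assert fitness(result) >= 0  # fitness only ever improves
```

The try/except is doing all the "robustness" work: nothing stops the mutation from producing garbage, garbage just never gets selected.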

Dabir
Nov 10, 2012

So... why wouldn't you just have the AI copy its own source code, make whatever modifications it likes to that, compile it, run that and see if it works, then copy all its memory into the new copy?


Pavlov
Oct 21, 2012


Dabir posted:

So... why wouldn't you just have the AI copy its own source code, make whatever modifications it likes to that, compile it, run that and see if it works, then copy all its memory into the new copy?

Because a strong AI program is going to be quite large, and compiling a large program can take hours. If the AI were going to make a bunch of changes and roll them out all at once, this could be an OK approach. If it's going to make small, arbitrary changes during runtime, recompiling the whole system would be absurdly inefficient. Now, if your AI has a really flexible, modular makeup, you could recompile a targeted portion of the system without needing to recompile the entire thing, and swap that recompiled portion into memory. But that, again, is a feature you want good language support for. I believe the Erlang language is good for that, as it was designed to run telecommunication systems that aren't allowed to have downtime, so they need to be updated during operation. Erlang is a very slow language for crunchy AI stuff, though, so YMMV.
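A crude stand-in for that targeted recompile-and-swap in Python, using `importlib.reload` (the module name `brain` is made up; Erlang's hot code loading is the real deal, this is just a sketch):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # force reload to recompile from source

# Write a tiny "module" to disk and load it into the running process.
workdir = tempfile.mkdtemp()
mod_path = pathlib.Path(workdir) / "brain.py"
mod_path.write_text("def think():\n    return 'v1'\n")
sys.path.insert(0, workdir)

import brain
assert brain.think() == "v1"

# "Recompile" just this one module and swap the new version in,
# without restarting anything else.
mod_path.write_text("def think():\n    return 'v2'\n")
importlib.invalidate_caches()
importlib.reload(brain)
assert brain.think() == "v2"
```

Only the changed module gets recompiled; everything else in the process keeps running, which is the whole pitch.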

Pavlov fucked around with this message at 15:49 on Sep 8, 2014
