RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:

Obdicut posted:

I hadn't made the John Frum connection. But that's exactly what this is. It's cargo-cult thinking. I mean, he doesn't even understand the incredible straight-forward example of the tragedy of the commons. It's language-as-cargo-cult.

Cargo Cult was the theme for last year's burn =)



edit: thank you for the new av and sparing the world from contemplating the ways jaron lanier blows

RealityApologist fucked around with this message at 17:08 on Apr 4, 2014


Vincent Van Goatse
Nov 8, 2006

Enjoy every sandwich.

Smellrose

RealityApologist posted:

Cargo Cult was the theme for last year's burn =)

And guess what? It's still true this year.

RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:

ALL-PRO SEXMAN posted:

And guess what? It's still true this year.

New year, new theme.

Wanamingo
Feb 22, 2008

by FactsAreUseless
The person meant that you're still cargo culting.

Or something like that, I don't know. They weren't being literal.

Who What Now
Sep 10, 2006

by Azathoth

Wanamingo posted:

The person meant that you're still cargo culting.

Or something like that, I don't know. They weren't being literal.

I'm pretty sure the joke is that RA is still a Cargo Cultist. Which he undeniably is.

Vincent Van Goatse
Nov 8, 2006

Enjoy every sandwich.

Smellrose
You're right. :thejoke: is that Eripsa is, was, and shall always be a cargo cultist. He is the one Richard Feynman warned us about.

bend it like baked ham
Feb 16, 2009

Fries.
Eripsa, do you have anything published anywhere besides blogs?

Good Citizen
Aug 12, 2008

trump trump trump trump trump trump trump trump trump trump

RealityApologist posted:

SedanChair gets mad props for being fucking awesome during the whole hangout. I owe you a beer or acceptable alternative, which you may choose to enjoy far away from me.

Here's the viddie

https://www.youtube.com/watch?v=q1EA8_taJmI

Watched up to the point where dude called you out for having dumb half-formed ideas and you started babbling about signals and ants and world of warcraft

What the fuck is wrong with your brain?

DoctorDilettante
May 16, 2013

Good Citizen posted:

Watched up to the point where dude called you out for having dumb half-formed ideas and you started babbling about signals and ants and world of warcraft

What the fuck is wrong with your brain?

It's worth a full watch, if only to see all of us skeptics engaging with each other. The quality of the comments really was quite high.

Good Citizen
Aug 12, 2008

trump trump trump trump trump trump trump trump trump trump

DoctorDilettante posted:

It's worth a full watch, if only to see all of us skeptics engaging with each other. The quality of the comments really was quite high.

Your glasses look bad

Wanamingo
Feb 22, 2008

by FactsAreUseless
It's true. RA should've gone with a better thumbnail.

Obdicut
May 15, 2012

"What election?"

Good Citizen posted:

Watched up to the point where dude called you out for having dumb half-formed ideas and you started babbling about signals and ants and world of warcraft

What the fuck is wrong with your brain?

My god, I am glad I didn't attend. That was torturous: basically a replay of Eripsa's dodges in this thread, all the unanswered criticisms, and with the latest mutation of this idea as not a solution but just a different way to get data--which is a fucking moronic reason, because this system would require such a huge amount of data and, through creating black markets all over the place, would actually obfuscate data. That was painful to watch, and made me feel sad for Eripsa all over again. Especially after he failed to even use 'tragedy of the commons' correctly, I think he really actually doesn't understand when he's wrong. He's not just being slimy; there's an actual fault in the way he thinks, so that he somehow can look at a conversation where he's clearly wrong, or contradicting himself, and it just doesn't penetrate. Like the linear conversation, he just really still doesn't understand that changing the equation multiple times, declaring it linear, having it disproved, actually means that he was wrong. It's like a dude saying "Was this your card? No? This one? No? This one? No? This one? Yes? I'm psychic!"
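As an aside, the linearity dispute is the kind of thing that can be settled mechanically rather than argued about. Here is a minimal sketch in Python (the language of the thread's simulator) that spot-checks additivity and homogeneity on random inputs; the function f below is a hypothetical stand-in, not the actual disputed Strangecoin equation:

code:
import random

def f(x):
    # Hypothetical stand-in for the disputed transaction equation;
    # swap in the real formula to test it.
    return 3.0 * x + 2.0  # affine, not linear: f(0) != 0

def looks_linear(f, trials=1000, tol=1e-9):
    # Spot-check additivity f(x+y) == f(x) + f(y) and
    # homogeneity f(a*x) == a * f(x) on random samples.
    for _ in range(trials):
        x, y, a = (random.uniform(-100, 100) for _ in range(3))
        if abs(f(x + y) - (f(x) + f(y))) > tol:
            return False
        if abs(f(a * x) - a * f(x)) > tol:
            return False
    return True

print(looks_linear(f))  # False: the +2.0 offset breaks additivity

Passing such a spot-check doesn't prove linearity, but a single failing case disproves it, which is the situation Obdicut is describing.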

Forums Barber
Jan 5, 2011
My gut tells me that yes, it is a failing due to ignorance rather than malice. This is backed up by the anecdotal evidence that the thing inside my chest that makes me go "this is too sad to watch and I am embarrassed for this person" twigged so hard I couldn't get through more than a minute of video without my mouse hand spasming to close the window. I tried a couple of times, too. In text, this stuff is pretty funny, but to see video of someone humanizes them, and then it's not as funny any more.

Bip Roberts
Mar 29, 2005
Hey, but remember: Strangecoin will allow everyone to know exactly where they are in the social strata, because no one has figured out how to look at their fucking bank account balance before now.

DoctorDilettante
May 16, 2013

moebius2778 posted:

...I always thought it was the theory and programming language people on top of the smart people in CS heap.

Who do you think is working on serious AI projects? All the AI projects I've encountered (both in academia and in industry) have been collaborations of the best computing theorists, specialized programmers, and mathematical decision theorists. Working on AI requires a lot of specialty knowledge, but the problem is so big that the people working on it, in my experience, tend to have a better-than-average grounding in lots of areas outside their focus as well.

moebius2778 posted:

Is anyone still trying to solve the "make computers intelligent" problem?

The closest thing I can think of to an example of high-profile recent progress toward a general thinking machine in Turing's sense is IBM's Watson, the Jeopardy!-playing computer. If I recall, IBM is now training a similarly-structured machine to do basic diagnostic triage in medicine and make recommendations to human practitioners. That's not true general intelligence (Watson doesn't pass the Turing test in a general context) but it incorporates a lot of the hallmarks of strong AI--perhaps most importantly machine learning and natural language processing. Bluebrain (the brain-simulating project you mentioned) is interesting too, but I suspect that they'll be outpaced before they get too far toward simulated human intelligence.

moebius2778 posted:


I mean, there's Cyc (but unless you're expecting emergent AI - and I don't think they are - you'll just get software that can reason across a broad range of domains), some rather off-the-wall robotics that I've heard about, and some brain-simulating attempts (using massively simplified models - they're probably not going to be making too much progress for a while), but that's about it. Everyone else seems to have moved on to the "solve problems" phase.


The "problem solving" approach certainly strikes me as the more reasonable one. We already know how to make lots more generally intelligent organisms, we do it all the time, and it feels really good. It seems far more sensible to pour our limited research time and money into job-specific machines. Of course, some jobs are going to require more inroads toward general intelligence than others will; if we want (for instance) a robot that can act as a caregiver or companion to a human being, its intelligence will need to be far more "human-like" (at least in appearance) than Watson's or a Google car's.

moebius2778 posted:

I know some of the older/earlier AI people think the focus on solving specific problems has delayed the ... let's say general intelligence approach, and that we'd have AI by now if everyone wasn't working on their own separate problems, but I didn't think there was all that much effort focused on real AI (whatever that means).

I think this is basically correct, with the possible exception of the "we'd have strong AI by now" point.

moebius2778 posted:


AI completeness these days gets defined as being able to do things like solve pronoun anaphora/coreference resolution (basically, being able to reliably identify the previously mentioned entity that a given pronoun is referring to).

More broadly, one of the big hurdles is reliable natural language processing. Again, Watson is a really good proof-of-concept here. It did remarkably well in Jeopardy!, but it still made the kind of hilariously basic language interpretation mistakes (many of them related to anaphora, which plays a big role in some Jeopardy! wordplay) that even a very young human child wouldn't make. There's still a very long way to go, to be sure.
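To make the anaphora point concrete, here is a deliberately naive sketch of the kind of heuristic that fails: bind each pronoun to the most recent preceding noun. The word lists are invented for illustration and the test sentence is a classic Winograd-style example; real coreference systems are far more sophisticated, and still struggle:

code:
# Toy anaphora resolver: bind each pronoun to the most recent noun seen.
# This recency heuristic is exactly the kind of shortcut that breaks on
# real text, which is why coreference resolution stays hard.
NOUNS = {"trophy", "suitcase"}
PRONOUNS = {"it", "he", "she", "they"}

def resolve(tokens):
    last_noun = None
    bindings = []
    for tok in tokens:
        word = tok.lower().strip(".,")
        if word in NOUNS:
            last_noun = word
        elif word in PRONOUNS:
            bindings.append((word, last_noun))
    return bindings

# A human knows "it" means the trophy; recency says "suitcase".
sent = "The trophy didn't fit in the suitcase because it was too big".split()
print(resolve(sent))  # [('it', 'suitcase')] -- confidently wrong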

moebius2778 posted:


I think AI researchers might be interested in a definition/description of a mind, but more out of curiosity than for any use it could have for measuring progress/setting goals.

I think they're interested in setting concrete goalposts, which is going to require at least a degree of philosophical analysis. The people who are interested in generally intelligent strong-AI tend to care a lot more about that stuff than the domain-specific people, though, for obvious reasons. Still, I expect that at some point we'll get digital systems that have human-like intelligence, if only because researchers are going to want to see if they can meet such a monumental historical target. That's just speculation, though.

Good Citizen posted:

Your glasses look rad

Fixed that for you

DoctorDilettante
May 16, 2013
Oops double post.

Who What Now
Sep 10, 2006

by Azathoth

Obdicut posted:

Like the linear conversation, he just really still doesn't understand that changing the equation multiple times, declaring it linear, having it disproved, actually means that he was wrong. It's like a dude saying "Was this your card? No? This one? No? This one? No? This one? Yes? I'm psychic!"

I think this hits his actions square on the head, and is a great descriptor. This, and all the rest of Eripsa/RA's threads, have been a series of bait-and-switches where he presents ideas and then switches out the focus for something that is similar but just different enough that he can start the whole conversation over. We can even see that tactic in this thread alone: when he started to be hounded over how Strangecoin doesn't work as a currency, suddenly it was no longer supposed to be an economic system at all. But in the last day or so people have found some legitimate, if very narrow, uses of the system as a currency. Then Eripsa is all for treating strangecoin as currency again. There is no consistency, and so there is no way for critics to approach it.

Forums Barber
Jan 5, 2011
I would say it's consistent to the extent that he is tropic towards sources of attention, but that's just the human condition.

moebius2778
May 3, 2013

DoctorDilettante posted:

Who do you think is working on serious AI projects? All the AI projects I've encountered (both in academia and in industry) have been collaborations of the best computing theorists, specialized programmers, and mathematical decision theorists. Working on AI requires a lot of specialty knowledge, but the problem is so big that the people working on it, in my experience, tend to have a better-than-average grounding in lots of areas outside their focus as well.

...I kinda assumed it was the AI people (NLP, ML, vision, robotics, reasoning systems, control systems, ... may be forgetting some sub-disciplines in there). I'm kinda used to the theorists being basically mathematicians working in the computation field (not that I ever spent that much time actually paying attention to the theorists) and the programming language people discovering what you can do with the underlying semantics of programming languages. Honestly, I'm used to AI project collaborations being collaborations between people in different AI sub-disciplines.

DoctorDilettante posted:

I think this is basically correct, with the possible exception of the "we'd have strong AI by now" point.

I admit, I was mostly thinking of Minsky when I typed that.

DoctorDilettante posted:

More broadly, one of the big hurdles is reliable natural language processing. Again, Watson is a really good proof-of-concept here. It did remarkably well in Jeopardy!, but it still made the kind of hilariously basic language interpretation mistakes (many of them related to anaphora, which plays a big role in some Jeopardy! wordplay) that even a very young human child wouldn't make. There's still a very long way to go, to be sure.

I'd make a glib comment that maybe Jelinek can stop firing linguists now, but first, apparently he's dead, and second, for all I know that particular pendulum has started swinging back anyways.

DoctorDilettante posted:

I think they're interested in setting concrete goalposts, which is going to require at least a degree of philosophical analysis. The people who are interested in generally intelligent strong-AI tend to care a lot more about that stuff than the domain-specific people, though, for obvious reasons. Still, I expect that at some point we'll get digital systems that have human-like intelligence, if only because researchers are going to want to see if they can meet such a monumental historical target. That's just speculation, though.

I think that that depends on how large the remaining strong AI community is. I can understand the desire for concrete goalposts - from what I remember, that's what's supposed to have started all the task-specific AI projects in the first place. Otherwise you're left with crap like comparing the number of neurons in a human brain and the number of gates in a CPU and wondering what the hell that's supposed to mean. (Yeah, yeah, it's supposed to be a comparison of their computational abilities - this assumes that neurons and gates are of comparable power. Not going to place any bets on that.)

moebius2778 fucked around with this message at 23:34 on Apr 4, 2014

DoctorDilettante
May 16, 2013

moebius2778 posted:

...I kinda assumed it was the AI people (NLP, ML, vision, robotics, reasoning systems, control systems, ... may be forgetting some sub-disciplines in there). I'm kinda used to the theorists being basically mathematicians working in the computation field (not that I ever spent that much time actually paying attention to the theorists) and the programming language people discovering what you can do with the underlying semantics of programming languages. Honestly, I'm used to AI project collaborations being collaborations between people in different AI sub-disciplines.

Getting into a fight about how to divide up subdisciplines is silly, so let's drop this. My point was that people working in AI are some of the smartest computer scientists I've ever met, irrespective of what role they're playing. I agree that computation theory advances are more often made by mathematicians than by practicing computer scientists. If I had it to do over again, I'd get a degree in applied math for sure.

moebius2778 posted:


I admit, I was mostly thinking of Minsky when I typed that.


I've wondered before about why old-timers in some fields are frequently so much more optimistic about the potential speed of progress than young pups tend to be. Intuitively, you'd think it would be the other way around; young people in general tend to think that standing problems can't possibly be that hard to solve, and usually the more you understand a field, the less optimistic your projections for rapid progress become. The Minsky-era computer scientists are often different, though, and so are some of the physicists from the same time. I wonder if it has to do with the fact that the old-timers in those disciplines lived through (and contributed to) an incredible paradigm shift that sparked decades of remarkable progress. When you've seen things like that, I can understand how you might think that the final goal is just over the horizon, if only everyone would focus. Perhaps that's also why great physicists tend to go a bit batty in their old age (see, e.g., Penrose, Dyson, Gell-Mann, &c.).

moebius2778 posted:


I'd make a glib comment that maybe Jelinek can stop firing linguists now, but first, apparently he's dead,


That's a terrible reason to avoid mocking someone.

moebius2778 posted:


and second, for all I know that particular pendulum has started swinging back anyways.


I think it has. One of my dissertation committee members was a machine learning guy from Carnegie Mellon, and he told me that the people he worked with were paying a lot more attention to other disciplines, including linguistics. I think AI researchers have realized that reinventing everything explicitly (as in the case of the what-the-fuck-are-you-people-thinking Cyc project that was already mentioned) is stupid. Most big AI teams these days include people with a really wide variety of backgrounds. Sometimes they even have philosophers. That's probably symptomatic of a more general shift toward interdisciplinary collaboration these days.

moebius2778 posted:



I think that that depends on how large the remaining strong AI community is.


Good point. I'm not sure how many people would identify with that label these days, or even know what it means. A lot of people also confuse the looser (and more reasonable) way that "strong AI" gets used in practice in discussions among AI professionals with the original term coined by John Searle, which is silly and not terribly helpful.

moebius2778 posted:


I can understand the desire for concrete goalposts - from what I remember, that's what's supposed to have started all the task-specific AI projects in the first place.


Heck, it was there even in Turing's old paper. The whole point he wanted to make was that a question as vague as "can machines think" is so nebulous as to lack meaning. His central suggestion was that we operationalize our definition of intelligence with concrete tests. The linguistic imitation game was just an example of one way of doing that.

moebius2778 posted:


Otherwise you're left with crap like comparing the number of neurons in a human brain and the number of gates in a CPU and wondering what the hell that's supposed to mean. (Yeah, yeah, it's supposed to be a comparison of their computational abilities - this assumes that neurons and gates are of comparable power. Not going to place any bets on that.)

Right, that's clearly silly. It's obvious from neuroscience itself that the number of neurons in a brain is far less important than the arrangement between them; if that weren't the case, then big brains would be smarter brains. Most of the computation in organic brains doesn't take place in neurons (or pair-wise connections between neurons), but rather in more abstract functionally-differentiated clusters. As Obdicut very rightly pointed out, if we could figure out exactly how our own brains work, that would be a huge leap toward knowing how to build strong AI.
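For what it's worth, the raw count comparison really is as uninformative as it sounds. A back-of-envelope sketch, using rough order-of-magnitude public estimates (circa 2014) rather than precise figures:

code:
# Order-of-magnitude comparison; all numbers are rough public estimates.
neurons_human_brain = 8.6e10   # ~86 billion neurons
synapses_human_brain = 1.0e14  # ~100 trillion synapses
transistors_2014_cpu = 1.4e9   # ~1.4 billion, a typical desktop CPU of the era

print(f"neurons / transistors:  {neurons_human_brain / transistors_2014_cpu:.0f}x")
print(f"synapses / transistors: {synapses_human_brain / transistors_2014_cpu:.0f}x")
# Roughly 61x and 71429x -- and the ratio says nothing about whether a
# neuron and a gate are even comparable units of computation, which is
# the point both posters are making.

The exercise shows why the comparison is empty: pick neurons and you get one answer, pick synapses and you get another four orders of magnitude away, and neither maps onto gates in any principled way.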

woke wedding drone
Jun 1, 2003

by exmarx
Fun Shoe

Forums Barber posted:

My gut tells me that yes, it is a failing due to ignorance rather than malice. This is backed up by the anecdotal evidence that the thing inside my chest that makes me go "this is too sad to watch and I am embarrassed for this person" twigged so hard I couldn't get through more than a minute of video without my mouse hand spasming to close the window. I tried a couple of times, too. In text, this stuff is pretty funny, but to see video of someone humanizes them, and then it's not as funny any more.

I had really intended to be a bit more mocking in my tone but yeah, in person I can't really manage to be other than polite. Eripsa's a nice guy and he is ambitious, but there were just too many times where I utterly punctured any reason to continue the discussion, you could see the light dawning in his eyes as some part of him recognized it, then the meaningless language would forcefully seize hold of him again. It actually struck me as an almost involuntary process so yeah, I felt some obligation to try to bring him back down to earth rather than mocking him :sigh:

More than anything it reminds me of one time when I was visiting my schizophrenic cousin and he announced to me that he now had the ability to speak any language. "Ok, say something in Romanian then," I said. He produced a string of glossolalia. "Now what's that in Korean?" More gibberish that sounded exactly the same. I don't know if he knew he was bullshitting, but I know that Eripsa doesn't know. He says that he knows he's an incoherent crank...but he keeps doing it.

Best Friends
Nov 4, 2011

Eripsa pretty clearly wants more than anything to be a huge world changing genius and he isn't, and that sucks, but most of us learn we can't be the greatest of all time in late childhood/early teens and then move on with our (often perfectly happy and productive) lives.

I agree there are times he seems to notice, hey, maybe I'm not actually that smart, but then one post later he's back to assuming that perfectly valid criticisms come from people who just cannot possibly comprehend someone of his intelligence.

BernieLomax
May 29, 2002

Who What Now posted:

If you don't know python then what would running the code tell you?

Because programming isn't that hard. Besides that, CPython compiles to bytecode and interprets it, while PyPy is a JIT compiler. I've been following this thread, and while you demand so much of this man, you certainly are really smug while making childish mistakes. Are you going to be sorry and admit you were lying now?
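To see the distinction concretely: CPython compiles source to bytecode and then interprets it, which the standard dis module will show you, while PyPy runs the same source but JIT-compiles hot paths to machine code. A minimal sketch (the pay function is just an invented example):

code:
import dis

def pay(balance, amount):
    # Any function is compiled to bytecode before CPython interprets it.
    return balance - amount

dis.dis(pay)
# Prints instructions along the lines of LOAD_FAST balance, LOAD_FAST
# amount, BINARY_SUBTRACT, RETURN_VALUE. PyPy executes the same source
# but JIT-compiles frequently run paths to native machine code instead.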

JawnV6 posted:

Ok I don't know how all these newfangled techmologies work but here's a thing https://github.com/jawnv6/strangesim

Back into 3 files, does Support transactions. Duration isn't handled right, check the last commit note for details.

Thanks, awesome! :)

Best Friends posted:

Eripsa pretty clearly wants more than anything to be a huge world changing genius and he isn't, and that sucks, but most of us learn we can't be the greatest of all time in late childhood/early teens and then move on with our (often perfectly happy and productive) lives.

I agree there are times he seems to notice, hey, maybe I'm not actually that smart, but then one post later he's back to assuming that perfectly valid criticisms come from people who just cannot possibly comprehend someone of his intelligence.

I feel some are too eager to find malice in Eripsa, often through semantic games. But what's more striking is this tendency to try to bring him down to earth. I doubt Eripsa is going to solve any huge issues in the world, but he certainly has the right attitude, and I wish more people were just as eager to develop radical solutions to big problems. I've read so many articles from so-called radical economists in cool magazines, yet no one is even close to suggesting any bigger changes to the economic system than those that have happened in the last 50 years, despite the fact that we are at a point in human history where we are required to make a change bigger than any humanity has ever made (e.g. because of globalization or global warming). He isn't Einstein or anything, but I am glad that he is at least touching on ideas at the right scale.

I have never seen any radical ideas from those trying to tear him down, and that's depressing. And you're saying it right out: he shouldn't try to have radical ideas. Reading your post, I understand why people keep tearing down someone like Chomsky: because he's too radical, he has to be perfect, a higher standard than the one applied to people who have small ideas and are allowed to make mistakes. I can like a wrong but novel idea, as long as the intentions are good, because then we have something to compare against when a better idea comes around. But some of you just seem to be waiting around for some magic Einstein to come along and solve all of our problems.

BernieLomax fucked around with this message at 03:40 on Apr 5, 2014

moebius2778
May 3, 2013

DoctorDilettante posted:

I've wondered before about why old-timers in some fields are frequently so much more optimistic about the potential speed of progress than young pups tend to be. Intuitively, you'd think it would be the other way around; young people in general tend to think that standing problems can't possibly be that hard to solve, and usually the more you understand a field, the less optimistic your projections for rapid progress become. The Minsky-era computer scientists are often different, though, and so are some of the physicists from the same time. I wonder if it has to do with the fact that the old-timers in those disciplines lived through (and contributed to) an incredible paradigm shift that sparked decades of remarkable progress. When you've seen things like that, I can understand how you might think that the final goal is just over the horizon, if only everyone would focus. Perhaps that's also why great physicists tend to go a bit batty in their old age (see, e.g., Penrose, Dyson, Gell-Mann, &c.).

Flipping the question around, I think for the young people in the AI field, it's that all of the task-specific AI problems left are hard, and these problems were chosen as stepping stones, really, on the path to strong AI. I get the feeling that the original intention for the task-specific problems was, "okay, we're not making great progress on AI, so let's define a bunch of doable concrete goals, finish those up, learn more about intelligence, and get back to making strong AI with a much better grasp of what intelligence is." And it turns out the doable concrete goals were really, really hard.

For the old timers, my guess would be... okay, in NLP, it seems like there are a couple of broad, generally applicable rules, and then a lot of special cases. (Say, Zipf's Law again: there are a couple of things that show up everywhere, and then a lot of things that don't show up in all that many places, be they POS tags, grammar rules, or what have you.) So in the beginning, as you get the broad rules, you can make really good progress. But the closer you try to get to perfection, the harder it gets. 85-90% accuracy on a parser is pretty good these days, I think. But it wouldn't surprise me to find out the remaining 10% is harder to achieve than the first 90%. So the old timers took the first steps, made great progress, and, I would guess, expected this to continue. In practice, not so much.
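Zipf's Law is easy to see for yourself: token frequency falls off roughly as 1/rank, so a handful of types dominates the corpus and everything else lives in a long tail. A minimal sketch with an invented toy corpus (any real corpus, e.g. Gigaword, shows the same shape far more dramatically):

code:
from collections import Counter

# Invented toy corpus; real corpora show the same head-heavy skew.
text = ("the cat sat on the mat and the dog sat by the door "
        "the cat saw the dog and the dog saw the cat")
counts = Counter(text.split())
total = sum(counts.values())

for rank, (word, n) in enumerate(counts.most_common(), start=1):
    print(f"{rank:>2}  {word:<4} {n:>2}  ({n/total:.0%})")
# Rank 1 ("the") alone covers about a third of all tokens, while most
# types occur once or twice. The same skew is why the first 90% of
# parser accuracy comes cheap and the last 10% is brutal: the hard
# cases live in the tail.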

DoctorDilettante posted:

I think it has. One of my dissertation committee members was a machine learning guy from Carnegie Mellon, and he told me that the people he worked with were paying a lot more attention to other disciplines, including linguistics. I think AI researchers have realized that reinventing everything explicitly (as in the case of the what-the-fuck-are-you-people-thinking Cyc project that was already mentioned) is stupid. Most big AI teams these days include people with a really wide variety of backgrounds. Sometimes they even have philosophers. That's probably symptomatic of a more general shift toward interdisciplinary collaboration these days.

Yeah, that doesn't surprise me. If stuff like the Gigaword corpus isn't enough data for a statistical approach to work perfectly, you're probably reaching the limits of what you can accomplish with statistics. To be fair to the statistical parsing people, NLP did start with linguistics, and I think they had a valid criticism in the form of, "You're trying to get a computer to process language the same way humans think about language, but computer processing and human thinking aren't the same - you should be trying to handle NLP in a way that plays to a computer's strengths: the ability to process large amounts of data in a very simple manner." Of course, deciding to discard all linguistics probably wasn't the greatest idea ever.

DoctorDilettante posted:

Good point. I'm not sure how many people would identify with that label these days, or even know what it means. A lot of people also confuse the looser (and more reasonable) way that "strong AI" gets used in practice in discussions among AI professionals with the original term coined by John Searle, which is silly and not terribly helpful.

Heck, it was there even in Turing's old paper. The whole point he wanted to make was that a question as vague as "can machines think" is so nebulous as to lack meaning. His central suggestion was that we operationalize our definition of intelligence with concrete tests. The linguistic imitation game was just an example of one way of doing that.

To paraphrase Dijkstra - "The Turing Test attempts to determine if a computer can think, a question about as relevant as asking if a submarine can swim." (Not necessarily fair to Turing, but I think it does encapsulate how AI researchers' thinking about AI has changed.)

I think you're correct in that a lot of people in AI think being able to create a strong AI would be a really, really neat thing to be able to do. But at the end of the day, I think they're going to focus on the task specific problems which (potentially) have applications (so you can tell how well you're doing at solving the problem), but still really impinge upon domains that previously would have been considered solely the province of human intelligence.

DoctorDilettante
May 16, 2013

Cowman posted:

I'm ADHD and can't pay attention very well, does this mean I'm rich or poor in an attention economy?

You're diversified.

RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:

Best Friends posted:

Eripsa pretty clearly wants more than anything to be a huge world changing genius and he isn't, and that sucks, but most of us learn we can't be the greatest of all time in late childhood/early teens and then move on with our (often perfectly happy and productive) lives.

Guys, I'm well aware that I'm not a world-changing genius. There are plenty of people who are much smarter, more talented, better trained and equipped, with more experience and charisma and ambition than I'll ever have. I'm just a guy with a blog and an idea. I never claimed to be anything but. I make mistakes. I made mistakes in the discussion of linearity, and in plenty of other places as well. I'm ignorant of huge chunks of human knowledge, much of which is required background for discussing the idea I have. And these ideas aren't mine anyway: every one of them has been expressed more clearly and usefully, and mulled over, by our brightest minds for generations. This one just happens to drive me to write, but that doesn't make me or it special in any way. Every one of you is utterly and without qualification correct to point these things out.

It would take me decades of work to amass the wisdom and experience and education required to approach this project in a meaningful way. As someone else said, Strangecoin would take a book-length treatment to turn into a complete thought, and the Attention Economy would fill a library of research if done properly. Only a world-changing genius could hope to complete such a project on their own--and then only with the support of an academic institution and grants and researchers. I have utterly none of that talent or support. It would be sheer insanity for me, with my talents and means, to sit in my office on my own and bring such a project to fruition.

But that's not what I'm doing. I'm writing this shit up for a comedy forum. The amassed wisdom, intelligence, and ability in this forum makes for some great and occasionally hilarious analysis, particularly in the areas that directly pertain to the attention economy. The education I'm receiving from this discussion has been better than a year's worth of MOOCing around trying to figure things out on my own; you aren't just telling me what I need to learn, you're helping me understand why it's important to learn in the first place. I've also been writing on this forum since 2001, and I feel comfortable with my voice (if not my writing) in this format. I feel comfortable enough here that I'm willing to ramp up the crazy a few notches and make claims or connections that wouldn't fly in a more professional context, but that I feel perfectly fine making for basically memetic-experimentation reasons in this context. I'm fairly secure in my confidence in the material I can defend professionally, the material I can defend in my voice and persona on this forum, the material I'm unwilling to defend in any circumstances, and my ability to distinguish between these.

My occasionally crazy writing gets a lot of hate thrown at me, but it also generates enough interest and activity within the community that some substantive discussion actually does take place in its wake. Unifying philosophical projects throw a stake in the ground around which lots of theoretical frameworks can organize. I've not mastered the intellectual corpus required to coordinate all the discussions necessary to talk about the Attention Economy directly, but I'm trying to write in a way that pulls in the perspective of those who have, so they can correct, elaborate, and develop in the many places I leave off. I really do think that lots of different practices, both in science and in culture, are converging on a unifying framework not unlike the Attention Economy, and I don't think enough public discussion has taken place about this framework, whatever it is, that we've been collectively building for the past few years. There are attempts coming out all the time and they are all pretty lame. Attention economy is lame too, perhaps, but it's got enough traction and history on this forum that it only takes a spark to get the discussion going. All I want is for that discussion to take place. It's not because it's a vindication of my brilliant theory, or because I want you to do my homework. In some ways this is all of our homework, and I'm just making sure we do it. That doesn't mean I deserve the credit for anything we do because of it, and my goal is not the potential fame or wealth that would accrue from its success. Attacking my character or sanity, or trying to figure out what the hell is wrong with my brain (plenty, believe me), and me trying feebly to defend against such attacks, is really the most boring part of these threads, and I wish it didn't have to happen, but I don't know how to stop it except by just shutting up and doing science. I'll be doing that soon enough.

I just want to build the damn thing so the future can fucking get here already, because it isn't here yet and things are pretty shit, and you all have a hell of a lot better fucking chance of making it happen than me.

RealityApologist fucked around with this message at 15:59 on Apr 5, 2014

Muscle Tracer
Feb 23, 2007

Medals only weigh one down.

Why do any of you give a shit what this chucklefuck "thinks" about any of these subjects? So much effort has been put into disillusioning an individual with the logical capacity of an ornery toddler, to literally no end, over and over and over again. Please, for your own good, find something better to do with yourselves. Just Stop Posting.

[i repent, i am become The Problem]

Muscle Tracer fucked around with this message at 21:59 on May 1, 2014

CheesyDog
Jul 4, 2007

by FactsAreUseless

Muscle Tracer posted:

Why do any of you give a shit what this chucklefuck "thinks" about any of these subjects? So much effort has been put into disillusioning an individual with the logical capacity of an ornery toddler, to literally no end, over and over and over again. Please, for your own good, find something better to do with yourselves. Just Stop Posting.

It's funny to see what stupid thing comes out next

Gerund
Sep 12, 2007

He push a man


RealityApologist posted:

I just want to build the damn thing so the future can fucking get here already, because it isn't here yet and things are pretty shit, and you all have a hell of a lot better fucking chance of making it happen than me.

Step one: please hop off the futurist/singularity theology and realize that if/when 'the future' gets here, it won't be any better for you or for those you care about, and that for the vast sea of those more unprivileged than you, anything approaching 'the future' would be held back from them to begin with.

You simply must be able to consider a world outside the warm fuzzy blanket of academia, where everyone is young and healthy and only ever-so-slightly uncomfortable (even if rapidly gaining debt). There are people that starve every day; when confronted with this very fact of life by SedanChair, you became an uncomfortable child swaying back and forth in your chair with a gasping gob. Is it because children screaming in pain from unfed bellies are uncomfortable for you to contemplate? Do the sick and dying not match your perception of a post-Cyborg world? Is there any real ethic to your beliefs, or have you pushed the entirety of your moral values into 'the future', where everything will be the shiny chrome and neon text they promised us in the 90s?

You're the person that William F. Buckley had to invent when he used "Don't immanentize the eschaton!" as a slogan to destroy the Great Society of the post-war years.

Nintendo Kid
Aug 4, 2011

by Smythe
Point of information: all economies proposed by eripsa would be detrimental to reaching the singularity.

JawnV6
Jul 4, 2004

So hot ...
Making the cap flexible per-user per-step has the immediate effect of making the system much harder to reason about. Now the task of keeping the pool of strangecoins constant is much more difficult. It's not clear if you're abandoning that constraint or just not considering the implications the flexible cap would have. I'm partial to the notion that making the system more difficult to reason about is considered a good all by itself, a means to an end in terms of the discussion about it.

Without that diversion, I'll go ahead and ask again which of the three relations of total S to the cap and user count applies. If you're still operating under the assumptions that 1) TUA transactions are meaningful and aren't just the bookkeeping of boundary condition policies and 2) the total supply of S is fixed, it's implied that S < U*C. This necessitates consideration of the condition where TUA runs out. I can't help but notice that this is never considered in the spec. So, either TUA is glorified bookkeeping of boundary condition protocol and useless as pressure on the system or the spec has an incredible gap.

I want an answer to that. The stated rules, independent of all transaction types in the system and any conceivable user interactions, mean TUA can run out, and the protocol must specify what happens then. Either S >= U*C and the TUA can be safely ignored as an entity, or the network is not zero-sum. A choice must be made between those two propositions to continue forward.

I don't like erasing the cap as a solution either. The point of trying to actually build a simulator is to enumerate the potential systems allowed by the spec and figure out which ones can be built, which would be useful, and allow for simple experiments on them. When I ask binary questions, I'm dividing the universe of simulators in half and asking which one's more interesting. When the answer to a yes/no question, intended to winnow the state space down to something manageable, is "mu, here's 18 other possible conditions," it's actively slowing down the simulator's progress. Instead of the space being reduced by 50%, it's exploded exponentially. It's quite counterproductive if you genuinely believe the simulator can help this endeavor.


Pushed the latest up to github. The system now handles Payment, Support, and Endorsement. It can be considered an accurate model of the system with the conditions that nobody is close to the cap and there are no transactions between TUA and users. But I repeat myself. When a User's balance drops negative, the system terminates. This is basically calling for help when a TUA transaction would be required.
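To make the dichotomy concrete, here is a minimal sketch of the bookkeeping in question. It is illustrative only, not JawnV6's actual strangesim code: the toy pay transaction and the starting split are invented, but the constraint structure follows the post above. With a fixed total supply S, a per-account cap C, and U users, S < U*C means the TUA balance is a real quantity that can run dry, and the spec is silent about what happens then:

code:
# Minimal sketch of the zero-sum bookkeeping described above.
# Illustrative only; not the actual strangesim implementation.
U, C, S = 10, 100, 500      # users, per-account cap, fixed total supply
assert S < U * C            # the interesting regime: TUA can run dry

balances = [S // U] * U     # users start with an equal share (invented)
tua = S - sum(balances)     # the TUA holds whatever users don't

def pay(src, dst, amount):
    # Toy transaction; anything pushed past the cap spills into the TUA.
    global tua
    balances[src] -= amount
    spill = max(0, balances[dst] + amount - C)
    balances[dst] = min(balances[dst] + amount, C)
    tua += spill
    # Zero-sum invariant: the total supply never changes.
    assert tua + sum(balances) == S
    if tua < 0 or min(balances) < 0:
        raise SystemExit("TUA or a user ran dry; the spec is silent here")

Run a few pay calls and a balance goes negative exactly as the last paragraph describes; the moment the simulator has to bail out is the gap in the spec.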

RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:
I didn't mean to suggest more features and blow the project up into anything more than it is. The possibility of variable account caps was mentioned earlier in the thread, and I thought it might address a problem you raised. If it's unreasonable, forget it.

Let me think more about TUA running out.

RealityApologist fucked around with this message at 05:52 on Apr 5, 2014

Xelkelvos
Dec 19, 2012
One thing I've always wondered about "Post-Scarcity" scenarios: isn't time still a limiting factor in the production of goods, and can't it therefore be considered scarce?

Nintendo Kid
Aug 4, 2011

by Smythe

Xelkelvos posted:

One thing I've always wondered about "Post-Scarcity" scenarios: isn't time still a limiting factor in the production of goods, and can't it therefore be considered scarce?

If you can produce goods constantly, time isn't very relevant. Especially if you can produce much faster than they can be consumed or even demanded.

Who What Now
Sep 10, 2006

by Azathoth
Wait, is Eripsa seriously trying to pull a puppetmaster defense with his claim that he writes in a deliberately obtuse and asinine manner in order to generate more interest in his threads? Is that actually happening?

Obdicut
May 15, 2012

"What election?"

Who What Now posted:

Wait, is Eripsa seriously trying to pull a puppetmaster defense with his claim that he writes in a deliberately obtuse and asinine manner in order to generate more interest in his threads? Is that actually happening?

Yep. Things like not understanding that his equations were linear when he claimed they weren't, or his complete misuse of the 'tragedy of the commons'--these weren't mistakes; he can't accept that. So, no matter how transparent and unbelievable the rationale is to everyone else, he needs it for some reason. This has been his pattern across threads, as well as emotionally swinging back and forth between calling everyone here assholes and morons and unctuously claiming that he's just a humble profferer of ideas to us brilliant shimmering ones who can immanentize his eschaton.

Zodium
Jun 19, 2004

Who What Now posted:

Wait, is Eripsa seriously trying to pull a puppetmaster defense with his claim that he writes in a deliberately obtuse and asinine manner in order to generate more interest in his threads? Is that actually happening?

He's not saying he's being deliberately obtuse and asinine, only that it doesn't matter if he is. It's more a case of a million monkeys with a million typewriters, I think. The reasoning seems to go that even though the topic is admittedly not suited to the medium, is neither completely nor consistently defined, and meaningful discussion of the topic is far beyond the level of his, probably any participant's, and arguably any living person's current understanding, as long as we talk about "it," he/we will nonetheless improve our understanding by something like procedural learning. This has not occurred and will not occur (I see very little substantive change in his ideas between attention economy and Strangecoin, for instance) because his approach is not principled: when an error is found, we have no way to know how or why or even where it occurred. All of the criticisms of Strangecoin, like all the criticisms of the attention economy, amount to "the idea is not complete/consistent/both," because that's the only thing anyone can say.

RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:

Zodium posted:

I see very little substantive change in his ideas between attention economy and Strangecoin, for instance

I think you are mostly right, although given the way you put it, it's probably worth saying that I probably learn more from these threads than you guys-- not that I know more, but that these threads are more helpful in shaping and influencing my thoughts than y'alls, just because I'm more invested than any of you. So I'm not claiming to be a puppet master that's pulling the strings so the conversation happens the way I want. Rather, I want a conversation of some sort to happen, not to control it but to lick my finger and put it in the wind and see where it's blowing.

But I disagree about no substantive change. The idea is still mostly the same in broad character, but I feel the strangecoin proposal is far more targeted and focused than anything I've said about AE in the past. Not that it's perfect, but this thread has already shown that it's the kind of thing that a technical expert can pick up and squint at for a while and then start putting up code to reflect the spec. There was nothing anywhere close to that in the marble proposal. The strangecoin proposal doesn't rest on any conceptual link between attention and value, or any hidden master AI doing all the work behind the scenes, or that everyone adopt my political and normative dispositions in one fell swoop. It doesn't require political governance on Twitter, or universal brotherly love, or infinite resources, or anything else that was potentially left murky in the old AE threads. And the altcurrency presentation makes it relevant to a developing technology and community of active public interest. I'm not saying there is no connection between the two, and I'm certainly not saying that I've fixed or removed all the problems. But I think there's at least some indication of substantive change over the last three years. I also think it's evidence of a good faith effort to respond to many of the criticisms, both specific and general, that have arisen from these threads.

I don't expect it to satisfy any of you. But I don't feel ashamed of the work I do here with you, despite your attempts at bullying me into shame and self-doubt. In fact, there are times when I'm quite proud of this work we do together. That doesn't make me a self-obsessed narcissist. The thread has taken the approach that instead of critiquing and discussing the ideas, it's easier to criticize and discuss the various deficiencies in me. You reason that because I'm not a genius messiah I have no business talking the way I do, and so addressing what I say is therefore a waste of time. I find this discussion of my failings boring as fuck and a waste of time and completely irrelevant, and I know it's partly my fault, but I don't know how to make it stop. Eventually you and I will both lose interest in these digressions and the thread will die and I'll disappear for another 6 months.

It would just be really nice if we could do something interesting with the work and code that JawnV6 and others have done in this thread, because it's interesting as fuck and I don't want to blow it.

RealityApologist fucked around with this message at 16:55 on Apr 5, 2014

Obdicut
May 15, 2012

"What election?"
See, this is what I mean:

quote:

The thread has taken the approach that instead of critiquing and discussing the ideas, it's easier to criticize and discuss the various deficiencies in me

The thread has been absolutely packed with criticisms of the ideas, and yet he's managing to really, sincerely convince himself otherwise.

He says:

quote:

this thread has already shown that it's the kind of thing that a technical expert can pick up and squint at for a while and then start putting up code to reflect the spec.

Right after hearing this:

quote:

So, either TUA is glorified bookkeeping of boundary condition protocol and useless as pressure on the system or the spec has an incredible gap.


RealityApologist
Mar 29, 2011

ASK me how NETWORKS algorithms NETWORKS will save humanity. WHY ARE YOU NOT THINKING MY THESIS THROUGH FOR ME HEATHENS did I mention I just unified all sciences because NETWORKS :fuckoff:
The "incredible gap" he's identifying is about a limit case in the spec, which can be defined precisely and for which there may be technical solutions. That's not the same as cargocult babbling, which is what I'm being accused of in this thread.

edit: You are all too stupid to deal with this idea too SEE HOW DOES IT FEEL
