|
Shame Boy posted:lol i was wondering how big yud was reacting to the whole "computers are good enough to trick gullible people into thinking they're alive now" thing and, well quote:Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
|
# ? Mar 30, 2023 12:16 |
|
|
# ? Apr 26, 2024 20:26 |
|
The problem with accelerationism though is the idea that "well surely if we automate all these jobs away then governments will implement UBI" is based on nothing. Every time this has happened before the world has just gone "ahhh the free market will sort things out"
|
# ? Mar 30, 2023 12:21 |
|
Powerful Two-Hander posted:what are these people on lol Cybernetic Vermin posted:lot of crank
|
# ? Mar 30, 2023 12:22 |
|
Mega Comrade posted:The problem with accelerationism though is the idea that "well surely if we automate all these jobs away then governments will implement UBI" is based on nothing. first and foremost: nothing of that sort of significance is going to happen with technology as it stands, can't rule out additional breakthroughs, but there's also not much of a reason to expect them. except if one is worried about word salad email writing job being automated, but i really don't think anyone should cry about that. so it seems pretty safe to lol at people being weird over it. beyond that, if we imagine breakthroughs, i don't really trust any of those people to preserve the status quo in a way that is good for anyone. it is not that accelerationist a matter even, capitalism has a very difficult relationship with actual automation.
|
# ? Mar 30, 2023 12:30 |
|
My best guess on this dumb poo poo: Having faced no material adversity or struggle in their lives, some people just need to make poo poo up to be scared of to drive themselves out of ennui
|
# ? Mar 30, 2023 12:54 |
|
Mega Comrade posted:The problem with accelerationism though is the idea that "well surely if we automate all these jobs away then governments will implement UBI" is based on nothing. i think they might do that eventually, ubi is a right wing program after all
|
# ? Mar 30, 2023 13:00 |
|
Not a Children posted:My best guess on this dumb poo poo: Having faced no material adversity or struggle in their lives, some people just need to make poo poo up to be scared of to drive themselves out of ennui don’t doxx me
|
# ? Mar 30, 2023 13:00 |
|
probably the biggest issue with gpt models everywhere is that texts that are not generated at least in part by gpt become the new low background steel which also means good luck finding anything online using search lol e: and establishing trust relationships with people online
|
# ? Mar 30, 2023 13:05 |
|
Shame Boy posted:lol i was wondering how big yud was reacting to the whole "computers are good enough to trick gullible people into thinking they're alive now" thing and, well wait so we don't want to do the butlerian jihad?
|
# ? Mar 30, 2023 13:07 |
|
I know nothing about Yudkowsky but his name pops up a lot, is he an actual AI researcher or just a blogger?
|
# ? Mar 30, 2023 13:26 |
|
Gubbinal Girl posted:I know nothing about Yudkowsky but his name pops up a lot, is he an actual AI researcher or just a blogger? https://rationalwiki.org/wiki/Eliezer_Yudkowsky
|
# ? Mar 30, 2023 13:28 |
|
4lokos basilisk posted:e: and establishing trust relationships with people online webrings are poised for a comeback
|
# ? Mar 30, 2023 13:47 |
|
Gubbinal Girl posted:I know nothing about Yudkowsky but his name pops up a lot, is he an actual AI researcher or just a blogger? he's an "AI researcher" in that he writes lots and lots of words about how scary it would be if we made the robot devil, and how you should give him and his organization money so we can make robot god before that happens to fight the robot devil actual researchers make fun of him and dunk on his books a lot but people like musk take him very seriously
|
# ? Mar 30, 2023 13:57 |
|
yeah just read this article about him if you're curious it's p good
|
# ? Mar 30, 2023 13:59 |
|
Shame Boy posted:yeah just read this article about him if you're curious it's p good quote:(above a certain income level) stopped there lmao
|
# ? Mar 30, 2023 14:14 |
|
Cybernetic Vermin posted:the "pause giant ai experiments" open letter in the news also has some weird tinges, the very common "should we automate away all the jobs?" can be read in the "must not let capitalism just crush all poor people" but i am a lot more inclined to believe they (at least a lot of the top signatures like elon musk and yuval noah harari) mean "we must keep poor people busy at meaningless jobs", and then it gets into humanity being "obsoleted and replaced", how we'll "risk loss of control of our civilization", and so on. just going to cross-post this here infernal machines posted:VICE has a piece on why that open letter is just a nonsense distraction from the actual problems with llm tools the letter is attention getting nonsense that distracts from the actual problems with the way llms are being used. bombastic, sensational bullshit for idiots like yudkowski
|
# ? Mar 30, 2023 14:18 |
|
Shame Boy posted:he's an "AI researcher" in that he writes lots and lots of words about how scary it would be if we made the robot devil, and how you should give him and his organization money so we can make robot god before that happens to fight the robot devil we used to call these people "philosophers" or maybe theologians, depending
|
# ? Mar 30, 2023 14:19 |
|
lol i didn't make this connection beforequote:Yudkowsky's papers are generally self-published and had a total of two cites on JSTOR-archived journals (neither to do with AI) as of 2015. One of these came from his friend Nick Bostrom at the closely-associated Future of Humanity Institute.[43] hey guess who put out that stupid open letter lol infernal machines posted:we used to call these people "philosophers" or maybe theologians, depending nah he's far smarter than them quote:Thus Yudkowsky sees the need to "solve" ethics in some form that can be computerized — although ethics remains an open problem in philosophy after thousands of years. However, Plato, Aristotle and Kant just weren't as smart as Yudkowsky believes himself to be.
|
# ? Mar 30, 2023 14:23 |
|
yes, he's a very smart boy who has a firm grasp on reality, which is why he's so clearheaded as to devise the need to algorithmically "solve" ethics
|
# ? Mar 30, 2023 14:28 |
|
there are some very real problems with the way llms are enabling more oppression and discrimination, but those are things being done by people and governments, using them as they would any other piece of software, and so not really exciting news. instead, we get to hear from the biggest clowns in philosophy on the incredible dangers of their own imaginations, and this is all anyone is going to report on
|
# ? Mar 30, 2023 14:32 |
|
yeah I'm an ai researcher an actually insane researcher
|
# ? Mar 30, 2023 14:33 |
|
i wonder if he’s worried that popular awareness of real things that “look like ai” to regular people will make it clear to everyone that miri and the like have absolutely no connection to any research in the field
|
# ? Mar 30, 2023 14:46 |
|
raminasi posted:i wonder if he’s worried that popular awareness of real things that “look like ai” to regular people will make it clear to everyone that miri and the like have absolutely no connection to any research in the field at some point in the past you could have convinced me that yudkowsky and his ilk were just working people over and they didn’t actually believe any of this poo poo, but over the past several years I’ve become increasingly convinced that they’ve all fully drank their own kool-aid and are incapable of this level of guile
|
# ? Mar 30, 2023 15:12 |
|
can't believe the guy called eliezer might be lying electronically
|
# ? Mar 30, 2023 15:17 |
|
quote:With no training in his field of interest, Yudkowsky has no accomplishments to his credit beyond getting Peter Thiel to give him money.
|
# ? Mar 30, 2023 15:20 |
|
infernal machines posted:we used to call these people "philosophers" or maybe theologians, depending nah, philosophers also call him a crank
|
# ? Mar 30, 2023 15:21 |
|
quote:He claims to be a skilled computer programmer, but has no code available other than Flare, an unfinished computer language for AI programming with XML-based syntax. Hahahaha this is the dumbest thing I've ever heard of. None of the features make any sense and all the documentation is pompous gibberish https://flarelang.sourceforge.net/goals.html posted:A new programming language has to be really good to survive. A new language needs to represent a quantum leap just to be in the game. Well, we're going to be up-front about this: Flare is really good. There are concepts in Flare that have never been seen before. We expect to be able to solve problems in Flare that cannot realistically be solved in any other language. We expect that people who learn to read Flare will think about programming differently and solve problems in new ways, even if they never write a single line of Flare. quote:LISP, of course, is the traditional king of self-modifying languages, because LISP uses the same representation for program code and program data. Flare also uses the same representation for code and data, except that the common representation is extensible tree structures (XML) rather than lists. The difference is a major one; in a list structure, an object's role is determined by where it is. In Flare, an object's role is determined by its name and its metadata. LISP is all about trees what the gently caress does this mean??? There's just so much. I don't understand how anyone could read his writing and not immediately see he's a conceited moron.
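for anyone who doesn't speak lisp: the "same representation for code and data" thing being name-dropped there is just homoiconicity, and it's genuinely simple. here's a minimal python sketch (nested lists standing in for lisp's s-expression trees — the interpreter and the "macro" below are illustrative toys, not anything from flare):

```python
# Homoiconicity sketch: a program is plain nested lists ("trees"),
# so another program can build, inspect, or rewrite it as ordinary data.

def evaluate(expr):
    """Evaluate a tiny Lisp-like expression encoded as nested lists."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

def swap_mul_for_add(expr):
    """A toy 'macro': walk the tree and rewrite every * into + before eval."""
    if isinstance(expr, list):
        head = "+" if expr[0] == "*" else expr[0]
        return [head] + [swap_mul_for_add(e) for e in expr[1:]]
    return expr

# The program is just data we can construct and transform:
program = ["+", 1, ["*", 2, 3]]              # (+ 1 (* 2 3))
print(evaluate(program))                     # 7
print(evaluate(swap_mul_for_add(program)))   # 6
```

which is to say lisp's lists already *are* trees, so "extensible tree structures (XML) rather than lists" is not the quantum leap the docs claim.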
|
# ? Mar 30, 2023 15:35 |
|
either way yudkowsky must not be allowed, on the strength of his nonsense, to claim the obvious yospos default position of a grumpy comedy call for a butlerian jihad. i also think that stuff like the furry community putting in great efforts to make models generate bespoke porn with great precision is doing way more for public understanding of what these things are than a scenario where a few corporate giants are the only places with models, which they present carefully limited to pointless tasks performed with horoscope levels of "never being quite incorrect" while simultaneously tuned into e.g. pretending that homosexuality does not exist.
|
# ? Mar 30, 2023 15:37 |
|
Beeftweeter posted:can't believe the guy called eliezer might be lying electronically lol
|
# ? Mar 30, 2023 16:16 |
|
Cybernetic Vermin posted:either way yudkowsky must not be allowed to for his nonsense claim the obvious yospos default position of a grumpy comedy call for a butlerian jihad. a much larger portion of the furry community has gotten real fuckin' mad at the AI porn people cuz if nothing else furries respect the hell out of their artists so that's nice i guess
|
# ? Mar 30, 2023 16:40 |
|
shame on an IGA posted:wait so we don't want to do the butlerian jihad? yeah what the hell? we finally get normies calling for the datacenters to burn and we make fun of them?
|
# ? Mar 30, 2023 16:41 |
|
Gubbinal Girl posted:Hahahaha this is the dumbest thing I've ever heard of. None of the features make any sense and all the documentation is pompous gibberish but he wrote a harry potter fanfiction where harry potter lectures you about bayesian inference or whatever, and it's the best book ever written if you ask the right people
|
# ? Mar 30, 2023 16:41 |
|
rotor posted:yeah what the hell? we finally get normies calling for the datacenters to burn and we make fun of them? big yud is not a normie by any stretch of the word lmao
|
# ? Mar 30, 2023 16:42 |
|
he doesn't want to burn the datacenters, he just wants them to run his own personal warm and fuzzy posthuman AI
|
# ? Mar 30, 2023 16:43 |
|
haveblue posted:he doesn't want to burn the datacenters, he just wants them to run his own personal warm and fuzzy posthuman AI specifically he wants all the money that would be going to the datacenters to instead go to his foundation, which will make Robot God by... i don't know, writing enough prayers down in XML that it manifests itself i guess
|
# ? Mar 30, 2023 16:44 |
|
this is the same robot god that's also behind roko's basilisk*, if you were wondering. big yud is that dude. *well he didn't come up with the idea, but him treating it as if it were valid and banning discussion about it on his website to keep the memetic virus from cursing his followers is kinda what boosted it enough to become an internet household name
|
# ? Mar 30, 2023 16:48 |
|
You're all just yuddites
|
# ? Mar 30, 2023 16:48 |
|
Gubbinal Girl posted:Hahahaha this is the dumbest thing I've ever heard of. None of the features make any sense and all the documentation is pompous gibberish I can understand why Yudkowsky considers LLMs to be an existential threat because they are far better at writing pompous-sounding confident gibberish than he is
|
# ? Mar 30, 2023 16:52 |
|
Gubbinal Girl posted:Hahahaha this is the dumbest thing I've ever heard of. None of the features make any sense and all the documentation is pompous gibberish this reads like someone just discovered Reflection and has got carried away
|
# ? Mar 30, 2023 17:00 |
|
|
i dont really read this thread much but ok well as long as we can all agree the data centers should burn, im cool
|
# ? Mar 30, 2023 17:08 |