endlessmonotony posted: I'm a cyborg. And I hate it. The future sucks.
# Mar 23, 2023 19:54
I'm hoping it's an upbeat year and we get more of a cyberskapunk vibe
hi, im posthumous carrie fisher and im excited to tell you about the new Star Wars nfts
i feel like the guy doing flips over the pedestrian footbridge is enjoying the park better than the guy taking a picture of him to complain on the internet
Replace the word "content" with "code" and replace "pass Google AI detection screening" with "not get flagged for copyright infringement" and I'm staring at my future
The base package includes a roomba conveniently docked between the driver and the windshield
/the car is facing north. nu metal is playing loudly over speakers embedded in the seat of your chair, pumping uncomfortably into your genitals. there are more words on the side of the display you can't make out > TURN DISPLAY /the music gets louder
also don't mistake the pickle warnings for just another nsfw tag
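those pickle warnings are about code execution, not content: unpickling a model checkpoint can run arbitrary code. a minimal sketch — the `Payload` class and the harmless `eval` stand-in are mine, not from any real checkpoint format:

```python
import pickle

# unpickling CALLS whatever __reduce__ names, so loading a pickled
# checkpoint is code execution. eval("6 * 7") stands in for something
# nasty like os.system(...).
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # what a malicious checkpoint could contain
obj = pickle.loads(blob)         # code runs just from loading the bytes
print(obj)                       # → 42
```

this is why torch.load (which uses pickle under the hood) warns about untrusted files, and why alternatives like safetensors exist.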
NoneMoreNegative posted: lol scrolling AliX

and "But cannot truth be made through the repetition of a lie?" yells Dick Cheney, as he tears his shirt off exposing a giant mechanical heart protruding from his chest, his weak point
Analytic Engine posted: https://twitter.com/andresg_tv/status/1626291943264362496?s=46&t=zFmx-7Tad5M1ZBBoPaFP2w

idiot thinks the car is driving him somewhere rn
NoneMoreNegative posted: quite the opposite!

there goes jason waterfalls
Xpost from the musk thread, but I'm loving how cyberpunk it is that an addicted billionaire purchased a brain-addling machine and completely lost his mind for it:
NoneMoreNegative posted: Also the reason I was actually ITT

would look awesome in a bitchin red and black color scheme, like driving two nod obelisks of light
Jabor posted: you might find this take from a fellow educator interesting: https://acoup.blog/2023/02/17/collections-on-chatgpt/

Mr. Devereaux here states outright that statistical relationships between words are fundamentally different from knowledge, which is sort of a stupid thing to assume. I "know" harry potter rides a magic broom, despite my only experience with harry potter and his magic broom being that those words were arranged in a certain way on the pages of a book. How is that not knowledge, if chatgpt can both know and communicate about the same thing by also just knowing relationships between words?

He also states that an essay is, by his own definition, the product of certain steps, so therefore gpt cannot create essays because it didn't do the legwork he prescribed. Which is garbage. You can fart out a good essay about a subject without going through his steps, and if you want to say "your prior knowledge of the effects of furry culture on the mascot suit industry maps to the same steps" then you can make the same argument about chatgpt's training and modeling and synthesis steps mapping just as easily.

The best argument he makes is that using chatgpt to create your college essays for you is bad because the essays it writes are bad (they don't adhere to the truth well enough). There's a much better argument to be made that the purpose of his class, and his college in general, is to teach students how to learn and think and synthesize new ideas, but that doesn't actually lead to the conclusion "so obviously there's no place for language models in that process". If chatgpt's essays didn't suck, they would be an invaluable resource for learning about a subject and its related fields, and as a source for your own process of synthesizing ideas and trying to communicate them.
That's not right; chatgpt and all large language models see plenty of gibberish and tune the conceptual value of that gibberish to zero during the learning process (rejecting it). You can say that chatgpt doesn't understand "concepts" and only works with "words", but it's not clear that a concept is fundamentally different from the statistical interrelation of words (more exactly, the structured, statistically weighted connections between words, phrases, sentences, and groups of sentences). To go further than harry potter: I can tell you about zorblots being bigger than zangos, and you need only words to form concepts of two things that don't exist. There are no pictures, but does that mean blind people can't have concepts? Obviously you don't need to directly experience things to have a concept of them, since zorblots don't exist, so direct experience can't be what makes relations between words different from concepts.

infernal machines posted: how? can you explain the mechanism or process by which using chatgpt to create an essay (of any quality) could improve the user's grasp of the subject?

By explicitly using it as input to your own research and synthesizing processes. This is something that seems obvious to me (so it's probably wrong), but if chatgpt were less full of poo poo (i.e. more than 90% right on factual issues), writing an essay on why everything is called postmodernism would be aided by asking chatgpt to write essays on A) what it thinks the answer to this question is, B) what the extent of things called postmodernism is, C) what the history of these things is and why they took on this nomenclature, etc. Using these as summaries to work from for your own research, and then dissecting them to build your own arguments, would, I think, make your own work better.
infernal machines posted: reject traditional concepts like meaning and embrace a world without referents or the signified

chatgpt is amazing at winograd schemas. old terry doesn't do ai research anymore, but his test for intelligence is now firmly in the category of "too easy".
the poster saw something terrible on his monitor, which was turned off. what was turned off, the poster, the thing he saw, or the monitor?
infernal machines posted: i'd need to know what a poster and a monitor are to infer the subject of the query, but maybe if i see several thousand more sentences containing the words monitor and poster i can make a guess based on their correlation with each other in the text

If you need to "know" something to answer this sort of question, then because chatgpt can answer this sort of question, it does "know" things! https://freddiedeboer.substack.com/p/chatgpt-and-winograds-dilemma Obviously that's not a good conclusion to draw, but falling back on ineffable properties of human minds is not productive (the linked article goes on to point out chatgpt doesn't have a "theory of the world" and therefore this test is void). winograd schema problems were a litmus test for knowledge and understanding only until an ai came along that could trounce them.

RokosCockatrice fucked around with this message at 13:59 on Feb 24, 2023
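for anyone who hasn't seen one: a winograd schema is a pair of sentences differing in one word, where that word flips which candidate the pronoun refers to. a minimal sketch of how such items are structured and scored — the two items are the classic trophy/suitcase example, and `resolve` is a placeholder for whatever model you'd plug in:

```python
from dataclasses import dataclass

@dataclass
class WinogradItem:
    sentence: str        # contains the ambiguous pronoun
    pronoun: str
    candidates: tuple    # the two possible referents
    answer: str          # the correct referent

ITEMS = [
    WinogradItem(
        "The trophy doesn't fit in the suitcase because it is too big.",
        "it", ("the trophy", "the suitcase"), "the trophy"),
    WinogradItem(
        "The trophy doesn't fit in the suitcase because it is too small.",
        "it", ("the trophy", "the suitcase"), "the suitcase"),
]

def accuracy(resolve, items):
    """resolve(sentence, pronoun, candidates) -> chosen referent."""
    hits = sum(resolve(i.sentence, i.pronoun, i.candidates) == i.answer
               for i in items)
    return hits / len(items)

# a surface-correlation-only "model" that ignores the flipped word always
# picks the same candidate, so it can never get both twinned items right
naive = lambda sentence, pronoun, candidates: candidates[0]
print(accuracy(naive, ITEMS))   # → 0.5
```

the twinning is the whole trick: word co-occurrence alone scores 50% by construction, which is why the test was considered hard before large language models started acing it.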
the deadpan delivery of half-sarcastic remarks makes your posts considerably harder to understand than a normal turing test
It's actually fascinating stuff. SuperGLUE is a set of tests they put together when GLUE (general language understanding evaluation) started to get aced by language models, so they had to pull out the big guns. https://super.gluebenchmark.com/leaderboard

Currently the top scorers are hitting numbers like 98% on SuperGLUE's referent tests, so we probably need to drop these tests for a new CrazyGLUE set. These models are almost definitely all testing on data that includes mentions and examples from the SuperGLUE dataset, so, idk, maybe they're getting falsely high scores.

Me angrily putting a bad prompt into chatbing: https://www.youtube.com/watch?v=TIkYqCJjEtw

Me angrily putting a good prompt into a gpt-neox-20b model running on my own machine, fine-tuned on forum posts: https://www.youtube.com/watch?v=ND9C9RrBum0
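the contamination worry is at least crudely checkable: if a benchmark item's n-grams appear verbatim in the training corpus, the score on that item is suspect. a minimal sketch with a toy corpus — the function names and the 5-gram threshold are my own illustration, not how any leaderboard actually audits submissions:

```python
def ngrams(text, n=5):
    """Set of lowercase word n-grams in text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contaminated(benchmark_item, training_corpus, n=5):
    """True if any n-gram of the item appears verbatim in the corpus."""
    return bool(ngrams(benchmark_item, n) & ngrams(training_corpus, n))

# toy "training corpus" that happens to quote a benchmark sentence
corpus = ("... the city councilmen refused the demonstrators a permit "
          "because they feared violence ...")

leaked = ("The city councilmen refused the demonstrators a permit "
          "because they feared violence.")
fresh = ("The delivery truck zoomed by the school bus "
         "because it was going so fast.")

print(contaminated(leaked, corpus))  # → True  (item was in training data)
print(contaminated(fresh, corpus))   # → False
```

real decontamination efforts (e.g. the n-gram checks described in the GPT-3 paper) work on the same idea, just at corpus scale.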
that is fun. shamelessly stolen from a friend:![]()
rotor posted: we can agree to disagree.

both parties coming to an agreement that each other is unsalvageably stupid does imply some level of understanding
distortion park posted: E: missed the agreement to move on. I think it's pretty obvious to any yosposter that rotor isn't unsalvageably stupid though!

yeah, i think the right joke was disdain for one another's opinions. idk, sorry rotor, not on my a-game here.
iirc the only part that is legally a "gun" is the receiver; everything else you can buy as a spare part without any overhead.
aah, here's the vice video that 100% of my opinions are based on: https://www.youtube.com/watch?v=C4dBuPJ9p7A
it turns out "how far homemade guns have come" is surprisingly far, because gun science has been aggressively trying to make guns simpler and easier to manufacture for a hundred years
Roosevelt posted: what would you do?

make it look like a cat with the muzzle being under its tail
The Saddest Rhino posted: https://twitter.com/MothershipSG/status/1631855843053551616?t=xuojXpjONYfyNH92KGRhCg&s=19

when people bring up culture wars and say "china isn't arguing about this!" they apparently only know the half of it
NoneMoreNegative posted:
I'm all for cathecting, but NMN, this seems a little bullshitty. You know what, I got you covered, I made you one you can right click -> save as for free.
Remember the implied super horny part about cyberpunk? Maybe now that AI boobs are as easy to generate as they are to think about, we'll start seeing everything get super horny super fast. Exhibit A: https://www.tiktok.com/embed/7213445991813352747
I really don't like that guy. Unrelated: ![]()