Mod E: Decent guide as of April 2023: https://stable-diffusion-art.com/install-windows/

EDIT: HOW 2 AI ART: Boba Pearl posted: I will write you one my friend.

It's crazy to look back on what neural nets were spitting out barely five years ago, basically acid-trip unintelligible remixes of source imagery, and compare it to this, which is...something else entirely. The advent of photography changed painting from chronicling reality to expressionism. I'm not sure what this does. Or will do.

Only a handful of people have access to this as of yet, fewer than a thousand, but we have Twitter, where they are posting their bonkers rear end poo poo. Tags are #dalle2 and #openai. Some of this stuff is legitimately insane. The implications for, at the very least, the concept art industry are staggering.

"Post-apocalyptic skyscraper covered in vines with urban rainforest below, digital art"

"Spongebob dressed as a soldier during WW2 landing on the beach of Normandy, digital art"

"Steampunk old western painting of a cyborg ox pulling a Conestoga wagon"

"futuristic mozart composing a new digital symphony with a huge synthesizer made of stained glass and electricity"

"Steampunk intracellular diagram with mitochondria and single flagella, digital art"

"medieval manuscript depicting an explanation of the internet"

"A hyperrealistic painting of an alien gently holding the earth, digital art"

There are, obvs, a lot of misses and weird glitch poo poo. But some of this stuff is deeply disturbing because of how much nuance it is interpreting and accurately spitting back out.

Somebody fucked around with this message at 22:44 on Apr 24, 2023
# Apr 22, 2022 19:06
"Fooling around and using DALL·E 2 inpainting techniques to imagine what weirdness Hieronymus Bosch might have painted if he lived today:"

# Apr 22, 2022 19:21

frumpykvetchbot posted: gently caress. if real, some of these are too good.

What's really fascinating about this one is that, unlike algorithms like the old Google Deep Dream, this one isn't just stitching together bits and pieces of the data set into a collage. It's taking what it learned from the huge data sets it was fed and generating entirely original content from the pixel up, based on these sometimes incredibly nuanced inputs. It's really fuckin' scary.

# Apr 22, 2022 19:59

https://twitter.com/Riabovitchev/status/1517435304990416896

I went and spoke to my roommate, who I remembered (after recovering from my hangover) is a researcher working on machine-assisted image analysis, lol. He hasn't read the paper yet and couldn't shed light on how this is doing what it is doing, so he doesn't know whether they've changed something fundamental in the system since DALL-E 1 or whether OpenAI just made the dataset way bigger and trained it more to get better results, "which wouldn't be as interesting, IMO."

He mentioned that his department is somewhat befuddled by DALL-E 2 right now. As seen above, it handles these abstract, nuanced artistic requests terrifyingly well and can generate incredibly intricate mural scenes, like that nu-Bosch painting or those alien skulls, from thin air without issues. So it clearly understands form and positioning and a lot of techniques which take legitimate artists years, if not a lifetime, to lock down. But feed it something like "a blue square and a red ball sitting on a table, square left and ball right" and then mix it up, and it'll get confused and return the wrong shape or color or position. So it doesn't handle nitty-gritty specificity about form and positioning quite right.

This is some exceedingly odd stuff to me, an uneducated layman.

Rime fucked around with this message at 20:55 on Apr 22, 2022
# Apr 22, 2022 20:51

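A goofy but concrete way to poke at the binding failure described above: hold the scene fixed and permute which attribute goes with which object, then check which renders come back wrong. Below is a hypothetical sketch of just the prompt-generation half; `binding_probe_prompts` is a made-up helper, and scoring the resulting images would still need a human (or a separate classifier) in the loop:

```python
from itertools import permutations

def binding_probe_prompts(objects, colors, positions):
    """Enumerate every assignment of colors and positions to the objects,
    yielding prompts that differ *only* in attribute binding."""
    prompts = []
    for cols in permutations(colors, len(objects)):
        for poss in permutations(positions, len(objects)):
            parts = [f"a {c} {o} on the {p}"
                     for o, c, p in zip(objects, cols, poss)]
            prompts.append(" and ".join(parts) + ", on a table")
    return prompts

# 2 color assignments x 2 position assignments = 4 prompts
prompts = binding_probe_prompts(["square", "ball"],
                                ["blue", "red"],
                                ["left", "right"])
```

If the model truly binds attributes to objects, it should get all four variants right; a model that merely knows "blue, red, square, ball appear somewhere" will pass some and scramble others.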
https://twitter.com/paulnovosad/status/1517138092573937664

# Apr 22, 2022 21:21

Here is the paper underlying DALL-E 2 if anyone wants to melt their brain trying to understand it.

# Apr 22, 2022 21:39

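For anyone who wants the thumbnail version of that paper before melting their brain: the system (the authors call it unCLIP) runs in three stages, a CLIP text encoder, a "prior" that turns the text embedding into a CLIP *image* embedding, and a diffusion decoder that renders pixels from that image embedding. A schematic sketch with stand-in stubs; the real components are giant trained diffusion models, so every function body below is purely illustrative:

```python
import math
import random

EMB_DIM = 8  # stand-in for CLIP's actual 512/768-dim embeddings

def clip_text_encoder(prompt):
    # Stub: the real encoder is a trained CLIP text transformer.
    rng = random.Random(prompt)  # deterministic per prompt
    return [rng.gauss(0, 1) for _ in range(EMB_DIM)]

def prior(text_emb):
    # Stub: the real prior is a diffusion model that samples a CLIP
    # *image* embedding conditioned on the text embedding.
    norm = math.sqrt(sum(x * x for x in text_emb))
    return [x / norm for x in text_emb]

def decoder(image_emb, size=8):
    # Stub: the real decoder is a diffusion model that renders pixels
    # conditioned on the image embedding (plus upsamplers to 1024px).
    rng = random.Random(sum(image_emb))
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def generate(prompt):
    return decoder(prior(clip_text_encoder(prompt)))

img = generate("a hyperrealistic painting of an alien gently holding the earth")
```

The interesting design choice is the middle stage: generation is routed through CLIP's joint text/image embedding space rather than going straight from text to pixels.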
Bardeh posted: Stuff like this, and this: Toward Self-Improving Neural Networks: Schmidhuber Team's Scalable Self-Referential Weight Matrix Learns to Modify Itself / Nvidia Uses GPU-Powered AI to Design Its Newest GPUs

poo poo is really popping off in AI land. If we don't destroy ourselves this decade, I legitimately cannot imagine where this is going to end up.

I've gone through the DALL-E 2 paper twice now and it's really fuckin' nutty to my layman's brain, which last dabbled in this stuff a decade ago, when it was in absolute infancy. Anybody remember that old web novel Metamorphosis of Prime Intellect? Getting hella throwback vibes to that today.

# Apr 22, 2022 22:50

So, like, idle thought while discussing all this: using large data sets of existing art to generate original imagery from a stew of concepts is an interesting thing to chew on when it comes to creativity. How much is truly original?

Last week in another thread goons were talking about the theory that the human brain acts as a transducer for consciousness rather than its source: that it filters lived reality to an otherwhere and makes physical reality intelligible to whatever hardware consciousness actually runs on. Extrapolating out from this line of thinking, aren't this AI and artists alike just translating experiences into alternative mediums? The AI is translating the mass human interpretation of experiences which it has ingested from the training set, and returning its own interpretations of its pseudo-lived experiences.

The line where consciousness possibly lies is getting pretty blurry the more I dig into the most recent papers on this stuff. It makes the classic joke about cable TV real: "everything is a remix." Getting spooky.

# Apr 22, 2022 23:42

https://twitter.com/hausman_k/status/1511152160695730181 This is drifting far from the cool-rear end images topic (I think I jumped the gun, and should have saved this for when it has wider access so we can generate OG content), but holy poo poo I have been asleep on where AI research has been going the past few years.

# Apr 23, 2022 00:08

Justin Credible posted: So, another thing to this, and I don't know how to speculate how this all ties in, but. I've always found many of the AI-generated images basically snapshots of what a deep psychedelic experience is like. Like, eerily accurate, down to known things being warped and merged, background/edges looking far stranger than the focal point. And it always struck me that this, this is how AI is 'seeing' things.

Yeah, that struck me immediately back in the day, and while I brushed it off as just a limitation of maybe having been fed too many psychedelic images in the initial datasets, the effect has persisted way too long now not to be taken into consideration. There's something fundamental going on here.

https://syncedreview.com/2021/11/30/deepmind-podracer-tpu-based-rl-frameworks-deliver-exceptional-performance-at-low-cost-155/

quote: The PolyViT design is motivated by the idea that human perception is inherently multimodal

Seems like the cutting edge is exploring this avenue as well.

Khanstant posted: i signed up like 2 weeks ago still never heard back

They're opening up wide in the next couple of months. The crazy poo poo is that this model is looking like it's already outdated and there are even crazier developments waiting in the wings. The Media Synthesis subreddit has a pretty good list of resources, both open and currently in development. Some of this stuff, like the self-writing code AI, is extremely

Rime fucked around with this message at 00:55 on Apr 23, 2022
# Apr 23, 2022 00:51

I knew this poo poo was getting weird when I found out Cash App, the money-transfer service which advertises on like every podcast, has a cutting-edge machine learning division which is revolutionizing generative AI speeds.

# Apr 23, 2022 01:14

Cup Runneth Over posted: So what you're saying is that merely making it pick up a sponge and place it on a table based on a request to clean up a spill was incredibly complex and took decades of research, but within the next ten years we will have robot maids capable of deep-cleaning our entire households for us, using tools designed for humans, based on on-the-fly interpretations of verbal commands.

Similar to how the systemic problems causing the breakdown of the biosphere have followed exponential curves turning vertical in recent years.

# Apr 23, 2022 02:51

Khanstant posted: feel like if you're quick, some ai art covers and ai generated work-study type gentle music could be a good revenue stream. spotify wants people pumping out albums really often and low-stakes background-noise type music seems prime for letting a computer get the jist and pump out more like that.

Attempts at doing this stuff with music have been, uh, really really bad across the board.

frumpykvetchbot posted: https://twitter.com/sama/status/1511727724682899461

Yeah. It is not the quality of the images it is spitting out which stuns me; it is the degree to which it seems to "understand" concepts at many levels, and how they relate to each other in extremely nuanced and sophisticated ways, and then presents them in a fashion which looks natural and appealing to human observers. This is uncanny valley, but in a very new and unnerving way.

Rime fucked around with this message at 03:16 on Apr 23, 2022
# Apr 23, 2022 03:14

Khanstant posted: I mean, this NO SOUL they have on https://www.youtube.com/watch?v=qGjsGTvwL78

Yeah, if it utilizes MIDI like this guy did, it's easier to get something that's kind of passable but still sounds like it's skipping a record every so often. I thought music would be simpler than interpreting visual things like DALL-E does, due to the pseudo-algorithmic nature of music, so I reached out to my BFF who did his masters in computer music, who said that most of the attempts had gone nowhere because:

quote: Music requires a lot of temporal context because it happens over time; traditionally, things trying to make or analyze music have a really hard time with it as a result.

# Apr 23, 2022 03:48

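The temporal-context problem in that quote shows up even in a toy model. A first-order Markov chain over MIDI note numbers remembers exactly one previous note, so it reproduces local transitions from a training melody while losing all phrase-level structure, which is the record-skipping effect in miniature. A sketch (pure illustration, not any real system):

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Count note-to-note transitions (a context of exactly one note)."""
    table = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        table[a].append(b)
    return table

def sample(table, start, length, seed=0):
    """Random-walk the transition table from a starting note."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = table.get(out[-1])
        if not nxt:  # dead end: no observed continuation
            break
        out.append(rng.choice(nxt))
    return out

# "Twinkle Twinkle" opening as MIDI note numbers (middle C = 60)
melody = [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60]
table = train_markov(melody)
generated = sample(table, start=60, length=16)
```

Every step of the output is a transition that really occurs in the melody, so it sounds locally plausible, but nothing stops the walk from wandering between phrases, which is roughly why naive generators come out sounding broken.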
uma posted: Do you happen to have a link to this thread? This rules.

Click the gangtag; we get up to all kinds of fun poo poo when the bird news is quiet.

# Apr 23, 2022 04:40

pancake rabbit posted: hi there

"Apocalyptic mountain landscape, in the style of Monet."

"A photo of an alpine meadow filled with various flowers, good lighting, realistic."

Thanks, Pancake.

Rime fucked around with this message at 04:51 on Apr 23, 2022
# Apr 23, 2022 04:48

Philthy posted: Been playing with Disco Diffusion a bit.

What the sweet hell generated this madness.

# May 2, 2022 16:16

Thread has been up for a year now and it is, frankly, completely unreal to compare the first few pages of content to what these algorithms can now generate. "Impressive" doesn't do justice to flipping through and seeing how rapidly the tech has evolved in just twelve months. That said, we're now light-years past the breathless shock and awe of the first few posts, and the community's understanding of how to get consistent quality has come a long way.

Could the OP do with a refresh? A summary of the various open and closed source generation technologies now available, links to prompting resources, etc.? Or is the thread happy carrying on as it has been?

# May 20, 2023 17:20