|
I don’t have any images to share (yet…?) but this thread convinced me to download stable diffusion and start messing around - it’s incredible that these images can be generated in “real time” on my modest home computer!
|
# ? Jan 31, 2023 07:21 |
|
|
# ? Apr 25, 2024 16:04 |
|
Lord of the rivets posted:I found that using terms like tattered, broken, torn about clothes can be very good for making post-apocalyptic images in stable diffusion.

There is an entire postapocalypse model out and it's pretty awesome: https://civitai.com/models/1136/postapocalypse It might be right up your alley.

postapocalypse armored dog
Negative prompt: soft, cuddly
Steps: 48, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3720845338, Size: 768x768, Model hash: 4ef65125

Condimentalist posted:I don’t have any images to share (yet…?) but this thread convinced me to download stable diffusion and start messing around - it’s incredible that these images can be generated in “real time” on my modest home computer!
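Parameter dumps like the one above follow a loose "Key: value" convention, so they can be pulled apart programmatically, e.g. to re-run a generation with the same settings. Here's a minimal sketch; the `parse_generation_params` helper is hypothetical, and it assumes the common AUTOMATIC1111-style layout of prompt line, negative prompt line, then a comma-separated settings line:

```python
def parse_generation_params(raw: str) -> dict:
    """Parse an AUTOMATIC1111-style parameters block into a dict.

    Assumed layout (one common convention, not the only one):
        <prompt>
        Negative prompt: <negative prompt>
        Steps: 48, Sampler: DPM++ 2M Karras, CFG scale: 7.5, ...
    """
    lines = [ln.strip() for ln in raw.strip().splitlines() if ln.strip()]
    result = {"prompt": lines[0], "negative_prompt": "", "settings": {}}
    for ln in lines[1:]:
        if ln.lower().startswith("negative prompt:"):
            # Keep the whole remainder: negative prompts may contain commas.
            result["negative_prompt"] = ln.split(":", 1)[1].strip()
        else:
            # Settings line: comma-separated "Key: value" pairs.
            for pair in ln.split(","):
                if ":" in pair:
                    key, val = pair.split(":", 1)
                    result["settings"][key.strip()] = val.strip()
    return result
```

This only handles the simple case; prompts that themselves contain newlines or colons would need a smarter parser.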
|
# ? Jan 31, 2023 09:17 |
|
KakerMix posted:
wow I thought these are the originals the other images are based on. Confusing times.
|
# ? Jan 31, 2023 09:51 |
|
busalover posted:wow I thought these are the originals the other images are based on. Confusing times.
|
# ? Jan 31, 2023 14:37 |
|
|
# ? Jan 31, 2023 14:40 |
|
I'm really liking "home made" for costume design right now. Leaning into the topic of apocalypse and clowns:

Danny Devito Joker, apocalyptic, worn down, exhausted, torn clothing, used, home made, old
|
# ? Jan 31, 2023 17:48 |
|
Doing some forests in the style of old Rankin Bass animation. Prompt is variations of "1970s. Topcraft animation. Magical forest. Highly detailed. Focused. Intricate." with the "cheesedaddy" landscape model. I would be interested to see how these could be improved.
|
# ? Jan 31, 2023 21:10 |
|
I don't know what exactly it is but I'm really digging this style. I tried to accomplish the same thing in SD but it's not going so well, mostly due to me not being well versed in SD use.

Some keywords for MJ to evoke the same style: "Dreamglitch", "Cosmic Circuitry", "divine glyphs", "dreary", "surreal ASCII", "Astral". Just be extremely pretentious about your prompts! Write prompts about surreal ASCII liminal dreamglitch astral sunrises over the forbidden city under a rain of divine glyphs and cosmic circuitry or whatever.

Literal example prompt: Ominous comet over dreary suburb. Liminal DreamGlitch. Surreal but beautiful cosmic circuitry. Made by surreal ascii art style with divine glyphs instead of letters.

e: I'm actually pretty curious to see how well it works for others. I've been "training" the phrase "dreamglitch" by using it a lot and rating images I generate with it, and as far as I can tell from searching the community feed it's a phrase no one else is using (at least publicly), so it should in theory be skewed heavily towards this kind of art. But I haven't had a chance to really test whether that's having an effect, as opposed to the rest of my promptwriting overall. For example, the prompt "dreamglitch cityscape" will give you something vaguely like this, and throwing in the ascii bit does a lot of the rest of the heavy lifting: (dreamglitch cityscape) (dreamglitch ascii cityscape). But back when I started this project, I don't remember it being so distinct.

e2: But I am pretty sure this "training" is style/model specific and will require --style 4a to get the effects of my "training" (if there are any). Using --4b (or no style tag) will most likely include lots of human faces built into the landscape (possibly because I used the same prompt format on 4b to generate a ton of portraits), and --niji can do some neat things with this prompt format but it will rarely style match the others.

My earliest use of the same phrasing was giving me stuff like this, which is still extremely cool IMO but distinctly different. And from there it's morphed more into "full scene" kind of things, with more patterns in the dots/lines. Maybe just confirmation bias.

deep dish peat moss fucked around with this message at 22:04 on Jan 31, 2023 |
# ? Jan 31, 2023 21:24 |
|
deep dish peat moss posted:
Holy poo poo these are awesome.
|
# ? Jan 31, 2023 21:29 |
|
Fitzy Fitz posted:Doing some forests in the style of old Rankin Bass animation. Prompt is variations of "1970s. Topcraft animation. Magical forest. Highly detailed. Focused. Intricate." with the "cheesedaddy" landscape model. I would be interested to see how these could be improved.

https://civitai.com/models/1998/autumn-style
https://civitai.com/models/2623/winter-style
https://civitai.com/models/4843/floral-style

Maybe worth a shot.
|
# ? Jan 31, 2023 22:05 |
|
KwegiboHB posted:I'm not sure if they would actually be improved, these are pretty cool as they are right now, but there are some textual embeddings that might change things up a bit.

I'll need to learn how to use these. I don't think they work with this NMKD version I've been using.
|
# ? Feb 1, 2023 00:00 |
|
It looks like they go in baseinstall/ExampleConcepts and are loaded with the Load Concept button on the UI. Let me know if you get it to work, I'm not used to NMKD.
|
# ? Feb 1, 2023 01:22 |
|
So in my efforts to recreate a bunch of the characters and environments from my current D&D campaign, I'm running into a few consistent issues I'm not sure I can solve with different prompts or models or sampling methods or something.

1. If I want a character with a more novel skin color (red/blue tiefling, grey goliath, green half orc/gith, etc...), it's hard to find a prompt that doesn't make everything the same colour as the skin. It'll make a green-skinned orc or goblin, but even when I try to add prompts explicitly describing the environment, it'll make that also green, or heavily lit with green light, that sort of thing.

2. Gnomes seem pretty much impossible unless you want them to look like garden gnomes with the red hat and beard. Tried different variations to generate a pic for our group's gnome ranger and the results always look like a disgruntled David the Gnome. It's also tricky to get it to make gnomes or halflings actually short.

For a lot of other stuff it's great though. Dungeon scenes, monsters, human/elf/dwarf character portraits, etc... A lot of the newer models are way better than standard SD 1.4/1.5 at those things, with the caveats above.

Mr Luxury Yacht fucked around with this message at 01:30 on Feb 1, 2023 |
# ? Feb 1, 2023 01:25 |
|
Mr Luxury Yacht posted:So in my efforts to recreate a bunch of the characters and environments from my current D&D campaign, I'm running into a few consistent issues I'm not sure if I can solve with different prompts or models or sampling methods or something.

I mean I know it isn't what most people want to hear, but inpainting can solve most of this. Just attempt to generate the person you want, then inpaint their skin, then change the prompt to 'blue skin' or whatever at the front and leave the rest, and see what comes out. Likewise with the gnomes, I'd just do 'dwarf' or unfortunately insensitive language like 'midget' and see what comes up. You're slaved to whatever the images are tagged as, which might include other words for dwarves.
|
# ? Feb 1, 2023 01:37 |
|
https://www.reddit.com/r/StableDiffusion/comments/10gfwaq/dungeons_and_diffusion_final_version_beautiful/ has gnomes
|
# ? Feb 1, 2023 02:15 |
|
|
# ? Feb 1, 2023 03:33 |
At some point recently, Civitai started putting the text prompts for example images directly in the viewer, if they're available. Pretty handy.
|
|
# ? Feb 1, 2023 03:40 |
|
Mr Luxury Yacht posted:So in my efforts to recreate a bunch of the characters and environments from my current D&D campaign, I'm running into a few consistent issues I'm not sure if I can solve with different prompts or models or sampling methods or something.

Look into and start playing around with prompt weighting. You can make various aspects of the prompt more important than others. Also negative prompts. Unfortunately there's no magic formula, so you will need to mess around and see what works best for what you are after. Like maybe MJ-style "dungeons and dragons gnome bard" --no garden
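Prompt weighting syntax varies by UI, but the common "(text:weight)" emphasis form can be illustrated with a small parser. This is a hypothetical, simplified sketch, not any UI's actual implementation; real parsers also handle nested parentheses and the bare () / [] emphasis shorthands:

```python
import re

def parse_weighted_prompt(prompt: str) -> list:
    """Split a prompt into (text, weight) pairs.

    Handles only the explicit "(text:1.4)" form; any other text,
    including unweighted parentheses, falls through at weight 1.0.
    """
    pattern = re.compile(r"\(([^():]+):([0-9.]+)\)")
    pieces = []
    pos = 0
    for m in pattern.finditer(prompt):
        # Plain text before this weighted chunk keeps the default weight.
        plain = prompt[pos:m.start()].strip(" ,")
        if plain:
            pieces.append((plain, 1.0))
        pieces.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        pieces.append((tail, 1.0))
    return pieces
```

A UI would feed these weights into the text encoder, scaling each chunk's embedding relative to the rest of the prompt.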
|
# ? Feb 1, 2023 04:23 |
|
Google has a new text-to-music thing with some examples: https://google-research.github.io/seanet/musiclm/examples/
|
# ? Feb 1, 2023 10:16 |
|
https://www.twitch.tv/watchmeforever

A ChatGPT-made version of an eternal animated Seinfeld episode that is always running. It's kind of addicting to watch, but it also shows that writers' jobs are safe. Watching it for a long time makes me feel like my brain is going to melt.

Vlaphor fucked around with this message at 12:14 on Feb 1, 2023 |
# ? Feb 1, 2023 12:08 |
|
Vlaphor posted:https://www.twitch.tv/watchmeforever It says OpenAI's GPT-3 in the description. The quality of the output would suggest that they are using one of the faster and cheaper models, or maybe the good GPT-3 version is not that great either. I don't think ChatGPT is available as an API yet.
|
# ? Feb 1, 2023 12:56 |
|
There's a pretty good article summarizing all the generative AI progress recently. Probably not entirely new information for the regulars ITT: https://arstechnica.com/gadgets/2023/01/the-generative-ai-revolution-has-begun-how-did-we-get-here/?comments=1&comments-page=1
|
# ? Feb 1, 2023 13:11 |
AI Birthday Cake came out a little more dystopian than intended.
|
|
# ? Feb 1, 2023 16:57 |
|
Sedgr posted:AI Birthday Cake came out a little more dystopian than intended.

I got these:

Someone else made this, LOL: "she wasn't lying, that rear end can fart."
|
# ? Feb 1, 2023 17:32 |
|
Added "Hiro Isono" to my previous prompt to get these.
|
# ? Feb 1, 2023 19:14 |
|
Looks p chill. Like doom metal cover art.
|
# ? Feb 1, 2023 19:30 |
|
https://i.imgur.com/UcG2Du5.mp4
|
# ? Feb 1, 2023 19:42 |
|
You haven't watched The Godfather until you've watched the Dialogue Removal cut
|
# ? Feb 1, 2023 19:48 |
|
Somewhat excited for the potential of dubbing foreign media. Probably really expensive and won't be used for anything I watch though.
|
# ? Feb 1, 2023 20:02 |
|
Fitzy Fitz posted:Somewhat excited for the potential of dubbing foreign media. Probably really expensive and won't be used for anything I watch though.

Like everything AI, this will keep coming down in price until you can do it at home. Right now that clip probably took a ton of processing time. Are there any stats on this?
|
# ? Feb 1, 2023 20:05 |
|
Fitzy Fitz posted:Somewhat excited for the potential of dubbing foreign media. Probably really expensive and won't be used for anything I watch though. They're gonna use the state of the art AI to resync lips but still have every character voiced by the same monotone guy
|
# ? Feb 1, 2023 20:08 |
|
The Mona Lisa cheeseburger horrors got me thinking about what she would look like painted by different artists. Andy Warhol: Bob Ross: Banksy: Frida Kahlo: Salvador Dali: Frank Frazetta: And finally, Lisa Frank:
|
# ? Feb 1, 2023 20:13 |
|
Ok, another interesting article on reconstructing training images: https://arstechnica.com/information-technology/2023/02/researchers-extract-training-images-from-stable-diffusion-but-its-difficult/

From a quick look, it seems that only a small percentage of images that had duplicates in the training set were sufficiently memorized by the model. We've seen that over-trained images like the Mona Lisa can be reproduced pretty well, so under some conditions more obscure stuff works too. Not that shocking.
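Since duplicated training images are the ones that get memorized, de-duplication is the obvious mitigation. The crudest form is grouping byte-for-byte identical files by a cryptographic hash; real training-set dedup uses perceptual hashes or embedding similarity to also catch near-duplicates. The helper below is a hypothetical illustration of the exact-match case only:

```python
import hashlib

def find_exact_duplicates(images: dict) -> list:
    """Group image names whose bytes share the same SHA-256 digest.

    Only catches byte-identical copies; a resized or re-encoded
    duplicate would need perceptual hashing to detect.
    """
    by_hash = {}
    for name, data in images.items():
        digest = hashlib.sha256(data).hexdigest()
        by_hash.setdefault(digest, set()).add(name)
    # Only groups with more than one member are actual duplicates.
    return [group for group in by_hash.values() if len(group) > 1]
```

At LAION scale you'd shard this by hash prefix rather than hold one dict in memory, but the grouping idea is the same.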
|
# ? Feb 1, 2023 21:28 |
|
ymgve posted:They're gonna use the state of the art AI to resync lips but still have every character voiced by the same monotone guy

I love how you can tell when a movie is dubbed these days without even looking at it, because that one guy with all the inflection of Microsoft Sam voices them all.
|
# ? Feb 1, 2023 21:32 |
|
ymgve posted:They're gonna use the state of the art AI to resync lips but still have every character voiced by the same monotone guy

One of the cool things with voice cloning AI is you can take the original dialog and replace it with the same actor's voice speaking in a different language while maintaining the performance. Once they put all the pieces together it's gonna be really cool, and "dubbed" might actually be the better way to watch foreign films.
|
# ? Feb 1, 2023 21:39 |
|
Applying my recent style experiments with MJ to img2img generations based on my own drawings. These were all made by combining the prompt structure I've been using with this old drawing as an image prompt.

If anyone's interested, there are some old posts I made ITT about my experiments/findings with using your own drawings (even if it's bad/childish lineart mspaint kind of stuff) to guide MJ; they can be found here, here, and here. MJ v4 is extremely powerful when used this way!

deep dish peat moss fucked around with this message at 23:03 on Feb 1, 2023 |
# ? Feb 1, 2023 22:42 |
|
mobby_6kl posted:Ok another interesting article on reconstructing training images.

Google has the red-rear end over getting beaten to the punch on all this AI stuff, and instead of polishing and releasing some of the dozens of AI models in their internal vaults, I expect they're going to start making GBS threads on OpenAI and all these startups for being "unsafe" because they did only a 99.9999% good job de-duping their image data.

Elotana fucked around with this message at 00:19 on Feb 2, 2023 |
# ? Feb 2, 2023 00:14 |
|
TIP posted:one of the cool things with voice cloning AI is you can take the original dialog and replace it with the same actor's voice speaking in a different language while maintaining the performance

Speaking of the whole voice thing, I don't think it's been posted in this thread yet. Examples of right-now AI tech:

https://vocaroo.com/13IHEGRtZdRC
https://vocaroo.com/1cFrjVcWRRJt
https://vocaroo.com/18SiPVf5Y4cP
https://vocaroo.com/16Eg6fcMrc00
https://vocaroo.com/1hyWO1Av1yav
https://vocaroo.com/15GFeddYbT1K

It can be messed with right now: https://beta.elevenlabs.io/speech-synthesis
|
# ? Feb 2, 2023 00:44 |
|
KakerMix posted:Speaking of the whole voice thing, don't think it's been posted in this thread yet, examples of right-now AI tech: WOW
|
# ? Feb 2, 2023 01:22 |
|
|
|
KakerMix posted:Speaking of the whole voice thing, don't think it's been posted in this thread yet, examples of right-now AI tech:

This is very impressive. I stuck some random notes I have about game design/stories/whatever into it, and it nailed the pronunciation and inflection even with a complete lack of punctuation and completely made-up words. Also, what a great way to proofread text: having a voice read it back to you in an unmistakably literal way. Also lol at your examples.
|
# ? Feb 2, 2023 01:36 |