Koramei posted:I bet a lot of the trailer footage is faked or under ideal conditions, but the demos had me inputting my own photos / text prompts and creating meshes from that. It was working pretty well. A lot of the tools are up online to try out, too, this stuff is a released product now.

This is all true. Just like 2D artists have been bemoaning automation for concept art, now 3D artists are facing the same existential threat. Except... I think VFX has always welcomed new technology as a way to speed up workflows. So I would imagine every VFX / game studio has simply figured out ways to implement AI in their workflows. It's just a question of "to what level of automation?" Meshy, Luma, and about a dozen others are fast making 3D modeling obsolete for the junior / entry-level artist. So... yeah...
I do feel less negative about it, if I'm being entirely honest. Whereas for 2D it's attempting to create the finished image, a complete model is simply one part of the process -- how it's implemented, lit, animated etc. are all still creatively driven. It's like, ChatGPT creating a trim sheet or seamless textures with PBR maps is just cool in my opinion, and I feel like that might not even be controversial? It's a tedious process that can be largely automated in a couple of minutes now. I'd feel the same way if an AI-driven retopo tool came out. Modeling is more ambiguous, but in my mind it sits at the far end of that 'tool' spectrum, amplifying what we can create rather than stifling it. My mind will probably change as these tools develop, or if we see something come out that genuinely is well crafted, but I don't feel that way about the 2D art solutions, or video, which keep getting used as a shortcut to go all the way to the end. Still, times are changing and I think we all need to keep up.
I'm really curious to see this trim sheet process if anyone has some information on it. For 3D we are still very far away. My company is leveraging AI very heavily for games, and we are very vocal about it, and it's very clear that even the best AI 3D solutions are not up to the task. 3D has many more considerations than 2D. For background or representative meshes you can certainly find something usable, but that's a very small percentage of assets.
sigma 6 posted:This is all true. Just like 2D artists have been bemoaning automation for concept art, now 3D artists are facing the same existential threat. Except... I think VFX has always welcomed new technology as a way to speed up workflows. So I would imagine every VFX / game studio has simply figured out ways to implement AI in their workflows. It's just a question of "to what level of automation?" Meshy, Luma, and about a dozen others are fast making 3D modeling obsolete for the junior / entry-level artist. So... yeah...

I was thinking about this, and for most artistic disciplines the idea of "boosting productivity" doesn't really exist. We all know those crazy artists who can do a task in half the time of everyone else and make it look better, but they're the exception, and usually even when they try to teach their workflows to others it doesn't speed everyone up to that same level. The time it takes to animate something in 2D was the same for decades, reliant on how much "pencil mileage" it took to draw the characters. The only way to boost productivity was to send the work to cheaper/subsidized areas.

But VFX overall has been more receptive to productivity boosts than other arts, because there is a whole tech stack that has been improving, making things faster, or at least automating a bunch of annoyingly repetitive tasks to make room for a bunch of new ones. 3D rigs became more complex but also got modularized so they could be built faster, their performance became real-time at some studios, rendering got faster, small studios began to be able to do the type of VFX that only ILM could have done 20 years ago, etc. So I am not that surprised that VFX artists are a bit more receptive than illustrators to AI. Compositors especially, who see it as a way for them to control other parts of the process and then link it all together with the node-based workflow they're comfortable with.

As an animator I would say I'm in the least technically savvy VFX department, and I also like 2D animation and drawing a lot. So I have a real antipathy towards AI. I don't like how much foundational knowledge it removes from the craft, and how much it has stolen from artists without any compensation. My wife is in rigging/cfx, knows how to code in multiple languages, and is much more accepting of it. She even tried to devise a way for AI to take a blocking animation to polish by having it study a whole bunch of footage and learn how to polish animation in particular styles, though in the end she couldn't figure it out.
I totally agree about it gutting foundational knowledge. It's coming first for all the junior jobs people use to learn from, which I think has pretty dire implications.

Gearman posted:I'm really curious to see this trim sheet process if anyone has some information on it.

I think it's only a matter of (not that much) time. AI 3D was essentially worthless a year ago and is now solid enough for full use in a meaningful number of situations. Game assets might be some of the last on the list because of stringent performance requirements, but I think anyone in our industry is fooling themselves if they're not bracing for the change. Hero assets and stuff that needs to be extremely precise will probably be a while, but even there, the AI companies are making a serious push for the output to be consistent and modifiable. Same with trim sheets: right now it starts with just inputting any image and asking for a trim sheet of it, which is obviously kind of crap, but because you can now do meaningful back and forth, you can give it more guidance and feed in more images and come out with something. Still not up to what a human can do, but with a very big "yet".
Lots of talk about how "AI is taking over!" but no one wants to show off their actual PRACTICAL, USEFUL implementations or show it working in production. It all still looks like absolute poo poo and still takes a ton of effort to get anything actually screen-ready and final. Whatever. Nothing I can do about jack poo poo. If my latest gig disappears out from under me again, I won't be reupping back into 3D or commercial art in general. Ghouls can have it if they want to gum it all up and get their stupid spit all over it.

Edit: To be clear, AI-driven tools that 'learn', etc. seem interesting! Generative AI is the antichrist, and I'm absolutely going to take my ball and gtfo if that does indeed reach turn-key levels of quality (which is currently still up in the air given LLM limitations). And the theft on top of it all? Yeah, gently caress off.
Considering how bad clients are at giving useful feedback, and that half the job is interpreting it: yeah, it's going to take a while. It also ignores the creative collective you get from a team of real people. At Animal Logic, even the render wrangler was encouraged to put forward ideas.
Seamless textures with ChatGPT's new image generation is the first use case I've found that's good enough to just straight up plop in and use. I've worked with a bunch of artists that have used Midjourney in early stages of production but never taken it further than that. But I don't think it's about Claude taking over your modeling program and creating a beautiful finished scene for you, even if that's what AI people are promoting. It's about it taking over tool processes here and there in production, up to and probably eventually moving past modeling.
To be clear I also do hate AI and wish it had never been made, but I think it's a train you have to hop on rather than be left behind. I do plan to remain in this field.
The thing I'm still wondering about AI-generated 3D is: at what point does this become a better solution to a 3D problem than either hiring an outsource studio or buying assets? I can't think of one. A decently made bought asset is about the best you could possibly expect from a generative model, even in the future. At this point it's far better to buy or hire. Not even a comparison. So where is the added value? Cheap 3D already exists. There are millions of excellent 3D artists in the world, many either willing to work for pennies or who have already made the work you want for a similar amount. You can buy these things wholesale and put together any kind of product you want. If that is all you want, take your shot; you'll add to the pile of debris. It's worthless out of the gate. Digital noise, which is all this is, is completely worthless. Yeah, it'll get used in early stages of production to quickly fart out ideas, and that'll steal some work, but that part has already happened, and even then I think it's the "we don't give a poo poo" option, not a quality option. You could pay a concept artist only slightly more than your credits to send you something actually worth looking at. That's not even getting into the inherent disgust so many people have with it. Which is huge.
roomtone posted:The thing I'm still wondering about AI-generated 3D is: at what point does this become a better solution to a 3D problem than either hiring an outsource studio or buying assets?

Copium and ethics aside, its largest use case right now is to get an artist to second base with an idea and start working from there. You can easily concept in ChatGPT and then create a usable asset in TRELLIS or something similar. It will still need some tweaking or remeshing, but it GREATLY reduces the number of artists, the time, and the money needed to get going. It's much cheaper and more art-directable than stock. And it's much, MUCH cheaper than hiring actual people. Where are these millions of excellent 3D artists willing to work for pennies, by the way? If they were cheaper *or* faster than AI they'd be hired. But they're not. It's gross, and something we absolutely need to find a way to work to our advantage, but naively dismissing it as "digital noise" is not going to get you anywhere.
roomtone posted:There are millions of excellent 3D artists in the world, many either willing to work for pennies or who have already made the work you want for a similar amount.

Lol, no.
The last project I was on, we did use AI for face replacement. But it was one option among many. There was an AI trained on the actor's face that would try to take the stunt double's face and make it look like the real actor. But we also did digi-double faces using the CG rig. Then we rendered that, and both were sent to comp to see which they could make work better on a shot-by-shot basis. Same for aging and de-aging. They had one senior DMP artist working on making the character look old through "traditional" DMP techniques, and another artist running AI on the faces, trained on the actress's face and similar-looking older people. I don't know which version was approved in the end.
I've continued making models of the local buildings. The local hardware store has talked about selling little ornament-sized versions, and ultimately the set would go all the way around the town square. There are enough interesting and varied buildings to make it a cool project. I also did a model of a trolley that used to go between local towns back before cars took over.

This is three of the "ornaments". There are some adjustments I'm working through to get more detail in the colors.

I'm taking screenshots and adjusting them in Photoshop to use (along with actual drawings) in a coloring book.

These are some of the 3D prints from the designs.

I've tried importing them into Blender and adding textures, but so far it hasn't worked. It would be interesting to see these in a properly textured and colored animation with people walking in and around the buildings.
That's the coolest project. Since I started taking freelance gigs on top of my 9-5, I don't have time or energy for personal work. Self-inflicted problem.
In Blender, does anyone know offhand how to set up drivers in the metarig such that they transfer to the generated rig, so I don't have to redo them if I need to regenerate the rig?
My general response now to AI is that I would love a Star Trek holodeck, and I have to recognize that in order to get there, AI is going to be involved. There's no going back. Rather than sticking your head in the sand or just being instantly negative, try to enter the conversation to guide it toward better and more ethical uses.
Alterian posted:My general response now to AI is that I would love a Star Trek holodeck, and I have to recognize that in order to get there, AI is going to be involved. There's no going back. Rather than sticking your head in the sand or just being instantly negative, try to enter the conversation to guide it toward better and more ethical uses.

I don't see how genAI is on the path to a holodeck, so this argument is a complete non-starter for me. Also, "entering the conversation to guide it to better and more ethical uses" is, practically speaking, indistinguishable from being instantly negative about how the vast majority of current genAIs are being trained and used. In any case, I don't really want to re-litigate the "is genAI good, actually?" discussion for the umpteenth time. People have chosen their sides.
Jeez. I was just adding to the conversation roomtone started. If you think someone was commissioned to sit on the Enterprise and instantly 3D model all of Barclay's holodeck requests, then I don't know what to tell you.
Look, I get what you're saying, and I'd love a holodeck driven by a benevolently neutral AI that has internalized the entirety of the world's history, knowledge, and art in a post-capitalist, post-utopian society where scarcity is no more and all art can be free, too, but ChatGPT ain't that.
Raenir Salazar posted:In Blender does anyone know off hand how to setup drivers in the metarig such that it transfers to the generated rig so I don't have to redo them if I need to regen the rig? What's your specific setup that's breaking? If this is with standard Rigify, in my experience it's preserved the drivers and transferred them to the generated rig, but I'm not doing anything too fancy. The one time I had to do some odd workarounds was when I was trying to tie an action constraint to a custom property.
PublicOpinion posted:What's your specific setup that's breaking? If this is with standard Rigify, in my experience it's preserved the drivers and transferred them to the generated rig, but I'm not doing anything too fancy. The one time I had to do some odd workarounds was when I was trying to tie an action constraint to a custom property.

Yeah, with Rigify, and ah, I just assumed they wouldn't work because googling brings up nothing! I'll actually go and try it! But to mention just in case: I wanna add some custom bones with drivers to correct some deformations, like when sitting down (albeit maybe I could use shape keys for this, but I wanna try rig-first methods before resorting to shape keys).
I've only used it for using a custom property to enable/disable some bone constraints and to adjust the open/closed mouth shapes for a character with tusks, so I don't know how robust it will be for that, especially if you need to read information from a bone that doesn't exist until the rig is generated. I believe the intended workflow in that case is to add a python script for the "Finalize Script" slot of the Rigify properties, and create your driver/constraint/etc in python there so it will get created whenever you generate the rig. Unfortunately I haven't really dug into the python side of Blender so I can't give a good example of how that works.
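Not the poster's exact setup, but here's a minimal sketch of what that finalize script could look like, assuming you want a driver created on the generated rig -- the bone names, driven channel, and multiplier are all made-up examples:

```python
import bpy

# Runs after Rigify generates the rig, so bones created during generation
# already exist. Bone names and values below are hypothetical.
rig = bpy.context.object  # the freshly generated rig

# Add a driver on the local Z location of an assumed corrective helper bone.
fcurve = rig.pose.bones["corrective_helper"].driver_add("location", 2)
drv = fcurve.driver
drv.type = 'SCRIPTED'

# Read the local X rotation of an assumed FK thigh control.
var = drv.variables.new()
var.name = "thigh_rot"
var.type = 'TRANSFORMS'
tgt = var.targets[0]
tgt.id = rig
tgt.bone_target = "thigh_fk.L"
tgt.transform_type = 'ROT_X'
tgt.transform_space = 'LOCAL_SPACE'

# Push the helper bone along Z as the thigh bends.
drv.expression = "thigh_rot * 0.1"
```

Since the script runs on every generation, the driver gets rebuilt each time you regenerate the rig, which should sidestep the redo-it-by-hand problem.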
PublicOpinion posted:I've only used it for using a custom property to enable/disable some bone constraints and to adjust the open/closed mouth shapes for a character with tusks, so I don't know how robust it will be for that, especially if you need to read information from a bone that doesn't exist until the rig is generated. I believe the intended workflow in that case is to add a python script for the "Finalize Script" slot of the Rigify properties, and create your driver/constraint/etc in python there so it will get created whenever you generate the rig. Unfortunately I haven't really dug into the python side of Blender so I can't give a good example of how that works.

I haven't had a chance to try it yet, but this sounds like a good idea as a plan B!
With the basic.raw_copy and basic.super_copy rig types you can relink constraints. You need to name your constraints with @bone_name syntax to target the constraint at the named bone in the generated rig. https://docs.blender.org/manual/en/latest/addons/rigging/rigify/rig_types/basic.html#basic-raw-copy
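For instance, a quick sketch of that naming convention -- the object, bone, and constraint names here are all hypothetical:

```python
import bpy

# On the metarig, suffix a constraint's name with @<bone_name> and Rigify
# will relink that constraint to the named bone in the generated rig.
# "metarig", "gear_helper", and "DEF-gear" are made-up example names.
meta = bpy.data.objects["metarig"]
con = meta.pose.bones["gear_helper"].constraints["Copy Rotation"]
con.name = "Copy Rotation@DEF-gear"  # targets DEF-gear after generation
```

You can also just do the rename by hand in the constraint panel; the script only shows where the name lives.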
Are drivers also constraints?
Raenir Salazar posted:Are drivers also constraints? No, apparently I'm illiterate.
Almost done with the hair!

~13k tris, which is *checks notes* MAYBE twice as many polys as I would like, but I have no idea what I am doing! For my use case, where I'd like to use this character rigged in a game engine like Unity or Unreal, I was aiming for a max of around 10k tris, ideally closer to 8k. But I'd have to take a closer look at the Honkai Star Rail models, which are my main reference.

(With subdivision smoothing, some aspects look better and some look worse.)

Ignore some parts at the back; after I apply the mirror modifier my aim is to adjust and add in some asymmetry. What the subdivision modifier seems to help with most are the splits/creases between major strands, but it kinda makes the smaller offshoots, like in the bangs, I dunno, too smooth?

Now that I look more closely I'm not sure if this flow is right. Overlapping strands are hard. I think I need to move that hanging strand a little further back?

So far box modelling the hair has taken 3x as long as the face or the base body mesh. Given this experience, if I started over I think I'd be faster and more consistent using the path curve + tubes approach I've seen. I wasn't sure how to get the flat-edged look for the front bangs when I first tried them, but I happened on a video where they used curves for the same sort of style and just left the edge of the path 'empty', then closed it once they converted the path to a mesh.
I'd recommend using curves for stylized hair like that, you'll get much cleaner results
Fun. Are you slapping some cel shading on there to see what it will look like in "anime style"? I always find it funny to see the topology of CG anime characters, because in order to get the ridge of the nose to look like a drawing, the CG model has a super, super pointy beak.
Ccs posted:Fun. Are you slapping some cel shading on there to see what it will look like in "anime style"? I always find it funny to see the topology of CG anime characters, because in order to get the ridge of the nose to look like a drawing, the CG model has a super, super pointy beak.

I'm actually trying to do something a little different: find a style that's somewhere in between cel shaded and PBR, because professional anime-style games tend to do a lot of custom work to get the look they want. So I'm trying to figure out something that's feasible as an indie, scrutinizing PBR anime-style tech demos for inspiration. Basically: what can I, as one person, accomplish in a consistent and standardized way that achieves the sort of anime-inspired style I'll be happy with?

Having a lighting/shading model that naturally blends together is a big, important thing for me right now, because I generally want to be able to just throw stuff together and not have the characters clash too much with the environment or each other. And occasionally I've been experimenting with procedural texture-based approaches.

But I haven't gone too far down those rabbit holes, because any shader I use needs to be something I can implement either in Unity's or Unreal's material editor or in raw HLSL.

Koramei posted:I'd recommend using curves for stylized hair like that, you'll get much cleaner results

Yeah, I was trying that initially, but I got scared off by it being a different workflow and defaulted to box modelling.
A little while ago, inspired by the all-timer "how to animate a box in Houdini" video, I wanted to see how far I could push automata in tyFlow and how many simulations I could strap onto each other.

https://imgur.com/SV9rX92

There are no keyframes! I used the Jansen linkage design from the Strandbeest; after testing one leg set up attached to a rigid board, I made a fully self-propelled version. I was not expecting it to work as well as it did - the simulation is extremely stable and fast. All movement is handled by breaking out the drive cylinders and adding a 'spin' operator. Then I strapped a springy PhysX simulation to it in the shape of a dog, then I strapped a cloth simulation to that. Leaves and plants are pretty basic simulations - leaves are spawned and turned to cloth - and even though you can't see it, they do collide with the dog. Plants and trees have a low-res PhysX voxel grid bound together and affected by wind, with the high-poly objects skinned to that.
cubicle gangster posted:A little while ago, inspired by the all-timer "how to animate a box in Houdini" video, I wanted to see how far I could push automata in tyFlow and how many simulations I could strap onto each other.

It looks like you're offsetting the starting positions of the spinning discs?
Listerine posted:It looks like you're offsetting the starting positions of the spinning discs? There's one disc for front and one for back - both legs have to be connected to the same one. I used the test rig to snapshot the geometry and get the position for the linkage exactly 180 degrees from the starting position as a guide.
Over the past few generations, GPUs have gotten so large that motherboards typically don't have more than 2 usable PCIe slots anymore. What are the options if you have 3+ cards and want to use them all for rendering?
Listerine posted:Over the past few generations, GPUs have gotten so large that motherboards typically don't have more than 2 usable PCIe slots anymore. What are the options if you have 3+ cards and want to use them all for rendering?

I don't think it's GPU sizes so much as that multiple PCIe slots are kinda irrelevant for the average consumer. Most PCs now will have a GPU and nothing else in 'em... maybe some people might have some extra SSDs plugged into one. eATX / 'Extended ATX' mobos can roll with up to 7 or 8 PCIe slots (though those are for 1-slot cards, not really multi-GPU), and there's a handful with 3-4 PCIe x16 slots intended for multi-GPU stuff; you do pay a hefty premium, since they're solidly in the HEDT market. You'll still be limited by thickness, though, so make sure the GPUs aren't excessively thick - not sure if the 3-slot GPUs would fit in a setup like that. https://www.gigabyte.com/Motherboard/TRX50-AI-TOP There's an example; it seems like a number of them might start being 'AI' branded, since the growing market for multi-GPU is people running huge models. Alternatively there are eGPU cases, but I've always imagined that market is more focused around laptops, and they tend to rely on either proprietary connectors or PCIe-to-USB-C setups.
Listerine posted:Over the past few generations, GPUs have gotten so large that motherboards typically don't have more than 2 usable PCIe slots anymore. What are the options if you have 3+ cards and want to use them all for rendering?

You could also set up separate computers on the network and do distributed rendering; I think Blender had support for it.
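The old network-render addon is gone from recent Blender versions as far as I know, but a low-tech version of the same idea is splitting the frame range across machines from the command line. A rough sketch (the file name and frame range are made up; run a different chunk on each box):

```python
import subprocess

# Render frames 1-120 of a hypothetical scene in background mode.
# -b = run headless, -s/-e = start/end frame (must come before -a),
# -a = render the animation range.
subprocess.run([
    "blender", "-b", "scene.blend",
    "-s", "1", "-e", "120",
    "-a",
], check=True)
```

Point every machine at the same output directory on shared storage and the numbered frames interleave into one sequence.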
Listerine posted:Over the past few generations, GPUs have gotten so large that motherboards typically don't have more than 2 usable PCIe slots anymore. What are the options if you have 3+ cards and want to use them all for rendering?

You could also look for a mobo with multiple Thunderbolt ports and get some eGPU enclosures.
It was suggested I ask my question here, maybe y'all can help an absolute novice:

Bad Munki posted:I just want to move the origin of an object to the lower-X/lower-Y/lower-Z corner of the object's bounding box. That's it.
Bad Munki posted:It was suggested I ask my question here, maybe y'all can help an absolute novice:

If it's just one object, you can go into edit mode, pick a vertex, snap the cursor to active, then tab back to object mode and set origin to cursor. If you definitely want to use the bounding box, you should be able to do that in Python: get the object's bounds, set the 3D cursor to that corner, and set the origin to the cursor. Or, the silly way: in edit mode, select all and move the geometry to offset it from the current origin. If that doesn't make sense I can check later what the actual Python commands would be.
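Something along these lines should do it -- a minimal sketch, assuming a mesh object is active:

```python
import bpy
from mathutils import Vector

# Move the active object's origin to the min-X/min-Y/min-Z corner of its
# bounding box. bound_box holds the 8 corners in the object's local space.
obj = bpy.context.active_object
corner = Vector((
    min(c[0] for c in obj.bound_box),
    min(c[1] for c in obj.bound_box),
    min(c[2] for c in obj.bound_box),
))

# Put the 3D cursor on that corner in world space, then snap the origin to it.
bpy.context.scene.cursor.location = obj.matrix_world @ corner
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')
```

Run it from the Text Editor or Python console with the object selected; undo works normally if that corner isn't the one you wanted.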