|
looks amazing
|
# ? Apr 19, 2020 08:23 |
|
heh. that's really neat.
|
# ? Apr 19, 2020 08:36 |
|
That's super cool. Can you give us a breakdown or quick tut on how it was done? Agisoft or reality capture?
|
# ? Apr 19, 2020 18:16 |
|
sigma 6 posted:That's super cool. Can you give us a breakdown or quick tut on how it was done? Agisoft or reality capture? Thanks! I used Agisoft Metashape and Houdini. But you could really use any photogrammetry software that lets you export a point cloud. I started by finding a nice alley in this video. I used the one around the 9 minute mark. Then I set the video to full screen and used the chrome inspector to remove the UI so I could step through frame by frame and take screenshots. Next, I generated a dense point cloud from those screenshots in Metashape, exported the points to obj, and then imported the obj into Houdini. The disintegration effect is created by adding a noise attribute to the particles in a point VOP that will flag some for disintegration, and then moving the position of those particles slightly on the y axis every frame with a solver. Last steps are adding a camera and some lights and rendering with Redshift.
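If it helps, the solver logic boils down to something like this — a rough Python/NumPy stand-in for the point VOP + solver, with made-up threshold and speed values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the imported point cloud: N points with xyz positions.
points = rng.uniform(-1.0, 1.0, size=(1000, 3))

# "Noise attribute": per-point values standing in for the noise the
# point VOP would generate; anything above the threshold gets flagged.
noise = rng.random(len(points))
flagged = noise > 0.7

start = points.copy()

def solver_step(pts, flagged, rise=0.01):
    """One frame of the solver: drift flagged points up the y axis."""
    out = pts.copy()
    out[flagged, 1] += rise
    return out

for _frame in range(24):  # simulate 24 frames
    points = solver_step(points, flagged)
```

Unflagged points never move, so the cloud slowly peels apart instead of dissolving all at once.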
|
# ? Apr 19, 2020 19:30 |
500 posted:Thanks! I used Agisoft Metashape and Houdini. But you could really use any photogrammetry software that lets you export a point cloud. Awesome breakdown, thanks for the explanation, it came out looking sweet!
|
|
# ? Apr 19, 2020 20:05 |
|
yeah, thank you! i'm going to try a tyflow version.
|
# ? Apr 19, 2020 21:12 |
|
Woah that is sick as gently caress. We've done a few quick and dirty gopro-on-a-stick facade captures for, like, cheap reference, but this is just...sublimely novel.
|
# ? Apr 20, 2020 03:03 |
|
Handiklap posted:Woah that is sick as gently caress. We've done a few quick and dirty gopro-on-a-stick facade captures for, like, cheap reference, but this is just...sublimely novel. Thank you! I first started thinking about it a couple of months ago when I saw this video, by Benjamin Bardou: https://vimeo.com/392767896 I wanted to make something similar, but didn't really feel comfortable photographing people in public. I actually spent way too long trying to find sequential photograph series on flickr and stuff. Took me a while to realise I could just use a youtube video of someone walking around.
|
# ? Apr 22, 2020 05:39 |
|
Did you manually save out each frame or automate it?
|
# ? Apr 22, 2020 08:39 |
|
Manual. Windows key + print screen with the video set to full screen, using the > key to step through frames. Only took a few minutes.
|
# ? Apr 22, 2020 09:48 |
|
I think I'm finally going to switch over to Cinema4D from Maya. I've used Maya since 1.0 in the 90s but never really used it enough to feel it in my bones. As my work has taken me into more of a motion graphics and compositing career, I feel like C4D might be the better option for a solo user's supplemental program. Maya is super powerful, but for a one-man show that uses it to create content to supplement my After Effects work, it can be really burdensome and obtuse sometimes. I've been watching a lot of the stuff from C4D Live over the past few days and goddamn, so much stuff in the C4D interface just makes sense and I like it a lot. What's the consensus on Redshift? The subscription models have a with-or-without option, but from what I'm reading it might be a while before Redshift is the optimal renderer for C4D. But, ya know, the internet and all. Anyway, just puttin' out some feelers.
|
# ? Apr 23, 2020 04:07 |
|
BonoMan posted:I think I'm finally going to switch over to Cinema4D from Maya. I've used Maya since 1.0 in the 90s but never really used it enough to feel it in my bones. As my work has taken me into more of a motion graphics and compositing career, I feel like C4D might be the better option for a solo user's supplemental program. Redshift is really good, very fast. We run Max, but lots of the freelancers we work with that do motion graphics-y stuff use C4D/Redshift.
|
# ? Apr 23, 2020 06:01 |
|
When the guys at Entagma ran a poll to see which renderer their viewers wanted more tutorials to cover, Redshift was the top choice by a landslide. It's actively supported and updated fairly frequently, and also cheaper than Octane by a little.
|
# ? Apr 23, 2020 09:23 |
|
Listerine posted:Did you manually save out each frame or automate it? 500 posted:Manual. Windows key + print screen with the video set to full screen, using the > key to step through frames. Only took a few minutes. Because I'm a sucker for scripting, I found a quick one-liner command to automate that if you have youtube-dl and ffmpeg installed on your system: code:
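Something along these lines — swap in your own URL and output pattern:

```shell
# Grab the video with youtube-dl, then have ffmpeg dump every frame
# out as a numbered PNG. VIDEO_URL is a placeholder.
youtube-dl -f best -o walk.mp4 "VIDEO_URL" && \
mkdir -p frames && \
ffmpeg -i walk.mp4 frames/%06d.png
```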
|
# ? Apr 24, 2020 08:41 |
|
BonoMan posted:I think I'm finally going to switch over to Cinema4D from Maya. I've used Maya since 1.0 in the 90s but never really used it enough to feel it in my bones. As my work has taken me into more of a motion graphics and compositing career, I feel like C4D might be the better option for a solo user's supplemental program. You won't regret it, except sometimes. You'll be creating things that just "look" good in a fraction of the time, but in 2020, why not give Blender a shot? Like... c4d is the language I speak, I even use it for modelling poo poo for 3d printing where a dedicated CAD program would benefit me greatly, because I just KNOW cinema, and can push it around way easier. But if I were learning something new from scratch today... and looking at all the crazy poo poo blender has as its default package, not... whatever studio/master/ultimate/production bundles maxon is cutting their features up into today. Obviously if you're pirating it this is all out the window, just get the ultimate version with hair and dynamics and real time GPU rendering, scripting, etc. But if you're going to be paying for it, I almost feel like its prime time has passed. EDIT: wait wait wait, I'm on the Maxon site and they don't seem to cut c4d up into crippled versions anymore. That's cool, I guess it would have been a pain in the rear end to combine that system AND a subscription model bring back old gbs fucked around with this message at 11:52 on Apr 24, 2020 |
# ? Apr 24, 2020 11:47 |
Yeah, in my opinion Maya is lacking a user-friendliness about its motion graphics tools. There's MASH, which is a pretty powerful system that's included for procedural/instanced type effects, but I've seen C4D people on youtube make motion graphics stuff in 1/4th the time it would take me to do it in Maya, and I consider myself fairly proficient. I still prefer Maya for all my modelling, UVing (although RizomUV is just about beating this), and game engine stuff. But yeah, Blender is looking hella juicy for pretty much anything these days. I'll probably end up learning it in the next 1-2 years, maybe less.
|
|
# ? Apr 24, 2020 15:15 |
|
500 posted:Manual. Windows key + print screen with the video set to full screen, using the > key to step through frames. Only took a few minutes. Fragrag posted:Because I'm a sucker for scripting, I found a quick one-liner command to automate that if you have youtube-dl and ffmpeg installed on your system: If you download the video, I've used PotPlayer to rip frames, since you can have it just save out every n frames to a directory with no hassle.
|
# ? Apr 24, 2020 15:16 |
|
Fragrag posted:Because I'm a sucker for scripting, I found a quick one-liner command to automate that if you have youtube-dl and ffmpeg installed on your system: Sick! I haven't heard of youtube-dl before. Could the script be edited like this to only record a certain time range? (I haven't done much command line scripting). For example if we want to start 60 seconds in, and go for the next 30 seconds. code:
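Going by the ffmpeg docs, maybe something like this, where -ss is the start offset in seconds and -t the duration:

```shell
# Start 60 seconds in (-ss 60) and grab the next 30 seconds (-t 30).
ffmpeg -ss 60 -t 30 -i walk.mp4 frames/%06d.png
```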
SubNat posted:If you download the video, I've used PotPlayer to rip frames, since you can have it just save out every n frames to a directory with no hassle.
|
# ? Apr 24, 2020 17:33 |
Proof of concept for a potential project.
|
|
# ? Apr 27, 2020 04:00 |
|
Pretty sure the Lego star wars movie is under NDA! Good bumps and scuffs. The Lego movies had a layer of greasy fingerprints too - and I've just realised the one situation where the vray triplanar map 'randomise every frame' checkbox would actually be useful. cubicle gangster fucked around with this message at 06:48 on Apr 27, 2020 |
# ? Apr 27, 2020 06:41 |
|
cubicle gangster posted:and I've just realised the one situation where the vray triplaner map 'randomise every frame' checkbox would actually be useful. What use would that be?
|
# ? Apr 27, 2020 16:06 |
|
Odddzy posted:What use would that be? Making the smudges move every frame like it was actually being hand animated?
|
# ? Apr 27, 2020 16:14 |
|
BonoMan posted:Making the smudges move every frame like it was actually being hand animated? aaaah yeah, that would make sense!
|
# ? Apr 27, 2020 16:39 |
|
Playing around with some volumes in Houdini. Still figuring out how to dial in the samples and stuff in Redshift. It ended up coming out a bit grainy, but oh well. https://i.imgur.com/AsUKSqG.gifv
|
# ? May 3, 2020 01:15 |
|
drat, that looks good man.
|
# ? May 3, 2020 05:44 |
|
OP's literally over a decade old, so sorry if this isn't the right thread; it looks like the most appropriate one. Coronaboredom has me pursuing my interest in the artistic possibilities of VR. I don't want to make games or anything like that, just create non-interactive environments for people to inhabit / experience, but there are lots and lots of programs out there for creating VR content and I'm having a hard time picking one to go with. The in-headset tools I've used, while cool, don't offer the level of precision I'm after. What do y'all recommend for this purpose? I'm a complete neophyte at digital art but I've got all the time in the world, so a steep learning curve isn't a turnoff if that's what it takes. Also, to be clear, I have no interest in this professionally; something about VR just tickles an artistic bone in me that has long been dormant.
|
# ? May 6, 2020 01:36 |
|
Some Goon posted:OP's literally over a decade old, so sorry if this isn't the right thread, it looks like the most appropriate one. I've no experience with this (no VR headset) but I follow a guy on twitter who makes cool stuff with: https://quill.fb.com/ And apparently it's free. Edit: also I know Modo has some tools for VR, but that's a heavier app; there's an indie version
|
# ? May 6, 2020 01:46 |
|
If stuff like Tiltbrush isn't doing it for you, take a look at Blender. Iirc the next major release is getting built-in VR support, which would let you build and mess with things both in VR and at your desk proper. E - here we go https://www.roadtovr.com/blender-vr-support-openxr/amp/ So late May, and it's just a viewer for now. Double e - There's also an existing plugin to let you do that, so you could get cracking now if you'd want. https://www.marui-plugin.com/blender-xr/ Warbird fucked around with this message at 04:26 on May 6, 2020 |
# ? May 6, 2020 04:23 |
|
Why not Unreal? You wouldn't have to look at programming, and it's quite easy to just get started with VR in scenes. There's the ability to work in the editor in VR if that tickles your fancy, and you can scale your ambitions as high or low as you'd like, though custom models would be best made in a 3D program and dropped in. There's a veritable fuckload of free assets for use in it, between Quixel plus the free Epic and Epic-subsidised assets.
|
# ? May 6, 2020 05:38 |
|
A 3d package like Blender feeding assets into a VR-ready middleware engine like Unreal or Unity is what you want. Source: I do exactly what you described as a job.
|
# ? May 6, 2020 05:39 |
|
Thanks y'all.
|
# ? May 6, 2020 16:14 |
|
Similarly to Some Goon I find myself with a bunch of free time. I've got some old stained glass windows and I was thinking it might be fun to take a stab at recreating them in 3D, so I can try to get a feel for what they might look like in different window depths and stuff without shattering them IRL. I feel like I should be able to work out the modeling, I've got some experience with it from my days studying architecture forever ago, but I never really got into materials or rendering as much. I assume this is possible (tell me if it isn't), but ideally I'd like to cut out and port over each piece of glass from a reference photo like this one, and then have some kind of map on top of that to act as a light filter, if that makes sense. Basically to simulate the effect of light passing through the different paints and bits of grime. I don't need anyone to do a full writeup or anything, but if anyone can offer a quick summary of what terms I should Google for this, that'd be super helpful.
|
# ? May 10, 2020 05:46 |
|
The March Hare posted:Similarly to Some Goon I find myself with a bunch of free time. Depending on the refractive shader you're using, you should be able to pretty much plug the colour map into some sort of absorption or extinction value to get the glass refracting the desired colour. Then use an overall grunge map in some kind of material blend to bring in a black dirt material in the desired areas.
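The absorption trick is basically Beer-Lambert falloff: light passing through the glass dies off exponentially with thickness, per colour channel. A toy version of the math (the coefficients here are made up) looks like:

```python
import numpy as np

def transmitted(color_map_rgb, thickness, absorption_scale=4.0):
    """Beer-Lambert: fraction of light surviving a pass through glass.

    color_map_rgb: 0-1 RGB sampled from the reference photo; darker
    paint absorbs more. absorption_scale and thickness are made-up knobs.
    """
    rgb = np.asarray(color_map_rgb, dtype=float)
    # Turn the photo colour into a per-channel absorption coefficient.
    absorb = absorption_scale * (1.0 - rgb)
    return np.exp(-absorb * thickness)

# Clear-ish blue glass vs. a heavily painted (dark) region.
blue = transmitted([0.3, 0.5, 0.9], thickness=0.1)
dark = transmitted([0.1, 0.1, 0.1], thickness=0.1)
```

Most refractive shaders expose exactly these two inputs under some name (absorption/extinction colour and distance), so the photo-derived map slots straight in.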
|
# ? May 10, 2020 06:53 |
|
Finally getting around to Zbrush; once you get past the weird broken poo poo left over from when it was designed for something else, it's pretty cool. My first attempt in progress. Is it worth doing my textures in Zbrush? Or should I go to 3dCoat/Substance.
|
# ? May 11, 2020 16:30 |
|
cYn posted:Finally getting around to Zbrush, once you get passed the weird broken poo poo left over from when it was designed for something else, it's pretty cool. My first attempt in progress. I'm not a fan of texturing in Zbrush, it relies heavily on Polypaint, which is frustrating, and the obtuse interface that Zbrush is known for really gets clunky when you're trying to figure out how to work between polypaint color, the texture palette, and exporting your work. I much prefer Substance.
|
# ? May 12, 2020 11:19 |
|
First look at Unreal Engine 5 https://www.unrealengine.com/en-US/blog/a-first-look-at-unreal-engine-5 I'm confused by this part: quote:Nanite virtualized micropolygon geometry frees artists to create as much geometric detail as the eye can see. Nanite virtualized geometry means that film-quality source art comprising hundreds of millions or billions of polygons can be imported directly into Unreal Engine—anything from ZBrush sculpts to photogrammetry scans to CAD data—and it just works. Nanite geometry is streamed and scaled in real time so there are no more polygon count budgets, polygon memory budgets, or draw count budgets; there is no need to bake details to normal maps or manually author LODs; and there is no loss in quality. Wouldn't a system like this result in enormous downloads?
|
# ? May 13, 2020 16:56 |
|
500 posted:First look at Unreal Engine 5 This sounds insane and it's what got my attention. I'm just learning about normals and all that crap; maybe I'll skip it and just throw tons of polys at the problem. THERE'S NO POLYGON BUDGETS ANYMORE, or so they say
|
# ? May 13, 2020 17:06 |
|
500 posted:First look at Unreal Engine 5 I wouldn't at all be surprised if it comes with a very efficient way of storing the models as well. I imagine they would be a lot bigger than current assets on their own, but at the same time, if they were to make normal maps for assets of that quality level you'd probably be drowning in 4k and 8k normal maps (streamed in through virtual texturing, of course), so that in itself would have some space savings. (Same goes for stripping away LODs; that too would clean out some space usage, and memory overhead.) One 8K normal map is like 60MB after all. Meanwhile a 12.5 million polygon mesh saved in an FBX file is like ~25MB. Nevermind that the final mesh would likely also have multiple LOD steps, and possibly be partially duplicated due to having lower LODs be proxy-meshed together in their HLOD system as well. The same goes for Lumen (aka what I assume is hybrid/cached RT GI), which could also save a lot of 'download size' in that a lot of spaces that would normally be lightmapped to get good GI no longer need to be. With the bonus that it's suddenly supporting dynamic scenes. It seems a lot like they're utilizing the fact that they can finally work with platforms where you're expected and required to have mesh shader capability, and can thus leverage it into insanely efficient delivery of high poly assets. Which means that a lot of workarounds for drawcall limitations, as well as size and poly budgets, can now be waived. Honestly I just hope this means they can finally kill off prerendered cutscenes for good, which would probably be a huge net benefit to game size.
|
# ? May 13, 2020 17:59 |
|
SubNat posted:Meanwhile a 12.5 million polygon mesh saved in a FBX file is like ~25MB. How did you arrive at this number? I just saved out a mesh with ~12 million tris and it came out to about 450MB. I feel like I'm missing something.
|
# ? May 13, 2020 18:11 |
|
FBX size is also dependent on stuff like vertex color, UVs, smoothing groups, skinned vertices, stored maps, etc. I definitely have huge FBX files that come out of ZBrush that contain a lot of vertex color info and stuff (12 mil is definitely a 400-700MB file for me). They definitely must be developing some efficient method for storing and reading/writing that data. Curious to see what it is. I really hate hate hate that FBX is handled differently in almost all applications though. I hope Epic tries to standardize it with an open source format. FBX is hot garbage, but at least it's better than OBJ.
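For a rough sense of scale, here's a back-of-envelope on the raw, uncompressed data in a 12.5M-tri mesh; every figure is a ballpark assumption, and container overhead is ignored:

```python
# Back-of-envelope raw size of a 12.5M-triangle mesh.
tris = 12_500_000
verts = tris // 2                  # ~half as many verts as tris (closed mesh)

positions = verts * 3 * 4          # xyz as 32-bit floats
indices = tris * 3 * 4             # three 32-bit indices per triangle
base_mb = (positions + indices) / 1e6

# Per-vertex extras: normal (12 B) + one UV set (8 B) + RGBA color (4 B).
extras_mb = verts * (12 + 8 + 4) / 1e6

print(f"base: ~{base_mb:.0f} MB, with extras: ~{base_mb + extras_mb:.0f} MB")
```

That lands in the hundreds-of-MB range before any compression, which lines up with the 400-700MB ZBrush exports and makes the ~25MB figure look like a typo or a decimated mesh.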
|
|
# ? May 13, 2020 19:39 |