SubNat
Nov 27, 2008

I wish I was more Moomin-minded...


500 posted:

How did you arrive at this number? I just saved out a mesh with ~12 million tris and it came out to about 450MB. I feel like I'm missing something.

Yeah I just did a huge fuckup and misread it. Ugh. I have had a way too long day it seems. Rolling a couple new tests did make that immediately evident.

I did it again with a skinned mesh I have: (2 shapes, couple of morphs.)
62k tris: 3MB.
1M tris: 41MB. (smoothed up, and baked that to model.)

Checking out one of Quixel's huge rock/cliff meshes though, and the one I checked was 12.1 million tris in a 257MB fbx as the untouched, downloaded asset.
(Trying to export that from maya to compare the size just took too loving long, maybe it got stuck. But I assume their downloadable file is pretty compressed.)
While I waited for Maya to do that, I even tried just importing it to Unreal, and re-exporting it from there as a 2018 binary fbx, to see what the difference would be. It ended up at 784MB. (~3x the original size.)

The size difference really threw me, but I guess Maya and other DCC tools might just favor some kind of heavy compression on binary FBX assets, at the expense of longer export times (depending on settings, I imagine).
So that might account for why your file ended up so much larger than mine? Or maybe yours included extra data mine did not, like vertex colours, etc.

As ceebee mentions, there's a fuckload of things that cause sizes to blow up in different ways. And it'll be very interesting to see how it works in action once this is available next year.
Not needing to have every .uasset store a bunch of LODs will certainly help on import time and storage space in-project.

e: typod,

SubNat fucked around with this message at 19:23 on May 13, 2020


500
Apr 7, 2019



SubNat posted:

Yeah I just did a huge fuckup and misread it. Ugh. I have had a way too long day it seems. Rolling a couple new tests did make that immediately evident.

That's ok! I wasn't sure if I missed something or if you knew something I didn't.

And yeah, additional attributes can blow out the size massively. At work we don't really use FBX. We mostly do stuff for the web, so we just export all the position/uv/normal data to json. What doing this has taught me though is that things like uvs, normals, vertex colors, etc, are basically just big lists of numbers. The more numbers you need to include, the bigger the file. So unless they've found a way to compress a large amount of numbers in a way that no one has been able to do before, I'm skeptical about how small their files are going to be.
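The "big lists of numbers" point is easy to see on disk. A quick illustrative sketch (made-up data, not output from any actual exporter): the same list of coordinates stored as JSON text vs. packed binary float32.

```python
# Compare the on-disk size of the same numbers as JSON text vs. packed binary.
import json
import random
import struct

random.seed(0)
# 1,000 fake vertex positions (x, y, z) -- the kind of "big list of numbers"
# that uvs, normals, and vertex colors also boil down to.
positions = [round(random.uniform(-100, 100), 6) for _ in range(3000)]

as_json = json.dumps(positions).encode("utf-8")          # ~10+ characters per number
as_f32 = struct.pack(f"{len(positions)}f", *positions)   # exactly 4 bytes per number

print(len(as_json), len(as_f32))  # the JSON text is the larger of the two
```

Same information either way; the text encoding just spends far more bytes per number, which is why binary formats (and lower precision) shrink files.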

But also I don't really know what I'm talking about and these guys seem pretty smart, so... I look forward to the GDC talk where they explain things a bit further.

SubNat
Nov 27, 2008

I wish I was more Moomin-minded...


500 posted:

... The more numbers you need to include, the bigger the file. So unless they've found a way to compress a large amount of numbers in a way that no one has been able to do before, I'm skeptical about how small their files are going to be.

...

For computers, a number isn't just a number.
Yeah, you can crunch numbers down by dropping their accuracy, so that each number doesn't take as much space.
If you want the accuracy of 1.0000000563, 0.0000346, or 54864.00006734 for a vertex, you'll likely be using a double-precision float (64 bits / 8 bytes) per value.
Which means 8 bytes per value get reserved regardless of whether the value is 1.500000000 or 63563456.5546; you're only really moving the decimal point.

I apologize in advance if this gets too word-salady:

One way you could do exactly that is by dividing meshes into chunks, which would then allow you to lower the accuracy needed for positions. Which translates to needing fewer bits per value.
(Much like the tile-based approach of virtual texturing, allowing the GPU to pick out only the relevant chunks needed for a render pass. Which allows for partial culling of meshes, reducing overdraw, and allowing only the needed parts to be streamed in when required, instead of the whole mesh.)
(Each chunk in this case would then have a low-resolution local space, and then be placed in world space by the chunk manager. Much like we're used to in regards to local and world space relative to a model.)

I believe FBX uses doubles for storing data like the XYZ position of a vertex. (Opening an ASCII fbx file certainly shows doubles designated for a lot of things.)
Just skimming an ASCII fbx file showed that they had 15-16 digits of precision for decimals. Which means that just the X,Y,Z position of a single vertex takes up 24 bytes, as an example.
With a 1M-vertex mesh that would mean spending 24 million bytes purely on vertex positions, disregarding any form of optimization. (If you could drop the accuracy down to single-precision floats, you'd immediately halve that.)
That's an insanely high degree of accuracy, but it's required for FBX to handle huge scenes without losing precision.
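That back-of-the-envelope math can be sketched directly (the vertex count is just the 1M figure from above, not any particular mesh):

```python
# Back-of-the-envelope: bytes spent purely on XYZ positions at different precisions.
import numpy as np

n_verts = 1_000_000  # the 1M figure used above

for dtype in (np.float64, np.float32, np.float16):
    bytes_per_vert = 3 * np.dtype(dtype).itemsize  # X, Y, Z per vertex
    total_mb = n_verts * bytes_per_vert / 1e6
    print(f"{np.dtype(dtype).name}: {bytes_per_vert} bytes/vertex -> {total_mb:.0f} MB")
# float64: 24 bytes/vertex -> 24 MB
# float32: 12 bytes/vertex -> 12 MB
# float16: 6 bytes/vertex -> 6 MB
```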



However, with chunks where you don't need high values, you can slice away a lot of that accuracy. Say you set up a chunk size of 100x100x100 cm and used FP16, aka half-precision floats (the type of float GPUs excel at working with).
Storing the values as half-precision floats, you can get sub-millimeter accuracy within a chunk like that, and at a quarter the size of a double.

To put it another way: if you need accuracy over an entire large scene, then of course each vertex needs high precision. But if you slice that up, you could massively cut down on the accuracy needed for exact positions, while at the same time opening up more parallel access to whichever part of the mesh is required for rendering.
Instead of slapping in a 50 million poly statue and rendering the whole thing, even when half of it is inside a wall.
If they have an approach like that, I wouldn't be surprised if they have options to allow for adjusting the size of the chunks, and the accuracy within them. There's no need for that giant cliff to have vertex positions that are accurate down to 0.001 cm, after all.
You could crunch the accuracy down quite a bit in dense scenes, especially ones with a lot of nature elements.


TL;DR: If you split a mesh into smaller chunks, you can get away with storing vertex values at far lower accuracy, since each 'voxel' in the grid would have a local reference space.
FBX files use 64 bits per value, but if you drop down to FP16 you're only using 16 bits per value, and still have a range of 6.10 × 10^-5 to 6.55 × 10^4, with ~3.3 digits of precision. Which is basically microscopic in a cm/unit space, or in the mm range in a m/unit space.
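For anyone curious, here's a rough toy sketch of the chunk-local idea (the chunk size and names are made up for illustration; this isn't claiming to be what Nanite actually does):

```python
# Toy sketch: store vertices as half-precision offsets inside a chunk's local space.
import numpy as np

rng = np.random.default_rng(42)
chunk_size = 100.0  # a 100x100x100 cm chunk
chunk_origin = np.array([5000.0, -2000.0, 300.0])  # chunk placed far out in world space

# Fake vertices that happen to live inside this chunk.
verts_world = chunk_origin + rng.uniform(0, chunk_size, size=(10_000, 3))

# Store only the chunk-local offsets at half precision (16 bits per value)...
verts_local_fp16 = (verts_world - chunk_origin).astype(np.float16)
# ...and reconstruct world positions from the chunk origin on load.
reconstructed = chunk_origin + verts_local_fp16.astype(np.float64)

max_err_cm = np.abs(reconstructed - verts_world).max()
print(f"worst-case error: {max_err_cm:.4f} cm")  # sub-millimeter for a 100 cm chunk
```

The world position stays exact because it lives in the chunk origin; only the small local offsets get crunched, which is why the error stays tiny despite the chunk sitting thousands of units from the world origin.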

I sincerely hope I got my intent across, and that it doesn't just read like garbage.

SubNat fucked around with this message at 22:46 on May 13, 2020

500
Apr 7, 2019



SubNat posted:

I sincerely hope I got my intent across, and that it doesn't just read like garbage.

I'm following, I think! Sounds like you know a lot more about this stuff than me. I already usually drop float precision to, like, 3 decimal places when exporting. Anything lower than that and I usually start seeing noticeable errors. That's an interesting idea of splitting things into chunks, and positioning the chunks in space, and giving everything inside the chunks a lower precision. I often wonder about how people go about compressing this stuff other than 'lower the precision across the board', so it's cool to learn about approaches like this. Thanks for sharing!

Elukka
Feb 17, 2011



It turns out Eevee gets very high exposure shots wrong in a very fun way.



TooMuchAbstraction
Oct 14, 2012

Hubris

Fun Shoe

Those look fantastic honestly

Elukka
Feb 17, 2011



Also did some renders because I wanted to see how bright the five megaton blasts from my spaceship's drive might be. It's based on a whole lot of guesses, though a real living plasma physicist was involved in making them. The glare settings are consistent between the first and second image, though in the first the actual flash is off-frame because it would completely blind the camera. All you're seeing is reflected light from the ship. The second one is from 10 000 kilometers away. In the raw render, the flash is a single pixel with a value of 500 000.



Pathos
Sep 8, 2000



I don't have any special information or anything but my guess as to how UE5 is handling this is that you can probably import models of unlimited complexity, it'll automatically build LODs for you, and then you can say "discard LODs greater than this size for this asset or this group of assets". That way you can just import high-res models out of ZBrush, ignoring the hassle of baking and all that nonsense, and then decide down the line when you're getting ready to ship that you don't need 6 quadrillion triangles for the mountain in the distance. That's how I'd engineer it, at least.

One thing that this does bring up, though, is how some apps like Substance will evolve. If Substance intends to be next-gen capable, Substance Painter will have to be able to do texturing on much heavier assets than it does currently. Personally I'd feel a HUGE loss having to go from the brilliance of Substance Painter back to ZBrush polypaint, but that's me.

Either way it's pretty fascinating. Their GI solution is equally astonishing. Really excited to see how this actually plays out.

Neon Noodle
Nov 11, 2016

there's nothing wrong here in montana

TooMuchAbstraction posted:

Those look fantastic honestly

ceebee
Feb 12, 2004


Substance recently released an update that allows you to export displaced and tessellated meshes, so what you can probably do is build your meshes as you normally would with a low poly/uvs/bakes, do your texturing in Substance, and then export it pre-displaced and pre-tessellated to UE4. What I assume will happen with textures is that they get converted into some type of texture streaming thing, or packed into a megatexture type thing behind the scenes to "get rid of drawcalls" as they say.

Also dang SubNat that writeup is crazy in-depth hahaha. Props.

SubNat
Nov 27, 2008

I wish I was more Moomin-minded...


500 posted:

I'm following, I think! ...

ceebee posted:

...
Also dang SubNat that writeup is crazy in-depth hahaha. Props.

Thanks, I had to grapple with the concept a bit a few years back, before Unreal had any kind of support (or plugins) for displaying pointclouds, and had a very insistent senior architect who really wanted them.
I ended up baking the pointclouds down to textures (encoding X/Y/Z to RGB values) to make them easily parseable and workable in shaders, which got used to displace individual polygons to their positions. (Thank god someone else had done the groundwork for that; all I had to do was really learn how to use MATLAB.)
Even at 16-bit PNG that just meant having a resolution of 0-65,535 per axis.
(Barring encoding trickery like using the Alpha channel as an enumeration for various multipliers of X/Y/Z, of course.)

I was seriously considering the grid approach with just PNG8 (0-255 per axis/channel), until I got the 16-bit PNGs to work.
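For illustration, the core of that XYZ-to-channel baking is just normalize-and-quantize (the scene bounds here are invented, and a real pipeline would write these values into actual PNG channels):

```python
# Sketch of baking positions into 16-bit channels: normalize, quantize, decode.
import numpy as np

rng = np.random.default_rng(1)
bounds_min, bounds_max = -500.0, 500.0  # assumed scene bounds used for normalization
positions = rng.uniform(bounds_min, bounds_max, size=(1000, 3))

# Map each axis into the 0-65535 range a 16-bit PNG channel can hold.
norm = (positions - bounds_min) / (bounds_max - bounds_min)
encoded = np.round(norm * 65535).astype(np.uint16)  # what R/G/B would store

# Shader-side decode: back to world units.
decoded = encoded.astype(np.float64) / 65535 * (bounds_max - bounds_min) + bounds_min

max_err = np.abs(decoded - positions).max()
print(f"max quantization error: {max_err:.4f} units")
```

The quantization step size is just (bounds range / 65535), so the worst-case error is half a step; with 8-bit channels the same math gives you 256 steps and correspondingly coarse results.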

As for the actual Nanite workflow, I still have a very hard time seeing it actually getting used that way in most places.
It feels very much like it'll be used as a speed-up concession, especially in cases like simple archvis where architects just want to be able to drop in full scenes into datasmith without any preprocessing. (God knows that's what they want.)
I would not at all be surprised if normal maps + 'sensible' polygon density meshes will still be the norm, and have the size and performance advantage on the whole.

I am insanely curious about what kind of auto-quality-adjusting it does under the hood for them to be comfortable making the claims they are, though.
I suspect that auto-mesh-resolution and a couple other optimizations will be joining their dynamic resolution and upsample tools.

KinkyJohn
Sep 18, 2002



I want to get into Unity and I want to know if Modo plays nicely with it? I've been using Modo for a few years now after abandoning Maya but I'm wondering if getting back into Maya will make working in Unity easier

500
Apr 7, 2019



Working on my own global illumination and soft shadows in the browser. The only difference between mine and the Unreal Engine one is that mine is completely fake (i.e. it lerps through a lightmap spritesheet in the shader)

https://i.imgur.com/phx8r4P.gifv

500
Apr 7, 2019



Linked the spritesheet animation to model rotation so shadows are always facing the correct direction. I think I'm gonna animate it next to see what that looks like

https://imgur.com/QnDqoim.gifv

autojive
Jul 5, 2007
This Space for Rent

KinkyJohn posted:

I want to get into Unity and I want to know if Modo plays nicely with it? I've been using Modo for a few years now after abandoning Maya but I'm wondering if getting back into Maya will make working in Unity easier

I use Modo as well (chiefly at home now since work switched software packages) mainly for product and environmental modeling and rendering, but they did recently release support for a Unity bridge between the apps. I haven't tried it for myself but if it works as well as the Unreal Bridge, you'll have no problems getting models into Unity. I can't comment on animation, though, since I've never had to do that aspect while working with real-time engines.

https://learn.foundry.com/modo/cont...odo_bridge.html

Plus, as I'm sure you're aware, you can set your materials to mimic Unity shaders to preview them in the Advanced Viewport and it can be pretty darned close to what you would expect in-engine. Here's a video someone did comparing three DCC apps' real-time previews to Unreal. I'd imagine that it wouldn't be too different for Unity.

https://www.youtube.com/watch?v=m6S7hF4YITo

Edit:

500 posted:

Working on my own global illumination and soft shadows in the browser. The only difference between mine and the unreal engine one is that mine is completely fake (i.e. it lerps through a lightmap spritesheet in the shader)

https://i.imgur.com/phx8r4P.gifv
That's some pretty slick stuff happening, there. Nice job!

autojive fucked around with this message at 03:40 on May 16, 2020

Alan Smithee
Jan 3, 2005


any solo freelancers seeing an uptick in work? My Czech buddy who left The Mill (granted, that was way back in the beforetimes) said he's busier than ever. Another moved to Texas to work at Already Been Chewed and left after probably not even six months to go back to freelancing.

ImplicitAssembler
Jan 24, 2013



erhh, at least in movies there's virtually no work, as nothing's been shot. Many of us have also taken involuntary pay cuts.

ceebee
Feb 12, 2004


I'm seeing a ton of recruiters hit me up for remote work lately, although they want me to move to work in-house once the 'rona stuff is over. I'm pretty dead-set on remote working indefinitely; I love working from home, especially now that I don't live alone. The less time I spend commuting, the more time I can devote to my hobbies, making my house awesome, getting a dog or two soon, taking care of kids eventually if I have them. I don't want to waste 3 hours a day commuting to and from the main cities. At 3 hours of commuting a day (an hour and a half each way), that's a WHOLE loving MONTH just wasted commuting every single year.

ceebee fucked around with this message at 01:08 on May 18, 2020

Ccs
Feb 25, 2011


I got out of film at the right time because all my old colleagues have been laid off, even the leads and supes. I have a contract on a tv show until July but after that I'm not sure if the crew will stay the same size or shrink, and without seniority I'd be the first out.

Some studios like Framestore are hanging on somehow and even recruiting for work they have for the next 4 months. But all the Technicolor and Deluxe vfx studios seem screwed.

Elukka
Feb 17, 2011



Is Blender's animation system kind of a mess, or am I just using it wrong? I have things made out of multiple objects which need multiple animation cycles, and I end up with this huge mess of strips in the NLA editor where I no longer have any idea what is what. I really don't have a solid sense of what the intended workflow is.

Also, tiny little rocket effects that are almost too fast to see:

Ccs
Feb 25, 2011


I saw something on twitter where they're planning on revamping the Blender animation system but have to wait for developers to free up from other tasks. It's on their roadmap for 2022 as opposed to 2021.

Listerine
Jan 5, 2005

Exquisite Corpse

Trying to render a flipbook crashes Houdini every time: with MPlay open, writing images to disk only, or combined. Every time it crashes immediately at the end of the first frame.

Google's not being my friend tonight- anything obvious I should try?

ImplicitAssembler
Jan 24, 2013



What if you start the flipbook from a different frame? Or cache out what you're trying to flipbook locally first.

500
Apr 7, 2019



Does it happen just with that .hip file, or with any file you try? Maybe create a project with just a simple cube geometry and see if you get the same result.

Slothful Bong
Dec 2, 2018

Filling the Void with Chaos


Biscuit Hider

I'm assuming there are no AOVs or anything that aren't supported / are erroring out?

I usually use the MPlay render to check for bad textures/AOV issues, as it'll make the red warning on the ROP (unlike IPR/render to disk, at least with Arnold), but if it's causing a full crash after one frame, that wouldn't help...

Have you enabled logging in the ROP? Could be a decent way to at least narrow down exactly what step it's crashing on.



E: this def isn't your problem, but this morning I accidentally pasted my sim file name into the ROP output, lol. Luckily only got 3 frames in before I went to check, found a single .sim file in the output folder. Wish it enforced extensions a little better, but in the end that one was on me.

Slothful Bong fucked around with this message at 20:18 on May 27, 2020


Listerine
Jan 5, 2005

Exquisite Corpse

Some of these things I probably should have thought of trying but it was pretty late, I'll do a few tests tonight.
