Chev
Jul 19, 2010
Switchblade Switcharoo

Nude posted:

https://www.youtube.com/watch?v=j8N-c8H0pgw

Yeah it's 2.5d, but its roots are in traditional animation. I think this is the closest you'll get to a true 2.5d game. If you watch the talk he describes how it was made. What confuses me is how people really really really want to emulate traditional animation, without doing traditional animation. Again I suppose the argument is cost, but watching that video I'm not convinced that's the full argument, considering Pixar & Disney is trying to do the same thing with shaders.
The concrete, observed advantages this brings in the context of games (as opposed to more abstract claims, like their mention of Blazblue already being "perfect 2d" when its 2d engine is laughable compared to Skullgirls') are:

-Texture memory savings: 3d skeletal animations, even with all the parts-swapping they're doing, are a whole lot more compact than the corresponding HD sprite sheets, even tiled and compressed to hell and back, and the animations themselves live in conventional memory. Arcsys devs were constantly struggling with video memory limitations for their PS3 games, removing frames from old moves to include new ones. Skullgirls gets around that by constantly decompressing and streaming the needed sprites from system to video memory, which required very clever coding and constant optimization.

-Resolution independence: Now that you're playing with vector data, all you need to do to up the resolution is up the resolution. People using Gedosato to make 4k shots of Xrd have produced nothing but breathtaking results.

-Changeable outfits: All the animation data is separate from the graphics, and even characters that have swappable parts don't have that many compared to the equivalent sprite sheet. So as long as you make new parts that conform to the same skeleton, you can go wild. It's currently seen in the game in two instances: one character gets a different outfit in the newest version, and another has a superpowered form in which all his moves gain new properties. In the 2d games that was just his normal sprite blinking red; in Xrd he gets an entirely different design that shares the same animations. But of course the real benefit of all that is gonna be future paid DLC outfits.

Chev
Jul 19, 2010
Switchblade Switcharoo

Haledjian posted:

I have a feeling that if I had any kind of maths/programming background I could probably figure out a way to do it in the Unreal material editor (since it lets you do pretty much whatever with UVs). But unfortunately I think it's beyond me, haha (or at least would sidetrack me way too much from getting the main systems stood up). Holding out hope that I can get someone to help me out with it in the future at some point.
I dunno how it works in UE or Unity specifically with their visual editors, but when it comes down to the shaders themselves you don't need any math; it's something that's built in, with interpolation modifiers in HLSL and interpolation qualifiers in GLSL. Instead of declaring your input like "float2 texCoords;" for a texture coordinate, you add the noperspective modifier, i.e. "noperspective float2 texCoords;", and you've got your affine texturing right there. The modifier is called noperspective in both HLSL and GLSL.
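For illustration, here's a minimal HLSL-style sketch of what that looks like on a pixel shader input (the struct and resource names are made up for the example):

struct VSOutput
{
    float4 position : SV_Position;
    noperspective float2 texCoords : TEXCOORD0; // interpolated without perspective correction
};

Texture2D    gTexture : register(t0);
SamplerState gSampler : register(s0);

float4 PSMain(VSOutput input) : SV_Target
{
    // Sampling is unchanged; the PS1-style warping comes from the interpolation modifier above.
    return gTexture.Sample(gSampler, input.texCoords);
}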

Chev
Jul 19, 2010
Switchblade Switcharoo
That was the origin story of FNAF, yeah.

Chev
Jul 19, 2010
Switchblade Switcharoo

Hammer Bro. posted:

I might be misconstruing but it seems like if people commonly recommend some extension for some basic functionality then I'd expect that the implementation of that functionality leaves something to be desired.
If the functionality is "get standardized layouts across all gamepads" then it's not basic at all, because such a standard does not exist (and even within partially supported should-be standards like XInput you get a few surprises). There's no way around it apart from testing each new gamepad and adding its layout to a database (which is what InControl does).

Chev
Jul 19, 2010
Switchblade Switcharoo
It's worth noting that the Dispose() you've been calling in XNA isn't a destructor at all. It's just, well, Dispose(). There are no destructors in C#, for that matter; the closest thing is finalizers, and they can't be called explicitly, only the garbage collector can call them. But yeah, remove all outstanding references and your object will be collected, eventually.

Chev fucked around with this message at 11:49 on Jun 27, 2018

Chev
Jul 19, 2010
Switchblade Switcharoo
The first two Siren games did it. It was suitably creepy.

Chev
Jul 19, 2010
Switchblade Switcharoo

Kassoon posted:

Also a lot of engines poo poo themselves when they get too far away from 0,0 so keeping the player/camera centered there and moving the universe around them is fairly sensible.
This is specific to floating-point formats, so depending on your internals it may not be a factor. Also, many games will keep the coordinate system world-based but shift the world around once in a while, instead of keeping it player-centered at all times, to simplify the math.

Chev
Jul 19, 2010
Switchblade Switcharoo
The levels thing is especially easy with procgen. That's how there were 500 levels in Populous, then they reversed the seed and got 500 more for the expansion pack, and the extra galaxies in NMS are similarly about feeding different seeds to the generator.

Chev
Jul 19, 2010
Switchblade Switcharoo
I feel having an environment that makes it easy to rename classes really helps, because even if a name is stupid when you come up with it, down the line you can rename the class once you have a better idea of what it does (like, my shader manager is really a shader cache, because all it does is load and cache shaders).

Chev
Jul 19, 2010
Switchblade Switcharoo

Bert of the Forest posted:

resolved by changing the "Transparency Sort Mode" in Graphics settings to be Orthographic instead of Perspective, even though I'm still using a perspective camera.
It's a bit of a diabolical trap, in that whether it's called orthographic or perspective transparency sort has no connection to whether you're using an orthographic or perspective projection. To be fair it's kinda their fault, because they chose crappy names. What they should say is that the one they call perspective sorts based on distance from the camera's eye position, while the orthographic one sorts based on distance to the camera's near plane.

And in the one-point perspective used by every game camera ever, billboards are usually not aligned towards the camera's position but towards the camera plane, so in turn the distance to the camera plane should be used.
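Purely as an illustration of the difference (HLSL-style math with hypothetical names, not Unity's actual internals), the two sort keys boil down to:

// illustrative only: cameraPos, cameraForward and objectPos are made-up inputs
float3 toObject = objectPos - cameraPos;

float perspectiveSortKey  = length(toObject);             // distance to the eye point
float orthographicSortKey = dot(toObject, cameraForward); // distance along the view axis, i.e. to the camera/near plane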

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

I know this isn't super interesting and it's still baby steps, but I feel like I'm starting to understand at least a bit more about shaders :shobon: I learned about using rendertextures with shaders to make effects, and I can already imagine some of the effects I want to implement! Other than that I added some more stuff to my shader like emission (self-lighting?) with support for a texture to make parts of a model less/unaffected by lighting. None of this is actually a game but.. it's related, right?
At the very least it's super interesting to me! I'd forgotten all about the 3dfx dither lines but this is definitely familiar.

Chev
Jul 19, 2010
Switchblade Switcharoo
The lower-level way it's done, in XAudio2 with SharpDX for example, is that you can chain sound buffers in a given source voice (the object that plays sounds), so you just submit both buffers in order and either tell it to loop the second one or re-submit it in a loop. That's also how compressed stuff like Ogg works: you decompress only enough to fill a couple of small fixed-size buffers, and whenever one's been consumed you fill it with the next bit of uncompressed data and append it again. That second bit is encapsulated in Unity's "compressed in memory" and "streaming" load types for audio clips.

Anyway, yeah, I'm rather surprised Unity doesn't have a "playAppended" method or something in addition to PlayScheduled; it would make all that much easier.

Chev
Jul 19, 2010
Switchblade Switcharoo
The way perspective transforms are implemented in real-time rendering completely messes up z-buffer precision (instead of encoding z you're encoding D = a * (1/z)+b, which isn't linear at all) in exchange for being able to transform your objects with a simple matrix multiplication, so you lose precision way faster than you'd think, thus z-fighting.

Ways around it for road markings:

-There's a bias parameter in rendering APIs that should allow you to offset polygons by a somewhat constant post-encoding amount, or at least that's the intention, but in practice no two GPUs implement it the same way, so it's likely to disappoint you. Worth a try, though!

-Put the line in the road texture! Or directly in the road shader if you want that sharp look. Either way, the idea is that if you draw the line directly as part of the road polygon, there's no z-fighting issue. Do it in a single pass or with a second overlay pass that uses the same polygons and the equality depth test. That's probably the best option.

-Deferred decals! If your renderer is deferred it's a reasonably handy way to do it.

-Per-pixel depth shading! Shading languages allow you to play with the depth writes themselves, at the price of losing early-z shader optimizations, so you can do stuff like logarithmic depth buffers, increasing z-buffer precision to planetary scale. Not really worth the hassle if you don't need planetary scale, though. More simply, you can use it to implement a linear z bias yourself (a rough sketch of the simpler constant-offset flavor follows after the tl;dr), but that'll still be subject to precision loss, although I think a big enough math nerd could minimize it.

tl;dr use textures for lines.
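For the curious, here's a minimal, hedged HLSL-style sketch of that last option in its simplest form: writing SV_Depth yourself with a small constant nudge. All names and the bias value are illustrative, it costs you early-z like any manual depth write, and the sign flips if you use a reversed-Z buffer.

struct PSInput
{
    float4 position : SV_Position; // .z holds the post-projection depth in [0, 1]
    float2 uv       : TEXCOORD0;
};

Texture2D    gMarkingTex : register(t0);
SamplerState gSampler    : register(s0);

float4 MarkingPS(PSInput input, out float depth : SV_Depth) : SV_Target
{
    // Pull the marking slightly toward the camera. Still subject to the
    // nonlinear precision loss described above, but under your control.
    depth = input.position.z - 0.0001;
    return gMarkingTex.Sample(gSampler, input.uv);
}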

Chev fucked around with this message at 00:50 on Jun 19, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo
Just a correction there regarding delegates, so you know: an anonymous function can be used where a delegate's expected but a delegate isn't an anonymous function.

In C# the name for an anonymous function is, well, an anonymous function (of which there are two kinds, anonymous methods and lambda expressions). A delegate, instead, is basically a function pointer: it can point to an anonymous function just as well as to a named function, but it isn't either of those, just like a reference isn't an object.

Chev
Jul 19, 2010
Switchblade Switcharoo
The delegate's the type, yes. But it's defined like this: delegate int somefunction(int x, int y). That's the signature.

The thing you've written, (int, int) => ..., is a lambda expression, basically a block of inline code treated as an object with a list of incoming parameters, a possible value for a variable of a delegate type.
Note that it's not a signature; instead the provided block will have to match the delegate's signature defined earlier. Notably, the return type is entirely implicit, based on the code inside the block (so it doesn't actually work the (int, int) => (int) way; the part after => isn't a parameter list at all, it's a block of code, with as many lines as you want).

So if I take two lines from your doc link:

delegate void TestDelegate(string s);
[...]
TestDelegate testDelC = (x) => { Console.WriteLine(x); };


That first line defines the TestDelegate type, the delegate itself. Then TestDelegate testDelC declares a variable that has that delegate type, and (x) => { Console.WriteLine(x); } is the value assigned (via the same assignment operator as any variable) to that variable, in this specific case a lambda expression.

Note how the lambda's parameter list actually isn't explicitly typed at all when you write it. You only know x is a string because it'll be matched against the signature of the delegate that defines the variable you assign it to, just like a method's parameter list is matched against the method's signature.

Chev fucked around with this message at 01:48 on Jul 18, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo

TooMuchAbstraction posted:

The thing about quaternions is that while, no, you don't have to understand how they work to use them, you do have to understand how to use them. You have a limited set of tools available, and it can be tricky to translate your desired behavior into things those tools can accomplish.

For example, how would you translate "this gun's firing angles are from (-90 Y, 0 X) through (+90Y, +70 X)" into something that uses quaternions instead of Euler angles?
Exactly, you should know where to use them. Quaternions are a tool for arbitrary orientations with three degrees of freedom, while turrets have two well-defined degrees of freedom, so using Euler angles to compute their local orientation is fine; with only two degrees you won't ever get gimbal lock. You just need to make sure the rotation order of your Euler-to-orientation operator is the right one. If you don't trust it, you can separate it into two quaternion rotations for the turret's two axes and just use that.

Chev
Jul 19, 2010
Switchblade Switcharoo
Ultimately it's the same problem as binding to a game controller that may not always be connected: you need to provide an alternative. Having some reserved keys that always work in menus as a substitute can be one, but there's also the option of having a specific key or startup option dedicated to emergency rebinding.

Chev
Jul 19, 2010
Switchblade Switcharoo
Being able to change/delete the keybind file is good, but it's worth keeping in mind that many users, especially the kind that'd bind themselves into a dead end, will have no idea where said file can be found.

Chev
Jul 19, 2010
Switchblade Switcharoo
I mean, even outside of that specific aspect bindings should be, well, bound only to a specific controller and recalled when that controller's plugged in.

Chev
Jul 19, 2010
Switchblade Switcharoo
In the original L-system-for-cities paper, the title of which I forget, the idea was that when you add a new road segment you scan a given circular area around the segment's endpoint (and if your new segment crosses an existing one, the endpoint is the intersection point); if you find one or more existing vertices within that radius, you snap your endpoint to the closest or otherwise most appropriate vertex.

Chev
Jul 19, 2010
Switchblade Switcharoo

TooMuchAbstraction posted:

To be honest I personally find 3D art to be easier to do a borderline-acceptable quality level at than 2D. Having to hand-draw each individual frame scares me because I don't think there's any way I could keep a character on-model for all of their animations.

If that's any comfort, some companies invest a lot of R&D in getting their 3D characters to go off-model on purpose, to animate better.

Chev
Jul 19, 2010
Switchblade Switcharoo
Functionally, the animation data in Blender is bound through bone names, so technically as long as the control rigs are the same you should even be able to link them to the same animation data if you want a preview of all the variants in Blender.

Chev fucked around with this message at 12:31 on Sep 8, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo
You also need to be careful with how you're scaling it. If you're just applying a scale transform to the object rather than the mesh, that's likely to trip up a lot of engines or exporters and mess with things like dynamic parenting. To be foolproof at the end of the day your object should be the "right" scale when you clear all its transforms.

Regarding problems with stretchy bones: several exporters or engines used to have a problem with non-uniform scaling (i.e. different scaling on different axes), simply because they encoded scaling as a single factor, but I think that's solved nowadays. That being said, when I saw the mention of stretchy bones I misread it as Blender's bendy bones, and that's another problem entirely; those may not be supported in Unity.

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

shot in the dark here; do any y'all wizards have any idea how one would go about implementing custom texture filtering in Unity's Shader Graph?
In Unity, like in D3D11, you're limited to the existing samplers. If you want custom filtering, you're gonna have to implement it yourself on top of the existing samplers (usually point filtering). The basics are as follows:

Given texture dimensions (width, height) and coordinates (u, v), floor(u*width) and floor(v*height) give you the texel position (x, y) (divide by width/height to get the uv of that texel), while fract(u*width) and fract(v*height) give you the interpolation factors from that texel to its neighbours (x+1, y), (x, y+1) and, combined, (x+1, y+1). Just sample those four texels with point filtering and implement your own interpolation.
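As a minimal HLSL-style sketch of that idea (illustrative names, plain bilinear as the "own interpolation", and with the usual half-texel offset so the math is centered on texel centers):

Texture2D    gTexture      : register(t0);
SamplerState gPointSampler : register(s0); // point/nearest filtering

float4 ManualBilinear(float2 uv, float2 texSize)
{
    float2 st   = uv * texSize - 0.5; // texel-space position, centered on texel centers
    float2 base = floor(st);          // texel position (x, y)
    float2 f    = frac(st);           // interpolation factors toward the neighbours

    float2 invSize = 1.0 / texSize;
    float2 uv00 = (base + 0.5) * invSize; // back to uv space, at the texel center

    float4 t00 = gTexture.Sample(gPointSampler, uv00);
    float4 t10 = gTexture.Sample(gPointSampler, uv00 + float2(invSize.x, 0.0));
    float4 t01 = gTexture.Sample(gPointSampler, uv00 + float2(0.0, invSize.y));
    float4 t11 = gTexture.Sample(gPointSampler, uv00 + invSize);

    // your own interpolation goes here; this one is just bilinear
    return lerp(lerp(t00, t10, f.x), lerp(t01, t11, f.x), f.y);
}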

You can do all that with nodes. It's not Unity but Blender, and don't pay too much attention to the node tree since I grouped some of the nodes and it's super small anyway, but here's a test implementing N64-style 3-point filtering I made a couple of days ago, just to show it's doable:



EDIT: it's a bit too late tonight but if that can help I can provide an annotated version of the blender file tomorrow.

Chev fucked around with this message at 01:57 on Sep 12, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo
Aw yeah, seems you've figured it out!

Didn't realize you had access to a node that can integrate custom code. But yeah, to explain a bit:

-Your fundamental mental obstacle was that you can't access texture data without a sampler state. A texture sample always has a sampler state and a texture; it is by definition the combination of those things.

-Yeah, as mentioned, when doing your own filtering you need to take several samples per pixel, as many as your own sampling function needs (4 for bilinear, 16 for bicubic, etc). So for bilinear you'd take samples T00, T01, T10, T11 to get the whole neighborhood, then do A = lerp(T00, T01, dx), B = lerp(T10, T11, dx) and finally lerp(A, B, dy) to get your filtered color.

-To reduce the number of samples you need, you can still use the existing samplers. Like, whenever you need a weighted average of four texels you could use a bilinear sampler even if your final filtering isn't bilinear. Some of them fancy modern antialiasing techniques, FXAA and onwards, put that to good use.

For N64 filtering, the paradox is that even though you're emulating 3-point filtering, on modern hardware you still need 4 samples, so what was originally a performance measure now makes the shader a tiny bit more complex. That's just how emulating things goes, I guess. That's because the pixel shader is going to sample all possible necessary locations; technically the shader can still have conditions, but that's what it'll do behind the scenes. In the bit of code you found, the condition part is done through the "step" function, which returns 0 if its second argument is below the first one and 1 otherwise. So that code computes the possible colors for both 3-point triangles in a texel neighborhood and then weights them using step, effectively choosing between them.
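To make that concrete, here's a hedged HLSL-style sketch of the branchless 3-point idea, reusing the four point samples and the fractional position from the bilinear sketch above (all names are illustrative):

float4 ThreePointFilter(float4 t00, float4 t10, float4 t01, float4 t11, float2 f)
{
    // 0 in the lower-left triangle (f.x + f.y < 1), 1 in the upper-right one
    float upper = step(1.0, f.x + f.y);

    // barycentric interpolation over each triangle's three texels
    float4 lowerTri = t00 + f.x * (t10 - t00) + f.y * (t01 - t00);
    float4 upperTri = t11 + (1.0 - f.x) * (t01 - t11) + (1.0 - f.y) * (t10 - t11);

    // weight the two candidates with step, effectively choosing between them
    return lerp(lowerTri, upperTri, upper);
}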

Chev
Jul 19, 2010
Switchblade Switcharoo

Zaphod42 posted:

You should be able to render to texture and then use that texture without updating it. That should be super cheap. The only challenge is making it tile smoothly on the boarder.
Six 90° FOV renders to texture with a common origin and pointed along each axis will provide a seamless cubemap. I'm not really familiar with Unity but it seems to have a function dedicated to it, even: https://docs.unity3d.com/ScriptReference/Camera.RenderToCubemap.html

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

so I've started throwing some stuff together to see if I can use ProBuilder/Polybrush for level design and I've run into an annoyance - the mipmapping is kicking in right in front of the player, and since I'm using my own filtering instead of trilinear filtering, there's a sharp, flickery line between each mipmap level, shown here and most easily seen on the grass:

*snip*
(please excuse the placeholder model and excellent walk animation)

I'm assuming this is just something I'll have to live with if I use my own filtering, but is there a way to influence when the mipmapping kicks in? I'm just thinking it might look less distracting if it didn't happen right in front of the player. I know I could disable mipmapping altogether but of course that would cause its own flickering issues.
This is Unity, right? I guess then the shading language for the custom bits you wrote would be HLSL? If it's DX10+ syntax, you've been using texture.Sample() to get your values? Well, you can use texture.SampleLevel(), basically the same but with an extra parameter in which you specify the mipmap level you want to use. If you're in pre-DX10 syntax you're using tex2D and you want tex2Dlod.

Just use the top level for now (should be 0), but for a future, better implementation you'll have to figure out, likely based on screen-space derivatives (ddx and ddy in HLSL), which two mipmap levels of the texture you need, do the triangle interpolation thing for both levels, then blend between them (can't remember if the N64 did blend between mipmap levels).

EDIT: looking at this https://forum.unity.com/threads/calculate-used-mipmap-level-ddx-ddy.255237/ you even have a function for computing the desired LoD. Then split it into fractional and integer parts to get the base mipmap and the blend factor.
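Putting the two together, a hedged HLSL-style sketch of the "compute the LoD, then blend two levels yourself" idea could look like this. SampleLevel and CalculateLevelOfDetail are the real D3D methods (the latter needs shader model 4.1, and as comes up later in the thread it may come back already floored on some setups); everything else is a placeholder:

Texture2D    gTexture : register(t0);
SamplerState gSampler : register(s0);

// Stand-in for your own per-level filtering (3-point or whatever); here it just
// point-samples the requested level so the sketch stays self-contained.
float4 CustomFilterAtLevel(float2 uv, float level)
{
    return gTexture.SampleLevel(gSampler, uv, level);
}

float4 SampleWithManualMipBlend(float2 uv)
{
    float lod = gTexture.CalculateLevelOfDetail(gSampler, uv);

    float baseLevel = floor(lod); // integer part: base mipmap
    float blend     = frac(lod);  // fractional part: blend factor toward the next level

    float4 a = CustomFilterAtLevel(uv, baseLevel);
    float4 b = CustomFilterAtLevel(uv, baseLevel + 1.0);
    return lerp(a, b, blend);

    // or, while prototyping, just force the top level:
    // return CustomFilterAtLevel(uv, 0.0);
}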

Chev fucked around with this message at 01:26 on Sep 26, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

point filtering by necessity, since I'm doing my own texture sampling in code. That said, the import settings for filtering are discarded anyway when using the shader since it supplies its own sampler state.

Thanks! I've been using the built-in macro SAMPLE_TEXTURE2D to sample, which from what I can tell just does some compatibility stuff behind the scenes, but is used the same way. I found a similar macro for the other function you mention (SAMPLE_TEXTURE2D_LOD in Unity) and now I know how to manually get (and sample) different mipmap levels I guess!

gotta admit the stuff after that is a bit beyond me though, using CalculateLevelOfDetail and interpolating between mipmap levels and whatnot :shobon: That's basically implementing custom trilinear filtering isn't it?

anyway, the mipmapping isn't actually happening that close to the player anymore (might have to do with switching to Cinemachine for camera? I have no idea, I just noticed it when I opened the project today :iiam:) so I guess the problem solved itself kinda? That said, if we're talking accuracy I'm pretty sure a lot of N64 games didn't use mipmapping at all because it would use up more of that preciously small texture cache :v: It would be really neat to extend my shader with that stuff though.

all of this stuff owns

Right. Too much info at once. Lemme try again.

That visible fringe, the transition between mipmap levels, is just something you're gonna get if you use mipmaps but not trilinear or anisotropic filtering, short of disabling mipmapping, which will introduce aliasing artifacts instead.
It's worth noting that an amusing way to mitigate it somewhat, along with aliasing artifacts, is mentioned in what I've seen of the N64 SDK's manual: blur the texture in the first place, as it'll make all those things less jarring. That's right, not only were N64 textures blurry due to memory limitations, the SDK was advising people to make them extra blurry. That's just delightful, in a way.

As for where those thresholds lie, it's based on the screen-space derivatives of the UVs. To put it simply, the GPU will determine how zoomed out the texture is in screen space and choose a mip level whose texels are wider than or the same width as the screen's pixels, so that there is no aliasing (which happens when you jump over several texels as you go from one pixel to its neighbor). That does mean the mipmap transitions depend on render target resolution, so unless you fiddle with the numbers you'll get better detail than an actual N64 would give, unless you match the output resolution (not that it really matters unless you're specifically trying to fake or emulate an N64's output to pixel perfection).

Anyway, the CalculateLevelOfDetail thing (which certainly has an equivalent Unity macro) is there precisely because you don't need to know how all that works but you may need the calculated value. Just pass the UVs to it and it'll return the mip level for the current pixel, but as a float. That is to say, if it returns 1.5 as the mipmap level, it means it judged the pixel to be halfway between levels 1 and 2, so if you're not using trilinear filtering, just rounding it (well, using the floor or ceiling operator, can't remember which) should be fine.

---

Now, I'm bothering you with all that because the 3-point filtering we discussed earlier has a bit of a flaw (common to all custom texture sampling schemes): it uses the pixel dimensions of the texture, or rather of the mipmap level you're sampling. If you sample a 64x64 or 32x32 texture with samples spaced for a 128x128 one, you don't get the right sampling anymore. You can see it happening in your splish-splash video: the further mipmap levels have squares instead of the triangular interpolation, so at the very least when you made that one you weren't aware of the problem (although your lod comparison screenshot seems to do it correctly). I don't know if the texel size node in your shader node tree screenshots takes mipmaps into account, but if it doesn't, you need to adjust your texel size values based on the current mipmap level. So the idea is to retrieve the mipmap level with CalculateLevelOfDetail; then your actual texel size is original_texture_texel_size * (2 to the power of the rounded mip_level), if we follow standard mipmapping dimensions.
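In shader terms that adjustment is tiny; a hedged HLSL-style sketch, with the same illustrative names as before:

// baseTexelSize is 1.0 / float2(top-level width, top-level height)
float2 TexelSizeForCurrentMip(Texture2D tex, SamplerState samp, float2 uv, float2 baseTexelSize)
{
    float lod = round(tex.CalculateLevelOfDetail(samp, uv)); // rounded mip level
    return baseTexelSize * exp2(lod);                        // texels double in uv size per level
}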

Or, indeed, just not using mipmapping will get rid of the artifact.

Chev fucked around with this message at 17:48 on Sep 26, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo
I know they know. But if they want to move it they need to know why, and beyond that, as the rest of my post mentions, the N64-style interpolation is busted mipmap-wise as it is.

EDIT:

Zaphod42 posted:

Makes sense its automatic based on resolution, but you should still be able to force certain LOD levels manually?
Yep! Forcing levels manually is precisely what the SampleLevel function (and its Unity equivalent SAMPLE_TEXTURE2D_LOD) I talked about earlier is for. But it does just that, it forces whatever LoD you pass to it, so you still need to compute the LoD in some way in the first place.

For them, the best option is to get the value from CalculateLevelOfDetail, manipulate it in some way (for example by adding/subtracting a constant to/from it), then pass the altered value to SampleLevel. Technically there's a SampleBias function that should do just that, but like some other bias functions it's poorly documented (in fact its documentation is partially wrong) and I can't guarantee it'll behave the same on all hardware without some extensive testing.
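As a tiny hedged sketch of that suggestion in HLSL (illustrative names; the bias value is whatever you want to experiment with):

Texture2D    gTexture : register(t0);
SamplerState gSampler : register(s0);

// Push the mipmap transitions closer or further by biasing the LoD manually.
float4 SampleWithManualLodBias(float2 uv, float bias)
{
    float lod = gTexture.CalculateLevelOfDetail(gSampler, uv) + bias;
    return gTexture.SampleLevel(gSampler, uv, lod);
}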

It is a bit of a fool's errand, because the mip thresholds will move depending on resolution anyway. You could also use a fixed resolution, but as I said that could be pushing the N64 emulation too far.

Using distance or some other alternative to derivatives or CalculateLevelOfDetail to choose the mipmap level is just gonna introduce artifacts because hey, that's not how mipmapping works.

Chev fucked around with this message at 18:36 on Sep 26, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo
Alright!

Well, there are several versions of the shader language(s), AKA shader models. Your Shader Graph is set to output shader code for pixel shader model 4.0, and CalculateLevelOfDetail/CALCULATE_TEXTURE2D_LOD is a pixel shader model 4.1 thing. There's probably a setting somewhere in Shader Graph where you can change that; otherwise you'd need to write the LoD calculation yourself, something I hope we can avoid (EDIT: well, technically it's like five simple lines of code, we'll just have to dig up a couple of Unity macros).

EDIT: V V V Awesome :D

Chev fucked around with this message at 19:09 on Sep 26, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

see this is the kind of stuff I literally could never figure out on my own :psyboom:
To be fair I've been throwing you pretty far down the shader rabbit hole recently. If it weren't for N64-style filtering you wouldn't need most of that stuff.

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

now, is this something anyone would even notice? no.
See, that's a good thing! Look at it like this: when the filtering had thresholds you immediately noticed because it felt jarring. Preventing people from noticing is the whole point!

Congrats on making it work!

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

It turned out that CalculateLevelOfDetail returns an already floored value so I had to calculate the mipmap level manually, but since you already mentioned the ddx/ddy stuff I thankfully knew what to google. Not gonna lie the function is like black magic to me, but it works and I get the same lod value (+ fraction!) so I'm happy :v:
Ah, didn't know that about calculateLod, the docs are pretty inconsistent. Not that it matters since you got it working anyway.

The ddx and ddy functions are easily some of the trickiest functions to understand in shader languages, because they kinda break the usual rules. Normally in a pixel shader you cannot access data from a neighboring pixel; you always work on a pixel in isolation, and if you want a neighboring value you have to get it from a texture you rendered earlier. The ddx and ddy functions are the exception to that: they give you the difference in a given value between your pixel and an adjacent one in the horizontal (ddx) or vertical (ddy) direction. There are some quirks to it, so you can't use it for, say, edge detection or anything like that; it's specifically intended for getting derivatives, i.e. the rate of change of values over a single polygon's surface.

When measuring that on UVs, the length of ddx or ddy (whichever is biggest) gives you the maximum rate at which your texture coordinate is changing from the current pixel to its neighbors. The idea of mipmapping is that this rate of change needs to be less than the texel size of the current mipmap; if it's bigger, you'll "miss" texels in between two pixels, the phenomenon called aliasing, which we seek to avoid by using mipmaps. Since we know the formula to get the mipmap index from the texel size, we just apply the same formula to that rate of change to get the desired mipmap level or combination thereof.
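Written out, that's essentially the textbook LoD formula; a hedged HLSL-style sketch of that "black magic" function (uv and texSize are illustrative inputs, texSize being the top-level width and height):

float MipLevelFromDerivatives(float2 uv, float2 texSize)
{
    // rate of change of the texel-space coordinate across one screen pixel
    float2 dx = ddx(uv * texSize);
    float2 dy = ddy(uv * texSize);

    // biggest squared rate of change in either screen direction
    float maxRateSq = max(dot(dx, dx), dot(dy, dy));

    // log2 of that rate is the mip level; the fractional part is the blend
    // factor toward the next level (clamped at 0 for magnification)
    return max(0.0, 0.5 * log2(maxRateSq));
}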

There, that was most of the details you don't need to know.

Chev fucked around with this message at 12:37 on Sep 28, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

The lighting is completely messed up, and I can't figure out why. Turning or moving the camera makes objects just stop receiving lights at random:
A shot in the dark, but look there: https://gamedev.stackexchange.com/questions/28761/why-do-my-point-lights-disappear-when-another-nearby-light-is-above-1-85-range and/or try unchecking the 'Local Shadows' option on the LWRP asset.

Chev
Jul 19, 2010
Switchblade Switcharoo
In earlier games you'd basically have a jump-up frame and a fall frame; then that was extended to an animation chain of jump up -> post-jump-up loop -> [when vertical speed < 0] start falling -> fall loop (still used in many games today, it's a good chain), but there's some really fancy stuff that can be done with blend trees. Although it's kinda amusing, because when they were popularized people would start blending and layering tons of stuff, but in recent years there's been work to design blend trees in smarter ways, to use fewer base animations and reduce the art load.

Chev fucked around with this message at 08:55 on Oct 8, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo
I mean, they still are, because that's how 2D's been done in general in our post-DX7 world where rasterizers have taken the role of blitters. Unless you've been making your own pixel-by-pixel drawing routines, I guess.

Chev
Jul 19, 2010
Switchblade Switcharoo

Ghetto SuperCzar posted:

When a sort order becomes negative it just stops showing up I guess. I solved it by just adding a huge number to all of the calculations :-/

It's quite likely that the sort order is just handled as z depth internally, so naturally anything with a negative depth would be on the wrong side of the camera's near plane and get culled.

Chev
Jul 19, 2010
Switchblade Switcharoo

Imhotep posted:

Also, it's really interesting how Blender seemingly can't achieve that look, like, I don't know, I guess it's just my total lack of experience with visual art in general, but it's bizarre to me that that's not easily achievable in Blender, let alone maybe even too difficult to achieve that it's not worth attempting.
A very interesting question indeed! What's happening here is that each renderer is "biased", so to speak, towards the techniques of its time. It means, as time goes on, that more complex techniques become more easily accessible, like how you can just slap down a Principled BSDF node in Blender to get the whole Disney-type physically based shading going, but also that older techniques get dropped in favor of the new ones (searching for the Blinn-Phong shading node in Blender 2.8x? Good luck!) and you need to recreate them using basic components.

We've kinda touched upon this with the whole exchange between me and Your Computer about N64 filtering earlier, where they had to go from standard bilinear filtering, which requires slapping down a simple texture lookup node, to understanding what samplers are and how they interact with textures, to combining a number of math nodes or operations to reproduce the N64 three-point lookup on top of what is actually a 4-point lookup, to learning about screen-space partial derivatives and the role they play in mipmapping, just so they could reproduce the silly cost-saving filtering that gives textures on that hardware their particular look and feel. To top it off, even if you've got a leaked SDK, the filtering is called bilinear in the docs, so unless you find the right page that admits it's not actually bilinear, you've first got to stare at actual renders from back then really hard until you get what's going on.

I myself have gone through a similar process before when replicating PS1-style graphics (which are a bit more involved than just affine texturing).

So, essentially, to reproduce that kind of renderer feel you first need to know exactly what that feel actually is, in the strict, cold, mathematical sense. You need to determine the shading equations that were used, the color space (cause nowadays 3d software and hardware tend to default to what's called a linear color space, necessary for good shader math, but that's kind of a post-2012 thing; before that they were using gamma space, which meant the shading was mathematically wrong, but they didn't care and the resulting feel was different), the post-processing (very likely dithered) and color depth (256 colors?), and the bump mapping algorithm used (surprise surprise, normal mapping was first introduced to the world at Siggraph 1998, one month after Banjo-Kazooie came out, so it cannot have been normal mapping that was used to render their stuff). A lot of it you can infer if you know what software was used and at what point in computer graphics history it was made, but anyway, you've got to be a pretty technical person with dubious hobbies, or know one, or have access to the blog of one (everyone and their uncle learned about N64 filtering from the same blog post), and then also find out how to reproduce the process in your modern software of choice, and figure out which parts are really important (the normal mapping thing maybe isn't). Or, the alternative: have access to the 3d program that was used in the first place and just use that. In fact it cracks me up (but heartwarmingly) that there are people devoted to preserving old dev tools for exactly that kind of purpose.

In our specific case, if someone were to reproduce that look in Blender, the starting point would be knowing what precise version of 3DS Max produces the right results, knowing what the shaders used are called in it, and having a couple of material settings with accompanying renders of spheres that can be used as reference, including a bog-standard boring white sphere with and without specular, lit from a single light, from which the color space and other niceties could be inferred. Then, with a bit of information hunting, the right shading equations can likely be found and implemented as reusable node groups for Blender, along with rendering settings.

TL;DR What I'm saying is you need to be not only a big nerd, but the biggest nerd.

Chev fucked around with this message at 13:46 on Oct 13, 2019

Chev
Jul 19, 2010
Switchblade Switcharoo

Omi no Kami posted:



Even with terrible, super-simple textures I'm getting happier with my ability to junk out generic office spaces! I'm finding that keeping a consistent scale is super-tough, though. I'm thinking for production assets I should probably just keep a guy and some modular floors/walls in every asset file and size props relative to that.

If you're in Blender just link your character and whatever other reference you need to each file that needs them.

Chev
Jul 19, 2010
Switchblade Switcharoo

Your Computer posted:

something similar was suggested with the doublejump (of "leaving behind" parts of the rig) but I simply can't figure out a way to actually feasibly do that. I could animate the legs to go downwards like you're suggesting but it would have to be at the exact same velocity as the player is moving up, and the only way that would look right is if it's synced up exactly with the jump code.
As long as they're moving downwards with less velocity than the jump is moving upwards, it'll work fine, and that leaves you wiggle room for adjusting the jumps. But for a double jump even moving a bit faster will actually work (think of it as jumping from a trampoline: during the impulse the legs go into the trampoline and stretch opposite to the jump direction).
