Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Unfortunately, years of watching Sonic die right as he completed a stage (and similar moments in other games) made me automatically structure the code in such a way that winning and dying at the same time is impossible.


Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Next update should come any day now. It's closer in tone to the chapters earlier in the thread and I really like how it's shaping up.


In the meanwhile, I coded something interesting for the ranks: automated detection of valid ships. Due to the procedural nature of the game, some ships might have trajectories that make them invalid targets. This is minimized by refining the algorithm, but human error is still a possibility, so I established the following rule:

Any ship that, during its entire trajectory, does not show up in a valid position for more than 0.25 seconds is not counted unless it's destroyed. A valid position is one that is at least partially on-screen. This way, any ship that gets spawned outside the boundaries due to a combination of matrix/array values in its spawning behavior will not count toward the final tally.
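In sketch form, the rule is just a per-ship timer. Here's a minimal hypothetical Python version, with a made-up partially_on_screen flag standing in for the real visibility test:

VALID_TIME = 0.25  # seconds of at-least-partially-on-screen time required

class ShipCounter:
    def __init__(self):
        self.on_screen_time = 0.0  # accumulated time in a valid position
        self.destroyed = False

    def update(self, dt, partially_on_screen):
        # A valid position = at least partially on-screen.
        if partially_on_screen:
            self.on_screen_time += dt

    def counts_toward_rank(self):
        # Destroyed ships always count; anything else must have spent
        # more than the threshold in a valid position during its trajectory.
        return self.destroyed or self.on_screen_time > VALID_TIME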

I don't think my solution will prevent impossible enemies from counting entirely, but it should alleviate the issue. It's nothing major, just a small detail of the kind you learn from the mistakes of others. Sonic 2 had a "Perfect Bonus" that you got every time you collected every single ring in a stage. It's not a very relevant feature, but it's impossible to get a Perfect Bonus in every stage because some stages have rings placed in impossible locations, which is fun trivia in itself.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Chapter 31 - Real-Time Rendering, Part I


Going back a bit to the origins of the Let's Play, I want to talk about something that is part of the game development pipeline.

That something is Physically-Based Rendering, or PBR for short. You might have heard of it, you might not. For a period of time a few years ago, PBR was kind of a buzzword as engines migrated to it. Now that PBR is the de-facto standard, people don't mention it a lot anymore. I'd like to take some time to talk about the history of real-time rendering and how we got to where we are.


So what is PBR, in a nutshell?

PBR is a model that tries to replicate how materials work in real life, but in video games, hence the "physically-based" name. The TLDR is: instead of a single texture, a material now contains textures that carry data about things other than color. What things?
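To give a rough idea of "what things", here's the common metallic/roughness set of maps as an illustration (the names are placeholders, not necessarily how TSID or any specific engine organizes it):

pbr_material = {
    "albedo":    "hull_albedo.png",     # base color, with no lighting baked in
    "normal":    "hull_normal.png",     # fine surface direction detail
    "metallic":  "hull_metallic.png",   # metal vs. non-metal, per texel
    "roughness": "hull_roughness.png",  # how blurry or sharp reflections are
    "ao":        "hull_ao.png",         # baked ambient occlusion
}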

I want to explain how we got here, because the history of CGI is long and quite honestly, just downright fun. It's how I started, and if you're reading a Let's Play in a Games Forum, odds are that real-time CGI has been a huge part of your life, even if you're just on the consumer side of it.

To go through the history thoroughly would be very hard. My goal is to get to PBR, so I'll focus on the material side of things. Let's see how it all got started.


The road so far:

In the old days of yore each object had one texture to cover a surface.



See the ground and the walls? Each one of these is a single texture. This isn't different from tilesets in 2D games. The fundamentals are the same.



In fact, looking at this Mode-7 screenshot from Final Fantasy VI, you can see how, with some tilting, a 2D texture can look like a landscape with some nice perspective. The ingenuity involved in these early-day escapades to make things look 3D was amazing. After you're done with the chapter, I highly recommend watching this video of how Sonic 3D Blast's Special Stages were made.

The first 3D engines are sometimes called 2.5D because they were not full-fledged 3D engines like the ones we have today. I believe Doom and Duke Nukem 3D use a flat 2D map with height information for the floor and ceiling, and the render is a very simplistic projection of that. The properties of such engines allow for very weird and unique effects, such as non-euclidean geometry similar to the Portal game (rooms can intersect with each other because they're layered over each other on the map).

The point is, the first transition in real-time graphics from 2D to 3D was about performing clever and fast geometric operations on triangles and flat textures. To this day, you can still achieve a lot doing just that.

So let's take a journey into the wonderful world of game materials. If we were to replicate this early model of creating 3D objects and applying flat textures to them, this is what we'd get. I'm gonna create a simple texture with Filter Forge.



And I'm gonna use the Spoony Goon, a Sphere and a Cube as references. Let's apply the texture.



If we were to add a light, then we'd have...



Notice how we can see those pesky polygons on the sphere.


Putting the rad in Gouraud

As you might have realized, we'd need an unreal number of polygons to make a smooth object, plus some incredible anti-aliasing. Each triangle would have to be smaller than a pixel, and that would be a big no.

Luckily, we never really needed those, thanks to something called Gouraud Shading. This technique was published in 1971, so by the time 3D games were a thing we needn't have worried.

Wikipedia has a page on Gouraud Shading with some nice examples, but to sum it up, here's how Gouraud works:



Not the most accurate representation, but the deal is: instead of coloring each triangle with a single flat color, you compute the lighting at each vertex and blend a nice gradient between them across the triangle. The result is good. You can now have proper lighting.
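A minimal sketch of the idea in Python (not how an engine actually lays this out): the lighting math runs once per vertex, and each pixel merely blends the three vertex results.

import numpy as np

def gouraud_pixel(bary, vertex_normals, light_dir):
    # bary: the pixel's barycentric weights (w0, w1, w2) inside the triangle.
    # vertex_normals: 3x3 array, one unit normal per vertex.
    # light_dir: unit vector pointing toward the light.
    # Lighting is computed per *vertex* (cheap)...
    vertex_intensity = np.clip(vertex_normals @ light_dir, 0.0, 1.0)
    # ...and merely *interpolated* per pixel, producing the smooth gradient.
    return np.dot(bary, vertex_intensity)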



Take a look at this Super Mario 64 screenshot. See if you can spot where each triangle begins and ends. It should be easy to see the gradients on Mario's nose.



Or take a look at Final Fantasy VII's boss Aps:



You can even see some seams where the vertices aren't connected. If you pay attention, everything is somewhat limited by triangle shapes.

Originally, when writing this section, I thought it'd be shorter because Gouraud has been superseded in a lot of ways... but not all of them. This kind of gradient is an issue to this day, unfortunately. There are clever ways to design your meshes so as to minimize triangular patterns, but they still exist, and the fact that every 3D form is ultimately made of triangles (even though "good practice" asks you to design things around quadrangles) means dealing with these gradients is still a thing.

Of course, back then it wasn't an issue. It was a glorious feature that kept models from looking faceted. And it worked. Simple triangular interpolation is still used to render basic ambient lighting, so this 1971 concept is relevant to this day.


Vertex Colors

Looking at Mario and Aps you may notice they're mostly flat colors. 3D models usually encode a color in each vertex and this is, again, very important, even to this day. You can see that Aps is colored by his vertices, and since the game has no real-time lighting, notice how the colors try to mimic lighting coming from above. Most light sources do come from above.

His horns are downright black underneath. His eye and mouth cavities are darker when facing down, and so on.

Vertex Colors are usually combined with textures multiplicatively (a white vertex color leaves the texture as-is, a black one turns it black). This is useful for lighting. In Super Mario 64, see how the box to the right seems lit from above:



You may be asking why this is relevant. Vertex Colors are very useful to this day, though they're now usually used as masks. When you're playing World of Warcraft and you walk around the Barrens and see this:



The textures are very likely blended according to colors painted onto the terrain's vertices. Say Red = Dirt, Green = Grass, Black = Darker Grass, Blue = Road. Then you could have something like:



Terrains are a Pandora's Box I so totally do not want to open right now. One could write 10 chapters about terrains and still have much to cover, and I'm no specialist on them. But using Vertex Colors as masks is still very much done to this day, believe it or not. Terrains use it because they're already encoding 4 (or 8, or more) textures, so avoiding yet another texture just to serve as a mask is a welcome saving. Once again, this is relevant to this day.
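As a sketch of the mask idea, with numpy standing in for a terrain shader and the hypothetical channel assignments above:

import numpy as np

def splat_blend(vcol, dirt, grass, dark_grass, road):
    # vcol: HxWx3 vertex colors interpolated across the terrain, 0..1.
    # The other arguments are HxWx3 tiled textures.
    r = vcol[..., 0:1]  # red channel   -> dirt
    g = vcol[..., 1:2]  # green channel -> grass
    b = vcol[..., 2:3]  # blue channel  -> road
    k = np.clip(1.0 - (r + g + b), 0.0, 1.0)  # "black" -> darker grass
    total = r + g + b + k + 1e-8  # normalize so the weights sum to one
    return (r * dirt + g * grass + b * road + k * dark_grass) / total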

Anyway, back to shading models. If you take a look at the Gouraud Shading page, you'll see that it's not very good for reflections. And because of this we eventually moved to...


Phong and Specular

The Phong Shading model was created in 1973 and was featured in computer graphics as soon as we had the power to run it. This is because Gouraud is not very good for specular highlights. Phong shades the object by interpolating the normals across each triangle and lighting every pixel individually, which "smooths" the triangular gradients a lot better. In particular, games use a cheaper variant called Blinn-Phong Shading. This is relevant to this day.

In case you're not familiar with them, specular highlights are basically reflective spikes on an object's surface caused by intense light sources. In truth, they're just a consequence of reflection, and the deeper physics behind them are partially what would later become physically-based rendering. But for early CGI, having specular sufficed. If you google "car reflecting sun", imagine that the sun is the specular highlight.

In particular, the difference between specular reflection and diffuse reflection is the following: specular reflection happens when light bounces straight off the surface in a single direction, while diffuse reflection happens when light bounces off in a scattered manner.



In the common vernacular, a specular reflection is just what we'd normally call a reflection. The kind a mirror shows you. Many objects don't have a lot of specular reflectivity, but since lamps or the sun are such intense sources, we get these highlights:



Early 3D games actually did not do a lot of specular highlighting. For an example of a game that uses Phong extensively to produce the look it needs, let's take a look at Final Fantasy XIV:



Notice how the light in the fabric part of the armor is more diffuse and less focused.
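That difference is the whole Blinn-Phong recipe: a broad diffuse term plus a tight specular spike. A minimal sketch of the textbook math, not FFXIV's actual shader:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(n, l, v, shininess=32.0):
    # n: surface normal, l: direction to the light, v: direction to the viewer.
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = max(np.dot(n, l), 0.0)                 # scattered bounce
    h = normalize(l + v)                             # Blinn's half vector
    specular = max(np.dot(n, h), 0.0) ** shininess   # tight bright spike
    return diffuse, specular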



Here's an interesting factoid about metal: Metals are mostly specular reflection. Since FFXIV is not physically-based, metal in FFXIV relies heavily on specular highlighting. Most of that yellow glow is specular reflection.



If we turn the bright light source off:



The Blinn-Phong model is still very important in general. Specular highlights are used less frequently nowadays, progressively replaced by PBR, but as you can see from a game like FFXIV, they're still relevant to this day.


NEXT TIME:

How Pixar changed everything!

Elentor fucked around with this message at 14:36 on Jun 30, 2018

Kurieg
Jul 19, 2012

RIP Lutri: 5/19/20-4/2/20
:blizz::gamefreak:
And now you know why the character in reboot made mostly of round shiny surfaces was named what he was.

User0015
Nov 24, 2007

Please don't talk about your sexuality unless it serves the ~narrative~!

Kurieg posted:

And now you know why the character in reboot made mostly of round shiny surfaces was named what he was.

Holy poo poo.

Zurai
Feb 13, 2012


Wait -- I haven't even voted in this game yet!

Kurieg posted:

And now you know why the character in reboot made mostly of round shiny surfaces was named what he was.

Reboot gets better and better as time goes on and I learn more of the stupid inside jokes like this.

Kurieg
Jul 19, 2012

RIP Lutri: 5/19/20-4/2/20
:blizz::gamefreak:
Most of the characters in Reboot use Gouraud shading because it's much simpler and less processor intensive. They used Phong shading on Phong for the joke.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
You know, just as a personal comment, I really like the usage of polygons in Mario from SM64. As far as being efficient goes, that model is pretty crazy.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Next post should be up this weekend. I took some days off to play FFXII since I finished a few freelance jobs and finally got some spare time. As soon as I catch my breath I'll continue from where I left off. It's been a while since I played a single-player game, and I'm now wondering how to add gambits to everything because pretend-AI is funner than coding actual AI. :v:

Also, holy poo poo, I don't want to be a tease, but the soundtrack is shaping up to be amazing.

Elentor fucked around with this message at 08:56 on Jun 28, 2018

Monokeros deAstris
Nov 7, 2006
which means Magical Space Unicorn

Elentor posted:

Next post should be up this weekend. I took some days off to play FFXII since I finished a few freelance jobs and finally got some spare time. As soon as I catch my breath I'll continue from where I left off. It's been a while since I played a single-player game, and I'm now wondering how to add gambits to everything because pretend-AI is funner than coding actual AI. :v:

The subsumption architecture is just as real-AI as any 80s AI.

By which I mean, add gambits!

Switzerland
Feb 18, 2005
Do what thou must do.
This thread is rad!

Also..


Gouraud :colbert:

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
poo poo I got so carried away with the pun I forgot it was wrong in the first place, thanks a lot for correcting me.


Monokeros deAstris posted:

The subsumption architecture is just as real-AI as any 80s AI.

By which I mean, add gambits!

I have some experimental ideas and have had before playing FFXII but they're very experimental and I'm not gonna get to them for a while. But I can share some of them. These are things that are not planned in the slightest, just thoughts that are very likely never to happen:

* One idea I had was a drone with special properties, namely that you can modify its AI. Basically a smaller support ship that occupies a Trinket Slot. The drone would have a few abilities, basically inferior versions of other trinkets and weapons (temporary shield, heal, boost damage, shoot, increase its own projectile count for a while, draw projectiles toward it, scramble the enemy's targeting systems, and so on), and you'd be able to set an AI on how to use them.
* Another idea was a similar captured drone, but with neural networks. These networks would start random and you would instead assign goals, and over a long, looooooooong time the drone would, maybe, develop a good AI. The networks could be manually imported/exported, so anyone who created their own simulation of the drone behavior and trained it would be able to create a smartship, if you will.
* An early idea I had was that one of the side-quests in the campaign would allow you to align yourself with a faction and create a station, which you'd be able to travel to from time to time. I then thought it'd be cool if you could set up the ship models defending the station based on past ships and weapons. Or assign an AI to them.

These are not even close to feature creep; they're more like feature delusions, so I don't even think about them. If I ever finish this thing I might get to one of them as a side-project or part of an expansion.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Chapter 32 - Real-Time Rendering, Part II


Shaders

In 1988, Pixar came up with the term Shader while designing the RenderMan API. RenderMan is Pixar's own renderer and one of the gold standards for film rendering, as you can see from its page showing it off in movies from Blade Runner 2049 to Star Wars: Rogue One to Cars.

Shaders are, in a nutshell, basically how everything is rendered today. A shader is effectively a program used to give color to an object, to shade it if you will. For games, there are two main types of shaders: pixel shaders and vertex shaders. A vertex shader is a program that modifies an object's vertices and is widely used for a variety of things.

A pixel shader, however, is a program that runs on every pixel of a given image, hence the name. It yields a 2D output, and since everything we render is ultimately made of pixels...

This might sound a bit vague, but that's because pixel shaders are, ultimately, a bit vague in what they can do, which means they can do pretty much anything. Remember when I talked about blurring? Have you noticed how a lot of games nowadays have motion blur, depth-of-field, bloom and other such effects? Those can be easily achieved with a pixel shader. Or you can make a shader that turns the entire screen red for no real reason.

Typically a shader will receive an input (a source, if you will) and a bunch of variables; the shader code then does whatever you want and outputs something. For example, let's say we have this nebula I created for TSID, and we want to make it darker by halving the red, green and blue channels:



PseudoShaderCode(input i, output o){
    // Halve each channel to darken the image.
    o.r = i.r / 2;
    o.g = i.g / 2;
    o.b = i.b / 2;
}



This program will run once for every pixel, so for this 512x512 image it will run 262,144 times to complete its task. Needless to say, GPUs today are really, really powerful, because that's pretty much nothing compared to the vast amount of calculations we must perform: a GeForce GTX 1070 has a fill rate of up to 117 billion pixels per second. Similarly, if we reduce the red in the image a lot and increase the blue...

PseudoShaderCode(input i, output o){
    // Reduce red, keep green, double blue.
    o.r = i.r * 0.4;
    o.g = i.g;
    o.b = i.b * 2.0;
}





Or what if we shift the channels? Red = Green, Green = Blue, Blue = Red

PseudoShaderCode(input i, output o){
    // Rotate the channels: red reads green, green reads blue, blue reads red.
    o.r = i.g;
    o.g = i.b;
    o.b = i.r;
}



Woah, that's pretty crazy. What if the program knows the x and y coordinates of the texture? Let's see what we can do.

PseudoVariables {
    coords = TEXCOORD0; // Normalized: ranges from 0 to 1 instead of pixel coordinates 1 to 512.
}

PseudoShaderCode(input i, output o){
    // Red and blue become a coordinate gradient; green is the input's grayscale.
    o.r = coords.x;
    o.g = (i.r + i.g + i.b) / 3;
    o.b = coords.y;
}



In this case we're ignoring the original image for red and blue and just making a gradient instead. Green comes from converting the original image to grayscale. In fact, if we want, we can completely ignore the original input and make o.g = 0.



Because a shader can receive arbitrary inputs, we can do whatever we want with shaders. And this is important, because at some point we realized that as GPUs became more and more powerful, games could be more and more elaborate, and coding every single effect as its own separate entity would not do. We needed a unified system for every image-related operation, something easy to expand when needed.
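For the curious, here's the channel-shift pseudo-shader as real, runnable code, with numpy standing in for the GPU and operating on all pixels at once (the file names are placeholders):

import numpy as np
from PIL import Image

# Load the source texture as floats in 0..1.
img = np.asarray(Image.open("nebula.png").convert("RGB")) / 255.0

# Red = Green, Green = Blue, Blue = Red. The GPU would run this once
# per pixel; numpy just applies it to all 262,144 pixels in one go.
out = img[..., [1, 2, 0]]

Image.fromarray((out * 255).astype(np.uint8)).save("nebula_shifted.png")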


But how is this useful in a 3D engine? Or how CGI is Magic

Let's now turn away from that rendered image. Picture if you will... a circle.



Imagine, dear reader, that we are now to create a gradient from the outside to the inside of the circle. Such an operation would be easy to do by hand for a skilled artist with naught but a pencil, but in our case we're aided by computers. Oh gradients, the pillar of many a flash animation!



Now what if - and this is a big if - were we not to stop here, but instead do simple calculus and get the derivatives of such an image? Why, but this is so simple a task! If we calculate the difference between pixels horizontally and vertically and offset the values so that they're all positive, they'll be thus:



This, my friends, is where the evil machinations of man begin. It takes but one's will to see fit that these two images are combined into one. Since it's known that images are to be made of a red, a green and a blue channel, we can encode one in the red, one in the green, and have a blue channel to spare! Oh, were we to use that blue channel as a shortcut to normalize the vector of R and G, the things that could be sprung forth to life!



Now more than ever, you must not avert your gaze. In an affront to all that is good to men and women alike, humans saw fit to take this image and use it as a false idol. For we know the light to reach our eyes upon colliding with a surface to be readily and easily calculable as the dot product of the light direction with the perpendicular or normal vector of such a surface.

So were we to pass the direction of a fake light to a pixel shader, and use that blueish image as an instruction... an instruction to tell a lie most foul about the true direction of the surface... to use it as a map, if you will, a map with which we bend the angle of the normal perceived by the unsuspecting light on a planar surface, and guide it away from its true destiny as the laws of physics intended...







And thus we have a 3D sphere on a plane. If you billboard the plane (make it always face the camera) you now have a particle that looks like a 3D object and reacts to light like a 3D object. And that sphere is only two polygons, the two triangles needed to make a square.
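Stripped of the theatrics, the whole trick fits in a few lines. A runnable numpy sketch, where the hemisphere heightmap and the light direction are arbitrary placeholders:

import numpy as np

def height_to_normals(height):
    # The derivatives: differences between neighboring pixels, x and y.
    dx = np.gradient(height, axis=1)
    dy = np.gradient(height, axis=0)
    # Normal = (-dh/dx, -dh/dy, 1), normalized. Encoded as colors, this is
    # the blueish image: x in red, y in green, z in the spare blue channel.
    n = np.dstack([-dx, -dy, np.ones_like(height)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def fake_lighting(normals, light_dir):
    # Lambert: brightness = dot of the (lying) normal with the light direction.
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)

# A flat 256x256 plane that suddenly shades like a sphere:
y, x = np.mgrid[-1:1:256j, -1:1:256j]
height = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))  # hemisphere heightmap
shaded = fake_lighting(height_to_normals(height), light_dir=[0.5, 0.5, 1.0])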

The more savvy of you probably knew where I was going from the start, but otherwise, now you know what Normal Maps are. Do a Google search for normal maps - they all have that strong, characteristic blueish tint and generally look like the relief of the surface as a blue object lit from the top-right or, sometimes, the bottom-left (because of differing standards).

When I first learned about Normal Maps they seemed like magic to me. I showed you what you can do with a normal map on a single plane, so imagine how useful they are in full 3D. Because of how powerful they are, producing high-poly models and baking them down into efficient normal maps for low-poly game assets became a huge part of the graphics arms race between companies to produce the most faithful assets possible.

Normal Maps are incredibly relevant to this day. As far as consoles go, the Dreamcast was the first to feature them, and they've been a cornerstone of the asset development pipeline for a long while. A lot of PS2 games already used them extensively.


NEXT TIME:

Some more shader examples!


Supplementary Material:
Here's a bit of supplementary material; this one goes a bit more in-depth, but it's not a hard read:

http://shaderbits.com/blog/octahedral-impostors/

It's about how the type of sprite I created in this chapter can be used to replace real 3D objects. These are sometimes called "Impostors", because that's what they are, and they're very frequently used in lieu of trees in forests, because a forest would otherwise contain way too many objects and triangles.

The site shows how they implemented it in Fortnite and features a massive reduction of a forest from 380,000 triangles to only 6,000. The idea is that trees far away are replaced by impostors, so you only see real 3D trees next to you. For that to work, the trees have to be pre-rendered from a lot of different angles, and each sprite selects its texture based on the angle at which the camera is looking at it.
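The frame-selection step boils down to something like this - a sketch assuming a plain yaw/pitch sprite sheet (the article's octahedral layout is cleverer about coverage):

import numpy as np

def pick_impostor_frame(cam_pos, tree_pos, n_yaw=8, n_pitch=4):
    # Which pre-baked sprite should this far-away tree show?
    d = np.asarray(cam_pos, float) - np.asarray(tree_pos, float)
    yaw = np.arctan2(d[1], d[0])                     # angle around the tree
    pitch = np.arctan2(d[2], np.hypot(d[0], d[1]))   # angle above the horizon
    col = int((yaw + np.pi) / (2 * np.pi) * n_yaw) % n_yaw
    # Camera assumed at or above tree height; clamp to the last row.
    row = int(np.clip(pitch / (np.pi / 2), 0.0, 0.999) * n_pitch)
    return row, col  # cell in the baked sprite sheet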

You can see how it looks before interpolation kicks in here. All the videos in that link are pro-click to view a step-by-step of the process.

Elentor fucked around with this message at 04:58 on Jul 13, 2018

Synthbuttrange
May 6, 2007


We're going to have to burn you at the stake now. Sorry. Witchcraft rules.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
You can burn me at the stake but you better be ready to hear me talking about the properties of real-time fluid simulation of the fire in the meanwhile.

Kurieg
Jul 19, 2012

RIP Lutri: 5/19/20-4/2/20
:blizz::gamefreak:
Did you guys get the object properties for this wood from Wolfram Alpha? I just want to make sure it will properly ash and not just turn into a blacken*muffled by sock shoved in mouth*

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS


Wake up, sheeple!

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

ahahahah

CPColin
Sep 9, 2003

Big ol' smile.
The Earth is flat. Confirmed. Sun also confirmed for flat.

Switzerland
Feb 18, 2005
Do what thou must do.
What I always wondered, since it's "difficult" to render a perfect sphere (think: planet), why not use a bitmap'd billboard mask with a circle cut-out, and put that over the low-ish-polygon-count sphere to make it look perfectly round from the viewer's perspective?


Elentor
Dec 14, 2004

by Jeffrey of YOSPOS

Switzerland posted:

What I always wondered, since it's "difficult" to render a perfect sphere (think: planet), why not use a bitmap'd billboard mask with a circle cut-out, and put that over the low-ish-polygon-count sphere to make it look perfectly round from the viewer's perspective?

That doesn't work as well as you'd hope. Coincidentally, that was one of the first things I thought of when studying CGI programming, but you end up presented with a lot of issues. It boils down to:

1) If you're far from the sphere, then a sphere with a lower poly count is acceptable.
2) If you're at mid-range, then seams and other lighting issues may appear. If you fix them through proper shading, you're likely spending too much effort on it - an impostor as mentioned above would likely be a better fit, or you can just fill the sphere with more triangles, which nowadays are not that costly. Triangle count is actually one of the lesser issues of today's rendering pipeline; you're more likely to hit a draw-call wall than a triangle-count wall.
3) The closer you are, the higher the resolution you'll need to match. If you're using a signed distance field then that's great but, again, probably overthinking the problem. As you approach the sphere you'll need more and more detail, which at that point you either acquire through tessellation or by having a high-poly sphere in the first place.

So ultimately just using impostors + a real sphere is easier and cheaper.

Switzerland
Feb 18, 2005
Do what thou must do.
Got it, thanks for the explanation! :)

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS

Switzerland posted:

Got it, thanks for the explanation! :)

It's still a very good question, because for a long while, how to smooth out the silhouettes of objects (which isn't something normal maps can do) was a real subject that a lot of people studied and tried to solve without resorting to tessellation. Ultimately we just got to the point where people either don't care anymore or the triangle count is high enough to make up for it.

Signed Distance Fields are still used because they're the best solution for non-bitmap fonts, giving effectively infinite resolution and perfectly smooth corners.

nielsm
Jun 1, 2009



Are impostors always pre-baked, or is it feasible to generate them in-engine on the fly as they are needed? I imagine having hundreds of pre-baked impostor images per object can cost a lot in asset storage.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS

nielsm posted:

Are impostors always pre-baked, or is it feasible to generate them in-engine on the fly as they are needed? I imagine having hundreds of pre-baked impostor images per object can cost a lot in asset storage.

They're usually pre-baked. LOD in general tends to be. In a game like Fortnite or Realm Royale or PUBG you don't need a full set of impostors - only some rocks and trees matter, and rocks will usually just fall back to a lower-poly LOD model.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Not that there's a lot of content, but I changed my YouTube channel. And since I haven't posted anything in a while, here's something.

https://www.youtube.com/watch?v=J0Z957GLXrM

CPColin
Sep 9, 2003

Big ol' smile.
Heard that "Good luck" and now I have to listen to the entire Tyrian soundtrack. Thanks a lot.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
I for one call that a victory.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
Chapter 33 - Real-Time Rendering, Part III


Last Chapter we talked about Normal Maps. Let's apply them to our scene, shall we? Previously we had this:



Now let's make a depth texture for it...



And take its derivatives to create a normal map:



And voilà! We can now apply a Normal Map to our scene. Notice how you can see the light caught in the small gap between the wood planks.




Doom 3 was one of the first games to really push them to their fullest.



In fact, Doom 3 is in some ways more advanced than many modern games. This should not be a huge surprise, given that it's a John Carmack project.

You see, advances in real-time CGI are not always about making things look better. They're about reaching a good ratio between quality and performance, with a heavy emphasis on performance. Sometimes we get stuck in evolutionary peaks - we follow the technology that's the best or most efficient at the time, only to find out it's not good enough to solve the problems of the future. In an evolutionary landscape, that means getting stuck in a local maximum. But sometimes it's very hard to know what's better in the long run.

Here's an example. Doom 3 uses a unique system for its dynamic shadows (stencil shadow volumes): every object's shadow is a polygonal projection created in real time against the light sources. This makes the shadows in the game perfectly crisp, detailed silhouettes.

Indeed, such a technique is great, but by its very nature it's all but unusable for dynamic shadows in large-scale worlds. It works in Doom 3's small environments. You also lose the subtleties of soft shadows and penumbras, since every shadow is as sharp as it can possibly be. The alternative is shadow maps - dynamic textures cast over the scene - and at low settings (or high settings not set up properly) these can suck: they offer very low temporal stability, flicker a lot, and just suck in general. A lot of earlier games opted for them, and as a result their console versions had very poor shadow quality in order to reach acceptable performance. Nowadays, thanks to brute-forcing high-resolution maps as well as other auxiliary advancements (like screen-space shadow raytracing, which doesn't replace shadow maps but enhances them, available in both Unity and Unreal), dynamic shadows are in a decent place.


Forward and Deferred Rendering

Last chapter I showed you how a plane can render a 3D sphere. In a way, every 3D sphere you see in your computer screen is, well, rendered on a plane - your screen.

This may sound obvious, but it's important to understand where I'm coming from. In the old days of yore, games rendered through something called Forward Rendering. To put it in very, very crude terms, imagine that every object is rendered like a cardboard cutout, the cutouts are placed on top of each other, and the final image is a composition of all the cardboards.

Deferred Rendering is slightly different. Every object instead collapses into a single gigantic image - or rather, a series of images - and only then are the operations responsible for turning them into something that looks 3D to us performed. In other words, Deferred Rendering is the same process with which we created a fake 3D sphere on a plane last chapter, except it's applied to your entire screen to render an entire scene.

The advantages of Deferred are many. The main one is that we can now have a lot more light sources. Previously, each object would cast itself against every relevant light source and the amount of redundant calculations would be enormous. Now we convert everything into an image containing the normals, an image containing the depth, and so on, and we perform the lighting calculations on each pixel once. The light doesn't care about the object it's hitting anymore. Just as our fake sphere didn't exist, it makes no distinction between the sphere being a real 3D object or not - it just cares about the angle of reflection at that pixel. Before the final image comes out, its individual elements are held in a buffer of geometry elements, the G-Buffer.
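As a toy illustration, here's a numpy sketch with a deliberately simplified G-Buffer (real ones pack more channels, and positions are usually reconstructed from depth):

import numpy as np

def deferred_lighting(gbuf_albedo, gbuf_normal, gbuf_pos, lights):
    # Every input is already rasterized into HxWx3 images, so the lights
    # below never touch the original geometry - only pixels.
    frame = np.zeros_like(gbuf_albedo)
    for light_pos, light_color in lights:
        to_light = light_pos - gbuf_pos
        dist = np.linalg.norm(to_light, axis=2, keepdims=True)
        to_light = to_light / (dist + 1e-8)
        # The same dot product as our fake sphere, once per pixel.
        ndotl = np.clip(np.sum(gbuf_normal * to_light, axis=2, keepdims=True), 0.0, 1.0)
        falloff = 1.0 / (1.0 + dist * dist)  # simple distance attenuation
        frame += gbuf_albedo * np.asarray(light_color) * ndotl * falloff
    return np.clip(frame, 0.0, 1.0)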

When used to its fullest, Deferred offers a significant performance advantage. It also opens up the possibility of a myriad of new pixel shaders. Now that the entire image being shown to you is a single composition of multiple images in a single object, a pixel shader can use those other images - like the depth of each pixel - to create better effects. Hence the world of Post-Processing Effects, which you've probably seen as a settings screen in a few games:



Pictured: PUBG's menu

Other games spread their post-processing options around. Take a look at FFXIV:



"Screen Space" is usually a give away that something is a post-process. A Post-Process, as the name implies, comes last in the pipeline, taking the final image and its buffered elements. Because that image represents what is being shown on your screen rather than the actual 3D data of the game, these effects sometimes start with Screen Space.



Pictured: With and without Ambient Occlusion in FFXIV. You can see how Ambient Occlusion casts a soft shadow that helps distinguish objects.

Ambient Occlusion in particular has dozens of algorithms behind it - there are many screen-space solutions, as well as real-time non-screen-space ones - which is why it gets its own entry in this menu screen. The funny names are the names of particular algorithms.

Another post-process that's popular nowadays is Screen Space Reflection:



Because SSR can only reflect things that are on screen, it typically reflects things at grazing angles. A mirror with SSR is impossible, at least while standing in front of one: if you're looking at it in first person, your character isn't on the screen in the first place. In third person you might be able to see the back of your character, but not the front, so the Screen Space Reflection has nothing to work with.

Here's Glare/Bloom, a very common effect, as depicted in Path of Exile. Look at the Exclamation Mark glowing:




Let's play with some post-processing effects in our scene. First, let's add a ground to it:



Now let's add a few post-processing effects. So many pixel shaders! I'll add them one by one:

Ambient Occlusion (AO or SSAO for short) - Notice how there's a soft shadow on the ground now:



Bloom/Glare - let's take it up to 11:



Let's add some vignette and Color Correction to make it look fashionable. Color Correction is super important, but for now let's just make it very teal-tinted in the darks and midtones with no regard for anyone's sanity:



What a time to be alive!
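For flavor, the vignette is among the simplest post-processes there is - a numpy sketch with a made-up strength parameter:

import numpy as np

def vignette(img, strength=0.6):
    # Darken each pixel by its distance from the screen center.
    h, w = img.shape[:2]
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    dist = np.sqrt(x**2 + y**2) / np.sqrt(2.0)  # 0 at center, 1 in the corners
    mask = 1.0 - strength * dist**2
    return img * mask[..., None]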


NEXT TIME

More Shaders! One cannot simply shut up about Shaders!

NHO
Jun 25, 2013

I am looking at SSAO and later images and screaming because it's... horror.

Elentor
Dec 14, 2004

by Jeffrey of YOSPOS
My kind heart shied away from adding lens dirt because there's only so much a fellow human should endure.

On a more serious subject, do you guys find the subject interesting? I intended this to be a shorter series, but there's so much to talk about in real-time rendering that it ended up becoming a bit bigger than I thought, and I'm still skipping a lot. Is it too in-depth? I'm assuming everyone reading this thread has had experience with these effects, even if just as names in an options menu, but if stuff is flying over your head I'd like to know. If it's not, I'd like to know as well.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
I have some limited 3D modeling experience, including a vague idea of how textures worked, but getting the extra detail is nice. I don't consider this to be too in-depth.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

Elentor posted:

My kind heart shied away from adding lens dirt because there's only so much a fellow human should endure.

On a more serious subject, do you guys find the subject interesting? I intended this to be a shorter series, but there's so much to talk about in real-time rendering that it ended up becoming a bit bigger than I thought, and I'm still skipping a lot. Is it too in-depth? I'm assuming everyone reading this thread has had experience with these effects, even if just as names in an options menu, but if stuff is flying over your head I'd like to know. If it's not, I'd like to know as well.

I'm enjoying it

oystertoadfish
Jun 17, 2003

i have no experience with this field and i enjoy the detail. i wouldn't mind even more math and academic concepts, but i bet lots of people would feel the opposite way. in general, i bet you're at a pretty good balance right now

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

This is fascinating, and the flat earth bit was comedy gold.

VodeAndreas
Apr 30, 2009

Yeah I'm really enjoying your last few posts about lighting and shaders

Karia
Mar 27, 2013

Self-portrait, Snake on a Plane
Oil painting, c. 1482-1484
Leonardo DaVinci (1452-1591)

I adore this stuff. I played around with 3d modelling a bunch years ago, but never really got into the technical side. Definitely fun to learn what I was actually doing when I clicked that render button.

EDIT: Though technically, since I mostly used physically based renderers (Indigo), a lot of this doesn't really apply, since (unless I'm horribly mistaken) it's mostly just ways to cheat having to run full-on ray tracing since that's so insanely computationally expensive. I never did anything with real-time rendering systems.

Karia fucked around with this message at 05:27 on Jul 5, 2018

EponymousMrYar
Jan 4, 2015

The enemy of my enemy is my enemy.
As someone who's trying to make assets for games without much artistic skill, this stuff is good.

Also obligatory 'YOU'VE GONE MAD WITH SHADERS' post too.


Boksi
Jan 11, 2016
This stuff is fun to read, even though it makes my brain hurt sometimes. And my eyes too, with that last image.
