|
KillHour posted:Noita's GIF export thing works for two reasons: I've been considering trying to code an instant replay feature into my shoving-simulator game for basically exactly these reasons. The whole game is people falling down, and your post is pushing me further in that direction. I don't think I'm going to code in a GIF capture thing, but being able to replay what happened makes it much easier to capture cool moments using the built-in tools on Quest (or your screen recorder of choice on PC). The thing holding me back is that my game is a whole lot of physics and it's non-deterministic, so it's kind of a pain in the rear end and will probably end up eating way more dev time than I expect.
|
# ? Mar 30, 2022 22:59 |
|
|
# ? Apr 18, 2024 15:02 |
|
TIP posted:the thing holding me back is that my game is a whole lot of physics and it's non-deterministic so it's kind of a pain in the rear end and will probably end up eating way more dev time than I expect Non-determinism is going to gently caress you here, yeah. The kind of stuff people want to save are the one-in-a-million things. If it's Unity, your best bet is probably to have each object store its transform info into a circular buffer each physics frame, and make the length of the buffer a configuration option. But then you still have stuff like particles to deal with.
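The circular-buffer idea above can be sketched engine-agnostically; this is illustrative Python rather than Unity C#, and the `ReplayBuffer` name, the 50 Hz tick rate, and the 5-second window are all made up for the example:

```python
from collections import deque

class ReplayBuffer:
    """Fixed-length ring buffer of per-object transforms, one snapshot per physics tick."""

    def __init__(self, max_frames):
        # deque with maxlen silently drops the oldest frame once full
        self.frames = deque(maxlen=max_frames)

    def record(self, objects):
        # objects: {name: (position, rotation)} -- copy so later mutation
        # of the live objects can't corrupt the stored history
        self.frames.append({name: (pos, rot) for name, (pos, rot) in objects.items()})

    def playback(self):
        # replaying is just re-applying the stored transforms in order
        for frame in self.frames:
            yield frame

# a 5-second buffer at a 50 Hz fixed physics step, as a config option
buf = ReplayBuffer(max_frames=5 * 50)
for tick in range(1000):
    buf.record({"crate": ((float(tick), 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))})
```

In-engine you'd call `record` from the fixed-timestep physics callback, and on playback feed the stored transforms back to the renderer, interpolating between snapshots if the display rate differs from the physics rate.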
|
# ? Mar 30, 2022 23:02 |
|
KillHour posted:Non-determinism is going to gently caress you here, yeah. The kind of stuff people want to save are the 1 in a million things. There are some things that make it easier than it might be for other games. I don't really need to worry about a circular buffer, because the replay only needs to cover the start of a round to the end, which is about 15 seconds. And the physics items are already separated out for each station in the game, so I know exactly what I need to be tracking, and it's finite. So I mainly just have to track the positions/rotations of all the physics items, along with all the sounds and particles that are triggered by physics interactions, and make sure those play (it's not really important for the particles to perfectly match the original play). I've actually pulled up some tutorials and information on how to implement it, and I feel like I could get a pretty good version done in a day. But I also feel like I'm going to run into a bunch of weird edge-case problems that drive me insane and eat up days or weeks.
|
# ? Mar 30, 2022 23:16 |
|
Maintaining determinism isn't all that complicated if you start with it in mind, but retrofitting it in is very difficult.
|
# ? Mar 30, 2022 23:20 |
|
Noita works by the circular buffer approach, or something similar to it. You can configure how much history it stores, but it warns you that storing a lot will result in memory bloat. And yeah, I'm not going to be able to replicate Noita's success, because their GIF system is peculiarly suited to that game. My main goal here is to make the player's ship look as cool as possible. My intuition is that that will help drive them to share the ship, because a) it's something they made, and b) it looks cool. So they get the "pride of ownership" from their creation, I get to bill the auto-screenshot thing as helping to make the game more fun (which it legitimately should do), and hopefully the result helps market the game. I'm always annoyed when some product I interact with tries to get me to share my interaction on social media ("tweet about buying this product!" etc), but if I can make it be something that the player thinks to do on their own, that works much better. EDIT: for the VR game, you might consider just tracking the transformation matrices of each transform on each frame. As long as you don't have too many objects, that should be pretty easy to keep track of, and it would let you replay events by just applying the transforms. You would have to make sure that you never instantiate or destroy objects in the middle of a recording, though. And you'd need to manually track events like "thing exploded" or "particle system played".
|
# ? Mar 30, 2022 23:21 |
|
A thing to consider in the 'replay or screenshot' balancing act: besides not glitching a frame by rendering a second frame to an off-screen buffer during gameplay, an advantage of the 'replay' option (by which I mean you capture the position of everything that's going to be in shot, plus maybe 'age' for particle systems, etc., and generate the screenshot later) is that you can allow a user-adjustable camera angle, crank the rendering quality up to max, and so on for the show-off screenshots (maybe make them fake-3D like people's photos are these days). That might feel like cheating a bit ("look at how nice this game looks that doesn't actually look that nice for most users!") but would probably make for a more compelling presentation.
|
# ? Mar 31, 2022 00:42 |
|
The cool cam is a project-saving feature https://thedailywtf.com/articles/the-cool-cam
|
# ? Mar 31, 2022 00:49 |
|
I bought an Intech Grid, which is like a MIDI controller thing. So obviously I had to completely rewrite my visualization software to be a video synthesizer. [Warning - intense flashing lights] https://www.youtube.com/watch?v=1XBgs5xijXg I'm still getting used to it, and I just got it all working in a basic way today, so I'm sure I can make some better looking stuff in the future, but hopefully at least some of it is cool.
|
# ? Mar 31, 2022 07:21 |
|
the more I talked about instant replay the more I felt like I needed it, so I did it https://i.imgur.com/FhFtvnH.mp4 It only took a few hours. The most complicated part was figuring out how to handle my ragdoll; after a lot of fighting with it, I figured out that I needed to replace it with a bare duplicate model to be able to animate it. Still need to add the rest of the particles and sound effects, but it's a solid start.
|
# ? Mar 31, 2022 09:45 |
|
TIP posted:the more I talked about instant replay the more I felt like I needed it, so I did it It feels so good to be like "I'm going to implement something" and then go do it. Good job!
|
# ? Mar 31, 2022 10:18 |
|
It's been a while since I lurked in the game dev threads. Is this mostly just programming or do 3d artists hang out here too?
|
# ? Mar 31, 2022 15:49 |
|
Alterian posted:It's been a while since I lurked in the game dev threads. Is this mostly just programming or do 3d artists hang out here too? I believe this is usually the more programming focused thread. The Making Games Megathread seems to be the more diverse - and usually active - one. e: link https://forums.somethingawful.com/showthread.php?threadid=3506853 chglcu fucked around with this message at 15:55 on Mar 31, 2022 |
# ? Mar 31, 2022 15:53 |
|
Tunicate posted:The cool cam is a project saving feature God, I last read The Daily WTF over a decade ago, but I do still remember that article. And yeah, this is one of those things that isn't obviously necessary, but has the potential to be what makes the project succeed. I wish there weren't so many unknowns when it comes to marketing. Alterian posted:It's been a while since I lurked in the game dev threads. Is this mostly just programming or do 3d artists hang out here too? In addition to the Making Games megathread, you might also check out the 3DCG thread, which has a mix of different kinds of CG artists, including games. It's pretty slow, though.
|
# ? Mar 31, 2022 17:58 |
|
Progress on the Cool Shots system: This is just looking for two things: the player firing a lot of their guns, and the player getting hit a bunch. But the basic shot composition seems to be mostly working. One issue I'm having though is that sometimes I'll get a shot where the player is basically completely obscured by smoke from explosion effects. I have rules for making sure the player isn't blocked by anything solid (by using a spherecast, and moving the camera if it's blocked), but smoke has no colliders and doesn't even write to the Z buffer, so I'm not sure how best to handle that. It may not be doable. TooMuchAbstraction fucked around with this message at 21:31 on Mar 31, 2022 |
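The "spherecast and move the camera if blocked" rule above boils down to a line-of-sight test per candidate camera position. A rough sketch with blockers approximated as spheres (plain Python for illustration; in Unity the segment test would be Physics.SphereCast, and all the names here are made up):

```python
def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 passes within `radius` of `center`
    (a stand-in for a spherecast against a single blocker)."""
    d = [b - a for a, b in zip(p0, p1)]
    seg_len2 = sum(c * c for c in d)
    if seg_len2 == 0.0:
        t = 0.0
    else:
        # parameter of the point on the segment closest to the sphere center
        t = sum((c - a) * dd for a, c, dd in zip(p0, center, d)) / seg_len2
        t = max(0.0, min(1.0, t))
    closest = [a + t * dd for a, dd in zip(p0, d)]
    dist2 = sum((c - q) ** 2 for c, q in zip(center, closest))
    return dist2 <= radius * radius

def pick_camera(candidates, player, blockers):
    """First candidate camera position with a clear line of sight to the player."""
    for cam in candidates:
        if not any(segment_hits_sphere(cam, player, c, r) for c, r in blockers):
            return cam
    return candidates[-1]  # everything blocked: fall back to the last option
```

Smoke is exactly the case this misses, since it has no collider; giving the smoke a proxy sphere in a camera-only layer (as suggested below-thread) would let the same test handle it.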
# ? Mar 31, 2022 21:28 |
|
Those are some really cool shots. You could always add your own collider to your particle effects in a layer that only coolshots cares about. Another option is to use a shader for the coolshots camera that makes the smoke more transparent and maybe does some other fun post processing stuff, but I know your shader situation is already... complicated. The 'burst shot' thing might solve the issue too, since the player can choose a shot that isn't too obscured.
|
# ? Mar 31, 2022 22:02 |
|
One thing I've considered is doing a pre-render where the player's ship is replaced by a special material, and then we look to make sure that material is visible in the render before we do the true render. But a) that sounds complicated, and b) I can't think how to do it without doing a bunch of pixel counting. Even if the test render is done at like 64x64, that's still a lot of pixels to look at. Plus I wouldn't be able to say "okay, if I see exactly magenta, that counts as the player's ship", because a) a ship that's only lightly obscured by smoke is fine, and b) postprocessing can change the colors. ...or item (b) would be an issue, except I can't get postprocessing working on the coolshot camera. It has the same components as the main camera (except no Cinemachine Brain). And yes, I've made sure that the components on the coolshot camera that reference a camera reference the coolshot camera, not the main camera. The only differences I can identify are: - the coolshot camera has a render texture set - the coolshot camera's Camera component is disabled, and it uses Camera.Render to generate images instead - the main camera has the MainCamera tag and MainCamera layer. Changing the first two doesn't fix anything, and the game gets upset if I change the last one. Any ideas?
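For what it's worth, the pixel counting is cheaper than it sounds: a 64x64 test render is 4,096 values, one linear pass. And rendering the ship with a flat unlit key color in the test pass would sidestep the color-shift concern, since nothing tones it before readback. A sketch of the visibility check over a dummy frame (plain Python; in Unity the pixels would come from a ReadPixels on the test RenderTexture, and the 5% threshold is a number I invented):

```python
SHIP_ID = (255, 0, 255)  # magenta key color for the test-render material

def ship_visibility(pixels):
    """Fraction of pixels in a small test render that show the ship material.

    pixels: iterable of (r, g, b) tuples from e.g. a 64x64 pre-render where
    the ship is drawn with a flat unlit key color, so lighting and
    postprocessing can't shift it."""
    pixels = list(pixels)
    hits = sum(1 for p in pixels if p == SHIP_ID)
    return hits / len(pixels)

def shot_ok(pixels, min_visible=0.05):
    # tolerate light smoke: require only a small fraction of ship pixels
    return ship_visibility(pixels) >= min_visible

# toy 64x64 frame: 300 ship pixels, rest background
frame = [SHIP_ID] * 300 + [(0, 0, 0)] * (64 * 64 - 300)
```

The threshold handles the "lightly obscured is fine" case: a shot only fails when almost no key-colored pixels survive.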
|
# ? Apr 1, 2022 02:58 |
|
Sounds like a job for ai image analysis! You need an is ship/is not ship routine. (it's a joke)
|
# ? Apr 1, 2022 03:15 |
|
Well, I've made significant progress in my shader work. Instead of creating a triangle structure, using an append buffer to add them, and retriangulating in a postprocess shader, I've created an index and a vertex buffer that I append to, so I can retriangulate during creation. This cuts the time down to about 1/5 of what it was taking before. I still have a postprocess shader, but now all it does is copy everything into a smaller buffer (the original buffers being sized for the maximum number of triangles marching cubes could theoretically spit out) and call .normalize on the normals, so it's super quick. This all works fine... usually... when I rebuild the mesh slowly. When I pause for a second and then rebuild it, it almost, though not always, builds correctly. But if I'm constantly rebuilding it every frame, a small proportion of random triangles are generated wrong, almost certainly just the indexes being screwy. And I've no real idea why. I thought maybe it was something about the way I reuse many of the buffers leaving old data in there, but I tried creating new buffers every time and that changed nothing. Then I figured maybe something about running multiple copies of the compute shader each frame was doing it, but no, forcing it to run only once a frame also didn't help. C code:
Shader programming is so frustrating.
|
# ? Apr 2, 2022 00:57 |
|
Even if you make a new buffer you have to specifically initialize it to 0 if you want that. Don't know if that will help though.
|
# ? Apr 2, 2022 03:48 |
|
Following up on that whole automated screenshots thing: https://twitter.com/byobattleship/status/1510031444000747522 https://i.imgur.com/C6HQXu9.mp4 I'm pretty happy with this. The photo selection could use some work, but the basic concept seems sound.
|
# ? Apr 2, 2022 04:27 |
|
KillHour posted:Even if you make a new buffer you have to specifically initialize it to 0 if you want that. Don't know if that will help though. Thank you! I had considered that possibility but decided against spending the time testing it because, ultimately, it shouldn't matter whether the buffers are filled with gibberish or not. But this, and not knowing what else to try, prompted me to test it and, sure enough, clearing the vertex buffer fixes the issue. The index buffer can remain gibberish without any effect, but the verts need to be cleared. They shouldn't need to be, though: any vert the index buffer points to should be rewritten, and any vert it doesn't point to is [supposed to be] never used. But at least now I know what to investigate.
|
# ? Apr 3, 2022 15:19 |
|
Uh holy poo poo this new Unity demo. https://www.youtube.com/watch?v=eXYUNrgqWUU That's real-time.
|
# ? Apr 3, 2022 20:13 |
|
KillHour posted:Uh holy poo poo this new Unity demo. What I want to know is, how many person-years would be required for a non-Unity team to replicate this, and how much of that effort would go into wrangling the engine, as opposed to the modeling, animation, shader work, etc. that you would reasonably expect to be required for this kind of short?
|
# ? Apr 3, 2022 22:41 |
|
TooMuchAbstraction posted:What I want to know is, how many person-years would be required for a non-Unity team to replicate this, and how much of that effort would go into wrangling the engine, as opposed to the modeling, animation, shader work, etc. that you would reasonably expect to be required for this kind of short? They said that this one was made without source customization (unlike a lot of their earlier ones) and the features they used to make it would be available in the next version.
|
# ? Apr 3, 2022 22:47 |
|
That cloth simulation still has some issues, but I wanna play with that in real-time so bad. It looks like magic.
|
# ? Apr 3, 2022 22:52 |
|
KillHour posted:They said that this one was made without source customization (unlike a lot of their earlier ones) and the features they used to make it would be available in the next version. Right, I was more thinking about how HDRP is renowned for being borderline impossible to work with. How practical is it for a dev team (not necessarily an indie dev team!) to use this stuff?
|
# ? Apr 3, 2022 23:07 |
|
TooMuchAbstraction posted:Right, I was more thinking about how HDRP is renowned for being borderline impossible to work with. How practical is it for a dev team (not necessarily an indie dev team!) to use this stuff? In my limited experience the learning curve is huge, but once you get it, it's not that bad. Get good with Substance and you can crank out assets pretty fast. But it can be infuriating to get the lighting you want, because there are a lot of variables to learn. However, I've not tried to code around it, so I've got no input on that half of things; I was only building scenes and wiggling sliders.
|
# ? Apr 3, 2022 23:14 |
|
TooMuchAbstraction posted:Right, I was more thinking about how HDRP is renowned for being borderline impossible to work with. How practical is it for a dev team (not necessarily an indie dev team!) to use this stuff? IME, SRP used to be a horrible nightmare because half of it was just broken but now I won't go back to the built in renderer ever. I prefer URP just because of the simpler workflow (and the fact that I don't need the fancy features), but HDRP isn't too bad.
|
# ? Apr 3, 2022 23:16 |
|
Anyone know how to implement a physics delta time? I understand how to calculate a normal delta time, but from what I understand, it's better to have a fixed delta for physics, right? Any good material I could read on the subject?
|
# ? Apr 6, 2022 06:57 |
|
I think this would get you started, at least. It's ancient, and there might be something better out there by now, but it was the first thing that came to mind. Or maybe there isn't anything better, since it's still result number 2 on Google.
ZombieApostate fucked around with this message at 09:39 on Apr 6, 2022 |
# ? Apr 6, 2022 09:29 |
|
If you don't have a fixed physics timestep then your physics behaviour will be framerate dependent and it can create a lot of edge cases. Like: Why can't my character fit through this gap when I'm using my 144Hz monitor, but I can on my 60Hz monitor? Why can I glitch through thin walls if I look at smoke and drop the framerate to 10 FPS? Etc. That page is still quite relevant if you're implementing it yourself.
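The usual pattern for this (it's the one Glenn Fiedler's well-known "Fix Your Timestep" article describes) is an accumulator: render frames take however long they take, but physics only ever advances in fixed increments, with leftover time carried forward. A minimal sketch, with the 60 Hz rate and the 0.25 s clamp as arbitrary example values:

```python
FIXED_DT = 1.0 / 60.0  # physics always steps at 60 Hz, however fast rendering runs

def run_frame(state, frame_time, accumulator, step):
    """Advance physics in fixed increments; leftover time carries to the next frame.

    step(state, dt) returns the new physics state; frame_time is the variable
    wall-clock duration of the last render frame."""
    accumulator += min(frame_time, 0.25)  # clamp huge hitches to avoid a death spiral
    while accumulator >= FIXED_DT:
        state = step(state, FIXED_DT)
        accumulator -= FIXED_DT
    # accumulator / FIXED_DT is the alpha for interpolating the rendered state
    return state, accumulator

def step(pos, dt):
    return pos + 10.0 * dt  # toy physics: move at a constant 10 units/sec

# one simulated second at 144 Hz and at 60 Hz give the same physics result
pos_a, acc_a = 0.0, 0.0
for frame_time in [1 / 144] * 144:
    pos_a, acc_a = run_frame(pos_a, frame_time, acc_a, step)
pos_b, acc_b = 0.0, 0.0
for frame_time in [1 / 60] * 60:
    pos_b, acc_b = run_frame(pos_b, frame_time, acc_b, step)
```

The leftover `accumulator / FIXED_DT` fraction is what you'd use to interpolate rendering between the last two physics states, so motion stays smooth at display rates that don't divide evenly into the physics rate.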
|
# ? Apr 6, 2022 13:56 |
|
Maybe someone better at math than me can help here. I'm using Unity's VFX Graph to create a parametric spirograph-like pattern. I have that working pretty well for a basic sin/cos thing, and you can play around with the periods of each, yadda yadda. Now I want a second line following the first one, offset by a fixed distance. I'm doing that by getting the normal of the curve like this: This essentially breaks down to the following algorithm: For each of sine and cosine: - Get the tangent / cotangent at the point - Get the sign of the output and the tangent slope. If they agree (both positive or both negative), do nothing. If they disagree, flip the tangent slope. Then I treat the slopes as the inputs to a 2D vector that I normalize and multiply by some distance. This is my offset vector from the original function. This works great when the periods of sine and cosine are the same (so a circle) [Original graph is in the center, offset graph is outside.] But if the periods are different, the vector stays on the outside, not the same side. Here's an example with sine having twice the frequency of cosine: I want the offset graph to stay on the same side as the original, so the bottom loop should have the offset graph smaller and inside instead of larger and outside. Here are some graphs with colors based on sign to help visualize what might be happening. Red channel is sine, green is cosine. The original is colored by the sign of the sine/cosine functions, and the offset graph is colored by the sign of the tangent functions: Help, I'm real bad at trig!
|
# ? Apr 11, 2022 03:01 |
If you take the derivative of your x and y functions with respect to t (resulting in dx/dt and dy/dt) shouldn't you be able to find the velocity direction (dx/dt, dy/dt) then your left direction is (-dy/dt, dx/dt) and the right direction would be (dy/dt,-dx/dt). Maybe swap left/right. That might be overcomplicating things and I'm a bit sleepy so feel free to ignore me. Ruzihm fucked around with this message at 06:37 on Apr 11, 2022 |
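Concretely, for x(t) = sin(a*t), y(t) = cos(b*t) the derivatives are dx/dt = a*cos(a*t) and dy/dt = -b*sin(b*t), and rotating that velocity by 90 degrees gives a normal that stays on one consistent side of the curve, which is exactly where the per-axis sign-flipping version goes wrong. A sketch of this suggestion (the a, b, and distance values are arbitrary):

```python
import math

def offset_point(t, a=2.0, b=1.0, dist=0.2):
    """Point on x = sin(a*t), y = cos(b*t), displaced along the left normal.

    The normal comes from rotating the velocity (dx/dt, dy/dt) by 90 degrees,
    so it stays on one side of the curve instead of flipping with the sign
    of the coordinate functions."""
    x, y = math.sin(a * t), math.cos(b * t)
    dx, dy = a * math.cos(a * t), -b * math.sin(b * t)  # analytic derivatives
    length = math.hypot(dx, dy)
    if length == 0.0:
        return x, y  # cusp: velocity vanishes, so the normal is undefined
    nx, ny = -dy / length, dx / length  # unit left normal
    return x + dist * nx, y + dist * ny
```

With a == b (a circle) this reproduces the working case: the offset curve is a concentric circle of radius 1 + dist. Swapping the sign of nx/ny gives the right normal instead of the left.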
|
# ? Apr 11, 2022 06:09 |
|
So, more compute shader learning/fiddling. I was doing this to find matching verts, so as to not add duplicates, and to accumulate the normals while generating a mesh: C code:
Adding the break; statement that's commented out up there seemed to fix the badly formed triangles, but it also greatly screwed up the normals. A bunch of fiddling and reading later, I've learned that this technique is probably just not tenable in a shader. What I think was happening is that multiple verts would be checked/built simultaneously by different threads, and if two threads happened to be adding the same vert to the array for the first time, that vert would get added twice. When I started working on it, I thought the InterlockedAdd would magically prevent that, which I'm going to blame on having taken roughly a year off from programming before I jumped into this project (mixed with most of my shader learning being more than 10 years out of date). What I really need is a mutex, but I know those don't exist in shaderland, and probably never will, due to the fundamental way shaders work. At least if I'm understanding what I've read correctly. Welp, lesson learned, at least. So now I'm back to my original plan of doing the retriangulation in a post-process shader. And that works just perfect... except for the performance. It is considerably slower than doing the retriangulation during mesh creation. It was taking about 15ms to generate the mesh, including retriangulation, on my test case as a single step. Cut into two steps, it takes about 8ms for step one and 2500ms for step two, on a mesh that has only 1246 verts once properly triangulated and 7200 before that. C code:
|
# ? Apr 11, 2022 17:24 |
|
What's with the complete lack of comments and meaningful names in your code? Clear and understandable code is easy code to identify higher-level improvements in. Higher-level improvements like "not using an n^2 deduping algorithm where you add all the verts one-at-a-time to an array and then check the entire array for duplicates with every new vert you see".
|
# ? Apr 11, 2022 18:06 |
|
Ruzihm posted:If you take the derivative of your x and y functions with respect to t (resulting in dx/dt and dy/dt) shouldn't you be able to find the velocity direction (dx/dt, dy/dt) then your left direction is (-dy/dt, dx/dt) and the right direction would be (dy/dt,-dx/dt). Maybe swap left/right. I figured it out. The issue was "how the gently caress do I figure out the derivative of this function with only the tools available in VFX Graph?" and the coding questions thread got me sorted out. Here's what I came up with from that [Flashing Lights Warning] https://www.youtube.com/watch?v=DV3holRHRFM
|
# ? Apr 12, 2022 04:47 |
|
Jabor posted:What's with the complete lack of comments and meaningful names in your code? Clear and understandable code is easy code to identify higher-level improvements in. Well, the lack of comments is mostly because the code is in flux and changes every other time I look at it, so any comments I add just end up out of date and wrong very quickly. As far as variable names go, I thought they mostly were named meaningfully. I guess _Counters[] is nonsense; that's commented on where I declare it C code:
Perhaps I'm just bad at naming things, though; I'd certainly believe it. I've barely ever worked with others, so it's not something I've ever worried about too much. Jabor posted:Higher-level improvements like "not using an n^2 deduping algorithm where you add all the verts one-at-a-time to an array and then check the entire array for duplicates with every new vert you see". Yes, that is definitely the slowest part of the code by a large margin; I've just no idea how else I could do it. If I were working in main memory I'd try a dictionary/map, which I think would probably be considerably faster, but no such construct exists in HLSL, and it's the only idea I've had that might help.
|
# ? Apr 12, 2022 16:38 |
|
Spek posted:Other than that everything seems to be either adequately named or is just an indexing variable(which, admittedly I should probably name, but I rarely do) Like they say, there's only two hard things in computer science: race conditions, naming things, race conditions, and off-by-one errors. I've done little-to-nothing with shaders so I'm speaking from ignorance here, but can you delete vertices efficiently during a deduplication step? If so, I would be inclined to generate a sorted list of all the vertices (n log n, not n^2 !) and then run through the sorted list with a final uniq-like step. Also, can your shaders access thread(? core?)-local storage? If so, you might be able to have each thread return its own sorted list of vertices, and then the main-ish thread could perform a final merge step of all the sorted lists, skipping duplicates as it goes.
|
# ? Apr 12, 2022 20:10 |
|
Grace Baiting posted:Like they say, there's only two hard things in computer science: race conditions, naming things, race conditions, and off-by-one errors. Grace Baiting posted:I've done little-to-nothing with shaders so I'm speaking from ignorance here, but can you delete vertices efficiently during a deduplication step? If so, I would be inclined to generate a sorted list of all the vertices (n log n, not n^2 !) and then run through the sorted list with a final uniq-like step. Definitely going to ponder/research this for a day or two though, maybe I'll figure something out. Grace Baiting posted:Also, can your shaders access thread(? core?)-local storage? If so, you might be able to have each thread return its own sorted list of vertices, and then the main-ish thread could perform a final merge step of all the sorted lists, skipping duplicates as it goes. Not through any clean way that I know of; I could certainly give the shader one big buffer to use and then let each thread access/sort a subset of it. I could also just have the whole thing send a list of all the verts back to main memory and do the retriangulation/deduplication there. That's not even that inefficient; it's faster than what I'm currently doing, at least. But I'm trying hard to avoid it for two main reasons. 1: I just think it'd be neat to do it all in-shader, and as this is just a hobby project I can waste time chasing waterfalls. 2: most of the time I have no need to ever get the data back from the graphics card, so it'd be nice not to have to retrieve it, filter it, and send it back to the GPU. Currently I only need to read it back if I'm making physics meshes; other than that, it builds on the GPU, it renders on the GPU, and the CPU doesn't need to know squat about it.
|
# ? Apr 12, 2022 22:49 |
|
|
|
Spek posted:I don't think there's any way to do this. But there certainly may be. I'm very ignorant of anything more than basic shader programming myself. All the resources I've found for learning shaders are either super basic introductory stuff, or indecipherable advanced stuff that's way beyond my comprehension. Is there some way you can sort vertices, and then arrange your loop so that it exits early once it encounters a vertex that it doesn't need to care about? Maybe maintain an index variable or two to help define the boundaries of the relevant portion of the array.
|
# ? Apr 12, 2022 23:01 |