TIP
Mar 21, 2006

Your move, creep.



KillHour posted:

Noita's GIF export thing works for two reasons:

1: There is a specific gameplay mechanic that is very cool, and often hilarious: Dying. Dying in Noita is almost always cool and hilarious. Except when it's frustrating, but often even then.
2: Sharing that mechanic is self-driving. The person who experienced it says "holy poo poo I need to share this with the world" and the people seeing it say "holy poo poo what game is that? That's amazing."

So the question is - do your "cool moments" have the same kind of emergent property that makes people think "holy shitballs that's cool", or is it "yep, there's another ship that blew up"? It doesn't need to reach the same level of emergence and :psyduck: as Noita, but it needs to evoke that same feeling. If it does, making it as easy as possible to share those moments is going to sell the game for you. If it doesn't, it's a party trick at best.

I've been considering trying to code an instant replay feature into my shoving-simulator game for basically exactly these reasons; the whole game is people falling down :v:

your post is pushing me further in that direction. I don't think I'm going to code in a GIF capture thing, but being able to replay what happened makes it much easier to capture cool moments using the built-in tools on Quest (or your screen recorder of choice on PC)

the thing holding me back is that my game is a whole lot of physics and it's non-deterministic so it's kind of a pain in the rear end and will probably end up eating way more dev time than I expect


KillHour
Oct 28, 2007


TIP posted:

the thing holding me back is that my game is a whole lot of physics and it's non-deterministic so it's kind of a pain in the rear end and will probably end up eating way more dev time than I expect

Non-determinism is going to gently caress you here, yeah. The kind of stuff people want to save are the 1 in a million things.

If it's Unity, your best bet is probably to have each object store its transform info into a circular buffer each physics frame and make the length of the buffer a configuration option, but then you have stuff like particles to deal with still.
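Something like this, as a rough Unity-flavored sketch (the component name, the Sample struct, and the bufferSeconds option are all made up for illustration):

C# code:
using UnityEngine;

// Illustrative sketch only: record this object's transform once per physics step
// into a fixed-size ring buffer whose length is configurable in seconds.
public class TransformRecorder : MonoBehaviour
{
    public float bufferSeconds = 15f;   // the "length of the buffer" configuration option

    struct Sample { public Vector3 position; public Quaternion rotation; }

    Sample[] buffer;
    int head;

    void Start()
    {
        // One slot per physics frame for the configured window.
        buffer = new Sample[Mathf.CeilToInt(bufferSeconds / Time.fixedDeltaTime)];
    }

    void FixedUpdate()
    {
        buffer[head] = new Sample { position = transform.position, rotation = transform.rotation };
        head = (head + 1) % buffer.Length;   // overwrite the oldest sample once full
    }
}
Playback would then walk the buffer from the oldest sample forward and apply each one to a non-physics copy of the object.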

TIP
Mar 21, 2006

Your move, creep.



KillHour posted:

Non-determinism is going to gently caress you here, yeah. The kind of stuff people want to save are the 1 in a million things.

If it's Unity, your best bet is probably to have each object store its transform info into a circular buffer each physics frame and make the length of the buffer a configuration option, but then you have stuff like particles to deal with still.

there are some things that make it easier than it might be for other games

like I don't really need to worry about a circular buffer because the replay only needs to cover from the start of a round to the end, which is about 15 seconds

and the physics items are already separated out for each station in the game, so I know exactly what things I need to be tracking and it's finite

so I mainly just have to track the positions/rotations of all the physics items, along with all the sounds and particles that are triggered by physics interactions, and make sure those play (it's not really important for the particles to perfectly match the original play)

I've actually pulled up some tutorials and information on how to implement it, and I feel like I could get a pretty good version done in a day

but I also feel like I'm gonna end up running into a bunch of weird edge case problems that drive me insane and eat up days or weeks figuring them out :v:
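the sound/particle side I'm picturing as just a timestamped event log, roughly like this (a quick sketch, none of these names are from my actual project):

C# code:
using System.Collections.Generic;
using UnityEngine;

// Sketch: log (time offset, effect id) pairs during the round, then re-fire them
// at the same offsets during the replay.
public class ReplayEventLog
{
    struct Entry { public float time; public int effectId; }

    readonly List<Entry> entries = new List<Entry>();
    float startTime;

    public void BeginRound() { entries.Clear(); startTime = Time.time; }

    public void Record(int effectId) =>
        entries.Add(new Entry { time = Time.time - startTime, effectId = effectId });

    // During playback, fire everything whose timestamp falls inside the current slice.
    public void Playback(float fromTime, float toTime, System.Action<int> fire)
    {
        foreach (var e in entries)
            if (e.time >= fromTime && e.time < toTime)
                fire(e.effectId);
    }
}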

more falafel please
Feb 26, 2005

forums poster

Maintaining determinism isn't all that complicated if you start with it in mind, but retrofitting it in is very difficult.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
Noita uses the circular-buffer approach, or something similar to it. You can configure how much history it stores, but it warns you that storing a lot will result in memory bloat.

And yeah, I'm not going to be able to replicate Noita's success, because their GIF system is peculiarly suited to that game. My main goal here is to make the player's ship look as cool as possible. My intuition is that that will help drive them to share the ship, because a) it's something they made, and b) it looks cool. So they get the "pride of ownership" from their creation, I get to bill the auto-screenshot thing as helping to make the game more fun (which it legitimately should do), and hopefully the result helps market the game. I'm always annoyed when some product I interact with tries to get me to share my interaction on social media ("tweet about buying this product!" etc), but if I can make it be something that the player thinks to do on their own, that works much better.

EDIT: for the VR game, you might consider just tracking the transformation matrices of each transform on each frame. As long as you don't have too many objects, that should be pretty easy to keep track of, and it would let you replay events by just applying the transforms. You would have to make sure that you never instantiate or destroy objects in the middle of a recording, though. And you'd need to manually track events like "thing exploded" or "particle system played".

roomforthetuna
Mar 22, 2005

I don't need to know anything about virii! My CUSTOM PROGRAM keeps me protected! It's not like they'll try to come in through the Internet or something!
A thing to consider in the "replay or screenshot" balancing act, too: besides maybe avoiding a glitched frame from rendering a second frame to an off-screen buffer during gameplay, an advantage of the 'replay' option (by which I mean you capture the position of everything that's going to be in shot, plus maybe 'age' for particle systems, etc., and generate the screenshot later) is that you can allow a user-adjustable camera angle, crank the rendering quality up to max, and so on for the show-off screenshots (maybe make them fake-3D like people's photos are these days). Which might feel like cheating a bit ("look at how nice this game looks that doesn't actually look that nice for most users!") but would probably make for a more compelling presentation.

Tunicate
May 15, 2012

The cool cam is a project saving feature

https://thedailywtf.com/articles/the-cool-cam

KillHour
Oct 28, 2007


I bought an Intech Grid, which is like a MIDI controller thing. So obviously I had to completely rewrite my visualization software to be a video synthesizer.

[Warning - intense flashing lights]
https://www.youtube.com/watch?v=1XBgs5xijXg

I'm still getting used to it, and I just got it all working in a basic way today, so I'm sure I can make some better looking stuff in the future, but hopefully at least some of it is cool.

TIP
Mar 21, 2006

Your move, creep.



the more I talked about instant replay the more I felt like I needed it, so I did it

https://i.imgur.com/FhFtvnH.mp4

:toot:

only took a few hours, the most complicated part was figuring out how to handle my ragdoll

after a lot of fighting with it I figured out that I needed to replace it with a bare duplicate model to be able to animate it

still need to add the rest of the particles and sound effects but it's a solid start :)
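for the curious, the ragdoll swap boils down to sampling the ragdoll's bone transforms while physics runs, then puppeting a rigidbody-free duplicate from those samples during the replay. stripped-down sketch with made-up names:

C# code:
using System.Collections.Generic;
using UnityEngine;

// Record the live ragdoll's bones each physics step; play them back onto a
// duplicate model that has no rigidbodies or joints, so nothing fights the animation.
public class RagdollReplay : MonoBehaviour
{
    public Transform[] ragdollBones;     // bones on the physics-driven ragdoll
    public Transform[] duplicateBones;   // matching bones on the bare duplicate

    readonly List<Pose[]> frames = new List<Pose[]>();

    void FixedUpdate()                   // in practice, only record while a round is running
    {
        var frame = new Pose[ragdollBones.Length];
        for (int i = 0; i < ragdollBones.Length; i++)
            frame[i] = new Pose(ragdollBones[i].position, ragdollBones[i].rotation);
        frames.Add(frame);
    }

    public void ApplyFrame(int index)    // called once per replay frame
    {
        var frame = frames[index];
        for (int i = 0; i < duplicateBones.Length; i++)
            duplicateBones[i].SetPositionAndRotation(frame[i].position, frame[i].rotation);
    }
}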

KillHour
Oct 28, 2007


TIP posted:

the more I talked about instant replay the more I felt like I needed it, so I did it

https://i.imgur.com/FhFtvnH.mp4

:toot:

only took a few hours, the most complicated part was figuring out how to handle my ragdoll

after a lot of fighting with it I figured out that I needed to replace it with a bare duplicate model to be able to animate it

still need to add the rest of the particles and sound effects but it's a solid start :)

It feels so good to be like "I'm going to implement something" and then go do it. Good job!

Alterian
Jan 28, 2003

It's been a while since I lurked in the game dev threads. Is this mostly just programming or do 3d artists hang out here too?

chglcu
May 17, 2007

I'm so bored with the USA.

Alterian posted:

It's been a while since I lurked in the game dev threads. Is this mostly just programming or do 3d artists hang out here too?

I believe this is usually the more programming-focused thread. The Making Games Megathread seems to be the more diverse - and usually active - one.

e: link https://forums.somethingawful.com/showthread.php?threadid=3506853

chglcu fucked around with this message at 15:55 on Mar 31, 2022

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

Tunicate posted:

The cool cam is a project saving feature

https://thedailywtf.com/articles/the-cool-cam

God, I last read The Daily WTF over a decade ago, but I do still remember that article. And yeah, this is one of those things that isn't obviously necessary, but has the potential to be what makes the project succeed. I wish there weren't so many unknowns when it comes to marketing :v:


Alterian posted:

It's been a while since I lurked in the game dev threads. Is this mostly just programming or do 3d artists hang out here too?

In addition to the Making Games megathread, you might also check out the 3DCG thread, which has a mix of different kinds of CG artists, including games. It's pretty slow, though.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
Progress on the Cool Shots system:





This is just looking for two things: the player firing a lot of their guns, and the player getting hit a bunch. But the basic shot composition seems to be mostly working. One issue I'm having though is that sometimes I'll get a shot where the player is basically completely obscured by smoke from explosion effects. I have rules for making sure the player isn't blocked by anything solid (by using a spherecast, and moving the camera if it's blocked), but smoke has no colliders and doesn't even write to the Z buffer, so I'm not sure how best to handle that. It may not be doable.
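For reference, the "not blocked by anything solid" rule is roughly the spherecast check below; this is a simplified sketch with made-up names, not the game's actual code:

C# code:
using UnityEngine;

public static class CoolShotCamera
{
    // Spherecast from the player toward the desired camera spot; if something solid
    // is in the way, pull the camera in front of the first hit instead.
    public static Vector3 ResolveCameraPosition(Vector3 desiredCamPos, Vector3 playerPos,
                                                float probeRadius, LayerMask solidMask)
    {
        Vector3 toCamera = desiredCamPos - playerPos;
        float distance = toCamera.magnitude;

        if (Physics.SphereCast(playerPos, probeRadius, toCamera.normalized,
                               out RaycastHit hit, distance, solidMask))
        {
            return playerPos + toCamera.normalized * Mathf.Max(hit.distance - probeRadius, 1f);
        }
        return desiredCamPos;
    }
}
Smoke never shows up in that check because, as noted, it has no colliders.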

TooMuchAbstraction fucked around with this message at 21:31 on Mar 31, 2022

KillHour
Oct 28, 2007


Those are some really cool shots. :coolspot: You could always add your own collider to your particle effects in a layer that only coolshots cares about. Another option is to use a shader for the coolshots camera that makes the smoke more transparent and maybe does some other fun post processing stuff, but I know your shader situation is already... complicated.

The 'burst shot' thing might solve the issue too, since the player can choose a shot that isn't too obscured.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
One thing I've considered is doing a pre-render where the player's ship is replaced by a special material, and then we look to make sure that material is visible in the render before we do the true render. But a) that sounds complicated, and b) I can't think how to do it without doing a bunch of pixel counting. Even if the test render is done at like 64x64, that's still a lot of pixels to look at. Plus I wouldn't be able to say "okay, if I see exactly magenta, that counts as the player's ship", because a) a ship that's only lightly obscured by smoke is fine, and b) postprocessing can change the colors.

...or item (b) would be an issue, except I can't get postprocessing working on the coolshot camera. It has the same components as the main camera has (except no Cinemachine Brain). And yes, I've made sure that the components on the coolshot camera that reference a camera reference the coolshot camera, not the main camera. The only differences I can identify are:

- the coolshot camera has a render texture set
- the coolshot camera's Camera component is disabled, and it uses Camera.Render to generate images instead.
- the main camera has the MainCamera tag and MainCamera layer.

Changing the first two doesn't fix anything, and the game gets upset if I change the last one. Any ideas?
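(For concreteness, the mask pre-render idea from the first paragraph would look something like the sketch below. Everything here is hypothetical rather than code from the game: swap the ship onto an unlit mask material, render the coolshot camera into a tiny temporary render texture, read it back, and count mask-colored pixels.)

C# code:
using UnityEngine;

public static class VisibilityProbe
{
    // Returns the fraction of a 64x64 test render covered by the mask material.
    public static float VisibleFraction(Camera probeCam, Renderer[] shipRenderers, Material maskMaterial)
    {
        // Swap every ship renderer to the mask material, remembering the originals.
        var originals = new Material[shipRenderers.Length][];
        for (int i = 0; i < shipRenderers.Length; i++)
        {
            originals[i] = shipRenderers[i].sharedMaterials;
            var masked = new Material[originals[i].Length];
            for (int j = 0; j < masked.Length; j++) masked[j] = maskMaterial;
            shipRenderers[i].sharedMaterials = masked;
        }

        var rt = RenderTexture.GetTemporary(64, 64, 24);
        var prevTarget = probeCam.targetTexture;
        probeCam.targetTexture = rt;
        probeCam.Render();

        // Read the tiny render back and count pixels that still look like the mask color.
        var prevActive = RenderTexture.active;
        RenderTexture.active = rt;
        var tex = new Texture2D(64, 64, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, 64, 64), 0, 0);
        int hits = 0;
        foreach (var p in tex.GetPixels())
            if (p.r > 0.8f && p.b > 0.8f && p.g < 0.2f) hits++;   // "close enough to magenta"

        // Put everything back.
        for (int i = 0; i < shipRenderers.Length; i++)
            shipRenderers[i].sharedMaterials = originals[i];
        probeCam.targetTexture = prevTarget;
        RenderTexture.active = prevActive;
        RenderTexture.ReleaseTemporary(rt);
        Object.Destroy(tex);

        return hits / (64f * 64f);
    }
}
The ReadPixels call stalls the GPU, so a check like this would only be worth running right before a candidate shot, not every frame.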

xzzy
Mar 5, 2009

Sounds like a job for ai image analysis! You need an is ship/is not ship routine.

(it's a joke)

Spek
Jun 15, 2012

Bagel!
Well, I've made significant progress in my shader work. Instead of creating a triangle structure, using an append buffer to add triangles, and retriangulating in a postprocess shader, I've created an index buffer and a vertex buffer that I add to, so I can retriangulate during creation.

This cuts the time down to about 1/5 of what it was taking before. I still have a postprocess shader, but now all it does is copy everything into a smaller buffer (the original buffers being sized for the maximum number of triangles marching cubes could theoretically spit out) and call normalize() on the normals. So it's super quick.

This all works fine... usually... when I rebuild the mesh slowly. When I pause for a second and then rebuild, it almost always (though not always) builds correctly, but if I'm constantly rebuilding it every frame, a small proportion of random triangles get generated wrong, almost certainly just the indexes being screwy. But I've no real idea why.



I thought maybe it was something about the way I reuse many of the buffers leaving old data in there, but I tried creating new buffers every time and that made no difference. Then I figured maybe something about running multiple copies of the compute shader each frame was doing it, but no, forcing it to only run once a frame also didn't help.

C code:
uint3 vIndex = 4294967295u;
			
for(uint www = 0; www < 3; www++)
{
	for(uint vvv = 0; vvv < _Counters[0]; vvv++)
	{
		if(all(newVerts[www].position == _SVOVerts[vvv].position))
		{
			_SVOVerts[vvv].normal += newVerts[www].normal;
			vIndex[www] = vvv;
			continue;
		}
	}
		
	if(vIndex[www] == 4294967295u)
	{	//Counters[0] holds the highest index yet created for a vertex
		InterlockedAdd(_Counters[0], 1, vIndex[www]);
	//	vIndex[www] = _TestVertexBuffer.IncrementCounter();		
		_SVOVerts[vIndex[www]] = newVerts[www];
	}
}
_SVOIndexes.Append(vIndex);
InterlockedAdd(_Counters[1], 1);
//Counters[1] holds the highest index yet created for a triangle
The only thing I can think of is that it's some sort of race condition while spawning the triangle indices, but InterlockedAdd() should prevent that from happening, if I understand how that function works correctly. Most examples I've found use [structuredbuffer].IncrementCounter() for this instead, but that is always, under all circumstances, returning precisely 0 for me for some reason.

Shader programming is so frustrating.

KillHour
Oct 28, 2007


Even if you make a new buffer, you have to specifically initialize it to 0 if you want it zeroed. Don't know if that will help, though.
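On the C# side that's just something like this (illustrative; shown here for a counters buffer only):

C# code:
using UnityEngine;

// A ComputeBuffer's contents are undefined when it's created, so write zeros into it
// explicitly before the dispatch that relies on InterlockedAdd starting from 0.
public class CounterBufferExample : MonoBehaviour
{
    ComputeBuffer counters;   // mirrors RWBuffer<uint> _Counters in the shader

    void OnEnable()
    {
        counters = new ComputeBuffer(3, sizeof(uint));
    }

    public void ResetForDispatch()
    {
        counters.SetData(new uint[3]);   // zeroed managed array -> known-zero GPU contents
    }

    void OnDisable()
    {
        counters.Release();
    }
}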

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe
Following up on that whole automated screenshots thing:

https://twitter.com/byobattleship/status/1510031444000747522

https://i.imgur.com/C6HQXu9.mp4

I'm pretty happy with this. The photo selection could use some work, but the basic concept seems sound.

Spek
Jun 15, 2012

Bagel!

KillHour posted:

Even if you make a new buffer, you have to specifically initialize it to 0 if you want it zeroed. Don't know if that will help, though.

Thank you! I had considered that possibility but decided against spending the time testing it because, ultimately, it shouldn't matter whether the buffers are filled with gibberish or not. But this, and not knowing what else to try, prompted me to test it and, sure enough, clearing the vertex buffer fixes the issue. The index buffer can remain gibberish without any effect, but the verts need to be cleared. They shouldn't need to be cleared: any vert that the index buffer points to should be rewritten, and any that it doesn't point to is [supposed to be] never used. But at least now I know what to investigate.

KillHour
Oct 28, 2007


Uh holy poo poo this new Unity demo.

https://www.youtube.com/watch?v=eXYUNrgqWUU

That's real-time.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

KillHour posted:

Uh holy poo poo this new Unity demo.

https://www.youtube.com/watch?v=eXYUNrgqWUU

That's real-time.

What I want to know is, how many person-years would be required for a non-Unity team to replicate this, and how much of that effort would go into wrangling the engine, as opposed to the modeling, animation, shader work, etc. that you would reasonably expect to be required for this kind of short?

KillHour
Oct 28, 2007


TooMuchAbstraction posted:

What I want to know is, how many person-years would be required for a non-Unity team to replicate this, and how much of that effort would go into wrangling the engine, as opposed to the modeling, animation, shader work, etc. that you would reasonably expect to be required for this kind of short?

They said that this one was made without source customization (unlike a lot of their earlier ones) and the features they used to make it would be available in the next version.

xzzy
Mar 5, 2009

That cloth simulation got some issues still but I wanna play with that in real-time so bad, it looks magic.

TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

KillHour posted:

They said that this one was made without source customization (unlike a lot of their earlier ones) and the features they used to make it would be available in the next version.

Right, I was more thinking about how HDRP is renowned for being borderline impossible to work with. How practical is it for a dev team (not necessarily an indie dev team!) to use this stuff?

xzzy
Mar 5, 2009

TooMuchAbstraction posted:

Right, I was more thinking about how HDRP is renowned for being borderline impossible to work with. How practical is it for a dev team (not necessarily an indie dev team!) to use this stuff?

In my limited experience the learning curve is huge, but once you get it, it's not that bad. Get good with Substance and you can crank out assets pretty fast. But it can be infuriating to get the lighting you want, because there are a lot of variables to learn.

However, I've not tried to code around it, so I've got no input on that half of things; I was only building scenes and wiggling sliders.

KillHour
Oct 28, 2007


TooMuchAbstraction posted:

Right, I was more thinking about how HDRP is renowned for being borderline impossible to work with. How practical is it for a dev team (not necessarily an indie dev team!) to use this stuff?

IME, SRP used to be a horrible nightmare because half of it was just broken, but now I won't go back to the built-in renderer ever. I prefer URP just because of the simpler workflow (and the fact that I don't need the fancy features), but HDRP isn't too bad.

Big Scary Owl
Oct 1, 2014

by Fluffdaddy
Anyone know how to implement a physics delta time? I understand how to calculate a normal delta time, but from what I understand it's better to have a fixed delta for physics, right? Any good material I could read on the subject?

ZombieApostate
Mar 13, 2011
Sorry, I didn't read your post.

I'm too busy replying to what I wish you said

:allears:
I think this would get you started at least. It's ancient, and there might be something better out there by now, but it was the first thing that came to mind. Or maybe there isn't anything better since it's still result number 2 from google.

ZombieApostate fucked around with this message at 09:39 on Apr 6, 2022

Spatial
Nov 15, 2007

If you don't have a fixed physics timestep then your physics behaviour will be framerate-dependent, and that can create a lot of edge cases. Like: why can't my character fit through this gap when I'm using my 144Hz monitor, but I can on my 60Hz monitor? Why can I glitch through thin walls if I look at smoke and drop the framerate to 10 FPS? Etc.

That page is still quite relevant if you're implementing it yourself.
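The core of a fixed timestep is just an accumulator loop, something like this bare-bones sketch (not engine code; Unity's FixedUpdate, for example, does the equivalent internally):

C# code:
// Physics always advances in equal FixedDelta steps, no matter what the render framerate does.
public class FixedStepLoop
{
    const float FixedDelta = 1f / 60f;   // illustrative 60 Hz physics rate
    float accumulator;

    public void Frame(float frameDelta)  // frameDelta is the variable per-render-frame time
    {
        accumulator += frameDelta;
        while (accumulator >= FixedDelta)
        {
            StepPhysics(FixedDelta);     // identical step size every time
            accumulator -= FixedDelta;
        }
        // Optionally render by interpolating between the previous and current physics
        // states, using accumulator / FixedDelta as the blend factor.
    }

    void StepPhysics(float dt) { /* advance the simulation by dt */ }
}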

KillHour
Oct 28, 2007


Maybe someone better at math than me can help here. I'm using Unity's VFX Graph thing to create a parametric spirograph-like pattern. I have that working pretty well for a basic sin/cos thing and you can play around with the periods of each yadda yadda.

Anyways, now I want to have a second line following the first one, offset by a fixed distance. I'm doing that by getting the normal of the curve like this:



This essentially breaks down to the following algorithm:

For each of sine and cosine:
- Get the tangent / cotangent at the point
- Get the sign of the output and the tangent slope. If they agree (both positive or both negative), do nothing. If they disagree, flip the tangent slope.

Then I treat the slopes as the inputs to a 2D vector that I normalize and multiply by some distance. This is my offset vector from the original function. This works great when the periods of sin and cos are the same (so a circle).



[Original graph is in the center, offset graph is outside.]


But if the periods are different, I get the vector staying on the outside, not the same side. Here's an example with sine having twice the frequency of cosine:



I want the offset graph to stay on the same side as the original, so the bottom loop should have the offset graph smaller and inside instead of larger and outside. Here are some graphs with colors based on sign to help visualize what might be happening. Red channel is sine, green is cosine. Original is colored by the sign of the sine/cosine functions and the offset graph is colored by the sign of the tangent functions:






Help I'm real bad at trig!

Ruzihm
Aug 11, 2010

Group up and push mid, proletariat!


If you take the derivative of your x and y functions with respect to t (resulting in dx/dt and dy/dt), shouldn't you be able to find the velocity direction (dx/dt, dy/dt)? Then your left direction is (-dy/dt, dx/dt) and the right direction would be (dy/dt, -dx/dt). Maybe swap left/right.

That might be overcomplicating things and I'm a bit sleepy so feel free to ignore me.
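Concretely, for a curve like x(t) = cos(a*t), y(t) = sin(b*t), that works out to something like this (rough sketch, names made up):

C# code:
using UnityEngine;

public static class CurveOffset
{
    public static Vector2 OffsetPoint(float t, float a, float b, float distance)
    {
        var p = new Vector2(Mathf.Cos(a * t), Mathf.Sin(b * t));

        // Analytic derivative gives the velocity (tangent) direction.
        var tangent = new Vector2(-a * Mathf.Sin(a * t), b * Mathf.Cos(b * t));

        // Rotating the tangent 90 degrees gives a consistent "left" normal.
        var normal = new Vector2(-tangent.y, tangent.x).normalized;

        return p + distance * normal;
    }
}
Because the normal is always the velocity rotated the same way, it stays on the same side of the curve even when a loop changes direction, which the per-axis sign flipping can't guarantee.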

Ruzihm fucked around with this message at 06:37 on Apr 11, 2022

Spek
Jun 15, 2012

Bagel!
So more compute shader learning/fiddling.

I was doing this to find matching verts, so as to not add duplicates and to accumulate the normals while generating a mesh:
C code:
uint3 vIndex = 4294967295u;
for(uint www = 0; www < 3; www++)
{
	for(uint vvv = 0; vvv < _Counters[0]; vvv++)
	{
		if(all(newVerts[www].position == _SVOVerts[vvv].position))
		{
			_SVOVerts[vvv].normal += newVerts[www].normal;		
			vIndex[www] = vvv;
			//break;
		}
	}

	if(vIndex[www] == 4294967295u)
	{
		InterlockedAdd(_Counters[0], 1, vIndex[www]);
		_SVOVerts[vIndex[www]] = newVerts[www];
	}
}
...
_SVOIndexes.Append(vIndex);
InterlockedAdd(_Counters[1], 1);
and it mostly worked, but the created mesh would sometimes have a tiny number of triangles either missing or with their verts linked up to ones nowhere near them.

Adding the break; statement that's commented out up there seemed to fix the badly formed triangles but also greatly screwed up the normals. A bunch of fiddling and reading later, I've learnt that this technique is probably just not tenable in a shader. What I think was happening is that multiple verts would be checked/built simultaneously by different threads: if two threads happened to be trying to add the same vert to the array for the first time, that vert would get added twice. At the time I started working on it I thought the InterlockedAdd would magically prevent that, which I'm going to blame on my having taken roughly a year off from doing any programming before I jumped into this project (mixed with most of my shader learning being more than 10 years out of date).

What I really need is a mutex, but I know those don't exist in shaderland, and probably never will due to the fundamental way shaders work. At least if I'm understanding what I've read correctly.

Welp, lesson learned at least. So now I'm back to my original plan of doing the retriangulation in a post-process shader. And that works just perfectly... except for the performance. It is considerably slower than doing the retriangulation during mesh creation.

It was taking about 15ms to generate the mesh, including retriangulation, on my test case as a single step. Cut into two steps, it takes about 8ms for step one and 2500ms for step two, on a mesh that has only 1246 verts once properly triangulated and 7200 before that.
C code:
uint vvv;
uint iii;
bool newVert;

for(uint aaa = 0; aaa < _Counters[1] * 3; aaa++)
{
	iii = _SVOIndexes[aaa];
	newVert = true;
	
	for(uint bbb = 0; bbb < _Counters[2]; bbb++)
	{
		if(all(_SVOVerts[iii].position == _SortedVBuffer[bbb].position))
		{
			_SortedVBuffer[bbb].normal += _SVOVerts[iii].normal;
			_SortedIndexes[aaa] = bbb;
			newVert = false;
			break;	
		}
	}
	if(newVert)
	{
		InterlockedAdd(_Counters[2], 1, vvv);
		_SortedVBuffer[vvv] = _SVOVerts[iii];
		_SortedIndexes[aaa] = vvv;
	}
}

for(uint bbb = 0; bbb < _Counters[2]; bbb++)
	_SortedVBuffer[bbb].normal = normalize(_SortedVBuffer[bbb].normal);
The biggest problem, I'm sure, is just that this is all done in a single thread. I have no idea how to split this sort of thing into multiple threads without bumping back into the problem of two of them trying to sort two different instances of the same vertex at the same time. I probably should just give up and do it in main memory. But it feels like there must be something I'm missing.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
What's with the complete lack of comments and meaningful names in your code? Clear and understandable code is easy code to identify higher-level improvements in.

Higher-level improvements like "not using an n^2 deduping algorithm where you add all the verts one-at-a-time to an array and then check the entire array for duplicates with every new vert you see".

KillHour
Oct 28, 2007


Ruzihm posted:

If you take the derivative of your x and y functions with respect to t (resulting in dx/dt and dy/dt), shouldn't you be able to find the velocity direction (dx/dt, dy/dt)? Then your left direction is (-dy/dt, dx/dt) and the right direction would be (dy/dt, -dx/dt). Maybe swap left/right.

That might be overcomplicating things and I'm a bit sleepy so feel free to ignore me.

I figured it out. The issue was "how the gently caress do I figure out the derivative of this function with only the tools available in VFX Graph?" and the coding questions thread got me sorted out.

Here's what I came up with from that

[Flashing Lights Warning]
https://www.youtube.com/watch?v=DV3holRHRFM

Spek
Jun 15, 2012

Bagel!

Jabor posted:

What's with the complete lack of comments and meaningful names in your code? Clear and understandable code is easy code to identify higher-level improvements in.

Well, the lack of comments is mostly because the code is in flux and changes every other time I look at it, so any comments I add just end up out of date and wrong very quickly.

As far as variable names go, I thought they mostly were named meaningfully. I guess _Counters[] is nonsense; that's commented on where I declare it:
C code:
RWBuffer<uint> _Counters;	//[0] = vertex count, [1] = triangle count, *3=index count [2] = vertex count after retriangulation
Other than that, everything seems to be either adequately named or is just an indexing variable (which, admittedly, I should probably name, but I rarely do).

Perhaps I'm just bad at naming things, though; I'd certainly believe it. I've barely ever worked with others, so it's not something I've ever worried about too much.

Jabor posted:

Higher-level improvements like "not using an n^2 deduping algorithm where you add all the verts one-at-a-time to an array and then check the entire array for duplicates with every new vert you see".

Yes, that is definitely the slowest part of the code by a large margin; I've just no idea how else I could do it. If I were in main memory I'd try a dictionary/map, which I think would probably be considerably faster, but no such construct exists in HLSL, and it's the only idea I've had that might help.
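(The main-memory version I have in mind is roughly the sketch below; illustrative names, and exact-equality keys to match what the shader's position comparison does.)

C# code:
using System.Collections.Generic;
using UnityEngine;

public struct RawVert { public Vector3 position; public Vector3 normal; }

public static class MeshDedup
{
    // One pass over the raw index list: merge verts by position, accumulate normals,
    // and remap the indices, using a dictionary instead of the n^2 scan.
    public static void Dedup(RawVert[] raw, int[] rawIndices,
                             out List<RawVert> verts, out int[] indices)
    {
        verts = new List<RawVert>();
        indices = new int[rawIndices.Length];
        var lookup = new Dictionary<Vector3, int>();   // position -> merged index

        for (int i = 0; i < rawIndices.Length; i++)
        {
            RawVert v = raw[rawIndices[i]];
            if (lookup.TryGetValue(v.position, out int merged))
            {
                var m = verts[merged];
                m.normal += v.normal;                  // accumulate shared normals
                verts[merged] = m;
            }
            else
            {
                merged = verts.Count;
                lookup.Add(v.position, merged);
                verts.Add(v);
            }
            indices[i] = merged;
        }

        for (int i = 0; i < verts.Count; i++)
        {
            var m = verts[i];
            m.normal = m.normal.normalized;
            verts[i] = m;
        }
    }
}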

Grace Baiting
Jul 20, 2012

Audi famam illius;
Cucurrit quaeque
Tetigit destruens.



Spek posted:

Other than that everything seems to be either adequately named or is just an indexing variable(which, admittedly I should probably name, but I rarely do)

Perhaps I'm just bad at naming things though, I'd certainly believe it. I've barely ever worked with others so it's not something I've ever worried about too much.

Yes, that is definitely the slowest part of the code by a large margin, I've just no idea how else I could do it. If I were in main-memory I'd try a dictionary/map, I think that would probably be considerably faster but no such construct exists in hlsl and it's the only idea I've had that might help.

Like they say, there's only two hard things in computer science: race conditions, naming things, race conditions, and off-by-one errors.


I've done little-to-nothing with shaders so I'm speaking from ignorance here, but can you delete vertices efficiently during a deduplication step? If so, I would be inclined to generate a sorted list of all the vertices (n log n, not n^2 !) and then run through the sorted list with a final uniq-like step.

Also, can your shaders access thread(? core?)-local storage? If so, you might be able to have each thread return its own sorted list of vertices, and then the main-ish thread could perform a final merge step of all the sorted lists, skipping duplicates as it goes.

Spek
Jun 15, 2012

Bagel!

Grace Baiting posted:

Like they say, there's only two hard things in computer science: race conditions, naming things, race conditions, and off-by-one errors.
I'd not heard that but I've, thankfully, done only a little threaded programming in my life. It's a pain and I hate it 90% of the time.

Grace Baiting posted:

I've done little-to-nothing with shaders so I'm speaking from ignorance here, but can you delete vertices efficiently during a deduplication step? If so, I would be inclined to generate a sorted list of all the vertices (n log n, not n^2 !) and then run through the sorted list with a final uniq-like step.
I don't think there's any way to do this. But there certainly may be. I'm very ignorant of anything more than basic shader programming myself. All the resources I've found for learning shaders are either super basic introductory stuff or indecipherable advanced stuff that's way beyond my comprehension.

Definitely going to ponder/research this for a day or two though, maybe I'll figure something out.

Grace Baiting posted:

Also, can your shaders access thread(? core?)-local storage? If so, you might be able to have each thread return its own sorted list of vertices, and then the main-ish thread could perform a final merge step of all the sorted lists, skipping duplicates as it goes.

Not through any clean way that I know of, but I could certainly give the shader one big buffer to use and then let each thread access/sort a subset of that.

I can just have the whole thing send a list of all the verts back to main memory and do retriangulation/deduplication there. It's not even that inefficient to do that; it's faster than what I'm currently doing, at least. I'm trying hard to avoid that for two main reasons. 1: I just think it'd be neat to do it all in-shader, and as this is just a hobby project I can waste time chasing waterfalls. 2: most of the time I have no need to ever get the data back from the graphics card, so it'd be nice to not need to retrieve it, filter it, and send it back to the GPU. Currently I only need to read it back if I'm making physics meshes. Other than that, it builds on the GPU and it renders on the GPU and the CPU doesn't need to know squat about it.


TooMuchAbstraction
Oct 14, 2012

I spent four years making
Waves of Steel
Hell yes I'm going to turn my avatar into an ad for it.
Fun Shoe

Spek posted:

I don't think there's any way to do this. But there certainly may be. I'm very ignorant of anything more than basic shader programming myself. All the resources I've found for learning shaders are either super basic introductory stuff or indecipherable advanced stuff that's way beyond my comprehension.

Is there some way you can sort vertices, and then arrange your loop so that it exits early once it encounters a vertex that it doesn't need to care about? Maybe maintain an index variable or two to help define the boundaries of the relevant portion of the array.
