Sex Bumbo
Aug 14, 2004

Ugg boots posted:

Collision Detection questions:

First, do all 150 objects need to be able to collide with all other 149 objects? If you can group them up somehow it can help - for example, projectiles can hit enemies but will probably never hit other projectiles.

Also, you can try sorting your objects by position first. Then you only need to compare objects that are close to each other, since objects far away can't possibly be colliding. Here's a link for more information: http://www.ziggyware.com/readarticle.php?article_id=128
It's geared towards XNA so it has some extra information, but I thought it was really easy to follow.
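The sort-and-prune idea above can be sketched like this (a minimal illustration, not the linked article's code; the `Object` shape and names are made up):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical object with a position and a bounding radius.
struct Object {
    float x, y;
    float radius;
};

// Broad phase: sort by x, then only test pairs whose x-intervals overlap.
// Returns candidate index pairs (into the sorted vector) for the real
// narrow-phase collision test.
std::vector<std::pair<std::size_t, std::size_t>>
broadPhase(std::vector<Object>& objs) {
    std::sort(objs.begin(), objs.end(),
              [](const Object& a, const Object& b) { return a.x < b.x; });
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    for (std::size_t i = 0; i < objs.size(); ++i) {
        for (std::size_t j = i + 1; j < objs.size(); ++j) {
            // Once the next object's x-interval starts past ours, no later
            // object can overlap us either -- stop scanning.
            if (objs[j].x - objs[j].radius > objs[i].x + objs[i].radius)
                break;
            pairs.emplace_back(i, j);
        }
    }
    return pairs;
}
```

With 150 objects this turns ~11,000 pairwise tests into a handful of tests between near neighbors, which is usually plenty for a small game.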


Sex Bumbo
Aug 14, 2004

PerOlus posted:

Anyone have experience, comments, about XNA? I'm interested in using it. Do I have to use C# for it?

I've used XNA Game Studio a lot, and I love it. It's not a game engine, if that's what you're looking for. You can get a 3D model up in a few minutes, but it doesn't provide any game logic, which is nice because it's meant to be a platform and there's no 'best' game engine.

C# is a much more enjoyable experience than C++. It has tons of really simple advantages: IntelliSense actually works, the debugger works better, edit and continue works, and build times are much, much faster. That's not even counting the numerous language features that make it more enjoyable.

You can use any 2005 variant of Visual Studio with the 2.0 beta too. There are some fairly serious bugs in the beta right now, but nothing that prevents general usage. The final version should be coming out soon, most likely with the big bugs fixed, since they've been closed on the bug submission site (connect.microsoft.com). No 2008 support, sadly.

Really, the biggest issues with XNA Game Studio are mostly related to difficulties if you ever try releasing a commercial game, especially on the Xbox 360 (which will be completely impossible for some time).

I definitely recommend it over everything else. Questions get answered extremely quickly on their support forums too.

Sex Bumbo fucked around with this message at 10:31 on Dec 5, 2007

Sex Bumbo
Aug 14, 2004

PerOlus posted:

I'm a fairly experienced C++ programmer (with visual studios), are there any good resources for learning C# going from that? I have googled etc, but you guys might have better tips.

I went from mainly C++ to C#; it wasn't too difficult, really. You could probably read most C# programs and figure out what's going on (unlike, say, F#).

I wouldn't bother picking up a book, good programming practices are easily translatable. Some important things to learn would be:
properties
foreach
.NET Containers
generics
delegates
difference between structs and classes
don't use pointers because they're almost always unnecessary

Learning the basics wouldn't take long at all for an experienced C++ programmer. Then there's some fun things like attributes and reflection but you can still make lots of cool stuff if you never bothered learning about them.

Sex Bumbo fucked around with this message at 21:52 on Dec 5, 2007

Sex Bumbo
Aug 14, 2004

Citizen Erased posted:

Unfortunately, one of the things I need to do with the geometry after I've transformed it is export it to an .obj file. Because of that, I can't just transform by the view matrix at draw time or such; I physically need to change the x, y, z values of each vertex in the ring. Anyone got any clue on how to do so?

You want to rotate verts so that they face an arbitrary direction and then translate in that direction right? You have the rotation done too? Can't you just add the direction to all the verts?
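Baking the translation into the vertex data is literally just this (a tiny sketch; the `Vec3` type and names are illustrative):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// After applying the rotation, physically write the translation into the
// vertex positions by adding the offset to every vertex. The modified
// positions are then what you export to the .obj file.
void translateVerts(std::vector<Vec3>& verts, const Vec3& offset) {
    for (auto& v : verts) {
        v.x += offset.x;
        v.y += offset.y;
        v.z += offset.z;
    }
}
```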

Sex Bumbo
Aug 14, 2004

Horn posted:

Does anyone have a good resource of free game friendly models? I've tried :google: but everything I find is high vertex stuff.

http://turbosquid.com/ sometimes has good stuff. It has a lot of stuff at least.

If you download the free Crysis demo, that has a lot of textures you can steal. Just don't, you know, sell your game with them.

Sex Bumbo
Aug 14, 2004
Isn't that fairly well known? At least for older cards, which is what most people have.

Sex Bumbo
Aug 14, 2004

IcePotato posted:

Right now I'm thinking about using C# and the XNA platform (which I know.. exists, and that's all I know about it). I'm very experienced in Java, I'm very familiar with C# syntax and features, so that's not going to be the hard part. The hard part is defining a project that has significant milestones that I can complete in a reasonable amount of time. I've never done a game from scratch before so I really have no gauge on what I can accomplish in a semester.

Right now I'm thinking that creating a platformer is a good simple task to start out with. Anyone have any advice on the subject? I'm, of course, open to using other languages and frameworks (Except Python. I'm a Perl guy. And I'd like to move past Java) or even creating multiple small game platforms (IE a demo Pong, a demo Asteroids, a demo platformer, and a demo RPG/RPG engine). Basically I just need some guidance because I have no idea where to get started.

Also how screwed am I if I hate linear algebra? :)

If you know C#, you should be fine with XNA. At least, if you have problems, it won't be because of the platform or the language. You're really just limited by your own software engineering skills. I can say with mild confidence that forums.xna.com has better support than probably any other platform you might consider.

A small platformer is probably a good idea. I don't think people will be impressed by pong or asteroids, as they can be made in an afternoon. And, not to be a jerk, but even if you come up with a decent RPG engine, any game you make for it is going to suck if you're working by yourself and have a single semester.

If you're making a 2D game you don't really need to know any linear algebra at all.

If you make a 3D game you don't need to know much. Any competent person can learn the basics.

Sex Bumbo
Aug 14, 2004

tyrelhill posted:

I would also suggest you try out using just the DirectX SDK instead of XNA. There's a lot more you have to do yourself with DirectX that XNA usually handles, but you'll learn a million times more about graphics programming using DirectX than XNA.

I disagree. I think you'll end up wasting time on unimportant details if what you want to do is make a game. It doesn't really hide that much, either. It will set up D3D for you (not very exciting) and set up a render loop for you (trivial). You can override device creation too and, with just a tiny bit of hacking, use your own render loop if you really want to. Render targets are a little weird since they need to be compatible with the 360, but nothing too fancy is going on.

I can't really think of anything else it does that D3DX doesn't already do. You'll probably have a better understanding of DirectX too since you're doing a lot of the same things while not getting frustrated with reference counting or lost devices.

Sex Bumbo
Aug 14, 2004

haveblue posted:

This is essentially correct. The 360 C++ API is a variant of Win32 and the graphics system is a variant of DirectX 9 with some 10 stuff thrown in like geometry shaders.
There are no geometry shaders on the 360 or anywhere in the XDK.



GuyGizmo posted:

What's the programming language and libraries used for non-XNA Xbox 360 development? In particular, the SDK for Xbox Live Arcade? I assume the Xbox 360 SDK uses some kind of variant of DirectX, and I think you can program with it in both C# and C++, but I'm not sure.
The first XBLA game to use XNAGS was Schizoid. I'm pretty sure that there have either been more, or that others are in development, and the feedback from said developers is overwhelmingly positive. The XDK supports XACT but no one seems to like that. Not being an audio guy I can't comment. It also supports XAudio2, which I know next to nothing about.
Graphics can be handled at a much lower level. You can do almost anything you could imagine with a system with well known hardware and shared memory.



Hubis posted:

Unless you plan on doing DirectCompute/CUDA programming
GPU programming began as assembly only, and now allows higher-level things like classes and interfaces. It seems naive to think that we'll be stuck with nothing better than atomic intrinsics forever.



timecircuits posted:

2) What do I have to concern myself about in terms of memory allocation? Can I just new and delete things as before, or do I need a thread-safe allocation scheme to avoid race conditions?
DX9 was not made with multicore applications in mind, and is mostly meant to be single threaded. Many of the functions you call on the device need to be called from the same thread that the device was created on.

DX11 was made with multicore applications in mind, and you can also target DX9 level hardware with it. It doesn't make a lot of sense to use DX9 as a learning process.

Sex Bumbo fucked around with this message at 22:09 on Oct 27, 2009

Sex Bumbo
Aug 14, 2004
It doubled the latency or something?

Sex Bumbo
Aug 14, 2004

Ludicrous Gibs! posted:

It also returns collisions for a pair of objects on the same x plane moving horizontally at the same speed with no contact between boxes.

Not sure why it would keep returning collisions, but if the relative velocities are the same, it's going to skip everything in the for loop and compare 0 <= 1 at the end, due to u0 and u1's initial values.

The constant returning of collisions is probably due to how you're handling collisions.

Also, if you're interested, there are fancy pants SSE collision data structures in <xnacollision.h> in the DXSDK.
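A minimal sketch of the per-axis swept test being discussed (a hypothetical reconstruction, not your actual code; u0/u1 are the normalized time window of overlap, starting at 0 and 1):

```cpp
#include <algorithm>

// Swept test on one axis: interval [aMin,aMax] moving with relative
// velocity v toward static [bMin,bMax], narrowing [u0,u1]. The v == 0
// branch must explicitly reject non-overlapping intervals -- otherwise
// u0/u1 keep their initial 0 and 1 and the final u0 <= u1 comparison
// falsely reports a collision, which is the bug described above.
bool sweptAxis(float aMin, float aMax, float bMin, float bMax,
               float v, float& u0, float& u1) {
    if (v == 0.0f)
        return aMax >= bMin && bMax >= aMin;  // plain static overlap test
    float t0 = (bMin - aMax) / v;  // time the intervals start touching
    float t1 = (bMax - aMin) / v;  // time they separate again
    if (t0 > t1) std::swap(t0, t1);
    u0 = std::max(u0, t0);
    u1 = std::min(u1, t1);
    return u0 <= u1;
}
```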

Sex Bumbo fucked around with this message at 18:38 on Nov 10, 2009

Sex Bumbo
Aug 14, 2004
Make sure your contact info is up to date and accessible everywhere. Kind of stupid, but a lot of people screw this part up. Make it really, really obvious who you are and what you're applying for.

Sex Bumbo
Aug 14, 2004

Joda posted:

I'm honestly a tiny bit worried about the direction I've taken my education with how things have been going with the industry for the past year or so. I tend to prefer working on low level stuff, so any chance to take a graphics course with C/C++/GLSL I've taken, which means I've passed on a lot of more generally useful software engineering courses.

I'm not sure how to react to this, so if anyone with industry experience would oblige: Does this change the industry enough that I should seriously consider not taking anymore games related courses and salvage my education as much as I can?

I use C/C++/GLSL all the time, as an engine graphics programmer. Someone's gotta make the engines everyone's so excited about. All the different platforms that are popular now mean there's a ton of work to get things working on all of them.

That said my education was a colossal waste of time and money.

Sex Bumbo
Aug 14, 2004
Try not dividing at all and scaling by the angle between edges. Also as mentioned weld the verts first and post a squat form video.

Sex Bumbo
Aug 14, 2004

Joda posted:

When you finish, flush, and swap buffers with OpenGL, does the driver make the main program wait for the GPU to give a ready signal before the CPU code can move on, or does it only make the CPU wait if you do a GL call while a framebuffer is actively being written to? It seems like I should be able to do all my CPU-side game logic concurrently with drawing the previous frame (since the data for the previous frame has already been batched to GPU memory as uniforms), but if this process is not automatic I can't tell how I'd be able to do it. I assume it'd be driver dependent and that most modern drivers automatically manage this and return control to the CPU while the backbuffer is being drawn, but seeing as it'd be one major way to get performance out of your game, I thought I'd make sure.

E: Having done a bit of testing, it seems like it doesn't actually do this automatically. So how would you go about forcing this behaviour?

Actually, it should generally happen automatically. Why do you think it isn't? The driver might stall if it's flushing its command buffer at an inopportune time. Consider the case where your GPU can't process all the commands you issue within a single frame. It's going to fall farther and farther behind where your application wants it to be, increasing frame latency and command buffer size. To prevent that from happening, the driver needs a way to catch up again.

If you're on Windows, you can use GpuView to see if this is happening, and also to see if there's unexpected synchronization points.

You can make weak assumptions about how your driver works. E.g. you could use queries to get an idea of where your device is in relation to the commands you sent it, and then you might be able to avoid a device synchronization when mapping a resource for read.

Sex Bumbo fucked around with this message at 23:19 on Apr 7, 2015

Sex Bumbo
Aug 14, 2004

Tres Burritos posted:

I recently watched a talk that said the "hello world" for Vulkan is like 600 lines fwiw, and that verbosity (!) is sort of a bummer. I'm hoping that some good, relatively lightweight libraries come out for it so it's a little more accessible. Also, "a few months"?

The older versions of GL/DX will presumably still work, so you kind of need to be able to clearly define why you want to use 12/Vulkan. If you just want to move around some 3D shapes, GL/DX has always been a terrible option over using an existing higher level graphics engine. If you want to make your own high performance graphics engine that competes with something like Unreal or Unity or whatever, then you're always going to want as much direct access to the GPU as makes sense.

If you just want to learn how GPUs work, then Vulkan will be great because it isn't going to hide the myriad operations the GL driver currently does. For example, if you want to make small state changes, you need a new pipeline, which might look annoying at first, but it really highlights the work involved in changing state, work that isn't apparent when GL just handles it for you.

Sex Bumbo fucked around with this message at 22:16 on Apr 13, 2015

Sex Bumbo
Aug 14, 2004
Crossy Road definitely does not use sprites for its gameplay rendering. As mentioned before, it's flat 3D models with some basic shadow rendering. It might have used an interesting voxel editor, but it's not using some esoteric voxel rendering algorithm at runtime. It's rendering polygons in probably the most straightforward way imaginable.

The thing that ties it together is the art direction. It's got a nice style and color palette. A lot of people think making a cartoony styled game is easier than a realistic looking one, but without a strong art direction it will look rough no matter what style one chooses.

Sex Bumbo fucked around with this message at 20:59 on Apr 27, 2015

Sex Bumbo
Aug 14, 2004
I don't think I've ever heard of a ray-hull intersection algorithm like that. I don't follow your algorithm very well, but why not just treat it as a bunch of ray-polygon intersections? Transforming every vertex is not going to be fast for large meshes; it should be way faster to transform only the ray.
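Transforming only the ray looks something like this (a sketch assuming a rigid transform, rotation R plus translation T, with a row-major 3x3 rotation; names are illustrative):

```cpp
struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, dir; };

// R is orthonormal, so its inverse is its transpose: this applies R^-1.
Vec3 mulTranspose(const float R[3][3], const Vec3& v) {
    return { R[0][0]*v.x + R[1][0]*v.y + R[2][0]*v.z,
             R[0][1]*v.x + R[1][1]*v.y + R[2][1]*v.z,
             R[0][2]*v.x + R[1][2]*v.y + R[2][2]*v.z };
}

// Instead of transforming every mesh vertex into world space, bring the
// ray into the mesh's local space (p -> R^-1 * (p - T)) and intersect it
// against the untransformed vertices. Two transforms total instead of
// one per vertex.
Ray toLocal(const Ray& r, const float R[3][3], const Vec3& T) {
    Vec3 o = { r.origin.x - T.x, r.origin.y - T.y, r.origin.z - T.z };
    return { mulTranspose(R, o), mulTranspose(R, r.dir) };
}
```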

Sex Bumbo
Aug 14, 2004
I see. You can do a quick rejection test by creating a bounding sphere over your set of vertices and turning it into a simple ray-sphere intersection test first. This is fast because you only need to calculate the bounding sphere once (or whenever you change the mesh). The math for this isn't too hard either.

Triangle-ray intersection is never going to be fast without some sort of acceleration structure, so you just don't want to do a ton of ray-mesh intersection tests every frame.
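The quick ray-sphere rejection test mentioned above, as a sketch (standard quadratic-discriminant form; assumes `d` is normalized):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Does the ray (origin o, normalized direction d) come within `radius`
// of the center c? A negative discriminant means a clean miss, so all
// the per-triangle tests for that mesh can be skipped.
bool rayHitsSphere(const Vec3& o, const Vec3& d, const Vec3& c, float radius) {
    Vec3 m = { o.x - c.x, o.y - c.y, o.z - c.z };
    float b = dot(m, d);
    float cc = dot(m, m) - radius * radius;
    if (cc > 0.0f && b > 0.0f) return false;  // outside and pointing away
    return b * b - cc >= 0.0f;                // discriminant test
}
```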

Sex Bumbo
Aug 14, 2004

Joda posted:

When you upload uniforms in OpenGL, is it guaranteed to have uploaded the data, or to have created a thread-safe copy of the data as of when it was asked to upload? I was considering splitting the buffer swap into its own thread that I join from main before the next draw pass. Part of me says this is probably not safe to do, but the only alternative I see is to make copies of all mutable uniforms that are guaranteed to stay immutable until the beginning of the next draw pass, and that's potentially a lot of memory to allocate every loop if it can be avoided.

First, what is it you want to achieve? Is this just an optimization?

If you're referring to glUniform* then OpenGL doesn't own any of the memory given to it, so it makes a copy and is done with it by the time the call returns.

If you want to avoid extra copying (which for constant data is unlikely to be a bottleneck) you can look into persistent mapping: https://www.opengl.org/wiki/Buffer_Object#Persistent_mapping

The above is good for things like particle systems, in which a lot of data is modified frequently. I don't know that I've ever optimized something where the culprit was the copying happening inside glUniform*.

Regarding a lot of memory, this is by its very nature unavoidable if you have a buffered renderer. Once you begin submitting calls for a new frame, all the commands for the previous frame must exist somewhere in memory or else the previous frame can't be rendered.

Sex Bumbo
Aug 14, 2004

Ireland Sucks posted:

(roughly 3200x3200x8 tiles, each tile being 2 or 4 bytes)

327,680,000 byte array, blammo

Sex Bumbo
Aug 14, 2004

Ireland Sucks posted:

Huh... I'm sure I dismissed that at some point in the past but that probably is small enough to be an option


Thought it might be a bit slow

If you need all the tiles for a single operation and have enough memory, it's best to keep it all resident and lay it out in the most coherent way. If you don't want a 300-meg file (who cares about 300 megs these days, even on a phone?), you can zip it or something. If you only look at chunks at a time, you can stream the chunks and compress them individually. You can compress dense chunks differently from sparse chunks; it's hard to say what the best representation is without knowing what the data is typically like. Packing a dense chunk into a sparse matrix is going to have a bunch of unnecessary overhead.
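The "keep it all resident, laid out coherently" option is just a flat array with index math (a sketch with illustrative sizes; the ~327 MB figure is just W*H*D*sizeof(tile)):

```cpp
#include <cstdint>
#include <vector>

// Flat, cache-coherent storage for a W x H x D tile grid. x is adjacent
// in memory, so row scans stay coherent.
struct TileGrid {
    std::size_t w, h, d;
    std::vector<uint32_t> tiles;  // 4-byte tiles

    TileGrid(std::size_t w_, std::size_t h_, std::size_t d_)
        : w(w_), h(h_), d(d_), tiles(w_ * h_ * d_) {}

    uint32_t& at(std::size_t x, std::size_t y, std::size_t z) {
        return tiles[(z * h + y) * w + x];
    }
};
```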

Sex Bumbo fucked around with this message at 22:18 on Sep 26, 2015

Sex Bumbo
Aug 14, 2004
Why is performance bad iterating over a large list? That seems like the most straightforward way to process data.

Sex Bumbo
Aug 14, 2004
I was referring to List<T> in C#, which I presumed was a wrapper around arrays. People on the internet* seem to think there's a performance difference, which seems a little odd to me, because if I were writing .NET, I'd want List<T> foreach speed to be as optimal as possible. To be fair, I haven't actually seen something like a Unity performance benchmark of List<T> vs. array with actual game data; it's all been artificial nonsense.

* http://www.codeproject.com/Tips/531893/For-Vs-Foreach-Benchmark

E: or maybe it doesn't matter, I don't know. http://lj.rossia.org/users/steinkrauz/300537.html

Sex Bumbo fucked around with this message at 18:09 on Sep 28, 2015

Sex Bumbo
Aug 14, 2004
Ahhhhh gotcha.

Sex Bumbo
Aug 14, 2004
If it's a freeze, just wait until it freezes before you attach the debugger.

Sex Bumbo
Aug 14, 2004
You can attach debuggers to running and stalled programs. Go through the threads and one of them will be erroneously waiting on some deadlocked event, or stuck in an infinite loop. It might not matter, but there are differences between starting with the debugger attached and without. Most obviously, your program runs with a debugger attached for its whole duration, something it might actively be checking for and reacting to. It also uses a debug heap, causing memory to be laid out differently. This is often the cause of heisenbugs.

Sex Bumbo
Aug 14, 2004

emanresu tnuocca posted:

Is there a thread for basic OpenGL related questions? I struggle with some very basic poo poo.

there's this
http://forums.somethingawful.com/showthread.php?threadid=2897255

If it renders with an orthographic projection it's almost certainly your perspective projection parameters which you didn't post.

Manslaughter posted:

I would think you would want your up vector to be on the Y axis if you're offsetting the camera on the Z axis. Try glu.glulookat(0,0,-6,0,0,0,0,-1,0)
E: or this, yeah.

Sex Bumbo fucked around with this message at 21:13 on Oct 4, 2015

Sex Bumbo
Aug 14, 2004

Rocko Bonaparte posted:

What's a good foundational basis for lighting in a tile-based 3d world? I'm starting to think about it for my situation since I'm reaching a point where I could actually worry about it. I asked about it earlier and saw some stuff about generating light information, but I have to admit I don't even know what I should be trying to do. At this point I assume that--given my map data is basically a 3d array--I should create a 3d texture representing my light levels at each location. I assume I then ingest that data into a shader. The lights I add affect that 3d texture and I apply all my logic for deciding how much an area of a level is affected by lights to that texture. Is this about right?

Do you have a reference image of what you want it to look like?

Sex Bumbo
Aug 14, 2004

Rocko Bonaparte posted:

Ehh not really. In 2d the mapping would be calculated from something like:

[image: grid of numeric light values, one per tile]

Numbers are the light value at the center of each tile. Presumably, each vertex would look at the values in the spaces it corners and blend it together. This could then be extrapolated into 3d space. I believe Minecraft does something like this, for example. It sounds like I should instead try to use the built-in lighting stuff and then go off on a tangent if I figure out I don't like it for some reason.

Stock Minecraft doesn't do per-vertex lighting; it does per-block lighting like what you drew (but not what you described). The lighting algorithm would be quite similar to what you drew: for each block, calculate the distance to the light, either taxicab or Euclidean, and scale the lighting on the entire tile/block based on that. If you have an instanced tile rendering algorithm, you'll need to batch the lighting data into the instance data.

Calculating the distance isn't necessarily cheap. You'll notice Minecraft does optimizations that sometimes result in lighting errors, with blocks being lit when they shouldn't be, likely because it's caching the lighting data and not updating it correctly.
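The per-block scaling could be as simple as this (a sketch, not Minecraft's actual code; the `falloff` parameter is made up and says how many tiles away the light reaches zero):

```cpp
#include <algorithm>
#include <cstdlib>

// One light value per tile, scaled by taxicab distance from the light at
// (lx, ly). Swap in Euclidean distance if you prefer smoother falloff.
float blockLight(int bx, int by, int lx, int ly, int falloff) {
    int dist = std::abs(bx - lx) + std::abs(by - ly);
    return std::max(0.0f, 1.0f - float(dist) / float(falloff));
}
```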

Sex Bumbo
Aug 14, 2004
You could use this http://mathworld.wolfram.com/HammersleyPointSet.html
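For reference, the 2D Hammersley set is just x_i = i/N paired with the base-2 radical inverse of i (the bit-reversed fraction), e.g.:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Base-2 radical inverse: reverse the 32 bits of i and treat the result
// as a fraction in [0, 1).
float radicalInverseBase2(uint32_t i) {
    i = (i << 16u) | (i >> 16u);
    i = ((i & 0x55555555u) << 1u) | ((i & 0xAAAAAAAAu) >> 1u);
    i = ((i & 0x33333333u) << 2u) | ((i & 0xCCCCCCCCu) >> 2u);
    i = ((i & 0x0F0F0F0Fu) << 4u) | ((i & 0xF0F0F0F0u) >> 4u);
    i = ((i & 0x00FF00FFu) << 8u) | ((i & 0xFF00FF00u) >> 8u);
    return float(i) * 2.3283064365386963e-10f;  // divide by 2^32
}

// N well-distributed, low-discrepancy points in the unit square.
std::vector<std::pair<float, float>> hammersley(uint32_t n) {
    std::vector<std::pair<float, float>> pts;
    for (uint32_t i = 0; i < n; ++i)
        pts.emplace_back(float(i) / float(n), radicalInverseBase2(i));
    return pts;
}
```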

Sex Bumbo
Aug 14, 2004

Ranzear posted:

atan2(sin(A-B), cos(A-B))

If you use this, make it really clear in the function name or whatever that it's calling three trig functions and should probably not be in a tight inner loop.
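Something like this, with the cost flagged in the name (and, as an aside, `std::remainder` gives the same wrapped difference without any trig calls):

```cpp
#include <cmath>

// Signed smallest difference between two angles in radians -- the
// atan2(sin, cos) trick. Name flags that it costs three trig calls.
float angleDiffSlowTrig(float a, float b) {
    return std::atan2(std::sin(a - b), std::cos(a - b));
}

// Cheaper equivalent: wrap a-b into [-pi, pi] with one remainder.
float angleDiffWrap(float a, float b) {
    return std::remainder(a - b, 6.28318530717958647692f);  // 2*pi
}
```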

Sex Bumbo
Aug 14, 2004
Do you people seriously store angle measurements as integers?

E: wait this is C# or something isn't it.

Sex Bumbo fucked around with this message at 19:05 on Oct 12, 2015

Sex Bumbo
Aug 14, 2004
What is it that people are doing with them that get them into trouble?

Sex Bumbo
Aug 14, 2004
Coroutines and threads make for a weird comparison because their purposes aren't very similar. You can even mix them together. Coroutines are like streamlined state machines, not anything concurrent. I'm pretty sure any coroutine can be implemented with a normal function and a state object as a parameter.
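That "normal function plus a state object" version looks like this (a made-up three-phase spawn sequence as the example):

```cpp
// A coroutine spelled out as an explicit state machine: the state object
// remembers the resume point, and each resume() advances one step.
struct SpawnCoroutine {
    int state = 0;

    // Returns false once the sequence has finished.
    bool resume() {
        switch (state) {
            case 0: /* play warning animation */ state = 1; return true;
            case 1: /* spawn the enemy */        state = 2; return true;
            case 2: /* clean up */               state = 3; return true;
            default: return false;  // done
        }
    }
};
```

A language-level coroutine just generates this switch-on-resume-point boilerplate for you.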

Sex Bumbo
Aug 14, 2004
What's the motivation behind resurrecting XNA? I don't want to diss on it but it's not something I would ever recommend to someone.

Sex Bumbo
Aug 14, 2004

ToxicSlurpee posted:

Negligible. I think it's less than 100 bytes each so even 10,000 empty lists is still less than 1,000,000 bytes.

However, if you're worried about having too many empty data things, this is a prime time to use inheritance. Impassable tiles that never, ever have objects don't need a list, so create a base Tile class that has everything every tile does have. Then you inherit ImpassableTile and whatever other tile types. Because they all inherit Tile you can do things like Tile[,] tiles = new Tile[10, 10]; and then say tiles[5, 5] = new ImpassableTile();

Or you can just say List<Thing> objects = null and put a list in only when you actually need one there. An empty list is comparable to an empty array; it's a reference and not much else. So long as you aren't creating millions of empty references it won't be a huge issue in most cases.

If you're feeling particularly ambitious you can write a simple program that just keeps creating empty lists, run it until it crashes, then see how many lists it made.

I think the correct answer is "don't worry about it," which does accumulate technical debt, but assuming this is a learning or hobby project focused on game design, there's almost no chance this will matter.

If it were important to address this though, using inheritance is definitely not the correct solution. Neither is lazy initialization. It's best to avoid huge lists of heterogeneous objects and lists of nulls -- inheritance and lazy initialization both fail to do this.

Sex Bumbo
Aug 14, 2004

xgalaxy posted:

compile your .NET Core applications to native code!

What are the reasons one would do this? Seems like a big operation if it's just to make it compatible with other native code or platforms, especially given:

quote:

official .NET version that is actually cross platform

It's going to be running at a comparable speed and is still garbage collected and all that jazz, right? Or is it not? What does this buy beyond skipping a JIT compile?

Sex Bumbo
Aug 14, 2004
I'm trying to copy these Dreams slides: http://advances.realtimerendering.com/s2015/AlexEvans_SIGGRAPH-2015-sml.pdf

The final technique they use is based on point splatting: they came up with some fancy ways to store lots of points and use them to render a surface. But I don't understand how the points get rendered, exactly. On page 114 you can see them using point rendering, which is easy to understand but obviously not the final way they render them. As they mention, this is really inefficient; even with all the fancy sorting they use, rendering GL_POINTS just sucks for a GPU to do. On the next slide they mention:

quote:

simon wrote some amazingly nicely balanced CS splatters that hierarchically culled and refined the precomputed clusters of points, computes bounds on the russian roulette rates, and then packs reduced cluster sets into groups of ~64 splats.


So for an arbitrarily dense model and viewing direction, you're going to end up with a potentially huge number of points covering a single pixel and I assume the CS is reducing the points per pixel/subpixel. But I still don't really get how the points get rendered. Later it also mentions:

quote:

Interestingly, simon tried making a pure CS 'splatting shader', that takes the large splats, and instead of rasterizing a quad, we actually precompute a 'mini point cloud' for the splat texture, and blast it to the screen using atomics, just like the main point cloud when it's in 'microsplat' (tight) mode.

I also don't understand how atomics is useful here. It still needs to prefer points that are close to pixel centers, and also in front of other points, right? Can someone explain how this point splatting works?

Like, what is it taking the min of? It seems like depth, but then this is turning the algorithm around into "find the closest point in this area to the near z plane", which doing a ton of atomic mins sounds really bad at doing. Also you need more than the depth, you would need the id of the point that's the closest too, right? Which an atomic min isn't going to actually give.

Sex Bumbo fucked around with this message at 21:39 on Nov 30, 2015


Sex Bumbo
Aug 14, 2004

Xerophyte posted:

The rendering is done by mapping a single point id and z-value to each pixel, which is where atomicMin comes in for z-culling points in the compute shader that does this. The one point/pixel approach only works due to the monte carlo alpha: a point is either fully opaque or fully transparent. Each point id is then looked up and resolved to a G-buffer in a single full screen raster pass, which gets deferred shaded as usual.

If you just drew points with GL_POINTS, wouldn't this be the same? Like, isn't the depth buffer functionally doing atomic mins and writing attributes normally? And the point rendering sucks not because GL_POINTS is intrinsically awful (I'm sure it is) but because it's doing a humongous scatter onto a framebuffer with a shitton of overdraw, and I'm not sure how their CS solves this.

Also, how do they avoid gaps in the points like they show in the mini splat mode? Presumably they would scale them up, but then that's going to drastically increase the overdraw and also each point needs to scatter to multiple pixels, right? I can see this functionally working just by spitting out billboards everywhere and discarding randomly but it would be incredibly slow.

EEEEE: Thinking about it a little more: if their thread group writes to a chunk of memory in an image that's swizzled correctly, it isn't really scattering anything, and since their clouds are all in BVHs they'll still have a bit of gathering to do, but it's on the scale of LOD-adjusted clusters, and the clusters themselves are coherent. So the memory access doesn't sound all that bad put that way. I can still imagine a worst case where you're staring down a huge column of clusters, but maybe they have some occlusion culling for that.
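One way the "atomicMin keeps both depth and id" trick can work (an assumption about the approach, not Dreams' actual code): pack the depth into the high bits and the point id into the low bits of a single 64-bit word per pixel, so a single atomic min keeps the nearest depth and the winning point's id together. Here std::atomic with a CAS loop stands in for the GPU's atomicMin:

```cpp
#include <atomic>
#include <cstdint>

using PixelWord = std::atomic<uint64_t>;  // one word per framebuffer pixel

// Depth in the high 32 bits dominates the comparison; the id rides along
// in the low bits, so the min automatically records which point won.
void splat(PixelWord& pixel, uint32_t depth, uint32_t pointId) {
    uint64_t candidate = (uint64_t(depth) << 32) | pointId;
    uint64_t current = pixel.load();
    while (candidate < current &&
           !pixel.compare_exchange_weak(current, candidate)) {
        // `current` was refreshed by the failed CAS; loop until we either
        // install our value or see that a nearer point already won.
    }
}
```

A resolve pass can then unpack each pixel's low bits to look up the winning point's attributes for shading.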

Sex Bumbo fucked around with this message at 21:36 on Dec 1, 2015
