OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
In regards to the clip plane stuff:

A wide range of ATI cards has no hardware clip plane support, so using clip planes on them kicks you back to the software rasterizer. gl_ClipVertex support is spotty as well.

For near-plane clipping, there's a technique called oblique depth projection which basically lets you set the near plane to an arbitrary plane instead of one perpendicular to the camera, giving you the same effect.

http://www.terathon.com/code/oblique.php

There are some caveats with it: it breaks down when the plane gets too close to the camera, and I doubt it works properly if the portion of the plane within the view frustum isn't entirely in front of the near plane.
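If you go that route, the guts of the trick are just a rewrite of the third row of the projection matrix. A rough sketch along the lines of the code on that page (my own paraphrase, not drop-in code: assumes a column-major OpenGL projection matrix and a clip plane given in camera space, with sgn() as a small helper):

code:
// Replace the near plane of the current projection matrix with an arbitrary
// camera-space plane (a, b, c, d), per the Terathon article linked above.
static float sgn(float a) { return (a > 0.0f) ? 1.0f : ((a < 0.0f) ? -1.0f : 0.0f); }

void SetObliqueNearPlane(const float clipPlane[4])
{
    float m[16];
    glGetFloatv(GL_PROJECTION_MATRIX, m);

    // Corner point of the frustum opposite the clip plane, in clip space
    float q[4];
    q[0] = (sgn(clipPlane[0]) + m[8]) / m[0];
    q[1] = (sgn(clipPlane[1]) + m[9]) / m[5];
    q[2] = -1.0f;
    q[3] = (1.0f + m[10]) / m[14];

    // Scale the plane so the far plane stays usable, then stuff it into the third row
    float d = clipPlane[0]*q[0] + clipPlane[1]*q[1] + clipPlane[2]*q[2] + clipPlane[3]*q[3];
    float scale = 2.0f / d;

    m[2]  = clipPlane[0] * scale;
    m[6]  = clipPlane[1] * scale;
    m[10] = clipPlane[2] * scale + 1.0f;
    m[14] = clipPlane[3] * scale;

    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(m);
}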

MasterSlowPoke posted:

I'm writing a MD2 (Quake 2 model) importer for XNA right now. If you don't know, the MD2 format stores the texture coordinates with the triangle indices. This allows two triangles that share a vertex to not share a UV coordinate at that position. The problem I'm having is that I see no way to emulate that behavior in my VertexBuffer.
Build a list of unique point/UV combinations and reindex the triangles to that.
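Something along these lines, assuming you've already read the MD2 triangles into per-corner (position index, texcoord index) pairs; the names here are made up for illustration:

code:
#include <map>
#include <vector>
#include <utility>

// One output vertex = one unique (position index, UV index) combination.
struct OutVertex { int posIndex; int uvIndex; };

void Reindex(const std::vector<std::pair<int, int> > &corners,   // 3 entries per triangle
             std::vector<OutVertex> &outVerts,
             std::vector<unsigned short> &outIndices)
{
    std::map<std::pair<int, int>, unsigned short> remap;
    for (size_t i = 0; i < corners.size(); i++)
    {
        std::map<std::pair<int, int>, unsigned short>::iterator it = remap.find(corners[i]);
        if (it != remap.end())
        {
            outIndices.push_back(it->second);    // seen this combination before, reuse it
            continue;
        }

        OutVertex v;
        v.posIndex = corners[i].first;
        v.uvIndex = corners[i].second;

        unsigned short newIndex = (unsigned short)outVerts.size();
        outVerts.push_back(v);
        remap[corners[i]] = newIndex;
        outIndices.push_back(newIndex);
    }
}

The output vertex list is what you expand into your VertexBuffer (pulling positions and UVs by those indices), and outIndices goes straight into the IndexBuffer.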

OneEightHundred fucked around with this message at 03:01 on May 21, 2008


OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

The problem is that would cause the normals to be calculated incorrectly, as the normals of all the triangles that share that split vertex wouldn't be averaged. I know it's a small and probably undetectable error, but might as well do it right the first time.
Calculate the normals for the original points list, do the reindex after that and copy the normals.

EDIT -- You don't really even need to do that with MD2 because it stores the normals with the point. Points are stored as 3 bytes for the vertex position and a 1-byte lookup into a precomputed normals table.

http://tfc.duke.free.fr/coding/src/anorms.h
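For reference, decoding one frame vertex looks roughly like this (a sketch; struct layout per the MD2 spec, normal table from the anorms.h linked above):

code:
// MD2 compressed vertex: 3 bytes of position plus a 1-byte index into the
// 162-entry precomputed normal table.
struct MD2Vertex
{
    unsigned char v[3];
    unsigned char normalIndex;
};

extern const float anorms[162][3];   // contents of anorms.h

void DecodeVertex(const MD2Vertex &in,
                  const float frameScale[3], const float frameTranslate[3],
                  float outPos[3], float outNormal[3])
{
    for (int i = 0; i < 3; i++)
    {
        // Position is dequantized with the per-frame scale and translate
        outPos[i] = (float)in.v[i] * frameScale[i] + frameTranslate[i];
        outNormal[i] = anorms[in.normalIndex][i];
    }
}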

Out of curiosity, why are you using MD2? MD3's a higher-precision format which has the same capabilities and then some.

OneEightHundred fucked around with this message at 06:31 on May 21, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'd recommend staying away from Half-Life MDL. It's an awful clusterfuck of a format. If you want a skeletal format that's not too hard to write a parser for, try MD5, Unreal PSK/PSA, or Cal3D.

Half-Life and Source SMD files are easy to parse, but they're very far from what you want the internal representation to be so I'd recommend writing something to compile it to your own format if you want to use it.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
The problem with HL MDL is that it's a typical Valve format, one that fails to separate the implementation details from the file format itself. It's not just an old format, it's a BAD format.

I wouldn't describe COLLADA as easy to parse either. Even with FCollada, which does a very good job of parsing it, practically everything has an additional layer of complexity that you have to work through, and everything is indexed separately, which means it takes a good deal of work to get the data into a usable form. I'm writing a COLLADA importer for the project I'm working on right now, and it's definitely the most difficult format to work with of the ones I mentioned in my last post.

OneEightHundred fucked around with this message at 23:05 on May 21, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Adhemar posted:

Still, I think it's very useful to learn because it's growing as a standard in the industry.
It's definitely useful, but for an initiate I think it may be a bit much to stomach compared to other formats.

quote:

A good exercise might be to convert some other model format to COLLADA (using FCollada), or the other way around.
Converting from COLLADA is an order of magnitude less annoying than converting to it.

TSDK posted:

The single biggest selling point for COLLADA is that there are open source exporters for both Maya and Max.
Not really; FBX occupies the same niche and is closed source. The big selling points of both are that they let you exchange data between modeling software without losing huge amounts of data (that is, you DON'T have to extend the exporters), and that having one common format means fewer blockades when trying to author content.

Authoring plug-ins for multiple versions of multiple modeling packages is a severe pain in the ass compared to supporting one format. Being able to export one format and have the whole middleware market support it is a great selling point for the modeling software developers.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

TSDK posted:

Being closed source is a major disadvantage when putting toolchains together. I've yet to see one 'standard' file format that didn't need either extending or bug-fixing in one way or another to meet all of the requirements for a project.
FBX and COLLADA both contain a shitload of information; I'm really not sure what more you'd want out of them.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I've always used timeGetTime, with whatever caveats that entails. You need to use timeBeginPeriod to raise its resolution, though.
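Something like this, assuming a typical Windows game loop (IsRunning and RunFrame are placeholders for whatever your loop actually does):

code:
#include <windows.h>
#include <mmsystem.h>   // link against winmm.lib

extern bool IsRunning();
extern void RunFrame(float deltaSeconds);

void GameLoop()
{
    // Raise the timer resolution to 1ms; always pair with timeEndPeriod
    timeBeginPeriod(1);

    DWORD lastTime = timeGetTime();
    while (IsRunning())
    {
        DWORD now = timeGetTime();
        float deltaSeconds = (float)(now - lastTime) * 0.001f;
        lastTime = now;

        RunFrame(deltaSeconds);
    }

    timeEndPeriod(1);
}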

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Jo posted:

If I use an early out algorithm for detection, can I expect a significant performance improvement in a map with 60-70 shadow casters?
As long as you are actually culling a good portion of the potential set, it is generally a good idea to use cheap calculations to skip complex ones.

quote:

My other alternative is to build a list of shadow casters and discard all objects outside the light sphere rectangle.
Can you do both? i.e. cull off shadow casters to build the list? If you're ever going to do calculations on the shadow list multiple times, it's probably a good idea to store the shadow caster list because it scales a lot better.

quote:

Second question:
What kind of overhead does building a render list entail? I'd like to build one every time a dynamic light is moved so performance will be snappy when they're still. Is this a silly thing to do?
It depends on what you're doing. The best thing to do is to have two algorithms: one that generates shadow information that minimizes rendering time by culling out unneeded shadowing information (for still lights, run once), and one that processes the shadow casters faster but renders slower (for dynamic lights, run repeatedly).

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Crash Bandicoot posted:

Unfortunately once the Texture is instantiated I am not sure how to pass it the actual image data.
If it works anything like the C++ version, then the Texture is an interface with a method called LockRect that lets you access a portion of the texture (including writing to it). Call UnlockRect when you're done with it.

EDIT -- In MD3D they're called LockRectangle and UnlockRectangle respectively. Apparently the MSDN pages for them are broken and the only documentation for them is in Japanese, so good luck.
http://msdn.microsoft.com/en-us/library/bb152978(VS.85).aspx
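The C++ side looks roughly like this, and the managed LockRectangle/UnlockRectangle calls should map onto the same idea (a sketch that assumes a D3DFMT_A8R8G8B8 texture and tightly packed 32-bit source pixels):

code:
#include <d3d9.h>
#include <string.h>

// Copy raw image data into mip level 0 of an existing texture.
void FillTexture(IDirect3DTexture9 *texture, const unsigned char *srcPixels,
                 UINT width, UINT height)
{
    D3DLOCKED_RECT locked;
    if (FAILED(texture->LockRect(0, &locked, NULL, 0)))
        return;

    // Pitch can be wider than width*4, so copy row by row
    for (UINT y = 0; y < height; y++)
    {
        memcpy((unsigned char *)locked.pBits + y * locked.Pitch,
               srcPixels + y * width * 4,
               width * 4);
    }

    texture->UnlockRect(0);
}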

OneEightHundred fucked around with this message at 23:20 on Jun 8, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

krysmopompas posted:

ARB_texture_non_power_of_two

Otherwise, round up and waste the extra space (or find a creative use for it.)
You can also use NV_texture_rectangle (which ATI supports too on the vast majority of their hardware, but reports as EXT_texture_rectangle). It's faster than non-power-of-two textures for the most part, but it has its own caveats, like using pixel-based texture coordinates instead of normalized ones, and no mipmapping.

Non-power-of-two textures are glacially slow on a lot of hardware.
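Usage is mostly the same as GL_TEXTURE_2D aside from the target and the pixel-based coordinates. A rough fixed-function sketch (upload and draw lumped together for brevity; the NV/EXT/ARB rectangle tokens all share the same value):

code:
// Rectangle textures: no mipmaps, no GL_REPEAT, and texcoords run 0..width / 0..height.
void DrawRectangleTexture(GLuint tex, int width, int height, const void *pixels)
{
    glEnable(GL_TEXTURE_RECTANGLE_ARB);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f,         0.0f);          glVertex2f(0.0f, 0.0f);
    glTexCoord2f((float)width, 0.0f);          glVertex2f(1.0f, 0.0f);
    glTexCoord2f((float)width, (float)height); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f,         (float)height); glVertex2f(0.0f, 1.0f);
    glEnd();
}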

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Thug Bonnet posted:

Can someone point me toward a good, no-bullshit discussion of quaternions that includes practical code examples? I always end up finding either high-level descriptions of the math involved or API-specific implementations. I'd love to see a bridge between the two somewhere. I haven't found the gamedev articles to be particularly helpful (they seem to suffer from the same problems).
Quaternions can represent a few other transformations too, but they're usually used for rotations, so I'll stick to that:

As for the basic concept, which I found makes them a lot easier to digest: any rotational transformation can be represented as a rotation around an axis in space, and a quaternion is exactly that. The advantage is that the same four numbers are also valid as several other types of values (e.g. as coordinates on a unit sphere and as a complex-number-like quantity), so there are a lot of valid ways to manipulate them.

The X, Y, and Z components are the rotation axis multiplied by the sine of half the rotation angle; the W component is the cosine of half the rotation angle.

Getting the opposite transformation can be done by negating the X, Y, and Z components (which reverses the rotation direction), or by negating the W component (which flips the rotation axis). For why it does those, see the previous paragraph, and remember that negating a sine result is the same as negating the angle, and negating a cosine result is the same as a 180-degree offset of the angle.

Since these operations cancel each other out, negating all 4 components will actually result in the same transformation, which is useful if you're trying to manipulate the sign of the W component.

Since a unit quaternion is a point on a 4D sphere (X^2 + Y^2 + Z^2 + W^2 = 1), it's easy to fix error accumulation by just renormalizing it; it's easy to store compactly because you can drop the W component and recalculate it (and force its sign); and you can interpolate smoothly with spherical lerp.

Treated as complex-number-like values, multiplying two quaternions produces the same result as combining their rotations the way a matrix concatenation would.
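To put that into code, a bare-bones rotation-only version looks something like this (my own sketch; unit quaternions assumed throughout):

code:
#include <math.h>

struct Quat { float x, y, z, w; };

// Axis-angle to quaternion: xyz = axis * sin(angle/2), w = cos(angle/2).
// The half-angle is why q and -q end up being the same rotation.
Quat QuatFromAxisAngle(const float axis[3], float angleRadians)
{
    float s = sinf(angleRadians * 0.5f);
    Quat q = { axis[0] * s, axis[1] * s, axis[2] * s, cosf(angleRadians * 0.5f) };
    return q;
}

// Opposite rotation for a unit quaternion: negate the vector part (the conjugate).
Quat QuatConjugate(const Quat &q)
{
    Quat r = { -q.x, -q.y, -q.z, q.w };
    return r;
}

// Renormalize to fix accumulated error (keeps x^2 + y^2 + z^2 + w^2 = 1).
Quat QuatNormalize(const Quat &q)
{
    float invLen = 1.0f / sqrtf(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    Quat r = { q.x*invLen, q.y*invLen, q.z*invLen, q.w*invLen };
    return r;
}

// Concatenate rotations: (a * b) applies b first, then a, like matrix concatenation.
Quat QuatMultiply(const Quat &a, const Quat &b)
{
    Quat r;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    return r;
}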


If you need some code, you can rip off mine:
http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_rotation.hpp?view=log
http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_math_quat.hpp?view=log
http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_math_quat.cpp?view=log
http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_math_matrix.cpp?view=log (Has the matrix-to-quaternion code)


If you want some no-bullshit explanation of the math involved and formulae:
http://www.euclideanspace.com/maths/geometry/rotations/conversions/index.htm

OneEightHundred fucked around with this message at 10:19 on Jun 13, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Intel Penguin posted:

It's tampering I'm looking to prevent. Though I doubt anyone will have the motivation to crack my game.

My project is so hobbyist. I don't plan on selling it, and chances are it won't go beyond my circle of friends. I was just curious.
If it's not going to be extremely popular, then you don't really need to do much in the way of hack protection. If it is, then you're probably hosed no matter what you do, and would want to outsource it to someone like Even Balance that can at least pretend to focus on the intricacies of cheat-proofing full-time.

Delayed-response poisoning strikes me as one of the best solutions because it makes the trial-and-error process much more time-consuming: In single-player games, remove something critical that makes the game unbeatable. In multi-player games, delay bans for a long time after the cheats are detected. In either, act buggy.

Personally, for multiplayer games, I really think the solution is to make a better player moderation system. I've seen too many games where, for example, kick votes invariably fail to go through because they require a majority vote and the people who aren't getting griefed or steamrolled by a cheater can't be bothered to vote. There has to be a better way.

OneEightHundred fucked around with this message at 11:12 on Jun 21, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'm not saying that there needs to be a better way to do a kick vote; if anything, I think kick votes have repeatedly proven their uselessness. Compulsory voting would be far too easy to abuse for griefing.

There are other forms of player moderation, like karma-type systems, but the only one that doesn't suck horribly is "have a shitload of admins." I'm trying to come up with a better way of letting the players clear the shitheads off servers, because if something like that existed, problematic cheaters wouldn't get far, and it would help get rid of other problems as well, e.g. griefers.

OneEightHundred fucked around with this message at 22:15 on Jun 22, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Plaintext saves aren't a great idea for numerous reasons, especially if you're using floating point numbers which lose information when converting to/from ASCII. Regardless, hacking savegames is probably the last thing you should worry about. It's more likely someone will just write a trainer, in which case anything you do to the savegame data is kind of moot.

I really have to ask: What do you expect to get out of all this security? Is the title you're releasing even big enough for people to waste their time hacking it?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Gary the Llama posted:

Should the Sprite class handle the rendering or the Player class?
How you make your renderer really depends on what you're trying to do with it. If you've only got a few objects on screen with a really simple game, you can get away with spamming draw calls and rebinds and not tightly optimizing everything for efficiency.

You shouldn't have to rewrite code for things that render the same way, but that's not to say you can't have players and any other sort of sprite just call one common sprite draw routine and still be treated as renderables.

OneEightHundred fucked around with this message at 21:00 on Jun 30, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

guenter posted:

What's the benefit of the Direct3D effect framework? I'm using it but only trivially. I understand one of the benefits is being able to specify multiple techniques per effect, but that's about all I understand in terms of differences.
It has the essentials of a data-driven material system, which means it helps get the information of how to render surfaces out of the renderer code. That translates mainly to more flexibility in materials and not having to hard-code as much.

quote:

I also get the impression that an effect (an .fx file) is not at all the same thing as what people generally refer to as effects (i.e. bump mapping or something).
The colloquial definition of "effect" is practically anything that makes something look better, which can happen as a single line of shader code or a mountain of code scattered throughout the renderer. So no. It's a poor choice of name, but "material" wouldn't be any more correct when FX files can apply to any renderable (e.g. UI components and post-processing), not just "material" surfaces, and "shader" means something else in D3D-speak.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
You could try Ogre3D and Irrlicht.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
It's not really that hard to make a 2D game in a 3D engine: just keep the camera pointed down one axis and make sure you never give anything a velocity or position that would move it off that plane, or put up a pair of invisible walls. Use camera-aligned sprites for the graphics (which practically every 3D engine supports). If anything, it gives you some artistic flexibility if you ever want to make elements of it 3D even when the gameplay is strictly 2D.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Entheogen posted:

is there any way to use glMultiDrawElements with VBOs? Specifically I would like to store my index and count arrays on GPU memory, but still use this call. Is this even possible?

EDIT: I checked and it worked with VBOs, but the indices were still on the client side.
NVIDIA and ATI both say the best thing to use is glDrawRangeElements. Based on fairly intensive testing (by myself and the person behind Sauerbraten), it is, even if you need to make repeated calls to it. OpenGL does not have the context-switching overhead of D3D, so draw calls are cheap as long as you're not changing state, to the point where MultiDrawElements provides almost no speed benefit. DrawRangeElements does, though; according to the vendors it's because it lets the GPU pack the indices into 16-bit values.
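For reference, a sketch of using it with both the vertex data and the indices in VBOs; once an element array buffer is bound, the last argument is just a byte offset into it (the buffer and range variables here are placeholders):

code:
// Draw an indexed batch with vertex data and index data both living in VBOs.
void DrawBatch(GLuint vertexVBO, GLuint indexVBO,
               GLuint minVert, GLuint maxVert,
               GLsizei indexCount, GLsizei firstIndex)
{
    glBindBuffer(GL_ARRAY_BUFFER, vertexVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(float) * 3, (const GLvoid *)0);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);

    // Indices firstIndex..firstIndex+indexCount-1 reference vertices minVert..maxVert;
    // the tighter that range, the more the driver can do with it.
    glDrawRangeElements(GL_TRIANGLES, minVert, maxVert, indexCount, GL_UNSIGNED_SHORT,
                        (const GLvoid *)(firstIndex * sizeof(GLushort)));
}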

OneEightHundred fucked around with this message at 07:21 on Jul 8, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Namgsemterces posted:

I had an idea to use HLSL and make a certain color a special transparent color.
You could use the alpha channel for that...

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Namgsemterces posted:

Thanks, but you didn't understand my question. If I try to cover up an area inside my clipping rectangle but outside of the area I want rendered (note: I'm talking about non-rectangular shapes, like a rounded rectangle) with something of alpha 0, the part I want covered up would still show up because the alpha 0 would be meaningless. I need to overwrite that rounded area with something that is opaque and then later change that opaque color to an alpha 0. Basically, I'm talking about a mask.
You'd write to the framebuffer with 1.0 alpha and render with a blend function that multiplies the source color with the destination alpha.

Of course, you could also use the stencil buffer, which is designed for this sort of thing.
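A sketch of both approaches (my own illustration; DrawMaskShape and DrawContent are placeholders, alpha is assumed cleared to 0 and stencil to 0 beforehand, and you need alpha/stencil bits in the framebuffer for the respective paths). I've used GL_ONE_MINUS_DST_ALPHA for the destination factor so whatever was already in the framebuffer stays put outside the mask:

code:
extern void DrawMaskShape();   // draws the rounded rectangle
extern void DrawContent();     // draws the stuff to be clipped to it

void DrawWithAlphaMask()
{
    // Pass 1: write alpha = 1 where the mask shape is, leave color alone
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    DrawMaskShape();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // Pass 2: blend the content by destination alpha
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    DrawContent();
    glDisable(GL_BLEND);
}

void DrawWithStencilMask()
{
    glEnable(GL_STENCIL_TEST);

    // Pass 1: set stencil = 1 where the mask shape is, no color writes
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    DrawMaskShape();

    // Pass 2: only draw content where stencil == 1
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    DrawContent();

    glDisable(GL_STENCIL_TEST);
}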

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Centipeed posted:

Also, why is OpenGL for Windows not available for download anywhere? There are a few links here or there, but they all seem to be third party links as opposed to links from whoever actually produces OpenGL for Windows.
The libraries and headers are licensed off to developers of development tools. Visual Studio comes with the OpenGL .lib files and headers, for example. You can get the latest extension headers here:
http://www.opengl.org/registry/

If you really need the main headers (i.e. GL.h), download Mesa; the headers are compatible. If you want the library, I think Mesa includes it; if not, just use LoadLibrary, GetProcAddress, and wglGetProcAddress, which you should be doing anyway.

If you mean the runtime, there is no standard one; implementations of the spec are produced and distributed by the people who implement it. In other words, it comes with your video card drivers.
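The usual loading pattern looks something like this: wglGetProcAddress covers extension and post-1.1 entry points (and needs a current context to work), while the core 1.1 functions only come out of opengl32.dll itself:

code:
#include <windows.h>
#include <GL/gl.h>

// Resolve a GL entry point by name, falling back to opengl32.dll for core 1.1 functions.
void *GetGLProc(const char *name)
{
    void *proc = (void *)wglGetProcAddress(name);
    if (!proc)
    {
        static HMODULE glModule = LoadLibraryA("opengl32.dll");
        proc = (void *)GetProcAddress(glModule, name);
    }
    return proc;
}

// Typical usage, with the typedefs from glext.h:
// PFNGLGENBUFFERSPROC qglGenBuffers = (PFNGLGENBUFFERSPROC)GetGLProc("glGenBuffers");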

OneEightHundred fucked around with this message at 21:00 on Jul 22, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

I'm pretty much done with my 3d model importer, so now I'm off to the next step: the levels. Can anyone recommend a format and/or editor to use? Really anything could help, it's probably far more complex than I think.
Dealing with level editing is the biggest pain in the ass of any 3D engine. If you want something easy to start with, then Quake 3 BSP is probably your best bet. It's a straightforward format, the stock tools are very easy to extend, and the editor is reasonably functional.

That's not to say it's good: Radiant is a train wreck of code, I try to avoid dealing with it as much as I can, and adding features to it is an exercise in frustration. Still, you can make it dance to your tune if you try hard enough.

The alternatives are not good. There's a reason there are hardly any good level editors out there, and that's because writing one is extremely time-consuming, especially if it does convex solid editing instead of doing what most 3D engines do now, which is to import a model from Maya and use the editor just to place lights, portals, and entities.

The only alternatives I'd even consider:
- Quark, which has a generally confusing interface and is written in Delphi and Python. Last time I tried it, it was also buggy, with random crashes and geometry corruption.
- Getic, which looks promising and may be going open source soon.
- BSP, which has a steep learning curve and doesn't have Quake 3 support, but does have a sane code base.

Don't bother with GtkRadiant: it's bloated as hell, frustrating to even get to compile, and even more frustrating to modify due to severe overengineering. Don't bother with q3map2 either; it's also bloated and slow.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

I'm assuming you like Getic the most? You don't have anything good to say about Quark and BSP looks like it's far too ancient (it not being updated in almost a year is a bad sign too). Have you tried DarkRadiant? I've heard some good stuff about it.
I haven't tried DarkRadiant, but it's based on GtkRadiant which means I have low hopes for it. I haven't actually tried Getic yet, but it looks stable.

Quark may have resolved the major bugs since TGE uses it for a lot of stuff. It can't hurt to try it, but it wouldn't be my first pick.

Vanilla Q3Radiant is probably the best starting point if only because it works and it compiles without a problem. The options for level editors right now are not very good, but Q3Radiant gives you convex solid, parametric curve, and polygon mesh primitives, which is enough for most purposes. The only things it's lacking are portals, which you could hack in with brushes if you really wanted to, and a decent entity editor (the stock one is really bare-bones).

On the tools side, q3map IS very good, particularly the severely underutilized -vlight option, which basically lets you increase your lightmap resolution for free.

OneEightHundred fucked around with this message at 17:16 on Jul 28, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

more falafel please posted:

This doesn't exist, and likely never will.
It does exist, but it has obvious drawbacks, like being forced to use only whatever effects and prediction methods (if any) the built-in client-side gamecode supports. Quake and Quake 2 would be examples of this, since any mods automatically support multiplayer as long as you don't make dumb design decisions like setting globals in per-player code.

Warcraft 3 modding does it as well, but that's a bit of a stretch. Honestly, Warcraft 3 may be one of the best choices for a tower defense type game simply because it comes with a massive audience ready to go.

OneEightHundred fucked around with this message at 20:17 on Jul 28, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Suburban Krunk posted:

I want to learn a programming language and learn about game development at the same time. I know a small amount of Java, and feel I have a basic understanding about some basic programming fundamentals. On the plus side, I have a strong background in math (Up to Vector Calculus). Can anyone recommend me a book or site that would teach me a programming language and game development at the same time? From reading around a little bit, it seems as if C# is the language to learn, and something about XNA (Not sure what this is), is this correct?
What kind of game?

I'm still going to recommend modding for first-timers, because it means there are design decisions, content, and mistakes you don't have to make in order to hit the ground running. Modding something like UT2k4 or Source comes with the advantage of a massive number of tutorials.

SheriffHippo posted:

Also, what you learn about programming and game programming in ActionScript will be transferable to C# and even C++.
I really can't stress one related point enough: Learning programming languages is kind of easy, learning APIs and learning how to program are the hard parts. Most good programming languages are only as difficult as the frameworks you're using them with.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

I want to be able to use the depth buffer values to extrapolate the eye-space coordinates of each texel of the depth buffer.
The perspective correction is determined by the W value, so leaving it as 1.0 is not going to give you the right values.

I'm not good enough with matrix math to give you the full answer, but you can recalculate the HPOS value's W coordinate, and that's easy enough: The HPOS Z value (a.k.a. the depth) is rescaled from nearplane-farplane to 0-1, so just denormalize that for the absolute depth value. The HPOS W is just calculated by negating that value.
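For a standard perspective projection with the default [0,1] depth range, undoing that rescale looks like this; the depth buffer isn't a straight lerp between the near and far planes, so the formula is a bit more involved (standard result, offered as a sketch):

code:
// Recover the absolute eye-space depth (equivalently, the clip-space W, which is
// the negated eye-space Z) from a [0,1] depth buffer value.
float EyeDepthFromDepthBuffer(float depth, float zNear, float zFar)
{
    float zNDC = depth * 2.0f - 1.0f;   // [0,1] -> [-1,1]
    return (2.0f * zNear * zFar) / (zFar + zNear - zNDC * (zFar - zNear));
}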

OneEightHundred fucked around with this message at 23:23 on Aug 7, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

schnarf posted:

I'm working on generating a PVS set for a portal engine. The calculations I'm doing are for load-time, so each sector will have a set of potentially visible sectors that can be used during run-time. My algorithm is pretty standard I think: For the particular sector we're generating a PVS for, clear out the clip plane, then recurse for each portal in that sector. In the next step we form a clipping volume made of a bunch of planes, which is created from the portal we came from and every portal in that new sector. Then once we have a clip volume, we clip that to each successive portal so the clip volume gets smaller and smaller as we recurse more.
Portal engines normally work by casting the view frustum through portals, clipping them when they go through, and they do this at runtime.

I can tell you how Quake's vis tool solves sector-to-sector, which is first solving portal-to-portal visibility, and then just making a visibility bitfield. A sector is considered visible from another sector if any portal in one can see any portal in another.

Portal-to-portal I'm a bit hazy on, but the gist is to find a plane such that both portals are on the plane, and any occluding portals are either behind one of the portals or completely on either side of that plane. The candidate planes come from essentially every edge-point combination in the list of offending portals and the portals themselves. Needless to say, this is SLOOOOW. If you can find such a plane, then the two portals can see each other.

(Note that a "portal" in this sense isn't just from open areas into open areas, but also from open areas into solids, i.e. walls, which are considered occluding portals)

q3map has three vis methods: the old portal system, a newer "passage" system that I know nothing about, and a third method that uses both. You can check out visflow.c in the Quake 3 source code for details, but it really doesn't explain the implementation at all, so it's extremely hard to read.

OneEightHundred fucked around with this message at 06:57 on Aug 12, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Okay, the way vis works is a bit different than I thought (and I'm still trying to figure it out...), but here's my current understanding:

Portals build a list of other visible portals by starting in one sector, and trying to see through a chain of other portals to other sectors. The separating plane is used to determine how much of the portal is visible, if any.

Say you had a chain of sectors/portals as follows:

A -> pAB -> B -> pBC -> C -> pCD -> D -> pDE -> E

... and you're trying to determine what portals are visible from pAB.

pAB can always see pBC because they share a sector.

To determine how much of pCD is visible, it tries creating separator planes using edges from pAB, and points from pBC. A separator plane is valid if all points on pAB are on one side, and all points of pBC are on the other. pCD is then clipped using that plane, so that everything remaining is on the same side as pBC. If there is nothing remaining, then that portal can't be seen.

To determine how much of pDE is visible, the same process is done, except the separating planes are calculated using whatever's left of pCD instead of pBC.

Hope that makes sense.
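The clipping step itself is just a polygon-against-plane clip (standard Sutherland-Hodgman against a single plane). A sketch with my own Vec3/Plane types:

code:
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 normal; float dist; };   // front side: dot(normal, p) - dist >= 0

static float PlaneDist(const Plane &pl, const Vec3 &p)
{
    return pl.normal.x*p.x + pl.normal.y*p.y + pl.normal.z*p.z - pl.dist;
}

// Clip a portal winding against a separator plane, keeping the front side.
// An empty result means none of that portal is visible through the chain.
std::vector<Vec3> ClipWinding(const std::vector<Vec3> &in, const Plane &pl)
{
    std::vector<Vec3> out;
    size_t count = in.size();
    for (size_t i = 0; i < count; i++)
    {
        const Vec3 &a = in[i];
        const Vec3 &b = in[(i + 1) % count];
        float da = PlaneDist(pl, a);
        float db = PlaneDist(pl, b);

        if (da >= 0.0f)
            out.push_back(a);                      // keep front-side points
        if ((da >= 0.0f) != (db >= 0.0f))
        {
            float t = da / (da - db);              // edge crosses the plane: add the intersection
            Vec3 p = { a.x + (b.x - a.x)*t, a.y + (b.y - a.y)*t, a.z + (b.z - a.z)*t };
            out.push_back(p);
        }
    }
    return out;
}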

OneEightHundred fucked around with this message at 18:46 on Aug 12, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Murodese posted:

Is this the right way to do it and I'm missing something in the player's transformation step, or am I retarded for doing it like this? :(
I'd recommend doing matrix math with your own code and just using glLoadMatrix; it makes things easier to debug (like if one of your matrix multiplies is backwards).

code:
// i would think that this should set the camera's position to m_zoomlevel units directly behind
// the object
camera->m_position = camera->m_attached->m_position + (camera->m_attached->m_orientation.getFront()
   * -camera->m_zoomLevel); 
Generally you want to do zoom in the projection matrix by changing the FOV.
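i.e. when you build the projection, scale the field of view by the zoom factor instead of pulling the camera back. A sketch (my own helper around gluPerspective; a zoom of 2 gives 2x magnification by halving the tangent of the half-angle):

code:
#include <math.h>
#include <GL/glu.h>

void SetupZoomedProjection(float baseFovYDegrees, float zoom, float aspect,
                           float zNear, float zFar)
{
    const float degToRad = 0.0174532925f;

    // Narrow the FOV so the image is magnified by 'zoom'
    float halfTan = tanf(baseFovYDegrees * 0.5f * degToRad) / zoom;
    float fovY = 2.0f * atanf(halfTan) / degToRad;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovY, aspect, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}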

OneEightHundred fucked around with this message at 20:11 on Aug 12, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

In other news, the OpenGL 3.0 spec is out, and it looks crappy...
Went from "major refactoring of the API to get rid of legacy cruft" to "yeah, we marked those features deprecated, we promise we'll remove them next time, by the way, here are some more extensions made core."

Might as well just call it OpenGL 2.2.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

Weren't they planning at some point to switch to a more object-oriented approach, instead of this whole freaking "bind identifiers to stuff then switch states using those identifiers" mentality?
They added that as an extension:
http://www.opengl.org/registry/specs/EXT/direct_state_access.txt
... which is core in OpenGL 3.

As far as the evolutionary fixes go, what they did was introduce a deprecation model. A context can be created as "full" or "forward compatible", with deprecated features not available in "forward compatible" contexts. A lot of the legacy cruft is gone in "forward compatible", but the immutability and asynchronous object stuff isn't there yet.
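For reference, asking for a forward-compatible context on Windows goes through WGL_ARB_create_context, roughly like this (you need a legacy context current first so wglGetProcAddress works; error handling and the pixel format setup are omitted):

code:
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   // WGL_CONTEXT_* tokens, PFNWGLCREATECONTEXTATTRIBSARBPROC

HGLRC CreateForwardCompatibleContext(HDC hDC)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;   // driver doesn't do GL3-style context creation

    const int attribs[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hDC, NULL, attribs);
}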

OneEightHundred fucked around with this message at 23:50 on Aug 12, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

I'm hoping for something like:

Texture my_texture;
my_texture.setParameter(GL_TEXTURE.GL_MIN_FILTER, GL_TEXTURE.GL_LINEAR);

I don't know... maybe Java has spoiled me...
OpenGL's designed to be compatible with non-object-oriented languages, including vanilla C. What is an improvement is that you can actually write object-oriented wrappers without having to fuck with the state machine constantly so that you don't clobber the selectors, and you can execute most operations without side effects. It's not like it's a huge advantage when you're not calling everything through a device object and the syntax for everything else is really light.

It's not like there are really that many object types in OpenGL either, and you'd probably want to write a wrapper anyway if you ever wanted to port to D3D. Aside from that, it's not really much more work to do this:

code:
VertexBuffer *vb = glDevice->NewVertexBuffer();
void *mem = vb->Map(GL_WRITE_ONLY);
<stuff>
vb->Unmap();
Than it is to do this:
code:
GLuint vb;
glGenBuffers(1, &vb);
void *mem = glMapNamedBufferEXT(vb, GL_WRITE_ONLY);
<stuff>
glUnmapNamedBufferEXT(vb);
But it is more work to do this shit:

code:
GLint oldBuffer;
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &oldBuffer);
GLuint vb;
glGenBuffers(1, &vb);
glBindBuffer(GL_ARRAY_BUFFER, vb);
void *mem = glMapBufferARB(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
glBindBuffer(GL_ARRAY_BUFFER, oldBuffer);
<stuff>
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &oldBuffer);
glBindBuffer(GL_ARRAY_BUFFER, vb);
glUnmapBufferARB(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, oldBuffer);
... which makes wrappers a necessity just to keep your sanity.

What's ironic is that they STILL don't let you update textures directly, but they did find a way to work in stuff like deprecating passing anything other than zero to the border parameter of glTexImage2D. Yeah, good work Khronos, way to design that technology.

OneEightHundred fucked around with this message at 02:25 on Aug 13, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Shazzner posted:

welp I guess long live direct3d
Well, it's still a bit of a hard call: they did add instancing support, and geometry shaders are still available on hardware that supports them, so it's caught up to Direct3D 10 in terms of features (and with WinXP support, so developers don't have to write two renderers).

Direct3D 11 looks like it's set to break new ground in being underwhelming, so there's room to maneuver. Regardless, D3D is going to stay on top in Windows development just because of inertia. OpenGL had time to capitalize on D3D's overpriced draw calls long before D3D10 fixed them; it didn't do it then, and it's not going to beat D3D to the punch tomorrow, so it's pretty much doomed to second place on Windows at this point.

OneEightHundred fucked around with this message at 05:16 on Aug 13, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

captain_g posted:

Do directx 10 and 11 still continue the COM-style?
Of course. Microsoft needs to keep pushing their technology.

11's features are basically:
- Multithreaded rendering. Who gives a shit, sending things to D3D is not expensive any more; you don't need to parallelize your D3D calls.
- Tessellation. Because developers were gladly willing to surrender control for the sake of renderer speed in 2001, I'm sure they'll do it this time around; it's not like relief mapping actually works or anything.
- Compute shaders, which OpenGL has mechanisms for already.
- Order-independent transparency. Yay, state-trashing. :barf:

OneEightHundred fucked around with this message at 07:34 on Aug 13, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

TSDK posted:

Submitting batches is actually pretty damn expensive CPU-wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems.
D3D10's draw calls are a good deal cheaper (or at least, that was the goal). Even without that, I don't really see a huge advantage over just having a primary rendering thread that exists solely to throw commands at D3D. It's not an operation that takes up enough time to really see much of a gain from being parallelized, as opposed to physics and animation which are currently the big CPU drainers.

quote:

It's a lot more likely that they're using an accumulation buffer with a modest number of samples - something in the area of 8-16 fragments per pixel.
OpenGL's had accumulation buffers in the spec for ages, so this isn't really amazing. Just means that suddenly Microsoft has decided that it would be nice to have on consumer cards.

quote:

What this really means is that your content management thread won't need to be so chatty with your rendering thread
I haven't really had a problem with this, it's just more commands to throw on the render queue. If preloading is a problem, map everything, thread the loading, then unmap everything when it's done.

Edit:

StickGuy posted:

It will also be helpful for rendering intersecting transparent geometry without having to resort to tesselation to get the right result.
Well, at the same time, developers don't seem to have the hard-on for transparency that they did in the Quake 2/3 days. The Unreal engine doesn't really even bother sorting transparent objects, because "transparent objects" tends to mean "particles" and little more.

OneEightHundred fucked around with this message at 16:53 on Aug 13, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

MasterSlowPoke posted:

I'm trying to implement Quake 3's .shader scripts into a HLSL shader and I'm not quite sure how I should do it.
Your best bet is to create a "signature" to avoid duplicating shaders. The stage limit will actually vary:

- If a stage is using rgbgen or alphagen with a non-static value, then you'll need to allocate a texcoord to the color. If it's static, you can bake it into the pixel shader.
- If a stage is using a tcgen, then you need to allocate a texcoord to passing the generated texcoords.
- Doing tcmod transforms in the pixel shader may allow you to save texcoords, but is slower and should be a last resort.

All tcmods except turb can be combined as a 2x3 matrix, so it's best to just do that in the front-end and pass it as a uniform.

tcmod turb fucks everything up and you should probably just ignore it.
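The composition itself is cheap to do on the CPU each frame. A sketch of folding scroll/scale/rotate into one 2x3 texcoord matrix (my own illustration, not Q3's code; assumes rotation about the texture center, which is what Q3 does, with time in seconds):

code:
#include <math.h>

// Affine texcoord transform: u' = m[0]*u + m[1]*v + m[2], v' = m[3]*u + m[4]*v + m[5]
struct TcMatrix { float m[6]; };

TcMatrix TcIdentity()
{
    TcMatrix t = { { 1, 0, 0,  0, 1, 0 } };
    return t;
}

// Concatenate: the result applies 'a' first, then 'b'
TcMatrix TcConcat(const TcMatrix &a, const TcMatrix &b)
{
    TcMatrix r;
    r.m[0] = b.m[0]*a.m[0] + b.m[1]*a.m[3];
    r.m[1] = b.m[0]*a.m[1] + b.m[1]*a.m[4];
    r.m[2] = b.m[0]*a.m[2] + b.m[1]*a.m[5] + b.m[2];
    r.m[3] = b.m[3]*a.m[0] + b.m[4]*a.m[3];
    r.m[4] = b.m[3]*a.m[1] + b.m[4]*a.m[4];
    r.m[5] = b.m[3]*a.m[2] + b.m[4]*a.m[5] + b.m[5];
    return r;
}

// tcmod scroll <su> <sv>
TcMatrix TcScroll(float su, float sv, float time)
{
    TcMatrix t = { { 1, 0, su * time,  0, 1, sv * time } };
    return t;
}

// tcmod scale <su> <sv>
TcMatrix TcScale(float su, float sv)
{
    TcMatrix t = { { su, 0, 0,  0, sv, 0 } };
    return t;
}

// tcmod rotate <degreesPerSecond>, rotating about the texture center (0.5, 0.5)
TcMatrix TcRotate(float degPerSec, float time)
{
    float a = degPerSec * time * 0.0174532925f;
    float c = cosf(a), s = sinf(a);
    TcMatrix t = { { c, -s, 0.5f - 0.5f*c + 0.5f*s,
                     s,  c, 0.5f - 0.5f*s - 0.5f*c } };
    return t;
}

Concatenate the stage's tcmods in order with TcConcat and upload the six floats as a uniform; the vertex shader then just does the two dot products.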



Realistically, the way Quake 3 does shaders is a bad template for a material system. You shouldn't have to tell the engine exactly how to render a lightmapped wall, for example; it ruins your scalability. The best way I've heard it described is that, ideally, your material system should not define how to render a specific material, it should define what it looks like. Or at least, you shouldn't be defining both in the same spot. Scalable engines have multiple ways to render a simple wall, so all you should really need to do is give it a list of assets and parameters to render a "wall" (i.e. the albedo, bump, reflectivity, and glow textures), tell it that it's a "wall," and have it figure out the rest. Build complex materials by compositing those simple types together.

The system I'm using is a data-driven version of this and has three parts for a simple diffuse surface:

The default material template, which tries loading assets by name and defines how to import them, and references "Diffuse.mrp" as the rendering profile:
http://svn.icculus.org/teu/trunk/tertius/release/base/default.mpt?revision=177&view=markup

Diffuse.mrp, which is a rendering profile that defines how to render a simple diffuse-lit surface. This is the branch-off for world surfaces:
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/profiles/DiffuseWorldSurfaceARBFP.mrp?revision=152&view=markup

The shader and permutation projects:
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/cg/world_base_lightmap.cg?revision=185&view=markup
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/fp/p_world_base_lightmap.svl?revision=183&view=markup
http://svn.icculus.org/teu/trunk/tertius/release/base/materials/vp/v_world_base_lightmap.svl?revision=177&view=markup


There are simpler ways to do this, of course. Half-Life 2 just defines the surface type for every material (which does wonders for its load times!) and those surface types have various ways of rendering defined for different hardware levels and configurations.

OneEightHundred fucked around with this message at 09:04 on Aug 19, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
You can't even guarantee that you'd be able to single-pass it, because of specific combinations. For example, if you have an additive layer followed by an alpha-blend layer (these exist in the game!!), you CAN'T single-pass that, because whatever your result is can still only be drawn to the framebuffer with one blend function, and there's no such thing as a "blend shader" yet.


Even with just one, you'd have to permute the shit out of it so you get results that use the proper tcgen (on the vertex shader) and blendfunc (on the pixel shader).


My suggestion isn't to hand-code them, but rather, to generate them at runtime and compile them then. The alternative is to do all possible permutations in advance, which takes a LONG time, but will speed up load times a lot.

quote:

Really, even if .shaders describe how a material looks and how to render it, can't I just pick out the look part and toss the "how to render" part?
You can, sort of, but what you're doing is effectively analyzing the shader, seeing how much work you can combine into a single pass or shader, and then doing it. Even Quake 3 itself does this; the difference is that it does it using the fixed-function pipeline, which is designed to be instantly reprogrammable, whereas you're using pixel shaders, which need to be compiled in advance. In order for it to really be optimal, you're going to want to generate shaders at runtime or during load.

OneEightHundred fucked around with this message at 09:27 on Aug 19, 2008

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
A strictly multi-pass renderer is slow, since you're constantly trashing state and you require twice as many draw calls for the vast majority of surfaces. Quake 3 itself tries single-passing as much as it can.

quote:

Hah, I'm running a pretty similar system to this
Quake 3 has no concept of multitexture at all in the shader format, so if you have bumpmapping at all, you're already far ahead of it. :)

Just because it's using curly braces doesn't mean it works the same.


OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

shodanjr_gr posted:

How about a deferred approach?

Use multiple render targets to save all the stuff you need per fragment (normal map value, diffuse value, etc.), then do a final pass with constant cost in which you combine the data for each fragment. It works pretty well if you don't wanna do transparency...
That's not really how Quake 3 works, though; Quake 3 works by layering stuff onto the framebuffer exactly as specified. There are two ways to do a simple lightmapped surface that both produce the same result, for example. Deferred shading works by making everything a general case; in Quake 3 there is no general case other than the defaults. (Another problem, of course, is that Quake 3 assets have a LOT of transparency!)

This is kind of the problem: Quake 3's shader format was designed to function on a non-multitexturing Voodoo 1 and only contains features for that hardware, with zero native support for fallbacks, slow-compiling pixel/vertex shaders, or even multitexture.

About the best you can really do is try merging layers. Any shader with a base layer with no alpha test and no blendfunc (or a blendfunc of ONE, ZERO) can merge all following layers. Sequential additive layers can be merged. Sequential multiplicative (filter) layers can be merged.

Of course, this is MORE difficult with hardware shaders, because of things like tcgen mixing. The fixed-function pipeline can easily change the way a set of texture coordinates works and go with it; programmable shaders need to recompile to do that.

If you're not able to generate and compile shaders at load time, then you should make a tool to do it at authoring time.

OneEightHundred fucked around with this message at 18:25 on Aug 19, 2008
