|
Regarding the clip plane stuff: a good range of ATI cards have no hardware clip plane support, so using user clip planes will kick you back to the software rasterizer. gl_ClipVertex support is spotty as well.

For near-plane clipping, there's a technique called oblique depth projection which basically lets you set the near plane to an arbitrary plane instead of one perpendicular to the camera, giving you the same effect: http://www.terathon.com/code/oblique.php

There are some caveats with it: it breaks down when the plane gets too close to the camera, and I doubt it works properly if the portion of the plane within the view frustum isn't entirely in front of the near plane.

MasterSlowPoke posted:I'm writing a MD2 (Quake 2 model) importer for XNA right now. If you don't know, the MD2 format stores the texture coordinates with the triangle indices. This allows two triangles that share a vertex to not share a UV coordinate at that position. The problem I'm having is that I see no way to emulate that behavior in my VertexBuffer.

OneEightHundred fucked around with this message at 03:01 on May 21, 2008
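Going back to the oblique-projection link: the matrix modification Lengyel describes is short enough to sketch. This is a language-neutral Python port (column-major GL matrix layout assumed, function names mine), not his exact code:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    """Standard OpenGL perspective matrix as a column-major flat list of 16."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    m = [0.0] * 16
    m[0] = f / aspect
    m[5] = f
    m[10] = (far + near) / (near - far)
    m[14] = 2.0 * far * near / (near - far)
    m[11] = -1.0
    return m

def sgn(x):
    return (x > 0) - (x < 0)

def make_oblique(m, plane):
    """Replace the near plane of projection matrix m with an arbitrary
    camera-space plane (a, b, c, d). The camera must be on the plane's
    negative side. Follows Lengyel's oblique depth projection."""
    q = [(sgn(plane[0]) + m[8]) / m[0],
         (sgn(plane[1]) + m[9]) / m[5],
         -1.0,
         (1.0 + m[10]) / m[14]]
    dot = sum(plane[i] * q[i] for i in range(4))
    c = [p * (2.0 / dot) for p in plane]
    # Overwrite the matrix's third row (column-major indices 2, 6, 10, 14)
    m[2], m[6], m[10], m[14] = c[0], c[1], c[2] + 1.0, c[3]

def transform(m, p):
    """Column-major 4x4 matrix times a column vector."""
    return [m[0]*p[0] + m[4]*p[1] + m[8]*p[2] + m[12]*p[3],
            m[1]*p[0] + m[5]*p[1] + m[9]*p[2] + m[13]*p[3],
            m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14]*p[3],
            m[3]*p[0] + m[7]*p[1] + m[11]*p[2] + m[15]*p[3]]
```

The payoff is that any camera-space point on the supplied plane now lands exactly on the near plane (clip-space z/w = -1), so everything behind the plane gets clipped for free.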
# ¿ May 21, 2008 02:46 |
|
|
|
MasterSlowPoke posted:The problem is that would cause the normals to be calculated incorrectly, as the normals of all the triangles that share that split vertex wouldn't be averaged. I know it's a small and probably undetectable error but might as well do it right the first time.

EDIT -- You don't really even need to do that with MD2, because it stores the normals with the point. Points are stored as 3 bytes for the vertex position and a 1-byte lookup into a precomputed normals table. http://tfc.duke.free.fr/coding/src/anorms.h

Out of curiosity, why are you using MD2? MD3 is a higher-precision format with the same capabilities and then some.

OneEightHundred fucked around with this message at 06:31 on May 21, 2008
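To make the packed-vertex layout concrete, here's a sketch of the decode (Python for brevity; the table values should be the first three entries of the linked anorms.h, and the function name is mine):

```python
import struct

# A tiny slice of the precomputed normal table; the real anorms.h has 162
# entries, so a full decoder would paste in the whole list.
ANORMS = [(-0.525731, 0.000000, 0.850651),
          (-0.442863, 0.238856, 0.864188),
          (-0.295242, 0.000000, 0.955423)]

def decode_md2_vertex(raw, scale, translate):
    """Decode one 4-byte MD2 vertex: three compressed position bytes
    (rescaled by the per-frame scale/translate) plus a 1-byte index into
    the shared normal table."""
    x, y, z, n = struct.unpack('<4B', raw)
    pos = (x * scale[0] + translate[0],
           y * scale[1] + translate[1],
           z * scale[2] + translate[2])
    return pos, ANORMS[n]
```

Because the normal index is baked into the vertex, splitting a vertex to give each triangle its own UV doesn't change the lighting: both copies point at the same table entry.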
# ¿ May 21, 2008 06:20 |
|
I'd recommend staying away from Half-Life MDL. It's an awful clusterfuck of a format. If you want a skeletal format that's not too hard to write a parser for, try MD5, Unreal PSK/PSA, or Cal3D. Half-Life and Source SMD files are easy to parse, but they're very far from what you want the internal representation to be so I'd recommend writing something to compile it to your own format if you want to use it.
|
# ¿ May 21, 2008 08:36 |
|
The problem with HL MDL is that it's a typical Valve format, one that fails to separate the implementation details from the file format itself. It's not just an old format, it's a BAD format.

I wouldn't describe COLLADA as easy to parse either. Even with FCollada, which does a very good job of parsing it, practically everything has an additional layer of complexity you have to work through, and everything is indexed separately, so it still takes a good deal of work to get the data into a usable representation. I'm writing a COLLADA importer for the project I'm working on right now, and it's definitely the most difficult format to work with of the ones I mentioned in my last post.

OneEightHundred fucked around with this message at 23:05 on May 21, 2008
# ¿ May 21, 2008 23:03 |
|
Adhemar posted:Still, I think it's very useful to learn because it's growing as a standard in the industry. quote:A good exercise might be to convert some other model format to COLLADA (using FCollada), or the other way around. TSDK posted:The single biggest selling point for COLLADA is that there are open source exporters for both Maya and Max. Authoring plug-ins for multiple versions of multiple modeling packages is a severe pain in the rear end compared to supporting one format. Being able to export one format and have the whole middleware market support it is a great selling point for the modeling software developers.
|
# ¿ May 22, 2008 20:39 |
|
TSDK posted:Being closed source is a major disadvantage when putting toolchains together. I've yet to see one 'standard' file format that didn't need either extending or bug-fixing in one way or another to meet all of the requirements for a project.
|
# ¿ May 24, 2008 02:57 |
|
I've always used timeGetTime, and whatever caveats that entails. You need to use timeBeginPeriod to raise its precision though.
|
# ¿ May 26, 2008 05:38 |
|
Jo posted:If I use an early out algorithm for detection, can I expect a significant performance improvement in a map with 60-70 shadow casters? quote:My other alternative is to build a list of shadow casters and discard all objects outside the light quote:Second question:
|
# ¿ Jun 2, 2008 08:00 |
|
Crash Bandicoot posted:Unfortunately once the Texture is instantiated I am not sure how to pass it the actual image data.

EDIT -- In MD3D they're called LockRectangle and UnlockRectangle respectively. Apparently the MSDN pages for them are broken and the only documentation for them is in Japanese, so good luck. http://msdn.microsoft.com/en-us/library/bb152978(VS.85).aspx

OneEightHundred fucked around with this message at 23:20 on Jun 8, 2008
# ¿ Jun 8, 2008 23:15 |
|
krysmopompas posted:ARB_texture_non_power_of_two Non-power-of-two textures are glacially slow on a lot of hardware.
|
# ¿ Jun 11, 2008 04:53 |
|
Thug Bonnet posted:Can someone point me toward a good, no-bullshit discussion of quaternions that includes practical code examples? I always end up finding either high-level descriptions of the math involved or API-specific implementations. I'd love to see a bridge between the two somewhere. I haven't found the gamedev articles to be particularly helpful (they seem to suffer from the same problems).

As far as the basic concept, which I found makes them a lot easier to digest: any rotational transformation can be represented as a rotation around an axis in space. Quaternions are exactly that. The advantage is that they're valid as several other types of values as well (i.e. as points on a 4D unit sphere and as complex numbers), so there are a lot of valid ways to manipulate them.

The X, Y, and Z components are the rotation axis multiplied by the sine of HALF the rotation angle, and the W component is the cosine of half the rotation angle. The half angle falls out of how a quaternion is applied: it gets multiplied against the vector twice, once directly and once conjugated, so each multiply contributes half the rotation.

Getting the opposite transformation can be done by negating the X, Y, and Z components (which reverses the rotation direction), or by negating the W component (which flips the rotation axis). For why, see the previous paragraph, and remember that negating a sine result is the same as negating the angle, and negating a cosine result is the same as a 180-degree offset of the angle. Since these operations cancel each other out, negating all 4 components results in the same transformation, which is useful if you're trying to manipulate the sign of the W component.

As a point on the unit sphere (X^2 + Y^2 + Z^2 + W^2 = 1), it's easy to fix error accumulation by just renormalizing, quaternions are easy to store compactly because you can recalculate the W component (and force its sign), and you can interpolate them smoothly with spherical linear interpolation (slerp). As a complex number, multiplying two quaternions produces the same result as combining their rotations, the same way a matrix concatenation would.
If you need some code, you can rip off mine: http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_rotation.hpp?view=log http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_math_quat.hpp?view=log http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_math_quat.cpp?view=log http://svn.icculus.org/teu/trunk/tertius/src/tdp/tdp_math_matrix.cpp?view=log (Has the matrix-to-quaternion code) If you want some no-bullshit explanation of the math involved and formulae: http://www.euclideanspace.com/maths/geometry/rotations/conversions/index.htm OneEightHundred fucked around with this message at 10:19 on Jun 13, 2008 |
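If the links ever rot, here's a minimal sketch of the operations described above (Python, names mine). Note the half angles: a quaternion built from angle/2 rotates vectors by the full angle when applied as q*v*q^-1.

```python
import math

def quat_from_axis_angle(axis, angle):
    """x, y, z are the unit axis scaled by sin(angle/2); w is cos(angle/2)."""
    s = math.sin(angle / 2.0)
    return (axis[0] * s, axis[1] * s, axis[2] * s, math.cos(angle / 2.0))

def quat_mul(a, b):
    """Hamilton product. quat_mul(a, b) rotates by b first, then a,
    matching matrix concatenation order."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def quat_conjugate(q):
    """Opposite rotation: negate x, y, z (negating w instead is equivalent,
    since -q represents the same rotation as q)."""
    return (-q[0], -q[1], -q[2], q[3])

def quat_normalize(q):
    """Fix accumulated error by snapping back onto the unit sphere."""
    m = math.sqrt(sum(c * c for c in q))
    return tuple(c / m for c in q)

def quat_recover_w(x, y, z, positive=True):
    """Rebuild w from compact xyz-only storage, forcing the chosen sign."""
    w = math.sqrt(max(0.0, 1.0 - (x*x + y*y + z*z)))
    return (x, y, z, w if positive else -w)
```

Composing a 90-degree turn with itself gives the 180-degree quaternion, and multiplying by the conjugate gives the identity, which is a decent sanity check for any implementation.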
# ¿ Jun 13, 2008 09:38 |
|
Intel Penguin posted:It's tampering I'm looking to prevent. Though I doubt anyone will have the motivation to crack my game.

Delayed-response poisoning strikes me as one of the best solutions because it makes the trial-and-error process much more time-consuming: in single-player games, remove something critical that makes the game unbeatable. In multi-player games, delay bans for a long time after the cheats are detected. In either, act buggy.

Personally, for multiplayer games, I really think the solution is a better player moderation system. I've seen too many games where, for example, kick votes invariably fail to go through because they require a majority vote and the people who aren't getting griefed or steamrolled by a cheater can't be bothered to vote. There has to be a better way.

OneEightHundred fucked around with this message at 11:12 on Jun 21, 2008
# ¿ Jun 21, 2008 11:09 |
|
I'm not saying that there needs to be a better way to do a kick vote, if anything I think kick votes have repeatedly proven their uselessness. Compulsory voting would be far too easily abusable for griefing. There are other forms of player moderation, like karma-type systems, but the only one that doesn't suck horribly is "have a shitload of admins." I'm trying to come up with a better way of letting the players clear the shitheads off servers, because if something like that exists, problematic cheaters would not get far, and it would have benefits in getting rid of other problems as well, i.e. griefers. OneEightHundred fucked around with this message at 22:15 on Jun 22, 2008 |
# ¿ Jun 22, 2008 22:04 |
|
Plaintext saves aren't a great idea for numerous reasons. In particular, floating point numbers silently lose precision on a text round-trip unless you print enough digits (17 significant digits for a double) or use hex float formatting.

Regardless, hacking savegames is probably the last thing you should worry about. It's more likely someone will just write a trainer, in which case anything you do to the savegame data is moot anyway. I really have to ask: what do you expect to get out of all this security? Is the title you're releasing even big enough for people to waste their time hacking it?
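To make the precision point concrete (Python shown; C's %f has the same problem, with %.17g or %a as the fix there):

```python
x = 0.1 + 0.2  # 0.30000000000000004: not representable as a short decimal

# Six decimal places looks like plenty, but the round-trip is lossy:
lossy = float("%.6f" % x)

# 17 significant digits (or hex floats) round-trip a double exactly:
exact = float("%.17g" % x)
hexed = float.fromhex(x.hex())
```

Over thousands of save/load cycles the truncated version drifts; the full-precision forms reload bit-for-bit.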
|
# ¿ Jun 24, 2008 09:43 |
|
Gary the Llama posted:Should the Sprite class handle the rendering or the Player class? You shouldn't have to rewrite code for things that render the same way, but that's not to say you can't have players and any other sort of sprite just call one common sprite draw routine and still be treated as renderables. OneEightHundred fucked around with this message at 21:00 on Jun 30, 2008 |
# ¿ Jun 30, 2008 20:58 |
|
guenter posted:What's the benefit of the Direct3D effect framework? I'm using it but only trivially. I understand one of the benefits is being able to specify multiple techniques per effect but that's about all I understand in terms of differences.

quote:I also get the impression that an effect (an .fx file) is not at all the same thing as what people generally refer to as effects (ie. bump mapping or something).
|
# ¿ Jul 1, 2008 23:17 |
|
You could try Ogre3D and Irrlicht.
|
# ¿ Jul 7, 2008 16:38 |
|
It's not really that hard to make a 2D game in a 3D engine: just keep the camera pointed down one axis and make sure you never give anything a velocity or position that'll move it off that plane, or put a pair of invisible walls up. Use camera-aligned sprites for graphics (which practically every 3D engine supports). If anything, it gives you some artistic flexibility if you ever want to make elements of it 3D even if the gameplay is strictly 2D.
|
# ¿ Jul 7, 2008 20:56 |
|
Entheogen posted:is there any way to use glMultiDrawElements with VBOs? Specifically I would like to store my index and count arrays on GPU memory, but still use this call. Is this even possible? OneEightHundred fucked around with this message at 07:21 on Jul 8, 2008 |
# ¿ Jul 8, 2008 07:19 |
|
Namgsemterces posted:I had an idea to use HLSL and make a certain color a special transparent color.
|
# ¿ Jul 16, 2008 01:25 |
|
Namgsemterces posted:Thanks, but you didn't understand my question. If I try to cover up an area inside my clipping rectangle but outside of the area I want rendered (note: I'm talking about non-rectangular shapes, like a rounded rectangle) with something of alpha 0, the part I want covered up would still show up because the alpha 0 would be meaningless. I need to overwrite that rounded area with something that is opaque and then later change that opaque color to an alpha 0. Basically, I'm talking about a mask. Of course, you could also use the stencil buffer, which is designed for this sort of thing.
|
# ¿ Jul 16, 2008 16:36 |
|
Centipeed posted:Also, why is OpenGL for Windows not available for download anywhere? There are a few links here or there, but they all seem to be third party links as opposed to links from whoever actually produces OpenGL for Windows.

The extension headers live at http://www.opengl.org/registry/ and if you really need the main headers (i.e. GL.h), download Mesa; its headers are compatible. If you want the import library, I think Mesa includes that too, and if not, just use LoadLibrary, GetProcAddress, and wglGetProcAddress, which you should be doing anyway.

If you mean the runtime, there is no standard one: implementations of the spec are produced and distributed by the hardware vendors. In other words, it comes with your video card drivers.

OneEightHundred fucked around with this message at 21:00 on Jul 22, 2008
# ¿ Jul 22, 2008 20:55 |
|
MasterSlowPoke posted:I'm pretty much done with my 3d model importer, so now I'm off to the next step: the levels. Can anyone recommend a format and/or editor to use? Really anything could help, it's probably far more complex than I think.

Not to say it's good: Radiant is a train wreck of code, I try to avoid dealing with it as much as I can, and adding features to it is an exercise in frustration. That's not to say you can't make it dance to your tune if you try hard enough. The alternatives are not good. There's a reason there are hardly any good level editors out there: writing one is extremely time-consuming, especially if it does convex solid editing rather than what most 3D engines do now, which is import a model from Maya and just use the editor to place lights, portals, and entities.

The only alternatives I'd even consider:
- Quark, which has a generally confusing interface and is written in Delphi and Python. Last time I tried it, it was also buggy, with random crashes and geometry corruption.
- Getic, which looks promising and may be going open source soon.
- BSP, which has a steep learning curve and doesn't have Quake 3 support, but does have a sane code base.

Don't bother with GtkRadiant: it's bloated as hell, frustrating to even get to compile, and even more frustrating to modify due to severe overengineering. Don't bother with q3map2 either; it's also bloated, and it's slow.
|
# ¿ Jul 28, 2008 01:21 |
|
MasterSlowPoke posted:I'm assuming you like Getic the most? You don't have anything good to say about Quark and BSP looks like it's far too ancient (it not being updated in almost a year is a bad sign too). Have you tried DarkRadiant? I've heard some good stuff about it. Quark may have resolved the major bugs since TGE uses it for a lot of stuff. It can't hurt to try it, but it wouldn't be my first pick. Vanilla Q3Radiant is probably the best starting point if only because it works and it compiles without a problem. The options for level editors right now are not very good, but Q3Radiant gives you convex solid, parametric curve, and polygon mesh primitives which is enough for most purposes. The only thing it's lacking are portals, and you could hack those in with brushes if you really wanted to, and the entity editor is really bare-bones. On the tools side, q3map IS very good, particularly the severely underutilized -vlight option, which basically lets you increase your lightmap resolution for free. OneEightHundred fucked around with this message at 17:16 on Jul 28, 2008 |
# ¿ Jul 28, 2008 17:04 |
|
more falafel please posted:This doesn't exist, and likely never will. Warcraft 3 modding does it as well, but that's a bit of a stretch. Honestly, Warcraft 3 may be one of the best choices for a tower defense type game simply because it comes with a massive audience ready to go. OneEightHundred fucked around with this message at 20:17 on Jul 28, 2008 |
# ¿ Jul 28, 2008 20:13 |
|
Suburban Krunk posted:I want to learn a programming language and learn about game development at the same time. I know a small amount of Java, and feel I have a basic understanding about some basic programming fundamentals. On the plus side, I have a strong background in math (Up to Vector Calculus). Can anyone recommend me a book or site that would teach me a programming language and game development at the same time? From reading around a little bit, it seems as if C# is the language to learn, and something about XNA (Not sure what this is), is this correct?

I'm still going to recommend modding for first-timers, because it means there are design decisions, content, and mistakes you don't have to make in order to hit the ground running. Modding something like UT2k4 or Source comes with the advantage of a massive number of tutorials.

SheriffHippo posted:Also, what you learn about programming and game programming in ActionScript will be transferable to C# and even C++.
|
# ¿ Aug 6, 2008 18:10 |
|
shodanjr_gr posted:I want to be able to use the depth buffer values to extrapolate the eye-space coordinates of each texel of the depth buffer.

I'm not good enough with matrix math to give you the full answer, but the HPOS W value is easy enough: for a standard projection matrix, clip-space W is just the eye-space Z negated. The catch with the depth value is that the buffer stores Z/W after projection, which is nonlinear, so "denormalizing" it back to an eye-space distance between the near and far planes means inverting that mapping rather than a straight lerp.

OneEightHundred fucked around with this message at 23:23 on Aug 7, 2008
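For the denormalizing step, here's what the inversion looks like for a standard GL-style projection (a sketch, assuming the default 0-1 depth range):

```python
def linear_eye_depth(d, near, far):
    """Invert the depth-buffer value d (0 at the near plane, 1 at the far
    plane, nonlinear in between) back to a positive eye-space distance."""
    z_ndc = 2.0 * d - 1.0
    return 2.0 * far * near / (far + near - z_ndc * (far - near))
```

The eye-space position is then this distance times the normalized view-space ray through the texel, and clip-space W for that fragment is exactly this value (eye-space Z negated).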
# ¿ Aug 7, 2008 23:18 |
|
schnarf posted:I'm working on generating a PVS set for a portal engine. The calculations I'm doing are for load-time, so each sector will have a set of potentially visible sectors that can be used during run-time. My algorithm is pretty standard I think: For the particular sector we're generating a PVS for, clear out the clip plane, then recurse for each portal in that sector. In the next step we form a clipping volume made of a bunch of planes, which is created from the portal we came from and every portal in that new sector. Then once we have a clip volume, we clip that to each successive portal so the clip volume gets smaller and smaller as we recurse more.

I can tell you how Quake's vis tool solves sector-to-sector, which is by first solving portal-to-portal visibility and then just making a visibility bitfield: a sector is considered visible from another sector if any portal in one can see any portal in the other.

Portal-to-portal I'm a bit hazy on, but the gist is to find a plane such that both portals are on the plane, and any occluding portals are either behind one of the portals, or are completely on either side of that plane. The candidate planes come from essentially every edge-point combination in the list of offending portals and the portals themselves. Needless to say, this is SLOOOOW. If you can find such a plane, then the two portals can see each other.

(Note that a "portal" in this sense isn't just from open areas into open areas, but also from open areas into solids, i.e. walls, which are considered occluding portals)

q3map uses three methods: the old portal system, a new "passage" system that I know nothing about, and a third that combines both. You can check out visflow.c in the Quake 3 source code for details, but it really doesn't explain the implementation at all, so it's extremely hard to read.

OneEightHundred fucked around with this message at 06:57 on Aug 12, 2008
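The sector-to-sector step is trivial once portal-to-portal visibility exists; a sketch with a hypothetical data layout (the real vis tool packs these sets into per-sector bit vectors):

```python
def sector_pvs(portal_sector, portal_vis, num_sectors):
    """portal_sector[i] is the sector portal i belongs to; portal_vis[i]
    is the set of portals visible from portal i. A sector sees another
    sector if any of its portals can see any portal in the other."""
    pvs = [{s} for s in range(num_sectors)]  # a sector always sees itself
    for i, visible in enumerate(portal_vis):
        for j in visible:
            pvs[portal_sector[i]].add(portal_sector[j])
    return pvs
```

All the hard work lives in computing portal_vis; collapsing it to sectors is just this union.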
# ¿ Aug 12, 2008 06:37 |
|
Okay, the way vis works is a bit different than I thought (and I'm still trying to figure it out...), but here's my current understanding:

Portals build a list of other visible portals by starting in one sector and trying to see through a chain of other portals to other sectors. The separating plane is used to determine how much of the portal is visible, if any. Say you had a chain of sectors/portals as follows:

A -> pAB -> B -> pBC -> C -> pCD -> D -> pDE -> E

... and you're trying to determine what portals are visible from pAB. pAB can always see pBC because they share a sector. To determine how much of pCD is visible, it tries creating separator planes using edges from pAB and points from pBC. A separator plane is valid if all points on pAB are on one side and all points of pBC are on the other. pCD is then clipped using that plane, so that everything remaining is on the same side as pBC. If there is nothing remaining, then that portal can't be seen.

To determine how much of pDE is visible, the same process is done, except the separating planes are calculated using whatever's left of pCD instead of pBC.

Hope that makes sense.

OneEightHundred fucked around with this message at 18:46 on Aug 12, 2008
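The "clipped using that plane" step is an ordinary convex polygon vs. plane clip. A sketch (plane stored as (a, b, c, d), keeping the points where a*x + b*y + c*z + d >= 0):

```python
def clip_polygon(poly, plane):
    """Sutherland-Hodgman-style clip of a convex polygon (list of (x, y, z)
    tuples) against a plane, keeping the positive side. An empty result is
    the 'portal can't be seen' case in the recursion above."""
    out = []
    for i, p in enumerate(poly):
        q = poly[(i + 1) % len(poly)]
        dp = p[0]*plane[0] + p[1]*plane[1] + p[2]*plane[2] + plane[3]
        dq = q[0]*plane[0] + q[1]*plane[1] + q[2]*plane[2] + plane[3]
        if dp >= 0.0:
            out.append(p)  # keep vertices on the positive side
        if (dp >= 0.0) != (dq >= 0.0):
            # Edge crosses the plane: emit the intersection point
            t = dp / (dp - dq)  # signs differ, so dp != dq
            out.append(tuple(p[k] + t * (q[k] - p[k]) for k in range(3)))
    return out
```

Each recursion step just feeds the shrinking remainder of the far portal back through this with the next separator plane.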
# ¿ Aug 12, 2008 15:24 |
|
Murodese posted:Is this the right way to do it and I'm missing something in the player's transformation step, or am I retarded for doing it like this? code:
OneEightHundred fucked around with this message at 20:11 on Aug 12, 2008 |
# ¿ Aug 12, 2008 20:05 |
|
shodanjr_gr posted:In other news, the OpenGL 3.0 spec is out, and it looks crappy... Might as well just call it OpenGL 2.2.
|
# ¿ Aug 12, 2008 22:25 |
|
shodanjr_gr posted:Weren't they planning at some point to switch to a more object oriented approach, instead of this whole freaking "bind identifiers to stuff then switch states using those identifiers" mentality?

http://www.opengl.org/registry/specs/EXT/direct_state_access.txt ... which is core in OpenGL 3.

As far as the evolutionary fixes, what they did was introduce a deprecation model. A context can be made as "full" or "forward compatible", with deprecated features not being available in "forward compatible" contexts. A lot of the legacy cruft is gone in "forward compatible", but the immutability and asynchronous object stuff isn't there yet.

OneEightHundred fucked around with this message at 23:50 on Aug 12, 2008
# ¿ Aug 12, 2008 23:41 |
|
shodanjr_gr posted:I'm hoping for something like:

It's not like there are really that many object types in OpenGL either, and you'd probably want to write a wrapper anyway if you ever wanted to port to D3D. Aside from that, it's not really much more work to do this:

code:
code:
code:
What's ironic is that they STILL don't let you update textures directly, but they did find a way to work in stuff like deprecating passing anything other than zero to the border parameter of glTexImage2D. Yeah, good work Khronos, way to design that technology. OneEightHundred fucked around with this message at 02:25 on Aug 13, 2008 |
# ¿ Aug 13, 2008 02:20 |
|
Shazzner posted:welp I guess long live direct3d Direct3D 11 looks like it's set to break new ground in underwhelming, so there's room to maneuver. Regardless, D3D is going to stay on top in Windows development just because of inertia. OpenGL had time to capitalize on D3D's overpriced draw calls long before D3D10 fixed it, they didn't do it then, they're not going to beat D3D to the punch tomorrow, so it's pretty much doomed to second place forever on Windows at this point. OneEightHundred fucked around with this message at 05:16 on Aug 13, 2008 |
# ¿ Aug 13, 2008 05:05 |
|
captain_g posted:Do directx 10 and 11 still continue the COM-style?

11's features are basically:
- Multithreaded rendering. Who gives a poo poo; sending things to D3D is not expensive any more, you don't need to parallelize your D3D calls.
- Tessellation. Because developers were gladly willing to surrender control for the sake of renderer speed in 2001, I'm sure they'll do it this time around; it's not like relief mapping actually works or anything.
- Compute shaders, which OpenGL has mechanisms for already.
- Order-independent transparency. Yay, state-trashing.

OneEightHundred fucked around with this message at 07:34 on Aug 13, 2008
# ¿ Aug 13, 2008 07:28 |
|
TSDK posted:Submitting batches is actually pretty drat expensive CPU wise (i.e. framerate limiting). Being able to thread off calls is going to be handy, especially as more engines start threading more stuff off in order to take advantage of multicore systems.

quote:It's a lot more likely that they're using an accumulation buffer with a modest number of samples - something in the area of 8-16 fragments per pixel.

quote:What this really means is that your content management thread won't need to be so chatty with your rendering thread

Edit: StickGuy posted:It will also be helpful for rendering intersecting transparent geometry without having to resort to tessellation to get the right result.

OneEightHundred fucked around with this message at 16:53 on Aug 13, 2008
# ¿ Aug 13, 2008 16:45 |
|
MasterSlowPoke posted:I'm trying to implement Quake 3's .shader scripts into a HLSL shader and I'm not quite sure how I should do it.

- If a stage is using rgbgen or alphagen with a non-static value, then you'll need to allocate a texcoord to the color. If it's static, you can bake it into the pixel shader.
- If a stage is using a tcgen, then you need to allocate a texcoord to passing the generated texcoords.
- Doing tcmod transforms in the pixel shader may let you save texcoords, but it's slower and should be a last resort. All tcmods except turb can be combined into a single 2x3 matrix, so it's best to just do that in the front-end and pass it as a uniform. tcmod turb fucks everything up and you should probably just ignore it.

Realistically, the way Quake 3 does shaders is a bad template for a material system. You shouldn't have to tell the engine exactly how to render a lightmapped wall, for example; it ruins your scalability. The best way I've heard it described is that ideally, your material system should not define how to render a specific material, it should define what it looks like. Or at least, you shouldn't be defining both in the same spot. Scalable engines have multiple ways to render a simple wall, so all you should really need to do is give the engine a list of assets and parameters (i.e. the albedo, bump, reflectivity, and glow textures), tell it that it's a "wall," and have it figure out the rest. Build complex materials by compositing those simple types together.

The system I'm using is a data-driven version of this and has three parts for a simple diffuse surface:

The default material template, which tries loading assets by name, defines how to import them, and references "Diffuse.mrp" as the rendering profile: http://svn.icculus.org/teu/trunk/tertius/release/base/default.mpt?revision=177&view=markup

Diffuse.mrp, which is a rendering profile that defines how to render a simple diffuse-lit surface.
This is the branch-off for world surfaces: http://svn.icculus.org/teu/trunk/tertius/release/base/materials/profiles/DiffuseWorldSurfaceARBFP.mrp?revision=152&view=markup The shader and permutation projects: http://svn.icculus.org/teu/trunk/tertius/release/base/materials/cg/world_base_lightmap.cg?revision=185&view=markup http://svn.icculus.org/teu/trunk/tertius/release/base/materials/fp/p_world_base_lightmap.svl?revision=183&view=markup http://svn.icculus.org/teu/trunk/tertius/release/base/materials/vp/v_world_base_lightmap.svl?revision=177&view=markup There are simpler ways to do this, of course. Half-Life 2 just defines the surface type for every material (which does wonders for its load times!) and those surface types have various ways of rendering defined for different hardware levels and configurations. OneEightHundred fucked around with this message at 09:04 on Aug 19, 2008 |
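As an aside on the tcmod point above: collapsing a stage's tcmods into one 2x3 affine matrix looks roughly like this (a sketch, names mine; turb is excluded because it's a per-vertex sinusoid, not an affine transform):

```python
import math

def tc_scale(sx, sy):
    return [[sx, 0.0, 0.0], [0.0, sy, 0.0]]

def tc_scroll(u, v):
    # In Quake 3 these offsets would be rate * shader time
    return [[1.0, 0.0, u], [0.0, 1.0, v]]

def tc_rotate(degrees):
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    return [[c, -s, 0.0], [s, c, 0.0]]

def tc_concat(a, b):
    """Combine two 2x3 affine transforms: apply a first, then b.
    Treats each as a 3x3 matrix whose implicit third row is (0, 0, 1)."""
    return [[b[r][0]*a[0][c] + b[r][1]*a[1][c] + (b[r][2] if c == 2 else 0.0)
             for c in range(3)] for r in range(2)]

def tc_apply(m, s, t):
    """What the vertex shader does with the uniform: one multiply-add."""
    return (m[0][0]*s + m[0][1]*t + m[0][2],
            m[1][0]*s + m[1][1]*t + m[1][2])
```

However many tcmods a stage stacks up, the front-end folds them into one matrix per frame and the shader cost stays constant.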
# ¿ Aug 19, 2008 08:38 |
|
You can't even guarantee that you'd be able to single-pass it because of specific combinations. For example, suppose you had an additive layer followed by an alpha blend layer (these exist in the game!!), you CAN'T single-pass that because whatever your result is can still only be drawn to the framebuffer with one blend function and there's no such thing as a "blend shader" yet. Even with just one, you'd have to permute the poo poo out of it so you get results that use the proper tcgen (on the vertex shader) and blendfunc (on the pixel shader). My suggestion isn't to hand-code them, but rather, to generate them at runtime and compile them then. The alternative is to do all possible permutations in advance, which takes a LONG time, but will speed up load times a lot. quote:Really, even if .shaders describe how a material looks and how to render it, can't I just pick out the look part and toss the "how to render" part? OneEightHundred fucked around with this message at 09:27 on Aug 19, 2008 |
# ¿ Aug 19, 2008 09:21 |
|
A strictly multi-pass renderer is slow, since you're constantly trashing state and you require twice as many draw calls for the vast majority of surfaces. Quake 3 itself tries single-passing as much as it can.

quote:Hah, I'm running a pretty similar system to this

Just because it's using curly braces doesn't mean it works the same.
|
# ¿ Aug 19, 2008 16:12 |
|
|
|
shodanjr_gr posted:How about a deferred approach? This is kind of the problem, Quake 3's shader format was designed to function on a non-multitexturing Voodoo 1 and only contains features for that hardware, with zero native support for fallbacks, slow-compiling pixel/vertex shaders, or even multitexture. About the best you can really do is try merging layers. Any shader with a base layer with no alpha test and no blendfunc (or a blendfunc of ONE, ZERO) can merge all following layers. Sequential additive layers can be merged. Sequential multiplicative (filter) layers can be merged. Of course, this is MORE difficult with hardware shaders though, because of things like tcgen mixing. The fixed-function pipeline can easily change the way a set of texture coordinates works and go with it, programmable shaders need to recompile to do that. If you're not able to generate and compile shaders at load time, then you should make a tool to do it at authoring time. OneEightHundred fucked around with this message at 18:25 on Aug 19, 2008 |
# ¿ Aug 19, 2008 17:58 |