OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Woz My Neg rear end posted:

Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers.
I believe VBOs are mandatory for all geometry in the forward-compatible contexts. D3D switched to buffer-only ages ago, at least circa D3D8.

If you have a frequently-updated buffer, create it as DYNAMIC. Use discards (pass NULL to glBufferData), and if you're doing CPU calculations, use SSE intrinsics and write to the buffer via glMapBuffer and _mm_stream_si128. Discards alone probably make them better than vertex arrays.

e: Oops you're right, mentioned the wrong call.

OneEightHundred fucked around with this message at 21:45 on Jan 11, 2012


Spite
Jul 27, 2001

Small chance of that...

OneEightHundred posted:

I believe VBOs are mandatory for all geometry in the forward-compatible contexts. D3D switched to buffer-only ages ago, at least circa D3D8.

If you have a frequently-updated buffer, create it as DYNAMIC. Use discards (pass NULL to glBufferSubData), and if you're doing CPU calculations then use SSE intrinsics and write to the buffer by using MapBuffer and _mm_stream_si128. Discards alone probably make them better than vertex arrays.

Yes, use VBOs for sure. VAR (vertex array range) is probably super-stale code in most drivers these days.

Note: calling glBufferSubData with NULL doesn't discard the buffer; you need to call glBufferData with NULL to get that orphaning behavior. glBufferSubData is specified to update the buffer's data in place, not replace its storage. You can also map with GL_MAP_UNSYNCHRONIZED_BIT (via glMapBufferRange) to get nonblocking behavior.
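
To illustrate, a minimal sketch of the discard-then-write pattern being described, assuming GL 1.5-era entry points; the function name is made up, byteCount must be a multiple of 16, and verts must be 16-byte aligned:

code:
#include <emmintrin.h>  /* SSE2: _mm_stream_si128 */

static void upload_dynamic(GLuint vbo, const void *verts, size_t byteCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Orphan: same size, NULL data. If the old storage is still being
       DMA'd from, the driver hands back a fresh block instead of stalling. */
    glBufferData(GL_ARRAY_BUFFER, byteCount, NULL, GL_DYNAMIC_DRAW);

    __m128i *dst = (__m128i *)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        /* Mapped VBO memory is typically write-combined: stream to it,
           never read from it. */
        const __m128i *src = (const __m128i *)verts;
        for (size_t i = 0; i < byteCount / 16; ++i)
            _mm_stream_si128(&dst[i], _mm_load_si128(&src[i]));
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}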

ShinAli
May 2, 2003

The Kid better watch his step.

Woz My Neg rear end posted:

Formally don't-write-in-new-code deprecated, or just out of favor? I thought they were still better for frequently updated buffers.

I would think they'd be about the same performance-wise, since vertex arrays get sent over to the GPU's memory anyway.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
Arrays are slower because, if you don't use compiled vertex arrays, DMA doesn't start until you issue a draw call, which then blocks until the DMA is finished; and if you do use CVAs, the unlock blocks until the DMA is finished.

Static VBOs only DMA once ever, and dynamic VBOs with discard don't block on DMA because the driver will just give you a new memory region if the old one is in use.

ShinAli
May 2, 2003

The Kid better watch his step.
Where do you get low-level info like that? Just from experience?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

ShinAli posted:

Where do you get low-level info like that? Just from experience?
D3D is more upfront about the specific advantages, but you can usually find the gist of why things work a particular way in the OpenGL extension specs.

Some of it is inferable. E.g. the entire point of discard is to ensure that the driver can pull data from a memory buffer that isn't being modified; that can't be done with arrays in system memory unless the array is locked, which means the unlock will block if the driver is still using it (just like the draw call ALWAYS blocks if you don't use compiled arrays or VBOs).

A LOT of extensions make more sense if you think of the GPU as a high-latency device, where it's generally preferable to do things that don't care when the card gets around to starting OR finishing the operation.

OneEightHundred fucked around with this message at 19:09 on Jan 16, 2012

Illusive Fuck Man
Jul 5, 2004
RIP John McCain feel better xoxo 💋 🙏
Taco Defender
It's been like three years since I last did any 3D programming, and back then I didn't know what the fuck I was doing either. I started using OpenGL again a few days ago and everything is totally different. Is this basically the gist of how to draw an object now?

-bind the vertex, index, and texture buffers (glBindBuffer, glBindTexture)
-use the appropriate shader program (glUseProgram)
-send crap to the shader's uniform variables, like a vector for translation and the texture units for the sampler(s) to use (glProgramUniform*)
-glDrawWhatever(), doing all the matrix operations in the shader

I have this working, but is it efficient? Am I doing anything retarded here? It's so hard to find tutorials that aren't full of deprecated stuff.
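
A minimal sketch of that sequence, assuming a GL 3+ core profile where program, vao, tex, and the uniform locations (all placeholder names) were created at load time:

code:
/* Per-object draw, following the steps listed above. */
static void draw_object(GLuint program, GLuint vao, GLuint tex,
                        GLint mvpLoc, GLint samplerLoc,
                        const float mvp[16], GLsizei indexCount)
{
    glUseProgram(program);

    glBindVertexArray(vao);          /* vertex + index buffer bindings live here */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);

    glUniform1i(samplerLoc, 0);      /* the sampler reads texture unit 0 */
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);  /* matrix math lives in the shader */

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
}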

Bisse
Jun 26, 2005

^^ Also, why isn't all this required stuff in the OpenGL libraries? Right now I had to figure out I needed GLEW to be able to use the ..BufferARB() stuff.

It feels really out-there to ban glBegin()/glEnd() altogether. They're slow, but they're very useful for someone learning 3D programming. Just imagine future graphics classes: "To draw your first basic polygon, poll your graphics card for function pointers to glBufferBlablabla, then create a pre-defined array of..."

Bisse fucked around with this message at 15:35 on Jan 18, 2012

Visible Stink
Mar 31, 2010

Got a light, handsome?

I agree. I'm just starting out learning OpenGL and am making a terrain generator (a bit more interesting than starting with rotating cubes), and I wanted to draw a set of axes in my scene. I used glBegin/glEnd because I didn't want to bind buffers and set up shaders just to draw 3 fucking lines. I figure it may be slower than the alternative, but it saved me a lot of effort, and should it cause me problems later on I'll deal with it then.

As I said I'm still a novice to this so if I said something dumb please correct me.

Paniolo
Oct 9, 2007

Heads will roll.

Bisse posted:

^^ Also why isn't all this required stuff in the OpenGL libraries? Right now I had to figure out I needed GLEW to be able to use the ..BufferARB() stuff.

It feels really out-there to ban glBegin()/glEnd() altogether. They're slow, but they're very useful for someone learning 3D programming. Just imagine future graphics classes: "To draw your first basic polygon, poll your graphics card for function pointers to glBufferBlablabla, then create a pre-defined array of..."

OpenGL is a low-level library; it should not be used in a graphics 101 course. There are plenty of high-level graphics engines much better suited for education, but there's only one framework for low-level access to the graphics hardware.

As for your first question, because OpenGL never bothered to put anything about deployment in the spec.

pseudorandom name
May 6, 2007

And because those functions are part of an extension that your OpenGL driver may not implement.

Bisse
Jun 26, 2005

pseudorandom name posted:

And because those functions are part of an extension that your OpenGL driver may not implement.
So the OpenGL driver may not implement the only currently allowed way to render? :confused:

pseudorandom name
May 6, 2007

If the function names have a vendor suffix (and ARB or EXT count), then there's no guarantee the driver implements them.

Spite
Jul 27, 2001

Small chance of that...

Bisse posted:

So the OpenGL driver may not implement the only currently allowed way to render? :confused:

It's a bit more complicated and nasty than that.
On Windows, the OS only implements OpenGL 1.1 or something like that, so that's all you're guaranteed to have. Everything else has to go through wglGetProcAddress, which queries the driver and returns a function pointer to the function.

To get a modern (3+) OpenGL context, you have to call wglCreateContextAttribsARB, which is not part of the old OpenGL. So you have to create an old context, call wglGetProcAddress to get the new creation function, and _then_ create your real context. It's a disaster.
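
In code, the dance looks roughly like this; a minimal sketch assuming a DC with a pixel format already set, the wglext.h header for the ARB tokens, and no error handling:

code:
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_CONTEXT_* tokens, PFNWGLCREATECONTEXTATTRIBSARBPROC */

static HGLRC create_gl3_context(HDC hdc)
{
    /* Old-style context first; wglGetProcAddress needs a current context. */
    HGLRC dummy = wglCreateContext(hdc);
    wglMakeCurrent(hdc, dummy);

    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    HGLRC real = wglCreateContextAttribsARB(hdc, NULL, attribs);

    /* Ditch the dummy and make the real context current. */
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(dummy);
    wglMakeCurrent(hdc, real);
    return real;
}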

And glBegin/End should absolutely be banned. One of OGL's biggest issues is that it has a billion ways to do things, but only one of them is fast, and the others don't tell you they are slow. As was said earlier, a more friendly API could be built on top of OGL to do similar stuff. Begin/End and fixed function are really out of date ways of thinking about modern GPUs and graphics - it may be user-friendly but it has nothing to do with how the hardware works or how you should organize your rendering.

A good low-level graphics API should only have fast paths. The ARB is attempting to remove the slow crap. Unfortunately they'll never succeed because there are too many apps and people that are using the old stuff and not adopting the new stuff.

ickbar
Mar 8, 2005
Cannonfodder #35578
Been trying to work on adding OpenGL text to my little hack, and as usual I've been completely stumped. There's a hack made by much more experienced coders that can get text in the middle of the screen (the game engine has no SDK or open source available, except for a 3rd-party SDK involving animation models etc.).

I wonder how they're able to center the text, and what function they're hooking to get there?

For this game, the most success I've had is hooking the glBegin function and issuing the glPrint command; funky stuff like turning off textures or whatever I try usually results in failure due to my being a dumb noob. From my reading of the OpenGL Red Book, all I know is that I have to push the matrix, set up the modelview matrix, the projection matrix, and something in between to get the text centered on screen, then pop the matrix.

The only thing I can get working is this. The game uses glBegin to draw HUD elements and nothing else, but I know the other guy's hack doesn't hook glBegin, because when I disable the HUD his text menu doesn't disappear.


void hookedGlBegin(GLenum mode)
{
    /* ... */
    glPrint(0, 0, "hello!");
    origGlBegin(mode);  /* call through to the real glBegin */
}

which results in this.



This is the code for the glPrint I'm using, which isn't mine BTW.

quote:

void glPrint(int x, int y, const char *fmt, ...)
{
    if (!bFontsBuild) BuildFonts();  /* lazily build the font display lists */

    if (fmt == NULL) return;

    glRasterPos2i(x, y);  /* text is drawn at the current raster position */

    char text[256];
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(text, sizeof(text), fmt, ap);  /* vsprintf could overflow text[] */
    va_end(ap);

    glPushAttrib(GL_LIST_BIT);
    glListBase(FontBase - 32);  /* font display lists start at ' ' (ASCII 32) */
    glCallLists(strlen(text), GL_UNSIGNED_BYTE, text);
    glPopAttrib();
}


This is the much more elite coder's hack, and they got text working where they want it, along with a snazzy menu. I'm not looking for anything that fancy, just to be able to draw text where I want it to be, which will be a stepping stone to drawing bounding boxes in a world-to-screen function for ESP. I've been able to do a number of successful things on my own with the hack (working wallhack, disabling foliage) since my previous posts, so if anyone has any clue how this person is able to do this, I'd appreciate it.

ickbar fucked around with this message at 17:18 on Jan 20, 2012

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

ickbar posted:

Been trying to work on adding OpenGL text to my little hack, and as usual I've been completely stumped. There's a hack made by much more experienced coders that can get text in the middle of the screen (the game engine has no SDK or open source available, except for a 3rd-party SDK involving animation models etc.).

I wonder how they're able to center the text, and what function they're hooking to get there?

For this game, the most success I've had is hooking the glBegin function and issuing the glPrint command; funky stuff like turning off textures or whatever I try usually results in failure due to my being a dumb noob. From my reading of the OpenGL Red Book, all I know is that I have to push the matrix, set up the modelview matrix, the projection matrix, and something in between to get the text centered on screen, then pop the matrix.

The only thing I can get working is this. The game uses glBegin to draw HUD elements and nothing else, but I know the other guy's hack doesn't hook glBegin, because when I disable the HUD his text menu doesn't disappear.


void hookedGlBegin(GLenum mode)
{
    /* ... */
    glPrint(0, 0, "hello!");
    origGlBegin(mode);  /* call through to the real glBegin */
}

which results in this.



This is the code for the glPrint I'm using, which isn't mine BTW.



This is the much more elite coder's hack, and they got text working where they want it, along with a snazzy menu. I'm not looking for anything that fancy, just to be able to draw text where I want it to be, which will be a stepping stone to drawing bounding boxes in a world-to-screen function for ESP. I've been able to do a number of successful things on my own with the hack (working wallhack, disabling foliage) since my previous posts, so if anyone has any clue how this person is able to do this, I'd appreciate it.



I'll give you a hint:
code:
void window_pos( GLfloat x, GLfloat y, GLfloat z, GLfloat w )
{
   GLfloat fx, fy;

   /* Push current matrix mode and viewport attributes */
   glPushAttrib( GL_TRANSFORM_BIT | GL_VIEWPORT_BIT );

   /* Setup projection parameters */
   glMatrixMode( GL_PROJECTION );
   glPushMatrix();
   glLoadIdentity();
   glMatrixMode( GL_MODELVIEW );
   glPushMatrix();
   glLoadIdentity();

   glDepthRange( z, z );
   glViewport( (int) x - 1, (int) y - 1, 2, 2 );

   /* set the raster (window) position */
   fx = x - (int) x;
   fy = y - (int) y;
   glRasterPos4f( fx, fy, 0.0, w );

   /* restore matrices, viewport and matrix mode */
   glPopMatrix();
   glMatrixMode( GL_PROJECTION );
   glPopMatrix();

   glPopAttrib();
}

ickbar
Mar 8, 2005
Cannonfodder #35578
Thank you very much, my friend; I'll take a deep look at it once I'm free. So far this is the most useful thread I've been on in SA.

ickbar fucked around with this message at 00:58 on Jan 22, 2012

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Bisse posted:

It feels really out-there to ban glBegin()/glEnd() altogether. They're slow, but they're very useful for someone learning 3D programming. Just imagine future graphics classes: "To draw your first basic polygon, poll your graphics card for function pointers to glBufferBlablabla, then create a pre-defined array of..."
It honestly isn't that hard to create a VBO, map it, write a function that just copies the parameters into it and increments a pointer, and add a flush function that terminates the primitive, called manually or when you try to push too much into the buffer.

Congratulations, you just wrote a renderer with Begin/End-like behavior, except now you don't have to rewrite everything when you want to do stuff like batching (which in this case would just mean "if the next set of geometry has the same shading properties, don't flush"), and you can easily upgrade it to pump large quantities of complex vertex data without massive function call overhead.
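
A minimal sketch of that wrapper; the Vertex layout, MAX_VERTS, and the imm_* names are invented for illustration, and vertex attribute setup is omitted:

code:
#define MAX_VERTS 4096

typedef struct { float pos[3]; float uv[2]; } Vertex;

static GLuint  s_vbo;      /* created once at startup */
static Vertex *s_mapped;   /* write cursor into the mapped VBO */
static int     s_count;
static GLenum  s_mode;

void imm_Flush(void)       /* terminates the primitive; also callable manually */
{
    if (!s_count) return;
    glUnmapBuffer(GL_ARRAY_BUFFER);
    glDrawArrays(s_mode, 0, s_count);
    s_count  = 0;
    s_mapped = NULL;
}

void imm_Begin(GLenum mode)
{
    s_mode  = mode;
    s_count = 0;
    glBindBuffer(GL_ARRAY_BUFFER, s_vbo);
    /* Orphan so a still-in-flight previous batch doesn't block the map. */
    glBufferData(GL_ARRAY_BUFFER, MAX_VERTS * sizeof(Vertex), NULL,
                 GL_DYNAMIC_DRAW);
    s_mapped = (Vertex *)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
}

void imm_Vertex(float x, float y, float z, float u, float v)
{
    if (s_count == MAX_VERTS) {    /* pushed too much: flush and restart */
        GLenum mode = s_mode;
        imm_Flush();
        imm_Begin(mode);
    }
    Vertex *out = &s_mapped[s_count++];
    out->pos[0] = x; out->pos[1] = y; out->pos[2] = z;
    out->uv[0]  = u; out->uv[1]  = v;
}

Batching then falls out naturally: just skip the flush when the next primitive's state matches.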

"The slow paths don't tell you they're slow" is part of the problem, but the other part is that you'll hit a brick wall on the limitations. If you want a nice example, circa 2002 it was becoming really obvious that extending fixed-function to the kind of effects people wanted it to do was turning into a mess, and the only way to fix it was going to ultimately be scrapping fixed-function the same way they scrapped it for the vertex pipeline when people started wanting to do skinning on the GPU.

The "added simplicity" of legacy paths is just a trap where you'll get used to doing things the lovely way and then have to relearn everything, better to do it right the first time.

HauntedRobot
Jun 22, 2002

an excellent mod
a simple map to my heart
now give me tilt shift
It's also a problem of documentation. People have had years to write tutorials and examples for the OpenGL 1.1 glBegin/glEnd stuff; for the new stuff, not so much. I mean, yeah, there are plenty of "here's how you get a triangle up on screen with VBOs" tutorials, but the quality falls off once you get beyond that, like how your approach would change if you had a list of triangle meshes, how you would handle camera stuff, etc. But it'll catch up.

gonadic io
Feb 16, 2011

>>=
So all these people have been talking about how glBegin and glEnd are really bad, but that's the only way I've been taught, and the only method I've seen in OpenGL tutorials.

Does anybody have links to a tutorial of the better way?

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

It honestly isn't that hard to create a VBO, map it, write a function that just copies the parameters into it and increments a pointer, and add a flush function that terminates the primitive, called manually or when you try to push too much into the buffer.

Congratulations, you just wrote a renderer with Begin/End-like behavior, except now you don't have to rewrite everything when you want to do stuff like batching (which in this case would just mean "if the next set of geometry has the same shading properties, don't flush"), and you can easily upgrade it to pump large quantities of complex vertex data without massive function call overhead.

"The slow paths don't tell you they're slow" is part of the problem, but the other part is that you'll hit a brick wall on their limitations. If you want a nice example: circa 2002 it was becoming really obvious that extending fixed-function to the kinds of effects people wanted was turning into a mess, and the only fix was ultimately going to be scrapping fixed-function, the same way it was scrapped in the vertex pipeline when people started wanting to do skinning on the GPU.

The "added simplicity" of the legacy paths is just a trap where you get used to doing things the shitty way and then have to relearn everything; better to do it right the first time.

Well, the other problem with removing glBegin/glEnd (which I am strongly in favor of) is that you run the risk of having the same problem DirectX does, where even simple examples need 500+ lines of cruft to handle all the resource creation, etc.

What would be nice is an updated GLUT that creates a VBO in the background and exposes something like "glutBegin/glutEnd" for examples, with the clear understanding that it's just for illustrative purposes.

haveblue
Aug 15, 2005



Toilet Rascal
Does GL 3+ still provide matrix stacks? I only know that ES 2.0 does not.

pseudorandom name
May 6, 2007

Compatibility profiles do, but they're only used if you're doing fixed function rendering.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

haveblue posted:

Does GL 3+ still provide matrix stacks? I only know that ES 2.0 does not.
They're on their way out. There's barely any point to them when you can do the exact same stuff with matrix multiplies in the vertex shader, and the documentation for the matrix functions (e.g. glOrtho and glFrustum) tells you exactly what matrices they generate.
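
To illustrate, a minimal sketch that builds the same matrix glOrtho documents and uploads it as a uniform; mvpLoc is a placeholder location, and the vertex shader is assumed to do gl_Position = u_mvp * a_position:

code:
/* Column-major ortho matrix, per the glOrtho man page. */
static void ortho(float m[16], float l, float r, float b, float t,
                  float n, float f)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = 2.0f / (r - l);
    m[5]  = 2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] = 1.0f;
}

static void set_ortho_mvp(GLint mvpLoc)
{
    float mvp[16];
    ortho(mvp, 0.0f, 800.0f, 600.0f, 0.0f, -1.0f, 1.0f);
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp);  /* no matrix stack needed */
}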

High Protein
Jul 12, 2009
I've got a question I've had no luck finding an answer to anywhere: how did old games (think Quake 1/2, Unreal) handle object lighting? I know it's per-vertex, but did they determine for each vertex whether it was visible to the various light sources? I mean, map geometry obviously affects how the objects in these games are lit, and it even works for dynamic lights such as the Unreal flares: if you throw a flare in front of a pillar, objects behind the pillar won't be lit.

From what I understand, the modern approach is to first have a shadow mapping pass and then only light the visible pixels?

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

High Protein posted:

I've got a question I've had no luck finding an answer to anywhere: how did old games (think Quake 1/2, Unreal) handle object lighting? I know it's per-vertex, but did they determine for each vertex whether it was visible to the various light sources? I mean, map geometry obviously affects how the objects in these games are lit, and it even works for dynamic lights such as the Unreal flares: if you throw a flare in front of a pillar, objects behind the pillar won't be lit.
There are a lot of different answers to this, but none of them did per-vertex light checks. The ones that did line-of-sight checks at all generally did a single trace from the center of the object to the light source and occasionally faded out the light as it stopped being visible. You can actually see this behavior in Source engine games pretty easily.

Ones that I'm sure of:
- Quake 2 and Quake 1 do no LOS checks on dynamic lights. Fixed lighting is handled by tracing a line straight down from objects and using the lightmap sample it hits. This obviously has some interesting artifacts when jumping over pits where the bottom is much brighter (i.e. lava). Lighting is accentuated in a fixed direction.
- Quake 3 does no LOS checks on dynamic lights, static lights are prebaked into a 3D "light grid" where each point has a dominant light direction, a colorized intensity from that direction, and a fixed-intensity ambient contribution.
- UE1 tracks actual light sources and does line checks for visibility. How they're factored in is a mystery though, UE1's object lighting is heinously bad with a ton of black bleed.
- Source does a single line check per light source, but indirect lighting is baked into a 6-component "ambient cube" in each BSP leaf.
- Halo 3 uses third-order spherical harmonics for everything, indirect (and possibly direct) lighting is baked into a spherical harmonics term that's stored in a KD-tree that is sparser in areas with no geometry.

High Protein
Jul 12, 2009
Thanks, that's exactly the kind of answer I was hoping for!

PalmTreeFun
Apr 25, 2010

*toot*
This is a pretty basic math theory question, but I'm taking a graphics class right now and I need a little help. Long story short, my teacher isn't the best at speaking English, much less at explaining things, and I had to go read the book just to figure out what convolution and reconstruction were. I have that figured out, but what I can't wrap my head around is resampling. You know, scaling images/resampling audio. I understand what it's supposed to do, but I don't quite get the math.

For simplicity's sake, say I'm doing it on audio or some other 1-dimensional data. If I have this data set:

f(x) = 0, 1, 4, 5, 3, 5, 7

And I want to resample this using different filters with a radius of 0.5 in order to figure out, say, f(2.5) and f(2.75), with f(2) having the value 4 in the above data set. My question is, what results should I be getting with my estimates if I use, say, a box filter (1/(2r) if -r <= x < r, 0 otherwise) as opposed to a tent/linear filter (1-abs(x) if abs(x) < 1, 0 otherwise).

I hope I didn't make that too confusing; I'm just not sure how exactly to compute resampled values. The book doesn't make it very clear. It says something about taking the data points, reconstructing and smoothing them, then resampling a new set of data, but I don't understand how you convolve two functions (a reconstruction one and a smoothing one, as opposed to a function and a data set) together.

Contero
Mar 28, 2004

I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine.

Where should I go? What should I be reading?

Unormal
Nov 16, 2004

Mod sass? This evening?! But the cakes aren't ready! THE CAKES!
Fun Shoe

Contero posted:

I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine.

Where should I go? What should I be reading?

I recently went through the same exercise; after I figured a lot of it out, I ran across this book, which summed up most of the tricks I had collected from a lot of other sources:

http://www.amazon.com/OpenGL-4-0-Shading-Language-Cookbook/dp/1849514763/ref=sr_1_fkmr2_1?ie=UTF8&qid=1328669428&sr=8-1-fkmr2

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Contero posted:

I feel like my graphics knowledge is incredibly dated, and I want to catch up in a hurry. I have decent (if a bit outdated) knowledge of OpenGL and graphics theory. I just feel like I'm missing the practical knowledge required to put together a modern and efficient graphics engine.

Where should I go? What should I be reading?
What's kind of weird is that the "high-end" graphics field is more disjointed than ever. Before programmable shaders there were only a few ways of doing things and one was usually dominant; now things can be radically different, especially depending on how you want to do lighting.

One thing that's helped me a lot are the presentations/slides from GDC and SIGGRAPH which tend to include stuff that's been tried in a real-world environment and works well from a cost/impact perspective. Bungie and Valve in particular have put out quite a few papers detailing useful, efficient techniques.

e: I'd contrast that with GPU Gems, which is much more theoretical, or shows stuff that works great in a demo but not under real-world resource constraints.

OneEightHundred fucked around with this message at 23:36 on Feb 8, 2012

haveblue
Aug 15, 2005



Toilet Rascal
Yeah, look at how many different approaches are out there just for, say, filtering shadow maps. Real-time graphics and offline graphics are converging and knowledge is moving between them faster than ever before. When I was in college around the turn of the century I was told that the rule of thumb was that real-time is perpetually where offline was ten years ago; that gap has narrowed a lot today.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

haveblue posted:

Yeah, look at how many different approaches are out there just for, say, filtering shadow maps. Real-time graphics and offline graphics are converging and knowledge is moving between them faster than ever before. When I was in college around the turn of the century I was told that the rule of thumb was that real-time is perpetually where offline was ten years ago; that gap has narrowed a lot today.
Well, offline is still chiefly concerned with accuracy, and a lot of what's happening in real-time isn't just being more physically accurate, but using techniques which can cheaply approximate physically-accurate effects (e.g. directional lightmapping, real-time light transfer), cut back on artifacts (e.g. UE3's signed distance field shadow maps, BF3's stable CSM), allocate resources in ways that more closely match the information's importance (e.g. CSM, YUV DXT5), or do things for artistic effect in spite of their non-realism.

What's especially nice is that there's a lot of stuff that is much more clever than it is complex, and once known, is very easy to implement. Probably the most admirable thing along those lines I've seen recently is pre-integrated skin rendering, which combines three completely different but technically simple techniques to fake one natural phenomenon cheaply and convincingly. The crepuscular rays thing that a lot of games are using today is hilariously unrealistic too, but it's convincing and it's CHEAP.


In general, pick something that'll work for your goals, and set your limitations in advance. Most games right now pick any of various solutions for static lighting (single-color, 3-direction basis, spherical harmonics), possibly have a single precalculated dominant light per surface that they can render in the forward pass (often the sun, lets you do better shadowing and specular), or use deferred rendering to have a lot of lights. Don't obsess about supporting every feature under the sun, support the features you need to meet your goals.

OneEightHundred fucked around with this message at 02:35 on Feb 9, 2012

Spite
Jul 27, 2001

Small chance of that...

PalmTreeFun posted:

This is a pretty basic math theory question, but I'm taking a graphics class right now and I need a little help. Long story short, my teacher isn't the best at speaking English, much less at explaining things, and I had to go read the book just to figure out what convolution and reconstruction were. I have that figured out, but what I can't wrap my head around is resampling. You know, scaling images/resampling audio. I understand what it's supposed to do, but I don't quite get the math.

For simplicity's sake, say I'm doing it on audio or some other 1-dimensional data. If I have this data set:

f(x) = 0, 1, 4, 5, 3, 5, 7

And I want to resample this using different filters with a radius of 0.5 in order to figure out, say, f(2.5) and f(2.75), with f(2) having the value 4 in the above data set. My question is, what results should I be getting with my estimates if I use, say, a box filter (1/(2r) if -r <= x < r, 0 otherwise) as opposed to a tent/linear filter (1-abs(x) if abs(x) < 1, 0 otherwise).

I hope I didn't make that too confusing; I'm just not sure how exactly to compute resampled values. The book doesn't make it very clear. It says something about taking the data points, reconstructing and smoothing them, then resampling a new set of data, but I don't understand how you convolve two functions (a reconstruction one and a smoothing one, as opposed to a function and a data set) together.

How's your math? Convolution has different meanings depending on which domain you're in. The easiest way to think about it is as the overlap between two functions. Or you can think of it as multiplying every data point in function A by every data point in function B and adding the results together. Of course, that really isn't feasible in real time, so you use a small 'kernel' as the second function.

Take, for example, a Gaussian blur.
You'll typically see something like this:
0.006 0.061 0.242 0.383 0.242 0.061 0.006
That's the kernel, that's function B.
Your set is
0, 1, 4, 5, 3, 5, 7

Let's say I want to find the blurred value of the middle element, which is 5:
0*0.006 + 1*0.061 + 4*0.242 + 5*0.383 + 3*0.242 + 5*0.061 + 7*0.006
You repeat that for each value in your set to get the convolved set, i.e.
f(x-3)*0.006 + f(x-2)*0.061 + f(x-1)*0.242 + f(x)*0.383 + f(x+1)*0.242 + f(x+2)*0.061 + f(x+3)*0.006

That's a 1D convolution; it can be extended to any number of dimensions. If you're curious about the math, you should probably take a class on signal processing, as it gets quite complex.

Or am I explaining the wrong thing?
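
To make that concrete, a minimal sketch applying that kernel to the data set above; clamping at the edges is an assumption (boundary handling is a choice, not part of the math):

code:
#include <stdio.h>

#define N 7   /* data points */
#define K 7   /* kernel taps */

int main(void)
{
    const float data[N]   = { 0, 1, 4, 5, 3, 5, 7 };
    const float kernel[K] = { 0.006f, 0.061f, 0.242f, 0.383f,
                              0.242f, 0.061f, 0.006f };

    for (int i = 0; i < N; i++) {
        float sum = 0.0f;
        for (int k = 0; k < K; k++) {
            int j = i + k - K / 2;   /* center the kernel on element i */
            if (j < 0)     j = 0;    /* clamp at the edges */
            if (j > N - 1) j = N - 1;
            sum += data[j] * kernel[k];
        }
        printf("%g -> %.3f\n", data[i], sum);
    }
    return 0;
}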

PalmTreeFun
Apr 25, 2010

*toot*

Spite posted:

Or am I explaining the wrong thing?

I think so. I understand the part you explained already, but basically what I want to know is how scaling/reconstructing a sound/image works. Like, you convert a set of discrete data to a continuous function somehow, and you can use a kernel (thanks for explaining what that was; I didn't know it and the "filter" were the same thing, this teacher really sucks at explaining things) to extrapolate new, "in-between" data.

Like, if you used something like a simple average to find a value between elements 1 and 2 (1 and 4) in the example I gave, you'd get a new value 2.5, because that's halfway from one to the other. The problem is, I don't get how you convey different ways of getting new values using a kernel. Same in reverse, shrinking the set instead of expanding it. I had an assignment on the last homework where we had to resample a data set using two different kernels, one being a tent and the other a box, and I had no idea how to compute that.

E: For what it's worth, here are the lecture slides on the topic:

http://pages.cs.wisc.edu/~cs559-1/syllabus/02-01-resampling2/resampling_cont.pdf

Scroll down to the page that says "Resampling".

E2: I just figured out what exactly the box/triangle filters do (box is rounding up/down, tent is linear interpolation), but I still don't understand how the process works in general. Like, I have no clue what's going on with the other filters: Gaussian, B-spline cubic, Catmull-Rom cubic, etc.

PalmTreeFun fucked around with this message at 00:15 on Feb 10, 2012
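
For the concrete numbers asked about above, a minimal sketch under those same readings of box (radius 0.5, rounding half up) and tent (radius 1, linear interpolation); bounds checking is omitted:

code:
#include <math.h>
#include <stdio.h>

static const float f[7] = { 0, 1, 4, 5, 3, 5, 7 };

/* Box, radius 0.5: the nearest sample wins (half-way rounds up here). */
static float box_sample(float x)
{
    return f[(int)floorf(x + 0.5f)];
}

/* Tent, radius 1: weights fall off linearly, i.e. plain lerp. */
static float tent_sample(float x)
{
    int   i = (int)floorf(x);
    float t = x - (float)i;
    return f[i] * (1.0f - t) + f[i + 1] * t;
}

int main(void)
{
    /* tent: f(2.5) = 0.5*4 + 0.5*5 = 4.5, f(2.75) = 0.25*4 + 0.75*5 = 4.75 */
    printf("box:  %.2f %.2f\n", box_sample(2.5f), box_sample(2.75f));
    printf("tent: %.2f %.2f\n", tent_sample(2.5f), tent_sample(2.75f));
    return 0;
}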

Contero
Mar 28, 2004

How do you guys feel about the glm Library? I had rolled my own vector/matrix class, but this seems pretty well implemented.

ephphatha
Dec 18, 2009




Contero posted:

How do you guys feel about the glm Library? I had rolled my own vector/matrix class, but this seems pretty well implemented.

I've been using Wild Magic 5 when I need a proper matrix/vector library, because it actually includes quaternions, something missing from a lot of other libraries (glm included, from the looks of it). Of course, most of the time I use my homebrew quaternion/vector classes...
Edit: This is probably a really stupid reason to like a library, but it actually draws a distinction between Vectors and Points, allowing operations to be performed using both types, and gives expected results.
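
The idea is affine-style typing: point minus point gives a vector, point plus vector gives a point, and nonsensical combinations don't typecheck. A minimal sketch (hypothetical types, not Wild Magic's actual API):

code:
/* Hypothetical types for illustration; not Wild Magic's actual API. */
typedef struct { float x, y, z; } Vector3;  /* a direction + magnitude */
typedef struct { float x, y, z; } Point3;   /* a location in space */

/* point - point = vector: the displacement between two locations */
Vector3 sub_pp(Point3 a, Point3 b)
{
    Vector3 v = { a.x - b.x, a.y - b.y, a.z - b.z };
    return v;
}

/* point + vector = point: translate a location */
Point3 add_pv(Point3 p, Vector3 v)
{
    Point3 q = { p.x + v.x, p.y + v.y, p.z + v.z };
    return q;
}

/* point + point is deliberately absent: it has no geometric meaning */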


One thing I've been wondering about: is there anything special you need to do when writing a program meant to output to an active-shutter 3D display? Anaglyphs are easy enough, but I'm not sure whether just configuring the GL context to refresh at 120Hz and alternating frames will produce the desired effect. I don't have the monitor with me to test, so I need to develop the program on my computer using anaglyph output and hope that it works on the 3D display at the client's location.

ephphatha fucked around with this message at 03:35 on Feb 13, 2012

Spite
Jul 27, 2001

Small chance of that...

PalmTreeFun posted:

I think so. I understand the part you explained already, but basically what I want to know is how scaling/reconstructing a sound/image works. Like, you convert a set of discrete data to a continuous function somehow, and you can use a kernel (thanks for explaining what that was; I didn't know it and the "filter" were the same thing, this teacher really sucks at explaining things) to extrapolate new, "in-between" data.

Like, if you used something like a simple average to find a value between elements 1 and 2 (1 and 4) in the example I gave, you'd get a new value 2.5, because that's halfway from one to the other. The problem is, I don't get how you convey different ways of getting new values using a kernel. Same in reverse, shrinking the set instead of expanding it. I had an assignment on the last homework where we had to resample a data set using two different kernels, one being a tent and the other a box, and I had no idea how to compute that.

E: For what it's worth, here are the lecture slides on the topic:

http://pages.cs.wisc.edu/~cs559-1/syllabus/02-01-resampling2/resampling_cont.pdf

Scroll down to the page that says "Resampling".

E2: I just figured out what exactly the box/triangle filters do (box is rounding up/down, tent is linear interpolation), but I still don't understand how the process works in general. Like, I have no clue what's going on with the other filters: Gaussian, B-spline cubic, Catmull-Rom cubic, etc.

It's kind of odd they'd have you do this without giving you the background theory on filtering and time domain vs. frequency domain.

Forgive me if I'm going a bit overboard with the explanation. The Fourier transform converts a function in the time domain into its equivalent in the frequency domain. The easiest way to visualize this is to think about a sine wave: its Fourier transform is simply two peaks, one positive and one negative, which represent the frequency of the wave.

Now, every function (at least the ones you'll be interested in) has a transform that converts it to this domain. Why is this interesting? Because you can multiply two functions together in the frequency domain to apply a filter. For example, a box function can act as a lowpass filter.

However, this is a problem, since you need all of both functions in order to do the transform. So we would like to apply the filter in the time domain, as the data is fed to us. A multiplication in the frequency domain corresponds to a convolution in the time domain, so we can take the filter we're interested in, convert it to the time domain via the inverse Fourier transform, and then do the operation with the kernel we get.

The box/triangle filters are in the frequency domain. If you want to apply them to the data set, you need to apply the inverse Fourier transform and convolve that with the set. The box filter's inverse Fourier transform is the sinc function, so we can make a kernel out of sin(x)/x and use that if we wanted to.

Gaussian is a special case, since the Fourier transform of a Gaussian is also a Gaussian.

The splines are slightly different, because they reconstruct points on a path. You can think of something like Catmull-Rom as simply a function F(t) that happens to pass through the control points.

passionate dongs
May 23, 2001

Snitchin' is Bitchin'
edit: nevermind

passionate dongs fucked around with this message at 01:49 on Feb 16, 2012


Contero
Mar 28, 2004

In GL 3.1+, do vertex array objects give a performance benefit over manually binding VBOs? What is the advantage of using them?
