HappyHippo
Nov 19, 2003

Jewel posted:

Line thickness is one of the most annoying graphics problems given how simple it seems like it should be, sadly. There are a few papers on it; it's called "line stroking".

Here's an OpenGL extension that does it which, I think, only works on Nvidia: https://www.opengl.org/registry/specs/NV/path_rendering.txt

And here's a paper on how it's done if you feel up to implementing it https://hal.inria.fr/hal-00907326/PDF/paper.pdf

Thanks for the links, I'll look it over.

Joda
Apr 24, 2010

How would you go about changing the resolution of the framebuffer you're drawing to? I'm working on a project with a 500x500 viewport, but I need to encode a 10x10 texture with data that I compute on the GPU. I'm still a huge newbie when it comes to OpenGL, so I really can't figure out how to make that work. As far as I can tell, if I make a 10x10 texture and a 10x10 renderbuffer and draw the elements I want to the texture, it just takes the top-left 10x10 pixels of the 500x500 framebuffer, instead of drawing the entire scene into 10x10 texels as I would expect. That is to say, I bind a 10x10 texture to the framebuffer, draw my elements, and get the corner of the scene instead of the whole scene.

Joda fucked around with this message at 14:48 on Dec 20, 2014

NorthByNorthwest
Oct 9, 2012

Joda posted:

How would you go about changing the resolution of the framebuffer you're drawing to? I'm working on a project with a 500x500 viewport, but I need to encode a 10x10 texture with data that I compute on the GPU. I'm still a huge newbie when it comes to OpenGL, so I really can't figure out how to make that work. As far as I can tell, if I make a 10x10 texture and a 10x10 renderbuffer and draw the elements I want to the texture, it just takes the top-left 10x10 pixels of the 500x500 framebuffer, instead of drawing the entire scene into 10x10 texels as I would expect. That is to say, I bind a 10x10 texture to the framebuffer, draw my elements, and get the corner of the scene instead of the whole scene.

In my project, I used:
code:
    glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
    glViewport(0, 0, 1024, 1024);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...draw to 1024x1024 texture...

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...draw scene...
Changing the viewport worked nicely for me.
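If it helps, here's a minimal sketch of creating the small FBO and its colour texture in the first place (fboID/texID are just placeholder names here, and you may also want a depth renderbuffer depending on what you draw):
code:
    GLuint texID = 0, fboID = 0;

    // Colour texture that will receive the render.
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 10, 10, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Framebuffer with the texture as its colour attachment.
    glGenFramebuffers(1, &fboID);
    glBindFramebuffer(GL_FRAMEBUFFER, fboID);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texID, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle incomplete framebuffer
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);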

Joda
Apr 24, 2010

NorthByNorthwest posted:

In my project, I used:
code:
    glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
    glViewport(0, 0, 1024, 1024);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...draw to 1024x1024 texture...

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...draw scene...
Changing the viewport worked nicely for me.

That worked perfectly. Thanks a bunch :)

Joda
Apr 24, 2010

Does anyone know why the Sublime Text GLSL validation plugin (which uses the ANGLE preprocessor) would say that version 330 is not supported? Is it an issue with my version of OpenGL, or can ANGLE just not be used for non-ES/WebGL code? I swear I try to Google these things, but they don't seem to be very Google friendly questions.

pseudorandom name
May 6, 2007

ANGLE is specifically an OpenGL ES-to-Direct3D implementation.

fritz
Jul 26, 2003

I'm having trouble porting some code from linux to windows (which I know basically nothing about). I'm having some trouble with glfwCreateWindow returning null in this code:
code:
void init_glfw(void) {
    glfwSetErrorCallback(dump_glfw_error);

    if (!glfwInit()) {
        std::cerr << "Failed to initialize GLFW";
        return;
    }

    glfwWindowHint(GLFW_SAMPLES, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    glfw_window = glfwCreateWindow(1024, 768, "A. Square", nullptr, nullptr); // <---- the problem is here
    if (glfw_window == nullptr) {
        glfwTerminate();
        return;
    }

    glfwMakeContextCurrent(glfw_window);
    glewExperimental = (GLboolean)true;
    glewInit();
    glfwSetInputMode(glfw_window, GLFW_STICKY_KEYS, GL_TRUE);

    // Dark blue background
    glClearColor(0.0f, 0.0f, 0.4f, 0.0f);

    // Enable depth test
    glEnable(GL_DEPTH_TEST);
    // Accept fragment if it is closer to the camera than the former one
    glDepthFunc(GL_LESS);
}
I tried (based on a stackoverflow answer) adding
code:
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
with the other hints, no luck.

My questions include:
* Is there anything immediately wrong with this code that would make it work on Linux but not Windows?
* What information about the Windows system do I need to start debugging this, and where should I look for it?
* I'm actually running Windows under a VM (VirtualBox); is that just going to be a bad idea here?

nye678
Oct 27, 2007

fritz posted:

* I'm actually running windows under a VM (with virtualbox), is that just going to be a bad idea here?

My guess would be that your VM's graphics driver cannot create a 3.3 context. Try commenting out the hints for the GL version and see if it will create a window for you. I believe GLFW will create the window with the highest context version it can get, so check what it gives you after the fact.
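Something like this right after window creation should show what you actually got (a quick sketch using the GLFW 3 API from your snippet):
code:
    glfw_window = glfwCreateWindow(1024, 768, "A. Square", nullptr, nullptr);
    if (glfw_window != nullptr) {
        glfwMakeContextCurrent(glfw_window);
        // Ask GLFW which context version it actually created.
        int major = glfwGetWindowAttrib(glfw_window, GLFW_CONTEXT_VERSION_MAJOR);
        int minor = glfwGetWindowAttrib(glfw_window, GLFW_CONTEXT_VERSION_MINOR);
        std::cout << "Got GL " << major << "." << minor
                  << " (" << glGetString(GL_VERSION) << ")" << std::endl;
    }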

shodanjr_gr
Nov 20, 2007

nye678 posted:

My guess would be that your VM's graphics driver cannot create a 3.3 context. Try commenting out the hints for the GL version and see if it will create a window for you. I believe GLFW will create the window with the highest context version it can get, so check what it gives you after the fact.

That's most likely the case... I was messing around with OpenGL inside Windows VMs on OS X, and I think none of them were able to create a context with version > 2.1 (that was a year or so ago)...

Suspicious Dish
Sep 24, 2011

fritz posted:

virtualbox

Now there's your problem

Joda
Apr 24, 2010

Isn't compiling anything low-level (e.g. C/C++) on a VM a bad idea for cross-platform compatibility? Whatever version of gcc or MSVC you're using in the VM will compile for the system it thinks it's on (i.e. whatever system the VM is emulating, including hardware-level emulation), as opposed to a native version of Windows or Linux running on dual boot.

The_Franz
Aug 8, 2003

Joda posted:

Isn't compiling anything low-level (e.g. C/C++) on a VM a bad idea for cross-platform compatibility? Whatever version of gcc or MSVC you're using in the VM will compile for the system it thinks it's on (i.e. whatever system the VM is emulating, including hardware-level emulation), as opposed to a native version of Windows or Linux running on dual boot.

On Windows, no. On Linux, only if you use a config script that detects and sets the -march parameter based on whatever processor you have.

fritz
Jul 26, 2003

Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok.

shodanjr_gr
Nov 20, 2007

fritz posted:

Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok.

If you try VMWare Fusion or Parallels, at least you will get a context that you can compile GLSL in.

BattleMaster
Aug 14, 2000

fritz posted:

Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok.

That's probably just the software opengl32.dll that Windows comes with. I haven't looked at VMs in a little while but I've never known them to have serious hardware graphics support.

Xerophyte
Mar 17, 2008

So, Khronos are crowdsourcing the name for the next OpenGL. I'm sure that'll end well.

BattleMaster
Aug 14, 2000

Is it possible to select a texture in GLSL so that I can, for example, create a buffer with data for a bunch of objects (like tiles or something) containing vertex data as well as a texture ID, and just tell it to draw the buffer once without having to sort anything by which texture it uses?

Jewel
May 2, 2009

Unsure what you mean, but maybe you could use a 3D texture and bind multiple textures as "layers" in it (indexed by the "texture ID"). This might not be a good idea, though.

BattleMaster
Aug 14, 2000

Jewel posted:

Unsure what you mean, but maybe you could use a 3D texture and bind multiple textures as "layers" in it (indexed by the "texture ID"). This might not be a good idea, though.

Sorry I think I wrote the question too hastily. In OpenGL 4, I want to load a single buffer with vertex data for a number of quads (composed of two triangles each) and draw it in one call. The caveat is that I want each quad to have one texture drawn on it, out of a pool of several textures.

Can this be done in GLSL with extra buffer data per triangle, or is the recommended method to keep separate buffers for each texture and bind a texture and draw them one by one?

BattleMaster fucked around with this message at 06:33 on Jan 20, 2015

pseudorandom name
May 6, 2007

You want a texture array, not a 3D texture.
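A rough sketch of the shader side, assuming the layer index gets passed as an extra vertex attribute (attribute names and locations here are made up; treat it as untested):
code:
// vertex shader
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;
layout(location = 2) in float layer;   // which texture in the array this quad uses
uniform mat4 mvp;
out vec2 vTexCoord;
flat out float vLayer;

void main() {
    gl_Position = mvp * vec4(position, 1.0);
    vTexCoord = texCoord;
    vLayer = layer;
}

// fragment shader
#version 330
uniform sampler2DArray tiles;
in vec2 vTexCoord;
flat in float vLayer;
out vec4 fragColor;

void main() {
    fragColor = texture(tiles, vec3(vTexCoord, vLayer));
}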

Jewel
May 2, 2009

Whoops, yeah, that's what I meant. Same thing really but no attempt at blending between layers.

BattleMaster
Aug 14, 2000

Thanks guys, I had no idea such a thing existed, but it looks perfect, and most hardware seems to support far more textures in an array than I'll have.

Falcorum
Oct 21, 2010
You can also use bindless textures, but those are a recent-ish feature and aren't really present on older cards.
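Roughly, the API side looks like this with ARB_bindless_texture, if your driver exposes it (an untested sketch; textureID and program are placeholders, and the shader needs the extension enabled):
code:
    // Turn a texture object into a 64-bit handle and make it resident.
    GLuint64 handle = glGetTextureHandleARB(textureID);
    glMakeTextureHandleResidentARB(handle);

    // Hand the handle to a sampler2D uniform declared in a shader that has
    //     #extension GL_ARB_bindless_texture : require
    glUniformHandleui64ARB(glGetUniformLocation(program, "tex"), handle);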

Goreld
May 8, 2002

pseudorandom name posted:

You want a texture array, not a 3D texture.

He could also use a texture atlas. Some people might pull out crucifixes and make hissing sounds, though.
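For what it's worth, the atlas lookup itself is just a bit of UV arithmetic in the fragment shader, something like this hypothetical sketch for a 16x16 grid (the hissing is mostly about filtering and mipmap bleed across tile edges):
code:
uniform sampler2D atlas;
const float TILES_PER_ROW = 16.0;

vec4 sampleTile(float tileIndex, vec2 localUV) {
    // Top-left corner of the tile in atlas UV space.
    vec2 tileOrigin = vec2(mod(tileIndex, TILES_PER_ROW),
                           floor(tileIndex / TILES_PER_ROW)) / TILES_PER_ROW;
    return texture(atlas, tileOrigin + localUV / TILES_PER_ROW);
}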

BattleMaster
Aug 14, 2000

Goreld posted:

He could also use a texture atlas. Some people might pull out crucifixes and make hissing sounds, though.

For some reason I never thought of that, even though I'm calculating and feeding texture coordinates to the shader anyway. However, since there's a fancypants way of doing it, I may as well use it.

Also she in spite of my name :v:

pseudorandom name
May 6, 2007

Goreld posted:

He could also use a texture atlas. Some people might pull out crucifixes and make hissing sounds, though.

Well, the whole point of a texture array is to not use a texture atlas, so, yes, we'd be sad.

roomforthetuna
Mar 22, 2005

pseudorandom name posted:

Well, the whole point of a texture array is to not use a texture atlas, so, yes, we'd be sad.
Does essentially everything support OpenGL texture arrays these days? (I know the integrated GPU of my laptop from three years ago is a butt and won't even initialize a late-version OpenGL, but I don't know if texture arrays are part of a spec old enough that it would do them.)

Edit: found that it's an OpenGL 3 feature. Unclear whether the GPU in question does OpenGL 3. Is there a table somewhere?

roomforthetuna fucked around with this message at 05:18 on Jan 23, 2015

pseudorandom name
May 6, 2007

It is a Direct3D 10 feature.
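If you just want to know at runtime whether a given GPU can do it, a quick check like this covers both the GL 3.0 core path and the older EXT_texture_array extension (untested sketch; GLEW only because that's what's being used elsewhere in the thread):
code:
    // Needs a current context and GLEW already initialised.
    GLint major = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   // stays 0 on pre-3.0 contexts
    bool hasTextureArrays = (major >= 3) || glewIsSupported("GL_EXT_texture_array");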

CodeJanitor
Mar 30, 2005
This seems like a well-timed question given the last page of discussion.

I am having issues with OpenGL texture arrays when the source is a single texture atlas image file.
The texture atlas is a 1024x1024 RGBA image file of 16x16 tiles, each being 64x64 pixels in size (similar to a 64x64 texture atlas file for Minecraft). However, attempting to generate the texture array from the single image file by describing individual tile x,y offsets and width/height has failed, so I'm definitely doing something wrong.

So, I ended up pulling out each individual tile into its own separate file (64x64 RGBA format) with the naming convention of "[Index].png". The texture array is simply made by iterating over each image file, loading it, and adding it to the texture array based on its index.

Example working code (the Bitmap class is just a wrapper for the stb_image library and does nothing weird... the images are loaded and working fine):
code:
    GLuint textureAtlas = 0;
    glGenTextures(1, &textureAtlas);
    glBindTexture(GL_TEXTURE_2D_ARRAY, textureAtlas);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage3D(
        GL_TEXTURE_2D_ARRAY,
        0,                  // mipmap level
        GL_RGBA,            // internal format
        64,                 // width
        64,                 // height
        6,                  // depth (number of array layers)
        0,                  // border
        GL_RGBA,            // source format
        GL_UNSIGNED_BYTE,   // source type
        NULL);              // no initial data; layers are filled below

    for (int index = 0; index < 6; index++)
    {
        std::stringstream ss;
        ss << ".//resources//textures//" << index << ".png";
        sandbox::Bitmap bitmap = sandbox::Bitmap::LoadFromFile(ss.str());
        bitmap.FlipVertically();

        glTexSubImage3D(
            GL_TEXTURE_2D_ARRAY,
            0,                      // mipmap level
            0,                      // x offset
            0,                      // y offset
            index,                  // array layer to write
            bitmap.GetWidth(),
            bitmap.GetHeight(),
            1,                      // one layer per upload
            GL_RGBA,
            GL_UNSIGNED_BYTE,
            bitmap.GetPixels());
    }
Yay, it works nicely, and I just provide a parameter to the shader to index the texture I want (where p is the index into the texture array):
code:
    finalColor = texture2DArray(tex, data.texCoord.stp);
I ended up doing this after I repeatedly failed to get the single-texture-atlas scenario working. Breaking up the atlas into individual image files for each texture works well enough, but I would like to simplify it to a single image file if possible.

How do I use the single image file and create the texture array by iterating through the tiles with (col, row) indexing? Everything I have tried just ends with horrible random garbage.

What I was hoping to do was to use the one texture atlas file for the call to glTexImage3D(...) as the source pixels, then just call glTexSubImage3D with the coordinate offset and width/height of each sub tile to generate the texture array.

samiamwork
Dec 23, 2006

fritz posted:

Update: y'all were right, gl in VirtualBox is only 1.1 (!!!), when I hauled out the windows laptop and built it over there it all works ok.

I'm way late to this party, but this is still useful info: out of the box, Windows only supports OpenGL 1.1 (at least up through Windows 7; I haven't tested anything more recent). You'll have to install OpenGL drivers from your graphics card vendor to get anything newer. VMware Fusion and Parallels do this when they install their integration packages, I believe. I got bitten by this a bunch at work in the past.

quote:

Edit: found that it's an OpenGL 3 feature. Unclear whether the GPU in question does OpenGL 3. Is there a table somewhere?

Maybe this helps?
http://opengl.delphigl.de

samiamwork fucked around with this message at 03:59 on Jan 26, 2015

HiriseSoftware
Dec 3, 2004

Two tips for the wise:
1. Buy an AK-97 assault rifle.
2. If there's someone hanging around your neighborhood you don't know, shoot him.
I want to apply a repeatable normal map texture to an arbitrary triangle mesh, like say a 3D model of a papercraft and give it a paper-y texture. I've got the model UV unwrapped for things like ambient occlusion baking, but I want the "paper" texture to be nice and uniform with no distortion. Can I use GLSL and texture matrices to do something like that? What math would be involved?

High Protein
Jul 12, 2009

HiriseSoftware posted:

I want to apply a repeatable normal map texture to an arbitrary triangle mesh, like say a 3D model of a papercraft and give it a paper-y texture. I've got the model UV unwrapped for things like ambient occlusion baking, but I want the "paper" texture to be nice and uniform with no distortion. Can I use GLSL and texture matrices to do something like that? What math would be involved?

I think the best approach would be to just index your texture using the model's vertex coordinates. You could use the x/y, y/z, and x/z planes to do separate lookups. See example 1-3 here: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch01.html
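A bare-bones GLSL sketch of that kind of object-space triplanar lookup (texture and scale names are made up, untested, and for an actual normal map you'd still need to reorient the sampled normals per projection):
code:
uniform sampler2D paperTex;
uniform float texScale;   // tiles per object-space unit

// p = object-space position, n = object-space normal (both from the vertex shader)
vec4 triplanar(vec3 p, vec3 n) {
    vec3 w = abs(normalize(n));
    w /= (w.x + w.y + w.z);                        // blend weights sum to 1
    vec4 cx = texture(paperTex, p.yz * texScale);  // projection along X
    vec4 cy = texture(paperTex, p.xz * texScale);  // projection along Y
    vec4 cz = texture(paperTex, p.xy * texScale);  // projection along Z
    return cx * w.x + cy * w.y + cz * w.z;
}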

nye678
Oct 27, 2007

CodeJanitor posted:

This seems like a well-timed question given the last page of discussion.

I am having issues with OpenGL texture arrays when the source is a single texture atlas image file.
The texture atlas is a 1024x1024 RGBA image file of 16x16 tiles, each being 64x64 pixels in size (similar to a 64x64 texture atlas file for Minecraft). However, attempting to generate the texture array from the single image file by describing individual tile x,y offsets and width/height has failed, so I'm definitely doing something wrong.

glTexSubImage takes a one-dimensional array for the input data; the x,y offsets and width/height parameters refer to the destination texture and have no effect on the source data. This means you need to reorganize your source pixel data so that a tile's rows are sequential in memory, rather than split up as they are when the texture atlas is loaded.

Atlas looks like this in memory
| -- Tile 1 Row 1 --| |-- Tile 2 Row 1 --| ... |-- Tile N Row 1 --| |-- Tile 1 Row 2 --| |-- Tile 2 Row 2 --| ... |-- Tile N Row 2 --| ... | -- Tile 1 Row M --| |-- Tile 2 Row M --| ... |-- Tile N Row M --|

But glTexSubImage wants
| -- Tile 1 Row 1 --| |-- Tile 1 Row 2 --| .. |-- Tile 1 Row M --|

One possible method for resolving this is to manually copy an individual tile's data into a new buffer sized for the tile and upload that to your texture.

code:
// Assuming 32-bit pixels, create a new pixel buffer for the tile.
uint32_t* tileBuffer = new uint32_t[tileWidth * tileHeight];

// For each tile in the atlas...
for (size_t tileIndex = 0; tileIndex < numTilesInAtlas; ++tileIndex)
{
    // X position in the texture atlas in tiles.
    size_t tileXIndex = tileIndex % numAtlasTileColumns;

    // Y position in the texture atlas in tiles.
    size_t tileYIndex = tileIndex / numAtlasTileColumns;

    // Starting offset of the tile in the atlas in pixels (the top-left pixel of the tile).
    size_t tileStartOffset = tileXIndex * tileWidth + tileYIndex * tileHeight * bitmapWidth;

    // For each row in a tile...
    for (size_t rowIndex = 0; rowIndex < tileHeight; ++rowIndex)
    {
        // Get a pointer to the first pixel in the atlas for that row...
        uint32_t* rowPtr = ((uint32_t*)bitmap.GetPixels()) + tileStartOffset + rowIndex * bitmapWidth;

        // Get a pointer to the first pixel in the tile buffer for that row...
        uint32_t* tileRowPtr = tileBuffer + rowIndex * tileWidth;

        // Copy row data into the tile buffer.
        memcpy(tileRowPtr, rowPtr, tileWidth * sizeof(uint32_t));
    }

    // Upload the tile buffer to the texture.
    glTexSubImage3D(
        GL_TEXTURE_2D_ARRAY,
        0, 0, 0,
        tileIndex,
        tileWidth,
        tileHeight,
        1,
        GL_RGBA,
        GL_UNSIGNED_BYTE,
        tileBuffer);
}

delete [] tileBuffer;
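An alternative sketch, assuming bitmap holds the whole atlas as one contiguous RGBA buffer: let the pixel-store unpack state do the striding instead of copying rows by hand (atlasWidthInPixels and the tile counts are placeholders, and it's worth testing since this path is easy to get subtly wrong):
code:
// Tell GL how long a full atlas row is, then point it at each tile's corner.
glPixelStorei(GL_UNPACK_ROW_LENGTH, atlasWidthInPixels);

for (size_t tileIndex = 0; tileIndex < numTilesInAtlas; ++tileIndex)
{
    size_t tileX = tileIndex % numAtlasTileColumns;
    size_t tileY = tileIndex / numAtlasTileColumns;

    glPixelStorei(GL_UNPACK_SKIP_PIXELS, (GLint)(tileX * tileWidth));
    glPixelStorei(GL_UNPACK_SKIP_ROWS,   (GLint)(tileY * tileHeight));

    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                    0, 0, (GLint)tileIndex,
                    tileWidth, tileHeight, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE,
                    bitmap.GetPixels());
}

// Reset the unpack state so later uploads aren't affected.
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);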

unixbeard
Dec 29, 2004

I was poking through "OpenGL Insights" (ch 14)



:awesome:

unixbeard
Dec 29, 2004

I have a question about VAOs and transform feedback. The code is more or less straight from here http://prideout.net/blog/?p=67

I am kinda new to "new" OpenGL, having mostly worked with 2.1, so I haven't really used VAOs much.

There are two buffers, ParticleBufferA and ParticleBufferB. I use transform feedback to do the particle location update from BufferA to BufferB, then I swap (the GLuints) so the updated locations in B are referred to in A.

From what I have read about VAOs, the general case is that you bind the VAO, then bind the buffer and set up the vertexAttrib stuff; for subsequent draws you just need to bind the VAO.

My question is: after I do the transform feedback stuff, what's the right way to handle ParticleBufferA and ParticleBufferB being swapped with respect to the VAO?

At the moment I have 1 VAO, which I set up for each particle update (the advect() function below). Is it possible to make 1 VAO at the start and not update it? Should I instead make 2 VAOs and switch between them? Or does it not really matter that I am re-doing the VAO after every frame?

I am using openFrameworks so advectShader is just a vertex shader that moves the particles based on a fixed velocity, and renderParts is a vertex/frag shader that just does the MVP transform and sets a color. The buffer data in the Particles arrays is interleaved with position (3 floats x, y, z), birth time (single float), velocity (3 floats for x, y, z direction). partcnt is the number of particles.

code:

setup
......

    advectShader.setupShaderFromFile(GL_VERTEX_SHADER, "advect.vert");
    advectShader.bindDefaults();
    advectProgram = advectShader.getProgram();
    
    const char* varyings[3] = { "vPosition", "vBirthTime", "vVelocity" };
    glTransformFeedbackVaryings(advectProgram, 3, varyings, GL_INTERLEAVED_ATTRIBS);
    
    advectShader.linkProgram();

    SlotPosition =  advectShader.getAttributeLocation("Position");
    SlotBirthTime =  advectShader.getAttributeLocation("BirthTime");
    SlotVelocity = advectShader.getAttributeLocation("Velocity");

    // Create VBO for input on even-numbered frames and output on odd-numbered frames:
    glGenBuffers(1, &ParticleBufferA);
    glBindBuffer(GL_ARRAY_BUFFER, ParticleBufferA);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Particles), &Particles[0].Px, GL_STREAM_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Create VBO for output on even-numbered frames and input on odd-numbered frames:
    glGenBuffers(1, &ParticleBufferB);
    glBindBuffer(GL_ARRAY_BUFFER, ParticleBufferB);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Particles), 0, GL_STREAM_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

...

void testApp::advect()
{
    if (vaoIDA == 0)
        glGenVertexArrays(1, &vaoIDA);
    
    advectShader.begin();
    advectShader.setUniform1f("Time", curtime);
    advectShader.setUniformTexture("Sampler", potentialRender.getTextureReference(), 1);
    glBindVertexArray(vaoIDA);

    glEnable(GL_RASTERIZER_DISCARD);
    glBindBuffer(GL_ARRAY_BUFFER, ParticleBufferA);
    glEnableVertexAttribArray(SlotPosition);
    glEnableVertexAttribArray(SlotBirthTime);
    glEnableVertexAttribArray(SlotVelocity);
    unsigned char* pData = 0;
    glVertexAttribPointer(SlotPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), pData);
    glVertexAttribPointer(SlotBirthTime, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), 12 + pData);
    glVertexAttribPointer(SlotVelocity, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), 16 + pData);
    
    // Specify the target buffer:
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, ParticleBufferB);
    
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    
    // Draw it:
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, partcnt);
    
    // Restore:
    glEndTransformFeedback();
    
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &primitives);
    
    glBindVertexArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    
    glDisableVertexAttribArray(SlotPosition);
    glDisableVertexAttribArray(SlotBirthTime);
    glDisableVertexAttribArray(SlotVelocity);
    glDisable(GL_RASTERIZER_DISCARD);
    std::swap(ParticleBufferA, ParticleBufferB);
    
    advectShader.end();
    
}
...

draw()
...

    ofClear(0, 255);
    ofSetColor(255);
    glPointSize(1.0);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    
    renderParts.begin();
    
    glBindVertexArray(vaoIDA);
    glDrawArrays(GL_POINTS, 0, partcnt);
    glBindVertexArray(0);
    glDisable(GL_BLEND);
    renderParts.end();

so just to be clear advect() and draw() are called every frame.

I guess ideally I would like to set up my VAO(s) in setup() and just have a call to glBindVertexArray(some_vao_id) in advect(), rather than all the glBindBuffer/glEnableVertexAttribArray/glVertexAttribPointer stuff, but I'm not sure what the right way to do that is if I swap the buffers. Maybe it doesn't even matter.

I'm trying to get as many particles as possible, so I would like it to be as performant as possible; let me know if you have any other comments or suggestions.

Spatial
Nov 15, 2007

Seems like the ideal thing would be to set up two VAOs and switch between them. It's certainly a waste of time to create a VAO every time you draw, set it up, and then throw it away.
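Something along these lines in setup() would do it, a sketch reusing the names from your post (untested): one VAO per buffer, each capturing its own attribute pointers, then each frame you just bind whichever VAO reads from the current source buffer.
code:
GLuint vaos[2];
GLuint buffers[2] = { ParticleBufferA, ParticleBufferB };
glGenVertexArrays(2, vaos);

for (int i = 0; i < 2; ++i)
{
    glBindVertexArray(vaos[i]);
    glBindBuffer(GL_ARRAY_BUFFER, buffers[i]);   // captured per attribute pointer below

    glEnableVertexAttribArray(SlotPosition);
    glEnableVertexAttribArray(SlotBirthTime);
    glEnableVertexAttribArray(SlotVelocity);
    glVertexAttribPointer(SlotPosition,  3, GL_FLOAT, GL_FALSE, sizeof(Particle), (void*)0);
    glVertexAttribPointer(SlotBirthTime, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), (void*)12);
    glVertexAttribPointer(SlotVelocity,  3, GL_FLOAT, GL_FALSE, sizeof(Particle), (void*)16);
}
glBindVertexArray(0);

// Each frame: bind vaos[frame % 2] as the source, bind the other buffer to
// GL_TRANSFORM_FEEDBACK_BUFFER as the destination, draw, then advance a frame
// counter instead of swapping the GLuints.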

I'm not sure what you're trying to do after you're finished with the VAO in advect(). The ARRAY_BUFFER binding is a property of each vertex attribute array pointer in the VAO, and those won't be modified by unbinding it. The attribute enables are also properties of the VAO, so once you've unbound it those disable calls aren't touching it.

Also there's a memory leak because you're not deleting the VAO after. :ohdear:

Edit: Oh sorry, you only initialise it once. :)

Spatial fucked around with this message at 12:21 on Jan 29, 2015

unixbeard
Dec 29, 2004

Hey thanks, that makes sense. Yeah, I c&p'd most of that from the tutorial and he doesn't use VAOs, so I wasn't sure what needed to be done with unbinding, as the example does it every frame. For some reason glDrawArrays didn't work in oF unless I used a VAO, which is why I added them in. NFI why that was happening; I'm sure it's something deep in the guts of oF.

[edit] also, using 2 VAOs did not seem to have a material impact on performance vs rebinding everything each time, maybe a 0.5 ms improvement

unixbeard fucked around with this message at 09:13 on Jan 30, 2015

CodeJanitor
Mar 30, 2005

nye678 posted:

glTexSubImage takes a one-dimensional array for the input data; the x,y offsets and width/height parameters refer to the destination texture and have no effect on the source data. This means you need to reorganize your source pixel data so that a tile's rows are sequential in memory, rather than split up as they are when the texture atlas is loaded.
*snip*

Sorry, I have been really busy for the last couple of weeks, but thank you for the information on how the function handles the data. Wasn't able to find any good explanation of the details and appreciate the help!

Boz0r
Sep 7, 2006
Are there any recommendations for books teaching modern OpenGL (or DirectX)?

nye678
Oct 27, 2007
I recommend the OpenGL SuperBible sixth edition.
