I want to point a Uint32Array buffer at a uint input variable.

All the information I can find online says that it should be possible.

I get this error:

[.WebGL-0x62401b7e200] GL_INVALID_OPERATION: Vertex shader input type does not match the type of the bound vertex attribute.

My vertex shader:

#version 300 es
in vec4 a_ipos;
in uint a_cdata;
uniform vec2 ures;

void main() {
    uint x = a_cdata & 0x7C00u;
    uint y = a_cdata & 0x03E0u;
    uint z = a_cdata & 0x001Fu;

    vec4 pos = a_ipos + vec4(float(x), float(y), float(z), 1.);

    gl_Position = vec4(pos.x * ures.y / ures.x, pos.yzw);
}

My call to WebGL to point the buffer at the attribute:

gl.bindBuffer(gl.ARRAY_BUFFER, chunkBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint32Array(chunkData), gl.STATIC_DRAW);
gl.enableVertexAttribArray(cdataloc);
gl.vertexAttribPointer(
    cdataloc,
    1,
    gl.UNSIGNED_INT,
    false,
    0,
    0
);

It seems gl.UNSIGNED_INT is not the same type as uint. However, the GLSL ES 3.00 reference card says that uint is a 32-bit unsigned integer, and MDN agrees that gl.UNSIGNED_INT is a 32-bit unsigned integer.

I tried gl.INT and in int as well. Changing the precision of the integers to highp doesn't change anything either (assuming highp would make the integers 32-bit, which doesn't seem to be the case).

Changing the type to float does work, but no other type does.

I also tried gl.SHORT, to rule out the theory that the int type in GLSL might be a 16-bit integer. Still the same error.

WebGL: How to Use Integer Attributes in GLSL doesn't solve my issue.
The documentation that answer cites is outdated: GLSL ES 1.00 doesn't allow integers in attributes, as per the specification.

However, the GLSL ES 3.00 specification doesn't seem to care, as long as the type is not one of the following: boolean, opaque, array, or struct.

1 Answer

This is a bit of a classic mistake. The data type in vertexAttribPointer() refers to the data type in the array itself (or rather, in the buffer).

It does not refer to the data type of the attribute in the shader. When you use vertexAttribPointer(), the data is always converted to floats. The normalized flag controls whether the values get normalized, but they are converted to floats either way.

What you're actually looking for is vertexAttribIPointer() (notice the I for Integer).

vertexAttribPointer()
//          ^ No I so Float
vertexAttribIPointer()
//          ^ I for Integer
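
Applied to your snippet, a rough sketch of the fix would be (same chunkBuffer, chunkData and cdataloc as in your question; note that vertexAttribIPointer() takes no normalized argument):

gl.bindBuffer(gl.ARRAY_BUFFER, chunkBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint32Array(chunkData), gl.STATIC_DRAW);
gl.enableVertexAttribArray(cdataloc);
// Integer pointer: the 32-bit values reach the shader untouched,
// matching "in uint a_cdata" in the vertex shader.
gl.vertexAttribIPointer(
    cdataloc,
    1,                // one component per vertex
    gl.UNSIGNED_INT,  // matches the Uint32Array in the buffer
    0,                // stride (tightly packed)
    0                 // offset into the buffer
);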

The docs for vertexAttribPointer() also mention it under Integer attributes:

Integer attributes

While the ArrayBuffer can be filled with both integers and floats, the attributes will always be converted to a float when they are sent to the vertex shader. If you need to use integers in your vertex shader code, you can either cast the float back to an integer in the vertex shader (e.g. (int) floatNumber), or use gl.vertexAttribIPointer() from WebGL2.
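
If you wanted to stay with vertexAttribPointer() instead, you'd have to declare the attribute as a float on the shader side and cast it back, roughly like this sketch of your shader (fine in your case, since the packed values stay well below 2^24, beyond which 32-bit floats can no longer represent every integer exactly):

#version 300 es
in vec4 a_ipos;
in float a_cdata;   // float on the shader side when using vertexAttribPointer()
uniform vec2 ures;

void main() {
    uint cdata = uint(a_cdata);   // cast back to an integer in the shader
    uint x = cdata & 0x7C00u;
    uint y = cdata & 0x03E0u;
    uint z = cdata & 0x001Fu;

    vec4 pos = a_ipos + vec4(float(x), float(y), float(z), 1.);

    gl_Position = vec4(pos.x * ures.y / ures.x, pos.yzw);
}

But vertexAttribIPointer() is the more direct fix, since it skips the float round-trip entirely.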


You can also read more about it in the spec under "Uniforms and Attributes".


4 Comments

I haven't tried to see if this fixes my issue. If it does I'll mark it as the answer. However, if true: WHAT THE F. JUST HOW IN THE DEEPEST PIT OF HELL DOES SOMEONE THINK THAT ADDING 2 FUNCTIONS THAT LOOK IDENTICAL AT A GLANCE YET DO 2 VERY DIFFERENT THINGS WAS A GOOD IDEA? Thank you for your time finding the solution to my problem.
Actually, now that I'm reading the docs on MDN, those docs on vertexAttribPointer are wrong then. I'm going to report it to Mozilla.
You're welcome! Based on the error and the code snippet, I'm confident in saying this is your issue. However, feel free to comment if it doesn't resolve it. But yeah, like I said, this is a very classic mistake. Even the newer OpenGL API "suffers" from it, having glVertexAttribFormat() and glVertexAttribIFormat(), or, even more confusingly, glVertexArrayAttribFormat() and glVertexArrayAttribIFormat() if you use DSA.
Which part of the Mozilla docs do you think is wrong? Using an integer data type for vertexAttribPointer() is perfectly valid, as long as you want it converted to a float. I just realized now that the Mozilla docs also mention it, down at Integer attributes (I updated the answer to include the quote).
