Storing a position in a buffer needs a lot of space! We need 32 bits per channel (RGBA32F), which is 128 bits per pixel, and that is too much. Low-end and even mid-range GPUs don't handle these formats very well, and it gets horribly slow.
That's why we will use the depth buffer (which is generated anyway).
Position is calculated in view space, because we actually don't need to go back to world space:
1. Sample the depth buffer.
2. Unproject the depth buffer value to find position.z.
3. Unproject the NDC position to find position.xy.
All code listed is GLSL 150 core (GL 3.2) and C++.
We do not need to encode anything because the depth buffer is generated automatically!
Here is where the fun starts.
Step 1: Sampling the depth buffer
This should be easy, but for completeness:
//Generate texture to store depth in
glGenTextures(1, &m_DepthTextureID);
glBindTexture(GL_TEXTURE_2D, m_DepthTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height,
    0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
//Attach to your framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
    GL_TEXTURE_2D, m_DepthTextureID, 0);
Pass m_DepthTextureID as a sampler2D to the shader.
GLSL 150 core:
p_projParams holds the projection parameters taken out of the projection matrix (the A, B, C, D values derived below).
Every vertex in the vertex shader gets multiplied by a World View Projection matrix:
1. Transform the vertex to world space.
2. Transform the vertex to view space.
3. Transform the vertex to screen space.
4. On the way to the fragment shader, the position is divided by its .w component.
Now we have NDC (normalized device coordinates), ranging from -1 to 1, where .z is our depth component and .xy our 2D position on the screen.
The trick: we want to go back to view space, so we do the inverse of the projection and the perspective divide!
Side note: OpenGL stores depth from 0 to 1 by default, so we need to take this into account as well.
We will do this inversion in one go.
Finding the formula:
* Simplify the projection matrix to A, B, C, D.
* Multiply the matrix with the view-space position (x, y, z, 1).
* Divide by .w.
* NDC position = (A*x / z, B*y / z, C + D/z, 1), with (x, y, z) in view space.
* Now we calculate z in view space:
* The depth stored in the depth buffer == NDC.z * 0.5f + 0.5f
* view z = D / (NDC.z - C), with NDC.z being sampled depth * 2.0f - 1.0f
* We can now calculate view x and y:
* view x = (NDC.x * view z) / A
* view y = (NDC.y * view z) / B
The Math: (whiteboard action)
The view-space position is reconstructed from the depth with this function in the shader: