Eternal Return. Created with Processing & Blender. 20 frames. Related: Virtual Lands

Open GL Sunburst

Polygon Land.

58/100 “constellation”

Blah blah blah particles.

the aperture problem

SPIDERS

(via ROME | Tech)

Open GL Particles

65/100 “post up”

Doing some post-processing in the frag shader and rendering the FBO.

Glitches captured from development

Hlaalu Brassguard got heavily redesigned recently. I hope this will be the last iteration of these guys. I took inspiration from traditional European armor patterns and added some Dunmeric details. The most western-oriented house must have the most western-oriented armor. A bit of cargo cult.

Pssst, I’m open for commissions!

Trying to get OpenGL to work with fullscreen mode in Java has led to some interesting issues…

29/100 “apprehension”

Google MapsGL - The shadows cast by the buildings are relative to the current position of the sun.

/via Chaotic

Wish I could show you guys this program in the browser, but WebGL is giving me trouble.

Reconstructing Position from Depth buffer

Why

Storing a position in a buffer needs a lot of space! We need 32 bits per channel (RGBA32F), which comes to 128 bits (16 bytes) per pixel, and that is too much. Low- and even mid-end GPUs don’t handle these formats very well and it goes horribly slow. That’s why we will use the depth buffer (which is generated anyway).

Steps

Position is calculated in view space, because we don’t actually need to go back to world space.

- Sample the depth buffer
- Unproject the depth value to find position.z
- Unproject the nDC position to find position.xy

Implementation

All code listed will be GLSL 150 core (GL 3.2) and C++.

Encoding

We do not need to encode anything, because the depth buffer is generated automatically!

Decoding

Here is where the fun starts.

Step 1: Sampling the depth buffer

This should be easy, but for completeness:

C++

//Generate a texture to store depth in
glGenTextures(1, &m_DepthTextureID);
glBindTexture(GL_TEXTURE_2D, m_DepthTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

//Attach to your framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, m_FboID);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_DepthTextureID, 0);

Pass m_DepthTextureID as a sampler2D to the shader.

GLSL 150 core:

in vec2 texCoord;
uniform sampler2D depthMap;

void main()
{
    float depth = texture(depthMap, texCoord).x;
}

Step 2: Unprojecting

I will give the code first and then explain what it does.

GLSL:

vec3 reconstructPosition(in float p_depth, in vec2 p_ndc, in vec4 p_projParams)
{
    float depth = p_depth * 2.0f - 1.0f;
    float viewDepth = p_projParams.w / (depth - p_projParams.z);
    return vec3((p_ndc * viewDepth) / p_projParams.xy, viewDepth);
}

C++:

p_projParams = vec4(m_pCamera->getProjectionMatrix()(0, 0), m_pCamera->getProjectionMatrix()(1, 1), m_pCamera->getProjectionMatrix()(2, 2), m_pCamera->getProjectionMatrix()(2, 3));

p_projParams are the projection parameters taken out of the projection matrix.

Every vertex in the vertex shader gets multiplied by a World View Projection matrix:

1. Transform the vertex into worldspace.

2. Transform the vertex into viewspace.

3. Transform the vertex into screenspace.

4. On the way to the fragment shader the position is divided by the .w component. Now we have nDC (normalized device coordinates).

nDC ranges from -1 to 1, where .z is our depth component and .xy is our 2D position on the screen.

The trick: we want to go back to viewspace, so we do the inverse of steps 4 and 3!

Side note: OpenGL stores depth in the [0, 1] range by default, so we need to take this into account as well.

We will do this inverting in one go.

* Abbreviate the relevant projection matrix entries as A, B, C, D

* Multiply the matrix with the view-space position (x, y, z, 1)

* Divide by .w

* nDC position = (A*x / z, B*y / z, C + D/z, 1), with (x, y, z) the position in viewspace

* Now we calculate z in viewspace

* The depth stored in the depth buffer == nDC.z * 0.5f + 0.5f

* View z = D / (nDC.z - C), with nDC.z being the sampled depth * 2.0f - 1.0f

* We can now calculate view xy

* view x = (nDC.x * view z) / A

* view y = (nDC.y * view z) / B

Conclusion

The position is reconstructed with this function in the shader:

vec3 reconstructPosition(in float p_depth, in vec2 p_ndc, in vec4 p_projParams)
{
    float depth = p_depth * 2.0f - 1.0f;
    float viewDepth = p_projParams.w / (depth - p_projParams.z);
    return vec3((p_ndc * viewDepth) / p_projParams.xy, viewDepth);
}

p_depth = the sampled depth buffer value

p_ndc = the nDC position of the current pixel (texCoord.xy * 2.0f - 1.0f)

p_projParams = vec4(A, B, C, D); (from C++)

p_projParams = vec4(m_pCamera->getProjectionMatrix()(0, 0), m_pCamera->getProjectionMatrix()(1, 1), m_pCamera->getProjectionMatrix()(2, 2), m_pCamera->getProjectionMatrix()(2, 3));

That’s all! Happy coding!

Yay, artistic procrastination continues!

I’m addicted to Flappy Bird, and I was bored today…. Since I was not able to break my record (**63! Suck it, losers!**) I decided to code my own version of it!

It’s not over yet, but the basis is there. We have a flying cube and obstacles popping up randomly. I need to finish the collisions before putting real graphics on it (I’ll probably draw some textures).

Here’s the link to my GitHub if you want to see or learn… *I commented the code!* Click here: **GITHUB**

It’s C++, OpenGL 2 and SDL 2. Very simple.

a glBlendFunc error