5. GLSL Per Pixel Lighting
Jump To:
main.cpp Source
shader.vert Source
shader.frag Source
Download
This tutorial is going to be quite simple. To achieve the effect of per pixel lighting, we simply need to move one calculation from our vertex shader to our fragment shader.
So let's go to it!
Main.CPP File: Just for the sake of it, I have switched to a glutSolidTeapot for this tutorial. It will come in handy when testing different effects.
Vertex Shader Source: In the last tutorial, to achieve lighting, we had one varying variable called diffuse_value. In this tutorial, we are going to remove this varying and create two others. One of them is going to contain the vertex_light_position, and the other is going to contain the vertex_normal:
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
Now we need to store some values in here. The normal is collected the exact same way as last time:
vertex_normal = normalize(gl_NormalMatrix * gl_Normal);
And that's all there is in the vertex shader. Note that I have removed the diffuse_value variable.
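Putting those pieces together, the whole vertex shader looks roughly like this. The vertex_light_position line is assumed from the previous tutorial (it reads the fixed-function light 0 position and normalizes it), and the gl_FrontColor/gl_Position lines are the usual pass-through boilerplate, so treat this as a sketch rather than the exact download:

// shader.vert (sketch)
varying vec3 vertex_light_position;
varying vec3 vertex_normal;

void main() {
    // transform the normal into eye space
    vertex_normal = normalize(gl_NormalMatrix * gl_Normal);

    // assumed from the previous tutorial: read light 0's position and normalize it
    vertex_light_position = normalize(gl_LightSource[0].position.xyz);

    // pass the vertex colour through and transform the vertex as usual
    gl_FrontColor = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}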
Fragment Shader Source: Since I changed the varying variables in the vertex shader, I also need to update them in the fragment shader:
varying vec3 vertex_light_position;
varying vec3 vertex_normal;
Now, if you remember the diffuse_value calculation from the previous tutorial, it has pretty much been copied and pasted into this tutorial, only now it resides inside the fragment shader:
float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
And that is all there is to it. We now have per pixel lighting. That wasn’t too hard 🙂 If you have any questions, please email me at swiftless@gmail.com
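For reference, the complete fragment shader looks roughly like this. The final gl_FragColor line, which scales the interpolated colour by the diffuse term, is assumed from the previous tutorial:

// shader.frag (sketch)
varying vec3 vertex_light_position;
varying vec3 vertex_normal;

void main() {
    // the diffuse calculation, now evaluated once per pixel
    // (re-normalizing vertex_normal here would give a slightly more accurate result,
    //  since interpolation can shorten the normal between vertices)
    float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);

    // assumed from the previous tutorial: scale the interpolated colour by the diffuse term
    gl_FragColor = gl_Color * diffuse_value;
}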
Download:
Download shader.h Source Code for this Tutorial
Download shader.cpp Source Code for this Tutorial
Download main.cpp Source Code for this Tutorial
It seems like it would work for the GLSL Shaders mod for Minecraft.
Hi. IMHO this is not per pixel lighting yet. It’s still flat vertex/surface lighting, just calculated in the fragment shader instead of the vertex shader. It still calculates the light position against the vertex normal instead of the light relative to the fragment position, or to positions interpolated from the triangle’s vertices. You can clearly see that if you draw a cube instead of a teapot or sphere.
I can’t figure out how to add specular light to the pixel shader.
When I do, it comes out per-vertex, not per-pixel…
Wtf?!
Using the light position is like 10x easier than the direction!!!!
This might be a real noob question,
I have followed the 5 GLSL tutorials up till now, but I am unable to understand how we have achieved per pixel/vertex effects. I mean, we have called our shader program inside the ‘display’ function and this will update per frame, so the code in the shader program will affect each and every pixel/vertex in the same way. How have we achieved control over each vertex/pixel?
How can we, in the same run, say, change the color of a few pixels to red and a few to white depending on their z values?
Hi Paras,
If you would like to do something like that, the vertex shader allows you to read the z-values of vertices. You can then set up a variable which is sent from the vertex shader to the fragment shader, and the fragment shader can use this variable to decide whether or not to change the colour of the current pixels.
It might be important to note that pixel values are interpolated between vertices. So if one vertex has a z-value of -10 and we tell the pixels for that vertex to be red, and the next vertex has a z-value of -20 (both for the same geometry) and we tell the pixels for that vertex to be green, then we will see a smooth blend between red and green.
While we have full control over our pixels, we can’t access them directly on an x,y basis and say, I want this pixel at this location to be this. The pixel shader is applied to geometry, and the pixels created by the geometry are what get manipulated.
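As a rough illustration of that idea (not code from this tutorial’s download; the varying name vertex_z and the -15.0 threshold are made up for the example), the pair of shaders could look something like this:

// vertex shader: pass each vertex's eye-space z to the fragment shader
varying float vertex_z;

void main() {
    vec4 eye_position = gl_ModelViewMatrix * gl_Vertex;
    vertex_z = eye_position.z;
    gl_Position = gl_ProjectionMatrix * eye_position;
}

// fragment shader: vertex_z arrives here interpolated per pixel
varying float vertex_z;

void main() {
    if (vertex_z > -15.0)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // nearer fragments red
    else
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); // farther fragments white
}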
Cheers,
Swiftless
Why do you call it “vertex_light_position”, when it is very clearly a light *direction*? You normalize it, you dot it with the surface normal. It’s a direction, not a position.
Also, there’s no reason to pass it as a varying, since it doesn’t change. It’s just the light direction accessed from the OpenGL variables, which the fragment shader can access just as well as the vertex shader.
Hey Alfonse,
While your comments have a point, I call it vertex_light_position because OpenGL refers to it as the light position. Also, it depends on the type of light: if you have a point light, it has a position and no direction, while a directional light is the opposite and has a direction but no position.
As for varying variables, it is not always optimal to do all your lighting calculations in the fragment shader; this lets us offload some parts to the vertex shader. This also saves us reading the variable every pixel from OpenGL; we can read it per vertex, which can save quite a few fetches.
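To make the point/direction distinction concrete, here is a small sketch of how the vertex shader line would differ for the two light types under the fixed-function light 0 state (the vertex_position name is just for the example, not from the tutorial code):

// directional light: position.w is 0.0, so the xyz is already a direction
vertex_light_position = normalize(gl_LightSource[0].position.xyz);

// point (positional) light: position.w is 1.0, so build a per-vertex
// direction from the eye-space vertex position towards the light
vec3 vertex_position = vec3(gl_ModelViewMatrix * gl_Vertex);
vertex_light_position = normalize(gl_LightSource[0].position.xyz - vertex_position);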
Thanks,
Swiftless
Yes, OpenGL’s default lighting variables do call it a “position” even for directional lights. But that doesn’t mean that you need to do so. This is, after all, a tutorial; a way to make things clearer to the user. Treating a position like a direction does not help make things clearer.
As for the cost of reading uniforms in a fragment shader, that simply doesn’t exist. Reading uniforms costs just as much as reading varyings.
Also, passing varyings has a very real cost associated with it. It makes the per-output vertex data bigger, which means that per-output vertex data takes up more room in the post-T&L cache. Which means you can have fewer vertices in the post T&L cache. Which slows things down.
The cost of reading a uniform doesn’t compare to the cost of an additional varying.
Hi Alfonse,
I see what you mean, but I stand by my decision to call it a position, to keep the transition from the fixed-function mode simple. But I should explain this better when I do a rewrite of the tutorial eventually. I’m pretty sure I explain it on the OpenGL Tips page, but that is a bit of a round-about trip. Also, it’s not the fact that it’s dotted and normalized that makes it a direction. You do the same calculations for a positional light (it shoots out in all directions). It’s just a minor change in the rest of your code to switch from positional to directional in appearance.
I’m not sure you understand what I meant in regard to the number of variable fetches.
“As for varying variables, it is not always optimal to do all your lighting calculations in the fragment shader; this lets us offload some parts to the vertex shader.”
For example, you do the dot product calculation. If you do it per-vertex, you may have 1000 vertices and this means 1000 dot product calculations, as opposed to possibly 1000×1000 calculations if you do it on a per-pixel basis. Yes, there is a difference in quality, this is per-vertex lighting compared to per-pixel lighting, but it’s a valid example of the speed difference.
“This also saves us reading the variable every pixel from OpenGL; we can read it per vertex, which can save quite a few fetches.”
Take bump mapping vs displacement mapping. Either way you have a texture read. But in the vertex shader, you only have one texture fetch per vertex, as opposed to a texture fetch every pixel.
These examples are more on the extreme differences side, but it shows the point I am trying to make clearly.
Just a note on what you mentioned about uniforms vs varyings in regard to speed. Uniforms are passed from system memory to GPU memory; this has a MUCH higher cost associated with it compared to a varying variable, which is created on the GPU and requires no passing around in memory.
I’d like to know where the information is coming from that the more varying variables you have, the fewer vertices you can have. Vertex output relies on the hardware capability, not the number of varying variables you have. Imagine the kind of trouble geometry shaders and tessellation engines would run into if that were the case, where you can effectively output several times the number of incoming vertices.
Cheers,
Swiftless