I'm not sure what you mean by "invert" (you surely cannot mean inverting the distance itself, as that won't work). What you do is transform the distance to the light source into the [0,1] range.

This can be done by constructing a usual projection matrix for the light source's view and applying it to the vertices in the shadow map construction pass. This way their distance to the light source is written into the depth buffer (to which you can connect a texture with `GL_DEPTH_COMPONENT` format, either by `glCopyTexSubImage` or FBOs). In the final pass you of course use the same projection matrix to compute the texture coordinates for the shadow map using projective texturing (with a `sampler2DShadow` sampler when using GLSL), as sketched below.
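A minimal GLSL sketch of that final pass, assuming the vertex shader has already transformed the vertex by the light's view and projection matrices (plus a bias matrix mapping [-1,1] to [0,1]); the names `shadowMap` and `shadowCoord` are illustrative:

```glsl
// Final pass fragment shader. shadowCoord is assumed to be computed in the
// vertex shader as bias * lightProjection * lightView * vertex, i.e. with
// the same projection matrix used in the shadow map construction pass.
uniform sampler2DShadow shadowMap;  // bound to the GL_DEPTH_COMPONENT texture
varying vec4 shadowCoord;

void main()
{
    // shadow2DProj performs the perspective divide and the depth comparison;
    // the result is 0.0 (in shadow) or 1.0 (lit). For the comparison to
    // happen, GL_TEXTURE_COMPARE_MODE must be set to GL_COMPARE_R_TO_TEXTURE
    // on the texture object.
    float lit = shadow2DProj(shadowMap, shadowCoord).r;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```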
But this transformation is not linear, as the depth buffer has higher precision near the viewer (or the light source, in this case). Another disadvantage is that you have to know the valid range of the distance values (the farthest point your light source affects). Using shaders (which I assume you do), you can make this transformation linear by simply dividing the distance to the light source by this maximum distance and manually assigning the result to the fragment's depth value (`gl_FragDepth` in GLSL), which is probably what you meant by "invert".
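For illustration, a sketch of the construction pass fragment shader under that scheme; `maxDistance` and `lightVec` are assumed inputs of this sketch, not part of any standard API:

```glsl
// Shadow map construction pass: replace the rasterizer's hyperbolic depth
// with a linear one. lightVec is the fragment's position relative to the
// light (passed from the vertex shader); maxDistance is the farthest
// distance the light affects.
uniform float maxDistance;
varying vec3 lightVec;

void main()
{
    // Map the distance to the light linearly into [0,1] and write it as
    // the fragment's depth, overriding the interpolated value.
    gl_FragDepth = length(lightVec) / maxDistance;
}
```

The final pass then of course has to compare against the same linear metric.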
The division (and the knowledge of the maximum distance) can be avoided by using a floating point texture for the light distance: just write the distance out as a color channel and perform the depth comparison in the final pass yourself (using a normal `sampler2D`). But linear filtering of floating point textures is only supported on newer hardware, and I'm not sure it will be faster than a single division per fragment. The advantage of this approach, though, is that it opens the path to things like variance shadow maps, which won't work that well with normal ubyte textures (because of their low precision) nor with depth textures.
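A sketch of that manual comparison, assuming the construction pass wrote `length(lightVec)` into the red channel of a float texture; the small bias against self-shadowing is an arbitrary illustrative value:

```glsl
// Final pass with a floating point color texture holding the light
// distance; the comparison is done by hand, so a plain sampler2D suffices.
uniform sampler2D lightDistanceMap;
varying vec4 shadowCoord;  // light-space position, biased into [0,1]
varying vec3 lightVec;     // fragment position relative to the light

void main()
{
    // Distance that the occluder nearest to the light wrote into the map.
    float stored = texture2DProj(lightDistanceMap, shadowCoord).r;
    // Lit if we are no farther from the light than the stored occluder
    // (plus a small bias to avoid shadow acne).
    float lit = (length(lightVec) <= stored + 0.01) ? 1.0 : 0.0;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```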
So to sum up, `GL_DEPTH_COMPONENT` is just a good compromise between ubyte textures (which lack the necessary precision, whereas `GL_DEPTH_COMPONENT` should have at least 16-bit precision) and float textures (which are not as fast, or not completely supported, on older hardware). But due to its fixed point format you won't get around a transformation into the [0,1] range (be it linear or projective). I'm not sure if floating point textures would be faster, as you only save a single division, but if you are on the newest hardware supporting linear (or even trilinear) filtering of float textures, along with one- or two-component float textures and render targets, it might be worth a try.

Of course, if you are using the fixed function pipeline, `GL_DEPTH_COMPONENT` is your only option, but given your question I assume you are using shaders.