- Draw the image to be distorted to an offscreen buffer
- Capture this to texture (see NPOT article)
- Clear the screen
- Enable a lens distortion shader
- Draw a textured quad
The first image shows the very flat scene before distortion. The second image shows the same scene with the distortion shader applied, which adds complexity and interest to the image.
I did not want to use a second texture to distort the image. Instead I used a simple trick: the image is drawn and the distortion is calculated from the original image alone. In fact the distortion is a function of colour in the original image.
Here is the vertex shader:
const char *vsh="varying vec4 p;\
void main(){\
gl_Position=ftransform();\
p=ftransform();\
}";

So p is set to the co-ordinate of the transformed vertex, which ranges between -1 and 1 in my screen space.
Here is the fragment shader:
const char *distortScreenfsh="\
uniform sampler2D s;\
varying vec4 p;\
void main(){\
vec4 t=p/2.0+0.5;\
float d=length(texture2D(s,t.xy).xyz);\
gl_FragColor=texture2D(s,t.xy+d*p.xy*0.3);\
}";

So t maps to between 0 and 1 and corresponds exactly to (0,0) in one corner of the screen and (1,1) in the opposite corner. This avoids having to define texture co-ordinates in the main OpenGL code and, in fact, enables one big trick: in the main code we can now use a gluDisk rather than define a real quad. This is far fewer bytes. If I had tried to use real texture co-ordinates in the shader I couldn't use a disk, as the texture co-ordinates would be wrong.
The real magic then is here:
float d=length(texture2D(s,t.xy).xyz);\

We do a texture lookup in the original image using t as the texture co-ordinates. We read the colour (rgb) values and take the length of this as a vector, so d is a measure of the intensity of colour in the original image.
gl_FragColor=texture2D(s,t.xy+d*p.xy*0.3);\
Then we finally do a texture lookup in the original image, with the co-ordinates adjusted by d and p. This is the distortion right here (+d*p.xy*0.3).
The shader is tiny and produces a much richer feel to the image.