A good solution hit me today, so here is the code. First, the main OpenGL loop that draws the polygon contains:
glRects(-1,-1,1,1);
This is one good size-coding advantage of OpenGL over D3D: a single polygon covers the screen in one call. We can't use texture coords or colours at the vertices, or we would have to define a quad or two triangles, so we are stuck with just the vertices.
Now the magical, tiny, Vertex shader which will give us a moving camera:
varying vec3 v,EP;
void main(){
    gl_Position=gl_Vertex;                        // pass the rect through untransformed
    v =vec3(gl_ModelViewMatrix*gl_Vertex);        // vertex position in world space
    EP=vec3(gl_ModelViewMatrix*vec4(0,0,-1,1));   // eye position, one unit behind the screen
}
The first line makes sure that the glRect still covers the screen in the pixel shader: it does not transform it but leaves it where it is. The second line, however, records the transformed vertex in world space as a vec3. Because v is a varying, it is interpolated across the rect, which in effect records the world-space position of every pixel on the screen.
The last line records the eye position. Arbitrarily, the eye position is hardcoded here to be one unit in Z away from the screen, giving a field of view of 90 degrees - quite normal for a camera. Note that the eye point needs a homogeneous value of 1 as the fourth coordinate so that the translation part of the matrix applies to it.
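The 90-degree figure follows from the geometry of the setup: the rect spans x from -1 to 1, so the half-width is 1, and the eye sits 1 unit behind it, so

tan(fov/2) = half-width / eye distance = 1/1, hence fov = 2*atan(1) = 90 degrees.

Moving the eye further back (say vec4(0,0,-2,1)) would narrow the field of view to 2*atan(1/2), about 53 degrees.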
We could have used vec4 for both lines above and the code would be shorter, but as most of the raytracing later will use vec3, you can choose to bite the bullet and make the code longer here and shorter in the fragment shader. Horses for courses.
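For completeness, a sketch of what the vec4 variant would look like - the trade-off being swizzles later on:

```glsl
varying vec4 v,EP;
void main(){
    gl_Position=gl_Vertex;
    v =gl_ModelViewMatrix*gl_Vertex;        // no vec3() wrappers needed here...
    EP=gl_ModelViewMatrix*vec4(0,0,-1,1);
}
// ...but the fragment shader then pays with v.xyz and EP.xyz everywhere.
```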
Now in the fragment shader it's easy to construct the ray to start tracing:
varying vec3 v,EP;
void main(){
    vec3 Ro=EP;     //set ray origin
    vec3 Rd=v-Ro;   //set ray direction (unnormalised)
    //...the raytracer proper goes here...
}
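To show the fragment above doing something visible, here is a minimal sketch that traces a single unit sphere at the origin with one hardcoded light direction - these scene choices are mine for illustration, not the YAST code:

```glsl
varying vec3 v,EP;
void main(){
    vec3 Ro=EP;                  // ray origin
    vec3 Rd=normalize(v-Ro);     // ray direction
    // unit sphere at origin: solve t*t + 2*b*t + c = 0
    float b=dot(Ro,Rd);
    float c=dot(Ro,Ro)-1.0;
    float d=b*b-c;               // discriminant
    if(d<0.0){ gl_FragColor=vec4(0.0,0.0,0.0,1.0); return; } // miss: black
    float t=-b-sqrt(d);          // nearest hit distance
    vec3 N=Ro+t*Rd;              // hit point on a unit sphere is its own normal
    float diff=max(dot(N,normalize(vec3(1.0,1.0,-1.0))),0.0); // simple diffuse
    gl_FragColor=vec4(vec3(diff),1.0);
}
```

Rotating or translating the modelview matrix from the host side now moves the camera around the sphere with no shader changes at all.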
It's that easy. Now you can move the camera in your raytracer using normal OpenGL commands. To finish, here is an image from YAST (Yet Another Sphere Tracer, as I'm calling my GLSL raytracer). I'm able to move around the spheres as I choose. As usual, click on the image to see a bigger version.
Out of interest: on an X1950 at 1024x768, with 30 spheres, one light, shadows and 3 levels of reflection, I'm getting around 50fps.