According to the GLSL spec version 1.20, ints can be implicitly converted to floats. ATI has always been strict about casting, while NVIDIA has been far more forgiving. Recently, however, ATI improved their compiler and implicit casting now works, almost. Anyone writing shaders should be very careful in one particular case: functions.
Since ATI's July driver update, the following are all valid on ATI hardware (verified on an X1900, with no #version directive declared):
float a = 1;
vec3 a = vec3(1);
float a = 1 / 128;
a = max(a, 0);  // warning: invalid on NVIDIA, do not use
a = pow(a, 8);  // warning: invalid on NVIDIA, do not use
if (a > 0) ...
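Put together, a minimal fragment shader exercising these relaxed forms might look like the sketch below. This is illustrative only (the uniform name brightness is made up); it compiles on the recent ATI drivers described above, but the marked lines will be rejected by NVIDIA's compiler:

uniform float brightness; // hypothetical uniform, for illustration only

void main()
{
    float a = brightness / 128; // int literal implicitly cast on ATI
    a = max(a, 0);              // accepted on ATI, rejected on NVIDIA
    a = pow(a, 8);              // accepted on ATI, rejected on NVIDIA
    if (a > 0)
        gl_FragColor = vec4(a, a, a, 1.);
    else
        gl_FragColor = vec4(0., 0., 0., 1.);
}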
Previous ATI drivers required, in most of the cases above, a "." in the number, e.g.:
a = max(a, 0.);
if (a > 0.) ...
However, even on ATI, one exception remains: user-defined functions.
float f(float x) { return x + 1; }  // valid
f(1);   // INVALID
f(1.);  // valid
Also:
float f(float x) { return 0; }   // INVALID
float f(float x) { return 0.; }  // valid
So, essentially, values passed into and returned from a user-defined function must still be explicitly typed. This is the only exception I have found so far on ATI: built-in functions such as max do implicitly cast their arguments. On NVIDIA, however, built-in functions must also be given explicitly typed arguments. So the only safe approach today is to make sure that every parameter passed into or out of *any* function, built-in or user-defined, is explicitly typed (use the "." form), and never to rely on implicit casting.
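As a hedged sketch of what that rule looks like in practice (the function name addOne and uniform name brightness are made up for illustration), the portable form of the earlier snippets would be:

uniform float brightness; // hypothetical uniform, as above

float addOne(float x) { return x + 1.; } // return value explicitly float

void main()
{
    float a = brightness / 128.; // float literal: no implicit cast needed
    a = max(a, 0.);              // explicitly typed: safe on both vendors
    a = pow(a, 8.);
    a = addOne(1.);              // explicitly typed argument
    gl_FragColor = vec4(a, a, a, 1.);
}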
The GLSL spec says that functions may be overloaded simply by changing the parameter types. In that light, both vendors are trying to implement the correct behaviour: with overloading, a compiler cannot safely cast an argument, because the cast might select the wrong overload. The wrinkle is that ATI has decided that since built-in functions are known, and the user has not provided an overloaded version themselves, the implicit cast can safely go ahead. This appears to be an open (but known) issue in the spec.
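To see why overloading makes implicit casting risky for user-defined functions, consider this illustrative pair of overloads (the name scale is made up):

float scale(float x) { return x * 2.; }
int   scale(int x)   { return x * 2; }

// scale(1)  matches the int overload exactly; an implicit cast to float
//           would silently have picked the other function.
// scale(1.) matches the float overload exactly.

Because the compiler cannot know which overload the programmer intended, requiring an exact type match is the conservative choice for user-defined functions.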