On Tue, Jul 26, 2005 at 05:45:53PM +0200, Sven Panne wrote:
| What is the rationale for "GLsizei" being an "int" on all platforms I know of
| and not an "unsigned int"?
I don't have an authoritative answer -- that decision was made before I
joined the OpenGL project and I don't recall discussing it with any of
the original team. But my guess is that there were two concerns:
(1) Arithmetic on unsigned values in C doesn't always yield intuitively
correct results (e.g. width1-width2 wraps around to a huge positive
value when width1 < width2, instead of going negative).
Compilers offer varying degrees of diagnosis when unsigned ints appear
to be misused. Making sizei a signed type eliminates many sources of
semantic error and some irrelevant diagnostics from the compilers. (At
the cost of reducing the range of sizei, of course, but for the places
sizei is used that's rarely a problem.)
(2) Some languages that support OpenGL bindings lack (lacked? not sure
about present versions of Fortran) unsigned types, so by sticking to
signed types as much as possible there would be fewer problems using
OpenGL in those languages.
BTW, this note mentions that the OpenGL spec indicates sizei is
unsigned. That's not exactly true, but I agree that the spec is obtuse
about it. What the spec says is that sizei is intended to be used for
nonnegative values; however, that doesn't mean that sizei is incapable
of representing negative values, and in fact several places in the spec
deal with that possibility explicitly. (For example, Viewport generates
INVALID_VALUE if its width or height arguments are negative.)