Phil Keslin (philk++at++cthulhu.engr.sgi.com)
Tue, 26 Jan 1999 14:58:33 -0800
OK. The information that Performer prints is self-computed (i.e., OpenGL
does not return that information to Performer). Therefore, any
indication concerning memory usage is just a guess on Performer's
part. Also, since perfly displays the textures, the download time is
influenced by the time needed to draw the texture to the framebuffer
after the download. I don't know what the O2 does on subsequent draws
when a texture object definition has failed, but you will draw something.
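As a rough illustration of why the draw gets folded into the measured
download time: the download isn't really done until you force the
pipeline to finish, so a naive timer just measures whatever you wait
on. A minimal sketch in plain C/OpenGL (seconds() is a stand-in for
whatever timer you have handy, nothing Performer-specific):

    #include <GL/gl.h>

    extern double seconds(void);   /* stand-in for your clock */

    /* Time a texture download in isolation. Without the glFinish(),
     * the call may return before the transfer completes, and the
     * cost shows up later, inside the next draw instead. */
    double time_download(GLuint tex, GLsizei w, GLsizei h,
                         const void *texels)
    {
        double t0;

        glBindTexture(GL_TEXTURE_2D, tex);
        t0 = seconds();
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, texels);
        glFinish();    /* time the work, not just the queueing */
        return seconds() - t0;
    }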
What piques my curiosity is how it worked in the past. The OpenGL
implementation on O2 will not allow texture sizes > 1K along either
axis. Any attempt to define such a texture will fail. Unless you are
falling back to some OpenGL software path, I don't see how it could
ever have worked.
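If you want to check ahead of time whether a given texture definition
will succeed on a particular implementation, here's a minimal sketch
using the standard GL_PROXY_TEXTURE_2D mechanism (plain OpenGL 1.1,
nothing O2-specific; assumes a current GL context):

    #include <GL/gl.h>
    #include <stdio.h>

    /* Returns 1 if the implementation can accept a width x height
     * RGBA texture at mipmap level 0, 0 otherwise. */
    int texture_fits(GLsizei width, GLsizei height)
    {
        GLint got_width = 0;

        /* A proxy texture is never actually stored; the
         * implementation just reports whether the real
         * definition would have succeeded. */
        glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA,
                     width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                                 GL_TEXTURE_WIDTH, &got_width);
        return got_width != 0;   /* width comes back 0 on failure */
    }

    int main(void)
    {
        GLint max_size = 0;

        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
        printf("GL_MAX_TEXTURE_SIZE = %d\n", (int)max_size);
        printf("2048x2048 fits: %d\n", texture_fits(2048, 2048));
        return 0;
    }

On an O2 you'd expect the 2048x2048 probe above to fail, per the 1K
limit described earlier.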
> Lastly, I'm pretty surprised by that 10-bit limitation. Is that on
> the new (R10k) O2s? That seems like a pretty serious drawback. I thought the
> whole point of the UMA was that as long as you had main memory you could keep
> loading in textures. This excerpt is taken from
> http://www.sgi.com/o2/graphics.html: "Unlike traditional graphics boards that
> set a limit on texture memory, the flexible Unified Memory Architecture allows
> an unlimited amount of memory to be allocated for textures." What am I missing
> here?
The 10-bit limitation does not affect the number of textures, just the
size of each texture. Unlike memory, the number of bits used to address
individual texels is usually fixed, regardless of the system
configuration.
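Put concretely (my arithmetic, not anything O2-specific beyond the
limit above): 10 address bits per axis gives 2^10 = 1024 texels per
axis, hence the 1K cap, while the number of such textures is bounded
only by (unified) memory. A tiny sketch:

    #include <stdio.h>

    /* 10 address bits per axis => 2^10 = 1024 texels max per axis.
     * The per-axis cap says nothing about how MANY such textures
     * you can keep around; that is bounded by memory instead. */
    #define AXIS_BITS 10
    #define MAX_AXIS  (1 << AXIS_BITS)    /* 1024 */

    int main(void)
    {
        long bytes = (long)MAX_AXIS * MAX_AXIS * 4;  /* RGBA8 */
        printf("largest texture: %dx%d = %ld bytes (4 MB)\n",
               MAX_AXIS, MAX_AXIS, bytes);
        return 0;
    }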
- Phil
-- Phil Keslin <philk++at++engr.sgi.com>