Angus Dorbie (dorbie++at++bitch.reading.sgi.com)
Fri, 23 Aug 1996 11:06:09 +0100
The bump mapping implementation is fairly straightforward once
it's been explained by someone who's smart enough to figure
out how to do it in the first place.
Here's my understanding of how it works.
o Firstly, as an introduction, imagine an image-processing 'emboss'
function of the kind some paint packages apply to give a stylistic
3D illumination of a photograph. This is achieved by subtracting
the original image's brightness data from itself after it's been
translated towards an imaginary light source.
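In modern terms, that emboss step can be sketched as a clamped per-pixel subtraction of a shifted copy of the height image. This is only an illustration of the idea; the function name, the bias, and the edge clamping are my own choices, not anything specific to RE2/IR hardware:

```python
def emboss(height, dx, dy, bias=128):
    """Emboss a grayscale height image.

    height: 2D list of 0-255 intensities; (dx, dy): one-texel shift
    towards the imaginary light. The bias re-centres the signed
    difference so flat areas come out mid-grey. (Illustrative names.)
    """
    h, w = len(height), len(height[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # clamp the shifted sample at the image border
            sx = min(max(x + dx, 0), w - 1)
            sy = min(max(y + dy, 0), h - 1)
            # difference of the two copies approximates the slope
            # facing the light; clamp into displayable range
            out[y][x] = min(max(height[y][x] - height[sy][sx] + bias, 0), 255)
    return out
```

A flat image embosses to uniform mid-grey (no slope anywhere), while a ramp facing away from the light comes out darker than the bias.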
o Now imagine you've peeled the skin off your bump-mapped object
and drawn it flat on the screen; the skin is an intensity map
representing bump height. I think Brian called this 'tangential space':
you're looking down on your surface and all the surface normals are
pointing out of the screen at you. The texture is a black-and-white
pattern which describes bump height.
o Remember the 'emboss' image-processing function. It is possible to
generate an illumination map across the surface by redrawing the
skin translated towards an imaginary light source. But remember that
the object's skin has been peeled flat, so the normals are now all the
same, and for every point on the surface the light is in a different
relative position.
o Since you have your bump map as a texture and you're drawing the
bump-mapped object's skin as a stretched texture map, you can move every
vertex of your skin geometry in the direction of the real light source,
by a distance that varies with the light source's elevation. The texture
interpolation then does the per-pixel work for you.
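The per-vertex shift in this step might look something like the sketch below, assuming the light direction has already been transformed into the surface's tangent space (z along the normal). The function name and the scale factor are illustrative, not part of the original technique's API:

```python
import math

def shifted_uv(u, v, light_dir, scale=1.0 / 256.0):
    """Offset a vertex's texture coordinates towards the light.

    light_dir: (lx, ly, lz) in tangent space, z along the surface normal.
    After normalising, the in-plane component (lx, ly) gives the emboss
    direction, and its length shrinks as the light rises towards the
    normal, so grazing light produces the strongest shift. (Sketch only;
    scale is an assumed one-texel step for a 256-texel map.)
    """
    lx, ly, lz = light_dir
    n = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly = lx / n, ly / n
    return u + lx * scale, v + ly * scale
```

With the light directly overhead the in-plane component vanishes and the second pass lines up exactly with the first, so the subtraction yields a flat (unlit-bump) result.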
o The second skin pass, with vertices perturbed by the lighting
calculations, requires a subtractive blend function (Brian's secret
weapon on RE2).
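Per pixel, drawing the shifted pass over the unshifted one with a subtractive blend amounts to something like the following sketch, clamping at the framebuffer's lower bound the way the hardware blend does:

```python
def subtract_blend(dst, src):
    """Framebuffer-style subtractive blend: clamp(dst - src, 0, 255).

    dst: the first (unshifted) skin pass already in the framebuffer;
    src: the incoming shifted pass. Per-pixel, per-channel subtraction
    with clamping at zero. (A sketch of the blend's arithmetic, not of
    any particular hardware path.)
    """
    return [[max(d - s, 0) for d, s in zip(drow, srow)]
            for drow, srow in zip(dst, src)]
```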
o You can then redraw the object using the lighting result you've just
produced in the framebuffer as a texture to modulate the shading of
the object's geometry.
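That final pass modulates the object's base shading by the lighting map just produced; per pixel the modulation is roughly a normalised multiply (illustrative integer arithmetic below):

```python
def modulate(base, light):
    """Modulate base shading by a 0-255 lighting map, as a textured
    modulate pass would: per-pixel multiply, renormalised so that a
    fully lit texel (255) leaves the base value unchanged. (Sketch.)
    """
    return [[b * l // 255 for b, l in zip(brow, lrow)]
            for brow, lrow in zip(base, light)]
```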
o You can also use TLUTs (texture lookup tables) to obtain specular
highlights from calculations similar to those above, but at this point
in the description my head started to hurt.
Rgds,
Angus.
On Aug 22, 7:25pm, Steve Baker wrote:
> Subject: Re: bump mapping using Infinite Reality
>
> True bump mapping requires that the IG's lighting calculation
> is performed at every pixel (or at the very least, at every texel).
>
> As far as I know, there is no hardware in the iR to perform that
> directly.
>
> If you are happy with a non-realtime (or maybe with luck a
> near-realtime) solution, then I suppose you could replace
> every texel with a teeny-tiny polygon in the DRAW process
> and do the appropriate thing with the surface normal. Possibly
> (if each map is only used on one polygon) you could recompute
> the lighting in software and reload the map every frame.
>
> Apart from that - I think you are stuck.
>
> Steve Baker 817-323-1361 (Vox-Lab)
> Hughes Training Inc. 817-695-8776 (Vox-Office/vMail)
> 2200 Arlington Downs Road 817-695-4028 (Fax)
> Arlington, Texas. TX 76005-6171 steve++at++mred.bgm.link.com (eMail)
>
> =======================================================================
> List Archives, FAQ, FTP: http://www.sgi.com/Technology/Performer/
> Submissions: info-performer++at++sgi.com
> Admin. requests: info-performer-request++at++sgi.com
>-- End of excerpt from Steve Baker
This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:53:24 PDT