Angus Dorbie (dorbie++at++bitch.reading.sgi.com)
Wed, 20 Mar 1996 09:38:59 +0000
On Mar 20, 9:26am, Mr Bok Shung Hwee wrote:
> Subject: Newbie Questions
>
>
> Hi
> This is my first time posting to the Performer Mailing List though
> I have been reading it for some time.
>
> I have several questions concerning a project on medical VR.
> I humbly hope that fellow Performers can help me.
>
> Firstly, we are intending to acquire an Onyx IR soon.
> We have been engaged in doing a detailed study of a slipped disc
> surgery problem since the beginning of the new year. MRI and CT images
> are to be used in conjunction with a package like ANALYZE. I believe
> contour data and voxel data can be extracted. And the slice images in 16-bit
> or 24-bit can also be extracted.
On IR, depending on the size of your dataset and the texture memory you have, you
may want to use 8-bit information for texturing the database (you said it was
monochrome). If you do this, you should use 2-component texturing with component
selection to fully utilise the texture memory, storing one half of the data in one
component and the other half in the other. If you use only a single 8-bit
component you waste 8 bits per texel, because the texel size is 16 bits.
I have been given more details on the OpenGL calls you need for this, but
Performer on IR may do this anyway...?
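The packing itself is just interleaving; a minimal sketch in C (the function and
its names are mine for illustration, not Performer or OpenGL API — the actual
component selection on IR is done through an SGI OpenGL extension):

```c
#include <stddef.h>

/* Interleave two 8-bit halves of a monochrome dataset into a
 * 2-component (luminance/alpha style) texel array, so each
 * 16-bit texel carries one voxel from each half and no bits
 * are wasted.  half_a/half_b hold n voxels each; out must
 * hold 2*n bytes. */
void pack_two_component(const unsigned char *half_a,
                        const unsigned char *half_b,
                        unsigned char *out, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        out[2 * i]     = half_a[i];  /* component 0: first half  */
        out[2 * i + 1] = half_b[i];  /* component 1: second half */
    }
}
```

The resulting array is what you would hand to glTexImage2D as a two-component
texture, selecting one component or the other at draw time.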
>
> I'd like to first attempt to use what I read about 3D textures
> to represent in Performer the 'work' volume of the intervertebral area
> in question. I hope this is a sensible and relatively quicker way to show
> some results. There is no detail whatsoever for simulation.
>
> Any direct answers on how this would be done inside Performer ?
I don't see any support for 3D texture coordinates in Performer geosets, so if
you want to do the 3D volume rendering through Performer you'll have to use
pfTexGen, which you probably want to do in any case.
I'm not sure Performer brings much to the party here: you're going to be
seriously pixel limited, and I don't think structuring the volume data across
some sort of hierarchy of polygons would help performance in any way, unless
you wanted to work out some sort of texture paging strategy for huge datasets.
Otherwise the arithmetic suggests it's just not worth trying to cull polygons
with this sort of data (unless you do something fancy like distorting the data,
which would require meshing and 3D coordinates rather than texgen); you should
be trying to cull pixels by optimising the shape of the stacked polygons you
write to the framebuffer.
The contour data can be meshed in geosets; Performer helps you with this.
The volume rendering should be done afterwards with depth testing enabled
to merge the datasets.
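To make the "stacked polygons" idea concrete, here is a rough C sketch that
builds the slice quads and their 3D texture coordinates explicitly (SliceVert
and build_slices are my names, not Performer API; with pfTexGen you would
derive s/t/r from vertex position instead of storing them):

```c
typedef struct {
    float x, y, z;   /* object-space position */
    float s, t, r;   /* 3D texture coordinate */
} SliceVert;

/* Fill 4 vertices per slice for n back-to-front quads spanning the
 * unit cube.  The r coordinate selects the slice depth in the 3D
 * texture; sampling at slice centres avoids the volume's edges.
 * verts must hold 4*n entries. */
void build_slices(SliceVert *verts, int n)
{
    static const float corner[4][2] = { {0,0}, {1,0}, {1,1}, {0,1} };
    for (int k = 0; k < n; ++k) {
        float z = (k + 0.5f) / (float)n;  /* slice depth */
        for (int j = 0; j < 4; ++j) {
            SliceVert *v = &verts[4 * k + j];
            v->x = corner[j][0];  v->y = corner[j][1];  v->z = z;
            v->s = v->x;          v->t = v->y;          v->r = z;
        }
    }
}
```

Drawing these quads back to front with depth testing enabled is what lets the
volume merge correctly with the meshed contour surfaces rendered beforehand.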
> What about introducing colour values to the images since MRI and CT data are
> 'colorless' ?
Texture look-up tables let you do this; there's no support in Performer that
I'm aware of, but you can use OpenGL.
>
> More importantly, has anybody tried representing inside the scene graph
> a hierarchical structure of voxels and at the same time, this structure also
> contains surface polygons of the anatomical objects such as bone and ligament
> and spinal cord ?
>
> I hope to receive some opinions.
>
> We are also looking at the use of the PHANToM haptic device for interfacing
> and providing finger tracking and eventually some force feedback.
> Anybody has any comments ?
>
> That is all for now.
>
> Thank you for attention.
>
> Regards .... Bok
> CAD/CAM Specialist
> National University of Singapore
>
>-- End of excerpt from Mr Bok Shung Hwee
-- Angus Dorbie, Silicon Graphics Ltd, UK dorbie++at++reading.sgi.com
This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:52:34 PDT