From: Rodney Hoinkes (rhoinkes++at++imm-studios.com)
Date: 01/15/2000 07:28:43
Many thanks for your summary, Don! I just saw this thread and of course found
it very interesting in light of my explorations. Maybe I can offer some other
insights and ask some further questions:
I have been doing something like this for a little over 2 years to let those poor
users with O2s do immersive 3-screen setups like their big IR siblings. I have
ported it to Linux and am looking at playing with it there as well.
I am using a similar technique of UDP packets, with the following process (a rough
sketch of the master's loop follows the list):
1. registration of new machines as they come online (so the master machine
knows how many machines there are to synch), and start of the Performer app
2. message sent from each machine at completion of drawing
3. master waits until all machines are done drawing, then sends a message to swap buffers
4. master sends updated viewing coordinates based on input on the master channel
repeat from 2.
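The master side of steps 2-4 boils down to something like the sketch below. This
is just an illustration with BSD sockets; the port number and the MSG_* names are
placeholders I made up here, not taken from our actual code.

/* master side of the synch loop (illustrative only) */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define SYNC_PORT 5400     /* placeholder port                    */
#define MSG_DONE  1        /* slave -> master: finished drawing   */
#define MSG_SWAP  2        /* master -> all: swap buffers now     */
#define MSG_VIEW  3        /* master -> all: new eyepoint XYZ/HPR */

typedef struct {
    int   type;
    float xyz[3];
    float hpr[3];
} SyncMsg;

void masterLoop(int sock, struct sockaddr_in *bcast, int nslaves)
{
    SyncMsg msg;
    int ndone;

    for (;;) {
        /* 2./3. collect a DONE from every registered machine */
        for (ndone = 0; ndone < nslaves; ) {
            if (recv(sock, &msg, sizeof(msg), 0) > 0 &&
                msg.type == MSG_DONE)
                ndone++;
        }

        /* 3. everyone has drawn: broadcast the swap */
        msg.type = MSG_SWAP;
        sendto(sock, &msg, sizeof(msg), 0,
               (struct sockaddr *)bcast, sizeof(*bcast));

        /* 4. broadcast the new view from the master's input */
        msg.type = MSG_VIEW;
        /* fill msg.xyz / msg.hpr from the master channel here */
        sendto(sock, &msg, sizeof(msg), 0,
               (struct sockaddr *)bcast, sizeof(*bcast));
    }
}

The socket is an ordinary SOCK_DGRAM socket with SO_BROADCAST set and bound to
SYNC_PORT; step 1 (registration) just counts the machines that announce themselves
before this loop starts.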
This works 'reasonably' well, but it introduces latency in input on the master
channel, which can be annoying depending on your framerate (below 10fps it gets
quite noticeable). Of course, framerate is especially tough on these lower-end
machines with complex datasets!
Another side benefit of these UDP broadcasting techniques is that you can plug in
other machines/processes that do other things based on the viewing data, such as
3d positional audio, controlling motion platforms, etc., if they listen for the
data themselves.
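Such a listener can be as dumb as the sketch below; it assumes the SyncMsg layout
and port from my sketch above, which again are made-up names:

/* passive listener, e.g. feeding a 3d audio renderer */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

typedef struct { int type; float xyz[3]; float hpr[3]; } SyncMsg;

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    struct sockaddr_in local;
    SyncMsg msg;

    /* several listeners on one host need to share the port */
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    memset(&local, 0, sizeof(local));
    local.sin_family      = AF_INET;
    local.sin_port        = htons(5400);   /* master's broadcast port */
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock, (struct sockaddr *)&local, sizeof(local));

    for (;;) {
        if (recv(sock, &msg, sizeof(msg), 0) > 0 && msg.type == 3) {
            /* MSG_VIEW: hand the eyepoint to whatever cares */
            printf("eye %.1f %.1f %.1f  hpr %.1f %.1f %.1f\n",
                   msg.xyz[0], msg.xyz[1], msg.xyz[2],
                   msg.hpr[0], msg.hpr[1], msg.hpr[2]);
        }
    }
    return 0;
}

The master never needs to know these listeners exist, which is the nice part.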
We also throw in interactive control over the elements in the Performer scenegraph
(display control, simple transforms), switching to predefined eyepoints and
animations, and synchronized images (flipping back and forth from 3d to 2d images
for context, additional material, etc.). Of course this stuff needs network support
as well once you go beyond a single Performer machine/app.
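We keep those control messages in the same spirit as the view packets; roughly a
small tagged packet like the following (a made-up layout for illustration, not our
actual wire format):

/* hypothetical scenegraph control packet */
typedef enum {
    OP_NODE_ON,     /* display control: switch a node on   */
    OP_NODE_OFF,    /* display control: switch a node off  */
    OP_SET_XFORM,   /* simple transform on a pfDCS         */
    OP_GOTO_EYE,    /* jump to a predefined eyepoint       */
    OP_PLAY_ANIM,   /* start a predefined animation        */
    OP_SHOW_IMAGE   /* flip from the 3d view to a 2d image */
} OpCode;

typedef struct {
    OpCode op;
    char   name[64];   /* node / eyepoint / animation / image name */
    float  mat[16];    /* 4x4 matrix, used by OP_SET_XFORM         */
} ControlMsg;

Each machine applies the opcode to its local copy of the scenegraph, so only this
small packet crosses the network.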
What I am VERY interested in is frame-locking strategies! In theory the above (and
Don's below) techniques can get you to within a frame of synchronization, BUT you
don't know where each machine sits with its vertical retrace timing, so drawing
may be out by up to a full redraw at whatever Hz your app is achieving.
I am looking into how to improve this. In theory, I believe the internal VGA
expansion connector on most PC cards supports an external sync signal which
'could' be wired into multiple PCs. Has anyone tried something like this, or any
other approach that can get the sync more in line?
_Dr._Rodney_Hoinkes_____________________
Chief Technology Officer, Immersion Studios Inc.
rhoinkes++at++imm-studios.com, www.imm-studios.com
Don Burns wrote:
> Well, since my name's been mentioned...
>
> Synchronization of multiple views amongst loosely coupled systems is really a
> real-time issue. The better the method, the better the quality.
>
> The demo mentioned was shown at IITSEC this year and employed the simplest
> method. It was simply a one-way update of the eyepoint, broadcast to anyone on
> the network who cared to listen. The UDP packets (simply no need for TCP: too
> slow, too much latency, and unnecessary contention for network bandwidth) were
> sent at a constant rate, which is faster than the graphics was updating.
> During the APP phase (single-process Performer on Linux), on each of the
> receiving systems, the queue is emptied and the final value is used for XYZ,
> HPR. This provides a reasonable (should we say "good enough" in some cases?)
> method of synchronizing the eyepoint to within about a frame or two of each
> other.
>
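To make Don's drain-the-queue step concrete, the APP-phase update could look
roughly like this. pfChanView() and pfSetVec3() are real Performer calls; the
socket and SyncMsg bits are placeholders of mine from the sketches above:

/* app-phase callback sketch: keep only the newest eyepoint packet */
#include <Performer/pf.h>
#include <sys/socket.h>

typedef struct { int type; float xyz[3]; float hpr[3]; } SyncMsg;

void drainEyepoint(pfChannel *chan, int sock)   /* sock is non-blocking */
{
    SyncMsg msg, last;
    int got = 0;

    while (recv(sock, &msg, sizeof(msg), 0) > 0) {   /* empty the queue */
        last = msg;
        got  = 1;
    }
    if (got) {
        pfVec3 xyz, hpr;
        pfSetVec3(xyz, last.xyz[0], last.xyz[1], last.xyz[2]);
        pfSetVec3(hpr, last.hpr[0], last.hpr[1], last.hpr[2]);
        pfChanView(chan, xyz, hpr);   /* the final value wins */
    }
}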
> For applications that don't use contiguous viewing channels that will be
> displayed side by side, this can be fine. It would be a stretch to call this
> truly synchronized, however. If displayed side by side, the disparity in frame
> synchronization is actually quite apparent.
>
> The next form of synchronization is to use frame-lock. An external
> synchronization source can be used to keep vertical retrace on multiple
> systems in step. This is sometimes mistakenly referred to as genlock. (Genlock
> synchs the entire signal: vertical and horizontal.) This can be used in
> conjunction with the network broadcasting method to attempt to keep all
> eyepoints refreshing the screen at the same time. Some two-way traffic may be
> necessary to ensure that all channels have the same eyepoint for the current
> frame. This also depends on the graphics hardware's capability to swap buffers
> on vertical retrace.
>
> It seems that some of the PC hardware has become caught up in chest-thumping
> about how fast it can run Quake, measured purely on a frames-per-second basis.
> To exceed 60Hz, the block on vertical retrace is not used, and thus the
> graphics may sometimes swap buffers in the middle of a vertical retrace. This
> can cause anomalies such as "tearing". Unless you can block on vertical
> retrace, there is little frame-lock can do for you.
>
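One way to block on retrace on machines that expose it is the GLX_SGI_video_sync
extension. The extension is real, but whether your driver and glx.h actually
provide these entry points is another matter, so treat this as a sketch:

/* wait for the next vertical retrace, then swap right behind it */
#include <GL/glx.h>

void swapOnRetrace(Display *dpy, GLXDrawable win)
{
    unsigned int count;

    glXGetVideoSyncSGI(&count);
    /* block until the retrace counter's parity flips,
     * i.e. until the next vertical retrace */
    glXWaitVideoSyncSGI(2, (count + 1) % 2, &count);
    glXSwapBuffers(dpy, win);
}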
> A swap-ready signal can also be used in conjunction with either a genlock or
> frame-lock signal. This provides one more step toward ensuring that the same
> frame is displayed on all channels at any given moment.
>
> -don
>
> PS. Not sure this all has a true bearing on the subject (clustering), but it
> is interesting for visual simulation.
> -----------------------------------------------------------------------
> List Archives, FAQ, FTP: http://www.sgi.com/software/performer/
> Submissions: info-performer++at++sgi.com
> Admin. requests: info-performer-request++at++sgi.com