Re: pfMultiPipe + pfMultiProcess


John Rohlf (jrohlf@tubes)
Tue, 21 Jun 94 12:03:37 PDT


>
> Hi
>
> A problem :
>
> Let's say you have a 4-CPU, dual-headed Onyx machine, and you want to run a
> dual-pipe Performer application. The following pfMultiProcess modes give you
> the following results:
>
> 1. PFMP_APPCULLDRAW - runs both channels on one CPU (bad)
> 2. PFMP_DEFAULT, PFMP_APP_CULL_DRAW - runs the system with 5 processes (bad -
> because you have only 4 CPUs)
> 3. PFMP_APPCULL_DRAW - not permitted, because Performer won't allow both culls
> in the same process
>
> None of these gives good results, because the multi-processing mode is the
> same for all pipes. If the multi-processing mode were per pipe, I could run
> one pipe as PFMP_APPCULL_DRAW and the other as PFMP_APP_CULL_DRAW. Then I
> would fully occupy my 4 CPUs.

        This makes sense if the culling loads for your 2 pipes
are not balanced. The pipe with PFMP_APP_CULL_DRAW will have an
entire CPU dedicated to culling, while the PFMP_APPCULL_DRAW
pipe will have to share a CPU between APP and CULL.
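
        For concreteness, the mode is global today because it is set
once, before pfConfig(), and applied to every pipe; that is where the
5-process count above comes from. A minimal sketch of that startup
sequence, using the pipe count and mode from the mail above (channels,
scene, and the sim loop are elided):

        #include <Performer/pf.h>

        int
        main(void)
        {
            pfInit();
            pfMultipipe(2);                     /* two graphics pipes */
            pfMultiprocess(PFMP_APP_CULL_DRAW); /* one mode for ALL pipes:
                                                 * 1 APP + 2 CULL + 2 DRAW
                                                 * = 5 processes on 4 CPUs */
            pfConfig();                         /* forks the processes */
            /* ... channels, scene, and the sim loop would follow ... */
            return 0;
        }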

        In general I am not in favor of asymmetric multiprocessing
pipelines. They complicate Performer's API, semantics, and internal
code. The salient issue here is how to attain better load balancing.
Performer's multiprocessing granularity is coarse, which hampers
load balancing, but I believe the dividends of vastly reduced
synchronization overhead, simpler data management, and a simplified
programming model greatly outweigh any load-balancing problems
arising from coarse-grained parallelism.

        In addition, you can, in theory, duplicate the above behavior
by using the IRIX 5 sysmp/schedctl calls to restrict processes to CPUs.
For the above example you could MP_RESTRICT the 2 CULLs to the
same CPU and use priorities or your own semaphores in the cull callbacks
to serialize the 2 CULL processes and avoid the wasted effort
of context switching; a sketch follows below.
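
        A minimal sketch of that workaround, assuming the binding is
done from inside each CULL process (e.g., at the top of the cull
callback) so that the calling process is the one affected. CULL_CPU
and the NDPRI value are assumptions; MP_MUSTRUN is the per-process
half of the MP_RESTRICT scheme mentioned above:

        #include <sys/sysmp.h>
        #include <sys/schedctl.h>
        #include <stdio.h>

        #define CULL_CPU 3   /* assumed: park both CULLs on the 4th CPU */

        static void
        bindCull(int ndpri)
        {
            /* bind the calling process to CULL_CPU */
            if (sysmp(MP_MUSTRUN, CULL_CPU) < 0)
                perror("sysmp(MP_MUSTRUN)");

            /* non-degrading priority; give the 2 CULLs different
             * values so the higher-priority one runs to completion
             * first instead of time-slicing against the other */
            if (schedctl(NDPRI, 0, ndpri) < 0)
                perror("schedctl(NDPRI)");
        }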

> I know that 4 CPUs is not the best configuration for running a dual-pipe
> Performer application, but I think that allowing a separate multi-processing
> mode per pipe would help in similar cases.
>



