Marc Erich Latoschik (marcl++at++TechFak.Uni-Bielefeld.DE)
Wed, 16 Dec 1998 11:29:42 +0100
Phil Keslin wrote:
>
> There are no plans to move Performer to a pthreads base. To do so would
> break just about every application out there. Pthreads assumes implicit
> sharing and contains no support for the explicit shared memory model
> adopted by Performer. We'd still have to use arenas, but as I already
> stated, this doesn't work with pthreads.
>
> - Phil
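(For readers following along: the explicit model Phil means is IRIX's arena API from <ulocks.h>, where shared memory has to be allocated by hand from a named arena rather than being shared implicitly. A rough sketch from memory, so the exact names and flags may be slightly off:)

  #include <ulocks.h>
  #include <stdio.h>

  int main(void)
  {
      /* create or attach a shared arena backed by a file */
      usptr_t *arena = usinit("/tmp/myarena");
      if (arena == NULL) { perror("usinit"); return 1; }

      /* sharing is explicit: memory must be allocated FROM the arena */
      int *counter = (int *) usmalloc(sizeof(int), arena);
      ulock_t lock = usnewlock(arena);

      ussetlock(lock);
      (*counter)++;
      usunsetlock(lock);

      usfree(counter, arena);
      return 0;
  }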
I cannot quite accept that point. Mixing implicit and explicit sharing (in the sense of mapping the same memory space) does not make the two incompatible. Writing IPC and MP programs this way has worked out pretty well for a long time now, e.g. across different UNIX standards like SVR4 or POSIX (pthreads and shmget). Most of the vendor-specific APIs can be wrapped. Of course, this is what you guys did with your first pthread implementation that works with Performer.
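Just to illustrate what I mean by mixing the two models, here is a minimal sketch (my own toy example, plain POSIX plus SysV, nothing Performer-specific): the pthreads share everything implicitly, while the counter lives in an explicitly created SysV segment that another process could attach as well.

  #include <pthread.h>
  #include <stdio.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  static int *shared_counter;           /* lives in the SysV segment */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg)
  {
      pthread_mutex_lock(&lock);
      (*shared_counter)++;              /* implicit threads, explicit segment */
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t t[4];
      int i;

      /* explicit sharing: a segment other processes could attach too */
      int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
      if (shmid == -1) { perror("shmget"); return 1; }
      shared_counter = (int *) shmat(shmid, NULL, 0);
      *shared_counter = 0;

      for (i = 0; i < 4; i++)
          pthread_create(&t[i], NULL, worker, NULL);
      for (i = 0; i < 4; i++)
          pthread_join(t[i], NULL);

      printf("counter = %d\n", *shared_counter);
      shmdt(shared_counter);
      shmctl(shmid, IPC_RMID, NULL);    /* remove the segment */
      return 0;
  }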
Brian Corrie wrote:
> This conversation is very interesting to some of us, so please do keep it
> public. It seems to me that moving Performer to pthreads is not necessarily
> the solution that we want anyway (as Phil states, it would require many
> fundamental changes in many things). What would be REALLY nice is to have
> versions of pthreads/sproc/multiprocessing that do not break each other. Does
> SGI have any plans on fixing this problem???? We are in the same situation as
> Jason - we are using the CAVE libraries (which are Performer based), a package
> which is sproc based, and a package which is pthread based. Pthreads is
> arguably the "way ahead" for multi-processing (especially portable
> multi-processing), but doesn't work with either sproc or shared arenas. I see
> that as a bit of a fundamental problem, no? Is there a technical problem why
> they can not co-exist???
>
> Any comments on whether SGI plans on fixing this would be GREATLY
> appreciated... 8-)
I'll try to keep it open; it was just a mistake that I replied to the sender only :). OK, making pthreads and sproc work together would do it (as it was in former IRIX versions). But the question is: why bother with different APIs that provide the same functionality, never quite sure which one is better, faster, or just smoother to use in a given case? Not to mention the work of learning both.
OK, I think there was a time when MP had to be implemented, and the SGI guys did it their way. It shows that most of what they did back then is now implemented in a kind of standard, late but finally. But ignoring the movement towards a standard in MP programming gives me the same headaches I had when writing M$ applications that were based on their "standards" and could not be ported to other systems.
Don't get me wrong, I personally don't think SGI has that kind of mentality of clinging to proprietary APIs (remember OpenGL and OpenInventor), but this MP thing has a touch of it.
Phil Keslin wrote:
> "Fixing" this would require a fundamental change in the way at least the
> C library works for multi-threaded applications and the way
> multi-threaded applications are scheduled by the kernel. This
> incompatibility won't be fixed.
Hmm, I might not quite understand what you mean. Of course, library functions in a pthread application should be MP-safe. If the IRIX MP model only uses explicit sharing, this is of course no problem. But when you wrapped pthreads around sproc, didn't you already have to make your C library thread-safe?
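By "thread-safe C library" I mean things like this (a toy illustration, not SGI's actual code): strtok() keeps hidden static state and breaks as soon as two threads use it, while the reentrant strtok_r() carries its state explicitly.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char line[] = "one,two,three";
      char *save;                              /* per-caller state, not hidden */
      char *tok = strtok_r(line, ",", &save);  /* safe to call from threads */
      while (tok != NULL) {
          printf("%s\n", tok);
          tok = strtok_r(NULL, ",", &save);
      }
      return 0;
  }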
Of course scheduling is a major issue. In threaded apps we can distinguish between global and local threads (scheduled by the kernel or by the thread library, respectively). But I think you had to solve such problems already for the MP directives IRIX offers???
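For what it's worth, standard pthreads already expresses the global/local distinction via the contention-scope attribute; a small sketch (again just an illustration of the standard API, not Performer internals):

  #include <pthread.h>

  static void *fn(void *arg) { return arg; }

  int main(void)
  {
      pthread_t tid;
      pthread_attr_t attr;

      pthread_attr_init(&attr);
      /* "global": scheduled by the kernel, competes system-wide,
       * roughly what an sproc gives you */
      pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
      /* "local" alternative, scheduled inside the thread library:
       * pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS); */

      pthread_create(&tid, &attr, fn, NULL);
      pthread_join(tid, NULL);
      pthread_attr_destroy(&attr);
      return 0;
  }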
I need a light here ;(
--bye Marc
********************************************************************************
Marc Erich Latoschik, AG-WBS, Technische Fakultaet der Universitaet Bielefeld
Universitaetsstrasse 25, Postfach 100 131, 33501 Bielefeld, Raum M4-122
Fon: (0521) 106 2919   Fax: (0521) 106 2962
marcl++at++techfak.uni-bielefeld.de
http://www.TechFak.Uni-Bielefeld.DE/techfak/persons/marcl/
********************************************************************************