Orca is a simple, portable, imperative parallel programming language based on a form of distributed shared memory called shared data-objects. This paper discusses the suitability of Orca for parallel symbolic programming. Orca was not designed specifically for this application area, and it lacks several features supported by languages designed specifically for parallel symbolic computing, such as futures, automatic load balancing, and automatic garbage collection. On the other hand, Orca provides high-level support for sharing global state, and its implementation automatically distributes shared data (stored in shared objects). In addition, Orca programs are portable, because the language abstracts from the underlying hardware and operating system. Efficient Orca implementations exist on a variety of parallel systems. We first compare Orca with two other models, imperative message-passing systems and functional languages, by examining several key issues in parallel programming and studying how each of the three paradigms addresses them. Next, we describe our experiences with writing parallel symbolic applications in Orca. We study the performance of each application on two platforms: the SP-2 and a collection of SPARC processors connected by Ethernet. This work indicates that Orca is quite suitable for writing efficient and portable programs for symbolic applications.
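To make the shared data-object model concrete: an Orca shared object is an instance of an abstract data type whose state is accessible only through operations, each of which executes indivisibly; the runtime replicates or migrates the object's state across processors. As a rough analogy only (this is hypothetical Python, not Orca syntax, and it models indivisibility with a lock rather than a distributed runtime):

```python
import threading

class SharedIntObject:
    """Rough analogy of an Orca shared data-object: the state is
    private, and each operation executes indivisibly (here enforced
    with a lock; Orca's runtime guarantees this across processors)."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def inc(self):
        # An "operation" on the object: runs atomically on the state.
        with self._lock:
            self._value += 1

    def read(self):
        with self._lock:
            return self._value

# Several workers share the object; only operations touch its state.
counter = SharedIntObject()
workers = [
    threading.Thread(target=lambda: [counter.inc() for _ in range(1000)])
    for _ in range(4)
]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter.read())  # 4000: no increments are lost
```

In Orca the same pattern is written as an object type with operation definitions, and the distribution of the object's data among processors is handled automatically, which is the "high-level support for sharing global state" referred to above.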