The author previously proposed a method of analyzing average instruction execution time to estimate CISC performance in a real-time system (RTS) [1]. However, [1] did not address how to analyze execution time when the CPU has a cache memory and a RISC architecture. This paper presents a simulation method for solving that problem and clarifies the relative processing power of the two architectures. The new method converts actual CISC trace data from an RTS into RISC access-state data and uses that data as input to simulations of cache operation and related behavior, yielding the processing-performance ratio between CISC and RISC in an RTS. The results show that the one-level write-through cache strategy (MIPS R3000) suffers from continuous writes, that the two-level write-back cache strategy (MIPS R4000) depends strongly on the speed of the secondary cache, and that the overall RTS characteristics differ from those of application programs in a time-sharing system (TSS). These techniques go beyond simply allowing a comparison of RISC and CISC performance: they also make it possible to identify the factors underlying the performance characteristics of cache-equipped systems on the basis of an overall software model of an actual RTS.
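
To illustrate the kind of trace-driven cache simulation the method relies on, the following C sketch models a direct-mapped, write-allocate, write-through cache driven by a short access trace and accumulates cycle costs; it is a minimal illustration only, and the cache geometry, latencies, trace format, and names here are assumptions, not parameters or code from the paper.

/* Minimal sketch (not the paper's simulator): a trace-driven model of a
 * direct-mapped, write-allocate, write-through cache, showing how a burst
 * of consecutive writes penalizes a write-through design.  All sizes and
 * latencies below are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

#define LINES        256   /* assumed: 256 lines x 16 bytes = 4 KB cache */
#define LINE_BYTES    16
#define HIT_CYCLES     1   /* assumed read-hit latency */
#define MISS_CYCLES   10   /* assumed line-refill penalty */
#define WRITE_CYCLES   4   /* assumed write-through (memory write) penalty */

typedef struct { uint32_t addr; char op; } Access;   /* op: 'R' or 'W' */

static uint32_t tag[LINES];
static int      valid[LINES];

/* Simulate one memory access; return the cycles it costs. */
static int access_cache(Access a)
{
    uint32_t line = (a.addr / LINE_BYTES) % LINES;
    uint32_t t    = a.addr / (LINE_BYTES * LINES);
    int hit = valid[line] && tag[line] == t;

    if (!hit) { valid[line] = 1; tag[line] = t; }   /* allocate on miss */

    if (a.op == 'W')   /* write-through: every write also goes to memory */
        return WRITE_CYCLES + (hit ? 0 : MISS_CYCLES);
    return hit ? HIT_CYCLES : MISS_CYCLES;
}

int main(void)
{
    /* Toy access-state trace: a burst of consecutive writes (e.g. a
     * context save in an RTS) followed by reads.  In the paper's method
     * the trace would come from converted CISC trace data. */
    Access trace[] = {
        {0x1000,'W'}, {0x1010,'W'}, {0x1020,'W'}, {0x1030,'W'},
        {0x1000,'R'}, {0x1010,'R'}, {0x2000,'R'}, {0x1020,'R'},
    };
    long cycles = 0;
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
        cycles += access_cache(trace[i]);
    printf("total cycles: %ld for %zu accesses\n",
           cycles, sizeof trace / sizeof trace[0]);
    return 0;
}

Running the sketch shows the write burst dominating the cycle count even when the lines are already resident, which is the qualitative effect attributed above to the one-level write-through strategy; a write-back variant would buffer those writes in the cache instead.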