Interconnect-driven optimization is an increasingly important step in high-performance design. Algorithms for buffer insertion have been successfully utilized to reduce delay in global interconnect paths; however, existing techniques optimize only delay and timing slack. With the continually increasing ratio of coupling capacitance to total capacitance and the use of aggressive dynamic logic circuit families, noise analysis and avoidance are becoming a major design bottleneck. Hence, timing and noise must be optimized simultaneously to achieve maximum performance. This paper presents comprehensive buffer insertion techniques for noise and delay optimization. Three algorithms are presented: the first performs noise avoidance for single-sink trees, the second performs noise avoidance for multiple-sink trees, and the last performs simultaneous noise and delay optimization. We prove the optimality of each algorithm (under various assumptions) and present other theoretical results as well. We ran experiments on a high-performance microprocessor design and show that our approach fixes all noise violations; this result was independently verified by a detailed, simulation-based noise analysis tool. Further, we show that optimizing delay alone cannot fix all of the noise violations, and that the performance penalty of optimizing both delay and noise, as opposed to delay alone, is less than 2%.
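To make the flavor of these algorithms concrete, the sketch below shows a van Ginneken-style bottom-up candidate propagation for buffer insertion on a single-sink line, extended with a simple per-receiver noise budget so that candidates violating the budget are pruned alongside delay-dominated ones. This is a minimal illustration, not the paper's actual formulation: the Elmore delay model is standard, but the lumped noise-per-segment metric and all parameter values (R_WIRE, NOISE_BUDGET, etc.) are assumptions made for this example.

```python
# Illustrative van Ginneken-style buffer insertion on a single-sink line with
# a noise budget. All parameter values and the per-segment noise model are
# hypothetical, chosen only to make the sketch runnable.
from dataclasses import dataclass

R_WIRE = 0.5          # wire resistance per segment (assumed)
C_WIRE = 1.0          # wire capacitance per segment (assumed)
R_BUF = 1.0           # buffer output resistance (assumed)
C_BUF = 0.5           # buffer input capacitance (assumed)
T_BUF = 2.0           # intrinsic buffer delay (assumed)
NOISE_PER_SEG = 0.8   # coupled-noise contribution per unbuffered segment (assumed)
NOISE_BUDGET = 2.0    # noise budget at every receiver input; a buffer resets noise

@dataclass(frozen=True)
class Candidate:
    cap: float    # downstream capacitance seen at this node
    req: float    # required arrival time at this node (larger is better)
    noise: float  # noise accumulated on the current unbuffered stretch

def prune(cands):
    """Keep only non-dominated candidates: drop any candidate that another
    one beats or matches on capacitance, noise, and required time at once."""
    kept = []
    for c in sorted(cands, key=lambda c: (c.cap, c.noise, -c.req)):
        if not any(k.cap <= c.cap and k.noise <= c.noise and k.req >= c.req
                   for k in kept):
            kept.append(c)
    return kept

def insert_buffers(num_segments, sink_cap, sink_req):
    """Propagate candidates bottom-up from the sink toward the source,
    optionally inserting a buffer after each wire segment."""
    cands = [Candidate(cap=sink_cap, req=sink_req, noise=0.0)]
    for _ in range(num_segments):
        nxt = []
        for c in cands:
            # Extend the wire by one segment (Elmore delay of the segment).
            delay = R_WIRE * (C_WIRE / 2 + c.cap)
            grown = Candidate(c.cap + C_WIRE, c.req - delay,
                              c.noise + NOISE_PER_SEG)
            # Noise avoidance: discard any candidate over budget, since the
            # noise constraint must hold at every downstream receiver.
            if grown.noise <= NOISE_BUDGET:
                nxt.append(grown)
                # Alternatively, insert a buffer here: it isolates downstream
                # capacitance and resets the accumulated noise to zero.
                buf_delay = T_BUF + R_BUF * grown.cap
                nxt.append(Candidate(C_BUF, grown.req - buf_delay, 0.0))
        cands = prune(nxt)
    # The best solution at the source is the one with maximum required time.
    return max(cands, key=lambda c: c.req)

best = insert_buffers(num_segments=8, sink_cap=1.0, sink_req=50.0)
print(f"source: cap={best.cap:.1f}, required time={best.req:.1f}, "
      f"noise={best.noise:.1f}")
```

The pruning rule is what keeps the candidate set small: adding noise as a third dominance dimension preserves the dynamic-programming structure of delay-only buffer insertion while rejecting solutions that meet timing but violate the noise budget.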