Efficient optimization by modifying the objective function: Applications to timing-driven VLSI layout

Citation
R. Baldick et al., Efficient optimization by modifying the objective function: Applications to timing-driven VLSI layout, IEEE CIRC-I, 48(8), 2001, pp. 947-956
Citations number
35
Subject Categories
Electrical & Electronics Engineering
Journal title
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-FUNDAMENTAL THEORY AND APPLICATIONS
ISSN journal
1057-7122
Volume
48
Issue
8
Year of publication
2001
Pages
947 - 956
Database
ISI
SICI code
1057-7122(200108)48:8<947:EOBMTO>2.0.ZU;2-T
Abstract
When minimizing a given objective function is challenging because of, for example, combinatorial complexity or points of nondifferentiability, one can apply more efficient and easier-to-implement algorithms to modified versions of the function. In the ideal case, one can employ known algorithms for the modified function that have a thorough theoretical and empirical record and for which public implementations are available. The main requirement here is that minimizers of the objective function not change much through the modification, i.e., the modification must have a bounded effect on the quality of the solution. Review of classic and recent placement algorithms suggests a dichotomy between approaches that either: (a) heuristically minimize a potentially irrelevant objective function (e.g., VLSI placement with quadratic wirelength) motivated by the simplicity and speed of a standard minimization algorithm; or (b) devise elaborate problem-specific minimization heuristics for more relevant objective functions (e.g., VLSI placement with linear wirelength). Smoothness and convexity of the objective functions typically enable efficient minimization. If either characteristic is not present in the objective function, one can modify and/or restrict the objective to special values of parameters to provide the missing properties. After the minimizers of the modified function are found, they can be further improved with respect to the original function by fast local search using only function evaluations. Thus, it is the modification step that deserves most attention. In this paper, we approximate convex nonsmooth continuous functions by convex differentiable functions which are parameterized by a scalar beta > 0 and have convenient limit behavior as beta → 0. This allows the use of Newton-type algorithms for minimization and, for standard numerical methods, translates into a tradeoff between solution quality and speed. We prove that our methods apply to arbitrary multivariate convex piecewise-linear functions that are widely used in synthesis and analysis of electrical networks [19], [27]. The utility of our approximations is particularly demonstrated for wirelength and nonlinear delay estimations used by analytical placers for VLSI layout, where they lead to more "solvable" problems than those resulting from earlier comparable approaches [29]. For a particular delay estimate, we show that, while convexity is not straightforward to prove, it holds for a certain range of parameters, which, luckily, are representative of "real-world" technologies.
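
Illustrative sketch. The following Python snippet shows one common way to realize the kind of beta-smoothing the abstract describes: the nonsmooth absolute value in a linear-wirelength objective is replaced by sqrt(d^2 + beta), which is convex and differentiable for beta > 0 and approaches |d| as beta → 0. The toy net list, the sqrt-based smoothing, and the use of SciPy's BFGS minimizer are assumptions made for illustration, not the paper's actual formulation or implementation.

# Sketch: smoothing a nonsmooth L1-style wirelength objective with beta > 0
# so that Newton-type (quasi-Newton) methods apply. Assumptions (not from
# the paper): sqrt(d^2 + beta) smoothing of |d|, the toy net data, and BFGS.
import numpy as np
from scipy.optimize import minimize

# Toy placement: 3 movable cells on a line; nets connect cell pairs,
# pads pin a cell to a fixed coordinate.
nets = [(0, 1), (1, 2)]            # cell-to-cell connections
pads = [(0, -1.0), (2, 4.0)]       # (cell index, fixed pad coordinate)

def wirelength(x, beta):
    # Smoothed linear wirelength: sum of sqrt(d^2 + beta) over connections.
    # As beta -> 0 this approaches the exact sum of |d| terms.
    total = 0.0
    for i, j in nets:
        total += np.sqrt((x[i] - x[j]) ** 2 + beta)
    for i, p in pads:
        total += np.sqrt((x[i] - p) ** 2 + beta)
    return total

x0 = np.zeros(3)
for beta in (1.0, 1e-2, 1e-4):     # smaller beta: closer to |.|, harder to solve
    res = minimize(wirelength, x0, args=(beta,), method='BFGS')
    print(f"beta={beta:g}  x={np.round(res.x, 3)}  f={res.fun:.4f}")
    x0 = res.x                     # warm-start the next, less-smoothed problem

Warm-starting each run from the previous solution as beta decreases reflects the quality/speed tradeoff the abstract mentions; a final local search on the unsmoothed objective, using only function evaluations, would correspond to the refinement step described there.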