Traditionally, semantic information in computational lexicons is limited to notions such as selectional restrictions or domain-specific constraints, encoded in a "static" representation. This information is typically used in natural language processing by a simple knowledge manipulation mechanism limited to the ability to match valences of structurally related words. The most advanced device for imposing structure on lexical information is inheritance, applied both at the object level (lexical items) and the meta level (lexical concepts) of the lexicon. In this paper we argue that this is an impoverished view of a computational lexicon and that, for all its advantages, simple inheritance lacks the descriptive power necessary for characterizing fine-grained distinctions in the lexical semantics of words. We describe a theory of lexical semantics that makes use of a knowledge representation framework offering a richer, more expressive vocabulary for lexical information. In particular, by performing specialized inference over the ways in which aspects of the knowledge structures of words in context can be composed, mutually compatible and contextually relevant lexical components of words and phrases are highlighted. We discuss the relevance of this view of the lexicon both as an explanatory device accounting for language creativity and as a mechanism underlying the implementation of open-ended natural language processing systems. In particular, we demonstrate how lexical ambiguity resolution, now an integral part of the same procedure that creates the semantic interpretation of a sentence, becomes a process not of selecting from a predetermined set of senses but of highlighting certain lexical properties brought forth by, and relevant to, the current context.