We compare four nonlinear methods on their ability to learn models from data. The problem requires predicting whether a company will deliver an earnings surprise a specific number of days prior to the announcement. This problem has been well studied in the literature using linear models. A basic question is whether machine learning-based nonlinear models such as tree induction algorithms, neural networks, naive Bayesian learning, and genetic algorithms perform better in terms of predictive accuracy and in uncovering interesting relationships among problem variables. Equally importantly, if these alternative approaches perform better, why? And how do they stack up relative to each other? The answers to these questions are significant for predictive modeling in the financial arena, and in general for problem domains characterized by significant nonlinearities. In this paper, we compare the four above-mentioned nonlinear methods along a number of criteria. The genetic algorithm turns out to have some advantages in finding multiple "small disjunct" patterns that are individually accurate and collectively capable of making predictions more often than its competitors. We use some of the nonlinearities we discovered about the problem domain to explain these results.
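As an illustrative sketch only, and not the paper's actual experimental setup, a comparison of nonlinear classifiers on a binary surprise/no-surprise label might be organized as follows. The synthetic data, feature counts, and estimator settings below are assumptions for illustration; the genetic-algorithm rule learner has no standard scikit-learn counterpart and is omitted.

```python
# Hypothetical sketch: compare three of the nonlinear model families named
# above (tree induction, neural network, naive Bayes) on a synthetic binary
# "earnings surprise" label. All data and settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for firm-level features (e.g., fundamentals, analyst
# revisions); the class imbalance mimics surprises being the minority class.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "tree induction": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                    random_state=0),
    "naive Bayes": GaussianNB(),
}

# Five-fold cross-validated accuracy as one of several possible criteria.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name:15s} mean CV accuracy = {scores.mean():.3f}")
```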