Abstract:
|
Using artificial neural networks to perform predictive modeling in finance has attracted significant media attention, and pundits are quick to speculate on their potential impacts on jobs and society. With recent advances in algorithms and readily accessible, highly specialized hardware, do these algorithms actually live up to such promises? Some unique features of applying artificial neural networks to finance include: 1) financial datasets change almost constantly; 2) the calculation universe is usually quite large (e.g. a typical sub-universe of over a hundred companies, each with several hundred fundamental factors and many different combinations of time lags); 3) non-linear statistics require multiple neurons, layers, and/or calibration with even more specialized techniques; 4) there is a risk of overfitting models that yield low predictive power; 5) if the computation cannot finish within a specific time constraint (say 1~2 minutes, though the threshold depends on the specific markets traded), the market may have moved by so much that any modeling results obtained are no longer helpful to decision making in real-life markets.
The goal of this paper is to provide a formal study of the ranges of predictive power and computational speed-up over which different predictive-modeling techniques can help solve such time-constrained problems in finance, based on the most up-to-date technology available. The first author presented a similar formal study at the Russian Academy of Sciences in Moscow in 2011, and comparable published studies were performed almost a decade ago. We achieve this objective by benchmarking computational performance with real-life financial datasets on a state-of-the-art platform: a 30,000-core supercomputer hosted by the National Supercomputing Centre (NSCC) Singapore.
|