City Research Online

A temporal continual learning framework for investment decisions

Philps, D. (2020). A temporal continual learning framework for investment decisions. (Unpublished Doctoral thesis, City, University of London)


Temporal continual learning (TCL) is introduced in this thesis as an extension of continual learning (CL). While traditional CL has been applied to sequential tasks, extending CL to TCL aims to allow machines to accumulate specific knowledge of temporal states in order to address concept drift (CD) problems. This approach is shown to hold considerable benefits in domains where non-stationary time-series are used for decision-making, particularly in finance.

A TCL framework called continual learning augmentation (CLA) is introduced to drive long-term decision-making in complex, multivariate, temporal problems. CLA uses an external memory structure to store learner parameters from particular past temporal states for recall in the future. The contributions of this work are fourfold. First, a temporal, state-based, external memory structure is developed. Second, this structure is used to memory-augment well-understood base-learners, such as LSTM networks, feed-forward neural networks (FFNN), and linear regression. Third, a remember-gate, based on residual change, learns in an open-world fashion to define the distinct states for which learner parameters are stored, along with a contextual reference to each state. Fourth, a memory recall-gate is developed, based on various time-series similarity approaches, which compares the current input space with the contextual references stored in memory and recalls the most appropriate learner parameters for use in the current period.
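The store-and-recall loop described above can be sketched in outline. This is an illustrative reconstruction, not the thesis's implementation: the class name `CLAMemory`, the scalar `residual_change` trigger, and the use of plain Euclidean distance in the recall step are all simplifying assumptions (the thesis tests several similarity measures, including DTW and autoencoder distance).

```python
import numpy as np

class CLAMemory:
    """Hypothetical sketch of CLA's external memory: base-learner
    parameters are stored keyed by a contextual reference of the
    temporal state in which they were trained."""

    def __init__(self, remember_threshold=1.0):
        self.memory = []  # list of (context_reference, learner_params)
        self.remember_threshold = remember_threshold

    def remember(self, context, params, residual_change):
        # Remember-gate: a large residual change signals a new temporal
        # state (open-world state discovery), so the current learner
        # parameters are stored with their contextual reference.
        if residual_change > self.remember_threshold:
            self.memory.append((np.asarray(context, dtype=float), params))
            return True
        return False

    def recall(self, current_input):
        # Recall-gate: compare the current input space with the stored
        # contextual references and return the parameters of the most
        # similar past state (Euclidean distance as a stand-in here).
        if not self.memory:
            return None
        dists = [np.linalg.norm(np.asarray(current_input, dtype=float) - c)
                 for c, _ in self.memory]
        return self.memory[int(np.argmin(dists))][1]
```

For example, after storing parameters for two past states, a new input resembling the first state's context would recall the first state's parameters.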

In testing, CLA is found to improve the performance of LSTM, FFNN, and linear regression learners applied to a complex, real-world finance task: stock selection in international and emerging equities investing. Several different similarity approaches are tested in CLA's recall-gate, with dynamic time warping (DTW) outperforming simple Euclidean distance (ED), while autoencoder (AE) distance is found to both mitigate the resource overheads of DTW and provide better performance. A hybrid approach, warp-AE, is also introduced and performs well. In addition, a visualisation is introduced that allows CLA to be interpreted by domain experts in terms of which memory was used, when, and to what effect. A complex application is used to test TCL, and a five-point statistical testing framework is introduced. This thesis presents the last five years of research on TCL.
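The advantage of DTW over Euclidean distance for comparing temporal contexts can be seen in a minimal sketch. This is a textbook dynamic-programming DTW, not the thesis's code; the example series are invented to show that DTW tolerates a time shift that inflates the point-wise Euclidean distance.

```python
import numpy as np

def euclidean_distance(a, b):
    # Point-wise distance; requires equal-length, aligned series.
    return float(np.linalg.norm(np.asarray(a, dtype=float) -
                                np.asarray(b, dtype=float)))

def dtw_distance(a, b):
    # Classic DTW: finds the cheapest monotonic alignment between two
    # series, so shapes that are shifted or warped in time still match.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])

# Identical bump, shifted by one time step:
a = [0, 0, 1, 2, 1, 0, 0]
b = [0, 1, 2, 1, 0, 0, 0]
# DTW aligns the bumps (distance 0.0); Euclidean distance does not (2.0).
```

DTW's quadratic cost per comparison is the resource overhead the abstract refers to, which the AE distance is reported to mitigate.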

Keywords: Continual learning, time-series, memory, neural network.

Publication Type: Thesis (Doctoral)
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Departments: Doctoral Theses
School of Science & Technology > School of Science & Technology Doctoral Theses
School of Science & Technology > Computer Science




