City Research Online

ReLEx: Regularisation for Linear Extrapolation in Neural Networks with Rectified Linear Units

Lopedoto, E. & Weyde, T. ORCID: 0000-0001-8028-9905 (2020). ReLEx: Regularisation for Linear Extrapolation in Neural Networks with Rectified Linear Units. In: Artificial Intelligence XXXVII. AI-2020 Fortieth SGAI International Conference on Artificial Intelligence, 8-9 Dec 2020; 15-17 Dec 2020, Virtual. doi: 10.1007/978-3-030-63799-6_13


Despite the great success of neural networks in recent years, they do not provide useful extrapolation. In regression tasks, the popular Rectified Linear Units do enable unbounded linear extrapolation by neural networks, but their extrapolation behaviour varies widely and is largely independent of the training data. Our goal is instead to continue the local linear trend at the margin of the training data. Here we introduce ReLEx, a regularising method composed of a set of loss terms designed to achieve this goal and to reduce the variance of the extrapolation. We present a ReLEx implementation for single-input, single-output, single-hidden-layer feed-forward networks. Our results demonstrate that ReLEx has little cost in terms of standard learning, i.e. interpolation, but enables controlled univariate linear extrapolation with ReLU neural networks.
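To make the idea concrete, here is a minimal sketch of the setting the abstract describes: a single-input, single-output, single-hidden-layer ReLU network, whose output is piecewise linear, together with an illustrative penalty that encourages the slope just outside each data margin to match the slope just inside it. The function names and the exact form of the penalty are assumptions for illustration only; the paper's actual loss terms are defined in the full text.

```python
import numpy as np

def relu_net(x, w, b, v, c):
    # Single-input, single-hidden-layer ReLU network:
    # y = sum_j v_j * relu(w_j * x + b_j) + c
    return np.maximum(w * x + b, 0.0) @ v + c

def slope_at(x0, w, b, v):
    # Local slope of the piecewise-linear network at x0:
    # only units with positive pre-activation contribute.
    active = (w * x0 + b) > 0.0
    return np.sum(v[active] * w[active])

def relex_style_penalty(x_min, x_max, w, b, v, eps=1e-2):
    # Illustrative regulariser (NOT the paper's exact loss terms):
    # penalise any change of slope across each margin of the
    # training data, so extrapolation continues the local trend.
    left = (slope_at(x_min - eps, w, b, v)
            - slope_at(x_min + eps, w, b, v)) ** 2
    right = (slope_at(x_max + eps, w, b, v)
             - slope_at(x_max - eps, w, b, v)) ** 2
    return left + right

# Example: a unit whose kink sits at the right margin x = 2
# changes the slope there, so the penalty is non-zero.
w = np.array([1.0, 1.0])
b = np.array([0.0, -2.0])
v = np.array([2.0, 3.0])
print(relex_style_penalty(1.0, 2.0, w, b, v))
```

Adding such a term to the regression loss pushes the optimiser to place ReLU kinks away from the data margins (or cancel their effect), which is one way to reduce the variance of the extrapolation behaviour.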

Publication Type: Conference or Workshop Item (Paper)
Additional Information: The final authenticated version will be available online at
Publisher Keywords: Neural Networks, Regression, Regularisation, Extrapolation
Subjects: R Medicine > RC Internal medicine > RC0321 Neuroscience. Biological psychiatry. Neuropsychiatry
Departments: School of Science & Technology > Computer Science
Text - Accepted Version




