A Study on Generalization of Random Weight Network With Flat Loss
Date: 2025
Publisher: Elsevier
Green Open Access: No
Publicly Funded: No
Abstract
In learning schemes that adjust model parameters by minimizing a loss function, it is conjectured that flatter minima of the loss correlate with better stability and generalization of the model. This paper provides experimental evidence for this conjecture within the Random Weight Network (RWN)/Extreme Learning Machine (ELM) framework and develops a theoretical analysis linking flatness to a local upper bound on the generalization error, by deriving the RWN loss as a quadratic polynomial in the random weights and representing flatness as the maximum eigenvalue of a positive semi-definite matrix. By adjusting the random weights with a genetic algorithm whose fitness function is this flatness measure, we validate on 10 benchmark datasets within the ELM framework that a flatter loss indeed improves the model's generalization ability. The size of the improvement depends on the characteristics of the dataset, in particular on the relative decrease of the maximum eigenvalue. This study shows that RWN generalization performance can be improved by optimizing the selection of random weights.
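The abstract describes a concrete pipeline: train an ELM whose hidden-layer weights are random, measure flatness as the maximum eigenvalue of a positive semi-definite matrix derived from the quadratic loss, and run a genetic algorithm with that flatness as the fitness to select better random weights. The sketch below is a minimal, illustrative reconstruction of this idea, not the paper's implementation: it assumes a sigmoid ELM, uses the largest eigenvalue of HᵀH (the positive semi-definite matrix in the Hessian of the standard output-layer least-squares loss, whereas the paper derives a quadratic in the random weights themselves), and a mutation-only GA; every function name and parameter here is hypothetical.

```python
import numpy as np

def elm_features(X, W, b):
    """Hidden-layer output matrix H of an ELM with sigmoid activation.
    W (n_features x n_hidden) and b (n_hidden,) are the random weights."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def flatness(X, W, b):
    """Flatness proxy (assumption, not the paper's exact derivation):
    for the quadratic loss ||H @ beta - T||^2 the Hessian in beta is
    2 H^T H, so the largest eigenvalue of the PSD matrix H^T H measures
    sharpness; smaller means flatter."""
    H = elm_features(X, W, b)
    return np.linalg.eigvalsh(H.T @ H)[-1]  # eigvalsh returns ascending order

def fit_output_weights(X, T, W, b, ridge=1e-6):
    """Closed-form ridge least-squares solution for the output weights."""
    H = elm_features(X, W, b)
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)

def ga_flat_random_weights(X, n_hidden, pop=30, gens=50, sigma=0.1, seed=0):
    """Toy mutation-only genetic algorithm over the random weights (W, b)
    with fitness = flatness (lower is better)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Initial population of candidate random-weight settings.
    population = [(rng.standard_normal((d, n_hidden)),
                   rng.standard_normal(n_hidden)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda wb: flatness(X, *wb))
        elite = scored[: pop // 2]                      # selection
        children = [(W + sigma * rng.standard_normal(W.shape),
                     b + sigma * rng.standard_normal(b.shape))
                    for (W, b) in elite]                # mutation
        population = elite + children
    return min(population, key=lambda wb: flatness(X, *wb))
```

A caller would first evolve the random weights on the training inputs and then solve for the output weights in closed form, e.g. W, b = ga_flat_random_weights(X_train, n_hidden=50) followed by beta = fit_output_weights(X_train, T_train, W, b).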
Keywords: Supervised Learning, Random Weight Network, Generalization, Loss Function, Flat Minimum
WoS Q: Q1
Scopus Q: Q1

OpenCitations Citation Count: N/A
Source: Neurocomputing
Volume: 657
Start Page: 131650
PlumX Metrics
Citations: CrossRef 1, Scopus 1
Captures: Mendeley Readers 1
OpenAlex FWCI: 0.0
Sustainable Development Goals: SDG 2 (Zero Hunger), SDG 7 (Affordable and Clean Energy), SDG 8 (Decent Work and Economic Growth), SDG 9 (Industry, Innovation and Infrastructure)

