GCRIS

Browsing by Author "Wang, Xizhao"

Now showing 1 - 2 of 2
    Article
    Feature Fusion Using Deep Learning Algorithms in Image Classification for Security Purposes by Random Weight Network
    (MDPI, 2025) Kiran, Mustafa Servet; Seyfi, Gokhan; Yilmaz, Merve; Esme, Engin; Wang, Xizhao
    Automated threat detection in X-ray security imagery is a critical yet challenging task, where conventional deep learning models often struggle with low accuracy and overfitting. This study addresses these limitations by introducing a novel feature-fusion framework. The proposed method extracts features from multiple, diverse deep learning architectures and classifies them using a Random Weight Network (RWN) whose hyperparameters are optimized for maximum performance. The results show substantial improvements at each stage: while the best standalone deep learning model achieved a test accuracy of 83.55%, applying the RWN to a single feature set increased accuracy to 94.82%, and the proposed feature-fusion framework achieved a state-of-the-art test accuracy of 97.44%. These findings demonstrate that a modular approach combining multi-model feature fusion with an efficient classifier is a highly effective strategy for improving the accuracy and generalization capability of automated threat detection systems.
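    The RWN classifier in this abstract admits a compact closed-form fit. The sketch below is a minimal, generic illustration, not the authors' implementation: it assumes tanh hidden units, a pseudoinverse solution for the output weights, and plain concatenation as the fusion step; all array shapes and variable names are hypothetical.

    ```python
    import numpy as np

    def train_rwn(X, Y, n_hidden=256, seed=0):
        """Random Weight Network: fixed random hidden layer,
        closed-form least-squares output weights."""
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
        b = rng.standard_normal(n_hidden)                # random biases
        H = np.tanh(X @ W + b)                           # nonlinear random projection
        beta = np.linalg.pinv(H) @ Y                     # output weights via pseudoinverse
        return W, b, beta

    def predict_rwn(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Feature fusion: concatenate feature vectors extracted by different backbones.
    rng = np.random.default_rng(1)
    feats_a = rng.random((100, 32))                      # stand-in for backbone A features
    feats_b = rng.random((100, 64))                      # stand-in for backbone B features
    fused = np.concatenate([feats_a, feats_b], axis=1)   # (100, 96) fused representation
    Y = np.eye(3)[rng.integers(0, 3, 100)]               # one-hot labels, 3 classes

    W, b, beta = train_rwn(fused, Y)
    pred = predict_rwn(fused, W, b, beta)                # (100, 3) class scores
    ```

    Because only the output weights are solved for, training reduces to a single least-squares problem, which is what makes the RWN an efficient classifier on top of precomputed deep features.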
    Article
    A Study on Generalization of Random Weight Network With Flat Loss
    (Elsevier, 2025) Liu, Chao; Liu, Qiang; Li, Rihao; Zhou, Xinlei; Kiran, Mustafa Servet; Wang, Xizhao
    In learning schemes that adjust model parameters by minimizing a loss function, it is conjectured that a flatter loss minimum correlates with better model stability and generalization. This paper provides experimental evidence within the Random Weight Network (RWN) / Extreme Learning Machine (ELM) framework and develops a theoretical analysis linking flatness to a local upper bound on the generalization error, by deriving the RWN loss as a quadratic polynomial in the random weights and representing flatness as the maximum eigenvalue of a positive semi-definite matrix. By adjusting the random weights with a genetic algorithm whose fitness function is this flatness measure, the study validates on 10 benchmark datasets within the ELM framework that a flatter loss indeed improves the model's generalization ability. The size of the improvement depends on the characteristics of each dataset, particularly on the relative decrease of the maximum eigenvalue. This study shows that RWN generalization performance can be improved by optimizing the selection of random weights.
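    As a rough illustration of the flatness idea in this abstract: when the loss is quadratic in the weights, its curvature is governed by the eigenvalues of H^T H (H being the hidden-layer output matrix), so a smaller maximum eigenvalue corresponds to a flatter minimum. The sketch below uses that eigenvalue as a flatness proxy and a simple random search in place of the paper's genetic algorithm; every name and design choice here is an illustrative assumption, not the authors' code.

    ```python
    import numpy as np

    def flatness(X, W, b):
        """Flatness proxy: largest eigenvalue of H^T H, where H is the
        hidden-layer output matrix (smaller value = flatter loss surface)."""
        H = np.tanh(X @ W + b)
        return np.linalg.eigvalsh(H.T @ H).max()

    def select_flat_weights(X, n_hidden=64, n_candidates=20, seed=0):
        """Simplified stand-in for the paper's genetic algorithm: sample several
        random-weight candidates and keep the one with the flattest loss."""
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_candidates):
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            f = flatness(X, W, b)
            if best is None or f < best[0]:
                best = (f, W, b)
        return best

    demo_X = np.random.default_rng(2).random((50, 8))    # hypothetical input data
    best_f, W_flat, b_flat = select_flat_weights(demo_X, n_hidden=16, n_candidates=5)
    ```

    A genetic algorithm would replace the independent sampling with selection, crossover, and mutation over the candidate weight matrices, but the fitness criterion (minimize the maximum eigenvalue) is the same.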