Authors: Erdol, Eda Sena; Erdol, Hakan; Ustubioglu, Beste; Solak, Fatma Zehra; Ulutas, Guzin
Date Accessioned: 2024-09-22
Date Available: 2024-09-22
Issue Date: 2024
ISBN: 9798350388978; 9798350388961
ISSN: 2165-0608
DOI: https://doi.org/10.1109/SIU61531.2024.10600747
Abstract: Federated learning is a distributed machine learning approach in which end-user devices update the learning model by training on their local data rather than on a central server. Each device trains on its own data, and the updated model parameters are aggregated on a central server to form a global model. Although this distributed learning structure has its advantages, it remains vulnerable to attacks by malicious actors. Current defenses against such attacks rely on assumptions about the end-user data distribution, and most work in the literature is not feasible to apply to large deep learning networks. This article therefore examines attacks on, and security vulnerabilities of, federated learning. Model poisoning scenarios, which are among the attack types that most significantly degrade model success, are applied to the learning network. In our proposed method, a weight pruning algorithm selects the impactful neurons of the deep learning network. The feature vectors built from the selected neurons are then reduced by Principal Component Analysis to a size suitable for classification. Finally, the Isolation Forest unsupervised learning algorithm is used for classification. Our results show that the defense success of the proposed method exceeds that of other established defense algorithms in the literature.
Language: tr
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: Federated learning; poisoning attack; model poisoning; data poisoning; Byzantine attack
Title: Impactful Neuron-Based Secure Federated Learning
Alternative Title (Turkish): Etkili Nöron Tabanlı Güvenli Federe Öğrenme
Type: Conference Object
Scopus ID: 2-s2.0-85200840419
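The abstract describes a three-step defense pipeline: select impactful neurons via weight pruning, compress the resulting feature vectors with Principal Component Analysis, and flag anomalous client updates with Isolation Forest. A minimal sketch of that pipeline follows, with loud caveats: the data is synthetic, and the "weight pruning" step here (keeping the coordinates with the largest mean absolute weight) is only a guessed stand-in for the paper's actual impactful-neuron selection, whose details are not given in the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated flattened client weight updates: 10 benign + 2 poisoned.
# (Hypothetical data; the paper's updates come from real federated training.)
benign = rng.normal(0.0, 0.1, size=(10, 200))
poisoned = rng.normal(3.0, 0.1, size=(2, 200))
updates = np.vstack([benign, poisoned])

# Step 1 (assumed stand-in for weight pruning): keep the k neurons with the
# largest mean absolute weight across clients as the "impactful" ones.
k = 50
impact = np.abs(updates).mean(axis=0)
features = updates[:, np.argsort(impact)[-k:]]

# Step 2: PCA reduces the feature vectors to a size suitable for classification.
feats_2d = PCA(n_components=2).fit_transform(features)

# Step 3: Isolation Forest marks outlier updates (-1) as potentially poisoned.
labels = IsolationForest(contamination=2 / 12, random_state=0).fit_predict(feats_2d)
print(labels)
```

With this synthetic separation the two poisoned updates land far from the benign cluster in PCA space, so the Isolation Forest isolates them quickly and assigns them the -1 (outlier) label; a server could then exclude those clients' updates before aggregating the global model.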