Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.13091/5597
Full metadata record
DC Field | Value | Language
dc.contributor.author | Akdağ, Ali | -
dc.contributor.author | Baykan, Ömer Kaan | -
dc.date.accessioned | 2024-06-01T08:58:11Z | -
dc.date.available | 2024-06-01T08:58:11Z | -
dc.date.issued | 2024 | -
dc.identifier.issn | 2079-9292 | -
dc.identifier.uri | https://doi.org/10.3390/electronics13071188 | -
dc.identifier.uri | https://hdl.handle.net/20.500.13091/5597 | -
dc.description.abstract | Sign Language Recognition (SLR) systems are crucial bridges facilitating communication between deaf or hard-of-hearing individuals and the hearing world. Existing SLR technologies, while advancing, often grapple with challenges such as accurately capturing the dynamic and complex nature of sign language, which includes both manual and non-manual elements like facial expressions and body movements. These systems sometimes fall short in environments with varied backgrounds or lighting conditions, which limits their practical applicability and robustness. This study introduces an innovative approach to isolated sign language word recognition using a novel deep learning model that combines the strengths of both residual three-dimensional (R3D) and temporally separated (R(2+1)D) convolutional blocks. The R3(2+1)D-SLR network model demonstrates a superior ability to capture the intricate spatial and temporal features crucial for accurate sign recognition. Our system fuses features from the signer's body, hands, and face, extracted using the R3(2+1)D-SLR model, and employs a Support Vector Machine (SVM) for classification. By utilizing pose data rather than RGB data, it demonstrates remarkable improvements in accuracy and robustness across various backgrounds. With this pose-based approach, our proposed system achieved 94.52% and 98.53% test accuracy in signer-independent evaluations on the BosphorusSign22k-general and LSA64 datasets, respectively. | en_US
dc.language.iso | en | en_US
dc.publisher | MDPI | en_US
dc.relation.ispartof | Electronics | en_US
dc.rights | info:eu-repo/semantics/openAccess | en_US
dc.subject | sign language recognition | en_US
dc.subject | deep learning | en_US
dc.subject | feature fusion | en_US
dc.subject | Kernel | en_US
dc.subject | Classifier | en_US
dc.title | Enhancing Signer-Independent Recognition of Isolated Sign Language through Advanced Deep Learning Techniques and Feature Fusion | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.3390/electronics13071188 | -
dc.identifier.scopus | 2-s2.0-85190250199 | en_US
dc.department | KTÜN | en_US
dc.identifier.volume | 13 | en_US
dc.identifier.issue | 7 | en_US
dc.identifier.wos | WOS:001201081800001 | en_US
dc.institutionauthor | Baykan, Ömer Kaan | -
dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı (Article - International Refereed Journal - Institutional Faculty Member) | en_US
dc.authorscopusid | 57200269812 | -
dc.authorscopusid | 23090480800 | -
item.grantfulltext | none | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.fulltext | No Fulltext | -
item.openairetype | Article | -
item.languageiso639-1 | en | -
crisitem.author.dept | 02.03. Department of Computer Engineering | -
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collections
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collections
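The abstract above builds on R(2+1)D convolutional blocks, which factorize a full 3D (spatiotemporal) convolution into a spatial 2D convolution followed by a temporal 1D convolution. A minimal sketch of the idea, assuming the parameter-matching rule for the intermediate channel count M_i from the original R(2+1)D formulation (Tran et al., 2018); the specific layer sizes here are illustrative, not taken from the R3(2+1)D-SLR paper:

```python
# Sketch: parameter count of a full 3D conv vs. its (2+1)D factorization.
# A t x k x k 3D kernel is replaced by a 1 x k x k spatial conv into M
# intermediate channels, followed by a t x 1 x 1 temporal conv. M is chosen
# so the factorized block has (approximately) the same parameter budget.

def conv3d_params(c_in: int, c_out: int, t: int, k: int) -> int:
    """Parameters of a full 3D convolution with a t x k x k kernel."""
    return c_in * c_out * t * k * k

def r2plus1d_params(c_in: int, c_out: int, t: int, k: int):
    """Parameters of the (2+1)D factorization, plus the intermediate width M."""
    # Parameter-matching choice of M (floor division keeps it an integer).
    m = (t * k * k * c_in * c_out) // (k * k * c_in + t * c_out)
    spatial = c_in * m * k * k   # 1 x k x k spatial conv
    temporal = m * c_out * t     # t x 1 x 1 temporal conv
    return spatial + temporal, m

# Illustrative layer: 64 -> 64 channels, 3 x 3 x 3 kernel.
full = conv3d_params(64, 64, 3, 3)
fact, m = r2plus1d_params(64, 64, 3, 3)
print(full, fact, m)  # → 110592 110592 144
```

Same parameter budget, but the extra nonlinearity between the spatial and temporal convolutions (and the easier optimization of the two smaller kernels) is what the R(2+1)D design credits for its accuracy gains over plain R3D blocks.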
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.