Show simple item record

dc.contributor: Universitat Ramon Llull. IQS
dc.contributor.author: Gonzalez-Acosta, Ana M. S.
dc.contributor.author: Vargas Treviño, Marciano
dc.contributor.author: Batres-Mendoza, Patricia
dc.contributor.author: Guerra-Hernandez, Erick Israel
dc.contributor.author: Gutierrez Gutierrez, Jaime C.
dc.contributor.author: Cano Perez, Jose L.
dc.contributor.author: Solis Arrazola, Manuel Alejandro
dc.contributor.author: Rostro Gonzalez, Horacio
dc.date.accessioned: 2025-04-29T06:35:03Z
dc.date.available: 2025-04-29T06:35:03Z
dc.date.issued: 2025
dc.identifier.issn: 2624-9898
dc.identifier.uri: http://hdl.handle.net/20.500.14342/5248
dc.description.abstract: Introduction: Facial expressions play a crucial role in human emotion recognition and social interaction. Prior research has highlighted the significance of the eyes and mouth in identifying emotions; however, limited studies have validated these claims using robust biometric evidence. This study investigates the prioritization of facial features during emotion recognition and introduces an optimized approach to landmark-based analysis, enhancing efficiency without compromising accuracy.
Methods: A total of 30 participants were recruited to evaluate images depicting six emotions: anger, disgust, fear, neutrality, sadness, and happiness. Eye-tracking technology was utilized to record gaze patterns, identifying the specific facial regions participants focused on during emotion recognition. The collected data informed the development of a streamlined facial landmark model, reducing the complexity of traditional approaches while preserving essential information.
Results: The findings confirmed a consistent prioritization of the eyes and mouth, with minimal attention allocated to other facial areas. Leveraging these insights, we designed a reduced landmark model that minimizes the conventional 68-point structure to just 24 critical points, maintaining recognition accuracy while significantly improving processing speed.
Discussion: The proposed model was evaluated using multiple classifiers, including Multi-Layer Perceptron (MLP), Random Decision Forest (RDF), and Support Vector Machine (SVM), demonstrating its robustness across various machine learning approaches. The optimized landmark selection reduces computational costs and enhances real-time emotion recognition applications. These results suggest that focusing on key facial features can improve the efficiency of biometric-based emotion recognition systems without sacrificing accuracy.
dc.format.extent: p. 16
dc.language.iso: eng
dc.publisher: Frontiers Media
dc.relation.ispartof: Frontiers in Computer Science 2025, 7
dc.rights: © The author
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject.other: Emotion recognition
dc.subject.other: Eye-tracking analysis
dc.subject.other: Facial landmarks
dc.subject.other: Biometric validation
dc.subject.other: Machine learning and AI
dc.subject.other: Emotions
dc.subject.other: Gaze tracking
dc.subject.other: Facial expression
dc.subject.other: Biometric identification
dc.subject.other: Machine learning
dc.subject.other: Artificial intelligence
dc.title: The first look: a biometric analysis of emotion recognition using key facial features
dc.type: info:eu-repo/semantics/article
dc.rights.accessLevel: info:eu-repo/semantics/openAccess
dc.embargo.terms: none
dc.subject.udc: 004
dc.subject.udc: 159.9
dc.identifier.doi: https://doi.org/10.3389/fcomp.2025.1554320
dc.description.version: info:eu-repo/semantics/publishedVersion
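The abstract describes reducing the standard 68-point facial landmark model to 24 eye- and mouth-centered points, then classifying emotions with MLP, RDF, or SVM. The paper's exact 24-point subset and pipeline are not given in this record; the following is a minimal sketch assuming the common dlib-style 68-point indexing (eyes at indices 36–47, mouth at 48–67), a hypothetical 24-index subset, random stand-in landmark data, and a scikit-learn SVM.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 24-point subset of the 68-point landmark scheme
# (the paper's actual selection is not specified in the abstract).
EYE_POINTS = [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]    # both eyes (12 points)
MOUTH_POINTS = [48, 51, 54, 57, 60, 62, 64, 66, 49, 53, 55, 59]  # mouth sample (12 points)
REDUCED = EYE_POINTS + MOUTH_POINTS                               # 24 of the 68 points

def reduce_landmarks(landmarks_68):
    """Keep only the 24 selected (x, y) points -> flat 48-dim feature vector."""
    pts = np.asarray(landmarks_68)   # expected shape (68, 2)
    return pts[REDUCED].ravel()      # shape (48,)

# Toy demonstration: two synthetic 'emotion' classes with shifted landmark means.
rng = np.random.default_rng(0)
X = np.stack([reduce_landmarks(rng.normal(size=(68, 2)) + label)
              for label in (0, 1) for _ in range(20)])
y = [0] * 20 + [1] * 20

clf = SVC(kernel="rbf").fit(X, y)    # SVM, one of the three classifiers named above
```

Dropping 44 of the 68 points shrinks the feature vector from 136 to 48 values, which is the source of the processing-speed gain the abstract reports; in a real pipeline the landmarks would come from a face-landmark detector rather than random data.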



© The author
Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/