Show simple item record

dc.contributor: Universitat Ramon Llull. IQS
dc.contributor.author: Reshetnikov, Artem
dc.contributor.author: Marinescu, Maria-Cristina
dc.contributor.author: Moré, Joaquim
dc.contributor.author: Mendoza, Sergio
dc.contributor.author: Freire, Nuno
dc.contributor.author: Marrero, Monica
dc.contributor.author: Tsoupra, Eleftheria
dc.contributor.author: Isaac, Antoine
dc.date.accessioned: 2025-09-10T10:53:10Z
dc.date.available: 2025-09-10T10:53:10Z
dc.date.issued: 2025-10
dc.identifier.issn: 1778-3674
dc.identifier.uri: http://hdl.handle.net/20.500.14342/5499
dc.description.abstract: Annotation of cultural heritage artefacts allows finding and exploration of items relevant to user needs, supports functionality such as question answering or scene understanding, and in general facilitates society's exposure to our history and heritage. But most artefacts lack a description of their visual content due to the assumption that one sees the object; this often means that the annotation effort focuses on the historical and artistic context, information about the painter, or details about the execution and medium. Without a significant body of visual content annotation, machines cannot integrate all this data to allow further analysis, query and inference, and cultural institutions cannot offer advanced functionality to their users and visitors. Given how time-consuming manual annotation is, and to enable the development of new technology and applications for cultural heritage, we have provided through DEArt the most extensive art dataset for object detection and pose classification to date. The current paper extends this work in several ways: (1) we introduce an approach for generating refined object and relationship labels without the need for manual annotations, (2) we compare the performance of our models with the most relevant state of the art in both computer vision and cultural heritage, (3) we evaluate the annotations generated by our object detection model from a user viewpoint, for both correctness and relevance, and (4) we briefly discuss the fairness of our dataset.
dc.format.extent: 9 p.
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartof: Journal of Cultural Heritage 2025, 75
dc.rights: © The author
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.other: Object detection
dc.subject.other: Pose classification
dc.subject.other: Iconographic art
dc.subject.other: Deep learning architecture
dc.subject.other: Dataset/model fairness
dc.subject.other: Detecció d'objectes
dc.subject.other: Composició (Art)
dc.subject.other: Iconologia
dc.subject.other: Conjunts de dades
dc.subject.other: Aprenentatge profund (Aprenentatge automàtic)
dc.title: DEArt: Building and evaluating a dataset for object detection and pose classification for European art
dc.type: info:eu-repo/semantics/article
dc.rights.accessLevel: info:eu-repo/semantics/openAccess
dc.embargo.terms: cap
dc.subject.udc: 004
dc.subject.udc: 7
dc.identifier.doi: https://doi.org/10.1016/j.culher.2025.07.022
dc.description.version: info:eu-repo/semantics/publishedVersion

