Indexing multimedia content for textual querying: A multimodal approach
dc.contributor.author | Amrane, Abdesalam | |
dc.contributor.author | Mellah, Hakima | |
dc.contributor.author | Amghar, Youssef | |
dc.contributor.author | Aliradi, Rachid | |
dc.date.accessioned | 2013-05-30T07:41:24Z | |
dc.date.available | 2013-05-30T07:41:24Z | |
dc.date.issued | 2013-07 | |
dc.description.abstract | Multimedia retrieval approaches fall into three categories: those using textual information, those using low-level visual information, and those combining different kinds of information extracted from the multimedia content. Each approach has its own advantages and disadvantages for improving multimedia retrieval systems, and recent work is oriented towards multimodal approaches. It is in this context that we propose an approach that combines the surrounding text with information extracted from the visual content of the multimedia, representing both in the same repository in order to allow querying multimedia content by keywords or concepts. Each word contained in a query or in a multimedia description is disambiguated using WordNet in order to determine its semantic concept. | fr_FR |
dc.identifier.isrn | CERIST-DSISM/RR--13-000000013--dz | |
dc.identifier.uri | http://dl.cerist.dz/handle/CERIST/151 | |
dc.relation.ispartof | WEBI 2013 | |
dc.relation.ispartofseries | WEBI 2013; | |
dc.relation.place | Angers - France | fr_FR |
dc.structure | Interactions et Routage dans les Systèmes d'Information | fr_FR |
dc.subject | Image classification | fr_FR |
dc.subject | Automatic annotation | fr_FR |
dc.subject | Multimedia retrieval | fr_FR |
dc.subject | Query languages | fr_FR |
dc.subject | SVM classifier | fr_FR |
dc.subject | SIFT descriptors | fr_FR |
dc.subject | Semantic representation | fr_FR |
dc.subject | Multimodal retrieval | fr_FR |
dc.subject | Semantic similarity | fr_FR |
dc.subject | Textual indexing | fr_FR |
dc.title | Indexing multimedia content for textual querying: A multimodal approach | fr_FR |
dc.type | Conference paper |