Academic & Scientific Articles

Permanent URI for this community: http://dl.cerist.dz/handle/CERIST/3

Search Results

Now showing 1 - 10 of 38
  • Multi-CNN Model for Multi-Classification of Cultural Heritage Monuments
    (CERIST, 2024-04) Djelliout, Toufik; Aliane, Hassina
    The use of convolutional neural networks (CNNs) in the preservation of cultural heritage monuments, especially in conflict-affected regions such as Gaza, Ukraine, and Iraq, represents a significant advancement in heritage conservation efforts. This paper presents an approach that uses a Multi-CNN model to classify images of cultural heritage monuments into various categories covering period, monument type, and location. By leveraging the capabilities of CNNs, the model achieves a high level of accuracy when categorizing heritage monuments along multiple attributes. The study demonstrates the superior performance of the Multi-CNN model compared with popular single-CNN models such as DenseNet169, GoogleNet, and MnasNet, underlining its effectiveness in classifying images of cultural heritage monuments along several dimensions. According to the evaluation results, the top-performing Multi-CNN model achieves a classification accuracy of 94.52%, outperforming the single CNN models: DenseNet169 reaches 93.70% accuracy, MnasNet 92.80%, and GoogleNet 88.18%.
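    To make the multi-attribute setup concrete, here is a minimal sketch of a multi-output CNN, assuming PyTorch; the backbone, the class counts for the period, type, and location heads, and the equal loss weighting are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch: a shared CNN backbone with three classification heads
# (period, monument type, location). Class counts are illustrative only.
import torch
import torch.nn as nn

class MultiHeadMonumentCNN(nn.Module):
    def __init__(self, n_periods=5, n_types=8, n_locations=10):
        super().__init__()
        # Small shared convolutional backbone (placeholder for the paper's CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One linear head per attribute.
        self.period_head = nn.Linear(64, n_periods)
        self.type_head = nn.Linear(64, n_types)
        self.location_head = nn.Linear(64, n_locations)

    def forward(self, x):
        feats = self.backbone(x)
        return self.period_head(feats), self.type_head(feats), self.location_head(feats)

def multi_task_loss(outputs, targets):
    # Sum of per-head cross-entropy losses; equal weighting is an assumption.
    ce = nn.CrossEntropyLoss()
    return sum(ce(out, tgt) for out, tgt in zip(outputs, targets))

if __name__ == "__main__":
    model = MultiHeadMonumentCNN()
    images = torch.randn(4, 3, 224, 224)            # dummy image batch
    targets = (torch.randint(0, 5, (4,)),
               torch.randint(0, 8, (4,)),
               torch.randint(0, 10, (4,)))
    loss = multi_task_loss(model(images), targets)
    loss.backward()
    print(loss.item())
```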
  • Face and kinship image based on combination descriptors-DIEDA for large scale features
    (IEEE, 2018-12-30) Aliradi, Rachid; Belkhir, Abdelkader; Ouamane, Abdelmalik; Aliane, Hassina
    In this paper, we introduce an efficient linear similarity learning system for face and kinship verification. Humans easily recognize each other by their faces, and since facial features are relatively robust to illumination conditions and varying expressions, the face remains a key modality for automatic recognition. Verification here refers to teaching a machine to decide whether a pair of faces matches (kin or non-kin) from features extracted from facial images, and to quantify the degree of similarity. Traditional kernel-based verification systems face real problems when discriminative features are used: they concentrate on local information zones, these zones contain noise in non-face regions and redundant information where blocks overlap, parameters must be adjusted manually, and the feature vectors are high-dimensional. To address these problems, we propose a robust face verification method that combines large-scale local features with Discriminative Information based on Exponential Discriminant Analysis (DIEDA). The projected histograms of each zone are scored using discriminative metric learning. Finally, the region scores corresponding to the different descriptors at various scales are fused using a Support Vector Machine (SVM) classifier. Compared with relevant state-of-the-art work, this system improves learning efficiency while maintaining effectiveness. The experimental results show that both initializations are efficient and outperform other state-of-the-art techniques.
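    As a rough illustration of the pipeline (per-zone descriptor histograms, a learned projection, per-zone similarity scores, and SVM fusion), here is a hedged sketch using scikit-learn; PCA stands in for the DIEDA projection, and the cosine score, zone count, and toy data are assumptions.

```python
# Hedged sketch: per-zone histogram descriptors are projected, scored, and the
# zone scores are fused with an SVM. PCA stands in for the DIEDA projection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def zone_scores(pairs, n_zones=16, projections=None):
    """pairs: list of (hist_a, hist_b), each of shape (n_zones, dim)."""
    scores = []
    for hist_a, hist_b in pairs:
        s = []
        for z in range(n_zones):
            a, b = hist_a[z], hist_b[z]
            if projections is not None:            # learned per-zone projection
                a = projections[z].transform([a])[0]
                b = projections[z].transform([b])[0]
            s.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)))
        scores.append(s)
    return np.array(scores)                        # (n_pairs, n_zones)

# Toy data: 200 pairs of 16-zone, 64-dim histograms, labelled match / non-match.
rng = np.random.default_rng(0)
pairs = [(rng.random((16, 64)), rng.random((16, 64))) for _ in range(200)]
labels = rng.integers(0, 2, 200)

# Per-zone projections learned on the first element of each pair (illustrative).
projections = [PCA(n_components=16).fit(np.stack([p[0][z] for p in pairs]))
               for z in range(16)]

X = zone_scores(pairs, projections=projections)
fusion = SVC(kernel="linear").fit(X[:150], labels[:150])   # fuse zone scores
print("toy accuracy:", fusion.score(X[150:], labels[150:]))
```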
  • AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News & Hate Speech Detection Dataset
    (Elsevier, 2021) Hadj Ameur, Mohamed Seghir; Aliane, Hassina
    Along with the COVID-19 pandemic, an "infodemic" of false and misleading information has emerged and has complicated the COVID-19 response efforts. Social networking sites such as Facebook and Twitter have contributed largely to the spread of rumors, conspiracy theories, hate, xenophobia, racism, and prejudice. To combat the spread of fake news, researchers around the world have made, and are still making, considerable efforts to build and share COVID-19-related research articles, models, and datasets. This paper releases "AraCOVID19-MFH", a manually annotated multi-label Arabic COVID-19 fake news and hate speech detection dataset. Our dataset contains 10,828 Arabic tweets annotated with 10 different labels. The labels have been designed to consider aspects relevant to the fact-checking task, such as the tweet's check-worthiness, positivity/negativity, and factuality. To confirm our annotated dataset's practical utility, we used it to train and evaluate several classification models and report the obtained results. Though the dataset is mainly designed for fake news detection, it can also be used for hate speech detection, opinion/news classification, dialect identification, and many other tasks.
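    As an example of how such a multi-label dataset can be used to train baseline classifiers, here is a hedged sketch of a TF-IDF plus one-vs-rest logistic regression baseline; the inline toy tweets and the three label names are illustrative stand-ins, not the dataset's actual schema or the models evaluated in the paper.

```python
# Hedged sketch: TF-IDF features + one-vs-rest logistic regression as a
# multi-label baseline. The tiny inline dataset and label names are illustrative;
# the real dataset has 10,828 tweets and 10 labels.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

LABELS = ["fake_news", "hate", "check_worthy"]     # illustrative subset of labels

# Toy stand-in for the annotated tweets (text + one binary column per label).
df = pd.DataFrame({
    "text": ["claim about a miracle cure", "stay safe and wash your hands",
             "hateful rant about group X", "official vaccine statistics released"] * 10,
    "fake_news":    [1, 0, 0, 0] * 10,
    "hate":         [0, 0, 1, 0] * 10,
    "check_worthy": [1, 0, 0, 1] * 10,
})

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(df["text"])
y = df[LABELS].values

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X[:30], y[:30])                            # train on the first 30 rows
print("macro F1:", f1_score(y[30:], clf.predict(X[30:]), average="macro"))
```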
  • ArA*summarizer: An Arabic text summarization system based on subtopic segmentation and using an A* algorithm for reduction
    (Wiley, 2020-04-19) Bahloul, Belahcene; Aliane, Hassina; Benmohammed, Mohamed
    Automatic text summarization is a field situated at the intersection of natural language processing and information retrieval. Its main objective is to automatically produce a condensed, representative form of documents. This paper presents ArA*summarizer, an automatic system for Arabic single-document summarization. The system is based on an unsupervised hybrid approach that combines statistical, cluster-based, and graph-based techniques. The main idea is to divide the text into subtopics and then select the most relevant sentences in the most relevant subtopics. The selection is performed by an A* algorithm executed on a graph representing the lexical–semantic relationships between sentences. Experiments are conducted on the Essex Arabic Summaries Corpus using the recall-oriented understudy for gisting evaluation (ROUGE), automatic summarization engineering (AutoSummENG), merged model graphs (MeMoG), and n-gram graph powered evaluation via regression (NPowER) metrics. The evaluation results show the good performance of our system compared with existing works.
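    To illustrate the reduction step, here is a hedged sketch of a generic A* search over a small sentence-similarity graph; the edge costs, the trivial heuristic, and the toy graph are assumptions and do not reproduce the system's subtopic segmentation or its actual heuristic.

```python
# Hedged sketch: a generic A* search used to pick a chain of sentences through a
# similarity graph. Edge costs and the (trivial) heuristic are toy choices.
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Standard A*: returns the lowest-cost path from start to goal."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy sentence graph: nodes are sentence indices, similarity values in [0, 1].
similarity = {(0, 1): 0.9, (1, 2): 0.2, (0, 2): 0.6, (2, 3): 0.8, (1, 3): 0.7}

def neighbors(i):
    return [b for (a, b) in similarity if a == i]

def cost(i, j):
    return 1.0 - similarity[(i, j)]               # dissimilar steps cost more

path, total = a_star(0, 3, neighbors, cost, heuristic=lambda _: 0.0)
print("selected sentence chain:", path, "cost:", round(total, 2))
```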
  • Ontology learning: Grand tour and challenges
    (Elsevier, 2021-02-21) Chérifa Khadir, Ahlem; Aliane, Hassina; Guessoum, Ahmed
    Ontologies are at the core of the semantic web. As knowledge bases, they are very useful resources for many artificial intelligence applications. Ontology learning, as a research area, proposes techniques to automate several tasks of the ontology construction process to simplify the tedious work of manually building ontologies. In this paper we present the state of the art of this field. Different classes of approaches are covered (linguistic, statistical, and machine learning), including some recent ones (deep-learning-based approaches). In addition, some relevant solutions (frameworks), which offer strategies and built-in methods for ontology learning, are presented. A descriptive summary is made to point out the capabilities of the different contributions based on criteria that have to do with the produced ontology components and the degree of automation. We also highlight the challenge of evaluating ontologies to make them reliable, since it is not a trivial task in this field; it actually represents a research area on its own. Finally, we identify some unresolved issues and open questions.
  • A genetic algorithm feature selection based approach for Arabic Sentiment Classification
    (IEEE Computer Society, 2016-11-29) Aliane, Hassina; Aliane, A.A; Ziane, M.; Bensaou, N.
    With the recently increasing interest in opinion mining from different research communities, there is an evolving body of work on Arabic sentiment analysis. Few polarity-annotated datasets are available for this language, so most existing works use these datasets to test the best-known supervised algorithms for their objectives. Naïve Bayes and SVM are the best-reported algorithms in the Arabic sentiment analysis literature. The work described in this paper shows that using a genetic algorithm for feature selection and enhancing the quality of the training dataset significantly improves the accuracy of the learning algorithm. We use the LABR dataset of book reviews and compare our results with those of LABR's authors.
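    As a concrete illustration of the idea, here is a hedged sketch of genetic-algorithm feature selection scored with a Naïve Bayes classifier; the synthetic data, the GaussianNB fitness, and the population size and operators are illustrative choices, not the paper's configuration or the LABR features.

```python
# Hedged sketch: a tiny genetic algorithm that evolves binary feature masks and
# scores them with a Naive Bayes classifier. All GA settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    # Cross-validated accuracy of Naive Bayes on the selected features.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

def evolve(pop_size=20, generations=15, mutation_rate=0.05):
    pop = rng.integers(0, 2, (pop_size, X.shape[1]))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(X.shape[1]) < mutation_rate        # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_mask, best_score = evolve()
print("selected features:", int(best_mask.sum()), "CV accuracy:", round(best_score, 3))
```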
  • Automatic Construction of Ontology from Arabic Texts
    (Université Djillali LIABES Sidi-Bel-Abbès, 2012-04-29) Mazari, Ahmed Cherif; Aliane, Hassina; Alimazighi, Zaia
    The work presented in this paper concerns the construction of a domain ontology for Arabic linguistics. We propose an automatic construction approach that uses statistical techniques to extract ontology elements from Arabic texts. Among these techniques, we use two: the first, "repeated segments", identifies the relevant terms that denote the concepts of the domain; the second, "co-occurrence", links these newly extracted concepts to the ontology through hierarchical or non-hierarchical relations. The processing is done on a corpus of Arabic texts built and prepared in advance.
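    The two statistical steps can be illustrated with a hedged sketch: word n-grams repeated above a threshold as candidate terms ("repeated segments"), and sentence-level co-occurrence counts as candidate relations; the thresholds and the tiny English corpus are stand-ins for the Arabic corpus and the paper's exact measures.

```python
# Hedged sketch: "repeated segments" = word n-grams that recur above a threshold;
# "co-occurrence" = candidate terms appearing in the same sentence.
from collections import Counter
from itertools import combinations

corpus = [
    "the noun phrase is a syntactic unit",
    "a noun phrase contains a noun",
    "the verb phrase follows the noun phrase",
]

def repeated_segments(sentences, n=2, min_count=2):
    # Count word n-grams and keep those repeated at least min_count times.
    counts = Counter()
    for sent in sentences:
        words = sent.split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {seg for seg, c in counts.items() if c >= min_count}

def cooccurrence(sentences, terms):
    # Count pairs of candidate terms that appear in the same sentence.
    pairs = Counter()
    for sent in sentences:
        present = [t for t in terms if " ".join(t) in sent]
        pairs.update(frozenset(p) for p in combinations(sorted(present), 2))
    return pairs

terms = repeated_segments(corpus)                  # candidate concepts
relations = cooccurrence(corpus, terms)            # candidate relation links
print(terms)
print(relations.most_common(3))
```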
  • Error Drift Compensation for Data Hiding of the H.264/AVC
    (Romanian Society of Control Engineering and Technical Informatics, 2013) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina
    The error propagation problem is one of the most important issues in data hiding for compressed video, because achieving several data hiding properties remains dependent on it. In this paper, a solution to compensate for error propagation is proposed for data hiding in H.264/AVC. The error compensation is performed by first measuring the error introduced in the watermarked block or in the neighbouring blocks. Two schemes are proposed. The first algorithm watermarks paired coefficients in each block in order to bring the error to the middle of the block matrix; the distortion caused by each coefficient pair is calculated so as to give watermarking priority to the pairs that introduce the minimum error. In the second scheme, the error estimated in the neighbouring blocks is subtracted from the residuals during the encoding process. In both schemes, results show that an important improvement in video quality can be achieved and that a good compromise is obtained between video distortion, bitrate, and embedding capacity.
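    As a toy illustration of the paired-coefficient idea, the sketch below ranks coefficient pairs of a 4x4 quantized residual block by an estimated distortion and embeds a bit in the least damaging pair; the distortion proxy, the pairing, and the embedding rule are simplifications, not the paper's H.264/AVC algorithm.

```python
# Hedged sketch: rank coefficient pairs of a 4x4 quantized residual block by the
# distortion a +/-1 embedding change would cause, then embed one bit per pair.
# The distortion proxy and the pairing/embedding rules are toy choices.
import numpy as np

block = np.array([[12, 5, 0, 0],
                  [ 3, 1, 0, 0],
                  [ 1, 0, 0, 0],
                  [ 0, 0, 0, 0]])                  # quantized 4x4 residual block

# Candidate pairs of coefficient positions (illustrative, not the paper's pairing).
pairs = [((0, 1), (1, 0)), ((1, 1), (2, 0)), ((0, 0), (1, 1))]

def pair_distortion(block, pair):
    # Proxy: change in squared magnitude if +1/-1 is applied to the two coefficients.
    (r1, c1), (r2, c2) = pair
    return ((block[r1, c1] + 1) ** 2 - block[r1, c1] ** 2
            + (block[r2, c2] - 1) ** 2 - block[r2, c2] ** 2)

ranked = sorted(pairs, key=lambda p: abs(pair_distortion(block, p)))

def embed_bit(block, pair, bit):
    # Toy rule: raise the first coefficient for bit 1, lower it for bit 0, and
    # apply the opposite change to its partner so the pair stays balanced.
    out = block.copy()
    (r1, c1), (r2, c2) = pair
    delta = 1 if bit else -1
    out[r1, c1] += delta
    out[r2, c2] -= delta
    return out

watermarked = embed_bit(block, ranked[0], bit=1)   # embed in the least damaging pair
print("chosen pair:", ranked[0])
print(watermarked)
```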
  • Système de Productions de Contenus Pédagogiques Multilingues (A Production System for Multilingual Pedagogical Content)
    (CERIST, 2014-01-29) Boughacha, Rime; Aliane, Hassina
    Research on computer-based learning environments addresses the principles of designing, developing, and evaluating computer systems that enable learners to learn using ICTs. Indeed, to mediatize the knowledge to be transmitted, the teacher must become a producer of pedagogical materials. Our objective in this work is to propose a generic, modular model for designing multilingual pedagogical content that describes the set of tasks assigned to authors. To this end, we propose a content production system based on a generic content design model that distinguishes several stages, notably content design, indexing, and mediatization.
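    Below is a hedged sketch of what a generic, modular content model separating content design, indexing, and mediatization might look like as data structures, assuming Python dataclasses; all field names and the toy course are illustrative and are not taken from the system described above.

```python
# Hedged sketch: a minimal data model separating content design, indexing, and
# mediatization for multilingual pedagogical content. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ContentUnit:
    title: dict[str, str]                                 # language code -> title
    body: dict[str, str]                                  # language code -> text
    keywords: list[str] = field(default_factory=list)     # indexing metadata
    media: list[str] = field(default_factory=list)        # mediatization assets

@dataclass
class Course:
    author: str
    units: list[ContentUnit] = field(default_factory=list)

    def index(self) -> dict[str, list[str]]:
        # Simple inverted index: keyword -> titles of units mentioning it.
        idx: dict[str, list[str]] = {}
        for unit in self.units:
            for kw in unit.keywords:
                idx.setdefault(kw, []).append(next(iter(unit.title.values())))
        return idx

course = Course(author="author-1")
course.units.append(ContentUnit(
    title={"fr": "Introduction", "ar": "مقدمة", "en": "Introduction"},
    body={"en": "Course overview..."},
    keywords=["overview"],
    media=["slides/intro.pdf"],
))
print(course.index())
```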
  • Une approche pour l’opérationnalisation d’ontologies fondée sur les graphes conceptuels (An Approach to Ontology Operationalization Based on Conceptual Graphs)
    (CERIST, 2014) Bourai, Fouzia; Aliane, Hassina
    Operationalizing an ontology consists in expressing it in an operational (computational) language so that reasoning and inference can be performed. In this article, we propose our approach to ontology operationalization based on conceptual graphs. Our approach allows the creation of ontologies based on conceptual graphs, as well as the import and transformation of existing ontologies described in OWL. Once the operational ontology is obtained, it can be viewed as a knowledge base to which we add an inference mechanism enabling deductive reasoning.
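    As a toy illustration of operationalization, the sketch below stores an ontology as conceptual-graph-style triples and applies a small forward-chaining step for deductive reasoning; the triple representation and the two rules are stand-ins for the conceptual graph formalism and the OWL import described above.

```python
# Hedged sketch: facts as (concept, relation, concept) triples and a tiny
# forward-chaining step that derives new facts from transitive "is-a" links.
facts = {
    ("Conference Paper", "is-a", "Publication"),
    ("Publication", "is-a", "Document"),
    ("paper42", "instance-of", "Conference Paper"),
}

def forward_chain(facts):
    """Apply two toy deduction rules until no new fact is produced."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in derived:
            for (c, r2, d) in derived:
                # Rule 1: is-a is transitive.
                if r1 == r2 == "is-a" and b == c:
                    new.add((a, "is-a", d))
                # Rule 2: instances inherit is-a links as instance-of.
                if r1 == "instance-of" and r2 == "is-a" and b == c:
                    new.add((a, "instance-of", d))
        if not new <= derived:
            derived |= new
            changed = True
    return derived

for fact in sorted(forward_chain(facts) - facts):
    print("derived:", fact)
```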