Browsing by Author "Aliane, Hassina"
Now showing 1 - 20 of 38
- A comparative study between compressed video watermarking methods based on DCT coefficients and intra prediction (2011-04-20). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  Several watermarking methods have been applied to the newest video codec, H.264/AVC, to serve applications such as authentication, tamper proofing, and copyright protection. Our objective in this paper is to present a comparative study between watermarking methods based on the quantized DCT (QDCT) coefficients and those based on the intra prediction of the 4x4 luma blocks, in terms of embedding capacity, video quality, and bitrate. The use of intra prediction modes is attractive because a relatively high embedding capacity can be achieved while preserving video quality; however, maintaining the bitrate appears difficult. In this paper we show that the intra-prediction-based method outperforms the QDCT-based method under the same codec configuration.
- A comparative study between compressed video watermarking methods based on DCT coefficients and intra prediction (CERIST, 2011-09). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  Several watermarking methods have been applied to the newest video codec, H.264/AVC, to serve applications such as authentication, tamper proofing, and copyright protection. Our objective in this paper is to present a comparative study between watermarking methods based on the quantized DCT (QDCT) coefficients and those based on the intra prediction of the 4x4 luma blocks, in terms of embedding capacity, video quality, and bitrate. The use of intra prediction modes is attractive because a relatively high embedding capacity can be achieved while preserving video quality; however, maintaining the bitrate appears difficult. In this paper we show that the intra-prediction-based method outperforms the QDCT-based method under the same codec configuration.
- A genetic algorithm feature selection based approach for Arabic Sentiment Classification (IEEE Computer Society, 2016-11-29). Aliane, Hassina; Aliane, A.A.; Ziane, M.; Bensaou, N.
  With the recently increasing interest in opinion mining across research communities, there is a growing body of work on Arabic sentiment analysis. Few polarity-annotated datasets are available for this language, so most existing works use these datasets to test the best-known supervised algorithms for their objectives. Naïve Bayes and SVM are the best-reported algorithms in the Arabic sentiment analysis literature. The work described in this paper shows that using a genetic algorithm to select features and enhancing the quality of the training dataset significantly improve the accuracy of the learning algorithm. We use the LABR dataset of book reviews and compare our results with those of LABR's authors.
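The genetic-algorithm feature selection described in this abstract can be sketched as follows. This is a minimal illustration on invented synthetic numeric data (not the LABR reviews), with a toy nearest-centroid fitness standing in for the paper's Naïve Bayes/SVM learners; the population size, mutation rate, and other GA parameters are assumptions, not the paper's settings.

```python
import random

random.seed(0)

def make_data(n=40, d=10):
    """Synthetic samples: only features 0-2 carry the class signal."""
    X, y = [], []
    for i in range(n):
        label = i % 2
        row = [(label + random.gauss(0, 0.3)) if j < 3 else random.gauss(0, 1)
               for j in range(d)]
        X.append(row)
        y.append(label)
    return X, y

def accuracy(X, y, mask):
    """Training accuracy of a nearest-centroid classifier restricted to
    the features selected by the binary mask (the GA's fitness)."""
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cent = {}
    for c in (0, 1):
        idx = [i for i in range(len(X)) if y[i] == c]
        cent[c] = [sum(X[i][j] for i in idx) / len(idx) for j in feats]
    correct = 0
    for i, row in enumerate(X):
        d0 = sum((row[j] - cent[0][k]) ** 2 for k, j in enumerate(feats))
        d1 = sum((row[j] - cent[1][k]) ** 2 for k, j in enumerate(feats))
        correct += int((d1 < d0) == (y[i] == 1))
    return correct / len(X)

def ga_select(X, y, d=10, pop=20, gens=15):
    """Evolve binary feature masks: elitist selection, one-point
    crossover, bit-flip mutation; fitness is classifier accuracy."""
    popn = [[random.randint(0, 1) for _ in range(d)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: accuracy(X, y, m), reverse=True)
        elite = popn[:pop // 2]                 # keep the best half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, d)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # bit-flip mutation
                child[random.randrange(d)] ^= 1
            children.append(child)
        popn = elite + children
    return max(popn, key=lambda m: accuracy(X, y, m))

X, y = make_data()
best = ga_select(X, y)
print(best, round(accuracy(X, y, best), 3))
```

With this setup the search reliably converges on masks that include the informative features, since elitism never discards the best mask found so far.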
- A Java-Web-Based-Learning Methodology, Case Study: Waterborne Diseases (CERIST, Alger, 2001). Benmeziane, Souad; Aliane, Hassina; Allouti, F.; Khelalfa, H. M.
  Recent developments in Web technologies have given rise to new approaches to computer-assisted instruction: learners can study free of time and place constraints. Several Web-based systems have been developed, but they remain limited: given the Web's lack of interactivity, they can neither monitor nor guide the learner. In this article, we present a Java-based approach for designing instructional systems that overcome the limits of existing systems. This approach offers individualized distance learning that respects the learner's pace and thus supports a high level of interactivity. A case study on waterborne diseases is presented.
- Al-Khalil: The Arabic Linguistic Ontology Project (CERIST, 2010). Aliane, Hassina.
  We present in this paper our project to build an ontology-centered infrastructure for Arabic resources and applications. The core of this infrastructure is a linguistic ontology founded on Arabic Traditional Grammar. The methodology we have chosen consists in reusing an existing ontology, namely the GOLD linguistic ontology. We discuss the development of the ontology and present our vision for the whole project, which aims at using this ontology to create tools and resources for both linguists and NLP researchers.
- Une Approche Non supervisée pour la Découverte Automatique des Morphèmes de la Langue Arabe (CERIST, 2012). Aliane, Hassina; Alimazighi, Zaia.
  We present in this article an unsupervised approach for automatically discovering the morphemes of the Arabic language from an electronic corpus of raw text, using neither a lexicon nor rules and calling only on a minimum of general knowledge about the language. Our approach is founded on distributional analysis as practiced in the Arabic Grammatical Tradition, which we also compare to Harrisian distributional analysis.
- Une approche pour l'opérationnalisation d'ontologies fondée sur les graphes conceptuels (2014-03-10). Bourai, Fouzia; Aliane, Hassina.
  Ontology operationalization consists in expressing an ontology in an operational (computational) language so that reasoning and inference can be performed. In this article we propose our approach to ontology operationalization based on conceptual graphs. Our approach allows the creation of ontologies based on conceptual graphs, as well as the import and transformation of existing ontologies described in OWL. Once the operational ontology is obtained, it can be viewed as a knowledge base to which we add an inference mechanism enabling deductive reasoning.
- Une approche pour l'opérationnalisation d'ontologies fondée sur les graphes conceptuels (CERIST, 2014). Bourai, Fouzia; Aliane, Hassina.
  Ontology operationalization consists in expressing an ontology in an operational (computational) language so that reasoning and inference can be performed. In this article we propose our approach to ontology operationalization based on conceptual graphs. Our approach allows the creation of ontologies based on conceptual graphs, as well as the import and transformation of existing ontologies described in OWL. Once the operational ontology is obtained, it can be viewed as a knowledge base to which we add an inference mechanism enabling deductive reasoning.
- ArA*summarizer: An Arabic text summarization system based on subtopic segmentation and using an A* algorithm for reduction (Wiley, 2020-04-19). Bahloul, Belahcene; Aliane, Hassina; Benmohammed, Mohamed.
  Automatic text summarization is a field at the intersection of natural language processing and information retrieval. Its main objective is to automatically produce a condensed, representative form of documents. This paper presents ArA*summarizer, an automatic system for Arabic single-document summarization. The system is based on an unsupervised hybrid approach that combines statistical, cluster-based, and graph-based techniques. The main idea is to divide the text into subtopics, then select the most relevant sentences from the most relevant subtopics. The selection is performed by an A* algorithm executed on a graph representing the lexical-semantic relationships between sentences. Experimentation is conducted on the Essex Arabic summaries corpus using the recall-oriented understudy for gisting evaluation (ROUGE), automatic summarization engineering (AutoSummENG), merged model graphs (MeMoG), and n-gram graph powered evaluation via regression (NPowER) evaluation metrics. The evaluation results show the good performance of our system compared with existing works.
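As a toy illustration of sentence selection via A*, the sketch below picks a maximum-relevance subset of scored sentences under a word budget. The sentences, scores, and budget are invented, and the forgone-score cost with a zero (admissible) heuristic is a simplification of the paper's graph-based formulation, not its actual algorithm.

```python
import heapq

# Invented toy sentences with relevance scores (stand-ins for the
# subtopic/graph scores computed by the actual system).
sentences = [("Topic sentence one.", 3.0), ("Filler detail.", 0.5),
             ("Key finding.", 2.5), ("Minor aside.", 0.4),
             ("Strong conclusion.", 2.0)]

def astar_summary(sents, budget):
    """Best-first search over (index, words-used) states.
    Cost g = relevance score forgone so far; heuristic h = 0
    (optimistically assume no further sentence must be dropped), so h is
    admissible and the first fully decided state popped is optimal."""
    heap = [(0.0, 0, 0, ())]  # (forgone score, next index, words used, chosen)
    while heap:
        forgone, i, used, chosen = heapq.heappop(heap)
        if i == len(sents):               # every sentence decided: optimal goal
            return chosen, sum(sents[j][1] for j in chosen)
        text, score = sents[i]
        # Option 1: drop sentence i, paying its score as cost.
        heapq.heappush(heap, (forgone + score, i + 1, used, chosen))
        # Option 2: keep it, if the summary's word budget allows.
        w = len(text.split())
        if used + w <= budget:
            heapq.heappush(heap, (forgone, i + 1, used + w, chosen + (i,)))
    return (), 0.0

chosen, score = astar_summary(sentences, budget=8)
print([sentences[i][0] for i in chosen], score)
```

With this toy data the search keeps sentences 0, 2, and 4 (7 words, total relevance 7.5), the best subset that fits the 8-word budget.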
- AraCOVID19-MFH: Arabic COVID-19 Multi-label Fake News & Hate Speech Detection Dataset (Elsevier, 2021). Hadj Ameur, Mohamed Seghir; Aliane, Hassina.
  Along with the COVID-19 pandemic, an "infodemic" of false and misleading information has emerged and has complicated the COVID-19 response efforts. Social networking sites such as Facebook and Twitter have contributed largely to the spread of rumors, conspiracy theories, hate, xenophobia, racism, and prejudice. To combat the spread of fake news, researchers around the world have made and are still making considerable efforts to build and share COVID-19-related research articles, models, and datasets. This paper releases "AraCOVID19-MFH", a manually annotated multi-label Arabic COVID-19 fake news and hate speech detection dataset. Our dataset contains 10,828 Arabic tweets annotated with 10 different labels. The labels have been designed to cover aspects relevant to the fact-checking task, such as a tweet's check-worthiness, positivity/negativity, and factuality. To confirm our annotated dataset's practical utility, we used it to train and evaluate several classification models and report the obtained results. Though the dataset is mainly designed for fake news detection, it can also be used for hate speech detection, opinion/news classification, dialect identification, and many other tasks.
- Automatic Construction of Ontology from Arabic Texts (Université Djillali LIABES Sidi-Bel-Abbès, 2012-04-29). Mazari, Ahmed Cherif; Aliane, Hassina; Alimazighi, Zaia.
  The work presented in this paper concerns the building of a domain ontology for Arabic linguistics. We propose an automatic construction approach that uses statistical techniques to extract ontology elements from Arabic texts. Among these techniques we use two: the first, "repeated segment", identifies the relevant terms that denote concepts associated with the domain; the second, "co-occurrence", links these newly extracted concepts to the ontology by hierarchical or non-hierarchical relations. The processing is done on a corpus of Arabic texts compiled and prepared in advance.
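The two statistical techniques named in this abstract can be sketched on a tiny English stand-in corpus (the actual system processes Arabic text). The corpus, the n-gram length, and the frequency threshold below are invented for illustration only.

```python
from collections import Counter
from itertools import combinations

# Tiny English stand-in corpus; each string is one sentence.
corpus = [
    "the noun phrase modifies the verb phrase",
    "a verb phrase contains a verb",
    "the noun phrase precedes the verb phrase",
    "every noun phrase has a head noun",
]

def repeated_segments(sents, n=2, min_count=2):
    """'Repeated segment': candidate terms are word n-grams that occur
    at least min_count times across the corpus."""
    counts = Counter()
    for s in sents:
        toks = s.split()
        for i in range(len(toks) - n + 1):
            counts[" ".join(toks[i:i + n])] += 1
    return {seg for seg, c in counts.items() if c >= min_count}

def cooccurrence(sents, terms):
    """'Co-occurrence': link two candidate terms whenever they appear
    in the same sentence, counting how often the link is observed."""
    links = Counter()
    for s in sents:
        present = [t for t in terms if t in s]
        for a, b in combinations(sorted(present), 2):
            links[(a, b)] += 1
    return links

terms = repeated_segments(corpus)
links = cooccurrence(corpus, terms)
print(terms)
print(links.most_common(3))
```

On this corpus, "noun phrase" and "verb phrase" surface as candidate concepts, and their frequent co-occurrence suggests a relation between them; a real pipeline would then decide whether the relation is hierarchical or not.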
- Cerist News (2010-12). CERIST; Aliane, Hassina.
- Error Drift Compensation for Data Hiding of the H.264/AVC (Romanian Society of Control Engineering and Technical Informatics, 2013). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  The error propagation problem has attracted considerable attention in the field of data hiding in compressed video, because achieving several data hiding properties remains dependent on it. In this paper, a solution to compensate the error propagation is proposed for data hiding in H.264/AVC. The error compensation is performed by a prior measurement of the error introduced in the watermarked block or in the neighbouring blocks. Two schemes are proposed. The first exploits the method of watermarking paired coefficients in each block in order to bring the error to the middle of the block matrix; the distortion caused by each coefficient pair is calculated in order to give watermarking priority to the pairs that introduce the minimum error. In the second scheme, the error estimated in the neighbouring blocks is subtracted from the residuals during the encoding process. In both schemes, results show that a significant improvement of the video quality can be achieved, with a good compromise between video distortion, bitrate, and embedding capacity.
- Error Drift Compensation for Data Hiding of the H.264/AVC (CERIST, 2013). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  The error propagation problem has attracted considerable attention in the field of data hiding in compressed video, because achieving several data hiding properties remains dependent on it. In this paper, a solution to compensate the error propagation is proposed for data hiding in H.264/AVC. The error compensation is performed by a prior measurement of the error introduced in the watermarked block or in the neighbouring blocks. Two schemes are proposed. The first exploits the method of watermarking paired coefficients in each block in order to bring the error to the middle of the block matrix; the distortion caused by each coefficient pair is calculated in order to give watermarking priority to the pairs that introduce the minimum error. In the second scheme, the error estimated in the neighbouring blocks is subtracted from the residuals during the encoding process. In both schemes, results show that a significant improvement of the video quality can be achieved, with a good compromise between video distortion, bitrate, and embedding capacity.
- Face and kinship image based on combination descriptors-DIEDA for large scale features (IEEE, 2018-12-30). Aliradi, Rachid; Belkhir, Abdelkader; Ouamane, Abdelmalik; Aliane, Hassina.
  In this paper, we introduce an efficient linear similarity learning system for face verification. Humans easily recognize each other by their faces, and since facial features are robust to varying illumination and expression, the face remains a key modality for recognition. Verification refers to the task of teaching a machine to recognize a pair of matching and non-matching faces (kin or non-kin) based on features extracted from facial images, and to determine the degree of their similarity. Real problems arise when discriminative features are used in traditional kernel verification systems: concentration on local information zones, noise in non-face regions, redundant information in overlapping zones of certain blocks, manual adjustment of parameters, and high-dimensional vectors. To address these problems, we propose a robust face verification method that combines large-scale local features with Discriminative Information based on Exponential Discriminant Analysis (DIEDA). The projected histograms for each zone are scored using discriminative metric learning. Finally, the region scores corresponding to different descriptors at various scales are fused using a Support Vector Machine (SVM) classifier. Compared with relevant state-of-the-art work, this system improves learning efficiency while maintaining effectiveness. The experimental results show that both initializations are efficient and outperform other state-of-the-art techniques.
- H.264/AVC Data Hiding Algorithm with a Limitation of the Drift Distortion (2012-10-21). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  Many data hiding algorithms have been proposed for the latest video codec, H.264/AVC; most of them are based on the 4x4 luma DCT coefficients. However, drift distortion is the main factor limiting the embedding capacity of data hiding algorithms based on DCT coefficients. Few methods have been proposed to compensate or eliminate the error propagation, and those that exist are either non-blind, only detectable, or require prior knowledge of the encoded blocks, and thus cannot be used for real-time broadcasting. In this paper we show that it is possible to considerably reduce the error propagation for real-time applications. The proposed algorithm exploits the method of watermarking paired coefficients in each block in order to bring the error to the middle of the block matrix. We evaluate the distortion caused by each coefficient pair in order to give watermarking priority to the pairs that introduce the minimum error. The proposed scheme offers a very good compromise between video distortion, increase in bitrate, and embedding capacity.
- H.264/AVC Data Hiding Algorithm with a Limitation of the Drift Distortion (CERIST, 2012). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  Many data hiding algorithms have been proposed for the latest video codec, H.264/AVC; most of them are based on the 4x4 luma DCT coefficients. However, drift distortion is the main factor limiting the embedding capacity of data hiding algorithms based on DCT coefficients. Few methods have been proposed to compensate or eliminate the error propagation, and those that exist are either non-blind, only detectable, or require prior knowledge of the encoded blocks, and thus cannot be used for real-time broadcasting. In this paper we show that it is possible to considerably reduce the error propagation for real-time applications. The proposed algorithm exploits the method of watermarking paired coefficients in each block in order to bring the error to the middle of the block matrix. We evaluate the distortion caused by each coefficient pair in order to give watermarking priority to the pairs that introduce the minimum error. The proposed scheme offers a very good compromise between video distortion, increase in bitrate, and embedding capacity.
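The pair-priority idea in this abstract can be sketched as follows: estimate the distortion each candidate coefficient pair would introduce, then watermark the lowest-distortion pairs first. The per-position weight table and the additive distortion model below are invented simplifications, not the paper's actual error measurement.

```python
# Assumed per-position impact weights for a 4x4 block of quantized DCT
# coefficients: modifying a low-frequency coefficient (top-left) is
# taken to spread more pixel error than a high-frequency one.
WEIGHTS = [[4, 3, 2, 1],
           [3, 3, 2, 1],
           [2, 2, 2, 1],
           [1, 1, 1, 1]]

def pair_priority(pairs):
    """Sort candidate coefficient pairs by ascending estimated
    distortion, so the pairs introducing the minimum error are
    watermarked first."""
    def distortion(pair):
        (r1, c1), (r2, c2) = pair
        # A +1/-1 change split across the pair is meant to pull the
        # accumulated drift toward the middle of the block matrix; here
        # we simply sum the two positions' assumed impacts.
        return WEIGHTS[r1][c1] + WEIGHTS[r2][c2]
    return sorted(pairs, key=distortion)

# Three hypothetical candidate pairs, as (row, col) coordinate tuples.
candidates = [((0, 1), (1, 0)), ((1, 1), (2, 2)), ((0, 3), (3, 0))]
print(pair_priority(candidates))
```

Here the high-frequency pair ((0, 3), (3, 0)) gets first priority (total weight 2), while the low-frequency pair ((0, 1), (1, 0)) is last (total weight 6).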
- H.264/AVC data hiding based on Intra prediction modes for real time applications (CERIST, 2012). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  A compromise is usually made between the different constraints of video data hiding methods, and it seems harder to achieve for real-time applications. In this paper, a new data hiding approach is presented for the video codec H.264/AVC. The method uses the intra prediction modes of the 4x4 luminance blocks. The objective is to ensure a high data embedding capacity while preserving the encoding and decoding times in order to satisfy real-time applications. Data embedding is based on changing a mode to one with the closest direction. Good results have been obtained in terms of increased embedding capacity, maintained visual quality, and limited additional processing time.
- H.264/AVC Data Hiding Based on Intra Prediction Modes for Real-time Applications (2012-10-24). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  Existing data hiding methods for the newest video codec, H.264/AVC, exploit several of its modules, such as the discrete cosine transform coefficients or the prediction modes. In this paper, a new data hiding approach is presented that exploits the intra prediction modes of the 4x4 luminance blocks. The objective is to ensure a relatively high embedding capacity while preserving the encoding and decoding times in order to satisfy real-time applications. The intra prediction modes are divided into four groups, each composed of modes with close prediction directions. Data embedding is based on modifying modes within the same group, in order to maintain visual quality and limit the number of additional calculations. The increase in embedding capacity relies on the group composed of four modes, since it allows the embedding of two bits per mode.
- Increase of embedding capacity of the H.264/AVC data hiding based on the intra-prediction modes (CERIST, 2011-09). Bouchama, Samira; Hamami, Latifa; Aliane, Hassina.
  Data hiding methods applied to the newest video codec, H.264/AVC, exploit several modules of this codec, such as the discrete cosine transform (DCT) coefficients and the intra prediction modes. The objective is to achieve a good compromise between embedding capacity, increase in bitrate, and video quality, so as to satisfy applications such as authentication or covert communication. In this paper we present a new approach that exploits the intra prediction modes for data hiding in the 4x4 luminance blocks. The objective is to ensure a good data embedding capacity while preserving the encoding and decoding times to satisfy real-time applications. The proposed method divides the intra prediction modes into four groups, each composed of modes with close prediction directions. The secret data is embedded by applying modifications between modes of the same group, in order to maintain visual quality and limit the number of additional calculations. The increase in embedding capacity relies on the group composed of four modes, since it allows the embedding of two bits per mode instead of one.
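The group-based embedding described in the intra-prediction abstracts above can be sketched as follows. H.264/AVC defines nine 4x4 luma intra prediction modes (0-8); the partition into groups below is an illustrative grouping of directionally close modes, not necessarily the exact one used in these papers. A group of 2^k modes carries k payload bits per mode, so the four-mode group carries 2 bits.

```python
# Illustrative grouping of the nine 4x4 intra prediction modes into
# directionally close sets (assumed, not the papers' exact partition):
GROUPS = [
    [0, 7],        # vertical, vertical-left: 1 bit per mode
    [1, 8],        # horizontal, horizontal-up: 1 bit per mode
    [3, 4, 5, 6],  # four diagonal-ish modes: 2 bits per mode
    [2],           # DC: carries no payload
]

def embed(mode, bits):
    """Replace `mode` with the member of its group whose index encodes
    the leading payload bits; return (new mode, bits embedded)."""
    for group in GROUPS:
        if mode in group:
            capacity = len(group).bit_length() - 1  # bits the group holds
            if capacity == 0:
                return mode, 0                      # DC: nothing embedded
            value = int(bits[:capacity], 2)
            return group[value], capacity

def extract(mode):
    """Recover the embedded bits from the mode's index in its group."""
    for group in GROUPS:
        if mode in group:
            capacity = len(group).bit_length() - 1
            if capacity == 0:
                return ""
            return format(group.index(mode), "0{}b".format(capacity))

new_mode, n = embed(3, "10")        # original mode 3, payload bits "10"
print(new_mode, extract(new_mode))
```

Because the replacement mode stays within a group of close prediction directions, the change to the predicted block (and hence the visual quality and residual cost) stays small, which is the compromise these papers exploit.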