International Conference Papers
Browsing International Conference Papers by Author "Aliane, Hassina"
- Item: A genetic algorithm feature selection based approach for Arabic Sentiment Classification (IEEE Computer Society, 2016-11-29) Aliane, Hassina; Aliane, A. A.; Ziane, M.; Bensaou, N.
  With the recently increasing interest in opinion mining from different research communities, there is an evolving body of work on Arabic Sentiment Analysis. Few polarity-annotated datasets are available for this language, so most existing works use these datasets to test the best-known supervised algorithms for their objectives. Naïve Bayes and SVM are the best reported algorithms in the Arabic sentiment analysis literature. The work described in this paper shows that using a genetic algorithm to select features and enhancing the quality of the training dataset significantly improves the accuracy of the learning algorithm. We use the LABR dataset of book reviews and compare our results with those of LABR's authors.
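To make the idea concrete, here is a minimal sketch of genetic-algorithm feature selection wrapped around a Naïve Bayes sentiment classifier. It is an illustration only: the toy documents, population size, fitness function (cross-validated accuracy) and GA operators are assumptions, not the authors' exact setup or the LABR data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

# Toy stand-in for the LABR reviews: 3 positive and 3 negative examples.
docs = [
    "great book loved the story", "wonderful characters highly recommended",
    "excellent plot and beautiful style", "boring book wasted my time",
    "terrible plot and weak characters", "poor writing very disappointing",
]
labels = np.array([1, 1, 1, 0, 0, 0])

X = CountVectorizer().fit_transform(docs)
n_features = X.shape[1]
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated Naive Bayes accuracy on the selected feature subset."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return cross_val_score(MultinomialNB(), X[:, cols], labels, cv=3).mean()

def evolve(pop_size=20, generations=10, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # Tournament selection: keep the fitter of two random individuals.
        winners = [max(rng.integers(0, pop_size, 2), key=lambda i: fit[i])
                   for _ in range(pop_size)]
        parents = pop[winners]
        # One-point crossover between consecutive parents.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_features)
            children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
        # Bit-flip mutation.
        children = np.where(rng.random(children.shape) < p_mut, 1 - children, children)
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()]

best = evolve()
print("selected", int(best.sum()), "of", n_features, "features, accuracy", fitness(best))
```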
- Item: Une approche pour l'opérationnalisation d'ontologies fondée sur les graphes conceptuels (2014-03-10) Bourai, Fouzia; Aliane, Hassina
  Operationalizing an ontology consists in expressing it in an operational (computational) language so that reasoning and inference can be performed. In this article we propose our approach to ontology operationalization based on conceptual graphs. Our approach supports the creation of ontologies based on conceptual graphs as well as the import and transformation of existing ontologies described in OWL. Once the operational ontology is obtained, it can be viewed as a knowledge base to which we add an inference mechanism enabling deductive reasoning.
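As a loose illustration of the kind of deductive inference an operational ontology enables, the toy snippet below derives implicit is-a facts by transitive closure. It does not reproduce the authors' conceptual-graph formalism or their OWL import; the concept names and the single rule are invented for the example.

```python
from collections import defaultdict

# Toy illustration of deductive inference over an operational ontology:
# explicit "is-a" assertions plus a transitive-closure rule derive the
# implicit facts. Concept names are invented for the example.
is_a = defaultdict(set)

def assert_is_a(child, parent):
    is_a[child].add(parent)

def ancestors(concept, seen=None):
    """All concepts reachable through the is-a relation (deduced facts)."""
    seen = set() if seen is None else seen
    for parent in is_a[concept]:
        if parent not in seen:
            seen.add(parent)
            ancestors(parent, seen)
    return seen

assert_is_a("Cat", "Mammal")
assert_is_a("Mammal", "Animal")
print(ancestors("Cat"))   # -> {'Mammal', 'Animal'}
```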
- Item: Automatic Construction of Ontology from Arabic Texts (Université Djillali LIABES Sidi-Bel-Abbès, 2012-04-29) Mazari, Ahmed Cherif; Aliane, Hassina; Alimazighi, Zaia
  The work presented in this paper concerns the building of a domain ontology for Arabic linguistics. We propose an automatic construction approach that uses statistical techniques to extract ontology elements from Arabic texts. We use two such techniques: the first, "repeated segments", identifies the relevant terms that denote concepts of the domain; the second, "co-occurrence", links these newly extracted concepts to the ontology through hierarchical or non-hierarchical relations. The processing is performed on a corpus of Arabic texts built and prepared in advance.
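A minimal sketch of the two statistics the abstract names, with a toy English corpus standing in for the Arabic texts: "repeated segments" are taken here as frequent word n-grams, and "co-occurrence" as candidate terms appearing in the same sentence. The corpus, thresholds and window are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

corpus = [
    "the noun phrase precedes the verb phrase",
    "the verb phrase contains the verb and its complements",
    "the noun phrase contains the noun and its modifiers",
]

def repeated_segments(sentences, n=2, min_count=2):
    """Frequent word n-grams, used here as candidate domain terms."""
    counts = Counter()
    for s in sentences:
        words = s.split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {seg: c for seg, c in counts.items() if c >= min_count}

def cooccurrences(sentences, terms):
    """Count pairs of candidate terms that appear in the same sentence."""
    pairs = Counter()
    for s in sentences:
        present = [t for t in terms if " ".join(t) in s]
        pairs.update(combinations(sorted(present), 2))
    return pairs

terms = repeated_segments(corpus)
print("candidate terms:", terms)
print("co-occurring pairs:", cooccurrences(corpus, terms))
```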
- Item: Face and kinship image based on combination descriptors-DIEDA for large scale features (IEEE, 2018-12-30) Aliradi, Rachid; Belkhir, Abdelkader; Ouamane, Abdelmalik; Aliane, Hassina
  In this paper, we introduce an efficient linear similarity learning system for face verification. Humans easily recognize each other by their faces, and since facial features are robust to changes in illumination and expression, the face remains a key modality for automatic recognition. Verification refers to the task of teaching a machine to recognize pairs of matching and non-matching faces (kin or non-kin) from features extracted from facial images and to determine their degree of similarity. Traditional kernel-based verification systems that use discriminative features face real problems: they concentrate on local information zones that contain noise in non-face regions and redundant information where blocks overlap, and they require manual tuning of parameters and high-dimensional vectors. To address these problems, we propose a robust face verification method that combines large-scale local features with Discriminative Information based on Exponential Discriminant Analysis (DIEDA). The projected histograms of each zone are scored using discriminative metric learning. Finally, the region scores corresponding to different descriptors at various scales are fused using a Support Vector Machine (SVM) classifier. Compared with relevant state-of-the-art work, this system improves learning efficiency while maintaining effectiveness. The experimental results show that both initializations are efficient and outperform other state-of-the-art techniques.
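The score-level fusion step can be pictured as follows: each face pair is described by the vector of its region/descriptor/scale scores, and an SVM decides match versus non-match. The random scores below are placeholders for the DIEDA zone scores; the kernel choice and train/test split are assumptions, not the paper's protocol.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs, n_scores = 200, 12                  # 12 region/descriptor/scale scores per pair
labels = rng.integers(0, 2, size=n_pairs)    # 1 = match (kin), 0 = non-match
# Placeholder scores: matching pairs get slightly higher similarity on average.
scores = rng.normal(loc=labels[:, None] * 0.5, scale=1.0, size=(n_pairs, n_scores))

X_train, X_test, y_train, y_test = train_test_split(scores, labels, random_state=0)
fusion = SVC(kernel="rbf").fit(X_train, y_train)   # fuse the per-region scores
print("verification accuracy on placeholder scores:", fusion.score(X_test, y_test))
```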
- Item: H.264/AVC Data Hiding Algorithm with a Limitation of the Drift Distortion (2012-10-21) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina
  Many data hiding algorithms have been proposed for the latest video codec, H.264/AVC, most of them based on the 4x4 luma DCT coefficients. However, drift distortion is the main factor limiting the embedding capacity of data hiding algorithms based on DCT coefficients. Few methods have been proposed to compensate for or eliminate this error propagation, and they are either non-blind, only detectable, or require prior knowledge of the encoded blocks, and thus cannot be used for real-time broadcasting. In this paper we show that it is possible to considerably reduce the error propagation for real-time applications. The proposed algorithm watermarks paired coefficients in each block in order to confine the error to the middle of the block matrix. We evaluate the distortion caused by each pair of coefficients in order to give watermarking priority to the pairs that introduce the minimum error. The proposed scheme offers a very good compromise between video distortion, bitrate increase, and embedding capacity.
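A simplified sketch of paired-coefficient embedding in a 4x4 block of quantized DCT coefficients: one coefficient of a pair carries a bit in its parity, and its partner is shifted in the opposite direction so the two changes tend to offset each other. The pair positions and the parity rule are hypothetical; the paper's actual pairing and priority ordering are not reproduced here.

```python
import numpy as np

# Hypothetical coefficient pairs (row, col); the paper selects pairs so that
# the residual error is pushed toward the middle of the 4x4 block.
PAIRS = [((0, 3), (1, 2)), ((3, 0), (2, 1))]

def embed_bit(block, bit, pair):
    (r1, c1), (r2, c2) = pair
    out = block.copy()
    if out[r1, c1] % 2 != bit:            # force the parity of the carrier coefficient
        delta = 1 if out[r1, c1] >= 0 else -1
        out[r1, c1] += delta
        out[r2, c2] -= delta              # opposite shift on the paired coefficient
    return out

def extract_bit(block, pair):
    (r1, c1), _ = pair
    return int(block[r1, c1] % 2)

block = np.array([[12, 5, -3, 2],
                  [ 4, 0,  1, 0],
                  [-2, 1,  0, 0],
                  [ 1, 0,  0, 0]])
marked = embed_bit(block, 1, PAIRS[0])
print(extract_bit(marked, PAIRS[0]))      # -> 1
```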
- Item: H.264/AVC Data Hiding Based on Intra Prediction Modes for Real-time Applications (2012-10-24) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina
  Existing data hiding methods for the newest video codec, H.264/AVC, exploit several of its modules, such as the discrete cosine transform coefficients or the prediction modes. In this paper, a new data hiding approach is presented that exploits the intra prediction modes of the 4x4 luminance blocks. The objective is to ensure a relatively high embedding capacity while preserving the encoding and decoding times in order to satisfy real-time applications. The intra prediction modes are divided into four groups composed of modes with close prediction directions. Data embedding is based on modifying modes within the same group in order to maintain visual quality and limit the number of additional calculations. The increase in embedding capacity relies on the group composed of four modes, since it allows the embedding of two bits per mode.
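The group-based mapping can be sketched as follows. The grouping of the nine 4x4 intra modes shown here is a hypothetical example of "modes with close prediction directions"; only the group sizes matter for capacity (a four-mode group carries two bits, a two-mode group one bit).

```python
# Hypothetical grouping of the nine 4x4 intra prediction modes into
# "close direction" groups; only the group sizes matter for capacity.
GROUPS = [
    [0, 5, 7, 3],  # roughly vertical directions: 4 modes -> 2 bits
    [1, 6],        # roughly horizontal directions: 2 modes -> 1 bit
    [4, 8],        # remaining angular modes: 2 modes -> 1 bit
    [2],           # DC mode left unused in this sketch: 0 bits
]

def group_of(mode):
    return next(g for g in GROUPS if mode in g)

def embed(mode, bits):
    """Replace `mode` by the member of its group indexed by the next bits."""
    g = group_of(mode)
    capacity = len(g).bit_length() - 1   # 4 modes -> 2 bits, 2 -> 1, 1 -> 0
    if capacity == 0:
        return mode, 0                   # this group carries no data
    return g[int(bits[:capacity], 2)], capacity

def extract(mode):
    g = group_of(mode)
    capacity = len(g).bit_length() - 1
    return format(g.index(mode), "0{}b".format(capacity)) if capacity else ""

new_mode, used = embed(7, "11")          # original mode 7, message bits "11"
print(new_mode, extract(new_mode))       # -> 3 11
```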
- Item: Queries reformulation for semantic web services discovery (Ajith Abraham, 2011-10-19) Mekhzoumi, Dalila; Aliane, Hassina; Nouali, Omar
  Semantic web services have been introduced to better describe web services and improve how they operate. Several approaches have been proposed in this field; the most widely used is OWL-S, where discovery consists in matching the user query against the service profile of an OWL-S service. In this paper we present a query reformulation approach for searching semantic web services. We propose three methods to enrich the query by exploiting the semantic relationships given by synonymy and derivational forms in the WordNet ontology. We then select semantic web services by matching the query against the ontologies of OWL-S semantic web services using semantic and syntactic similarity measures.
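Below is a small sketch of the WordNet-based enrichment idea using NLTK's WordNet interface: the query is expanded with synonyms and derivationally related forms. It is one possible rendering of the enrichment step, not the paper's exact three methods or its matching procedure.

```python
# Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def enrich(query):
    expanded = set(query.lower().split())
    for word in list(expanded):
        for synset in wn.synsets(word):
            for lemma in synset.lemmas():
                expanded.add(lemma.name().replace("_", " "))        # synonyms
                for related in lemma.derivationally_related_forms():
                    expanded.add(related.name().replace("_", " "))  # derivatives
    return expanded

print(enrich("book hotel reservation"))
```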
- Item: Reversible data hiding scheme for the H.264/AVC codec (2013-06-24) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina
  Very few reversible data hiding methods have been proposed for compressed video, and particularly for the H.264/AVC video codec, despite the importance of both the reversibility criterion and the codec. Reversible image watermarking techniques, when applied to compressed video, can significantly affect video quality and bitrate; to make them applicable, the embedding capacity is usually reduced. An adaptation is therefore necessary to improve the trade-off between embedding capacity, visual quality, and the bitrate of the watermarked video. In this paper, we investigate introducing a DCT-based reversible data hiding method, initially proposed for compressed images, into the H.264/AVC codec. The embedding is applied during the encoding process to the quantized DCT coefficients of I and P frames. To enhance the embedding capacity, a mapping rule is used to embed three bits in one coefficient. Results show that exploiting the P frames considerably improves the video quality and the embedding capacity.
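As a generic illustration of a reversible "three bits per coefficient" mapping, the sketch below expands a coefficient by a factor of eight and adds the 3-bit value, so both the bits and the original coefficient can be recovered exactly. This is not the paper's actual mapping rule for quantized DCT coefficients, which must also control distortion and bitrate.

```python
def embed3(coeff, bits):
    """Embed a 3-bit string reversibly: expand by 8 and add the bit value."""
    value = int(bits, 2)
    return coeff * 8 + value if coeff >= 0 else coeff * 8 - value

def extract3(marked):
    """Recover the original coefficient and the 3 embedded bits exactly."""
    value = abs(marked) % 8
    coeff = (abs(marked) // 8) * (1 if marked >= 0 else -1)
    return coeff, format(value, "03b")

marked = embed3(-2, "101")
print(marked, extract3(marked))   # -> -21 (-2, '101')
```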
- Item: Système de production de contenus pédagogiques multilingue (2014-03-10) Boughacha, Rime; Aliane, Hassina
  Research on technology-enhanced learning environments addresses the principles of designing, developing, and evaluating computer systems that allow learners to learn using ICTs. Indeed, to mediatize the knowledge to be transmitted, the teacher must become a producer of teaching materials. Our objective in this work is to propose a generic, modular model for designing multilingual educational content that describes the set of tasks assigned to authors. To this end, we propose a content production system based on a generic content design model that distinguishes several stages, notably content design, indexing, and mediatization.
- Item: Watermarking of Compressed Video based on DCT Coefficients and Watermark Preprocessing (2011-03-05) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina
  Given the importance of watermarking compressed video, several watermarking methods have been proposed for authentication, copyright protection, or simply for securely carrying data over the Internet. Applied to the H.264/AVC video standard, these methods are in most cases based on quantized DCT coefficients that are often selected experimentally or at random. In this paper, we introduce a watermarking method based on the DCT coefficients that works in two steps: the first is a watermark preprocessing step based on a similarity measurement, which adapts the watermark as well as possible to the low-frequency carrier coefficients; the second takes advantage of the high-frequency coefficients in order to maintain video quality and reduce the bitrate. Results show that it is possible to achieve a very good compromise between video quality, embedding capacity, and bitrate.
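A hypothetical reading of the preprocessing step: among a few candidate arrangements of the watermark bits, keep the one that already agrees most with the carrier coefficients, so embedding changes as few of them as possible. The parity-based similarity measure and the circular-shift candidates below are assumptions for illustration (the chosen shift would have to be signalled to the decoder).

```python
import numpy as np

rng = np.random.default_rng(1)
carrier = rng.integers(-15, 16, size=16)          # stand-in low-frequency coefficients
watermark = rng.integers(0, 2, size=16)           # watermark bits to embed

def similarity(bits, coeffs):
    """Fraction of coefficients whose parity already matches the bit."""
    return np.mean((np.abs(coeffs) % 2) == bits)

candidates = [np.roll(watermark, k) for k in range(4)]   # e.g. circular shifts
best = max(candidates, key=lambda b: similarity(b, carrier))
print("best arrangement matches", round(similarity(best, carrier) * 100, 1), "% of parities")
```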
- Item: Watermarking techniques applied to H264/AVC video standard (2010-04-21) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina
  Video watermarking is the process of embedding information in video to support applications such as intellectual property protection and video authentication control. In this field, researchers are directing their investigations towards the new H264/AVC video standard, which is increasingly used because of the coding efficiency it provides compared to previous standards. The different modules of this standard are exploited to embed the watermark while meeting application requirements in terms of fragility or robustness to certain attacks and in terms of maintaining video quality and bitrate. The aim of this paper is to give a global view of H264/AVC and to present some robust and fragile watermarking techniques applied to this video standard.