Browsing by Author "Aliradi, Rachid"
Now showing 1 - 7 of 7
- A novel descriptor (LGBQ) based on Gabor filters (Springer, 2023-12-23)
  Aliradi, Rachid; Ouamane, Abdelmalik
  Recently, many automatic facial verification methods have focused on learning optimal distance measurements between faces, in particular by learning facial features through similarity, which can leave the proposed descriptors too weak. To fill this gap, we propose a new descriptor called Local Binary Gabor Quantization (LGBQ) for 3D/2D face verification, based on Gabor filters and a tensor subspace transformation. Our main idea is to binarize the responses of eight Gabor filters at eight orientations into a binary code, which is converted into a decimal number; the descriptor thus combines the advantages of three methods: Gabor, LBP, and LPQ. It is more robust to variations in face parts such as expression, pose, lighting, and scale. To this end, we merge two techniques: multilinear whitened principal component analysis (MWPCA) and tensor exponential discriminant analysis (TEDA). Experiments use two publicly available databases, the Bosphorus and CASIA 3D face databases. The results show the superiority of our method in terms of accuracy and execution time compared to state-of-the-art methods.
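A minimal sketch of the LGBQ encoding step described above, assuming the eight oriented Gabor response maps are already computed and that each response is binarized by its sign (the paper's exact filter-bank parameters and thresholding rule are not specified here):

```python
import numpy as np

def lgbq_code(responses):
    """Pack the signs of 8 oriented Gabor responses into one 8-bit code per pixel.

    responses: array of shape (8, H, W) holding the responses of eight
    Gabor filters at eight orientations (hypothetical input layout).
    Returns an (H, W) map of decimal codes in 0..255, LBP-style.
    """
    bits = (responses > 0).astype(np.uint8)            # binarize each orientation
    weights = (1 << np.arange(8)).reshape(8, 1, 1)      # bit weights 1, 2, 4, ..., 128
    return (bits * weights).sum(axis=0).astype(np.uint8)

# toy example: 8 random response maps for a 4x4 patch
rng = np.random.default_rng(0)
codes = lgbq_code(rng.standard_normal((8, 4, 4)))      # one byte code per pixel
```

The resulting code map would then be histogrammed per region and fed to the MWPCA/TEDA tensor subspace stage.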
- BSIF Features Learning using TXQEDA Tensor Subspace for kinship verification (Echahid Cheikh Larbi Tebessi University, 2023-06)
  Aliradi, Rachid; Ouamane, Abdelmalik
  Facial kinship verification is a challenging research problem in computer vision that has attracted considerable interest over the last decade. Applications have been realized in social media, biometrics, and demographic studies, but the accuracies obtained so far are too weak to reliably predict kinship relationships from facial appearance. To take up this challenge, we use a new approach called color BSIF learning, which has emerged as a promising solution. The aim is to solve the kinship verification problem by combining color BSIF learning features with the TXQEDA method for dimensionality reduction and data classification. The model is trained and tested on a kinship facial verification benchmark, the Cornell KinFace database. This framework improves both time cost and efficiency, and the experimental results surpass other state-of-the-art methods.
- Classification of color textured images using linear prediction errors and support vector machines (CERIST, 2013)
  Aliradi, Rachid
  In this paper we present a novel method for pixel classification. The goal is to approximate the distribution of two-dimensional multichannel linear prediction errors in order to improve color texture image classification. The method computes a class membership probability for each pixel of a given image, using the concept of clique potentials in a finite neighborhood of the pixel. A support vector machine (SVM) classifier then predicts the class of each pixel belonging to the foreground, and a final neighborhood-check refinement removes falsely classified pixels. The results are compared with those of a non-parametric and a parametric pixel classifier, respectively KNN and the Bayes classifier. For the proposed method, across different color spaces, experimental results show better performance in terms of percentage classification error than the use of a multivariate Gaussian law.
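The final neighborhood-check step can be sketched as a 3x3 majority vote over the binary label map; this is a hypothetical reading of that refinement, since the paper's exact rule is not given here:

```python
import numpy as np

def neighborhood_check(labels):
    """Relabel each interior pixel by the majority vote of its 3x3
    neighborhood, discarding isolated falsely-classified pixels
    (hypothetical form of the refinement step)."""
    refined = labels.copy()
    H, W = labels.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = labels[i - 1:i + 2, j - 1:j + 2]
            refined[i, j] = 1 if patch.sum() > 4 else 0  # majority of the 9 pixels
    return refined

noisy = np.ones((5, 5), dtype=int)
noisy[2, 2] = 0                       # a single falsely-classified pixel
clean = neighborhood_check(noisy)     # the isolated 0 is voted back to 1
```

Border pixels are left unchanged here; a real implementation would pad the label map or handle edges explicitly.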
- DIEDA: discriminative information based on exponential discriminant analysis combined with local features representation for face and kinship verification (Springer, 2018-01-30)
  Aliradi, Rachid; Belkhir, Abdelkader; Ouamane, Abdelmalik; Elmaghraby, Adel S.
  Face and kinship verification using facial images is a novel and challenging problem in computer vision. In this paper, we propose a new system that uses discriminative information based on exponential discriminant analysis (DIEDA) combined with multiple-scale descriptors. The histograms of different patches are concatenated to form a high-dimensional feature vector representing a specific descriptor at a given scale. The projected histograms for each zone are compared with the cosine similarity metric, which reduces the feature vector dimensionality. Lastly, the zone scores corresponding to the various descriptors at different scales are fused and verified with a classifier. The paper exploits discriminative side information for face and kinship verification in the wild (deciding whether an image pair shows the same person or not). To tackle this problem, we take face samples with unlabeled kin relations from the Labeled Faces in the Wild dataset as the reference set. We build an optimized objective that minimizes the distance between intraclass samples (with a kin relation) and maximizes that between neighboring interclass samples (without a kin relation) using the DIEDA approach. Experimental results on three publicly available face and kinship datasets show the superior performance of the proposed system over other state-of-the-art techniques.
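The zone-level cosine scoring mentioned above can be sketched in a few lines; the projected histogram vectors `h1` and `h2` are assumed inputs (one per zone, after the DIEDA projection):

```python
import numpy as np

def cosine_score(h1, h2):
    """Cosine similarity between two projected histogram vectors,
    used as the per-zone match score."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

a = np.array([1.0, 2.0, 3.0])
score = cosine_score(a, 2 * a)   # collinear vectors score ~1.0
```

The per-zone scores for all descriptors and scales would then be stacked into one score vector and fused by the final classifier.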
- Face and kinship image based on combination descriptors-DIEDA for large scale features (IEEE, 2018-12-30)
  Aliradi, Rachid; Belkhir, Abdelkader; Ouamane, Abdelmalik; Aliane, Hassina
  In this paper, we introduce an efficient linear similarity learning system for face verification. Humans recognize each other easily by their faces, and because facial features are relatively insensitive to illumination conditions and varying expression, the face remains an active means of recognizing people. Verification refers to teaching a machine to recognize a pair of matching or non-matching faces (kin or non-kin) from features extracted from facial images, and to determine the degree of their similarity. Traditional kernel verification systems face real problems when discriminative features are used: concentration on local information zones, noise in non-face regions, redundant information where zones overlap in certain blocks, manual tuning of parameters, and high-dimensional vectors. To solve these problems, we propose a robust face verification method that combines large-scale local features with Discriminative Information based on Exponential Discriminant Analysis (DIEDA). The projected histograms for each zone are scored using discriminative metric learning. Finally, the region scores corresponding to different descriptors at various scales are fused with a Support Vector Machine (SVM) classifier. Compared with other relevant state-of-the-art work, this system improves learning efficiency while maintaining effectiveness. The experimental results show that both initializations are efficient and outperform other state-of-the-art techniques.
- Indexing multimedia content for textual querying: A multimodal approach (2013-07)
  Amrane, Abdesalam; Mellah, Hakima; Amghar, Youssef; Aliradi, Rachid
  Multimedia retrieval approaches fall into three categories: those using textual information, those using low-level information, and those that combine different kinds of information extracted from the multimedia content. Each approach has advantages and disadvantages for improving multimedia retrieval systems, and recent work is oriented towards multimodal approaches. In this context, we propose an approach that combines the surrounding text with information extracted from the visual content of the multimedia, represented in the same repository, in order to allow querying multimedia content with keywords or concepts. Each word in a query or in a multimedia description is disambiguated using WordNet in order to determine its semantic concept.
- Semantic indexing of multimedia content using textual and visual information (Inderscience, 2014)
  Amrane, Abdesalam; Mellah, Hakima; Amghar, Youssef; Aliradi, Rachid
  The main challenge in multimedia information retrieval remains the indexing process, an active research area. There are three fundamental techniques for indexing multimedia content: those using textual information, those using low-level information, and those that combine different kinds of information extracted from the multimedia content. Each approach has advantages and disadvantages for improving multimedia retrieval systems, and recent work is oriented towards multimodal approaches. In this paper we propose an approach that combines the surrounding text with information extracted from the visual content of the multimedia, represented in the same repository, in order to allow querying multimedia content with keywords or concepts. Each word in a query or in a multimedia description is disambiguated using the WordNet ontology in order to determine its semantic concept. Support Vector Machines (SVMs) are used to classify images into one of the defined semantic concepts based on SIFT (Scale-Invariant Feature Transform) descriptors.