Research Reports
Browsing Research Reports by Title
Now showing 1 - 20 of 240
- A Novel Approach to Preserving Privacy in Social Network Data Publishing (CERIST, 2016-10-24) Bensimessaoud, Sihem; Benmeziane, Souad; Badache, Nadjib; Djellalbia, Amina. Today, more and more social network data are published for analysis. Although this analysis is valuable, the published data may be targeted by re-identification attacks, i.e., attacks in which an adversary tries to recover the identities of nodes that were removed during the anonymization process. Among these attacks, we distinguish "neighborhood attacks", in which the adversary has background knowledge about the neighborhoods of target victims. Researchers have developed anonymization models similar to k-anonymity based on edge addition, but these can significantly alter the properties of the original graph. In this work, a new anonymization algorithm based on the addition of fake nodes is proposed; it ensures that the published graph preserves another important utility measure, the average path length (APL).
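As an illustration of the utility metric mentioned in the abstract above, the following Python sketch (using networkx, with a deliberately naive fake-node insertion that merely stands in for the report's actual anonymization algorithm) compares the average path length of a graph before and after adding fake nodes.

```python
import networkx as nx
import random

def apl(G):
    """Average path length (APL); assumes G is connected."""
    return nx.average_shortest_path_length(G)

def add_fake_nodes(G, k=5, degree=2, seed=0):
    """Naive illustration: insert k fake nodes, each wired to `degree`
    randomly chosen real nodes. The report's algorithm chooses fake
    nodes/edges so that neighborhoods become indistinguishable while
    APL is preserved; that logic is not reproduced here."""
    rng = random.Random(seed)
    H = G.copy()
    real = list(G.nodes())
    for i in range(k):
        fake = f"fake_{i}"
        H.add_node(fake)
        for u in rng.sample(real, degree):
            H.add_edge(fake, u)
    return H

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(100, 0.05, seed=1)
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    H = add_fake_nodes(G)
    print(f"APL original:   {apl(G):.3f}")
    print(f"APL anonymized: {apl(H):.3f}")
```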
- A basic platform of collaborative filtering (CERIST, 2008) Nouali, Omar; Kirat, Sabah; Meziani, Hadjer. With the explosive growth of new information, developing information systems that provide users with answers closer to their expectations and personal taste has become an unavoidable necessity. Collaborative filtering systems are information systems with particular characteristics that set them apart. The term collaborative filtering refers to techniques that use the known tastes of a group of users to predict the unknown preferences of a new user. This article describes a basic collaborative filtering platform that allows users to discover interesting documents by automating the natural process of recommendation: users express their opinion on the relevance of documents according to their tastes and the quality they perceive, and in return they benefit from the evaluations of other users with similar profiles who found those documents interesting. All these benefits are provided through collaboration, in exchange for an individual effort: evaluating documents.
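A minimal sketch of the recommendation principle described above: user-based collaborative filtering that predicts an unknown preference from the ratings of similar users. The rating matrix and the similarity/weighting choices are illustrative assumptions, not the platform's actual implementation.

```python
import numpy as np

# Toy user-document rating matrix (rows: users, cols: documents); 0 = not rated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)          # compare only co-rated documents
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(R, user, item, k=2):
    """Predict a missing rating as a similarity-weighted average of the
    ratings given to `item` by the k most similar users."""
    sims = np.array([cosine_sim(R[user], R[v]) if v != user else 0.0
                     for v in range(R.shape[0])])
    raters = [v for v in np.argsort(sims)[::-1] if R[v, item] > 0][:k]
    if not raters:
        return 0.0
    w = sims[raters]
    return float(w @ R[raters, item] / (w.sum() + 1e-9))

print(predict(R, user=1, item=1))  # estimate user 1's rating of document 1
```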
- A Classification-based XML Information Retrieval Model (CERIST, 2014-11-27) Bessai, Fatma-Zohra. The main problem of content-based XML information retrieval is selecting the relevant unit of information that answers a user query composed of keywords only (content only). Our objective is to select, for a given information need, relevant elements that may belong to different parts of the XML documents in the corpus. To do this, we propose a neural XML information retrieval model based on Kohonen self-organizing maps. The self-organizing map classifies XML elements and produces a density map that forms the foundation of the XML information retrieval system.
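A small self-contained Kohonen map sketch in Python, illustrating how XML element vectors could be projected onto map cells. The feature vectors and map size are made up, and the report's element indexing and density-map construction are not reproduced.

```python
import numpy as np

class TinySOM:
    """Minimal Kohonen self-organizing map (illustrative only; the report's
    retrieval model adds XML-specific element handling on top of the map)."""
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows, cols, dim))
        self.rows, self.cols = rows, cols

    def winner(self, x):
        d = np.linalg.norm(self.w - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=50, lr=0.5, radius=1.0):
        for _ in range(epochs):
            for x in data:
                wi, wj = self.winner(x)
                for i in range(self.rows):
                    for j in range(self.cols):
                        dist2 = (i - wi) ** 2 + (j - wj) ** 2
                        h = np.exp(-dist2 / (2 * radius ** 2))
                        self.w[i, j] += lr * h * (x - self.w[i, j])

# Toy tf-idf-like vectors for XML elements (title, section, paragraph, ...).
elements = np.array([[0.9, 0.1, 0.0],
                     [0.8, 0.2, 0.1],
                     [0.1, 0.9, 0.7],
                     [0.0, 0.8, 0.9]])
som = TinySOM(2, 2, dim=3)
som.train(elements)
print([som.winner(e) for e in elements])   # map cell (cluster) of each element
query = np.array([0.85, 0.15, 0.05])
print(som.winner(query))                   # cell whose elements answer the query
```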
- A Clustering Application for the Web Usage Mining (CERIST, 2012-12) Kouici, Salima. Web Usage Mining is a branch of Web Mining that studies the behavior of users and potential customers through their navigation of a site. The main data source for Web Usage Mining is the server log file. A log file contains a large mass of data, including information about the user (username, software used, etc.) and all the requests made on the website (requested files, number of bytes transferred, time spent on each page, entry page to the site, etc.). In this work we outline an application of a clustering method, namely k-means, to this type of data. The application defines homogeneous groups constituting user profiles, so as to anticipate users' needs and adapt communication to each segment of users. During this application we encountered some technical problems. They concern data cleaning (removing requests for images and multimedia files associated with web pages, removing requests from search bots, etc.) and the construction of visitor sessions, a session being the sequence of pages viewed by the same user.
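A short sketch of the profiling step described above, assuming hypothetical per-session features extracted from a cleaned log file and using scikit-learn's k-means; the feature choice and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-session features derived from a cleaned log file:
# [pages viewed, total bytes transferred, average time per page (s)]
sessions = np.array([
    [3,   52_000,  12.0],
    [25, 410_000,  95.0],
    [2,   18_000,   8.0],
    [30, 520_000, 110.0],
    [4,   61_000,  15.0],
    [28, 480_000, 100.0],
])

X = StandardScaler().fit_transform(sessions)     # put features on comparable scales
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)            # cluster (profile) assigned to each session
print(km.cluster_centers_)   # profile centroids in standardized feature space
```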
- A clustering technique to analyse anonymous systems (CERIST, 2014-07-20) Benmeziane, Souad; Badache, Nadjib. Concerns about privacy and anonymity have gained more and more attention with the rapid growth of the Internet as a means of communication and information dissemination. A number of anonymous communication systems have been developed to protect the identity of communication participants. Nevertheless, it is important to provide ways to evaluate and measure the level of anonymity they provide. In this report we introduce the use of a hierarchical clustering technique to analyse anonymous systems. We propose a new measure for evaluating anonymous systems based on the concept of a dominant cluster, and we show that this measure overcomes the limitations of existing measures.
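The following sketch illustrates the general idea under stated assumptions: hierarchically clustering an attacker's probability distribution over senders and reading off a "dominant" cluster as the one holding the most probability mass. The exact definition of the dominant cluster and the resulting anonymity measure are the report's own and are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical attacker view: probability that each of 8 senders originated
# a message. A flat distribution would mean strong anonymity.
p = np.array([0.30, 0.28, 0.10, 0.09, 0.08, 0.06, 0.05, 0.04])

# Cluster senders by how close their probabilities are.
Z = linkage(p.reshape(-1, 1), method="average")
labels = fcluster(Z, t=0.05, criterion="distance")

# One plausible reading of a "dominant cluster": the cluster gathering the
# most probability mass.
masses = {c: p[labels == c].sum() for c in set(labels)}
dominant = max(masses, key=masses.get)
print(labels)
print(f"dominant cluster {dominant} holds {masses[dominant]:.2f} of the probability mass")
```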
- A comparative study between compressed video watermarking methods based on DCT coefficients and intra prediction (CERIST, 2011-09) Bouchama, Samira; Hamami, Latifa; Aliane, Hassina. Several watermarking methods have been applied to the newest video codec, H.264/AVC, to serve applications such as authentication, tamper proofing and copyright protection. Our objective in this paper is to compare watermarking methods based on the quantized DCT (QDCT) coefficients with those based on intra prediction of the 4x4 luma blocks, in terms of embedding capacity, video quality and bitrate. Using intra prediction modes is attractive because a relatively high embedding capacity can be achieved while preserving video quality; however, it appears difficult to maintain the bitrate. In this paper we show that the intra-prediction-based method outperforms the QDCT-based method under the same codec configuration.
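A toy illustration of the QDCT-based embedding family discussed above: forcing the parity of one quantized coefficient to carry a watermark bit. The block values, coefficient position and parity rule are illustrative assumptions, not the compared methods themselves.

```python
import numpy as np

def embed_bit(qdct_block, bit, pos=(2, 2)):
    """Toy QDCT watermarking: force the parity of one mid-frequency quantized
    coefficient to match the watermark bit. Real H.264/AVC schemes pick
    blocks and coefficients far more carefully to limit drift and bitrate."""
    block = qdct_block.copy()
    c = int(block[pos])
    if abs(c) % 2 != bit:
        c += 1 if c >= 0 else -1   # smallest change that flips the parity
    block[pos] = c
    return block

def extract_bit(qdct_block, pos=(2, 2)):
    return abs(int(qdct_block[pos])) % 2

# Hypothetical 4x4 block of quantized DCT coefficients.
q = np.array([[12, -3, 2, 0],
              [ 4,  1, 0, 0],
              [ 1,  0, 0, 0],
              [ 0,  0, 0, 0]])
wm = embed_bit(q, bit=1)
print(extract_bit(wm))   # -> 1
```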
- A formal approach for a self organizing protocol: Production System application (CERIST, 2009-10) Mellah, Hakima; Drias, Habiba; Hassas, Salima. Any dysfunction in a production system (PS) is likely to be very expensive, so modelling it with a Multi Agent System (MAS) gives the PS robustness, reactivity and flexibility, allowing its control to remain effective and to react to any hazards that may occur. In order to obtain a fault-tolerant PS, we propose when and how to resort to a self-organizing protocol that makes the MAS capable of changing its communication structure or organization, and thus of reorganizing itself without any external intervention.
- A framework for object classification in farfield videos (CERIST, 2014-10-26) Setitra, Insaf; Larabi, Slimane. Object classification in videos is an important step in many applications such as abnormal event detection in video surveillance, traffic analysis in urban scenes and behavior control in crowded locations. In this work, we propose a framework for moving object classification in far-field videos. Much work has been dedicated to this task; we review existing approaches and combine several techniques to implement a real-time object classifier with an offline training phase. We follow three main steps to classify objects in steady-background videos: background subtraction, object tracking and classification. We measure the accuracy of our classifier in experiments on the PETS 2009 dataset.
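A minimal OpenCV skeleton of the three-step pipeline named above (background subtraction, with hooks where tracking and classification would plug in), assuming a hypothetical video file; the framework's actual tracker and classifier are not reproduced.

```python
import cv2

# Hypothetical video path; PETS 2009 sequences would be used in the report.
cap = cv2.VideoCapture("farfield.avi")
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                   # 1) background subtraction
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 50:                          # ignore noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        # 2) tracking and 3) classification (e.g., person vs. vehicle from
        # simple shape/motion features) would be applied to each blob here.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```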
- A Generic Framework for Remote Practicals: Application to Computer Science and early feedbacks (CERIST, 2012) Bouabid, Mohamed Amine. In this paper we present a model-driven framework to guide the design and integration of remote computer experiments into distance learning curricula while taking into account the related educational considerations (especially efficient online teamwork and tutoring support). Our approach is centered on a specific pedagogical object, the lab experiment, and based on a standard management meta-model to describe and interact with any concrete lab experiment throughout its lifecycle. These models are supported by a three-tier architecture comprising (1) the upper learning environment, (2) a middleware layer and (3) the lower remote laboratories. The middleware exposes a homogeneous set of services to the learning layer to interact with the experiment models, which are matched with the corresponding raw lab resources, giving the opportunity to develop innovative educational end-user HCIs. A concrete application in the computer science area is developed, followed by early usability testing, which yielded promising results.
- A Generic Model-driven Architecture for Online Lab-works: Application to Computer Science (CERIST, 2012-04) Bouabid, Mohamed Amine; Vidal, Phillipe; Broisin, Julien. In this paper we present a model-driven engineering approach to transparently and efficiently integrate remote computer experiments into distance learning curricula. The originality of this framework lies in a middleware layer linking existing Learning Management Systems and remote laboratories. Based on standard meta-models representing all the system's components, the middleware makes it possible to (1) manage resource allocation from remote laboratories, (2) manage remote experiments throughout their life cycle simply by interacting with their abstract models, (3) federate distributed remote laboratories, and (4) support tutoring and collaborative activities to enhance pedagogical efficiency. Another novelty of this framework is its independence from any scientific learning area, while it facilitates the development of dedicated GUIs specific to each discipline. A concrete implementation of our framework for computer science education is presented, focusing on the web-based GUI available to teachers, tutors and learners. Finally, early results from real-life pilot tests are presented.
- A Graph Approach for Enhancing Process Models Matchmaking (CERIST, 2015-04) Belhoul, Yacine; Yahiaoui, Saïd. Recent attempts have been made to measure the similarity of process models based on graph-edit distance. This problem is known to be difficult, and the computational complexity of exact graph matching algorithms is exponential, so heuristics must be used to obtain approximations. Spectral graph matching methods, in particular eigenvalue-based projections, are known to be fast but lose some quality in the resulting matchmaking. In this paper, we propose a graph approach to the problem of inexact matching of process models. Our approach combines a spectral graph matching method with a string-comparator-based algorithm in order to improve the quality of process model matchmaking, performing the matchmaking at both the structural and semantic levels. Experiments show the performance of our method, compared to previous work, in ranking a collection of process models with respect to a particular query.
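A rough sketch of the combination described above: an eigenvalue-based structural score plus a string-comparator score over activity labels, merged with a weighting factor. The report's concrete projection method, string comparator and weighting are not reproduced; networkx and difflib are used purely for illustration.

```python
import numpy as np
import networkx as nx
from difflib import SequenceMatcher

def spectral_similarity(g1, g2):
    """Structural score from adjacency spectra (a common eigenvalue-based
    heuristic; the report's projection method is more elaborate)."""
    s1 = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(g1.to_undirected())))
    s2 = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(g2.to_undirected())))
    n = min(len(s1), len(s2))
    d = np.linalg.norm(s1[-n:] - s2[-n:])
    return 1.0 / (1.0 + d)

def label_similarity(g1, g2):
    """Semantic score: average best string match between activity labels."""
    scores = []
    for a in g1.nodes:
        best = max(SequenceMatcher(None, a, b).ratio() for b in g2.nodes)
        scores.append(best)
    return sum(scores) / len(scores)

def matchmaking_score(g1, g2, alpha=0.5):
    return alpha * spectral_similarity(g1, g2) + (1 - alpha) * label_similarity(g1, g2)

# Toy process models: nodes are activity labels, edges are control flow.
q = nx.DiGraph([("receive order", "check stock"), ("check stock", "ship order")])
m = nx.DiGraph([("receive order", "verify stock"), ("verify stock", "ship goods")])
print(round(matchmaking_score(q, m), 3))
```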
- A Layered Architecture for online Lab-works: Experimentation in the Computer Science Education (CERIST, 2012) Bouabid, Mohamed Amine; Vidal, Phillipe; Broisin, Julien. Practical competencies are key components of any computing education curriculum. Several computer experimentation tools exist today, but they were originally intended for experts and do not integrate well into existing online learning environments; in particular, they lack efficient support for teamwork, tutoring and instructional design. In this paper we introduce a model-driven engineering approach to transparently integrate remote computer experiments into distance learning curricula. The originality of this framework lies in two key components: a middleware layer that acts as glue between existing Learning Management Systems and remote laboratories, and a set of standard, unifying and extensible models representing the whole system, including its lab components, the versatile experiments and the actors' actions.
- A Lightweight Key Management Scheme for E-health applications in the context of Internet of Things (CERIST, 2014-03-15) Abdmeziem, Riad; Tandjaoui, Djamel. In the context of the Internet of Things, where real-world objects automatically become part of the Internet, e-health applications have emerged as a promising way to provide unobtrusive support for elderly and frail people based on their situation and circumstances. However, the limited resources available in such systems and the privacy concerns raised by the capture of personal data make security a major obstacle to their deployment. Authentication of the entities involved and data confidentiality are the main user concerns to be addressed. In this paper, we propose a new key management scheme for an e-health application that allows sensors and the Base Station (BS) to negotiate the security credentials used to protect the information flow. Our protocol provides a strong level of security, guaranteeing authentication and data confidentiality, while taking the scarcity of resources into consideration. The scheme is based on a lightweight Public Key Infrastructure (PKI) in which a sensor has to perform only one Elliptic Curve Cryptography (ECC) decryption during key establishment; data exchanges are then secured with symmetric encryption. In addition, timestamps are used to prevent replay attacks, together with Message Authentication Codes (MAC) to ensure integrity.
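A hedged sketch of the overall flow using the Python cryptography package: one ECC operation on the sensor side for key establishment (an ECDH exchange stands in for the report's lightweight-PKI ECC decryption), followed by symmetric authenticated encryption with a timestamp against replays; the AEAD tag plays the role of the MAC here.

```python
import os, time, json
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key establishment (stand-in): one ECC operation on the sensor side to agree
# on a shared secret with the base station, then derive a symmetric session key.
sensor_priv = ec.generate_private_key(ec.SECP256R1())
bs_priv = ec.generate_private_key(ec.SECP256R1())
shared = sensor_priv.exchange(ec.ECDH(), bs_priv.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"ehealth-session").derive(shared)

# Data phase: symmetric authenticated encryption; the timestamp rides in the
# associated data so the base station can reject replays.
aead = AESGCM(session_key)
nonce = os.urandom(12)
timestamp = str(int(time.time())).encode()
reading = json.dumps({"heart_rate": 72}).encode()
ciphertext = aead.encrypt(nonce, reading, timestamp)

# Base station side: verify freshness, then decrypt and authenticate.
assert abs(time.time() - int(timestamp.decode())) < 30, "possible replay"
print(aead.decrypt(nonce, ciphertext, timestamp))
```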
- A MOF-based Social Web Services Description Metamodel (CERIST, 2015-12-09) Benna, Amel; Maamar, Zakaria; Ahmed-Nacer, Mohamed. For the IT community to adopt social Web services, both social Web service-based applications and their support platforms should evolve independently of each other while sharing a common model that represents the characteristics of these social Web services. This paper proposes a model-driven approach that achieves this duality. The approach first identifies a social Web service's properties, then develops a Meta-Object Facility (MOF)-based social Web services description metamodel. A prototype illustrates how the proposed MOF-based metamodel is used.
- A parallel BSO Metaheuristic for Molecular Docking problem (CERIST, 2018-09-12) Hocine, SAADI; Malika, MEHDI; Nadia, Nouali Taboudjemat. In this report, we propose a parallel model of a Bee Swarm Optimization (BSO) metaheuristic for solving the molecular docking problem. The solution is based on the MapReduce model, and we use the MapCG framework to implement it on graphics processing units (GPUs). MapCG was developed to simplify GPU programming and to build portable applications independent of the hardware architecture. Our solution can run sequentially on a CPU or in parallel on GPUs without changing the code. Docking experiments on protein-ligand complexes show that our solution achieves good performance: the parallel implementation using MapCG on GPU reaches an average speedup of 10x compared to a single CPU.
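A CPU-only sketch of the map/reduce pattern described above, with Python's multiprocessing standing in for MapCG on the GPU and a made-up scoring function standing in for the docking energy evaluation of a ligand pose.

```python
from multiprocessing import Pool
import random

def docking_energy(conformation):
    """Hypothetical stand-in for the docking energy function; the real
    evaluation scores a ligand pose against the protein."""
    return sum((x - 0.5) ** 2 for x in conformation)

def map_phase(bees):
    # In the report this map step runs on the GPU through MapCG; a process
    # pool is used here purely to illustrate the pattern.
    with Pool() as pool:
        return pool.map(docking_energy, bees)

def bso_step(bees):
    energies = map_phase(bees)           # map: score every bee's candidate pose
    return min(zip(energies, bees))      # reduce: keep the best pose

if __name__ == "__main__":
    rng = random.Random(0)
    swarm = [[rng.random() for _ in range(6)] for _ in range(32)]  # 32 candidate poses
    energy, pose = bso_step(swarm)
    print(round(energy, 4))
```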
- A Resource-based Mutual Exclusion Algorithm supporting Dynamic Acting Range and Mobility for Wireless Sensor and Actor Networks (CERIST, 2010-05) Derhab, Abdelouahid; Zair, Mustapha. Achieving optimal usage of actor resources is one of the fundamental issues in Wireless Sensor and Actor Networks (WSANs). One solution is to maximize the mutually exclusive regions (i.e., regions covered by a single actor). In this paper, we take a novel approach to defining and solving the mutual exclusion problem. We propose CRMEA (Centralized Resource-based Mutual Exclusion Algorithm), which constructs an actor cover set whose cost is lower than that of the sole mutual exclusion algorithm existing in the literature. In addition, extensions supporting dynamic acting range and mobility are added to CRMEA. Simulation results show that the proposed extensions can save 50% to 90% of actor resources compared with CRMEA alone. The mobility extension can also overcome the large event-to-action delay problem and meet the requirements of delay-sensitive applications.
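For illustration only, a generic greedy cover-set construction over actors and event locations; CRMEA's cost model and its dynamic acting range and mobility extensions are not reproduced here.

```python
from math import hypot

def covers(actor, event, acting_range):
    return hypot(actor[0] - event[0], actor[1] - event[1]) <= acting_range

def greedy_cover_set(actors, events, acting_range):
    """Generic greedy cover set: repeatedly pick the actor that covers the
    most still-uncovered events (a simple stand-in, not CRMEA itself)."""
    uncovered, selected = set(range(len(events))), []
    while uncovered:
        best, gain = None, 0
        for a, pos in enumerate(actors):
            g = sum(1 for e in uncovered if covers(pos, events[e], acting_range))
            if g > gain:
                best, gain = a, g
        if best is None:          # remaining events are out of every actor's range
            break
        selected.append(best)
        uncovered -= {e for e in uncovered if covers(actors[best], events[e], acting_range)}
    return selected

actors = [(0, 0), (4, 0), (8, 0)]
events = [(1, 1), (3, 0), (7, 1), (8, -1)]
print(greedy_cover_set(actors, events, acting_range=2.5))
```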
- A simple approach to distinguish between Maghrebi and Persian calligraphy in old manuscripts (CERIST, 2014-04-07) Setitra, Insaf; Meziane, Abdelkrim. Manual annotation of images is usually a mandatory task in applications where no knowledge about the image is available. With a huge number of images, this task becomes very tedious and prone to human error. In this paper, we contribute to the automatic annotation of old Arabic manuscripts by identifying the manuscript calligraphy. Arabic manuscripts include a very large number of Persian and Maghrebi writings, especially in North Africa; distinguishing between these two calligraphies allows them to be better classified and therefore annotated. We use background construction followed by the extraction of simple features to classify Arabic manuscript calligraphies with a quadratic Bayes classifier.
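A small sketch of the classification step using scikit-learn's quadratic discriminant analysis, which is a quadratic Bayes classifier under Gaussian class assumptions; the features and their values are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical simple features per manuscript page after background
# construction/removal, e.g. [stroke thickness, slant angle in degrees].
X = np.array([[2.1, 12.0], [2.3, 10.5], [2.0, 11.8], [2.4, 12.6],   # Maghrebi
              [1.2, 25.0], [1.1, 27.3], [1.3, 24.1], [1.0, 26.2]])  # Persian
y = ["maghrebi"] * 4 + ["persian"] * 4

# QDA fits one Gaussian per class and applies Bayes' rule, which yields a
# quadratic decision boundary, i.e. a quadratic Bayes classifier.
clf = QuadraticDiscriminantAnalysis().fit(X, y)
print(clf.predict([[2.2, 11.0]]))   # -> ['maghrebi']
```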
- A spatial model for editing multimedia documents (CERIST, 2010-09) Tonkin, Nourredine; Maredj, Azze-Eddine; Sadallah, Madjid. In multimedia document authoring systems, managing the spatial and temporal relations between objects is the most delicate task. Spatial relation management refers to the means used to express relations between the document's objects and to guarantee their consistency. It is usually handled by a spatial model whose performance depends on its degree of expressivity, its positioning precision and its ability to express a desired overlap. One of the most important factors affecting performance is the distance associated with the relations. To enhance expressivity and precision and to allow the specification of a desired overlap, we introduce in this paper the concept of flexible distance.
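A speculative sketch of one possible reading of the flexible distance concept: a spatial relation that carries an allowed interval and a preferred value instead of a single fixed distance. The names and the resolution rule are assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

@dataclass
class FlexibleDistance:
    """One possible reading of a flexible distance: an allowed interval plus
    a preferred value (a negative minimum would express a desired overlap)."""
    minimum: int
    preferred: int
    maximum: int

    def resolve(self, available: int) -> int:
        # Use the preferred distance when it fits, otherwise clamp it into
        # the allowed interval given the space actually available.
        return max(self.minimum, min(self.preferred, self.maximum, available))

def place_right_of(a: Box, b: Box, d: FlexibleDistance, canvas_width: int) -> Box:
    available = canvas_width - (a.x + a.w) - b.w
    return Box(a.x + a.w + d.resolve(available), a.y, b.w, b.h)

image = Box(10, 10, 200, 150)
caption = Box(0, 0, 120, 40)
print(place_right_of(image, caption, FlexibleDistance(0, 30, 60), canvas_width=400))
```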
- A stochastic local search combined with support vector machine for web services classification (CERIST, 2016-04) Laachemi, Abdelouahab; Boughaci, Dalila. In this paper, we are interested in Web service classification. We propose a classification method that first uses a stochastic local search (SLS) metaheuristic for feature selection and then calls a Support Vector Machine (SVM) to perform the classification. The proposed method combining SLS and SVM is validated on the QWS dataset: a set of 364 Web services divided into four categories (Platinum, Gold, Silver and Bronze) whose quality is measured by 9 attributes. The experiments and the comparison show the effectiveness of our method for the classification of Web services.
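A compact sketch of the SLS-plus-SVM scheme, using a synthetic stand-in for the QWS dataset; the neighbourhood move, acceptance rule and SVM settings are illustrative assumptions and may differ from the report's SLS.

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for QWS: 364 services, 9 quality attributes, 4 classes.
X, y = make_classification(n_samples=364, n_features=9, n_informative=5,
                           n_classes=4, random_state=0)

def score(mask):
    """Fitness of a feature subset: cross-validated SVM accuracy."""
    if not any(mask):
        return 0.0
    cols = [i for i, m in enumerate(mask) if m]
    return cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()

def sls_feature_selection(n_features, iters=40, walk_prob=0.2, seed=0):
    """Simple stochastic local search over feature masks: flip one bit per
    iteration, move to improving neighbours, and occasionally take a random
    walk step to escape local optima."""
    rng = random.Random(seed)
    current = [rng.random() < 0.5 for _ in range(n_features)]
    current_score = score(current)
    best, best_score = current[:], current_score
    for _ in range(iters):
        neighbour = current[:]
        neighbour[rng.randrange(n_features)] ^= True
        s = score(neighbour)
        if s > current_score or rng.random() < walk_prob:
            current, current_score = neighbour, s
        if current_score > best_score:
            best, best_score = current[:], current_score
    return best, best_score

mask, acc = sls_feature_selection(X.shape[1])
print(mask, round(acc, 3))
```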
- A study of Wireless Sensor Network Architectures and Projects for Traffic Light Monitoring (CERIST, 2012) Kafi, Mohamed Amine; Badache, Nadjib; Challal, Yacine; Bouabdallah, Abdelmadjid; Djenouri, Djamel. Vehicular traffic is increasing around the world, especially in urban areas. This increase results in huge traffic congestion, which has dramatic consequences for the economy, human health and the environment. With increased traffic, traditional methods of traffic management, surveillance and control become inefficient in terms of performance, cost, maintenance and support. Wireless Sensor Networks (WSN) are an emerging technology with the potential to overcome these difficulties and will bring great added value to intelligent transportation systems (ITS) overall. In this survey, we review traffic light projects and solutions, discuss their architectural and engineering challenges, and shed some light on future trends.