Browsing by Author "Bendjoudi, Ahcène"
Now showing 1 - 20 of 45
- A Parallel P2P Branch-and-Bound Algorithm for Computational Grids (2007-05-14)
  Bendjoudi, Ahcène; Melab, Nouredine; Talbi, El-Ghazali
  Solving Combinatorial Optimization Problems (COPs) exactly using a Branch-and-Bound algorithm requires a huge amount of computational resources. The efficiency of such an algorithm can be improved by distributing at large scale the computation required by the exploration of the search tree. In this paper, we propose ParallelBB, a P2P-based parallelization of the Branch-and-Bound algorithm for the computational grid. The algorithm has been implemented using the ProActive distributed-object grid middleware. It has been applied to a mono-criterion permutation flow-shop problem and experimented with promising results on the Grid'5000 computational grid.
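As background for this and the following entries, a Branch-and-Bound algorithm enumerates a tree of subproblems and prunes any subproblem whose lower bound cannot improve on the best solution found so far; it is this tree exploration that the papers listed here distribute over grids and GPUs. A minimal sequential sketch (the generic callables, the toy lower bound and the demo problem are illustrative assumptions, not the paper's ParallelBB) could look like this:

```python
import heapq, itertools

def branch_and_bound(root, lower_bound, evaluate, branch):
    """Best-first exploration that prunes subproblems whose bound cannot
    improve on the best solution (incumbent) found so far."""
    best_cost, best_solution = float("inf"), None
    counter = itertools.count()                 # tie-breaker so the heap never compares nodes
    heap = [(lower_bound(root), next(counter), root)]
    while heap:
        bound, _, node = heapq.heappop(heap)
        if bound >= best_cost:                  # prune: bound cannot beat the incumbent
            continue
        children = branch(node)
        if not children:                        # complete solution (leaf of the search tree)
            cost = evaluate(node)
            if cost < best_cost:
                best_cost, best_solution = cost, node
            continue
        for child in children:
            child_bound = lower_bound(child)
            if child_bound < best_cost:
                heapq.heappush(heap, (child_bound, next(counter), child))
    return best_cost, best_solution

# Toy usage: order three jobs so that the weighted completion sum is minimal.
jobs = (3, 1, 2)
branch = lambda p: [p + (j,) for j in jobs if j not in p]
evaluate = lambda p: sum((i + 1) * d for i, d in enumerate(p))
lower_bound = lambda p: sum((i + 1) * d for i, d in enumerate(p))  # optimistic: ignores unscheduled jobs
print(branch_and_bound((), lower_bound, evaluate, branch))          # (10, (3, 2, 1))
```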
- An adaptive hierarchical master-worker (AHMW) framework for grids - Application to B&B algorithms (Elsevier, 2012)
  Bendjoudi, Ahcène; Melab, Nouredine; Talbi, El-Ghazali
  Well suited to embarrassingly parallel applications, the master-worker (MW) paradigm has been largely and successfully used in parallel distributed computing. Nevertheless, such a paradigm is very limited in scalability on large computational grids. A natural way to improve the scalability is to add a layer of masters between the master and the workers, making a hierarchical MW (HMW). In most existing HMW frameworks and algorithms, only a single layer of masters is used, the hierarchy is statically built and the granularity of tasks is fixed. Such frameworks and algorithms are not adapted to grids, which are volatile, heterogeneous and large-scale environments. In this paper, we revisit the HMW paradigm to match these characteristics of grids. We propose a new dynamic adaptive multi-layer hierarchical MW (AHMW) dealing with the scalability, volatility and heterogeneity issues. The construction and deployment of the hierarchy and the task management (deployment, decomposition of work, distribution of tasks, etc.) are performed in a dynamic, collaborative and distributed way. The framework has been applied to the parallel Branch and Bound algorithm and experimented on the Flow-Shop scheduling problem. The implementation has been performed using the ProActive grid middleware and large-scale experiments have been conducted using about 2000 processors from the Grid'5000 French nation-wide grid infrastructure. The results demonstrate the high scalability of the proposed approach and its efficiency in terms of deployment cost, decomposition and distribution of work, and exploration time. They show that AHMW outperforms HMW and MW in scalability and in efficiency in terms of deployment and exploration time.
- An Efficient Measure for Evaluating Association Rules (CERIST, 2014-06-24)
  Djenouri, Youcef; Gheraibai, Youcef; Mehdi, Malika; Bendjoudi, Ahcène; Nouali-Taboudjemat, Nadia
  Association rules mining (ARM) has attracted a lot of attention in the last decade. It aims to extract a set of relevant rules from a given database. To assess the quality of the resulting rules, existing measures, such as support and confidence, evaluate the rules returned by the ARM process separately, overlooking the dependencies between rules. This paper addresses the problem of evaluating rules by taking into account two aspects: (1) the accuracy of the returned rules on the input data and (2) the distance between the returned rules. The rule set that covers the largest portion of the rule space is preferred. To analyze the behavior of the proposed measure, it has been tested on two recent ARM algorithms, BSO-ARM and HBSO-TS.
- An Efficient Measure for Evaluating Association Rules (2014-08)
  Djenouri, Youcef; Gheraibai, Youcef; Mehdi, Malika; Bendjoudi, Ahcène
  Association rules mining (ARM) has attracted a lot of attention in the last decade. It aims to extract a set of relevant rules from a given database. To assess the quality of the resulting rules, existing measures, such as support and confidence, evaluate the rules returned by the ARM process separately, overlooking the dependencies between rules. This paper addresses the problem of evaluating rules by taking into account two aspects: (1) the accuracy of the returned rules on the input data and (2) the distance between the returned rules. The rule set that covers the largest portion of the rule space is preferred. To analyze the behavior of the proposed measure, it has been tested on two recent ARM algorithms, BSO-ARM and HBSO-TS.
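Support and confidence, the two standard measures cited in the entries above, and a rule-to-rule distance can be sketched as follows. The Jaccard distance and the small example database are illustrative assumptions; the combined measure proposed in the paper is not reproduced here.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Support of the whole rule divided by the support of its antecedent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

def rule_distance(rule_a, rule_b):
    """Illustrative Jaccard distance between the item sets of two rules."""
    items_a = rule_a[0] | rule_a[1]
    items_b = rule_b[0] | rule_b[1]
    return 1.0 - len(items_a & items_b) / len(items_a | items_b)

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}]
rule = ({"a"}, {"c"})                       # a => c
print(confidence(*rule, transactions))      # 0.666...
print(rule_distance(rule, ({"b"}, {"c"})))  # 0.666...
```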
- Association rules mining using evolutionary algorithms (LNCS, 2014-10-16)
  Djenouri, Youcef; Bendjoudi, Ahcène; Nouali-Taboudjemat, Nadia
  This paper deals with association rules mining using evolutionary algorithms. Previous bio-inspired association rules mining approaches generate non-admissible rules that the end user cannot exploit. In this paper, we propose two approaches that avoid non-admissible rules thanks to a new strategy called the delete and decomposition strategy. If an item appears in both the antecedent and the consequent of a given rule, the rule is decomposed into two admissible rules: the item is deleted from the antecedent of the first rule and from the consequent of the second rule. The two proposed approaches, IARMGA and IARMMA, differ in their search process: the first uses a classical genetic algorithm, whereas the second employs a memetic algorithm to improve the quality of the returned rules. To assess the suggested approaches, several experiments have been carried out using both synthetic and real instances. The results reveal a trade-off between execution time and the quality of the output rules: IARMGA is faster than IARMMA, whereas the latter outperforms IARMGA in terms of rule quality.
- Association rules mining using evolutionary algorithms (CERIST, 2014-10-16)
  Djenouri, Youcef; Bendjoudi, Ahcène; Nouali-Taboudjemat, Nadia
  This paper deals with association rules mining using evolutionary algorithms. Previous bio-inspired association rules mining approaches generate non-admissible rules that the end user cannot exploit. In this paper, we propose two approaches that avoid non-admissible rules thanks to a new strategy called the delete and decomposition strategy. If an item appears in both the antecedent and the consequent of a given rule, the rule is decomposed into two admissible rules: the item is deleted from the antecedent of the first rule and from the consequent of the second rule. The two proposed approaches, IARMGA and IARMMA, differ in their search process: the first uses a classical genetic algorithm, whereas the second employs a memetic algorithm to improve the quality of the returned rules. To assess the suggested approaches, several experiments have been carried out using both synthetic and real instances. The results reveal a trade-off between execution time and the quality of the output rules: IARMGA is faster than IARMMA, whereas the latter outperforms IARMGA in terms of rule quality.
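The delete and decomposition strategy described in the two entries above lends itself to a few lines of code; the sketch below (function and variable names are assumptions made for illustration) splits a non-admissible rule into the two admissible rules mentioned in the abstract.

```python
def delete_and_decompose(antecedent, consequent):
    """Return a list of admissible rules (no item shared by both sides)."""
    shared = antecedent & consequent
    if not shared:
        return [(antecedent, consequent)]       # already admissible
    return [
        (antecedent - shared, consequent),      # shared item(s) deleted from the antecedent
        (antecedent, consequent - shared),      # shared item(s) deleted from the consequent
    ]

# {a, b} => {b, c} is not admissible because b appears on both sides:
print(delete_and_decompose({"a", "b"}, {"b", "c"}))
# -> [({'a'}, {'b', 'c'}), ({'a', 'b'}, {'c'})]  (set ordering may vary)
```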
- Cerist News (2012-06)
  CERIST; Silhadi-Mehdi, Malika; Bendjoudi, Ahcène
- Data reordering for minimizing threads divergence in GPU-based evaluating association rules (CERIST, 2015-03-26)
  Djenouri, Youcef; Bendjoudi, Ahcène; Mehdi, Malika; Habbas, Zineb; Nouali-Taboudjemat, Nadia
  In the last decade, the success of Graphics Processing Units (GPUs) has led researchers to launch a lot of work on solving large complex problems with these cheap and powerful architectures. Association Rules Mining (ARM) is one of these hard problems requiring a lot of computational resources. Due to the exponential increase in database sizes, existing algorithms for the ARM problem become more and more inefficient. Thus, research has been focusing on parallelizing these algorithms, and GPUs have recently started to be used for this task. However, their major drawback is the thread divergence problem. To deal with this issue, we propose in this paper an intelligent strategy called Transactions-based Reordering (TR) allowing an efficient evaluation of association rules on GPU by minimizing thread divergence. This strategy is based on database reorganization. To validate our proposition, theoretical and experimental studies have been carried out using well-known synthetic data sets. The results are very promising in terms of reducing thread divergence.
- Data reordering for minimizing threads divergence in GPU-based evaluating association rules (2015-06)
  Djenouri, Youcef; Bendjoudi, Ahcène; Mehdi, Malika; Habbas, Zineb; Nouali-Taboudjemat, Nadia
  In the last decade, the success of Graphics Processing Units (GPUs) has led researchers to launch a lot of work on solving large complex problems with these cheap and powerful architectures. Association Rules Mining (ARM) is one of these hard problems requiring a lot of computational resources. Due to the exponential increase in database sizes, existing algorithms for the ARM problem become more and more inefficient. Thus, research has been focusing on parallelizing these algorithms, and GPUs have recently started to be used for this task. However, their major drawback is the thread divergence problem. To deal with this issue, we propose in this paper an intelligent strategy called Transactions-based Reordering (TR) allowing an efficient evaluation of association rules on GPU by minimizing thread divergence. This strategy is based on database reorganization. To validate our proposition, theoretical and experimental studies have been carried out using well-known synthetic data sets. The results are very promising in terms of reducing thread divergence.
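The database-reorganization idea behind these two entries can be illustrated as follows: threads of the same warp evaluate neighbouring transactions, so grouping transactions with similar characteristics (here simply their length) reduces the chance that threads in one warp follow different branches. The real TR strategy is more elaborate; sorting by length is only a stand-in assumption for the example.

```python
WARP_SIZE = 32  # threads per warp on current NVIDIA GPUs

def reorder_transactions(transactions):
    """Group transactions of similar size so that each warp sees homogeneous work."""
    return sorted(transactions, key=len)

def warps(transactions):
    """Yield the chunks of transactions that would map to one warp each."""
    for i in range(0, len(transactions), WARP_SIZE):
        yield transactions[i:i + WARP_SIZE]

db = [{"a"}, {"a", "b", "c", "d"}, {"b"}, {"a", "c"}, {"b", "c", "d"}, {"d"}]
for warp in warps(reorder_transactions(db)):
    print([len(t) for t in warp])   # lengths within a warp are now close together
```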
- Efficient parallel B&B method for the Blocking Job Shop Scheduling Problem (2016-07)
  Dabah, Adel; Bendjoudi, Ahcène; Ait Zai, Abdelhakim
  The blocking job shop scheduling problem (BJSS) is a version of the classical job shop scheduling problem with no intermediate buffer between machines. It is known to be NP-hard in the strong sense. The major drawbacks of previous works are the huge time needed to explore the search space and the low ratio of feasible to explored solutions when applying metaheuristics, leading to a poor quality of final solutions. In order to accelerate the exploration of the search space and overcome this drawback of metaheuristics, we propose in this paper a parallel Branch and Bound (B&B) algorithm that can act both as an exact and as an approximate method. The proposed parallel B&B approach is based on the master-worker paradigm, exploiting the computing power offered by cluster architectures. The master performs a breadth-first exploration to ensure high availability of tasks (sub-problems), while the workers perform a depth-first exploration in order to browse a large number of feasible solutions and therefore obtain a faster improvement of the solution quality. The proposed approach has been executed on a cluster-based architecture using 32 nodes with 16 CPU cores each. The obtained results show a good speedup of the execution time, with 80% efficiency. We have been able to solve to optimality ten BJSS benchmark instances never solved before. Moreover, our approach improved the best known solutions for more than 22 benchmark instances.
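The breadth-first/depth-first split described above can be sketched as follows. The callable interface, the pool-size policy and the absence of bounding are simplifying assumptions, not the paper's implementation; the toy permutation encoding reuses the one from the earlier Branch-and-Bound sketch.

```python
from collections import deque

def master_breadth_first(root, branch, pool_size):
    """Expand the shallowest subproblems first until the pool of tasks is large
    enough to keep all workers busy (breadth-first, as the master does above)."""
    frontier, pool = deque([root]), []
    while frontier and len(pool) + len(frontier) < pool_size:
        node = frontier.popleft()
        children = branch(node)
        if children:
            frontier.extend(children)
        else:
            pool.append(node)                 # already a complete solution
    return pool + list(frontier)

def worker_depth_first(subproblem, branch, evaluate, best=float("inf")):
    """Explore one subproblem depth-first to reach feasible solutions quickly
    (bounding/pruning omitted to keep the sketch short)."""
    stack = [subproblem]
    while stack:
        node = stack.pop()
        children = branch(node)
        if not children:
            best = min(best, evaluate(node))
        else:
            stack.extend(children)            # last-in first-out: depth-first
    return best

# Toy usage with the permutation encoding from the earlier sketch:
jobs = (3, 1, 2)
branch = lambda p: [p + (j,) for j in jobs if j not in p]
evaluate = lambda p: sum((i + 1) * d for i, d in enumerate(p))
tasks = master_breadth_first((), branch, pool_size=4)
print(min(worker_depth_first(t, branch, evaluate) for t in tasks))   # 10
```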
- Fault-Tolerant Mechanism for Hierarchical Branch and Bound Algorithm (IEEE, 2011-05-16)
  Bendjoudi, Ahcène; Melab, Nouredine; Talbi, El-Ghazali
  Solving large instances of Combinatorial Optimization Problems (COPs) exactly using Branch and Bound (B&B) algorithms requires a huge amount of computing resources. These resources can be offered by computational grids, and scalability can be achieved using a hierarchical Master/Worker-based B&B pushing the limits of the traditional Master/Worker paradigm. However, the resources offered by grids are most of the time unreliable, volatile, and heterogeneous; fault tolerance must therefore be taken into account. In this paper, we present FTH-B&B, a fault-tolerant hierarchical B&B designed to deal with this issue. It is composed of several fault-tolerant Master/Worker-based sub-B&Bs organized hierarchically into groups, each performing its fault-tolerance mechanism independently. Besides, a fault-recovery mechanism is introduced to recover sub-problems and avoid their redundant exploration in case of failures. In addition, we propose a mechanism to keep the hierarchy safe and balanced during the lifetime of the algorithm. Our algorithm is applied to the Flow-Shop scheduling problem (FSP) and implemented on top of the ProActive grid middleware. It has been promisingly experimented on the Grid'5000 French nation-wide grid and shows its ability to remain efficient even in the presence of failures.
- FTH-B&B: a Fault-Tolerant Hierarchical Branch and Bound for Large Scale Unreliable Environment (2013)
  Bendjoudi, Ahcène; Melab, Nouredine; Talbi, El-Ghazali
  Solving large instances of combinatorial optimization problems to optimality using Branch and Bound (B&B) algorithms requires a huge amount of computing resources. In this paper, we investigate the design and implementation of such algorithms on computational grids. Most existing grid-based B&B algorithms are based on the Master-Worker paradigm, so their scalability is limited. Moreover, even though the volatility of resources is a major issue in grids, fault tolerance is rarely addressed. We propose FTH-B&B, a fault-tolerant hierarchical B&B. FTH-B&B is based on several new mechanisms enabling the hierarchy to be efficiently built and kept balanced, and work units (sub-problems) to be stored and recovered. FTH-B&B has been implemented on top of ProActive and applied to the Flow-Shop scheduling problem. Very often, the validation of existing grid-based B&B works is performed either through simulation or on a very small real grid. In this paper, we experimented FTH-B&B on the Grid'5000 real French nation-wide computational grid using up to 1900 processor cores distributed over 6 sites. The reported results show that the overhead induced by the approach is very low and that an efficiency close to 100% can be achieved on some Taillard benchmarks of the Flow-Shop problem. In addition, the results demonstrate the robustness of the approach even in extreme failure situations.
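The storage and recovery of work units mentioned in the two FTH-B&B entries can be illustrated with a small checkpointing sketch. The JSON file, its layout and the function names are assumptions made for the example, not the mechanism actually implemented on top of ProActive.

```python
import json, os

CHECKPOINT = "bnb_checkpoint.json"

def save_checkpoint(assigned, completed):
    """Persist the identifiers of assigned and completed work units (sub-problems)."""
    with open(CHECKPOINT, "w") as f:
        json.dump({"assigned": list(assigned), "completed": list(completed)}, f)

def recover_pending():
    """Return the work units that still need exploration after a restart."""
    if not os.path.exists(CHECKPOINT):
        return set()
    with open(CHECKPOINT) as f:
        state = json.load(f)
    return set(state["assigned"]) - set(state["completed"])

save_checkpoint(assigned={"unit-1", "unit-2", "unit-3"}, completed={"unit-1"})
print(recover_pending())   # {'unit-2', 'unit-3'} are re-scheduled, 'unit-1' is not
```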
- GPU parallel B&B for the Blocking job shop scheduling problem (CERIST, 2016-02-22)
  Dabah, Adel; Bendjoudi, Ahcène; Ait Zai, Abdelhakim; El Baz, Didier
  Branch and Bound (B&B) algorithms are well-known techniques for solving combinatorial optimization problems to optimality. Nevertheless, these algorithms remain inefficient when dealing with large instances. This paper deals with the blocking job shop scheduling problem (BJSS), which is a version of the classical job shop scheduling problem with no intermediate buffer between machines. This problem is NP-hard and its exact resolution using a sequential approach is impractical. We propose in this paper a GPU-based parallelization using a two-level scheme. The first level is a node-based parallelization in which the bounding process is accelerated by computing it in parallel with several threads organized in one GPU block. To fully occupy the GPU, we propose a second level of parallelization which generalizes the first: at each iteration, several search-tree nodes are evaluated on the GPU using several thread blocks. The obtained results, using the well-known Taillard instances, confirm the efficiency of the proposed approach and show that it is 65 times faster than an optimized sequential B&B version.
- GPU-accelerated Bounding for Branch-and-Bound applied to a Permutation Problem using Data Access Optimization (John Wiley & Sons, 2013-11)
  Melab, Nouredine; Chakroun, Imen; Bendjoudi, Ahcène
  Branch-and-Bound (B&B) algorithms are attractive methods for solving combinatorial optimization problems to optimality using an implicit enumeration of a dynamically built tree-based search space. Nevertheless, they are time-consuming when dealing with large problem instances. Therefore, pruning tree nodes (subproblems) is traditionally used as a powerful mechanism to reduce the size of the explored search space. Pruning requires performing the bounding operation, which consists of applying a lower bound function to the subproblems generated during the exploration process. Preliminary experiments performed on the Flow-Shop scheduling problem (FSP) have shown that the bounding operation consumes over 98% of the execution time of the B&B algorithm. In this paper, we investigate the use of GPU computing as a major complementary way to speed up the search. We revisit the design and implementation of the parallel bounding model on GPU accelerators. The proposed approach enables data access optimization. Extensive experiments have been carried out on well-known FSP benchmarks using an Nvidia Tesla C2050 GPU card. Compared to a CPU-based single-core execution using an Intel Core i7-970 processor without GPU, speedups of more than 100 are achieved for large problem instances. At equivalent peak performance, the GPU-accelerated B&B is twice as fast as its multi-core counterpart.
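The pattern of evaluating the lower bounds of a whole batch of subproblems at once, which the GPU-oriented entries here offload to the device, can be sketched on the CPU as follows. The process pool, the placeholder bound and the tuple encoding of a subproblem are all assumptions for illustration; the actual flow-shop lower bound and the CUDA kernels with data access optimization are not reproduced.

```python
from concurrent.futures import ProcessPoolExecutor

def lower_bound(subproblem):
    """Placeholder bound: cost of the fixed part plus the cheapest remaining job."""
    fixed, remaining = subproblem
    return sum(fixed) + min(remaining, default=0)

def bound_in_parallel(subproblems, workers=4):
    """Evaluate the bounds of a whole batch of tree nodes at once."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lower_bound, subproblems))

if __name__ == "__main__":
    batch = [((3, 5), [2, 7, 4]), ((8,), [1, 6]), ((2, 2, 2), [9])]
    print(bound_in_parallel(batch))   # [10, 9, 15]
```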
- GPU-based Bees Swarm Optimization for Association Rules Mining (Springer, 2014)
  Djenouri, Youcef; Bendjoudi, Ahcène; Mehdi, Malika; Nouali-Taboudjemat, Nadia; Habbas, Zineb
  Association Rules Mining (ARM) is a well-known combinatorial optimization problem aiming at extracting relevant rules from given large-scale data sets. According to the state of the art, bio-inspired methods proved their efficiency by generating acceptable solutions in a reasonable time when dealing with small and medium-size instances. Unfortunately, to cope with large instances such as the webdocs benchmark, these methods require more and more powerful processors and are time-consuming. Nowadays, computing power is no longer a real issue: it can be provided by emerging technologies such as GPUs, which are massively multi-threaded processors. In this paper, we investigate the use of GPUs to speed up the computation. We propose two GPU-based bees swarm algorithms for association rules mining, SE-GPU and ME-GPU. SE-GPU evaluates one rule at a time, with each thread associated with one transaction, whereas ME-GPU evaluates multiple rules in parallel on the GPU, with each thread associated with several transactions. To validate our approaches, the two algorithms have been executed to solve well-known large ARM instances. Real experiments have been carried out on an Intel Xeon 64-bit quad-core processor E5520 coupled to an Nvidia Tesla C2075 GPU device. The results show that our approaches speed up execution by a factor of up to 100 over the sequential mono-core BSO-ARM algorithm. Moreover, the proposed approaches have been compared with CPU multi-core versions (1 to 8 cores) and are faster than the multi-core versions whatever the number of cores used.
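The one-thread-per-transaction mapping used by SE-GPU can be imitated with NumPy vectorization, each row test standing in for one GPU thread. The boolean transaction matrix, the variable names and the tiny example are assumptions made for the sketch; the real kernels operate on webdocs-scale data sets.

```python
import numpy as np

# transactions x items boolean matrix (rows: transactions, columns: items a..d)
db = np.array([[1, 1, 0, 1],
               [1, 0, 1, 0],
               [1, 1, 1, 0],
               [0, 1, 0, 1]], dtype=bool)

antecedent = np.array([1, 0, 0, 0], dtype=bool)   # {a}
consequent = np.array([0, 1, 0, 0], dtype=bool)   # {b}

has_antecedent = db[:, antecedent].all(axis=1)            # one test per transaction
has_rule = db[:, antecedent | consequent].all(axis=1)     # antecedent and consequent both present

support = has_rule.mean()
confidence = has_rule.sum() / has_antecedent.sum()
print(support, confidence)   # 0.5 and 0.666...
```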
- GPU-based Bio-inspired Model for Solving Association Rules Mining Problem (CERIST, 2017-03-06)
  Djenouri, Youcef; Bendjoudi, Ahcène; Djenouri, Djamel; Commuzi, Marco
  Association rules mining (ARM) is a combinatorial optimization problem whose purpose is to extract the correlations between items in sizeable data instances. According to the state of the art, bio-inspired approaches proved their usefulness by finding a high number of satisfied rules in a reasonable time when dealing with medium-size instances. These approaches are, however, unsuitable for large databases and especially for those existing on the web, such as the Webdocs instance. Recently, Graphics Processing Units (GPUs) have become some of the most widely used parallel hardware for solving large, complex scientific problems. In this paper, we propose a new GPU-based model of bio-inspired approaches for solving the association rules mining problem. Our model benefits from the massively threaded GPU by evaluating multiple rules in parallel. To validate the proposed model, the most used bio-inspired approaches (GA, PSO, and BSO) have been executed on GPU to solve well-known large ARM instances. Real experiments have been carried out on an Intel Xeon 64-bit quad-core processor E5520 coupled to an Nvidia Tesla C2075 GPU device. The results show that the genetic algorithm outperforms PSO and BSO. Moreover, it outperforms the state-of-the-art GPU-based ARM approaches when dealing with the challenging Webdocs instance.
- GPU-based two level parallel B&B for the Blocking job shop scheduling problem (IEEE, 2016-05-23)
  Dabah, Adel; Bendjoudi, Ahcène; El Baz, Didier; Ait Zai, Abdelhakim
  Branch and Bound (B&B) algorithms are well-known techniques for solving combinatorial optimization problems to optimality. Nevertheless, these algorithms remain inefficient when dealing with large instances. This paper deals with the blocking job shop scheduling problem (BJSS), which is a version of the classical job shop scheduling problem with no intermediate buffer between machines. This problem is NP-hard and its exact resolution using a sequential approach is impractical. We propose in this paper a GPU-based parallelization using a two-level scheme. The first level is a node-based parallelization in which the bounding process is accelerated by computing it in parallel with several threads organized in one GPU block. To fully occupy the GPU, we propose a second level of parallelization which generalizes the first: at each iteration, several search-tree nodes are evaluated on the GPU using several thread blocks. The obtained results, using the well-known Taillard instances, confirm the efficiency of the proposed approach and show that it is 65 times faster than an optimized sequential B&B version.
- H-B&B: A Hierarchical B&B for large scale environments (2011-05-25)
  Bendjoudi, Ahcène; Melab, Nouredine; Talbi, El-Ghazali
  In this paper, we propose a parallel B&B (H-B&B) based on the hierarchical Master/Worker paradigm. It aims at improving the scalability of traditional M/W-based B&Bs (M/W-B&B) by eliminating the bottlenecks created at the central master process. Unlike state-of-the-art approaches, H-B&B is fully dynamic: it is composed of several levels of masters and evolves over time according to the dynamic acquisition of new computing nodes and their disconnections. To evaluate our approach we propose three execution scenarios. First, we evaluate the ability of H-B&B to scale up and deploy a huge number of nodes. Second, we evaluate its efficiency and its ability to avoid bottlenecks. Finally, we evaluate its ability to scale down and to manage the release or volatility of part of the nodes.
- Hierarchical Branch and Bound Algorithm for Computational Grids (Elsevier, 2012)
  Bendjoudi, Ahcène; Melab, Nouredine; Talbi, El-Ghazali
  Branch and Bound (B&B) algorithms are efficiently used for the exact resolution of combinatorial optimization problems (COPs). They are easy to parallelize using the Master/Worker (MW) paradigm but limited in scalability when solving large instances of COPs in large-scale environments such as grids. Indeed, the master process rapidly becomes a bottleneck. In this paper, we propose H-B&B, a new approach for parallel B&B based on a hierarchical MW paradigm, in order to deal with the scalability issue of the traditional MW-based B&B. The hierarchy is built dynamically and evolves over time according to the dynamic acquisition of computing nodes. The inner nodes of the hierarchy (masters) perform branching operations to generate sub-trees, and the leaves (workers) perform a complete exploration of these sub-trees. Therefore, in addition to the parallel exploration of sub-trees, parallel branching is adopted. H-B&B is applied to the Flow-Shop scheduling problem. Unlike most existing grid-based B&B algorithms, H-B&B has been experimented on a real computational grid, i.e. Grid'5000. The results demonstrate the scalability and efficiency of H-B&B.
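The division of labour described above, masters that only branch and workers that fully explore the sub-trees they receive, can be sketched as follows. The recursion depth standing in for the hierarchy, the callable interface and the toy permutation example are assumptions made for illustration, not H-B&B's distributed protocol.

```python
def master_branch(subproblem, branch, cost, levels_below):
    """A master only branches; each child is handed one level down, either to a
    sub-master or, at the last level, to a worker that explores it completely."""
    results = []
    for child in branch(subproblem):
        if levels_below > 1:
            results.extend(master_branch(child, branch, cost, levels_below - 1))
        else:
            results.append(worker_explore(child, branch, cost))
    return results

def worker_explore(subtree_root, branch, cost):
    """Completely explore the received sub-tree (depth-first, no pruning shown)."""
    best, stack = float("inf"), [subtree_root]
    while stack:
        node = stack.pop()
        children = branch(node)
        if not children:                       # complete solution reached
            best = min(best, cost(node))
        else:
            stack.extend(children)
    return best

# Toy usage: a two-level hierarchy of masters over permutations of three jobs.
jobs = (3, 1, 2)
branch = lambda p: [p + (j,) for j in jobs if j not in p]
cost = lambda p: sum((i + 1) * d for i, d in enumerate(p))
print(master_branch((), branch, cost, levels_below=2))   # one best value per worker
```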
- Impact of Genetic Algorithms Operators on Association Rules Extraction (CERIST, 2016-10-02)
  Hamdad, Leila; Ournani, Zakaria; Benatchba, Karima; Bendjoudi, Ahcène
  In this paper, we study the impact of GA components such as encoding and different crossover, mutation and replacement strategies on the number of extracted association rules and their quality. Moreover, we propose a strategy to manage the population: the latter is organized into classes, each one enclosing rules of the same size. Each class can be seen as a population on which a GA is applied. All tests are conducted on two types of benchmarks, synthetic and real ones of different sizes.