Academic & Scientific Articles
Permanent URI for this community: http://dl.cerist.dz/handle/CERIST/3
Search Results (3 items)
Item: Privacy-preserving remote deep-learning-based inference under constrained client-side environment (Springer, 2023)
Authors: Boulemtafes, Amine; Derhab, Abdelouahid; Ait Ali Braham, Nassim; Challal, Yacine
Abstract: The remote deep learning paradigm raises important privacy concerns related to clients' sensitive data and to deep learning models. However, addressing these concerns may come at the expense of additional client-side overhead, which does not suit applications that rely on constrained environments. In this paper, we propose a privacy-preserving solution for deep-learning-based inference that ensures effectiveness and privacy while meeting the efficiency requirements of constrained client-side environments. The solution adopts the non-colluding two-server architecture, which prevents accuracy loss (as it avoids approximating activation functions) and suits constrained client sides thanks to its low overhead. It also preserves privacy by combining two reversible perturbation techniques with the Paillier homomorphic encryption scheme. Compared to the conventional homomorphic encryption approach, the client-side overhead evaluation shows an improvement of more than two thousand times in execution time and more than thirty times in transmitted data size.
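For context on the baseline these results are measured against, the sketch below shows conventional Paillier-based inference over a single fully connected layer using the open-source phe (python-paillier) package: the client encrypts its input element-wise and the server evaluates y = Wx + b directly on ciphertexts. The toy dimensions, key size, and variable names are illustrative assumptions, not details taken from the paper.

```python
# Toy additively homomorphic evaluation of one fully connected layer with
# python-paillier: the client encrypts its input, the server computes
# y = Wx + b on ciphertexts, and the client decrypts the result.
import numpy as np
from phe import paillier

rng = np.random.default_rng(0)

# Client side: key generation and element-wise encryption of the input.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
x = rng.random(4)
enc_x = [public_key.encrypt(float(v)) for v in x]

# Server side: Paillier supports ciphertext + ciphertext and
# ciphertext * plaintext scalar, which is enough for a linear layer.
W = rng.random((3, 4))
b = rng.random(3)
enc_y = []
for i in range(W.shape[0]):
    acc = public_key.encrypt(float(b[i]))
    for j in range(W.shape[1]):
        acc += enc_x[j] * float(W[i, j])
    enc_y.append(acc)

# Client side: decrypt and verify against the plaintext computation.
y = np.array([private_key.decrypt(c) for c in enc_y])
assert np.allclose(y, W @ x + b)
```

The per-element encryptions, large ciphertexts, and decryptions in this baseline are the kind of client-side costs that the perturbation-based design above is reported to reduce.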
Item: PReDIHERO – Privacy-Preserving Remote Deep Learning Inference based on Homomorphic Encryption and Reversible Obfuscation for Enhanced Client-side Overhead in Pervasive Health Monitoring (IEEE, 2021)
Authors: Boulemtafes, Amine; Derhab, Abdelouahid; Ait Ali Braham, Nassim; Challal, Yacine
Abstract: Homomorphic encryption is one of the most promising techniques for addressing the privacy concerns raised by the remote deep learning paradigm while maintaining high classification accuracy. However, homomorphic-encryption-based solutions incur high computation and communication overhead, which limits their adoption in pervasive health monitoring applications with constrained client-side devices. In this paper, we propose PReDIHERO, an improved privacy-preserving solution for remote deep learning inference based on homomorphic encryption. The proposed solution applies a reversible obfuscation technique that protects sensitive information and reduces the client-side overhead compared to the conventional homomorphic encryption approach. The solution tackles three main heavyweight client-side tasks: encryption and transmission of private data, refreshing of encrypted data, and outsourcing the computation of activation functions. Client-side efficiency is evaluated on a healthcare dataset and compared to a conventional homomorphic encryption approach. The results show that PReDIHERO requires increasingly less time and storage than conventional solutions as more inferences are requested. At two hundred inferences, the improvement ratio reaches more than 30 times for computation overhead and more than 8 times for communication overhead. The same behavior is observed for sequential-data and batch inferences, where we record improvement ratios of more than 100 times for computation overhead and more than 20 times for communication overhead.
Item: PRIviLY: Private Remote Inference over fulLY connected deep networks for pervasive health monitoring with constrained client-side (Elsevier, 2023-09)
Authors: Boulemtafes, Amine; Derhab, Abdelouahid; Challal, Yacine
Abstract: The remote deep learning paradigm makes it possible to better leverage the power of deep neural networks in pervasive health monitoring (PHM) applications, especially by accommodating constrained client-side environments. However, remote deep learning in the context of PHM must ensure three properties: (1) meet the high accuracy requirements of the healthcare domain, (2) satisfy the client-side constraints, and (3) cope with the privacy requirements related to the high sensitivity of health data. Several privacy-preserving solutions for remote deep learning exist in the literature, but many of them fail to fully address the PHM requirements, especially with respect to constrained client-side environments. To that end, we propose PRIviLY, a novel privacy-preserving remote inference solution designed specifically for the popular Fully Connected Deep Networks (FCDNs). PRIviLY avoids encryption for the privacy preservation of sensitive information, which fully prevents accuracy loss and alleviates the server-side hardware requirements. In addition, PRIviLY adopts a non-colluding two-server architecture and leverages the linear computations of FCDNs together with reversible random perturbation and permutation techniques to preserve the privacy of sensitive information while meeting the low-overhead requirements of constrained client sides. At the cloud server, the efficiency evaluation shows that PRIviLY achieves an improvement ratio of 4 to more than 15 times for communication and a minimum improvement ratio of 135 times for computation overhead. At the intermediate server, the minimum improvement ratio is more than 10,900 times for computation, while for communication the improvement ratio varies from 5 to more than 21 times. On the client side, PRIviLY incurs an additional overhead of about 27% for communication and between 16% and 27% for computation.
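The abstract above points to the linearity of FCDN layers combined with reversible random perturbation and permutation. The numpy sketch below only illustrates that algebra on a single layer; the variable names, the single additive mask, and the direct computation of the correction term W·r at the end are simplifying assumptions made for verification, not the PRIviLY protocol itself, which, per the abstract, distributes the computation across a non-colluding two-server architecture.

```python
# Toy illustration of reversible random perturbation and permutation over a
# single fully connected layer y = Wx + b, exploiting the layer's linearity.
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((3, 5))      # model parameters (held server-side)
b = rng.standard_normal(3)
x = rng.standard_normal(5)           # client's sensitive input

# Client side: additive random mask r and random permutation P, both secret.
r = rng.standard_normal(5)
perm = rng.permutation(5)            # row indices of the permutation P
x_obf = (x + r)[perm]                # the client releases P(x + r), never x

# Server side: with column-permuted weights W P^T, the layer output on the
# obfuscated input equals the output on the masked input x + r.
y_masked = W[:, perm] @ x_obf + b    # = W(x + r) + b

# Unmasking: the mask's contribution W r is linear, so it can be cancelled.
# It is computed directly here only to check the algebra; a two-server
# protocol would produce this correction without revealing W to the client.
y = y_masked - W @ r
assert np.allclose(y, W @ x + b)
```

Because every step stays in plaintext arithmetic, this style of obfuscation avoids the ciphertext expansion and approximation of activation functions associated with encryption-based approaches, which is consistent with the accuracy and overhead claims in the abstract.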