References of "Varrette, Sébastien 50003258"
     in
Bookmark and Share    
Théorie des Codes : Compression, Cryptage et Correction
Dumas, J.-G.; Roch, J.-L.; Tannier, E. et al

Book published by Dunod, 2nd edition (2014)

Evaluating the HPC Performance and Energy-Efficiency of Intel and ARM-based systems with synthetic and bioinformatics workloads
Plugaru, Valentin UL; Varrette, Sébastien UL; Pinel, Frédéric UL et al

Report (2014)

The increasing demand for High Performance Computing (HPC), paired with the higher power requirements of ever-faster systems, has led to the search for architectures that are both performant and more energy-efficient. This article compares and contrasts the performance and energy efficiency of two modern clusters, one based on traditional Intel Xeon processors and one on low-power ARM processors, which are tested with the recently developed High Performance Conjugate Gradient (HPCG) benchmark and the ABySS, FASTA and MrBayes bioinformatics applications. We show that the ARM cluster achieves a higher Performance-per-Watt ratio and lower energy usage during the tests, but this does not offset the much faster job completion rate obtained by the Intel cluster, making the latter more suitable for the considered workloads given the disparity in the performance results.
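
As a rough illustration of the metric underlying such comparisons, the short Python sketch below derives a Performance-per-Watt figure from a benchmark score and sampled power draw; the function names and all numbers are hypothetical placeholders, not the report's data.

```python
# Hedged sketch: Performance-per-Watt from a benchmark score and power samples.
# Every figure below is an illustrative placeholder.

def average_power(samples_watts):
    """Mean power draw (W) over the samples collected during a run."""
    return sum(samples_watts) / len(samples_watts)

def performance_per_watt(gflops, samples_watts):
    """Performance-per-Watt (GFLOPS/W) for a single benchmark run."""
    return gflops / average_power(samples_watts)

# Illustrative comparison of two clusters on the same workload.
intel_ppw = performance_per_watt(gflops=220.0, samples_watts=[410, 415, 398, 405])
arm_ppw = performance_per_watt(gflops=35.0, samples_watts=[48, 51, 50, 49])
print(f"Intel: {intel_ppw:.3f} GFLOPS/W, ARM: {arm_ppw:.3f} GFLOPS/W")
```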

Foundations of Coding: Compression, Encryption, Error-Correction
Dumas, J.-G.; Roch, J.-L.; Tannier, E. et al

Book published by Wiley & Sons (2014)

The topic of this book is the automatic transmission of numerical information. We focus on the structure of information, regardless of the type of transmission medium. Information can be of any kind as long as we can give a numerical representation of it: for example texts, images, sounds, videos. Transmission of this type of data is ubiquitous in technology, especially in telecommunications. Hence, it is necessary to rely on solid foundations for that transmission to be reliable. In this context, we explain how to structure the information so that its transmission is safe, error-free, efficient and fast. The book can be used as a reference by teachers, researchers or companies involved in telecommunications or information security, and the material it contains should remain relevant in the coming years, as it has been until now.

Peer Reviewed
A Holistic Model of the Performance and the Energy-Efficiency of Hypervisors in an HPC Environment
Guzek, Mateusz UL; Varrette, Sébastien UL; Plugaru, Valentin UL et al

in Concurrency & Computation : Practice & Experience (2014)

Peer Reviewed
Comparing the Performance and Power Usage of GPU and ARM Clusters for Map-Reduce
Delplace, V.; Manneback, P.; Pinel, Frédéric UL et al

in Proc. of the 3rd Intl. Conf. on Cloud and Green Computing (CGC'13) (2013, October)

Peer Reviewed
HPC Performance and Energy-Efficiency of Xen, KVM and VMware Hypervisors
Varrette, Sébastien UL; Guzek, Mateusz UL; Plugaru, Valentin UL et al

in Proc. of the 25th Symposium on Computer Architecture and High Performance Computing (SBAC-PAD 2013) (2013, October)

With a growing concern over the considerable energy consumed by HPC platforms and data centers, research efforts are targeting green approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple VM instances. Even today, it remains unclear whether the overhead induced by virtualization and the corresponding hypervisor middleware suits an environment as demanding as an HPC platform. In this paper, we analyze from an HPC perspective the three most widespread virtualization frameworks, namely Xen, KVM and VMware ESXi, and compare them with a baseline environment running in native mode. We performed our experiments on the Grid’5000 platform by measuring the results of the reference HPL benchmark. Power measurements were also performed in parallel to quantify the potential energy efficiency of the virtualized environments. Overall, our study offers novel incentives toward in-house HPC platforms running without any virtualization framework.
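
The sketch below illustrates, under invented placeholder figures, how a hypervisor's overhead and energy cost can be expressed relative to a native baseline; it is not the paper's measurement code and the HPL scores and energy values are assumptions for illustration only.

```python
# Hedged sketch: hypervisor overhead and energy cost relative to a native run.
# The HPL scores and energy-to-solution figures are invented placeholders.

def slowdown(native_gflops, virt_gflops):
    """Relative performance loss of a virtualized run versus native."""
    return 1.0 - virt_gflops / native_gflops

runs = {
    # environment: (HPL performance in GFLOPS, energy-to-solution in joules)
    "native": (180.0, 9.5e5),
    "Xen": (158.0, 1.02e6),
    "KVM": (162.0, 1.00e6),
    "ESXi": (150.0, 1.05e6),
}

native_gflops, native_energy = runs["native"]
for env, (gflops, energy) in runs.items():
    print(f"{env:>6}: {slowdown(native_gflops, gflops):6.1%} slower, "
          f"{energy / native_energy - 1.0:+6.1%} energy vs native")
```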

Performance tuning of applications for HPC systems employing Simulated Annealing optimization
Plugaru, Valentin UL; Georgatos, Fotis UL; Varrette, Sébastien UL et al

Report (2013)

Building fast software in an HPC environment raises great challenges, as software used for simulation and modelling is generally complex and has many dependencies. Current approaches involve manual tuning of compilation parameters in order to minimize the run time, based on a set of predefined defaults, but such an approach requires expert knowledge, is not scalable and can be very expensive in person-hours. In this paper we propose and develop a modular framework called POHPC that uses the Simulated Annealing meta-heuristic algorithm to automatically search for the optimal set of library options and compilation flags that give the best execution time for a library-application pair on a selected hardware architecture. The framework can be used in modern HPC clusters with a variety of batch scheduling systems as execution backends for the optimization runs, and will discover optimal combinations as well as invalid sets of options and flags that result in failed builds or application crashes. We demonstrate the optimization of the FFTW library working in conjunction with the high-profile community codes GROMACS and QuantumESPRESSO, whereby the suitability of the technique is validated.
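
A minimal sketch of the simulated-annealing loop such a framework might use is given below; the flag set, the `measure_runtime` stand-in and the cooling schedule are assumptions for illustration, not POHPC's actual implementation, which drives real builds and runs through a batch scheduler.

```python
# Hedged sketch of a simulated-annealing search over compilation flags.
# Flag choices, timings and the schedule are illustrative placeholders.
import math
import random

FLAG_CHOICES = [["-O2", "-O3"], ["", "-funroll-loops"], ["", "-march=native"]]

def measure_runtime(flags):
    """Stand-in for: build with `flags`, run the benchmark, return wall time."""
    rng = random.Random(" ".join(flags))        # deterministic fake timing
    return 100.0 + rng.uniform(-10.0, 10.0)

def neighbour(flags):
    """Mutate one randomly chosen flag position."""
    new = list(flags)
    i = random.randrange(len(FLAG_CHOICES))
    new[i] = random.choice(FLAG_CHOICES[i])
    return new

def anneal(steps=200, temperature=5.0, cooling=0.98):
    current = [random.choice(c) for c in FLAG_CHOICES]
    current_time = measure_runtime(current)
    best, best_time = current, current_time
    for _ in range(steps):
        cand = neighbour(current)
        cand_time = measure_runtime(cand)
        # Always accept improvements; accept regressions with a
        # temperature-dependent probability to escape local minima.
        if cand_time < current_time or \
           random.random() < math.exp((current_time - cand_time) / temperature):
            current, current_time = cand, cand_time
            if cand_time < best_time:
                best, best_time = cand, cand_time
        temperature *= cooling
    return best, best_time

print(anneal())
```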

Peer Reviewed
Dependency Analysis for Critical Infrastructure Security Modelling: A Case Study within the Grid'5000 Project
Schaberreiter, Thomas UL; Varrette, Sébastien UL; Bouvry, Pascal UL et al

in Proc. of the 3rd IFIP Intl. SeCIHD'2013 Workshop, part of the 8th Intl. Conf. on Availability, Reliability and Security (ARES'13) (2013, September)

Peer Reviewed
ShadObf: A C-source Obfuscator based on Multi-objective Optimization Algorithms
Bertholon, Benoit UL; Varrette, Sébastien UL; Martinez, S.

in Proc. of the 16th Intl. Workshop on Nature Inspired Distributed Computing (NIDISC 2013), part of the 27th IEEE/ACM Intl. Parallel and Distributed Processing Symposium (IPDPS 2013) (2013, May)

The development of the new Cloud Computing paradigm has led to a reordering of the priorities among security issues. When running private code on a public Cloud or on any remote machine, its owner has no guarantee that the code cannot be reverse engineered, understood and modified. One solution for code owners wishing to protect their intellectual property is to obfuscate their algorithms. Obfuscation of source code is a mechanism that modifies a source code so as to make it unintelligible to humans, even with the help of computing resources. More precisely, the objective is to conceal the purpose of a program or its logic without altering its functionality, thus preventing tampering with or reverse engineering of the program. Obfuscation is usually performed by applying transformations to the initial source code, but it raises many open questions: which transformations should be chosen? In which order should the obfuscator apply them? How can we quantify the obfuscation capacity of a given program? In order to answer these questions, we propose SHADOBF, an obfuscation framework based on evolutionary heuristics designed to optimize, for a given input C program, the sequence of transformations that should be applied to the source code to improve its obfuscation capacity. This last measure involves the combination of well-known metrics coming from the Software Engineering area, which are optimized simultaneously thanks to Multi-Objective Evolutionary Algorithms (MOEAs). We have validated our approach on a classical matrix multiplication program; experiments on other applications are still in progress. Some experiments, presented here, have been performed on basic but representative examples to validate the feasibility of the method.
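
The following sketch illustrates the Pareto-dominance selection idea behind such a multi-objective search over transformation sequences; the transformation names and metric values are hypothetical and do not come from SHADOBF.

```python
# Hedged sketch of Pareto-dominance filtering over obfuscation-transformation
# sequences. Transformation names and metric values are hypothetical.

def dominates(a, b):
    """True if metric vector `a` Pareto-dominates `b` (all metrics maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated (sequence, metrics) pairs."""
    return [c for c in candidates
            if not any(dominates(other[1], c[1])
                       for other in candidates if other is not c)]

candidates = [
    # (transformation sequence, (complexity metric, identifier entropy, nesting depth))
    (["rename-identifiers", "flatten-cfg"], (12.0, 3.1, 4)),
    (["insert-opaque-predicates", "rename-identifiers"], (15.0, 2.8, 5)),
    (["flatten-cfg"], (10.0, 2.0, 3)),
]
for sequence, metrics in pareto_front(candidates):
    print(sequence, metrics)
```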

Peer Reviewed
Performance Evaluation and Energy Efficiency of High-Density HPC Platforms Based on Intel, AMD and ARM Processors
Jarus, M.; Varrette, Sébastien UL; Oleksiak, A. et al

in Proc. of the Intl. Conf. on Energy Efficiency in Large Scale Distributed Systems (EE-LSDS'13) (2013, April)

Due to the growth of energy consumption by HPC servers and data centers, many research efforts aim at addressing the problem of energy efficiency. Hence, the use of low-power processors such as Intel Atom and ARM Cortex has recently gained more interest. In this article, we compare the performance and energy efficiency of cutting-edge high-density HPC platform enclosures featuring either very high-performing processors (such as Intel Core i7 or E7) with low power-efficiency, or the reverse, i.e. energy-efficient processors (such as Intel Atom, AMD Fusion or ARM Cortex A9) with limited computing capacity. Our objective was to quantify, in a very pragmatic way and using a set of reference benchmarks and applications run in an HPC environment, the trade-off that exists between the computing and power efficiency of these general-purpose CPUs.
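
As a rough illustration of how such a trade-off can be quantified, the sketch below computes energy-to-solution from an assumed average power draw and runtime; all figures are placeholders, not the paper's measurements.

```python
# Hedged sketch: energy-to-solution, a common way to compare a fast but
# power-hungry node with a slower, frugal one. Figures are placeholders.

def energy_to_solution(avg_power_watts, runtime_seconds):
    """Total energy (in joules) consumed to complete the workload."""
    return avg_power_watts * runtime_seconds

# e.g. a high-performing node versus a low-power node on the same job
print(energy_to_solution(avg_power_watts=95.0, runtime_seconds=3600))
print(energy_to_solution(avg_power_watts=7.5, runtime_seconds=28000))
```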

Peer Reviewed
A Holistic Model of the Performance and the Energy-Efficiency of Hypervisors in an HPC Environment
Guzek, Mateusz UL; Varrette, Sébastien UL; Plugaru, Valentin UL et al

in Energy Efficiency in Large Scale Distributed Systems (2013)

Peer Reviewed
ShadObf: A C-Source Obfuscator Based on Multi-objective Optimisation Algorithms
Bertholon, Benoit UL; Varrette, Sebastien UL; Martinez, Sebastien

in Proceedings of the 2013 IEEE 27th International Symposium on Parallel and Distributed Processing Workshops and PhD Forum (2013)

Peer Reviewed
JShadObf: A JavaScript Obfuscator Based on Multi-Objective Optimization Algorithms
Bertholon, Benoit UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Lopez, Javier; Huang, Xinyi; Sandhu, Ravi (Eds.) Network and System Security (2013)

Peer Reviewed
Expected Running Time of Parallel Evolutionary Algorithms on Unimodal Pseudo-Boolean Functions over Small-World Networks
Muszynski, Jakub UL; Varrette, Sébastien UL; Bouvry, Pascal UL

in Proc. of the IEEE Congress on Evolutionary Computation (CEC'2013) (2013)

This paper proposes a theoretical and experimental analysis of the expected running time of an elitist parallel Evolutionary Algorithm (pEA) based on an island model executed over small-world networks. Our study assumes the resolution of optimization problems based on unimodal pseudo-Boolean functions. In particular, for such a function with d values, we improve the previous asymptotic upper bound for the expected parallel running time from O(d√n) to O(d log n). This study is a first step towards the analysis of the influence of more complex network topologies (like random graphs created by P2P networks) on the runtime of pEAs. A concrete implementation of the analysed algorithm has been performed on top of the ParadisEO framework and run on the HPC platform of the University of Luxembourg (UL). Our experiments confirm the expected speed-up demonstrated in this article and show the benefit that pEAs can gain from a small-world network topology.
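
A toy sketch of an elitist island-model EA on OneMax (a unimodal pseudo-Boolean function) with migration over a small-world-like topology is shown below; the topology construction and all parameters are simplifications assumed for illustration, not the ParadisEO implementation used in the paper.

```python
# Hedged sketch: elitist island-model (1+1) EA on OneMax with migration over
# a small-world-like ring topology. Parameters are illustrative placeholders.
import random

def onemax(bits):
    return sum(bits)

def mutate(bits, rng):
    # Flip each bit independently with probability 1/n (standard bit mutation).
    return [b ^ (rng.random() < 1.0 / len(bits)) for b in bits]

def small_world_ring(islands, rewire_p, rng):
    """Ring lattice where each node gains a random long-range link with prob. rewire_p."""
    edges = {i: {(i - 1) % islands, (i + 1) % islands} for i in range(islands)}
    for i in range(islands):
        if rng.random() < rewire_p:
            edges[i].add(rng.randrange(islands))
    return edges

def run(n=50, islands=8, rewire_p=0.1, seed=0):
    rng = random.Random(seed)
    topology = small_world_ring(islands, rewire_p, rng)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    generations = 0
    while max(onemax(ind) for ind in population) < n:
        generations += 1
        # Local elitist (1+1) step on every island.
        for i in range(islands):
            child = mutate(population[i], rng)
            if onemax(child) >= onemax(population[i]):
                population[i] = child
        # Migration: each island adopts the best individual in its neighbourhood.
        population = [max((population[j] for j in topology[i] | {i}), key=onemax)
                      for i in range(islands)]
    return generations

print(run())   # generations until the optimum appears somewhere in the archipelago
```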

Peer Reviewed
Trust Based Interdependency Weighting for On-line Risk Monitoring in Interdependent Critical Infrastructures
Caldeira, F.; Schaberreiter, Thomas UL; Varrette, Sébastien UL et al

in International Journal of Secure Software Engineering (2013), 4(4), 47-69

Using Data-flow analysis in MAS for power-aware HPC runs
Varrette, Sébastien UL; Danoy, Grégoire UL; Guzek, Mateusz UL et al

in Proc. of the IEEE Intl. Conf. on High Performance Computing and Simulation (HPCS'13) (2013)

Peer Reviewed
Hash function generation by means of Gene Expression Programming
Varrette, Sébastien UL; Muszynski, Jakub UL; Bouvry, Pascal UL

in Intl. Conf. on Cryptography and Security System (CSS’12) (2012, September), XII(3), 37-53

Cryptographic hash functions are fundamental primitives in modern cryptography and have many security applications (data integrity checking, cryptographic protocols, digital signatures, pseudo-random number generators, etc.). While novel hash functions are being designed (for instance in the framework of the SHA-3 contest organized by the National Institute of Standards and Technology (NIST)), cryptanalysts have exhibited a set of statistical metrics (propagation criterion, frequency analysis, etc.) able to assess the quality of new proposals. Also, rules to design "good" hash functions are now known and are followed in every reasonable proposal of a new hash scheme. This article investigates ways to build on this experience and on those metrics to automatically generate compression functions by means of Evolutionary Algorithms (EAs). Such functions are at the heart of the construction of iterative hash schemes and it is therefore crucial for them to exhibit good properties. The idea of using nature-inspired heuristics for the design of such cryptographic primitives is not new: this approach has been successfully applied in several previous works, typically using the Genetic Programming (GP) heuristic [1]. Here, we exploit a hybrid meta-heuristic for the evolutionary process called Gene Expression Programming (GEP) [2], which appeared computationally far more efficient than the GP paradigm used in previous papers. In this context, the GEPHashSearch framework is presented. As it is still a work in progress, this article focuses on the design aspects of this framework (definition of individuals, fitness objectives, etc.) rather than on complete implementation details and validation results. Note that we propose to tackle the generation of compression functions as a multi-objective optimization problem in order to identify the Pareto front, i.e. the set of non-dominated functions over the four fitness criteria considered. While this goal is not yet reached, the first experimental results in a mono-objective context are promising and open the perspective of fruitful contributions to the cryptographic community.
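
The sketch below illustrates one of the statistical metrics mentioned above, the avalanche/propagation criterion, applied to a toy 32-bit compression function; the function and parameters are placeholders, not GEPHashSearch individuals or its actual fitness code.

```python
# Hedged sketch of the avalanche/propagation criterion: flipping one input bit
# of a good compression function should flip about half of the output bits.
# The toy 32-bit function below is a placeholder, not a GEP-evolved individual.
import random

def toy_compression(block, state):
    """Placeholder 32-bit compression function f(block, state) -> new state."""
    x = (block * 2654435761 ^ state) & 0xFFFFFFFF
    return (((x << 13) | (x >> 19)) + block) & 0xFFFFFFFF

def avalanche_score(f, trials=1000, width=32, seed=0):
    rng = random.Random(seed)
    flipped = 0
    for _ in range(trials):
        block, state = rng.getrandbits(width), rng.getrandbits(width)
        bit = 1 << rng.randrange(width)
        # Count output bits that differ after flipping a single input bit.
        flipped += bin(f(block, state) ^ f(block ^ bit, state)).count("1")
    return flipped / (trials * width)   # ideal value is close to 0.5

print(avalanche_score(toy_compression))
```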

Peer Reviewed
Convergence Analysis of Evolutionary Algorithms in the Presence of Crash-Faults and Cheaters
Muszynski, Jakub UL; Varrette, Sébastien UL; Bouvry, Pascal UL et al

in Computers & Mathematics with Applications (2012), 64(12), 3805-3819

This paper analyzes the fault-tolerance nature of Evolutionary Algorithms (EAs) when executed in a distributed environment subjected to malicious acts. More precisely, the inherent resilience of EAs against two types of failures is considered: (1) crash faults, typically due to resource volatility, which lead to the loss of data and of part of the computation; (2) cheating faults, a far more complex kind of fault that can be modeled as the alteration of output values produced by some or all tasks of the program being executed. This last type of failure is due to the presence of cheaters on the computing platform. Most often in Global Computing (GC) systems such as BOINC, cheaters are attracted by the various incentives provided to stimulate volunteers to share their computing resources: cheaters typically seek to obtain rewards with little or no contribution to the system. In this paper, the Algorithm-Based Fault Tolerance (ABFT) aspects of EAs with respect to the above types of faults are characterized. Whereas the inherent resilience of EAs has been previously observed in the literature, a formal analysis of the impact of the considered faults on the executed EA, including a proof of convergence, is proposed here for the first time. Given the variety of problems addressed by EAs, this study will hopefully promote their usage in future developments around distributed computing platforms such as Desktop Grids, Volunteer Computing Systems or Cloud systems, where the resources cannot be fully trusted.
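
A toy sketch of the fault model on a (1+1) EA is given below: each evaluation may be lost (crash fault) or report an inflated fitness (cheating fault); the fault rates and the OneMax setting are assumptions for illustration, not the paper's formal model or proof.

```python
# Hedged sketch of the fault model on a toy (1+1) EA over OneMax: each remote
# evaluation may be lost (crash fault) or report an inflated fitness (cheating
# fault). Fault rates and parameters are illustrative assumptions.
import random

def onemax(bits):
    return sum(bits)

def faulty_evaluate(bits, crash_p, cheat_p, rng):
    """Simulate a remote fitness evaluation subject to crash and cheating faults."""
    if rng.random() < crash_p:
        return None                       # result lost; the task is simply discarded
    fitness = onemax(bits)
    if rng.random() < cheat_p:
        fitness += rng.randint(1, 5)      # a cheater reports a better value than the truth
    return fitness

def run(n=40, generations=2000, crash_p=0.1, cheat_p=0.05, seed=1):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        reported = faulty_evaluate(child, crash_p, cheat_p, rng)
        # Crashes waste the offspring; cheating can let slightly worse children
        # replace the parent, which the convergence analysis has to account for.
        if reported is not None and reported >= onemax(parent):
            parent = child
    return onemax(parent)

print(run())   # true fitness reached within the budget despite the faults
```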
