References of "Cordy, Maxime 50027892"
     in
Bookmark and Share    
Full Text
Peer Reviewed
Are mutants really natural? A study on how “naturalness” helps mutant selection
Jimenez, Matthieu UL; Titcheu Chekam, Thierry UL; Cordy, Maxime UL et al

in 12th International Symposium on Empirical Software Engineering and Measurement (ESEM'18) (2018, October 11)


Background: Code is repetitive and predictable in a way that is similar to natural language. This means that code is "natural" and this "naturalness" can be captured by natural language modelling techniques. Such models promise to capture the program semantics and identify source code parts that "smell", i.e., parts that are strange, badly written and generally error-prone (likely to be defective). Aims: We investigate the use of natural language modelling techniques in mutation testing (a testing technique that uses artificial faults). We thus seek to identify how well artificial faults simulate real ones and, ultimately, to understand how natural artificial faults can be. Our intuition is that natural mutants, i.e., mutants that are predictable (follow the implicit coding norms of developers), are semantically useful and generally valuable (to testers). We also expect mutants located on unnatural code locations (which are generally linked with error-proneness) to be of higher value than those located on natural code locations. Method: Based on this idea, we propose mutant selection strategies that rank mutants according to a) their naturalness (naturalness of the mutated code), b) the naturalness of their locations (naturalness of the original program statements) and c) their impact on the naturalness of the code that they apply to (naturalness differences between original and mutated statements). We empirically evaluate these issues on a benchmark set of 5 open-source projects, involving more than 100k mutants and 230 real faults. Based on the fault set, we estimate the utility (i.e., capability to reveal faults) of mutants selected on the basis of their naturalness, and compare it against the utility of randomly selected mutants. Results: Our analysis shows that there is no link between naturalness and the fault revelation utility of mutants. We also demonstrate that naturalness-based mutant selection performs similarly to (in fact slightly worse than) random mutant selection. Conclusions: Our findings are negative, but we consider them interesting as they refute a strong intuition: fault revelation is independent of the mutants' naturalness.
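As an illustration of the mutant-ranking strategies sketched in this abstract, here is a minimal, assumed Python example (not the authors' implementation): it scores statements with a simple add-one-smoothed bigram model and orders mutants by the naturalness of the mutated code and by the naturalness difference between mutated and original statements. The corpus, the mutants and the cross_entropy helper are all hypothetical.

```python
# A minimal, assumed sketch (not the paper's implementation): rank mutants by the
# cross-entropy that a simple add-one-smoothed bigram model assigns to them.
import math
from collections import Counter

def train_bigram(token_sequences):
    """Count unigrams and bigrams over the tokenized statements of the original code."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in token_sequences:
        padded = ["<s>"] + tokens
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def cross_entropy(tokens, unigrams, bigrams):
    """Average negative log2 probability per token (lower = more 'natural')."""
    vocab = len(unigrams) + 1            # +1 for unseen tokens
    padded = ["<s>"] + tokens
    bits = 0.0
    for prev, cur in zip(padded, padded[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)   # add-one smoothing
        bits += -math.log2(p)
    return bits / len(tokens)

# Hypothetical corpus and mutants; each mutant pairs the original statement
# with its mutated version (both already tokenized).
corpus = [["if", "(", "x", ">", "0", ")"], ["return", "x", "+", "y", ";"]]
mutants = [(["return", "x", "+", "y", ";"], ["return", "x", "-", "y", ";"]),
           (["if", "(", "x", ">", "0", ")"], ["if", "(", "x", ">=", "0", ")"])]

uni, bi = train_bigram(corpus)
# Strategy a): rank by naturalness of the mutated code.
by_naturalness = sorted(mutants, key=lambda m: cross_entropy(m[1], uni, bi))
# Strategy c): rank by the naturalness difference between mutated and original statements.
by_impact = sorted(mutants, key=lambda m: cross_entropy(m[1], uni, bi) - cross_entropy(m[0], uni, bi))
print(by_naturalness[0][1], by_impact[0][1])
```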

Full Text
Peer Reviewed
TUNA: TUning Naturalness-based Analysis
Jimenez, Matthieu UL; Cordy, Maxime UL; Le Traon, Yves UL et al

in 34th IEEE International Conference on Software Maintenance and Evolution, Madrid, Spain, 26-28 September 2018 (2018, September 26)


Natural language processing techniques, in particular n-gram models, have been applied successfully to facilitate a number of software engineering tasks. However, in our related ICSME ’18 paper, we have shown that the conclusions of a study can drastically change depending on how the code is tokenized and how the n-gram model used is parameterized. These choices are thus of utmost importance, and one must make them carefully. To show this and allow the community to benefit from our work, we have developed TUNA (TUning Naturalness-based Analysis), a Java software artifact to perform naturalness-based analyses of source code. To the best of our knowledge, TUNA is the first open-source, end-to-end toolchain to carry out source code analyses based on naturalness.

Full Text
Peer Reviewed
On the impact of tokenizer and parameters on N-gram based Code Analysis
Jimenez, Matthieu UL; Cordy, Maxime UL; Le Traon, Yves UL et al

Scientific Conference (2018, September)


Recent research shows that language models, such as n-gram models, are useful for a wide variety of software engineering tasks, e.g., code completion, bug identification, code summarisation, etc. However, such models require the appropriate setting of numerous parameters. Moreover, the different ways one can read code essentially yield different models (based on the different sequences of tokens). In this paper, we focus on n-gram models and evaluate how the choice of tokenizer, smoothing, unknown threshold and n values impacts the predictive ability of these models. Thus, we compare the use of multiple tokenizers and sets of different parameters (smoothing, unknown threshold and n values) with the aim of identifying the most appropriate combinations. Our results show that the Modified Kneser-Ney smoothing technique performs best, while the best n values depend on the choice of tokenizer, with values of 4 or 5 offering a good trade-off between entropy and computation time. Interestingly, we find that tokenizers treating the code as simple text are the most robust ones. Finally, we demonstrate that the differences between the tokenizers are of practical importance and have the potential to change the conclusions of a given experiment.
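As a rough, assumed illustration of the kind of comparison described in this abstract (not the paper's tooling, and with add-one smoothing standing in for Modified Kneser-Ney), the snippet below scores the same token stream with n-gram models of different orders and prints the resulting cross-entropy. The train/test token lists and the whitespace tokenizer are hypothetical.

```python
# A rough, assumed illustration (not the paper's tooling): score the same token
# stream with n-gram models of different orders n. Add-one smoothing stands in
# here for the Modified Kneser-Ney smoothing evaluated in the paper.
import math
from collections import Counter

def ngram_entropy(train_tokens, test_tokens, n):
    """Cross-entropy (bits per token) of test_tokens under an order-n model of train_tokens."""
    pad = ["<s>"] * (n - 1)
    grams = Counter(zip(*[(pad + train_tokens)[i:] for i in range(n)]))
    ctxs = Counter(zip(*[(pad + train_tokens)[i:] for i in range(n - 1)])) if n > 1 else None
    vocab = len(set(train_tokens)) + 1                      # +1 for unseen tokens
    padded, bits = pad + test_tokens, 0.0
    for i in range(len(test_tokens)):
        gram = tuple(padded[i:i + n])
        ctx_count = ctxs[gram[:-1]] if n > 1 else len(train_tokens)
        bits += -math.log2((grams[gram] + 1) / (ctx_count + vocab))   # add-one smoothing
    return bits / len(test_tokens)

# Hypothetical token streams produced by a simple whitespace tokenizer.
train = "int i = 0 ; for ( int j = 0 ; j < n ; j ++ ) { sum += a [ j ] ; }".split()
test = "for ( int k = 0 ; k < n ; k ++ ) { total += a [ k ] ; }".split()
for n in (1, 2, 3, 4):
    print(f"n={n}: {ngram_entropy(train, test, n):.2f} bits/token")
```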

Full Text
Peer Reviewed
Enabling lock-free concurrent workers over temporal graphs composed of multiple time-series
Fouquet, Francois; Hartmann, Thomas UL; Mosser, Sébastien et al

in 33rd Annual ACM Symposium on Applied Computing (SAC'18) (2018, April)


Time series are commonly used to store temporal data, e.g., sensor measurements. However, when it comes to complex analytics and learning tasks, these measurements have to be combined with structural context data. Temporal graphs, connecting multiple time-series, have proven to be very suitable for organizing such data and ultimately empower analytic algorithms. Computationally intensive tasks often need to be distributed and parallelized among different workers. For tasks that cannot be split into independent parts, several workers have to concurrently read and update these shared temporal graphs. This leads to inconsistency risks, especially in the case of frequent updates. Distributed locks can mitigate these risks but come at a very high performance cost. In this paper, we present a lock-free approach that allows temporal graphs to be modified concurrently. Our approach is based on a composition operator able to perform online reconciliation of concurrent modifications of temporal graphs. We evaluate the efficiency and scalability of our approach compared to lock-based approaches.
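To illustrate the general idea of reconciling concurrent edits without locks, here is an assumed toy sketch in Python (not the composition operator of the paper): it merges the edit logs of two hypothetical workers into a shared temporal graph using an order-independent rule, so composing the logs in either order yields the same graph.

```python
# An assumed toy sketch, not the paper's composition operator: merge the edit
# logs of concurrent workers into one temporal graph with an order-independent rule.

# An edit is (node_id, attribute, timestamp, value); each worker keeps its own log.
worker_a = [("n1", "temp", 10, 21.5), ("n1", "temp", 20, 22.0)]
worker_b = [("n1", "temp", 20, 22.3), ("n2", "temp", 10, 19.0)]

def compose(*edit_logs):
    """Build node -> attribute -> {timestamp: value}. Conflicts on the same
    (node, attribute, timestamp) are resolved by keeping the maximum value, an
    arbitrary but commutative and associative rule chosen for this sketch."""
    graph = {}
    for log in edit_logs:
        for node, attr, ts, value in log:
            series = graph.setdefault(node, {}).setdefault(attr, {})
            series[ts] = value if ts not in series else max(series[ts], value)
    return graph

# Because the merge rule is order-independent, applying the logs in any order
# yields the same graph, which is what makes lock-free composition possible here.
assert compose(worker_a, worker_b) == compose(worker_b, worker_a)
print(compose(worker_a, worker_b))
```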

Full Text
On the Naturalness of Mutants
Jimenez, Matthieu UL; Cordy, Maxime UL; Kintis, Marinos UL et al

E-print/Working paper (2017)
