References of "Dissertations and theses"
PROACTIVE COMPUTING PARADIGM APPLIED TO THE PROGRAMMING OF ROBOTIC SYSTEMS
Chaychi, Samira UL

Doctoral thesis (2023)


This doctoral thesis is concerned with the development of advanced software for robotic systems, an area still in its experimental infancy, lacking essential methodologies from generic software engineering. A significant challenge within this domain is the absence of a well-established separation of concerns from the design phase. This deficiency is exemplified by Navigation 2, a real-world reference application for (semi-)autonomous robot journeys developed for and on top of the Robot Operating System (ROS): the project’s leading researchers encountered difficulties in maintaining and evolving their complex software, even for supposedly straightforward new functions, leading to a halt in further development. In response, this thesis first presents an alternative design and implementation approach that not only rectifies the issues but also raises the programming level of consistent robot behaviors. By leveraging the proactive computing paradigm, our dedicated software engineering model provides programmers with enhanced code extension, reusability, and maintenance capabilities. Furthermore, a key advantage of the model lies in its dynamic adaptability via on-the-fly strategy changes in decision-making. Second, in order to provide a comprehensive evaluation of the two systems, an exhaustive comparative study between Navigation 2 and the same application implemented along the lines of our model is conducted. This study covers thorough assessments at both compile time and runtime. Software metrics such as coupling, lack of cohesion, complexity, and various size measures are employed to quantify and visualize code quality and efficiency attributes. The CodeMR software tool aids in visualizing these metrics, while runtime analysis involves monitoring CPU and memory usage through the Datadog monitoring software.
Preliminary findings indicate that our implementation either matches or surpasses Navigation 2’s performance while simultaneously enhancing code structure and simplifying modifications and extensions of the code base.
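The "lack of cohesion" metric mentioned in the abstract can be illustrated with a minimal sketch. The snippet below computes LCOM1, one classical lack-of-cohesion variant, from a toy mapping of methods to the attributes they touch; the class and attribute names are invented for illustration, and CodeMR's actual metric definitions may differ from this simple form.

```python
from itertools import combinations

def lcom1(method_attrs):
    """LCOM1: number of method pairs sharing no attribute, minus pairs
    sharing at least one, floored at zero (higher = less cohesive)."""
    pairs = list(combinations(method_attrs.values(), 2))
    disjoint = sum(1 for a, b in pairs if not set(a) & set(b))
    shared = len(pairs) - disjoint
    return max(disjoint - shared, 0)

# Hypothetical navigation-controller class: which attributes each method uses.
nav_class = {
    "plan_path":   {"map", "goal"},
    "follow_path": {"goal", "velocity"},
    "log_status":  {"logfile"},
}
print(lcom1(nav_class))  # → 1: log_status shares nothing with the other two
```

A metrics tool applies this kind of computation per class over the whole code base, which is how structural differences between two implementations can be quantified and compared.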

Artificial Intelligence-enabled Automation for Compliance Checking against GDPR
Amaral Cejas, Orlando UL

Doctoral thesis (2023)


Requirements engineering (RE) is concerned with eliciting legal requirements from applicable regulations to enable developing legally compliant software. Current software systems rely heavily on data, some of which can be confidential, personal, or sensitive. To address the growing concerns about data protection and privacy, the General Data Protection Regulation (GDPR) has been introduced in the European Union (EU). Organizations, whether based in the EU or not, must comply with GDPR as long as they collect or process personal data of EU residents. Breaching GDPR can incur large fines reaching up to billions of euros. Privacy policies (PPs) and data processing agreements (DPAs) are documents regulated by GDPR to ensure, among other things, secure collection and processing of personal data. Such regulated documents can be used to elicit legal requirements that are in line with the organizations’ data protection policies. As a prerequisite to eliciting a complete set of legal requirements, however, these documents must be compliant with GDPR. Checking the compliance of regulated documents entirely manually is a laborious and error-prone task. As we elaborate below, this dissertation investigates utilizing artificial intelligence (AI) technologies to provide automated support for compliance checking against GDPR.
• AI-enabled Automation for Compliance Checking of PPs: PPs are technical documents stating the multiple privacy-related requirements that a system should satisfy in order to help individuals make informed decisions about sharing their personal data. We devise an automated solution that leverages natural language processing (NLP) and machine learning (ML), two sub-fields of AI, for checking the compliance of PPs against the applicable provisions in GDPR. Specifically, we create a comprehensive conceptual model capturing all information types pertinent to PPs, and we further define a set of compliance criteria for the automated compliance checking of PPs.
• NLP-based Automation for Compliance Checking of DPAs: DPAs are legally binding agreements between different organizations involved in the collection and processing of personal data to ensure that personal data remains protected. Using NLP semantic analysis technologies, we develop an automated solution that checks, at the phrasal level, the compliance of DPAs against GDPR. Our solution provides not only a compliance assessment but also detailed recommendations for avoiding GDPR violations.
• ML-enabled Automation for Compliance Checking of DPAs: To understand how different representations of GDPR requirements and different enabling technologies fare against one another, we develop an automated solution that utilizes a combination of conceptual modeling and ML. We further empirically compare the resulting solution with our previously proposed solution, which uses natural language to represent GDPR requirements and leverages rules alongside NLP semantic analysis for the automated support.
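As a rough illustration of criterion-by-criterion compliance checking, a toy version can be sketched as pattern rules over the DPA text. The criteria and patterns below are simplified stand-ins loosely inspired by GDPR Art. 28(3), not the thesis's actual rules or pipeline:

```python
import re

# Illustrative criteria only -- invented stand-ins, not the thesis's rules.
CRITERIA = {
    "confidentiality commitment": r"confidentialit",
    "security measures": r"(technical|organi[sz]ational) measures",
    "sub-processor authorisation": r"sub-?processor",
    "breach notification": r"(personal )?data breach",
}

def check_dpa(text):
    """Flag, per criterion, whether the DPA contains matching language;
    unmatched criteria point at potential GDPR violations."""
    lowered = text.lower()
    return {name: bool(re.search(pattern, lowered))
            for name, pattern in CRITERIA.items()}

dpa = ("The processor shall ensure confidentiality, implement appropriate "
       "technical measures, and notify the controller of any personal data breach.")
print([name for name, ok in check_dpa(dpa).items() if not ok])
# → ['sub-processor authorisation']
```

A real system, as the abstract notes, relies on NLP semantic analysis and machine learning rather than surface patterns; the sketch only conveys the shape of checking a document against a set of compliance criteria and reporting what is missing.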

Distributed cohesive radio systems for spaceborne applications
Martinez Marrero, Liz UL

Doctoral thesis (2023)


During the last decade, the use of small satellites has revolutionized the field of space exploration and communications. This has opened up new possibilities, such as the feasibility of grouping them into Cohesive Distributed Satellite Systems (CDSSs). A CDSS is a multi-satellite configuration that appears as a single solid entity from an external perspective, which includes data reception, processing, and transmission operations. This enables improvements in several DSS applications such as Earth observation, geolocation, navigation, imaging, and communications. The synchronization of CDSSs involves precisely aligning time, frequency, and phase among multiple satellites, which is a significant challenge due to the inherent characteristics of space-based environments. For instance, spacecraft mobility, round-trip delay, and resource constraints make the synchronization of DSSs more challenging than its equivalent in wireless terrestrial networks. However, it is simultaneously an unavoidable challenge for future space communications. This requirement applies not only to small-satellite DSSs; it is also needed to avoid interference in crowded orbits and to enable the federated satellite system paradigm. This thesis aims to identify the technical synchronization requirements and design the synchronization and coordination techniques to perform cohesive transmission in a DSS. First, we studied state-of-the-art synchronization techniques and analyzed their feasibility for DSSs. Additionally, we summarized other methods related to the synchronization of DSSs, such as inter-satellite ranging and positioning. Then, we considered a first approximation to the problem, assuming accurate time synchronization and relative positioning among the satellites in a DSS. This problem is equivalent to synchronizing the local oscillators’ (LOs’) phase in a precoding-enabled multi-beam satellite system.
One of the most significant synchronization impairments for implementing CDSSs is the phase noise of the LOs in different spacecraft. In this regard, the two-state phase noise model was implemented and integrated into the channel emulator of the MIMO end-to-end satellite emulator, which allowed us to validate the results included in this thesis. Next, we analyzed the impact of the phase errors and uncertainties on operating a precoded forward-link satellite communication system. We formally demonstrated that the uplink phase variations affect precoding performance even when all the LOs share a single frequency reference. Additionally, we identified the individual contributions of each system element to the overall synchronization uncertainties in practical precoding implementations. Moreover, for linear and non-linear precoding, we formally demonstrated that the user terminals (UTs) can track slow time variations in the channel if they equally affect all the beams. The compensation loop to mitigate these impairments was designed, implemented, and integrated into the gateway (GW) of the MIMO end-to-end satellite emulator. The solution is a closed-loop algorithm that uses the periodical channel phase measurements sent to the GW by the UTs as part of traditional precoding implementations. The proportional-integral controller included in the GW calculates the compensation phase required to align all the beams to the phase of the designated reference beam. In addition, we compared different approaches to combining the channel phase estimations obtained from the UTs, using the amplitude of the estimated channel and the UT’s thermal noise. The hardware implementations of the compensation loop and of the estimation combining were used in real-time experiments to assess the feasibility of the precoding technique for GEO satellite systems.
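The closed-loop idea described above can be sketched in a few lines. This is a toy discrete-time model, not the emulator's implementation: the channel is reduced to a single constant phase offset per beam, phase wrapping is ignored, and the gains and step count are arbitrary choices that happen to converge.

```python
def pi_phase_compensator(report_phase, ref=0.0, kp=0.4, ki=0.3, steps=200):
    """Toy PI loop at the gateway: each iteration takes the phase the
    user terminal reports for the beam, forms the error against the
    reference beam's phase, and updates the compensation phase."""
    comp, integ = 0.0, 0.0
    for _ in range(steps):
        err = ref - report_phase(comp)  # residual phase seen by the terminal
        integ += err
        comp = kp * err + ki * integ    # PI law: proportional + integral terms
    return comp

# Beam whose channel adds a 1.0 rad offset; the loop learns to cancel it.
channel = lambda comp: 1.0 + comp
print(round(pi_phase_compensator(channel), 3))  # → -1.0
```

At steady state the integral term alone carries the correction, which is why the loop can hold alignment against a constant offset with zero residual error; the real system additionally has to cope with phase noise, measurement delay, and multiple beams sharing one reference.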

COMMUTING SATISFACTION AND SUBJECTIVE WELL-BEING: Linking life domains, workplace relocation and working from home practices
Maheshwari, Richa UL

Doctoral thesis (2023)


This dissertation examines the relationship between commuting satisfaction (CS) and subjective well-being (SWB) and investigates the dynamics of commuting. For the analysis, secondary data (EU-SILC, P-SELL III) were combined with self-collected data from an online survey about changes in workplace location and working conditions. The combination of these datasets allows the exploration of three important aspects of the relationship between CS and SWB. First, the direct and indirect effects of CS on SWB are examined by considering the interplay with satisfaction with life domains other than commuting, including, among others, work, accommodation, time use, leisure time, personal relationships, and health. This is an important contribution to the field of travel satisfaction because it provides an in-depth analysis of how SWB depends not only on satisfaction with a typical commute to work, but also on satisfaction with other activities that are linked to commuting. Previous studies have examined the relationship between commuting satisfaction and SWB but have largely ignored satisfaction with other life domains. This is rather surprising given that commuting depends to a large extent on decisions people make regarding other life domains, such as where to live and work. This dissertation thus provides a broader conceptualization of commuting satisfaction, avoiding certain biases that otherwise might exist when interactions with satisfaction with other life domains are ignored. Second, it explores the dynamics of commuting by analyzing the impact of life events on commuting (dis)satisfaction, and the reverse. This temporal dimension of CS adds a dynamic layer to the current static interpretation of travel satisfaction by examining changes in individuals' longer-term life decisions, such as residence and/or workplace location, focusing on voluntary and involuntary relocation.
Voluntary workplace relocation occurs when the employee willingly decides to change jobs, while involuntary relocation occurs when the employee is forced to move with their employer in order to retain their job. This distinction in terms of workplace relocation thus provides a first empirical analysis of the dynamics of CS. Third, it allows us to examine the extent to which the relationships between CS, satisfaction with other life domains, and SWB are still applicable today, in post-pandemic times in which working from home has become more important than ever. This is an important contribution to the field of travel satisfaction as it provides first-hand insights into how the relationship between CS and SWB differs in post-pandemic times. The main findings from this consolidated work on travel satisfaction, particularly commuting satisfaction, are manifold. First, commuting is not a stand-alone life domain, but is connected to all other life domains, especially time-use satisfaction. Therefore, it is recommended that future studies invest more in time-use research to understand the complexity and interplay between CS and SWB. Second, individuals who are dissatisfied with their commute do not necessarily have the financial resources and stability to change either residence or workplace to cope with dissatisfying commute patterns. Individuals who tolerate commuting dissatisfaction might simultaneously experience a negative impact on their time-use satisfaction due to the time poverty that arises from commuting longer distances or for longer times, which comes at the expense of satisfaction with leisure time or personal relationships. Future research should therefore address the question of whether people make changes in their lives, for example by changing workplace location or residence, or whether they tolerate dissatisfaction with commuting, which in turn could affect their satisfaction with other life domains and SWB.
This will help practitioners and policy makers in formulating the necessary transport and planning policies to accommodate these dissatisfied commuters. Fourth, people seem to be more satisfied with their commute after a voluntary workplace relocation than those who changed workplaces involuntarily. However, the question of how lasting this effect of a workplace relocation on CS is, and whether CS changes over time as people become accustomed to the changed environment (treadmill effect), remains unanswered. Future research to understand the dynamics of commuting is therefore needed, using a rigorous panel design. Fifth, a workplace relocation could also lead to residential mobility. This is often noted in previous studies and somewhat addressed in this dissertation, but is not fully explored in the travel satisfaction literature. Therefore, further research is needed on the co-occurrence of life events and their impact on CS, i.e. how a workplace relocation triggers residential mobility and how lasting its impact on CS is. This can be achieved using a life-course approach to gain a better understanding of the life choices individuals make in terms of changes in their travel behavior and satisfaction, to enable better evaluation of transport and land use policies. Finally, hybrid workers (who work from home two to three days per week) seem to have higher levels of SWB compared to occasional teleworkers (who work from home less than one day per week). This implies that the well-documented relationship between CS and SWB needs to be re-examined, as commuting has been limited for some people due to the outbreak of the COVID-19 pandemic as they have shifted to working from home.
Future research is therefore needed to identify whether commuting actually lengthens occasional teleworkers' total workday and reduces the time they could have spent on other non-travel activities, and whether the time hybrid workers save by not commuting to work every day influences the time they spend on other non-travel activities such as household chores, childcare, and sleep. Such an in-depth analysis of the interplay between CS, SWB, and satisfaction with non-travel-related life domains is indeed needed to determine not only in which areas employees' well-being can be improved, but also how. On a final note, although commuting has a significant impact on individuals' SWB, it is not necessarily the most important life domain. Previous studies have shown that commuting is a stressful activity and has a direct negative impact on individual SWB; however, the results of this dissertation did not find a negative relationship between CS and SWB. In contrast to previous findings, we conclude that satisfaction with time use has the strongest total effect on SWB, regardless of how often individuals commute to work. This might suggest that individuals can maximize their utility, and thus their overall SWB, as long as they are free to optimize their time. As for the prospective approach to CS, we know that dissatisfaction with commuting triggers life events, such as (but not limited to) changing workplace or residence. However, for the majority of dissatisfied individuals who are unable to make a change, the question of how this dissatisfaction spills over onto satisfaction with non-travel-related life domains, due to the time poverty that results from commuting longer distances, requires further investigation.
As for the dynamics, although workers who voluntarily changed their workplace have higher CS than those who changed on an involuntary basis, the question of how lasting this is, and whether CS changes over time as people become accustomed to the changed environment (treadmill effect), is a topic for future research.

Another Wild West Web for Critical Information Systems Research: A Sceptical-Empirical Approach to the Ethereum Mainnet
Smethurst, Reilly UL

Doctoral thesis (2023)


The early twenty-first century is marked by the 2007 Global Financial Crisis and the 2013 Snowden revelations about online surveillance. This period cursed many, yet it smiled upon developers of financial technologies and blockchain networks. Led by Bitcoin in 2009 and Ethereum in 2015, blockchain networks are treated as potential panaceas for a range of societal ills. For the problem of crisis-riven financial institutions, blockchain developers propose Decentralised Finance. For the problem of online surveillance, they propose Self-Sovereign Identity. In response to Big Tech companies’ exploitation of content creators, they propose NFTs. In response to everyday mundanity and the limits of the physical world, they propose avatar-based role-play and simulated environments – the metaverse. Meanwhile, critics deride blockchain solutions as potentially worse than the status quo – a passage from the World Wide Web, dominated by Big Tech companies, to a new Wild West Web of pseudonymity, hyper-volatility, and “degens” (degenerates). Critical Information Systems researchers are spoilt for choice. This cumulative thesis consists of a dissertation plus six publications. The dissertation conceives the Ethereum Mainnet as an actor-network rather than a cause of empowerment and emancipation. The six publications use sceptical-empirical methods to investigate Ethereum’s close ties with Decentralised Finance, Self-Sovereign Identity, the OpenSea NFT marketplace, and the metaverse. A prescriptive or normative dimension – a moral Cause – is absent from the six publications. The dissertation defends this absence, and it encourages critical Information Systems researchers to set aside ideologies that posit Ethereum as a Cause of individual empowerment or world improvement. Critical researchers should instead follow the network’s transactions and powerful actors.

Generation and phenotyping of NAD(P)HX repair deficient zebrafish and iPSC-derived microglia-like cells
Patraskaki, Myrto UL

Doctoral thesis (2023)


PEBEL (progressive early onset encephalopathy with brain edema and/or leukoencephalopathy) disorder is a severe infantile neurometabolic disorder that falls within the broader spectrum of rare inherited diseases known as Inborn Errors of Metabolism. The underlying cause of PEBEL is a genetic deficiency in the NAD(P)HX metabolite repair system. This system is responsible for correcting hydration damage of the central cofactors NADH and NADPH, which leads to the formation of non-canonical and inactive metabolites called NAD(P)HX. The NAD(P)HX repair system comprises two key enzymes: S-NAD(P)HX dehydratase, also referred to as NAXD, and NAD(P)HX epimerase, known as NAXE. Mutations in either NAXD or NAXE result in the onset of PEBEL during the early years of life, and in most documented cases it has so far led to premature death. The appearance of disease symptoms is typically triggered by febrile incidents and/or viral or bacterial infections. In a few recently published trials, the treatment of PEBEL patients with high doses of vitamin B3, a precursor of NAD+, was very efficient in treating the severe skin lesions and exerted a stabilizing effect on the neurological condition of patients. However, the exact molecular mechanisms underlying the disorder remain unclear. It remains to be established whether the depletion of normal cofactors, the accumulation of non-canonical metabolites, a combination of both factors, or even additional perturbations play the most critical roles in disease development. Similarly, the reason for the immune-related, trigger-dependent onset of the disease symptoms remains entirely unclear. To shed light on these aspects, in this study we aimed to establish zebrafish models of NAXD and NAXE deficiency using CRISPR/Cas9 technology. Through gross and molecular phenotyping of these zebrafish models, we observed that naxe mutants exhibit a mild immune deficiency that does not apparently impact their behavior or survival.
On the other hand, naxd mutants display a severe phenotype, with readily detectable differences in locomotion behavior and a significant decrease in survival rate. To complement the zebrafish model work and to study specifically the impact of NAXD deficiency on the immune system, we used patient-derived and genetically modified induced pluripotent stem cells (iPSCs) to generate microglia-like cells (iMGLs) lacking functional NAXD. Patient-derived microglia-like cells with NAXD deficiency demonstrated the main hallmarks of this cell type, showing that the methodology can be used to generate a suitable disease model. NADHX accumulation, the main molecular characteristic of NAXD disease, could also be shown to occur in the NAXD-deficient iMGLs. Significant differences between NAXD mutant and isogenic control iMGLs could be measured for several microglia-specific functions such as phagocytic activity, ramification morphology, and expression of inflammation-responsive genes. For some of these parameters, conflicting results were obtained in NAXD patient-derived and knockout iMGLs, which calls for repeating this type of phenotyping in a larger collection of cell lines with different genetic backgrounds to identify the traits specifically affected by mutations in the NAXD gene. The original findings in this thesis provide initial insights into the effect of NAXD and NAXE deficiency on developmental processes and their connection with the immune system in the context of PEBEL disorder. Longitudinal studies in our cell and whole-organism models, guided by our findings here, will be required to understand the causal chain of events leading from a failure to repair damaged NAD(P)H to impaired immune function and an eventually fatal neurological decline.

Human Capital Mobility, Firm Strategy, and Innovation
Samson, Ruth Ngilisho UL

Doctoral thesis (2023)


Human capital is at a pivotal position in the knowledge creation and diffusion paradigm; it is the fount of knowledge while being critical to firms’ absorption and amplification of knowledge. A firm’s sustained competitive advantage calls for organizational knowledge, which entails the integration of individual-specific knowledge embedded in employees. Yet, human beings possess attributes such as mobility and intrinsic motivation, whose implications conflict with the firm’s strategic objectives (e.g., sustaining a competitive advantage) and challenge the firm’s decision to invest in R&D. Further, given the dynamic nature of the technology market driven by constant innovation and competition, policymakers have an ongoing dialogue on how to establish sustainable innovation policies that protect innovators’ interests without risking society’s welfare. As a result, it is of interest to managers, policymakers, and researchers to further understand the human capital position in the creation and diffusion of knowledge, as well as the strategic and policy implications. This thesis contributes by exploring the reciprocal interrelations between human capital mobility, firm strategy, and innovation. Chapter 1 distinguishes voluntary from involuntary (forced) mobility and investigates the effect of forced mobility on the skilled workforce’s innovation performance from the perspectives of the source and recipient firms. We exploit the sudden fall of Nortel Networks as a natural experiment in a difference-in-differences event study. We construct a unique, comprehensive dataset by matching patent data from the United States Patent and Trademark Office and online CV information from LinkedIn. We follow inventors’ careers with various employers between 1985 and 2010 and identify voluntary and involuntary (forced) movers while observing innovation output measured by the annual number of patent applications.
Our analysis shows that forced mobility distinctly affects inventors’ innovation performance when analyzed from the perspectives of the source and destination firms. Relative to voluntary mobility, the effect is insignificant, though negative, from the destination firm’s perspective; relative to no mobility, the effect is negative, though insignificant, from the destination firm’s perspective. These findings contribute to the literature on skilled workforce mobility as a driver of knowledge creation and diffusion, particularly forced mobility, which has so far received very little attention. We also provide insights to policymakers and managers in the knowledge-intensive industry. Chapter 2 investigates human capital mobility behavior when unconstrained. How is the inter-firm mobility behavior of the skilled workforce altered when mobility constraints are lessened or are in favor of employees? Considering the sensitivity of knowledge sharing in the technology-intensive industry – driven by competition, constant innovation, and human capital capabilities – the mobility behavior of the skilled workforce when unconstrained is not explicit. We use changes in trade secrets protection regimes to identify mobility constraints and investigate the mobility behavior of the skilled workforce when these constraints are lessened. In a difference-in-differences setting, we find that establishing an employee-friendly trade secrets protection environment – lessening mobility constraints – reduces the likelihood of skilled workforce mobility. Further, we show that the negative effect is conveyed through the firm-specific innovation-related skills acquired by the skilled workforce. These findings align with our postulation of the skilled workforce’s increased intrinsic motivation to perform in response to the reinforced possibilities of applying the acquired technological know-how elsewhere.
We illustrate the managerial dilemma of imperfect intellectual property protection and contribute to the research on intrinsic motivation as a catalyst for the observed mobility behavior and innovation output of employees working in R&D. Chapter 3 analyses firm strategy regarding R&D search and patenting while taking into account the dynamic shaping of the technology landscape by patent protection. Scholars have considerably explored the implications of patent protection and strategic patenting for innovation and R&D investment. Yet, we still lack a nuanced understanding of how firms should search the fragmented technology landscape, considering the various motives to patent and the dynamic shaping of the technology landscape by patent protection. Offensively? Defensively? Or adaptively, based on payoff? We model innovation as a process of building technological bridges, where new technological discoveries combine elements of new and precedent technologies. Simulation results of our model show that when patent protection shapes the technology landscape dynamically, it is advantageous for firms to search around “own” technologies globally in a complex product industry. In contrast, in a discrete product industry, it is advantageous for firms to search locally based on the payoff and globally either around “own” technologies or based on the payoff. We show that the search distance, the appropriability regime, and the adaptiveness of the payoff-based search influence the firms’ ultimate performance. Our study contributes to the existing research on organizational search and provides a distinct approach to modeling the structure and evolution of the technology landscape using a network representation. We also provide managers and policymakers with insights regarding strategic R&D investment, patenting, and the implications for consumer and innovation welfare.
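The difference-in-differences logic used in Chapters 1 and 2 reduces to a simple comparison of changes. The sketch below uses invented toy patent counts, not the thesis's data, and omits the fixed effects and controls a real panel regression would include:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD: change in the treated group's mean outcome minus the
    change in the control group's mean outcome over the same period."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Toy annual patent counts: forced movers (treated) vs voluntary movers (control),
# before and after an event such as a firm's collapse.
forced_pre, forced_post = [3, 2, 4], [1, 2, 0]
voluntary_pre, voluntary_post = [3, 3, 3], [3, 2, 4]
print(did_estimate(forced_pre, forced_post, voluntary_pre, voluntary_post))  # → -2.0
```

The control group's change serves as the counterfactual trend, so the estimate isolates the effect attributable to the treatment (here, forced mobility) under the parallel-trends assumption.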

Deniability, Plaintext-Awareness, and Non-Malleability in the Quantum and Post-Quantum Setting
van Wier, Jeroen UL

Doctoral thesis (2023)


Secure communication plays an important role in our everyday life, from the messages we send our friends to online access to our banking. In fact, we can hardly imagine a world without it. With quantum computers on the rise, it is critical for us to consider what security might look like in the future. Can we rely on the principles we use today? Or should we adapt them? This thesis asks exactly those questions. We will look at both the quantum setting, where we consider communication between quantum computers, and the post-quantum setting, where we consider communication between classical computers in the presence of adversaries with quantum computers. In this thesis, we will consider security questions centred around misleading others, by considering to what extent the exchange of secrets can be denied, misconstructed, or modified. We do this by exploring three security principles. Firstly, we consider deniability for quantum key exchange, which describes the ability to generate secure keys without leaving evidence. As quantum key exchange can be performed without a fully-fledged quantum computer, using basic quantum-capable machines, this concept is already close to becoming a reality. We explore the setting of public-key authenticated quantum key exchange, and define a simulation-based notion of deniability. We show how this notion can be achieved through an adapted form of BB84, using post-quantum secure strong designated-verifier signature schemes. Secondly, we consider plaintext-awareness, which addresses the security of a scheme by looking at the ability of an adversary to generate ciphertexts without knowing the plaintext. Here, two settings are considered. Firstly, the post-quantum setting, in which we formalize three different plaintext-awareness notions in the superposition access model, show their achievability and the relations between them, as well as the settings in which they can imply ciphertext indistinguishability.
Next, the quantum setting, in which we adapt the same three plaintext-awareness notions to a setting where quantum computers communicate with each other, and again show achievability and relations with ciphertext indistinguishability. Lastly, we consider non-malleability, which protects a message from attacks that alter the underlying plaintext. Overcoming the notorious “recording barrier” known from generalizing other integrity-like security notions to quantum encryption, we generalize one of the equivalent classical definitions, comparison-based non-malleability, to the quantum setting and show how this new definition can be fulfilled. We also show its equivalence to the classical definition when restricted to a post-quantum setting.

Transitional states in NRAS-mutated melanoma preceding drug resistance
Randic, Tijana

Doctoral thesis (2023)


Patients diagnosed with advanced-stage melanoma carrying NRAS-activating mutations face a dismal prognosis, as reflected by their short progression-free survival. While targeted therapy involving MEK1/2 and CDK4/6 co-inhibition has shown partial effectiveness in NRAS-mutant melanoma patients and has advanced to clinical trials, the molecular processes underlying acquired resistance to these drugs remain largely unknown. A triple therapeutic regimen involving inhibition of the MAPK, CDK4/6, and PI3K-Akt-mTOR signalling pathways has shown promising results in impairing NRAS-mutant melanoma progression. However, this treatment is accompanied by toxicity, highlighting the importance of identifying alternative strategies to improve efficacy and overcome resistance to MEK and CDK4/6 inhibitors. Thus, the work performed in the framework of this doctoral thesis aimed to depict the cell state transitions that propel NRAS-mutant melanoma cells to become drug-resistant and to reconstruct the cell state transcriptional trajectories associated with this phenomenon. First, we monitored cell growth and proliferation to determine the sensitivity of NRAS-mutant melanoma cell lines to MEK1/2 and CDK4/6 co-inhibition. Two main cellular responses were observed, depending on the dynamics of the cells under drug exposure: fast or slow drug adaptation, and progression along the trajectory of resistance development. Next, we performed time-series single-cell RNA sequencing of distinct NRAS-mutant melanoma cell lines over prolonged MEK/CDK4/6 co-targeting. The development of single-cell technologies has facilitated a deeper understanding of the heterogeneous transcriptional background and cell state plasticity observed in BRAF-mutant melanoma and, to a lesser extent, in NRAS-mutant melanoma. To the best of our knowledge, this study represents the first endeavour to address the response and resistance to MEK/CDK4/6 co-inhibition at the level of single NRAS-mutant melanoma cells.
This approach enabled us to characterise cell populations that were sensitive to targeted therapy, as well as those that were intrinsically resistant or developed resistance over time. Upon early drug exposure, we detected slow-proliferating melanoma cells enriched in genes related to transmembrane transport, epithelial-mesenchymal transition (EMT), and adhesion, among others. Once melanoma cells resist inhibitory stress and resume proliferation, they transition towards a state that is highly enriched in an interferon response gene signature. We then focused on supporting the inhibitory potential of drugs in these transitional states. Activation of the transmembrane transport marker purinergic receptor P2RX7 improved the tumouricidal effects of dual MEK/CDK4/6 inhibition. Overall, this study gives a snapshot of melanoma transitional mechanisms towards resistance to targeted drugs and contributes to the establishment of novel treatment approaches for NRAS-mutant tumours.

Interfaces between national and EU law: time limits in cross-border civil proceedings and their impact on the free circulation of judgments
Chiapponi, Giovanni

Doctoral thesis (2023)


The thesis aims at exploring possible legal solutions to remove the obstacles to the free circulation of judgments in the civil justice area that arise from the remarkably diverging national rules on procedural time limits. As shown by the case-law of the CJEU, time limits have recently come under closer scrutiny. The interplay between national and EU law illustrates that time limits raise significant deficiencies connected with the right to a fair trial under Art. 6 ECHR and Art. 47 CFR – e.g. the effective recovery of claims, effective judicial protection, and the effective cross-border enforcement of judgments – which negatively impact EU cross-border civil litigation. In order to overcome some of the weaknesses of the current legal framework governing the cross-border enforcement of judgments and to strengthen the parties’ fundamental procedural rights, the thesis intends to determine whether, and to what extent, time limits can be harmonised at EU level. EU action on time limits would indeed favour the speed, efficiency and proportionality of cross-border proceedings without sacrificing the fairness of the judicial process and the equality of the parties.

LUNAR REMOTE SENSING DATA ENHANCEMENT FOR PRECISE ROBOTIC MISSION PLANNING
Delgado Centeno, José Ignacio

Doctoral thesis (2023)


In recent times, there has been a resurgence of interest in not only revisiting the Moon but also establishing a lasting and sustainable human presence there. Numerous agencies and private corporations, gearing up for future lunar missions, have established key goals involving both scientific exploration and the emerging lunar economy. These objectives include carrying out scientific experiments, harvesting and utilizing lunar resources on-site, and examining potential methods of using the Moon as an efficient launchpad to go further into the Solar System. In facilitating these lunar missions, robotics emerges as the main disruptive technology. It can allow the execution of various mission-critical activities in harsh environments, ensuring mission success without jeopardizing human safety. This thesis addresses the challenge presented by the limited resolution of lunar data and the overwhelming amount of non-processed information provided by the different remote sensing missions. Robotic operations on the Moon require tremendous precision in the mission planning phase. For this purpose, the remote sensing data collected by various satellites must be processed and relayed to the mission planning teams in the highest possible resolution and in an easily understandable and digestible format. To address this issue, the research presented in this thesis adopts a Machine Learning (ML) approach to enhance and process lunar data gathered by different satellite sensors, providing more detailed and comprehensive insights into the lunar surface, which is crucial for future robotic mission planning. ML provides the necessary tools to efficiently handle and analyze vast volumes of data, a critical aspect in deriving meaningful results, reaching valid conclusions, and deepening our understanding of the subject under study.
In this thesis, the author suggests two distinct methods for enhancing and increasing the resolution of lunar images to facilitate improved robot navigation on the lunar surface. The first method involves creating a training dataset using a digital analog environment and utilizing multiple frames of the same location for image enhancement. The second approach proposes a unique architecture that utilizes a single capture from the lunar surface for resolution upscaling, accompanied by an uncertainty estimation of the process. Lastly, a thermophysical analysis of the lunar surface is conducted, which involves processing lunar thermal data in the search for recent asteroid impacts.

INTEGRATION AND INTERFACIAL ENGINEERING OF STRAIN SENSORS IN ADDITIVELY MANUFACTURED POLYMERIC STRUCTURES
Mashayekhi, Fatemeh

Doctoral thesis (2023)


Additive Manufacturing (AM), more popularly referred to as 3D printing, provides a novel solution to create multifunctional designs and parts with geometries and properties that were impossible to manufacture by traditional processes. Polymer fused filament fabrication (FFF) technology, as a process category of AM, is currently the most popular on the market. The structure of 3D-printed neat thermoplastic polymers (TPs) and continuous fiber-reinforced thermoplastic composites (CFRTPCs) manufactured by this technology bears interfaces at different scales, which are the cause of non-optimal mechanical properties. Although some work has been carried out to optimize the interfacial bonding and thus the structure, further improvement is still required. AM also opens up the possibility of embedding functional elements such as fiber Bragg grating (FBG) sensors to carry out structural health monitoring (SHM). The technology landscape of CFRTPCs created by FFF is still at an early stage of development. Furthermore, only limited work has been reported on interfacial engineering between the polymeric matrix and the embedded FBG sensor's polymer jacket for health monitoring. This dissertation studied the influence of the interfacial bonding quality on the strain transfer between the 3D-printed polylactic acid (PLA) matrix and the polyimide (PI) jacketed FBG strain sensor, compared against a reference strain measurement obtained by the digital image correlation (DIC) method. The bonding quality between the PLA matrix and the PI jacket was discussed based on the most tangible parameters, namely the intrinsic adhesion and the porosity. Developing bonding methodologies for improving the interfacial adhesion was therefore the first priority, and these were tested on PI films and PLA plates as model materials.
These bonding methodologies made use of cleaning, plasma activation, roughness modification and/or the use of a polydopamine (PDA) nanocoating, cyanoacrylate (CAC) glue and dichloromethane (DCM). Solvent welding with DCM and adhesive bonding with CAC and PDA, the bonding methodologies providing the highest adhesion, were then implemented for the integration of a PI-jacketed FBG strain sensor in a PLA FFF-printed structure. The main results revealed that interfacial adhesion between the embedded FBG sensor jacket and the 3D-printed matrix was necessary but not sufficient for good interfacial bonding, since the pore volume in the interfacial region also contributed. Indeed, the interfacial porosity fraction, which must be minimized when embedding the sensor, was primarily responsible for the accuracy with which the sensor measured the actual strain in the FFF-printed structure. Considering the percentage of strain in the 3D-printed specimen transferred to the embedded FBG strain sensor, the deposition of a PDA nanocoating onto the sensor or the addition of a CAC adhesive at the PLA-PI interface, with no channel employed, were identified as the best methods to embed an FBG strain sensor into 3D-printed PLA. The use of CAC adhesive was regarded as the most reliable method, based on its lower standard deviation for strain transfer when compared to PDA. The PDA treatment of the sensor, however, was conducted prior to printing using a water-based solution, and thus did not affect the practical aspects of the process. Nonetheless, when selecting a methodology, the practical aspects linked to sensor embedding in 3D-printed polymer components can also be considered, based on the specific needs and production type. For instance, the proper introduction of a sensor into non-linear paths in complex geometries can be facilitated by a channel. In that case, the combination of a channel and a CAC adhesive could be considered the best fit.
The original findings presented in this thesis explain the link between the matrix-sensor effective strain transfer and the interfacial properties, and demonstrate that the reliability of an embedded FBG sensor's measurement depends on its good bonding with the surrounding matrix. The results shed light on the importance of interfacial engineering in FFF-built structures with FBG sensors, a fundamental step in this technology towards the development of more efficient smart polymer structures.

Resonant Raman scattering and other new coupling phenomena in ferroelastic BiVO4
Hill, Christina

Doctoral thesis (2023)


With the introduction of ferroic oxides into the research field of photovoltaic and photocatalytic applications, increasing interest in understanding the coupling mechanisms of light with the electronic bands in this class of materials has developed over the last years. In many of these examples, reliable band structure calculations are not yet available, mostly because of the complexity of the crystal structure, which demands extensive experimental evidence to verify even basic characteristics of the structure. Classical experimental techniques based on optical absorption also seem to reach their limits here, often resulting in controversial publications on the nature of the band gap. The goal of this work is to exploit resonant Raman scattering to reveal information on the electronic band structure and thereby create a better understanding of the light-matter coupling mechanisms in ferroic oxides. As a model material, we chose to work on ferroelastic bismuth vanadate, which is known for its second-order phase transition from the high-symmetry tetragonal to the low-symmetry monoclinic phase. On the way to understanding resonant Raman scattering in bismuth vanadate, i.e. the coupling of lattice vibrations and electronic transitions, we extensively studied the phonons and optical properties and discovered new coupling phenomena of various kinds. We found a phonon-phonon coupling in the lattice vibrations of the vanadium-oxygen tetrahedron that is strongly temperature- and polarization-dependent. The coupling strength is of opposite sign depending on the polarization conditions and diminishes at the structural phase transition, at which one of the phonon modes changes its symmetry. Additionally, the transmission data evidence the coupling between the spontaneous shear strain and the electronic structure that primarily defines the strong temperature dependence of the optical absorption in monoclinic bismuth vanadate.
Further, we report on resonant Raman scattering effects that allow us to study the coupling between phonons and electrons. We succeeded in measuring multiple resonant Raman bands corresponding not only to the band gap at 2.4 eV but also to a polaronic defect level at 2.0 eV. The coupling strength to the electronic states in the conduction band and in the defect level varies between the Raman modes corresponding to different lattice vibrations. By varying the light polarization in the experiment, we could access another band-to-band transition that lies 120 meV above the band gap energy. This energy difference matches the optical anisotropy quantified by transmission measurements. With this work we demonstrate that resonant Raman spectroscopy is a very powerful tool to study not only structural but also electronic properties in ferroic oxides. It has the potential to probe the electronic band structure more deeply than classical techniques such as UV/Vis spectroscopy or ellipsometry.

Test Flakiness Prediction Techniques for Evolving Software Systems
Haben, Guillaume

Doctoral thesis (2023)

(DE-) SCAFFOLDING AND THE EVOLUTION OF AUTONOMY IN THE TEACHING AND LEARNING PROCESS - AN INTERGENERATIONAL CASE STUDY
Geberth, Lisa-Marie

Doctoral thesis (2023)


Psychology has produced many theories that do not concern themselves (enough) with the fact that human beings are constantly moving. This thesis takes psychological theories and empirical data gathering onto the slopes of the Austrian Alps. Alpine downhill skiing is viewed from different psychological perspectives: teaching and learning with the notion of scaffolding, moving in a special environment, family dynamics using Dialogical Self Theory, and resilience. Empirical data was collected in a longitudinal single case study over the span of four years. A combination of interviews, fill-in-the-blanks texts, and videos was used to grasp the complex reality of going down the slope. Action cameras are introduced as a practical way to gather unique data from the participants' point of view without the disturbance of the researcher's presence. The Double Direction Theme Completion (DDTC) method was used for the fill-in-the-blanks texts to provoke and reveal the ambivalence parents experience when teaching their children how to ski. The interviews were held using the narrative interview method to obtain long passages of storytelling. This cumulative thesis consists of three peer-reviewed papers and one book chapter. The project showed a way to bring more movement into psychological theories and how to gather empirical data in moving research settings. Outdoor settings have proven to provide interesting new data to analyse using already existing theories such as scaffolding, dialogical self theory (DST), or resilience. The use of DST was expanded from mainly therapy settings and classrooms to an action setting. The unique conditions of demanding environments like the slope challenge I-Positions to change, which can be examined using the DDTC method. The notion of descaffolding is introduced in this thesis. Descaffolding describes the process of withdrawing support in a learning setting and setting the stage for autonomous mastery.
The descaffolding process is described in detail and further applications are discussed.

Aiming towards more equal opportunities? Analyzing the implementation of the plurilingual education policy in the non-formal early childhood education and care sector in Luxembourg
Simoes Loureiro, Kevin

Doctoral thesis (2023)


The plurilingual education policy in Luxembourg stands out as one of the pioneering multilingual policies in early childhood education, going beyond the bilingual approach implemented in other countries. Given that investment in early childhood education and care aims to tackle social and origin-related inequalities, and that educational inequalities in Luxembourg are frequently attributed to multilingualism, promoting children's language development is at the centre of early childhood education. This interdisciplinary research project fills a significant research gap in the social science fields of policy implementation and early childhood by focusing on the aspect of implementation in relation to educational inequalities. The study answers the following research questions: (I) how is the plurilingual education policy implemented in the Luxembourgish non-formal early childhood education and care sector, and (II) how salient is the aspect of reducing educational inequalities? The theoretical lenses employed encompass concepts relevant to both aspects of the study: on the one hand, (I) coupling theory, the theory of planned behavior, and the theory of habitus and field; on the other hand, (II) the reproduction of educational inequality, linguistic capital theory, and the theory of social and ethnic origin. Using a mixed-methods design that examines both the policy and practice levels provides a holistic perspective on the implementation of the plurilingual education policy in Luxembourg. Methods include a policy document analysis of the plurilingual education program, expert interviews with policy-level stakeholders, and a cross-sectional survey of early childhood practitioners in the non-formal education sector. The main findings of this research shed light on sectorial differences at the policy and practice levels, leading to ambiguities in policy goals and implementation measures, as well as challenges encountered during policy implementation.
The results indicate that the policy lacks an epistemological approach to the linguistic and organizational diversity within the Luxembourgish mixed economy of early childhood education and care. Concerns arise regarding the goal of promoting more equal opportunities, as the policy may, according to the findings, still provide greater benefits to Luxembourgish-speaking children than to other ethnic groups. The research thus highlights the importance of developing multilingual policies that consider the diverse linguistic and organizational contexts, in order to ensure more successful policy-in-practice implementation as well as more equal educational opportunities for all children.

Formal Verification of Verifiability in E-Voting Protocols
Baloglu, Sevdenur

Doctoral thesis (2023)


Election verifiability is one of the main security properties of e-voting protocols, referring to the ability of independent entities, such as voters or election observers, to validate the outcome of the voting process. It can be ensured by means of formal verification, which applies mathematical logic to verify the considered protocols under well-defined assumptions, specifications, and corruption scenarios. Automated tools allow an efficient and accurate way to perform formal verification, enabling comprehensive analysis of all execution scenarios and eliminating the human errors of manual verification. The existing formal verification frameworks that are suitable for automation are not general enough to cover a broad class of e-voting protocols: they do not cover revoting and cannot be tuned to the weaker or stronger levels of security that may be achievable in practice. We therefore propose a general formal framework that allows automated verification of verifiability in e-voting protocols. Our framework is easily applicable to many protocols and corruption scenarios. It also allows refined specifications of election procedures, for example accounting for revote policies. We apply our framework to the analysis of several real-world case studies, where we capture both known and new attacks, and provide new security guarantees. First, we consider Helios, a prominent web-based e-voting protocol, which aims to provide end-to-end verifiability. It is, however, vulnerable to ballot stuffing when the voting server is corrupt. Second, we consider Belenios, which builds upon Helios and aims to achieve stronger verifiability, preventing ballot stuffing by splitting the trust between a registrar and the server. Both of these systems have been used in many real-world elections. Our third case study is Selene, which aims to simplify the individual verification procedure for voters, providing them with trackers for verifying their votes in the clear at the end of the election.
Finally, we consider the Estonian e-voting protocol, which has been deployed for national elections since 2005. The protocol has continuously evolved to offer better verifiability guarantees but has had no formal analysis. We apply our framework to realistic models of all these protocols, deriving the first automated formal analysis in each case. As a result, we find several new attacks, improve the corresponding protocols to address their weaknesses, and prove that verifiability holds for the new versions.

What Matters in Model Training to Transfer Adversarial Examples
Gubri, Martin

Doctoral thesis (2023)


Despite state-of-the-art performance on natural data, Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples, i.e., imperceptible, carefully crafted perturbations of inputs applied at test time. Adversarial examples can transfer: an adversarial example against one model is likely to be adversarial against another independently trained model. This dissertation investigates the characteristics of the surrogate weight space that lead to the transferability of adversarial examples. Our research covers three complementary aspects of weight space exploration: multimodal exploration to obtain multiple models from different vicinities, local exploration to obtain multiple models in the same vicinity, and point selection to obtain a single transferable representation. First, from a probabilistic perspective, we argue that transferability is fundamentally related to uncertainty: the unknown weights of the target DNN can be treated as random variables. Under a specified threat model, a deep ensemble can produce a surrogate by sampling from the distribution of the target model. Unfortunately, deep ensembles are computationally expensive. We propose an efficient alternative that approximately samples surrogate models from the posterior distribution using cSGLD, a state-of-the-art Bayesian deep learning technique. Our extensive experiments show that our approach significantly improves and complements four attacks, three transferability techniques, and five more training methods on ImageNet, CIFAR-10, and MNIST (by up to 83.2 percentage points), while reducing training computations from 11.6 to 2.4 exaflops compared to a deep ensemble on ImageNet. Second, we propose transferability from Large Geometric Vicinity (LGV), a new technique based on the local exploration of the weight space. LGV starts from a pretrained model and collects multiple weights in a few additional training epochs with a constant and high learning rate.
LGV exploits two geometric properties that we relate to transferability. First, we show that LGV explores a flatter region of the weight space and generates flatter adversarial examples in the input space. We present the surrogate-target misalignment hypothesis to explain why flatness could increase transferability. Second, we show that the LGV weights span a dense weight subspace whose geometry is intrinsically connected to transferability. Through extensive experiments, we show that LGV alone outperforms all (combinations of) four established transferability techniques by 1.8 to 59.9 percentage points. Third, we investigate how to train a transferable representation, that is, a single model for transferability. We first refute a common hypothesis from previous research that explains why early stopping improves transferability. We then establish links between transferability and the exploration dynamics of the weight space, on which early stopping has an inherent effect. More precisely, we observe that transferability peaks when the learning rate decays, which is also the time at which the sharpness of the loss significantly drops. This leads us to propose RFN, a new approach to transferability that minimises the sharpness of the loss during training. We show that by searching for large flat neighbourhoods, RFN always improves over early stopping (by up to 47 points of success rate) and is competitive with (if not better than) strong state-of-the-art baselines. Overall, our three complementary techniques provide an extensive and practical method to obtain highly transferable adversarial examples from the multimodal and local exploration of flatter vicinities in the weight space. Our probabilistic and geometric approaches demonstrate that the way the surrogate model is trained has been overlooked, although both the training noise and the flatness of the loss landscape are important elements of transfer-based attacks.
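The LGV weight-collection idea described in this abstract can be sketched in a few lines of toy code. The snippet below runs plain SGD from a "pretrained" weight with a constant, deliberately high learning rate on a simple quadratic loss and records one weight snapshot per epoch; the snapshots then serve as an ensemble of surrogates. The loss function, learning rate, and snapshot schedule here are illustrative assumptions for a one-dimensional toy, not the thesis's actual experimental setup.

```python
import random

def grad(w, data):
    """Gradient of a toy quadratic loss 0.5*(w - x)^2 averaged over a minibatch."""
    return sum(w - x for x in data) / len(data)

def collect_lgv_snapshots(w0, data, epochs=10, lr=0.5, batch=4, seed=0):
    """Run SGD from a 'pretrained' weight w0 with a constant HIGH learning
    rate and record one weight snapshot per epoch (the LGV-style ensemble)."""
    rng = random.Random(seed)
    w, snapshots = w0, []
    for _ in range(epochs):
        for _ in range(5):  # a few minibatch steps per epoch
            minibatch = rng.sample(data, batch)
            w -= lr * grad(w, minibatch)
        snapshots.append(w)
    return snapshots

# Toy data centred at 3.0; start from a "pretrained" solution nearby.
data = [3.0 + 0.5 * random.Random(i).uniform(-1, 1) for i in range(32)]
snaps = collect_lgv_snapshots(w0=2.5, data=data)
print(len(snaps), min(snaps), max(snaps))
```

Because the learning rate stays high, the iterates never collapse to a single point: the snapshots keep hovering around the optimum, and it is this spread across the vicinity that, in LGV, supplies the ensemble of surrogate models for crafting transferable perturbations.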

CODE-CHANGE AWARE MUTATION BASED TESTING IN CONTINUOUSLY EVOLVING SYSTEMS
Ojdanic, Milos

Doctoral thesis (2023)


In modern software development practices, testing activities must be carried out frequently, preferably after each code change, to bring confidence in the anticipated system behaviour and, more importantly, to avoid introducing faults. When it comes to software testing, it is not only about what we are expecting; it is equally about what we are not expecting. Developers want to test, and assess the testing adequacy of, the delta of behaviours between stable and modified software versions. Many test adequacy criteria have been proposed through the years, yet very few have been designed for continuous development. Among all of those proposed, one has been empirically verified to be the most effective in finding faults and evaluating test adequacy: Mutation Testing has been widely studied, but in its traditional form it is impractical to keep up with the rapid pace of modern software development standards and code evolution due to the large number of test requirements, i.e., mutants. This dissertation proposes change-aware mutation testing, a novel approach that points to relevant change-aware test requirements, allows reasoning about the extent to which a code modification is tested, and captures behavioural relations between changed and unchanged code from which faults often arise. In particular, this dissertation builds contributions around challenges related to the behavioural properties of code mutants, the testing of regular code modifications, and the fault detection effectiveness of mutants. First, this dissertation examines the ability of mutants to capture the behaviour of regression faults and evaluates the relationship between the syntactic and semantic distance metrics often used to capture mutant-real fault similarity. Second, this dissertation proposes a commit-aware mutation testing approach that focuses on change-aware mutants, which bring significant value in capturing regression faults.
The approach shows 30% higher fault detection in comparison with baselines and sheds light on the suitability of commit-aware mutation testing in the context of evolving systems. Third, this dissertation proposes the use of higher-order mutations to identify change-impacted mutants, resulting in the most extensive dataset to date of commit-relevant mutants, which are studied thoroughly to provide an understanding of, and elicit the properties of, this particular novel category. These studies led to the discovery of long-standing mutants, demonstrated to be suitable for maintaining a high-quality test suite across a series of code releases. Fourth, this dissertation proposes learning-based mutant selection strategies for asking how effective the mutants of fundamentally different mutation-generation approaches are at finding faults. The outcomes raise awareness of the risk that the suitability of different kinds of mutants can be misinterpreted unless intelligent approaches are used to remove the noise of impractical mutants. Overall, this dissertation proposes a novel change-aware testing approach and provides insights for software testing gatekeepers towards more effective mutation testing in the context of continuously evolving systems.
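The core mechanics the dissertation builds on can be illustrated with a toy sketch (hypothetical code, not taken from the dissertation): a mutant is a small syntactic change to the program, and a test input "kills" the mutant if it exposes a behavioural difference. Change-aware approaches aim to steer testing toward exactly such behaviour-revealing inputs around a modification.

```python
# Hypothetical example: an original function, one mutant of it, and the
# notion of a test input "killing" the mutant.

def price_with_discount(total: float) -> float:
    """Original: orders of 100 or more get a 10% discount."""
    if total >= 100:                  # original condition
        return total * 0.9
    return total

def price_with_discount_mutant(total: float) -> float:
    """Mutant: relational-operator replacement, '>=' became '>'."""
    if total > 100:                   # mutated condition
        return total * 0.9
    return total

def kills(test_input: float) -> bool:
    """A test input kills the mutant iff the two versions disagree."""
    return price_with_discount(test_input) != price_with_discount_mutant(test_input)

# Only the boundary value reveals the mutation; traditional mutation testing
# generates many such mutants, most of them irrelevant to a given commit.
assert not kills(50)      # both return 50    -> mutant survives
assert not kills(150)     # both return 135.0 -> mutant survives
assert kills(100)         # 90.0 vs 100      -> mutant killed
```

A commit-aware selection would keep only the mutants whose behaviour interacts with the changed code, shrinking the set of test requirements to those worth satisfying.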

Development of the bioinformatics pipeline DREMflow for the identification of cell-type and time point specific transcriptional regulators
de Lange, Nikola Maria UL

Doctoral thesis (2023)

A detailed understanding of the mechanisms that drive the differentiation of stem cells into a desired cell type provides opportunities to study diseases and disease progression in patient-derived cells and enables the development of new therapeutic approaches. The main challenge in such directed differentiation is the identification of the essential transcriptional regulators that are specific to a cell type or lineage and the inference of the underlying gene regulatory network. Transcription factor activity during cell differentiation can be measured through gene expression and chromatin accessibility, ideally jointly over time. Integrated time-course regulatory analysis yields more detailed gene regulatory networks than expression data alone. Because of the large number of parameters and tools employed, computational workflows help to manage the inherent complexity of such analyses. This thesis describes the Dynamic Regulatory Events Miner Snakemake workflow (DREMflow), which combines temporally resolved RNA-seq and ATAC-seq data to identify cell-type- and time-point-specific gene regulatory networks. DREMflow builds on the Dynamic Regulatory Events Miner (DREM), the workflow management system Snakemake and the package manager Mamba. It covers processing from sequencing reads onwards, quality control reports and parameters, as well as additional downstream analyses for the inference of key transcription factors during differentiation. DREMflow is applied to multiple data sets obtained during the differentiation of midbrain dopaminergic neurons as well as blood cells, and is compared to TimeReg, a pipeline with similar aims. Its expansion to accommodate single-cell data is explored. Results from other studies were reproduced and extended, identifying additional key transcriptional regulators. LBX1 was found to be a key regulator in the differentiation of midbrain dopaminergic neurons while exploring different settings of the pipeline. 
Members of the AP-1 family of transcription factors were identified in all blood cell differentiation data sets. In the comparison with TimeReg, DREMflow proved more sensitive in the identification of known transcriptional regulators in macrophages. Computationally, DREMflow outperforms TimeReg as well. DREMflow enables users to perform time-resolved multi-omics analyses reproducibly with minimal setup and configuration.

ECONOMIC AND POLITICAL SUSTAINABILITY OF FREE MOBILITY POLICIES
Gaponiuk, Nikita UL

Doctoral thesis (2023)

Spatial adaptive settlement systems in archaeology. Modelling long-term settlement formation from spatial micro interactions
Sikk, Kaarel UL

Doctoral thesis (2023)

Despite a research history spanning more than a century, settlement patterns still hold the promise of contributing to theories of large-scale processes in human history. Mostly they have been presented as passive imprints of past human activities, and the spatial interactions they shape have not been studied as a driving force of historical processes. While archaeological knowledge has been used to construct geographical theories of settlement evolution, gaps remain in this knowledge, and no theoretical framework has yet been adopted to explore settlement patterns as spatial systems emerging from the micro-choices of small population units. The goal of this thesis is to propose a conceptual model of adaptive settlement systems based on the complex adaptive systems framework. The model frames settlement system formation as an adaptive system containing spatial features, information flows and decision-making population units (agents), forming cross-scale feedback loops between the location choices of individuals and the space modified by their aggregated choices. The aim of the model is to find new ways of interpreting archaeological locational data, as well as a closer theoretical integration of micro-level choices and meso-level settlement structures. The thesis is divided into five chapters. The first chapter is dedicated to conceptualising the general model based on existing literature and shows that settlement systems are inherently complex adaptive systems and therefore require the tools of complexity science for causal explanations. The following chapters explore both empirical and theoretical (simulated) settlement patterns, each dedicated to studying selected information flows and feedbacks in the context of the whole system. The second and third chapters explore the case study of Stone Age settlement in Estonia, comparing the residential location-choice principles of different periods. 
In chapter 2, the relation between environmental conditions and residential choice is explored statistically. The results confirm that the relation is significant but varies between different archaeological phenomena. In the third chapter, hunter-fisher-gatherer and early agrarian Corded Ware settlement systems are compared spatially using inductive models. The results indicate a large difference in their perception of the landscape's suitability for habitation, leading to the conclusion that early agrarian land use significantly extended land-use potential and provided a competitive spatial benefit. In addition to spatial differences, model performance is compared and the difference discussed in the context of the proposed adaptive settlement system model. The last two chapters present theoretical agent-based simulation experiments intended to study effects discussed in relation to environmental model performance and environmental determinism in general. In the fourth chapter, the central place foraging model is embedded in the proposed model, and resource depletion, as an environmental modification mechanism, is explored. The study excludes the possibility that mobility itself would lead to the modelling effects discussed in the previous chapter. The purpose of the last chapter is to disentangle the complex relations between social and human-environment interactions. The study exposes the non-linear spatial effects that expected population density can have on the system, and the general robustness of environmental inductive models in archaeology to randomness and social effects. The model indicates that social interactions between individuals lead to the formation of a group agency that is determined by the environment even if individual cognition considers the environment insignificant. It also indicates that the spatial configuration of the environment has a certain influence on population clustering, providing a potential pathway to population aggregation. 
These empirical and theoretical results show the new insights provided by the complex adaptive systems framework. Some of the results, including the explanation of the empirical findings, required the conceptual model to provide a framework of interpretation.
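The cross-scale feedback loop at the heart of the proposed model can be sketched minimally (a hypothetical illustration, not the dissertation's actual simulation): agents choose locations with probability proportional to environmental suitability, and each choice depletes the chosen cell, modifying the environment that later agents perceive.

```python
import random

# Hypothetical sketch of the feedback loop: location choice is driven by
# suitability, and occupation depletes the chosen cell for later agents.
random.seed(42)

GRID = 10                        # a 1-D strip of habitat cells, for simplicity
suitability = [1.0] * GRID       # the environment: initially uniform
DEPLETION = 0.2                  # how much one occupation reduces a cell

def choose_cell() -> int:
    """Residential choice: probability proportional to current suitability."""
    return random.choices(range(GRID), weights=suitability, k=1)[0]

history = []
for _ in range(50):              # 50 sequential residential choices
    cell = choose_cell()
    history.append(cell)
    # cross-scale feedback: aggregated choices modify the environment
    suitability[cell] = max(0.05, suitability[cell] - DEPLETION)

# Depletion spreads settlement out: no single cell absorbs every choice,
# even though individual agents only follow local suitability.
assert len(set(history)) > 1
assert min(suitability) >= 0.05
```

Replacing depletion with a positive term (occupation raising attractiveness) flips the same loop into an aggregation mechanism, which is the kind of parameter-level question the simulation chapters explore.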

The Quality of Democracy Embedded into Political Culture - A Comparative Study of Luxembourg, Hungary, and the United Kingdom
Darabos, Agnes UL

Doctoral thesis (2023)

The thesis focuses on three main topics and the relationships among them: the quality of democracy, democratic backsliding, and political culture. Its theoretical basis starts with the quality-of-democracy model of Larry Diamond and Leonardo Morlino. The theoretical purpose of the research is to reconceptualise the quality of democracy and embed it into political culture, in order to find answers to the democratic stagnation and backsliding experienced in European states today. To achieve this objective, the thesis offers a four-level model of political culture that provides a foundation for explaining the links between different aspects of political culture and factors of democratic quality. Additionally, the thesis proposes a two-fold hierarchical interpretation of the factors of the quality of democracy to define the deficiency needs of democratic functioning. The main hypotheses cover the connections among the three topics mentioned above. The first hypothesis focuses on the continuity and fragmentation of political culture as explanatory factors that play a crucial role in the thesis. The second hypothesis examines the ‘citizen dimension’ of democratic quality and its impact on democratic stability through factors such as citizens’ interest in the political dynamics of their own country, their perceptions of their democracy, and their confidence in democratic institutions. The third hypothesis investigates the potential reasons for, and levels of, democratic stagnation and backsliding in democracies, arguing that political culture and the weakening of the rule of law are the main influencing factors, as the latter serves as the basis and guarantee for the other factors of the quality of democracy. The empirical research is based on a comparative analysis of the quality of democracy in three European countries with fundamentally different democratic systems and traditions of democratic evolution: Luxembourg, the UK, and Hungary. 
The methodology is based on the most-different-cases-with-similar-results model.

Correspondent banking, SWIFT, and the geographies of financial infrastructure: Technological and organizational change in cross-border payments
Robinson, Gary UL

Doctoral thesis (2023)

This thesis examines the impacts of technological and organizational change on the geographies of finance via infrastructure for cross-border payments, employing a qualitative methodology of semi-structured expert interviews. The study finds that SWIFT's messaging system, together with the correspondent banking system (a decentralized global network of bilateral contracts between banks), remains a geographically and historically foundational sociotechnical infrastructure connecting international financial centres (IFCs). To stave off fintech challengers and preserve banks' incumbency, SWIFT's system is platformizing, with the aim of shifting banks' business models from fee extraction towards the economic use of transaction data. Collaborative action in bringing about change across a global network is a key form of finance-industry agency for maintaining its collective dominance. SWIFT's cooperative organizational form is a significant locus for this agency, engendering trust as a relational aspect of power to resolve tensions among actors and processes across scales. Specialized infrastructure is thus instrumental in how the geographies of finance are (re)shaped.

Le statut des parlementaires. Etude comparative : France, Suisse, Parlement européen.
Ouhida, Morgan Patrick Michel UL

Doctoral thesis (2023)

The status of members of parliament is undergoing profound change and forms part of broader reflections on the crisis of representative democracy (crisis of confidence, deficit of representativeness, professionalisation of the mandate). The status combines traditional elements (immunities, allowances, procedural rights and obligations) with new elements addressing the current challenges of representation, such as deontological rules or provisions facilitating representativeness. This research examines whether the status of members of parliament is indispensable within representative democracies, notably in France, in Switzerland and in the European Parliament.

Analysis of Smartcard-based Payment Protocols in the Applied Pi-calculus using Quasi-Open Bisimilarity
Yurkov, Semen UL

Doctoral thesis (2023)

Cryptographic protocols are instructions explaining how the communication between agents should be done. Critical infrastructure sectors, such as communication networks, financial services, information technology and transportation, use security protocols at their very core to establish the information exchange between the components of the system. Symbolic verification is a discipline that investigates whether a given protocol satisfies its initial requirements and delivers exactly what it intends to deliver. An immediate goal of symbolic verification is to improve the reliability of existing systems: if a protocol is vulnerable, action must be taken as soon as possible, before a malicious attacker exploits it. A far-reaching goal is to improve system design practices: a new protocol should be proven correct before it is implemented. Properties of cryptographic protocols roughly fall into two categories: reachability-based, i.e. that a system can or cannot reach a state satisfying some condition, and equivalence-based, i.e. that a system is indistinguishable from an idealised version of itself in which the desired property trivially holds. Security properties are often formulated as reachability problems and privacy properties as equivalence problems. While the study of security properties is relatively settled, and powerful tools like Tamarin and ProVerif exist in which reachability queries can be checked, the study of privacy properties expressed as equivalence is only starting to gain momentum. Tools like DeepSec, Akiss and, again, ProVerif offer only limited support when it comes to indistinguishability. This is partly because the question of “What is an attacker capable of?” has no definitive answer in the equivalence case. 
The widely accepted default attacker for security properties is the so-called Dolev-Yao attacker, which has full control of the communication network; however, there is no default attacker who attempts to break the privacy of a protocol. The capabilities of such an attacker are reflected in the equivalence relation used to define a privacy property; hence the choice of that relation is crucial. This dissertation justifies a particular equivalence relation called quasi-open bisimilarity, which satisfies several natural requirements. It has a sound and complete modal logic characterisation, meaning that any attack on privacy has a practical interpretation; it enables compositional reasoning, meaning that a privacy property of a system automatically extends to a bigger system having the initial one as a component; and it captures the capability of an attacker to make decisions dynamically during the execution of the protocol. We not only explain the notion of quasi-open bisimilarity, but also employ it to study real-world protocols. The first protocol, UBDH, is an authenticated key agreement suitable for card payments, and the second protocol, UTX, is a smartcard-based payment protocol. Using quasi-open bisimilarity, we define the target privacy property of unlinkability, namely that it is impossible to link protocol sessions made with the same card, and prove that it holds for UBDH and UTX. The proofs that UBDH and UTX satisfy their privacy requirements are, to our knowledge, the first to demonstrate that a privacy property of a security protocol, defined as a bisimilarity equivalence, is satisfied for an unbounded number of protocol sessions. Moreover, these proofs illustrate a methodology that could be employed to study the privacy of other protocols.
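The basic intuition behind bisimilarity checking can be sketched on a toy labelled transition system (a hypothetical illustration: quasi-open bisimilarity for the applied pi-calculus additionally tracks the attacker's evolving knowledge and open inputs, which this sketch omits). Two states are bisimilar when no sequence of observable moves distinguishes them; a naive partition-refinement loop computes the equivalence classes.

```python
# Hypothetical sketch: strong bisimilarity on a finite labelled transition
# system, computed by naive partition refinement. (Quasi-open bisimilarity
# for the pi-calculus is richer; that machinery is omitted here.)

def bisimilarity_partition(states, trans):
    """Return a dict mapping each state to its bisimilarity class id.
    `trans` is a set of (source, label, target) triples."""
    part = {s: 0 for s in states}                      # start: one class
    while True:
        # signature: own class plus, per label, the classes reachable
        sig = {s: (part[s],
                   frozenset((l, part[t]) for (src, l, t) in trans if src == s))
               for s in states}
        if len(set(sig.values())) == len(set(part.values())):
            return part                                # no further splitting
        ids = {v: i for i, v in enumerate(set(sig.values()))}
        part = {s: ids[sig[s]] for s in states}

# p and r loop req/ack forever; q may take a 'req' branch that gets stuck,
# a difference an active observer can detect.
states = {"p0", "p1", "q0", "q1", "q2", "r0", "r1"}
trans = {("p0", "req", "p1"), ("p1", "ack", "p0"),
         ("r0", "req", "r1"), ("r1", "ack", "r0"),
         ("q0", "req", "q1"), ("q0", "req", "q2"),    # q2 is stuck: no ack
         ("q1", "ack", "q0")}
part = bisimilarity_partition(states, trans)
assert part["p0"] == part["r0"]       # indistinguishable protocols
assert part["p0"] != part["q0"]       # the stuck branch is observable
```

For privacy properties such as unlinkability, one of the two systems compared is the idealised specification in which sessions are trivially unlinkable, and the proof obligation is that the real protocol is equivalent to it for unboundedly many sessions.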

50 Years of Crises: the Development of the European Financial Assistance Regime
Rehm, Moritz UL

Doctoral thesis (2023)

This dissertation provides a comprehensive historical study of extraordinary financial assistance in the European Economic Community and European Union, a topic of growing relevance in political science. The analysis covers the development of the European financial assistance regime from 1968 until 2023 using a combined historical institutionalist approach. The theoretical framework built for this study is composed of a critical-juncture analysis and a legacy analysis, and is applied to four typical cases of crisis-induced change in the regime's development: the monetary instability of 1968-1969, the first oil shock of 1973-1974, the international financial crisis starting in 2008 and the Covid-19 crisis starting in 2020. Through the theoretical approach of ‘actor-centred historical institutionalism’, this dissertation explains, first, the necessary conditions for the occurrence of punctuated change and, second, the long-term implications of punctuated change in the form of processes of piecemeal adjustment. The findings demonstrate that crisis-induced change is facilitated by exogenous and endogenous factors linked to the financial impact of the crisis and the resilience of the ex ante assistance regime, as well as by rationalist considerations of relevant actors reflected in a mixed-preference game. In addition, this dissertation argues that crisis outcomes need to be analysed in subsequent periods to fully grasp their causal force and to uncover legacy mechanisms accounting for continuity and piecemeal adjustment of punctuated changes. Based on the findings presented in the four cases, this dissertation furthers the understanding of crisis-induced integration in financial assistance and provides a detailed explanation of the development of financial assistance instruments since 1971.

A BIOMECHANICAL STUDY OF THE PELVIS WITH AND WITHOUT FRACTURES AND IMPLANTS: COMBINING COMPUTATIONAL DESIGN AND EXPERIMENTAL TESTING FOR TYPICAL DAILY MOVEMENTS
Soliman, Ahmed Abdelsalam Mohamed UL

Doctoral thesis (2023)

This study fulfills the need for a dedicated, widely accepted and physiologically relevant pelvis testing setup. Limited comparative biomechanical data are available for the Supraacetabular External Fixator (SEF) and the Subcutaneous Iliopubic Plate (SIP) used in the treatment of anterior fragility fractures of the pelvis (FFP), and most experimental studies so far have relied on simplified loading and boundary conditions. There has also been growing interest in personalizing motion analysis to develop customized implants that can optimize implant performance and accelerate fractured-bone recovery for individual patients. Therefore, the main objectives of this study are to develop a biomechanical test bench that can emulate the physiological gait loading of the pelvis; to experimentally evaluate the stabilizing effect of the SEF and the SIP in the treatment of FFP; to expand the test stand's capability to emulate other common daily movements; to investigate the impact of customized musculoskeletal (MS) models; and to assess the potential benefits of personalized 3D-printed metallic subcutaneous plates for the treatment of FFP type Ia fractures. The study uses a computational experiment design procedure to develop a biomechanical test stand that realistically emulates the pelvis' physiological gait loading. The test stand is designed by iteratively reducing all muscle and joint contact forces of the pelvis to only four force actuators while still producing a similar stress distribution in the pelvis. Repeatability and reproducibility tests are conducted to verify the test stand's capabilities. Next, an FFP type Ia fracture is created on a synthetic pelvis for biomechanical testing under gait loading. The osteotomy on the right pelvic ring is then stabilized with the SEF or the non-locking/locking SIP, and the stability provided by both implants is assessed numerically and experimentally under physiological loading. 
Motion analysis is conducted to calculate joint and muscle force envelopes for each common daily movement of interest. The stress, strain and displacement of the pelvis under these loads are assessed numerically and then implemented in the biomechanical test stand to emulate each movement following the computational experiment design concept. A metallic 3D-printed SIP is developed to match the anatomical landmarks of the insertion points on the pelvis used in the experiments; this 3D-printed plate is assessed numerically and experimentally under physiological load to evaluate its performance compared to conventional plates. Personalization of the MS model is conducted for the pelvis by matching the anatomical landmarks of the pelvis in the generic MS model to those of the pelvis used in the actual experiments. The developed test stand, and the concept of computational experiment design behind it, provide guidelines on how to design biomechanical testing equipment with physiological relevance. The boundary conditions and the nature of the loading adopted in this study are more physiologically realistic than the state of the art. The numerically developed biomechanical testing setup of the pelvis in this study is a significant step forward in developing a physiologically relevant pelvis testing setup.

Stalled Democracy: Europeanization, Democratic Backsliding and the Rule of Law in the EU’s Multi-Level System. The Case of Bulgaria’s Judicial Reforms (2001-2021)
Stanimirov, Branimir Plamenov UL

Doctoral thesis (2023)

Bulgaria joined the European Union (EU) on 1 January 2007, together with its fellow EU-accession laggard Romania, even though various problems continued to afflict the country’s still fragile democracy. This meant that further transformative efforts were necessary, and the EU introduced a special supervision mechanism to catalyse further political reforms related to the country’s corruption and rule-of-law problems. However, the “transformative power” of the organisation turned out to be limited and did not manage to induce definitive and irreversible reforms, even though its efforts generated various positive developments on the ground. Domestically, these processes were related, among other things, to the particularly puzzling case of the never-ending reform of the Bulgarian judicial system. While the country implemented numerous reforms from the mid-1990s onwards, these changes seemed far from achieving their goal of a functioning judiciary. Even though more than ten governments worked on new legislation in this area, and despite the external pressure, the country did not achieve a satisfactory rule-of-law record, and the reform efforts continue to the present moment. This rather bleak state of affairs prompts an important question that remains unanswered: why, after years of constant domestic reforms and extensive EU reform pressure, does the Bulgarian judiciary still exhibit significant nonconformity with established norms and standards, and what are the causes of this stalled reform process? To account for this puzzling situation, this study seeks to build a novel explanatory framework that theorises the Bulgarian case and provides insights for other relevant cases in which processes of political reform appear to reach a point of stasis, experiencing neither perceptible progress nor strong backtracking. 
Drawing on insights from the literature in the fields of Europeanization and democratic backsliding in Central and Eastern Europe (CEE), this study explores why, when and how judicial reforms stall. The reform difficulties are explained through the combination of the weaknesses of the EU’s own approach to tackling rule-of-law problems in CEE, the strong resistance of domestic veto players to adopting new rules, and the inherent weaknesses of the actors demanding change. By coupling the impact of these exogenous and endogenous factors on the implementation of democratic reforms, a “stalled reform” model, so far absent from the literature, is crafted to capture the interactions and the dynamics among the various actors in this process. This model also demonstrates how these actors generate countervailing pressures which cancel each other out and thus produce the outcome of interest. Formulated in this way, the study makes several contributions to the relevant academic literature, but also to the work of the policy community dealing with rule-of-law problems in Bulgaria and the wider region. On the conceptual level, it theorises in a novel way the political processes of countries that are “stuck” in their democratisation efforts. Empirically, it provides a detailed analysis of Bulgaria’s protracted judicial reforms and the country’s stalled democratisation. The study offers important lessons and recommendations for the EU’s approach to strengthening the rule of law and democracy in its members and membership candidates, and for the elaboration of more solid reforms in the area.

Enabling Resilient and Efficient Communication for the XRP Ledger and Interledger
Trestioreanu, Lucian Andrei UL

Doctoral thesis (2023)

Blockchain technology is relatively new and still evolving. Its development was fostered by an enthusiastic community of developers which sometimes forgot lessons from the past about the security, resilience and efficiency of communication, which can affect network scalability, service quality and even service availability. These challenges can be addressed at the network level but also at the operating-system level. At the network level, the protocols and architecture used play a major role, and overlays have interesting advantages such as custom protocols and the possibility of arbitrary deployments. This thesis shows how overlay networks can be designed and deployed to benefit the security and performance of communication for consensus-validation-based blockchains and blockchain interoperability, taking as concrete cases the XRP Ledger and the Interledger protocol, respectively. The XRP Ledger is a consensus-validation-based blockchain focused on payments which currently uses a flooding mechanism for peer-to-peer communication, with a negative impact on scalability. The first proposed overlay is based on Named Data Networking, an Internet architecture that propagates data by name instead of by location. The second proposed overlay is based on Spines, a solution offering improved latency on lossy paths, intrusion tolerance and resilience to routing attacks. The system component was also studied: this thesis contributes methodologies to evaluate the system-level performance of a node and to increase security at the system level. 
The value added by the presented work can be synthesized as follows: i) investigate and propose a Named Data Networking-based overlay solution to improve the efficiency of intra-blockchain communication at the network level, taking the XRP Ledger as a working case; ii) investigate and propose an overlay solution based on Spines, which improves the security and resilience of inter-blockchain communication at the network level, taking the Interledger protocol as a working case; iii) investigate and propose a host-level solution for non-intrusive instrumentation and monitoring which helps improve the performance and security of inter-blockchain communication at the system level of machines running Distributed Ledger infrastructure applications treated as black boxes, with Interledger Connectors as a concrete case.
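The scalability cost of flooding that motivates contribution i) can be illustrated with a toy calculation (hypothetical topology, not the XRP Ledger's actual peer graph): under naive flooding every peer re-sends a fresh message on all of its links, so transmissions grow with the number of edges, whereas an overlay that disseminates along a spanning tree needs only one send per remaining node.

```python
from collections import deque

# Hypothetical peer graph: adjacency list of four interconnected peers.
peers = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}

def flooding_sends(graph, origin):
    """Count link transmissions when every peer forwards a newly seen
    message once to each of its neighbours (naive flooding)."""
    seen, queue, sends = {origin}, deque([origin]), 0
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            sends += 1                    # each forward costs one send
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return sends

def tree_sends(graph):
    """Broadcast along any spanning tree: one send per non-root node."""
    return len(graph) - 1

# Flooding costs the sum of node degrees (twice the edge count), while
# structured dissemination grows only with the number of peers.
assert flooding_sends(peers, 0) == sum(len(nbrs) for nbrs in peers.values())
assert tree_sends(peers) == 3
```

The gap widens as peer connectivity grows, which is why replacing flooding with a structured overlay (name-based forwarding or a resilient dissemination graph) reduces redundant traffic without sacrificing reach.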

Towards a further Understanding on the Nature and Psychological Effects of Job Demands: Examining the Challenge-Hindrance-Threat Distinction of Job Demands, Demand Appraisals and the Role of Job Resources
Fernandez de Henestrosa, Martha UL

Doctoral thesis (2023)

The experience of work-related stress and ill-health is of major concern for employers and organizations in the European Union (EU-OSHA, 2022). From an occupational health perspective, work-related stress is expected to result from employees’ exposure to certain psychological and social characteristics of work, so-called job demands, which are presumed to lead to diminished health and performance among employees (Eurofound & EU-OSHA, 2014). To explain associations between such job characteristics and employees’ health, scholars have generally relied on prominent theoretical frameworks such as the Job Demands-Resources model (JD-R; Bakker & Demerouti, 2017). Although the JD-R model has successfully contributed to the prediction of work-related health and motivational outcomes in past years, it has also left a number of unresolved issues (Bakker & Demerouti, 2017). Drawing on multiple theoretical frameworks, the present dissertation examined in greater detail inconsistencies related to the nature and functioning of job demands, and in doing so aimed to provide a further understanding of work-related demands and their psychological effects. The present thesis encompasses three published articles and addresses open research avenues regarding (i) the categorization of job demands, (ii) the cognitive appraisal of job demands, and (iii) organizational determinants of demand appraisal. A review of the occupational health literature revealed that most research on the categorization of work-related demands continues to apply a two-fold differentiation of job demands (i.e., the Challenge-Hindrance framework; Cavanaugh et al., 2000). However, a more differentiated approach to distinguishing between types of job demands has recently been introduced (i.e., the Challenge-Hindrance-Threat framework; Tuckey et al., 2015). 
Few studies have been conducted on Tuckey et al.'s (2015) extended dimensionality of workplace stressors (e.g., Espedido & Searle, 2018), and not much is known about how job threats, job hindrances and job challenges relate to well-being outcomes once job resources (i.e., motivational aspects of the job) are taken into account. Therefore, the first aim of the present dissertation (i.e., Article 1) was to examine Tuckey et al.'s (2015) expanded dimensionality of workplace stressors within the JD-R framework by analyzing job threats, job hindrances and job challenges alongside job resources. Results from a heterogeneous occupational sample of Luxembourgish employees supported the distinctiveness of Tuckey et al.'s (2015) threefold differentiation of job demands based on their associations with well-being outcomes, while accounting for the effects of job resources. Results further corroborated the health-impairing nature of job threats, job hindrances and job challenges, and supported the motivational nature of job resources. Contrary to expectations, job challenges did not relate to employees’ experiences of vigor. Prior research has often relied on classification frameworks to categorize job demands a priori and to explain their effects (Mellupe, 2020). However, a more recent stream of research has moved towards the examination of employees’ subjective evaluations (i.e., appraisals) of work-related demands (LePine, 2022). Although initial studies examining employees’ appraisal of job demands have yielded promising insights into the nature of work-related demands, research does not yet consider appraisal in a systematic manner (Li et al., 2019). In addition, scholars have exclusively considered primary appraisal (i.e., the motivational relevance of the stressor) when examining job demands and their associated effects, thereby ignoring the notion of secondary appraisal (i.e., individuals’ assessment of their ability to cope with the stressor; Lazarus & Folkman, 1984; Podsakoff et al., 2023).
To address these limitations, and taking into account previous findings on the well-being of nurses during the Covid-19 pandemic (e.g., Mo et al., 2020), the second aim of the present dissertation (i.e., Article 2) was to examine how nursing professionals appraised job demands during the health crisis, and to analyze the predictive contribution of nurses’ secondary appraisal of job demands with respect to their proximal affective responses. Results from a sample of nursing professionals working in Luxembourg indicated that secondary appraisal was the most important predictor of nurses’ affective states. In addition, negative affective states were predicted by threat appraisals and job demands (i.e., time pressure, emotional demands), whereas positive affect was predicted by challenge appraisals of emotional and physical demands. Results further showed that emotional and physical demands were exclusively appraised as threatening, whereas time pressure was associated with both challenge and threat appraisals. Lastly, a literature review revealed that not much is known about which organizational factors might contribute to employees’ demand appraisals (LePine, 2022). Therefore, the third aim of the present dissertation (i.e., Article 3) was to investigate determinants and possible boundary conditions of nurses’ demand appraisals in the form of matching job resources. Results showed that corresponding job resources predicted challenge appraisals of job demands. Regarding the prediction of threat appraisal, all proposed job resources except social support were significantly associated with nurses’ threat appraisal of corresponding job demands. Contrary to expectations, job resources did not moderate the associations between matching job demands and their respective challenge/threat appraisals.
In sum, the findings of the present dissertation highlight the importance of (i) adopting a threefold understanding of job demands (i.e., challenges, hindrances, threats) while taking job resources into account, (ii) considering secondary appraisals alongside job demands and their primary appraisals, and (iii) considering matching job resources as organizational determinants of challenge and threat appraisals. These findings may serve to guide occupational health intervention strategies.

Unravelling the early pathological mechanisms in Parkinson's Disease: Insights from alpha-synuclein dependent and independent models
Sciortino, Alessia UL

Doctoral thesis (2023)

Affecting over 10 million people worldwide, Parkinson’s disease is the second most common neurodegenerative disorder. With only 10% of cases having a known genetic cause, PD aetiology largely remains an enigma. Endogenous factors such as genetic predisposition, and exogenous factors such as exposure to toxins and lifestyle choices, interplay in the initiation and acceleration of the disease. Despite some common hallmarks such as nigrostriatal degeneration and Lewy body pathology, the clinical picture of PD varies greatly across patients. Non-motor symptoms are common and thought to emerge up to 20 years prior to diagnosis, ranging from gastro-intestinal dysfunction to sleep disturbances and hallucinations. Ninety percent of PD patients present at least one neuropsychiatric symptom, and about 30% of all patients develop dementia. Interventions aimed at preventing or slowing down disease progression require a better understanding of the early molecular events that foster neuronal dysfunction and death. Longitudinal studies that allow the investigation of early pathological stages are challenging to achieve with patient-based studies alone, and thus largely rely on the use of animal models. Specifically, rodents have very similar anatomy, physiology, and genetics to humans, and a good set of genetic/molecular tools is available to generate pathological models. In the present thesis, we investigated alpha-synuclein dependent and independent models of PD to unravel the early molecular events driving PD pathogenesis. Firstly, we investigated a genetic mouse model overexpressing the human, E46K-mutated alpha-synuclein gene. We characterised neurodegeneration in the nigrostriatal pathway and motor deficits, detecting characteristics of an early-PD phenotype.
Aiming to understand the molecular events driving neurodegeneration, we profiled the ventral midbrain transcriptome at different ages, uncovering that transcriptional changes are an early response to the alpha-synuclein challenge. As the E46K mutation is associated with dementia, we also profiled the hippocampus to investigate early transcriptional events linked with cognitive dysfunction in PD. We revealed that hippocampal dysfunction is mostly driven by the ageing process, operating on the interplay of genetic and gender predisposition. Secondly, we profiled transcriptomic changes in the midbrain of the alpha-synuclein independent Park7-/- (DJ-1 KO) mouse model. Once again, we uncovered the interplay of sex and age in determining susceptibility to the disease challenge, with males being more affected than females. Specifically, the response to DJ-1 loss of function appeared to be largely sex-specific and to be mediated by the oestrogen pathway and the DJ-1/Nrf2/CYP1B1 axis. Although sexual dimorphism has not been directly investigated in human Park7 PD cases owing to their paucity, it has been reported in sporadic PD for several populations. Thus, our findings might significantly contribute to uncovering the reasons behind gender differences in PD. Thirdly, we investigated a moderate overexpressor of wild-type alpha-synuclein (Thy1-Syn14), aiming to reach a compromise between genetic and idiopathic PD modelling. To understand how endogenous and exogenous factors interplay in disease onset and progression, we exposed transgenic mice to the amyloidogenic protein Curli and to a fibre-deprived diet. We uncovered that microbiome insults and diet act in combination to promote disease progression, and we provided supporting evidence for the concept of a gut-brain axis in PD. Our results underline the relevance of lifestyle adjustments in the management of PD patients.
Finally, we investigated how different alpha-synuclein moieties and glutamate exposure might contribute to neurodegeneration. Although these studies remain incomplete, we gained some preliminary indications that can serve as a starting point for future research. We observed that both oligomers and fibrils are toxic forms of alpha-synuclein, and that the lack of standardisation in the production of recombinant moieties is a current issue that may hinder reproducibility in alpha-synuclein research. We also reported a higher susceptibility of DJ-1 knock-down cells to glutamate excitotoxicity, suggesting an additional mechanism through which DJ-1 loss of function contributes to PD development.

STUDYING THE IMPACT OF A53T α-SYNUCLEIN ON ASTROCYTIC FUNCTIONS AND ACTIVATION IN HUMAN IPSC-DERIVED CULTURES
Mulica, Patrycja UL

Doctoral thesis (2023)

With its high prevalence among the elderly, the movement disorder Parkinson’s disease (PD) poses a major challenge for our society. Unfortunately, despite continuous efforts from the research community, we still lack disease-modifying treatments for this condition. It is therefore of great importance to develop suitable models that can be employed to better understand the molecular mechanisms underlying PD. In this context, iPSC technology offers the possibility to study disease pathogenesis using patient-derived brain cells. In recent years, astrocytes have come into the spotlight as potential major contributors to PD development. Yet there is a limited number of studies utilizing iPSC-derived models to examine PD-linked mutations at endogenous levels. This thesis aims to address this gap by studying the effect of A53T α-synuclein on the physiology of human iPSC-derived astrocytes. To identify a suitable model, we first compared two published protocols for the generation of iPSC astrocytes, referred to as the Oksanen and Palm methods, respectively. A transcriptomic analysis revealed higher maturation characteristics for Oksanen astrocytes. Furthermore, these astrocytes showed a higher similarity to their human postmortem counterparts. Applying the Oksanen protocol, we generated astrocytes derived from a healthy individual and from a patient carrying the G209A mutation in SNCA, corresponding to the p.A53T substitution in α-synuclein. Single-cell RNA sequencing allowed us to identify perturbed molecular mechanisms exclusively in pure astrocytic populations. We could demonstrate that astrocytes have a decreased capacity to differentiate. Furthermore, we observed a distinct response of the two cell lines to triggers of activation. Interestingly, activated patient astrocytes also showed changes in pathways related to mitochondrial homeostasis.
Taken together, we show that A53T α-synuclein has a profound effect on the function of iPSC-derived astrocytes. In particular, we could demonstrate that patient astrocytes differ from healthy control cells in their activation status and with respect to mitochondrial biology. Further investigation will be required to elucidate the impact of the identified perturbations on the astrocyte-neuron interplay in the context of PD.

KÖRPERLICHES AKTIVITÄTSVERHALTEN VON KINDERN UND JUGENDLICHEN IN LUXEMBURG – EINE ERSTE ERHEBUNG AKZELEROMETER BASIERTER DATEN
Eckelt, Melanie UL

Doctoral thesis (2023)

Physical activity is associated with countless health benefits (Janssen & LeBlanc, 2010). However, physical activity is declining worldwide (Guthold et al., 2020), which in turn is linked to numerous physical as well as mental health consequences (Poitras et al., 2016). Previous studies have already shown in this context that it is not the duration of daily sitting/resting but predominantly insufficient physical activity during the day that is associated with a higher mortality rate (Ekelund et al., 2016; Van der Ploeg & Hillsdon, 2017). For interventions, it makes sense to begin promoting physical activity as early as childhood and adolescence. This is not only because physical activity is of enormous health benefit for children and adolescents as well, for example through increased fitness and a reduced risk of obesity (Bouchard et al., 2012; Poitras et al., 2016), but also because behaviours established in childhood and adolescence carry over into the adult lifestyle (Rudolf et al., 2016). Various methods exist for measuring physical activity, based either on self-reports (subjective assessment) or on physiological values (objective assessment). For Luxembourg specifically, only subjective surveys in the form of questionnaires had been conducted so far. The primary aim of the present thesis is therefore the first status quo assessment of objective (accelerometer-based) physical activity of children and adolescents in Luxembourg. The detailed analyses and results are described in four individual scientific articles, each addressing different research questions. In the first article, in addition to the general objective activity assessment, we analysed activity behaviour at school, in leisure time, and in physical education classes, and whether, and if so,
how many of the children and adolescents meet the physical activity recommendations for general activity and for physical education. We were able to show that only a quarter of the children and adolescents meet the World Health Organization (WHO) recommendation of at least 60 minutes of moderate-to-vigorous physical activity. Furthermore, the largest share of moderate-to-vigorous activity was generated in leisure time compared with school time. The recommendation for physical education, to be moderately to vigorously active for 50% of lesson time, was met by only one student. In addition, we were able to show that boys are significantly more active than girls in all domains and that physical activity decreases with increasing age. The second article examined whether seasonal conditions, i.e., the cooler seasons of autumn/winter versus the warmer months of spring/summer, influence physical activity. As we expected, the children and adolescents were more active in spring/summer, and we found a significant difference in moderate-to-vigorous physical activity between the seasons. In addition, subjective activity was assessed, and the average difference from the objective data remained stable over the year. The children and adolescents reported higher values in the data collected subjectively by questionnaire than in the data recorded objectively with accelerometers. The third article assessed objective activity and subjective perceived exertion in physical education and examined whether motivational regulation can influence them. A significant association of competence support, as well as a significantly negative association of extrinsic motivation, with the percentage of moderate-to-vigorous physical activity was found.
Furthermore, a significant association emerged between relatedness support and competence satisfaction on the one hand and subjective perceived exertion on the other. The fourth article, published in the Luxembourg National Report on Education (Bildungsbericht Luxemburg), deals with the use of digital technology in physical education. Here we were able to show, among other things, that the subjective physical activity data are higher than the objectively recorded data. In summary, this thesis establishes, on the basis of accelerometer-based data, a lack of physical activity among children and adolescents in Luxembourg, and the analysis of different settings allows areas for intervention approaches to be described.

Guiding Quality Assurance Through Context Aware Learning
Garg, Aayush UL

Doctoral thesis (2023)

Software Testing is a quality control activity that, in addition to finding flaws or bugs, provides confidence in the software’s correctness. The quality of the developed software depends on the strength of its test suite. Mutation testing has been shown to effectively guide the improvement of a test suite’s strength. Mutation is a test adequacy criterion in which test requirements are represented by mutants. Mutants are slight syntactic modifications of the original program that aim to introduce semantic deviations from it, requiring testers to design tests that kill these mutants, i.e., that distinguish the observable behavior of a mutant from that of the original program. This process of designing tests to kill a mutant is performed iteratively for the entire mutant set, thereby augmenting the test suite and improving its strength. Although mutation testing is empirically validated, a key issue is that its application is expensive due to the large number of low-utility mutants it introduces. Some mutants cannot even be killed, as they are functionally equivalent to the original program. To reduce the application cost, it is imperative to limit the mutants to those that are actually useful. Since identifying such mutants requires manual analysis and test executions, an effective solution to the problem has been lacking, and it remains unclear how to mutate and test code efficiently. On the other hand, with the advancement of deep learning, several recent works in the literature have focused on applying it to source code to automate many nontrivial tasks, including bug fixing, producing code comments, code completion, and program repair. The increasing utilization of deep learning is due to a combination of factors. The first is the vast availability of data to learn from, specifically source code in open-source repositories.
The second is the availability of inexpensive hardware able to efficiently run deep learning infrastructures. The third, and most compelling, is its ability to automatically learn the categorization of data by learning the code context through its hidden-layer architecture, making it especially proficient at identifying features. Thus, we explore the possibility of employing deep learning to identify only useful mutants, in order to achieve a good trade-off between the invested effort and test effectiveness. Hence, as our first contribution, this dissertation proposes Cerebro, a deep learning approach to statically select subsuming mutants based on the mutants’ surrounding code context. As subsuming mutants reside at the top of the subsumption hierarchy, test cases designed to kill only this minimal subset of mutants kill all the remaining mutants. Our evaluation of Cerebro demonstrates that it preserves the benefits of mutation testing while limiting the application cost, i.e., reducing all cost factors such as equivalent mutants, mutant executions, and mutants requiring analysis. Apart from improving test suite strength, mutation testing has proven useful in inferring software specifications. Software specifications describe the software’s intended behavior and can be used to distinguish correct from incorrect software behaviors. Specification inference techniques aim to infer assertions by generating and filtering candidate assertions through dynamic test executions and mutation testing. Because mutation testing introduces a large number of mutants, such techniques are also computationally expensive, establishing a need to select the mutants best suited to assertion inference. We refer to such mutants as Assertion Inferring Mutants. In our analysis, we find that assertion inferring mutants differ significantly from subsuming mutants.
Thus, we explored whether deep learning can be employed to identify Assertion Inferring Mutants. Hence, as our second contribution, this dissertation proposes Seeker, a deep learning approach to statically select Assertion Inferring Mutants. Our evaluation demonstrates that Seeker enables an assertion inference capability comparable to full mutation analysis while significantly limiting the execution cost. In addition to testing software in general, a few works in the literature attempt to employ mutation testing to tackle security-related issues, owing to the fault-based nature of the technique. These works propose mutation operators that convert non-vulnerable code into vulnerable code by mimicking common security bugs. However, these pattern-based approaches have two major limitations. Firstly, the design of security-specific mutation operators is not trivial; it requires manual analysis and comprehension of the vulnerability classes. Secondly, these mutation operators can alter the program semantics in a manner that is not convincing for developers and is perceived as unrealistic, thereby hindering the usability of the method. On the other hand, with the release of powerful language models trained on large code corpora, e.g., CodeBERT, a new family of mutation testing tools has arisen, promising to generate natural mutants. We study the extent to which the mutants produced by language models can semantically mimic the behavior of vulnerabilities, i.e., Vulnerability-mimicking Mutants. Test cases designed to be failed by these mutants will also tackle the mimicked vulnerabilities. In our analysis, we found that only a very small subset of mutants is vulnerability-mimicking; nevertheless, this set mimics more than half of the vulnerabilities in our dataset.
Due to the absence of any defined features for identifying vulnerability-mimicking mutants, as our third contribution, this dissertation introduces Mystique, a deep learning approach that automatically extracts features to identify vulnerability-mimicking mutants. Despite their scarcity, Mystique predicts vulnerability-mimicking mutants with high prediction performance, demonstrating that their features can be learned automatically by deep learning models to statically predict them, without the need to invest any effort in defining features. Since our vulnerability-mimicking mutants cannot mimic all vulnerabilities, we perceive that these mutants are not a complete representation of all vulnerabilities, and there remains a need for actual vulnerability prediction approaches. Although many such approaches exist in the literature, their performance is limited by a few factors. Firstly, vulnerabilities are far fewer than software bugs, which limits the information available to learn from and affects prediction performance. Secondly, the existing approaches learn from both vulnerable and supposedly non-vulnerable components. This introduces unavoidable noise into the training data, i.e., components with no reported vulnerability are considered non-vulnerable during training, which causes existing approaches to perform poorly. We employed deep learning to automatically capture features related to vulnerabilities and explored whether we could avoid learning from supposedly non-vulnerable components. Hence, as our final contribution, this dissertation proposes TROVON, a deep learning approach that learns only from components known to be vulnerable, thereby making no assumptions and bypassing the key problem faced by previous techniques.
Our comparison of TROVON with existing techniques on security-critical open-source systems with historical vulnerabilities reported in the National Vulnerability Database (NVD) demonstrates that its prediction capability significantly outperforms the existing techniques.
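The mutant-killing criterion described in this abstract can be sketched in miniature. The following is an illustrative example, not code from the thesis: the function names and the `>=` → `>` relational-operator mutation are invented for the sake of demonstration.

```python
# Illustrative sketch of mutation testing: a mutant is a small syntactic
# change to the original program, and a test input "kills" the mutant when
# the observable outputs of the two programs differ.

def price_with_discount(total: float) -> float:
    """Original program: 10% discount on orders of 100 or more."""
    if total >= 100:          # original boundary check
        return total * 0.9
    return total

def price_with_discount_mutant(total: float) -> float:
    """Mutant: the relational operator `>=` mutated to `>`."""
    if total > 100:           # semantic deviation, visible only at the boundary
        return total * 0.9
    return total

def kills(test_input: float) -> bool:
    """True if this input distinguishes mutant from original, i.e. kills it."""
    return price_with_discount(test_input) != price_with_discount_mutant(test_input)

# A weak test input (50) does not kill the mutant; the boundary input (100) does,
# forcing the tester to add a boundary-value test and thus strengthening the suite.
assert not kills(50.0)
assert kills(100.0)
```

Iterating this process over a whole mutant set is what makes the technique expensive, and why selecting only useful (e.g., subsuming) mutants, as Cerebro does, matters.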

Language Policy in Luxembourg and the German-speaking Community of Belgium: Ideologies of Language
Rivera Cosme, Gabriel Alejandro UL

Doctoral thesis (2023)

The language policy discourses of Luxembourg and the German-speaking Community of Belgium (GC) exhibit fundamental differences, yet interesting similarities, that have so far not been subject to a discourse analysis from a mixed framework of linguistic anthropology and discourse linguistics (Diskurslinguistik). On the basis of a corpus consisting of current language policy texts and semi-structured interviews with key actors involved in current policy design and implementation, this research aims to answer the question of the interplay of ideology and discourse in the design and implementation of the language policy of Luxembourg and the GC. The bulk of the analysis is made up of three layers for each case. The starting point of the analysis is a historical overview that identifies the ideologies and language policy discourses that emerged, predominated, and transformed from the 19th to the 21st century in each case. The second layer is a discourse analysis of current language policy texts, with a focus on the ideologies informing current discourses about Luxembourgish in Luxembourg and German in the GC. Finally, the third layer is a discourse analysis of interview extracts, with an equal focus on ideologies. Through a combined thematic and discourse analysis based on the social semiotics of language, this research provides a description of the discursive patterns of the linguistic structure of passages of each text and interview, with the aim of linking these patterns to the identified ideologies that inform the policy discourses. It was found that the connecting node between Luxembourg and the GC lies in the tension between the two themes of standardization and multilingualism. It is shown that standardization and multilingualism are thematic centers from which discourses about language, identity, and nation emanate in these two cases.
Through the combination of the historical overview and the meticulous analysis of discursive patterns identified in the linguistic structure of language policy texts and interview extracts, it is not only shown how ideology informs current language policy discourses in Luxembourg and the GC, but also why language policy discourses transform or sediment through time.

AGE- AND SEX-SPECIFIC TRANSCRIPTOME CHANGES IN THE MIDBRAIN OF PARK7-DEPLETED MICE
Helgueta Romero, Sergio UL

Doctoral thesis (2023)

To date, various functions have been reported for the DJ-1 protein, encoded by the PARK7 gene, mostly associated with maintaining a balance between reactive oxygen species (ROS) production and the antioxidant response. Indeed, DJ-1 has been related to several oxidative-stress-associated diseases, either through its overexpression or through its absence. Loss-of-function mutations in PARK7 can lead to early-onset PD. PD is the second most common neurodegenerative disease, usually affecting the population above the age of 65. PD is characterized by its motor symptoms, although prodromal non-motor symptoms can appear up to 20 years earlier and can have a major impact on patients’ quality of life. One of the most characteristic neuropathological hallmarks of PD is the progressive loss of dopaminergic neurons in the substantia nigra pars compacta (SNpc) of the midbrain, leading to striatal dopamine deficiency. One of the main drivers of the disease is oxidative stress caused by mitochondrial dysfunction. Sex differences in the incidence, prevalence and severity of the disease are observed in PD, with males more affected than females. In mice, Park7 deletion leads to dopaminergic deficits during aging and increased sensitivity to oxidative stress. However, the severity of the reported phenotypes varies, and the findings are often not separated by sex. In the present study, gene expression signatures of in vivo midbrain sections from male and female Park7 knock-out mice were investigated at different ages to understand the early, prodromal molecular changes upon loss of DJ-1. Interestingly, while at 3 months the transcriptomes of both male and female mice were unchanged compared to their wild type littermates, an extensive deregulation was observed specifically in 8-month-old males.
The affected genes were enriched for processes such as focal adhesion, extracellular matrix (ECM) interaction, and epithelial-to-mesenchymal transition (EMT), while the transcription factor most enriched at the deregulated genes was nuclear factor erythroid 2-related factor 2 (NRF2). Among others, the EMT marker gene Cdh1 as well as antioxidant response genes were altered specifically in the midbrain, but not in the cortex, of male DJ-1 deficient mice. Moreover, many of the misregulated genes are known target genes of estradiol (E2) and all-trans-retinoic acid (ATRA) signaling and show sex-specific expression in wild type mice. In line with this, downregulation of the expression of Cyp1b1, encoding an enzyme involved in the metabolism of both E2 and ATRA, was also observed only in the midbrain of male DJ-1 deficient mice. Depletion of DJ-1 or NRF2 in primary male astrocytes recapitulated many of the in vivo changes, including downregulation of Cyp1b1. Interestingly, knock-down of Cyp1b1 led to gene expression changes in focal adhesion and EMT in cultured male astrocytes. Moreover, iPSC-derived astrocytes from a PD patient with a loss-of-function PARK7 mutation showed changes in genes associated with the EMT pathway and NRF2 signaling. Taken together, our data indicate that loss of Park7 leads to sex-specific gene expression changes, specifically in males, through astrocytic alterations in the NRF2-CYP1B1 axis. In addition, since an extensive deregulation occurs in the midbrain of 8-month-old males, the single-nuclei chromatin accessibility profile at this age and sex was also investigated, with the aim of identifying the upstream regulatory events that lead to the observed transcriptomic changes and the implication of each cell type in those changes. Despite the low number of recovered nuclei, the major brain cell types were successfully identified.
Similar representation, in terms of proportion of the total nuclei, was observed between the genotypes for all identified cell type populations except astrocytes, which showed lower numbers in Park7-/- mice than in wild type mice. Moreover, the biggest differences in chromatin accessibility were observed in the astrocyte population, which also showed the strongest overlap with the transcriptomic changes. Enrichment analysis performed on the genes showing both epigenomic and transcriptomic changes in astrocytes was consistent with the pathways identified in the analysis of the entire midbrain sections, further indicating the relevance of these changes and their association with astrocytes in the Park7-depleted mouse midbrain at both the chromatin and mRNA levels. These findings provide new information about the Park7-/- PD mouse model, showing specific changes during aging and suggesting a higher sensitivity of males to loss of DJ-1, which might help to better understand variation in the reported phenotypes of Park7-/- mice. These results also point to astrocytes as the main cell population involved in the gene expression changes. Finally, this study gives an insight into the molecular changes that may occur in the early stages of PD and helps us to understand why males are more affected by PD than females.

Convergence of national prudential supervision under the European Single Supervisory Mechanism
Valieva, Farida UL

Doctoral thesis (2023)

This dissertation starts with an overview of the recent and ongoing efforts to achieve greater convergence in national banking supervision within the European Single Supervisory Mechanism (SSM). However, the persistence of distinct national preferences on banking supervision has resulted in ongoing differences in the practice of banking supervision at the national level. More specifically, the supervision of Less Significant Institutions (LSIs) has remained under the direct control of national supervisors and, to a certain extent, under national law, thus allowing a significant ongoing margin of manoeuvre in supervision. This dissertation examines the consequences of this margin of manoeuvre left to national supervisors, despite strong convergence pressures through post-financial-crisis EU institutional developments. The analysis focuses on the national supervision of LSIs. The main research question guiding this work is, therefore: under what conditions do pre-existing national institutional configurations continue to determine the trajectory of national supervisory practice in the context of European-level convergence pressures (through the European Banking Authority and the SSM)? To answer this question, I use a four-part analytical framework based on, first, Europeanisation, which provides insight into top-down processes of integration; second, Historical Institutionalism, which provides an understanding of the path dependency of earlier policy decisions shaping national supervisory institutions and practice; third, the Epistemic Communities approach; and, fourth, the Transnational Policy Network framework. Based on this combined analytical framework, I formulate the following hypothesis: the more discretion exercised by the national supervisor in relation to its government, the more likely the adoption of policies and practices that result in greater convergence with the rules and practices developed at the EU / Banking Union level.
To test this hypothesis, I start with a broad assessment of the provisions that provide a margin of manoeuvre to national authorities, specifically the options and national discretions (ONDs) explicitly granted to national authorities — member state governments or supervisors — in EU capital requirements legislation: the CRD IV/V and CRR I/II. This assessment provides an initial confirmation of my hypothesis, showing a greater degree of convergence in cases where national supervisors benefit from full discretion with no intervention from national governments. I then test the hypothesis on a typical case where NCAs can exercise discretion — the Supervisory Review and Evaluation Process (SREP) — and a typical case with national government intervention that limits supervisory discretion — Non-Performing Loans (NPLs). Through an analysis of the French and German national cases with regard to SREP and NPLs, I conclude that the convergence of prudential supervision within the SSM was largely observed in cases where the national supervisor benefitted from discretion as a result of cooperation opportunities and socialisation processes.

Three Essays on the General Equilibrium Effects of Human Interactions
Ünsal, Alper UL

Doctoral thesis (2023)

The overarching theme of this PhD thesis is human mobility and its externalities, particularly in the context of labour and health economics. Through rigorous modelling and analysis, the three chapters of the thesis demonstrate the potential benefits of policies that regulate human mobility. In the first chapter of my PhD, I examine how language training can improve the functioning of the labour market, with a particular focus on high-skilled immigrants who face language barriers. I argue that fully funding the cost of language acquisition for migrants can bring significant benefits to the economy and migrants, but may marginally worsen the labour market performance of low-skilled natives. Using a search and matching framework with two-dimensional skill heterogeneity, I model the effects of a language acquisition subsidy on migrants' labour market integration and its impact on natives' labour market performance. My study finds that subsidizing language acquisition costs may increase the GDP of the German economy by approximately ten billion dollars by decreasing the aggregate unemployment rate and skill mismatch rate and increasing the share of job vacancies requiring high generic skills. The second chapter of my PhD explores the challenges involved in devising social contact limitation policies as a means of controlling infectious disease transmission. Using an economic-epidemiological model of COVID-19 transmission, I evaluate the effectiveness of different intervention strategies and their consequences on public health, social welfare and economic outcomes. The findings emphasize the importance of responsiveness in implementing social contact limitations, rather than solely focusing on their stringency, and suggest that early interventions lead to the lowest economic and mental well-being losses for a given number of lives lost.
The study has broader implications for managing the societal impact of infectious diseases and highlights the need to continue refining our understanding of these trade-offs and developing adaptable models and policy tools to safeguard public health while minimizing social and economic consequences. Overall, the study offers a robust and versatile framework for understanding and navigating the challenges posed by public health crises and pandemics. The third chapter of my PhD builds on the economic-epidemiological model developed in Chapter 2 to analyze the multifaceted effects of vaccine hesitancy in controlling the spread of infectious diseases, with a particular focus on the COVID-19 pandemic in Belgium. The study utilizes actual vaccination rates by age group until June 2021 and simulates the following months by incorporating realistic properties such as temporary immunity, age-specific vaccination hesitancy rates, daily vaccination capacity, and vaccine efficacy rate. The baseline scenario with an overall 27.1% vaccine hesitancy rate indicates that current vaccination rates in Belgium are sufficient to control the spread of COVID-19 without imposing social contact limitations. However, hypothetical scenarios with higher disease transmission rates demonstrate the high costs of vaccine hesitancy, resulting in significant losses in labour supply, mental well-being, and lives. Throughout this thesis, I have described the costs and benefits induced by mobility, and shown that mobility policies create winners and losers. In Chapter 1, subsidizing the cost of language acquisition for migrants can bring significant benefits to the economy and migrants, but may marginally worsen the labour market performance of low-skilled natives. In Chapter 2, stringent policies alleviate health losses, but they impact economic activity and mental health.
In Chapter 3, the health externalities generated by human interactions impose a potential trade-off between values, namely the freedom to move and the freedom to choose whether to get vaccinated. In each of these chapters, I quantify these trade-offs. Another important insight from this thesis is the need to incorporate behavioural aspects into macro models evaluating the consequences of policies related to human mobility. In this thesis, these aspects include individual investments in language training, decision-making on infection avoidance, social contacts, labour supply, and vaccination decisions; incorporating them can lead to more effective policies that balance the interests of various stakeholders. Overall, this thesis contributes to the literature on human mobility by highlighting the potential benefits and challenges associated with it, and the need for nuanced and responsive policymaking that takes into account behavioural aspects and externalities. The insights gained from this thesis can be relevant for future research in economics on topics related to human mobility, public health, and labour market integration.
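The third chapter's setup — epidemic dynamics with a daily vaccination capacity and a hesitant share of the population that never vaccinates — can be illustrated with a minimal compartmental sketch. All parameter values below (transmission and recovery rates, capacity, efficacy) are illustrative assumptions, not the calibrated values of the thesis's economic-epidemiological model.

```python
# Minimal discrete-time SIR sketch with daily vaccination capacity and a
# hesitant population share that never vaccinates. Illustrative parameters only.

def simulate(days=300, n=11_500_000, beta=0.25, gamma=0.1,
             daily_vax=50_000, hesitancy=0.271, efficacy=0.85):
    s, i, r = n - 100.0, 100.0, 0.0   # susceptible, infected, recovered
    vaccinated = 0.0
    willing = n * (1.0 - hesitancy)   # people who would ever accept a vaccine
    peak = 0.0
    for _ in range(days):
        # administer doses, limited by capacity, the willing pool, and S
        doses = max(0.0, min(daily_vax, willing - vaccinated, s))
        protected = doses * efficacy  # only effective doses leave S
        new_inf = beta * s * i / n    # new infections (Euler step)
        new_rec = gamma * i           # recoveries
        s = max(0.0, s - new_inf - protected)
        i += new_inf - new_rec
        r += new_rec
        vaccinated += doses
        peak = max(peak, i)
    return {"peak_infected": peak, "attack_rate": r / n}
```

Running the sketch with a high hesitancy rate yields a larger final attack rate than the baseline, mirroring the chapter's qualitative finding that hesitancy is costly when transmission is strong.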

Le Phénomène des textes jumeaux : l’acte d’autocorrection (dialogue entre Andreï Tarkovski et Arséni Tarkovski, et autres exemples de textes jumeaux d’un écrivain/artiste des XXe et XXIe siècles)
Pavlikova, Polina UL

Doctoral thesis (2023)

This thesis analyses the texts of authors who developed the capacity to regularly apply different systems of expression in their creative writing/art. These writers/artists generally experiment with poetic/prosaic or verbal/visual forms, or with several languages. One of the artistic results that they can produce, consciously or unconsciously, is a “twin text,” the object of the current study. The objective of the present research is to establish the concept of twin texts as a cultural supranational phenomenon, propose a method to identify and study them, and suggest reasons why an author creates them systematically. Twin texts are two or more “texts” (verbal and/or visual) that are linked to each other on the thematic level by repeating their images or plots or by showing the same characters, often in a contradictory way. Twin texts can be the result of a non-monolingual situation and an unsettled artistic position of the author. For these reasons, we find twin texts among émigré and translingual writers, “non-professionals” (prose writers creating poems, or artists applying verbal forms of expression, and vice versa), and other writers/artists whose identity can be defined as “multiple.” By elaborating a series of images, the author of twin texts is placed in a situation described by Mikhail Bakhtin as polyphonic. By standing above the meanings his/her works contain, the author of twin texts establishes a space for dialogue. In other words, the writer/artist is not above the text, but above all the texts, the “avant-texte,” to borrow this term from genetic criticism.

Intermarriage and the Integration of Immigrants
Justiniano Medina, Adda Carla UL

Doctoral thesis (2023)

L’avènement du futur cadre juridique international dédié aux activités d’exploration et d’utilisation des ressources de l’espace extra-atmosphérique
Homov, Ella UL

Doctoral thesis (2023)

Outer space is not a lawless domain. Any activity involving the exploration and use of outer space, the Moon and celestial bodies must be conducted in accordance with international space law. What of activities focused on space resources? Can they be lawfully conducted in space? Are they compatible with international space law? To answer, among others, the questions raised above, the international legal framework currently governing space activities, as well as three examples of national laws authorising the appropriation of outer space resources, were examined. Furthermore, it was argued that, in order (i) to maintain international peace and security and to promote international cooperation and understanding, (ii) to ensure a high level of legal certainty, and (iii) to encourage the development of the space industry sector focused on the exploration and use of outer space resources, the adoption of an international legal framework dedicated to activities involving the exploration and use of outer space resources was indispensable. The research conducted for this thesis made it possible to identify and propose a number of elements and technical and organisational measures that could form the future international legal framework governing activities focused on space resources.

THE ORAL MICROBIOME IN THE SCOPE OF EARLY LIFE ADVERSITY
Charalambous, Eleftheria UL

Doctoral thesis (2023)

In a way, the microbiome has been on earth long before we humans appeared in any evolution theory. This ethereal “organ” has grabbed the attention of biological research and will continue to do so in the years to come. Growing evidence supports that its presence is vital to the host, and particularly to us humans. Although the largest part of microbiome research focuses on the gut microbiome, microbial communities actually extend to the most remote body sites. In the last few years, the oral microbiome (OM) has started to attract attention. Regardless of its location, the microbiome maintains an intimate relationship with its host. Our long-term health appears to be closely linked to, and sustained by, a “happy,” healthy and symbiotic microbiome. Disease and states of illness, whether of physiological or psychological nature or both, can result from many different causes and routes. Remarkably, about a third of the adult population worldwide seems to become ill as a consequence of certain difficult, stressful and intense experiences during their childhood. The social, biological, medical and environmental sciences describe these experiences as “Early Life Adversity (ELA)” and as the core of the “Developmental Origins of Health and Disease (DOHaD)” framework. Since this sociobiological theory was introduced and described in the 1980s, research has revealed that ELA factors can be anything from pregnancy complications, medication, birth complications, infections, social isolation, socioeconomic status, orphanhood and toxic environments to any other event capable of inducing extreme stress in an individual. The consequences of ELA experiences span a wide range of common later-life diseases. EPIPATH, an institutionalisation cohort serving as a model of ELA, which was initially established to study the long-term immune and cardiometabolic effects of institutionalisation, was used for this thesis.
At first, I aimed to investigate the effect of institutionalisation on the OM of these individuals using their oral taxonomic composition. Secondly, I aimed to identify a link between this composition and the immune profiles of the participants. Thirdly, knowing the stress signatures of the cohort from pre-existing cortisol and glucose measurements, I aimed to detect plausible interactions between the microbiome and those data. Lastly, I aimed to find mechanistic evidence for our prior observed associations by looking into the metabolome of the oral microbiome. Altogether, our data revealed a complex system of bidirectional interactions between ELA, the OM and immune markers of cytotoxicity and immune senescence, together with a particular profile of glucose and cortisol kinetics following exposure to social stress. Moreover, our data extended to the first mechanistic cues of ELA traces in the OM metabolome. The research conducted for this thesis brings to light important evidence on how ELA interacts with the OM, leading to a certain disease phenotype.

neuroHuMiX: a gut-on-a-chip model to study the gut microbiome-nervous system axis
Sedrani, Catherine Marie UL

Doctoral thesis (2023)

The human body is colonized by at least the same number of microbial cells as it is composed of human cells, and most of these microorganisms are located in the gut. In this context, the human gut microbiome plays an essential role in human health and disease. Dysbiosis, an imbalanced microbial community state, is associated with diseases and functionally contributes to their aetiology, diagnosis, or treatment. It has been linked to different diseases, from gastrointestinal tract infections to neurodegenerative diseases. As more and more evidence points towards a role for the gut microbiome in neurological disorders, interactions between the gut microbiome and the nervous system can be inferred. Though studies have shown that the gut communicates with the brain via the vagus nerve, it remains largely unknown how the gut microbiome interacts with the enteric nervous system. To close this knowledge gap, a physiologically representative in vitro model is required to study gut microbiome-nervous system interactions. In this project, I adapted the Human-Microbial Crosstalk (HuMiX) gut-on-a-chip by including induced pluripotent stem cell-derived enteric neurons in the model, thus developing ‘neuroHuMiX’. The resulting model allows the co-culture of bacterial, epithelial, and neuronal cells across microfluidic channels separated by semi-permeable membranes. Despite the separation of the different cell types, the cells are able to communicate with each other via soluble factors, while providing an opportunity to study each cell type separately. neuroHuMiX not only allows first insights into gut microbiome-nervous system interactions but may also serve as a model for therapeutic testing. This is a critical first step in studying the human gut-brain axis, which can be further expanded towards personalized models for research on neurological disorders.

Distributed Ledger Technologies between Anonymity and Transparency: AML/CFT Regulation of Cryptocurrency Ecosystems in the EU
Pocher, Nadia UL

Doctoral thesis (2023)

The advent of Bitcoin suggested a disintermediated economy in which Internet users can take part directly. The conceptual disruption brought about by this Internet of Money (IoM) mirrors the cross-industry impacts of blockchain and distributed ledger technologies (DLTs). While related instances of non-centralization thwart the regulatory efforts to establish accountability, in the financial domain further challenges arise from the presence in the IoM of two seemingly opposing traits: anonymity and transparency. Indeed, DLTs are often described as architecturally transparent, but the perceived level of anonymity of cryptocurrency transfers fuels fears of illicit exploitation. This is a primary concern for the framework to prevent the misuse of the financial system for money laundering and the financing of terrorism and proliferation (AML/CFT/CPF), and a top priority both globally and at the European Union level. Nevertheless, the anonymous and transparent features of the IoM are far from clear-cut, and the same is true for its levels of disintermediation and non-centralization. Almost fifteen years after the first Bitcoin transaction, the IoM today comprises a diverse set of socio-technical ecosystems. Building on a preliminary analysis of their phenomenology, this dissertation shows how there is more to their traits of anonymity and transparency than it may seem, and how these features range across a broad spectrum of combinations and degrees. In this context, the trade-offs implemented can be evaluated against techno-legal benchmarks, to be established through socio-technical assessments grounded in teleological interpretation. Valuable insights are drawn to this end from the various models of central bank digital currency. Against this backdrop, this work provides framework-level recommendations for the EU to respond to the two-fold nature of the IoM legitimately and effectively.
The methodology embraces the mutual interaction between regulation and technology, drafting regulation with which compliance can be eased by design. Consistently, it presents the idea of creating a transposition model between red flag indicators and techno-regulatory standards, informed by a preliminary risk-based taxonomy of the trade-offs displayed by IoM ecosystems. It suggests that its implementation should be informed by an institutionalized and multi-stakeholder model of co-regulation, known in the literature as polycentric. This approach mitigates the risk of overfitting in a fast-changing environment, while acknowledging specificities in compliance with the risk-based approach that sits at the core of the AML/CFT/CPF regime.

Learning Optimisation Algorithms over Graphs
Duflo, Gabriel Valentin UL

Doctoral thesis (2023)

The paradigm of learning to optimise relies on the following principle: instead of designing an algorithm to solve a problem, we design an algorithm which will automate the design of such a solver. The initial idea was to alleviate the limitations stated by the No Free Lunch Theorem by producing an algorithm whose efficiency is less dependent upon known instances of the problem to tackle. Hyper-heuristics constitute the main learning-to-optimise techniques. These rely on a high-level algorithm performing a search process over a space of low-level heuristics to tackle a given problem. Because this search space is problem-dependent, the vast majority of hyper-heuristics are designed to tackle a specific problem. Due to this lack of generality, existing works fully redesign hyper-heuristics when tackling a new problem, even though the problems may share a similar structure. In this dissertation, we tackle this challenge by proposing a generic way of learning to optimise any problem. To this end, this thesis introduces three main contributions: (i) an analysis of the formal functioning of learning-to-optimise techniques; (ii) a model of generic hyper-heuristic, named Algorithm Learner for Graph Optimisation problems (ALGO), constituting the central point of this work; (iii) a real-world use case where we use our generic hyper-heuristic to automate the design of behaviours within a swarm of drones. In the first part, we provide a formalism for optimisation and learning concepts, which we use to describe the large body of knowledge that combines two layers of optimisation and/or learning. We then put an emphasis on approaches using learning to improve an optimisation process, i.e., aiming at learning to optimise. In the second part, we present ALGO, our model of generic hyper-heuristic. We explain how we abstract a given problem into a graph structure so that ALGO can be used to tackle any optimisation problem.
We also detail the steps to follow in order to use ALGO to tackle a given problem. We finally present the modularity of ALGO through inner components that a user can implement. The second part ends with a validation of our model, i.e., using ALGO to tackle a classical optimisation problem. In the third part, we use ALGO to tackle the problem of area surveillance with a swarm of drones. We demonstrate that ALGO constitutes a novel and efficient way to automate the design of behaviours for such a distributed and multi-objective problem.
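The hyper-heuristic principle on which ALGO builds — a high-level search over a space of low-level heuristics for a graph problem — can be sketched generically. The sketch below is not ALGO itself: the heuristics, the vertex-cover example, and the random-search selection rule are all assumptions chosen purely for illustration.

```python
import random

def uncovered(edges, cover):
    """Edges not yet covered by the current vertex cover."""
    return [(u, v) for u, v in edges if u not in cover and v not in cover]

def h_max_degree(edges, cover):
    """Low-level heuristic 1: pick the vertex touching the most uncovered edges."""
    deg = {}
    for u, v in uncovered(edges, cover):
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return max(deg, key=deg.get)

def h_first_endpoint(edges, cover):
    """Low-level heuristic 2: pick the first endpoint of any uncovered edge."""
    return uncovered(edges, cover)[0][0]

HEURISTICS = [h_max_degree, h_first_endpoint]

def build_cover(edges, choices):
    """Apply a cyclic sequence of heuristic choices until every edge is covered."""
    cover, step = set(), 0
    while uncovered(edges, cover):
        heuristic = HEURISTICS[choices[step % len(choices)]]
        cover.add(heuristic(edges, cover))
        step += 1
    return cover

def hyper_heuristic(edges, iterations=200, seq_len=6, seed=0):
    """High-level layer: random search over sequences of low-level heuristics,
    keeping the sequence that yields the smallest cover."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        choices = [rng.randrange(len(HEURISTICS)) for _ in range(seq_len)]
        cover = build_cover(edges, choices)
        if best is None or len(cover) < len(best):
            best = cover
    return best
```

The key design point this illustrates is the separation of concerns: the high-level search never inspects the problem directly, only the quality of the solutions the low-level heuristics produce, which is what allows the same high-level layer to be reused across problems expressed over graphs.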

Essays On Skills, Labor Market Institutions and Wage Inequality
Grasso, Giuseppe UL

Doctoral thesis (2023)

NLP De Luxe - Challenges for Natural Language Processing in Luxembourg
Lothritz, Cedric UL

Doctoral thesis (2023)

The Grand Duchy of Luxembourg is a small country in Western Europe which, despite its size, is an important global financial centre. Due to its highly multilingual population, and the fact that one of its national languages - Luxembourgish - is regarded as a low-resource language, this country lends itself naturally to a wide variety of interesting research opportunities in the domain of Natural Language Processing (NLP). This thesis discusses and addresses challenges with regard to domain-specific and language-specific NLP, using the unique linguistic situation in Luxembourg as an elaborate case study. We focus on three main topics: (I) NLP challenges present in the financial domain, specifically handling personal names in sensitive documents, (II) NLP challenges related to multilingualism, and (III) NLP challenges for low-resource languages, with Luxembourgish as the language of interest. With regard to NLP challenges in the financial domain, we address the challenge of finding and anonymising names in documents. Firstly, an empirical study on the usefulness of Transformer-based deep learning models is presented for the task of Fine-Grained Named Entity Recognition. This empirical study was conducted for a wide array of domains, including the financial domain. We show that Transformer-based models, and in particular BERT models, yield the best performance for this task. We furthermore show that the performance is strongly dependent on the domain itself, regardless of the choice of model. The automatic detection of names in text documents in turn facilitates the anonymisation of these documents. However, anonymisation can distort data and have a negative effect on models that are built on that data. We investigate the impact of anonymisation of personal names on the performance of deep learning models trained on a large number of NLP tasks. Based on our experiments, we establish which anonymisation strategy should be used to guarantee accurate NLP models.
Regarding NLP challenges related to multilingualism, we address the need for polyglot conversational AI in a multilingual environment such as Luxembourg. The trade-off between a single multilingual chatbot and multiple monolingual chatbots trained on Intent Classification and Slot Filling for the banking domain is evaluated in an empirical study. Furthermore, we publish a quadrilingual, parallel dataset that we built specifically for this study, and which can be used to train a client support assistant for the banking domain. With regard to NLP challenges for the Luxembourgish language, we predominantly address the lack of a suitable language model and datasets for NLP tasks in Luxembourgish. First, we present the most impactful contribution of this PhD thesis, namely the first BERT model for the Luxembourgish language, which we name LuxemBERT. We explore a novel data augmentation technique based on partially and systematically translating texts to Luxembourgish from a closely related language in order to artificially increase the training data for building our LuxemBERT model. Furthermore, we create datasets for a variety of downstream NLP tasks in Luxembourgish to evaluate the performance of LuxemBERT. We use these datasets to show that LuxemBERT outperforms mBERT, the de facto state-of-the-art model for Luxembourgish. Finally, we compare different approaches to pre-training BERT models for Luxembourgish. Specifically, we investigate whether it is preferable to pre-train a BERT model from scratch or to continue pre-training an already existing pre-trained model on new data. To this end, we further pre-train the multilingual mBERT model and the German GottBERT on the Luxembourgish dataset that we used to pre-train LuxemBERT, and compare all models in terms of performance and robustness. We make all our language models as well as the datasets available to the NLP community.
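The abstract notes that detecting names enables anonymisation, and that the choice of anonymisation strategy matters for downstream model accuracy. The thesis's actual strategies are not spelled out here, so the sketch below is a purely hypothetical illustration of two common options — redaction with a fixed tag versus consistent pseudonymisation — given name spans assumed to come from an upstream NER model.

```python
# Hypothetical illustration: anonymising detected person names. The
# (start, end) character spans are assumed to be output by a NER model.

def redact(text, spans, tag="[NAME]"):
    """Strategy 1: replace every detected name with the same fixed tag."""
    parts, last = [], 0
    for start, end in sorted(spans):
        parts.append(text[last:start])
        parts.append(tag)
        last = end
    parts.append(text[last:])
    return "".join(parts)

def pseudonymise(text, spans):
    """Strategy 2: replace each distinct name with a stable placeholder
    (PERSON_1, PERSON_2, ...), so repeated mentions stay linkable."""
    mapping, parts, last = {}, [], 0
    for start, end in sorted(spans):
        name = text[start:end]
        if name not in mapping:
            mapping[name] = f"PERSON_{len(mapping) + 1}"
        parts.append(text[last:start])
        parts.append(mapping[name])
        last = end
    parts.append(text[last:])
    return "".join(parts)
```

The difference matters for the distortion question raised above: redaction discards co-reference information (every name becomes identical), whereas pseudonymisation preserves it, which can affect models trained on the anonymised text differently.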

Security and Privacy of Resource Constrained Devices
Chiara, Pier Giorgio UL

Doctoral thesis (2023)

The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy & data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and architecture taxonomy of ‘Internet of Things’ and ‘resource-constrained devices’, coupled with a threat landscape in which each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety and privacy, justified by the ‘IoT revolution’, through the so-called infraethical perspective. Chapter 3 investigates whether and to what extent the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on ‘privacy’, understood by proxy to include EU data protection. In particular, Chapter 4 addresses three legal challenges brought about by ubiquitous IoT data and metadata processing to EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 casts light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the CNIL (French data protection authority) model.

Liquidity post resolution in the (European) Banking Union: status quo, comparative analysis and proposals
Lupinu, Pier Mario UL

Doctoral thesis (2023)

More than a decade ago, the Global Financial Crisis and the subsequent euro-area sovereign debt crisis generated a strong regulatory impetus. This significantly strengthened the European Union (EU) banking and financial system, which shifted from the epicentre of the crisis to a means of countering it; however, significant gaps remain. Following international standards, the EU legislator has introduced a new mechanism for the orderly management of banking crises with the objective of preserving critical activities and safeguarding financial stability while avoiding additional costs to taxpayers. Unlike other jurisdictions, such as the United States, where resolution powers have been consolidated in the hands of the Federal Deposit Insurance Corporation, the resolution of significant banks based in the European Banking Union (EBU) is managed by a number of authorities, resulting in the fragmentation of mandates, powers and tools. This fragmentation has serious consequences for the financing of resolution procedures. While the financing of liquidity during the execution of the resolution procedure is still quite controversial, the rules governing such financing during the stabilisation phase (post resolution) are completely absent. Depending on the resolution strategy adopted by the Single Resolution Board, the resolved bank might experience important liquidity needs during its stabilisation phase that could jeopardise the success of the whole resolution procedure. In the context of the resolution regime for significant banks, the euro area resolution authority has thus far dealt with cases in which a buyer was immediately available or the resolution of the distressed bank was considered not in the public interest and, therefore, liquidation was the best approach. The apparent ease of the recent resolution experience belies the fact that the range of possible scenarios is broader, and their implications more adverse, than might be imagined.
Even though past cases show that crises are usually generated by liquidity problems, the EU legislator has not had to deal with one of the most critical aspects of the resolution process, namely the provision of liquidity. While liquidity gaps can arise during resolution, this study focuses on gaps in the post-resolution scenario, which is considered the most critical for the success of the entire procedure. This sudden lack of liquidity may occur in more complex scenarios in which the bank finds itself unable to meet its short-term obligations at the end of the application of the resolution measures. This is due to information asymmetries concerning the outcome of resolution, the limited capabilities of euro-area authorities to tackle emergency liquidity needs under the current unclear regulatory framework, and the fragility of the resolved bank. The bank may be unable to resort to ordinary and emergency sources of liquidity, such as those provided by the central bank, due to its former distressed nature and the state in which it is presented to markets following its resolution, which could result in information asymmetries that generate either sudden liquidity outflows from customers or market freezes. To date, the mandates of the authorities involved offer no solution to this issue, nor are resources available to stem a lack of liquidity during the stabilisation period. In this context, all the potentially available liquidity sources will be examined, starting with the identification of the stabilisation phase, which is also influenced by the period preceding resolution. Subsequently, it will be possible to analyse the various resolution strategies to understand how liquidity needs are generated and the legal and operational limits that prevent liquidity from being provided post-resolution.
As a result of this analysis, the guidelines of standard setters and a comparison with more advanced systems, it will be possible to formulate proposals aimed at solving the research problem of post-resolution liquidity in the EBU.

From Standards of International Law for Outer Space Resources Exploitation to Sustainable Mining
Leterre, Gabrielle Céline Giliane UL

Doctoral thesis (2023)

As space activities continue to develop and increase in number, so do environmental concerns in outer space. For decades now, humanity has continuously sent satellites into Earth orbits without caring for potential environmental consequences in outer space. Ultimately, these actions have proven to raise issues regarding the sustainability of the activity; issues which are now being addressed legally. Satellites were humanity's first venture into space, and it is fair to admit we did not know better at the time. We do now. With the development of new types of space missions, such as space resources-related activities, it is safe to assume that new serious environmental problems will arise as well. Based on previous experience both on Earth and in outer space, it is logical, but also imperative, to question the environmental impact of these space resources activities and to consider legal solutions to promote and facilitate their sustainability. Accordingly, this research assesses the applicability of existing rules and mechanisms promoting environmental protection and sustainability in outer space to the case of the exploitation of space resources. To that end, an array of mechanisms is considered, such as the framework of the UN Space Treaties, international environmental law, non-legally binding instruments such as the space debris mitigation guidelines and COSPAR's planetary protection policy, as well as national space legislation. Ultimately, this work aims to draft a roadmap for the environmentally sustainable exploitation of space resources from a legal standpoint. It recommends the adoption of a mix of interdisciplinary approaches that balances effective national measures with international guiding rules.

TARGETED, REALISTIC AND NATURAL FAULT INJECTION (USING BUG REPORTS AND GENERATIVE LANGUAGE MODELS)
Khanfir, Ahmed UL

Doctoral thesis (2023)

Artificial faults have been proven useful for ensuring software quality, enabling the simulation of software behaviour in erroneous situations and thereby the evaluation of its robustness and its impact on surrounding components in the presence of faults. Similarly, by introducing these faults in the testing phase, they can serve as a proxy to measure the fault-revelation ability and thoroughness of current test suites, and provide developers with testing objectives, as writing tests to detect them helps reveal and prevent similar real ones. This approach – mutation testing – has gained increasing attention among researchers and practitioners since its appearance in the 1970s. It typically operates by introducing small syntactic transformations (using mutation operators) to the target program, aiming at producing multiple faulty versions of it (mutants). These operators are generally created based on the grammar rules of the target programming language and then tuned through empirical studies in order to reduce the redundancy and noise among the induced mutants. Having limited knowledge of the program context or the relevant locations to mutate, these patterns are applied in a brute-force manner on the full code base of the program, producing numerous mutants and overwhelming developers with a costly overhead of test executions and mutant-analysis effort. For this reason, although proven useful in multiple software engineering applications, the adoption of mutation testing remains limited in practice. Another key challenge of mutation testing is the misrepresentation of real bugs by the induced artificial faults. Indeed, this can make the results of any relying application questionable or inaccurate. To tackle this challenge, researchers have proposed new fault-seeding techniques that aim at mimicking real faults. To achieve this, they suggest leveraging the knowledge base of previous faults to inject new ones.
Although these techniques produce promising results, they do not solve the high-cost issue, or even exacerbate it by generating more mutants with their extended pattern sets. Along the same lines of research, we start addressing the aforementioned challenges – regarding the cost of the injection campaign and the representativeness of the artificial faults – by proposing IBIR, a targeted fault injection approach which aims at mimicking real faulty behaviours. To do so, IBIR uses information retrieved from bug reports (to select relevant code locations to mutate) and fault patterns created by inverting fix patterns, which have been introduced and tuned based on real bug fixes mined from different repositories. We implemented this approach and showed that it outperforms the fault injection performed by traditional mutation testing in terms of semantic similarity with the originally targeted fault (described in the bug report), when applied at either project or class level of granularity, and provides better, statistically significant estimations of test effectiveness (fault detection). Additionally, when injecting only 10 faults, IBIR couples with more real bugs than mutation testing does when injecting 1000 faults. Although effective in emulating real faults, IBIR's approach depends strongly on the quality and existence of bug reports, which, when absent, can reduce its performance to that of traditional mutation testing approaches. In the absence of such priors, and with the same objective of injecting few relevant faults, we suggest accounting for the project's context and the actual developer's code distribution to generate more “natural” mutants, in the sense that they are understandable and more likely to occur. To this end, we propose the usage of code from real programs as a knowledge base to inject faults, instead of the language grammar or previous bug knowledge such as bug reports and bug fixes.
Particularly, we leverage the code knowledge and capability of pre-trained generative language models (i.e. CodeBERT) in capturing the code context and predicting developer-like code alternatives, to produce few faults in diverse locations of the input program. This way, developing and maintaining the approach requires no major effort, such as creating or inferring fault patterns or training a model to learn how to inject faults. In fact, to inject relevant faults in a given program, our approach masks tokens (one at a time) from its code base and uses the model to predict them, then considers the inaccurate predictions as probable developer-like mistakes, forming the output mutant set. Our results show that these mutants induce test suites with higher fault-detection capability, in terms of effectiveness and cost-efficiency, than conventional mutation testing. Next, we turn our interest to the code comprehension of pre-trained language models, particularly their capability in capturing the naturalness aspect of code. This measure has been proven very useful to distinguish unusual code, which can be a symptom of code smell, low readability, bugginess, bug-proneness, etc., thereby indicating relevant locations requiring prior attention from developers. Code naturalness is typically predicted using statistical language models like n-grams, to approximate how surprising a piece of code is, based on the fact that code, in small snippets, is repetitive. Although powerful, training such models on a large code corpus can be tedious, time-consuming and sensitive to the code patterns (and practices) encountered during training. Consequently, these models are often trained on a small corpus and thus only estimate the language naturalness relative to a specific style of programming or type of project. To overcome these issues, we propose the use of pre-trained generative language models to infer code naturalness.
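The mask-and-predict loop just described can be sketched as follows. This is an illustrative sketch, not the thesis implementation: `predict` is a hypothetical stand-in for a generative model such as CodeBERT (here a simple frequency heuristic so the example runs standalone), and the token list is invented.

```python
# Sketch: mask tokens one at a time and treat inaccurate predictions as
# candidate developer-like mutants. `predict` stands in for a masked-token
# model such as CodeBERT; here it is a toy frequency-based heuristic.

from collections import Counter

def predict(context, position):
    """Toy stand-in for a masked-token model: guess the most common token."""
    tokens = [t for i, t in enumerate(context) if i != position]
    return Counter(tokens).most_common(1)[0][0]

def generate_mutants(tokens):
    """Mask each token in turn; a wrong prediction yields a mutant program."""
    mutants = []
    for i, original in enumerate(tokens):
        guess = predict(tokens, i)
        if guess != original:  # inaccurate prediction -> developer-like mistake
            mutant = tokens[:i] + [guess] + tokens[i + 1:]
            mutants.append((i, original, guess, mutant))
    return mutants

code = ["x", "=", "a", "+", "b"]
for pos, orig, guess, mutant in generate_mutants(code):
    print(f"pos {pos}: {orig!r} -> {guess!r}: {' '.join(mutant)}")
```

In the actual approach, the prediction step queries a pre-trained model over a real tokenized code base; the surrounding loop is the same.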
Thus, we suggest inferring naturalness by masking (omitting) code tokens, one at a time, from code sequences, and checking the models' ability to predict them. We implement this workflow, named CodeBERT-NT, and evaluate its capability to prioritize buggy lines over non-buggy ones when ranking code based on its naturalness. Our results show that our approach outperforms both random-uniform- and complexity-based ranking techniques, and yields results comparable to those of the n-gram models, although the latter are trained in an intra-project fashion. Finally, we provide the implementation of tools and libraries enabling code-naturalness measurement and fault injection by the different approaches, and provide the required resources to compare their effectiveness in emulating real faults and guiding testing towards higher fault detection. This includes the source code of our proposed approaches and replication packages of our conducted studies.
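For contrast, the n-gram baseline mentioned above can be illustrated with a toy bigram model: naturalness is approximated by the average surprisal (cross-entropy) of a token sequence under the model, with lower values meaning more natural code. The corpus and snippets below are invented, and real models use larger n and far larger corpora.

```python
# Toy bigram "naturalness" score: add-alpha smoothed cross-entropy of a
# token sequence under bigram statistics gathered from a (tiny) corpus.

import math
from collections import Counter

def bigram_cross_entropy(tokens, corpus, alpha=1.0):
    """Average negative log2-probability per bigram of `tokens` under `corpus`."""
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab = len(set(corpus)) or 1
    h = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        # Add-alpha smoothing so unseen bigrams get non-zero probability.
        p = (pairs[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab)
        h -= math.log2(p)
    return h / max(1, len(tokens) - 1)

corpus = ["if", "(", "x", ")", "{", "}", "if", "(", "y", ")", "{", "}"]
usual = bigram_cross_entropy(["if", "(", "x", ")"], corpus)
odd = bigram_cross_entropy(["(", "if", ")", "x"], corpus)
# The familiar ordering scores lower (more natural) than the scrambled one.
```

CodeBERT-NT replaces these corpus statistics with the predictions of a pre-trained model, avoiding the need to train per-project n-gram models.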

CROWDSOURCED DATA FOR MOBILITY ANALYSIS
Vitello, Piergiorgio UL

Doctoral thesis (2023)

The importance of data in transportation research has been widely recognized, since it plays a crucial role in understanding and analyzing the movement of people, identifying inefficiencies in transportation systems, and developing strategies to improve mobility services. This use of data, known as mobility analysis, involves collecting and analyzing data on transport infrastructure and services, traffic flows, demand, and travel behavior. However, traditional data sources have limitations. The widespread use of mobile devices, such as smartphones, has enabled the use of Information and Communications Technology (ICT) to improve data sources for mobility analysis. Mobile crowdsensing (MCS) is a paradigm that uses data from smart devices to provide researchers with more detailed and real-time insights into mobility patterns and behaviors. However, this new data also poses challenges, such as the need to fuse it with other types of information to obtain mobility insights. In this thesis, the primary source of data examined and leveraged is the popularity index of local businesses and points of interest from Google Popular Times (GPT) data. This data has significant potential for mobility analysis, as it overcomes limitations of traditional mobility data, such as limited availability and the failure to reflect demand for secondary activities. The main objective of this thesis is to investigate how crowdsourced data can help reduce the limitations of traditional mobility datasets. This is achieved by developing new tools and methodologies to utilize crowdsourced data in mobility analysis. The thesis first examines the potential of GPT as a source of information on the attractiveness of secondary activities. A data-driven approach is used to identify features that impact the popularity of local businesses and classify their attractiveness based on these features.
Secondly, the thesis evaluates the possible use of GPT as a source to estimate mobility patterns. A tool is created that uses the crowdedness of a station to estimate transit demand information and map the precise volume and temporal dynamics of entrances and exits at the station level. Thirdly, the thesis investigates the possibility of leveraging the popularity of activities around stations to estimate flows in and out of stations. A method is proposed to profile stations based on the dynamic information of activities in their catchment areas. Using this data, machine learning techniques are applied to estimate transit flows at the station level. Finally, this study concludes by exploring the possibility of exploiting crowdsourced data not only for extracting mobility insights under normal conditions but also for extracting mobility trends during anomalous events. To this end, we focused on analyzing the recovery of mobility during the first outbreak of COVID-19 for different cities in Europe.
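The idea of mapping catchment-area popularity to station flows can be sketched with a deliberately simple one-feature model. This is a hypothetical stand-in for the machine-learning models actually used in the thesis, and all data values below are invented for illustration.

```python
# Sketch: predict station entrances from the aggregate GPT-style popularity
# of activities in the station's catchment area, via a one-feature
# least-squares fit (a stand-in for the real ML models).

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hourly catchment-area popularity (0-100) vs. observed station entrances
# (invented calibration data).
popularity = [10, 35, 60, 80, 95]
entrances = [120, 410, 690, 930, 1090]

a, b = fit_line(popularity, entrances)
estimate = a * 50 + b   # predicted entrances when popularity is 50
```

In practice the thesis profiles stations with richer dynamic features and non-linear learners, but the calibration-then-prediction structure is the same.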

The Application of the European Account Preservation Order in Germany, Luxembourg and Spain: A Comparative-Empirical Analysis
Santaló, Carlos UL

Doctoral thesis (2023)

Regulation (EU) 655/2014, establishing a European Account Preservation Order (‘EAPO Regulation’), introduced the first cross-border interim measure at the European Union level. As its name indicates, it permits the direct cross-border attachment of funds in bank accounts. Furthermore, it contains a specific mechanism to search for the debtor's bank accounts that does not exist in all EU Member States. Although the EAPO is a self-standing procedure, it combines uniform standards with other aspects of the procedure that depend on national law. This dissertation studies and compares the application of the EAPO Regulation in three jurisdictions: Germany, Luxembourg, and Spain. It aims to understand the incorporation of the EAPO procedure within a national civil procedural system and the impact national law has on the EAPO procedure as a whole. The comparative approach determines the different ways Member States apply the EAPO procedure. This comparative analysis follows both a theoretical and an empirical approach. The empirical approach relies on qualitative and quantitative data on the functioning of the EAPO obtained from stakeholders, case law databases, and institutional statistical data. The empirical side of the research seeks to identify specific issues courts and practitioners encounter in real-life EAPO cases, and whether such issues are the same or different depending on the jurisdiction where the EAPO is applied. Relying on the outcome of the comparative-empirical analysis, specific policy recommendations are formulated, intended to improve the application of the EAPO at the EU and national levels.

CONNECTION TECHNOLOGIES FOR FAST ERECTION OF STEEL STRUCTURES FOR BUILDINGS (FEOSBUILD)
Yolacan, Taygun Firat UL

Doctoral thesis (2023)

Steel-concrete hybrid building systems offer sustainable and effective structural solutions for multi-story and high-rise buildings, considering that steel is a completely recyclable material and that the most advantageous mechanical properties of steel and concrete can be used simultaneously against the effects of tension and compression stress resultants. On the other hand, only a small percentage of multi-story buildings and a small number of high-rise structures are actually constructed using steel-concrete hybrid building technologies. This is mostly a result of general contractors' orientation toward completing construction projects using traditional reinforced-concrete construction techniques. Therefore, they generally do not employ a sufficient and competent workforce to execute labor-intensive and complex on-site manufacturing activities, such as the welding of fin plates and pre-tensioning applications for high-strength bolts, required to assemble the steel beams and reinforced-concrete columns and walls of steel-concrete hybrid building systems. In order to reduce labor-intensive on-site tasks, general construction contractors typically resort to conventional construction approaches using only reinforced-concrete building systems. As a result, the structural and environmental benefits of steel-concrete hybrid building systems have not been widely adopted by the construction industry. This research project proposes three novel structural joint configurations for beam-to-column joints of steel-concrete hybrid building systems: a saw-tooth-interface mechanical-interlock bolted connection, a bolt-less plug-in connection, and a grouted joint detail. The proposed joint configurations eliminate on-site welding and enable the accommodation of construction and manufacturing tolerances in three spatial directions to achieve fast erection strategies for the construction of steel-concrete hybrid building systems.
Therefore, the outcomes of the research project make it possible for general construction contractors to use their existing workforce to complete construction tasks for steel-concrete hybrid building systems without the requirement of specialized tools or training. In this study, a total of six separate experimental test campaigns were established to determine the load-deformation behaviors of the proposed joint configurations and to identify their load-bearing components. In order to show that the suggested joint configurations are appropriate for mass production without the utilization of special equipment or machinery, the experimental test prototypes of the proposed joint configurations were produced in partnership with commercial producers. The experimental test campaigns were simulated with numerical models by means of advanced computer-aided finite element analyses to identify the ultimate deformation limits of the proposed joint components and to clarify their progressive failure mechanisms under quasi-static loading conditions. A set of analytical resistance models was developed to estimate the load-bearing capacities of the proposed joint configurations, based on the failure modes identified through observations made during the experimental tests and in accordance with the output results of the numerical simulations. Based on the analytical expressions, the most significant variables, in other words the basic variables impacting the load-bearing capacities of the proposed joint configurations, were identified. Additionally, the load-deformation behaviors of the proposed joint configurations were further investigated with numerical parametric studies by parametrizing the basic variables to understand their impact on the load-deformation behaviors of the proposed joint configurations.
To verify the accuracy of the analytical resistance models of the proposed joint configurations, the estimations of the analytical expressions were compared with the output results of the numerical parametric studies. Based on the distribution of the estimations of the analytical expressions against the output results of the numerical parametric studies, characteristic and design partial safety factors were established according to EN 1990, Annex D, for the analytical resistance models of the saw-tooth-interface mechanical-interlock bolted connection and the bolt-less plug-in connection. The estimations of the analytical resistance model of the grouted joint detail for beam-to-column joints of steel-concrete hybrid building systems were also compared with the output results of a numerical parametric study, but no partial safety factor was established for this joint detail.

Observing unseen flowlines and their contribution to near stream endmembers in forested headwater catchments.
van Zweel, Karl Nicolaus UL

Doctoral thesis (2023)

The general scope of the PhD research project falls within the framework of developing integrated catchment hydro-biogeochemical theories in the context of the Critical Zone (CZ). Significant advances in the understanding of water transit time theory, subsurface structure controls, and the quantification of catchment-scale weathering rates have resulted in the convergence of classical biogeochemical and hydrological theories. This could pave the way for a more mechanistic understanding of the CZ, although many challenges still exist. Perhaps the most difficult of all is a unifying hydro-biogeochemical theory that can compare catchments across gradients of climate, geology, and vegetation. Understanding the processes driving the evolution of chemical tracers as they move through space and time is of cardinal importance for validating mixing hypotheses and assisting in determining the residence time of water in the CZ. The specific aim of the study is to investigate what physical and biogeochemical processes drive variations in observable endmembers in stream discharge as a function of the hydrological state at headwater catchment scale. This requires looking beyond what can be observed in the stream, towards what are called "unseen flowlines" in this thesis. The Weierbach Experimental Catchment (WEC) in Luxembourg provides a unique opportunity to study these processes, with an extensive biweekly groundwater chemistry dataset spanning over ten years. Additionally, the WEC has been the subject of numerous published works in the domain of CZ science, adding to an already detailed hydrological and geochemical understanding of the system. Multivariate analysis techniques were used to identify the unseen flowlines in the catchment. Together with the existing hydrological perceptual model and a geochemical modelling approach, these flowlines were rigorously investigated to understand what processes drive their respective manifestations in the system.
The existing perceptual model for the WEC was updated with the new findings and tested on 27 flood events to assess whether it could adequately explain the c-Q behaviour observed during these periods. The novelty of the study lies in the fact that it uses both data-driven modelling approaches and geochemical process-based modelling to look beyond what can be observed in the near-stream environment of headwaters.

DATA-DRIVEN DEFEATURING OF COMPLEX GEOMETRICAL STRUCTURES
Perney, Antoine UL

Doctoral thesis (2023)

This work is part of the reconstruction of data from images. Its purpose is to develop methods to generate a CAD-type surface (B-Spline or NURBS). Indeed, obtaining a mathematical representation of the surface of a solid body from point clouds, images or tetrahedral meshes is a fundamental task of 21st-century digital engineering, where simulators interact with real systems. In this thesis, we develop new algorithms for the reconstruction of CAD geometry. First, in order to determine a NURBS surface, a control network, i.e. a quadrangular mesh, is required. Using the eigenfunctions of a graph Laplacian problem and discrete Morse theory, a control network is determined. The surface obtained using this mesh is not a priori optimal, which is why an optimization algorithm is introduced. It adjusts the NURBS surface to the triangulation and thus best approximates the geometry of the object. Then, a model selection is carried out. To do so, a regression model is set up to compare the surfaces obtained with 3D images, and a surface is chosen using an information criterion. With these steps established, we no longer consider only the noise of the data, but also that of the solution. Thus, using a sampling method, a probabilistic distribution of surfaces is determined. Finally, as an outlook, constraints are applied to the graph Laplacian problem in order to align the NURBS patches along a given curve, for example in the case of an object with a marked edge. The methods developed are robust and do not depend on the topology of the desired 3D object; that is to say, the algorithm works on a wide range of shapes. We apply the developed methodologies in the biomedical field, with examples of vertebrae and femurs. This would make it possible to have the scanned object, a bone, and the implant in the same "format", so that the implant can be fitted more easily.
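The graph Laplacian step can be illustrated on a toy graph. This is a minimal sketch under simplifying assumptions, not the thesis pipeline, which operates on triangulated surface meshes and uses many eigenfunctions together with discrete Morse theory; here we assemble L = D - A for a hypothetical 4-vertex path graph and approximate its first non-trivial eigenpair with shifted power iteration.

```python
# Sketch: graph Laplacian L = D - A of a small graph and its first
# non-trivial ("Fiedler") eigenvalue via deflated power iteration.

def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1   # degree terms
        L[i][j] -= 1; L[j][i] -= 1   # adjacency terms
    return L

def fiedler_eigenvalue(L, shift=4.0, iters=2000):
    """Power iteration on shift*I - L, deflating the constant eigenvector."""
    n = len(L)
    v = [float(i + 1) for i in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]                      # remove constant mode
        w = [shift * v[i] - sum(L[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue of L for the converged mode.
    Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(a * b for a, b in zip(v, Lv))

L = laplacian(4, [(0, 1), (1, 2), (2, 3)])   # path graph P4
lam2 = fiedler_eigenvalue(L)                 # analytically 2 - sqrt(2)
```

On real meshes one would use a sparse eigensolver; the low-frequency eigenfunctions obtained this way vary smoothly over the surface, which is what makes them usable for extracting a quadrangular control network.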

Senescence as a Converging Mechanism in Parkinson's Disease
Muwanigwa, Mudiwa Nathasia UL

Doctoral thesis (2023)

Neurodegenerative diseases are one of the leading causes of disability and mortality, affecting millions of people worldwide. Parkinson's disease (PD) is the second most common neurodegenerative disease globally, and while it was first described over 200 years ago, curative treatments remain elusive. One of the main challenges in developing effective therapeutic strategies for PD is that the complex molecular pathophysiology of the disease has not been well recapitulated in classically used animal model systems, and studies using post-mortem tissue from patients only represent the end point of disease. Human-derived brain organoid models have revolutionized the field of neurological disease modeling, as they are able to recapitulate key cellular and physiological features reminiscent of the human brain. This thesis describes the use of human midbrain organoids (hMO) to model and gain a deeper understanding of genetic forms of PD. In the first manuscript, patient-specific hMO harboring a triplication in the SNCA gene (3xSNCA hMO) were able to recapitulate the key neuropathological hallmarks of PD. We observed the progressive loss and dysfunction of midbrain dopaminergic neurons in 3xSNCA hMO, and the accumulation of pathological α-synuclein, including elevated levels of pS129 α-synuclein and the presence of α-synuclein aggregates. We also identified a phenotype indicative of senescence in the 3xSNCA hMO, which represents a mechanism that has recently gained more attention as a driving factor in PD pathogenesis and progression. The second manuscript of this thesis investigated the pathogenic role of LRRK2-G2019S in astrocytes using a combination of post-mortem brain tissue, induced pluripotent stem cell derived astrocytes and hMO. The iPSC-derived astrocytes and organoids recapitulated the phenotypes seen in the post-mortem tissue, emphasizing the validity of these models in reflecting the in vivo situation.
Interestingly, single-cell RNA sequencing of the hMO revealed that astrocytes from the LRRK2-G2019S organoids showed a senescent-like phenotype. Thus, this thesis highlights the relevance of senescence as a converging mechanism in PD. Finally, this thesis explores the future development of organoid models as they are combined with technologies such as microfluidic devices as in Manuscript III to improve their complexity and reproducibility. Ultimately, this will lead to the development of more representative models that can better recapitulate and model PD as well as other neurodegenerative disorders.

Spaceborne GNSS reflectometry for land remote sensing studies
Setti Junior, Paulo de Tarso UL

Doctoral thesis (2023)

Understanding, quantifying and monitoring soil moisture is important for many applications, e.g., agriculture, weather forecasting, the occurrence of heatwaves, droughts and floods, and human health. At a large scale, satellite microwave remote sensing has been used to retrieve soil moisture information. Surface water has also been detected and monitored through remote sensing orbital platforms equipped with passive microwave, radar, and optical sensors. The use of reflected L-band Global Navigation Satellite System (GNSS) signals represents an emerging remote sensing concept for retrieving geophysical parameters. In GNSS Reflectometry (GNSS-R), these signals are repurposed to infer properties of the surface from which they reflect, as they are sensitive to variations in biogeophysical parameters. NASA's Cyclone GNSS (CYGNSS) is the first mission fully dedicated to spaceborne GNSS-R. The eight-satellite constellation measures reflected Global Positioning System (GPS) L1 (1575.42 MHz) signals. Spire Global, Inc. has also started developing its GNSS-R mission, with four satellites currently in orbit. In this thesis we propose and validate a method to retrieve large-scale near-surface soil moisture and a method to map and monitor inundations using spaceborne GNSS-R. Our soil moisture model is based on the assumption that variations in surface reflectivity are linearly related to variations in soil moisture, and uses a new method to normalize the observations with respect to the angle of incidence. The normalization method accounts for the spatially varying effects of coherent and incoherent scattering. We found a median unbiased root-mean-square error (ubRMSE) of 0.042 cm3 cm-3 when comparing our method to two years of Soil Moisture Active Passive (SMAP) data and a median ubRMSE of 0.059 cm3 cm-3 compared to the observations of 207 in-situ stations. Our results also showed an improved temporal resolution compared to sensors traditionally used for this purpose.
Assessing Spire and CYGNSS data over a region in southeast Australia, we observed similar behavior in terms of surface reflectivity and sensitivity to soil moisture. As Spire satellites collect data from multiple GNSS constellations, we found that it is important to differentiate the observations when calibrating a soil moisture model. The inundation mapping method that we propose is based on a track-wise approach: when the reflections are classified track by track, the influence of the angle of incidence and of the GNSS transmitted power is minimized or eliminated. With CYGNSS data we produced more than four years of monthly surface water maps over the Amazon River basin and the Pantanal wetlands complex at a spatial resolution of 4.5 km. With GNSS-R we could overcome some of the limitations of optical and microwave remote sensing methods for inundation mapping. We used a set of metrics commonly used to evaluate classification performance to assess our product and discussed the differences and similarities with other products.
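The ubRMSE validation metric used above removes each series' mean before computing the error, so a constant bias between the retrievals and the reference data does not inflate the score. A minimal sketch of the metric, assuming plain NumPy arrays rather than the thesis's actual validation pipeline:

```python
import numpy as np

def ubrmse(est, ref):
    """Unbiased RMSE: the RMSE of the two series' anomalies.

    Subtracting each series' own mean removes any constant bias,
    leaving only the mismatch in temporal variability.
    """
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    anomalies = (est - est.mean()) - (ref - ref.mean())
    return float(np.sqrt(np.mean(anomalies ** 2)))
```

A quick sanity check: two series that differ only by a constant offset have a ubRMSE of exactly zero, even though their plain RMSE would equal the offset.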

STUDY OF EARLY MELANOMA BRAIN METASTASIS MECHANISMS USING IN VITRO AND IN VIVO MODELS OF TUMOR INVASION
Slimani, Rédouane UL

Doctoral thesis (2023)

Of all skin cancers, melanoma is the most fatal. Of all cancer types, melanoma is also the cancer with the highest level of brain tropism. Approximately 50% of patients with stage IV melanoma are diagnosed with melanoma brain metastases, a percentage that rises when postmortem patients are also taken into account. After lung cancer and breast cancer, melanoma is the third leading cause of malignant metastasis to the central nervous system. Of all metastatic brain tumors, melanoma represents 6-12% of cases. The overall survival rate following a diagnosis of melanoma brain metastases has been historically low. However, over the past ten years, advances in targeted therapies as well as in immunotherapies have significantly improved the survival rate of patients with advanced melanoma. Melanoma brain metastases most frequently occur at the junction between the gray and the white matter and in the frontal lobe. In order to reach the brain parenchyma, metastases must cross the brain vasculature. The specific properties of the blood vessels that perfuse the central nervous system are referred to as the blood-brain barrier. These properties allow the vessels to finely regulate the flow of cells, ions and molecules between the bloodstream and the brain parenchyma in order to preserve brain homeostasis, ensuring the proper functioning of neurons and protecting the brain against toxic and pathogenic agents. Abnormalities in this functional interfacing barrier that separates the brain from the bloodstream are a critical element in the development and progression of several neurological pathologies. A poor understanding of the early mechanisms by which metastases cross the blood-brain barrier constitutes an obstacle to the development of effective preventive therapeutic strategies, and makes this a particularly challenging domain of interest, as it is one of the most crucial and least documented steps in the metastasizing process to the brain. 
Here, we focused on the ideation and creation of effective in vitro and in vivo models to help identify and characterize, as meticulously as possible, the players implicated in the crossing of melanoma metastases through the blood-brain barrier to reach the brain parenchyma. We used human immortalized cells (endothelial cells, pericytes and astrocytes) in triple coculture to recreate a blood-brain barrier in vitro and investigate eventual changes in the gene expression of the tumor cells crossing the model. In parallel, we set up an in vivo murine model to recreate the process of brain metastasis by injecting melanoma tumor cells into the left ventricle of the heart, allowing us to study the early stages of blood-brain barrier invasion. The analysis of the murine tissues was performed by correlative light-electron microscopy (CLEM), and the results revealed the presence of cells in the brain presenting features with the same appearance as melanosomes. Experiments using focused ion beam scanning electron microscopy (FIB-SEM) as well as nanoscale secondary ion mass spectrometry (NanoSIMS) may be conducted to take the investigation further.

Figures de l'intime et de l'extime : réflexions à partir du jeu de Marina Hands et d'Éric Ruf face à Phèdre de Jean Racine et à Partage de midi de Paul Claudel
Deregnoncourt, Marine UL

Doctoral thesis (2023)

Entitled “The Figures of Intimacy and Extimacy: a Reflection on Marina Hands and Éric Ruf's Acting in Jean Racine's Phèdre and Paul Claudel's Partage de midi”, this PhD dissertation addresses the concepts of “intimacy” and “extimacy” as witnessed through Marina Hands and Éric Ruf's vocal and scenic acting in Patrice Chéreau's and Yves Beaunesne's respective productions of Racine's and Claudel's works. In these two productions, both actors manage to suggest “extimacy” with their body language, and to render “intimacy” thanks to their “singing” diction of Racine's alexandrine and Claudel's free verse. Throughout our development, we display perpetual links between Racine and Claudel, as suggested by the singular acting of these two performers. The central question of this dissertation is thus the following: how does the combination of the “extimacy” of Marina Hands and Éric Ruf's body language and the “intimacy” of their “singing” diction reveal the musicality of Racine's and Claudel's languages? We shall see that “intimacy” turns out to be “extimacy” on stage, and that the two concepts are nothing but two sides of the same coin. “Intimacy” is the constant object of both Patrice Chéreau's and Yves Beaunesne's research, to the extent that it constitutes the essence of their artistic creations. In order to become “extimacy”, “intimacy” has to be mediated by the actors' bodies if it is to serve the text actually heard on stage. The union between body and text is therefore a central issue.

Estimating terrestrial water storage variations from GNSS vertical displacements in the Island of Haiti
Sauveur, Renaldo UL

Doctoral thesis (2023)

In the field of hydro-geodesy, ill-posed inverse problems are very common. These problems need to be regularized to find a stabilized solution. Two regularization methods are commonly used to solve them, Tikhonov regularization and Truncated Singular Value Decomposition (TSVD), together with regularization parameter choice methods such as the L-curve or Generalized Cross Validation (GCV). This study aims to test the capacity of the Least Squares Collocation (LSC) method to estimate terrestrial water storage variations as an original approach. First, for the forward model, we calculated the hydrological crustal loading deformation in the island of Haiti by convolving Farrell (1972) Green's functions with the surface mass loading from the Global Land Data Assimilation System (GLDAS). Then, a dense synthetic Global Navigation Satellite System (GNSS) network is used with the LSC method to estimate the Terrestrial Water Storage (TWS) variations for the inverse problem. LSC is a natural way to stabilize an ill-posed inverse problem. Unlike Tikhonov or TSVD regularization, LSC stabilizes the inverse problem by including more physical information, introduced through a covariance function characterizing the observations, the parameters, and the functional link between them. One of the advantages of the LSC method is that it does not require any regularization parameter. First, we showed that, for the island of Haiti, the near field can extend up to 24° around a GNSS station. Secondly, we proved that hydrology-induced vertical deformation is part of the GNSS vertical displacement over the island. Finally, we demonstrated that LSC may be used as a method to estimate TWS variations in areas with dense GNSS networks.
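The LSC prediction described above has a standard closed form: the signal estimate is the signal-observation cross-covariance applied to the covariance-weighted observations, ŝ = C_st (C_tt + C_nn)⁻¹ l. A minimal sketch under assumed covariance matrices (the function name and the toy dimensions are illustrative, not the thesis's actual setup):

```python
import numpy as np

def lsc_estimate(C_st, C_tt, C_nn, obs):
    """Least Squares Collocation prediction of a signal from observations.

    C_st : cross-covariance between the sought signal and the observed signal
    C_tt : covariance of the observed signal
    C_nn : noise covariance of the observations
    obs  : observation vector (e.g. GNSS vertical displacements)
    """
    # Solve (C_tt + C_nn) x = obs rather than forming an explicit inverse,
    # then project through the cross-covariance.
    return C_st @ np.linalg.solve(C_tt + C_nn, obs)
```

With noise-free observations and identical signal covariances (C_st = C_tt, C_nn = 0), the prediction reproduces the observations exactly; adding noise covariance damps the estimate, which is the built-in stabilization that replaces an explicit regularization parameter.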

DECIPHERING THE ROLE OF CELL-CELL COMMUNICATION IN HEALTH AND DISEASE - USING SYSTEMS BIOLOGY BASED COMPUTATIONAL MODELLING
Singh, Kartikeya UL

Doctoral thesis (2023)

Cell-cell communication plays a significant role in shaping the functionality of cells. Communication between cells is also responsible for maintaining the physiological state of the cells and the tissue. Therefore, it is important to study the different ways by which cell-cell communication impacts the functional state of cells. Alterations in cell-cell communication can contribute to the development of disease conditions. In this thesis, we present two computational tools and a study to explore different aspects of cell-cell communication. In the first manuscript, FunRes was developed to leverage the cell-cell communication model to investigate functional heterogeneity in cell types and characterize cell states based on the integration of inter- and intra-cellular signalling. This tool utilizes a combination of receptors and transcription factors (TFs), based on the reconstructed cell-cell communication network, to split cell types into functional states. The tool was applied to the Tabula Muris Senis atlas to discover functional cell states in both young and old mouse datasets. In addition, we compared our tool with state-of-the-art tools and validated the cell states using available markers from the literature. Secondly, we studied the evolution of gene expression in developing astrocytes under normal and inflammatory conditions. We characterized these cells using both transcriptional and chromatin accessibility data, which were integrated to reconstruct gene regulatory networks (GRNs) specific to the condition and timepoints. The GRNs were then topologically analyzed to identify key regulators of the developmental process under both normal and inflammatory conditions. In the final manuscript, we developed a computational tool that identified regulators of allergy and tolerance in a mouse model. The tool works by first reconstructing the cell-cell communication network and then analyzing the network for feedback loops. 
These feedback loops are important as they contribute to the maintenance of the tissue's state. Identifying the feedback loops allows for the discovery of important molecules through comparative analysis of these loops between various conditions. In summary, this thesis encompasses various ways of cellular regulation through cell-cell communication in a tissue. These studies contribute to a better understanding of the role cell-cell communication plays in health and disease, along with the identification of therapeutic targets to design novel strategies against diseases.
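Feedback-loop detection of the kind described above amounts to enumerating the simple cycles of a directed interaction network. A minimal sketch, assuming a small graph stored as an adjacency dict with orderable (string) node names; a real cell-cell communication network would call for a scalable algorithm such as Johnson's:

```python
def simple_cycles(graph):
    """Enumerate simple directed cycles as node tuples (smallest node first).

    graph maps each node to the list of nodes it signals to.
    Each DFS only visits nodes greater than its start node, so every
    cycle is reported exactly once, rooted at its smallest member.
    """
    cycles = set()

    def dfs(start, node, path):
        for nxt in graph.get(node, ()):
            if nxt == start:
                cycles.add(tuple(path))            # closed a feedback loop
            elif nxt not in path and nxt > start:  # visit each cycle once only
                dfs(start, nxt, path + [nxt])

    for v in graph:
        dfs(v, v, [v])
    return cycles
```

Comparing the cycle sets computed from condition-specific networks (e.g. allergy vs. tolerance) then reduces to set operations on these canonical tuples.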

Comparing inclusive education in Luxembourg and in Japan
Chiba, Miwa UL

Doctoral thesis (2023)

This doctoral dissertation analyzes inclusive education at the local level, using the examples of Luxembourg and Japan since the 1940s, taking into account the influence of global discourses and the complexities surrounding the interpretation and implementation of inclusive education.

Molecular Factors Involved in Tick-Bite Mediated Allergy to the Carbohydrate Alpha-Gal
Chakrapani, Neera UL

Doctoral thesis (2023)

Red meat allergy, also known as α-Gal allergy, is a delayed allergic response occurring upon consumption of mammalian meat and by-products. Patients report eating meat without any problems for several years before developing the allergy. Although children can develop red meat allergy, it is more prevalent in adults. In addition to the delayed onset of reactions, immediate hypersensitivity is reported in case of contact with the allergen via the intravenous route. Galactose-α-1,3-galactose (α-Gal) is the first highly allergenic carbohydrate identified to cause allergy across the world. In general, carbohydrates exhibit low immunogenicity and are not capable of inducing a strong immune response on their own. Although the α-Gal epitope is present in conjugation with both proteins and lipids, due to the generally accepted role of proteins in allergy, glycoproteins from mammalian food sources were characterized first. However, a unique feature of α-Gal allergy is the delayed occurrence of allergic symptoms upon ingestion of mammalian meat, and an allergenic role of glycolipids has been proposed to explain these delayed responses. A second important feature of the disease is that the development of specific IgE to α-Gal has been associated with bites from various tick species, depending on the geographical region. In this tick-mediated allergy, an intriguing factor is the absence in ticks of an α-1,3-GalT gene, coding for an enzyme capable of α-Gal synthesis, which raises questions about the source and identity of the sensitizing molecule within ticks, immune responses to tick bites, and the effect of increased exposure. In this study, we sought to elucidate the origin of sensitization to α-Gal by investigating a cohort of individuals exposed to recurrent tick bites and by exploring the proteome of ticks in a longitudinal study. 
Furthermore, we analysed the allergenicity of glycoproteins and glycolipids in order to determine the food components responsible for the delayed onset of symptoms. The aim of Chapter I was to determine IgG profiles and the prevalence of sensitization to α-Gal in a high-risk cohort of forestry employees from Luxembourg. The aim of Chapter II was to analyse the presence of host blood in Ixodes ricinus after moulting and upon prolonged starvation, in order to support or reject the host blood transmission hypothesis. The aim of Chapter III was to investigate and compare the allergenicity of glycolipids and glycoproteins to understand their role in the allergic response. Moreover, we analysed the stability of glycoproteins and compared extracts from different food sources. This chapter is in the form of a published article. In Chapter IV, I made an attempt to create mutant models with specified α-Gal glycosylation in order to study the role of the spatial distribution of α-Gal in IgE cross-linking and effector cell activation.

Le processus scriptural nothombien : une nouvelle expérience esthétique. Faire entendre ce qui appartient au silence.
Verstichel-Boulanger, Eolia Emilienne Muriel UL

Doctoral thesis (2023)

Amélie Nothomb has been present on the literary scene for 31 years, and yet in France there is a gap in academic research on her work. This thesis addresses the still and ever problematic place reserved for women authors in the French literary field and in French literary criticism, the obstacles to their legitimation and consecration on account of their gender, obstacles reinforced if they belong not to the centre but to the margins of the francophone world, and even more so if their work achieves undeniable commercial success, with the effect that critics call into question, when they do not simply deny, the legitimacy and literariness of their books. Amélie Nothomb is thus subject to a triple marginality, reinforced by the difficulty of classifying her work.

Applications of Boolean modelling to study and stratify dynamics of a complex disease
Hemedan, Ahmed UL

Doctoral thesis (2023)

Interpretation of omics data is needed to form meaningful hypotheses about disease mechanisms. Pathway databases give an overview of disease-related processes, while mathematical models give qualitative and quantitative insights into their complexity. Similarly to pathway databases, mathematical models are stored and shared on dedicated platforms. Moreover, community-driven initiatives such as disease maps encode disease-specific mechanisms in both computable and diagrammatic form using dedicated tools for diagram biocuration and visualisation. To investigate the dynamic properties of complex disease mechanisms, computationally readable content can be used as a scaffold for building dynamic models in an automated fashion. The dynamic properties of a disease are extremely complex. Therefore, more research is required to better understand the complexity of molecular mechanisms, which may advance personalized medicine in the future. In this study, Parkinson's disease (PD) is analyzed as an example of a complex disorder. PD is associated with complex genetic and environmental causes and comorbidities that need to be analysed in a systematic way to better understand the progression of different disease subtypes. Studying PD as a multifactorial disease requires deconvoluting the multiple and overlapping changes to identify the driving neurodegenerative mechanisms. Integrated systems analysis and modelling can enable us to study different aspects of a disease such as progression, diagnosis, and response to therapeutics. Modelling such complex processes depends on the scope, and it may vary depending on the nature of the process (e.g. signalling vs metabolic). Experimental design and the resulting data also influence model structure and analysis. Boolean modelling is proposed to analyse the complexity of PD mechanisms. 
Boolean models (BMs) are qualitative rather than quantitative and do not require the detailed kinetic information needed by formalisms such as Petri nets or ordinary differential equations (ODEs). Boolean modelling represents a logical formalism in which variables take binary values of one (ON) or zero (OFF), making it a plausible approach in cases where quantitative details and kinetic parameters are not available. Boolean modelling is well validated in clinical and translational medicine research. In this project, the PD map was translated into BMs in an automated fashion using different methods. The complexity of disease pathways can therefore be analysed by simulating the effect of genomic burden on omics data. In order to make sure that BMs accurately represent the biological system, validation was performed by simulating models at different scales of complexity. The behaviour of the models was compared with the expected behaviour based on validated biological knowledge. The TCA cycle was used as an example of a well-studied simple network. Complex signalling networks of different scales were used, including the Wnt-PI3k/AKT pathway and T-cell differentiation models. As a result, matched and mismatched behaviours were identified, allowing the models to be modified to better represent disease mechanisms. The BMs were stratified by integrating omics data from multiple disease cohorts. The miRNA datasets from the Parkinson's Progression Markers Initiative (PPMI) study were analysed. PPMI provides an important resource for the investigation of potential biomarkers and therapeutic targets for PD. Such stratification allowed studying disease heterogeneity and specific responses to molecular perturbations. The results can support research hypotheses, diagnose a condition, and maximize the benefit of a treatment. Furthermore, the challenges and limitations associated with Boolean modelling in general were discussed, as well as those specific to the current study. 
Based on the results, there are different ways to improve Boolean modelling applications. Modellers can perform exploratory investigations, gathering the associated information about the model from the literature and data resources. Missing details can be inferred by integrating omics data, which identifies missing components and optimises model accuracy. Accurate and computable models improve the efficiency of simulations and the resulting analysis of their controllability. In parallel, the maintenance of model repositories and the sharing of models in easily interoperable formats are also important.
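A Boolean model of the kind described above updates every node from the current ON/OFF state of its regulators; repeated synchronous updates eventually revisit a state, and the resulting cycle (or fixed point) is the model's attractor. A minimal sketch with a hypothetical two-node network (the node names and rules are illustrative, not taken from the PD map):

```python
def step(state, rules):
    """One synchronous update: every node applies its rule to the current state."""
    return {node: rule(state) for node, rule in rules.items()}

def find_attractor(state, rules, max_steps=100):
    """Iterate until a previously seen state recurs; return the attractor cycle."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]  # the recurring cycle of states
        seen.append(state)
        state = step(state, rules)
    return None  # no recurrence within the step budget

# Hypothetical two-node network: A copies B, B negates A.
rules = {
    'A': lambda s: s['B'],
    'B': lambda s: not s['A'],
}
attractor = find_attractor({'A': False, 'B': False}, rules)
```

Because the state space of an n-node model is finite (2ⁿ states), a deterministic synchronous trajectory is guaranteed to close into such a cycle; this toy network settles into a four-state oscillation.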

Towards a Computational Model of General Cognitive Control Using Artificial Intelligence, Experimental Psychology and Cognitive Neuroscience
Ansarinia, Morteza UL

Doctoral thesis (2023)

Cognitive control is essential to human cognitive functioning as it allows us to adapt and respond to a wide range of situations and environments. The possibility of enhancing cognitive control in a way that transfers to real-life situations could greatly benefit individuals and society. However, the lack of a formal, quantitative definition of cognitive control has limited progress in developing effective cognitive control training programs. To address this issue, the first part of the thesis focuses on gaining clarity on what cognitive control is and how to measure it. This is accomplished through a large-scale text analysis that integrates cognitive control tasks and related constructs into a cohesive knowledge graph. This knowledge graph provides a more quantitative definition of cognitive control based on previous research, which can be used to guide future research. The second part of the thesis aims at furthering a computational understanding of cognitive control, in particular studying which features of the task (i.e., the environment) and which features of the cognitive system (i.e., the agent) determine cognitive control, its functioning, and its generalization. The thesis first presents CogEnv, a virtual cognitive assessment environment where artificial agents (e.g., reinforcement learning agents) can be directly compared to humans in a variety of cognitive tests. It then presents CogPonder, a novel computational method for general cognitive control that is relevant for research on both humans and artificial agents. The proposed framework is a flexible, differentiable, end-to-end deep learning model that separates the act of control from the controlled act, and can be trained to perform the same cognitive tests that are used in cognitive psychology to assess humans. Together, the proposed cognitive environment and agent architecture offer unique new opportunities to enable and accelerate the study of human and artificial agents in an interoperable framework. 
Research on training cognition with complex tasks, such as video games, may benefit from and contribute to this broad view of cognitive control. The final part of the thesis presents a profile of cognitive control and its generalization based on cognitive training studies, in particular how it may be improved by action video game training. More specifically, we contrasted the brain connectivity profiles of people who are either habitual action video game players or do not play video games at all. We focused in particular on brain networks that have been associated with cognitive control. Our results show that cognitive control emerges from a distributed set of brain networks rather than from individual specialized brain networks, supporting the view that action video gaming may have a broad, general impact on cognitive control. These results also have practical value for cognitive scientists studying cognitive control, as they imply that action video game training may offer new ways to test cognitive control theories in a causal way. Taken together, the current work explores a variety of approaches from within the cognitive sciences to contribute in novel ways to the fascinating and long tradition of research on cognitive control. In the age of ubiquitous computing and large datasets, bridging the gap between behavior, brain, and computation has the potential to fundamentally transform our understanding of the human mind and inspire the development of intelligent artificial agents.

La Grande Stratégie des petites puissances – Etude des mécanismes de fondation d'une grande stratégie face à un dilemme de sécurité appliqués au Luxembourg, à la Lituanie et à Singapour (1965-2025)
Fouillet, Thibault UL

Doctoral thesis (2023)

The capacity of small powers to think strategically remains a limited field of interest in historical thinking and international relations. Beyond the debate concerning the capacity of small states to be full-fledged actors in the international system, there appears to be a denial of the conceptualization and doctrinal innovation capacity of small powers, despite the historical recurrence of the victory of the weak over the strong. However, small powers are by nature more sensitive to threats due to their limited response capabilities, and are therefore more inclined to rationalize their action over the long term in order to develop national (military, economic, diplomatic) and international (alliances, international organizations, etc.) mechanisms for containing these threats. In this respect, this thesis examines the construction of the strategic thinking of small powers in the face of perceived threats and the means used to try to contain them. The aim is to study the mechanisms by which small powers found a grand strategy (transcribed in the form of doctrines) to deal with the security dilemmas they face. To this end, three case studies were analyzed (Luxembourg, Singapore, Lithuania), chosen for the diversity of their strategic and historical contexts offering a variety of security dilemmas. Since grand strategy is in essence a conceptual construction with a prospective and applied aim, both a theoretical and a practical methodology (using immediate history and wargaming) was implemented. Two sets of lessons can be drawn from this thesis. The first is methodological, confirming the value of doctrinal studies as a field of strategic reflection and establishing wargaming as a prospective tool suited to the conduct of fundamental research. 
The second, conceptual, allows for a better understanding of the capacity of small powers to create great and effective strategies, whose conceptual dynamism must be taken into account within the strategic genealogy, as it can provide lessons even to great powers.

Reading aloud practices: Providing joint accessibility to text within an unfamiliar interface-mediated game activity
Heuser, Svenja UL

Doctoral thesis (2023)

This work examines the practice of reading aloud in the interactional context of adult participants engaging in an interface-mediated collaborative game activity. Using a conversation-analytic approach to video data from user studies, empirical cases of reading aloud are presented. It is shown how participants multimodally co-organise reading aloud in interaction to provide accessibility to game text in a game that is unfamiliar to them. With reading aloud, participants meet the interactional challenge of making game text audibly accessible when it is not always visually accessible to all participants alike. This practice is conducted not only for another but with another, in a truly joint fashion, working as a continuer to accomplish the unfamiliar game.

REVISITING AND BOOSTING STATE-OF-THE-ART ML-BASED ANDROID MALWARE DETECTORS
Daoudi, Nadia UL

Doctoral thesis (2023)

Android offers plenty of services to mobile users and has gained significant popularity worldwide. The success of Android has attracted not only more mobile users but also malware authors. Indeed, attackers target Android markets to spread their malicious apps and infect users' devices. The consequences vary from displaying annoying ads to extracting financial benefits from users. To counter the threat posed by Android malware, Machine Learning has been leveraged as a promising technique to automatically detect malware. The literature on Android malware detection abounds with a huge variety of ML-based approaches designed to discriminate malware from legitimate samples. These techniques generally rely on manually engineered features extracted from the apps' artefacts. Reported to be highly effective, Android malware detection approaches seem to be the magical solution to stop the proliferation of malware. Unfortunately, the gap between the promised and the actual detection performance is far from negligible. Despite the excellent detection performance painted in the literature, detection reports show that Android malware is still spreading and infecting mobile users. In this thesis, we investigate the reasons that impede state-of-the-art Android malware detection approaches from curbing the spread of Android malware, and we propose solutions and directions to boost their detection performance. In the first part of this thesis, we focus on revisiting the state of the art in Android malware detection. Specifically, we conduct a comprehensive study to assess the reproducibility of state-of-the-art Android malware detectors. We consider research papers published at 16 major venues over a period of ten years and report our reproduction outcome. We also discuss the different obstacles to reproducibility and how they can be overcome. 
Then, we perform an exploratory analysis on a state-of-the-art malware detector, DREBIN, to gain an in-depth understanding of its inner working. Our study provides insights into the quality of DREBIN’s features and their effectiveness in discriminating Android malware. In the second part of this thesis, we investigate novel features for Android malware detection that do not involve manual engineering. Specifically, we propose an Android malware detection approach, DexRay, that relies on features extracted automatically from the apps. We convert the raw bytecode of the app DEX files into an image and train a 1-dimensional convolutional neural network to automatically learn the relevant features. Our approach stands out for the simplicity of its design choices and its high detection performance, which make it a foundational framework for further developing this domain. In the third part, we attempt to push the frontier of Android malware detection via enhancing the detection performance of the state of the art. We show through a large-scale evaluation of four state-of-the-art malware detectors that their detection performance is highly dependent on the experimental dataset. To solve this issue, we investigate the added value of combining their features and predictions using 22 combination methods. While it does not improve the detection performance reported by individual approaches, the combination of features and predictions maintains the highest detection performance independently of the dataset. We further propose a novel technique, Guided Retraining, that boosts the detection performance of state-of-the-art Android malware detectors. Guided Retraining uses contrastive learning to learn a better representation of the difficult samples to improve their prediction. [less ▲]
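The bytecode-to-image step described above can be sketched in a few lines. This is a minimal illustration, not DexRay’s actual implementation: the function name, the target vector size, and the linear resampling are assumptions made for the example.

```python
import numpy as np

def bytes_to_image(dex_bytes: bytes, size: int = 16384) -> np.ndarray:
    """Map raw DEX bytes to a fixed-length grayscale 'image' vector.

    Each byte becomes one pixel in [0, 1]; the vector is then
    resampled to a fixed length so every app yields the same input
    shape for a 1-dimensional convolutional network.
    """
    pixels = np.frombuffer(dex_bytes, dtype=np.uint8).astype(np.float32) / 255.0
    # Linear resampling to the target length (a simple stand-in for
    # whatever resizing strategy an image-based detector uses).
    idx = np.linspace(0, len(pixels) - 1, num=size)
    return np.interp(idx, np.arange(len(pixels)), pixels)

# Example: a fake 100-byte "DEX file" becomes a 16384-pixel vector.
fake_dex = bytes(range(100))
image = bytes_to_image(fake_dex)
```

A 1-D CNN would then consume such fixed-length vectors as its training inputs, learning discriminative features without any manual feature engineering.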

Soil viral particles as tracers of surface water sources and flow paths
Florent, Perrine Julie UL

Doctoral thesis (2023)


Despite recent technological developments (e.g., field-deployable instruments operating at high temporal frequencies), experimental hydrology remains a measurement-limited discipline. From this perspective, trans-disciplinary approaches may create valuable opportunities to enlarge the set of tools available for investigating hydrological processes. Tracing experiments are usually performed to investigate water flow pathways and water sources in underground areas. Since the 19th century, researchers have used hydrological tracers for this purpose. Among them, fluorescent dyes and isotopes are the most commonly used to follow water flow, while others, such as salts or bacteriophages, are employed as complementary tracers. Bacteriophages are the least known of all, yet they have been studied as hydrological tracers since the 1960s, especially in karstic environments. The purpose of this thesis is to evaluate the potential of bacteriophages naturally occurring in soils to serve as a new environmental tracer of hydrological processes. We hypothesize that such viral particles can be a promising tool in water tracing experiments, since they are safe for ecosystems. In both hydrology and virology, knowledge of the fate of bacteriophages within the pedosphere is still limited. Studying them would not only allow proposing new candidates to enlarge the set of available hydrological tracers, but also improve current knowledge about bacteriophage communities in soil and their interactions with certain environmental factors. For this purpose, we aim to describe the bacteriophage communities occurring in soil through shotgun metagenomics analysis. These viruses are widespread in the pedosphere, and we assume that they have specific signatures according to the type of soil.
Bacteriophage populations will then be investigated in soil water to analyse the similarities and differences between the two communities, as well as their dynamics as a function of precipitation events. In this way, bacteriophage candidates that are relatively abundant in soil and soil water and capable of being mobilised could be selected as hydrological tracers.

La présence inactive. Droit & littérature : le paradigme du témoignage pour penser le lien
Ceci, Jean-Marc UL

Doctoral thesis (2023)


The “law and literature” movement has only ever been defined through the prism of particular variants that fail to account for the original link uniting these two discourses. On the contrary, these variants stifle this link and cast a shadow over it that prevents it from being brought to light. Our research aims to fill this absence of an original definition. To resolve this problem, it proposes a generic definition of the link under the term “présence inactive” (“inactive presence”).

Robust estimation in exponential families: from theory to practice
Chen, Juntong UL

Doctoral thesis (2023)

The creative process in(between) humans and machines
Gubenko, Alla UL

Doctoral thesis (2023)


Arguably, embodiment is the most neglected aspect of cognitive psychology and creativity research. Whereas most existing theoretical frameworks are inspired by, or implicitly imply, a “cognition as a computer” metaphor, depicting creative thought as disembodied idea generation and the processing of amodal symbols, this thesis proposes that “cognition as a robot” may be a better metaphor for understanding how creative cognition operates. In this thesis, I compare and investigate human creative cognition in relation to embodied artificial agents that have to learn to navigate and act in complex and changing material and social environments from a set of multimodal streams (e.g., vision, haptics). Instead of relying on divergent thinking or associative accounts of creativity, I attempt to elaborate an embodied and action-oriented vision of creativity grounded in the 4E cognition paradigm. Situated at the intersection of the psychology of creativity, technology, and embodied cognitive science, the thesis attempts to synthesize disparate lines of work and look at the complex problem of human creativity through interdisciplinary lenses. From this perspective, the study of creativity is no longer the prerogative of social scientists but a collective and synergistic endeavor of psychologists, engineers, designers, and computer scientists.

REMOTE PLASMA CHEMICAL VAPOUR DEPOSITION FOR GAS DIFFUSION LAYER AND PROTON EXCHANGE MEMBRANE SYNTHESIS FOR FUEL CELLS
Bellomo, Nicolas UL

Doctoral thesis (2023)


Climate change, driven by rising GHG emissions, and the energy crisis, driven by the scarcity of fossil fuels, are ever-growing issues for the planet and its countries. Decarbonizing the energy sector and making it sustainable are among the top priorities for achieving a resilient system. Hydrogen has been considered for decades as an alternative to fossil fuels, and the time has come to develop a hydrogen-based economy. Fuel cells, which convert the chemical energy of hydrogen into electrical energy, are among the main components considered for the hydrogen economy. However, much is yet to be achieved to make their manufacturing as cheap and as efficient as possible. Chemical vapour deposition (CVD) is a technique for synthesizing solid materials from gaseous precursors which, compared with wet chemistry, reduces production waste, is inexpensive, yields pure solid materials, and is easily scalable. In this thesis, we investigated the possibility of using CVD to produce two major components of fuel cells, namely the gas diffusion layer and the proton exchange membrane. The results were highly promising for the elaboration of gas diffusion layers, and a CVD prototype was assembled to make the highly complex copolymerization of proton exchange membranes a reality, with promising initial results.

Derived algebraic geometry over differential operators
Govzmann, Alisa UL

Doctoral thesis (2023)

Analyzing the Unanalyzable: an Application to Android Apps
Samhi, Jordan UL

Doctoral thesis (2023)


In general, software is unreliable. Its behavior can deviate from users’ expectations because of bugs, vulnerabilities, or even malicious code. Manually vetting software is a challenging, tedious, and highly costly task that does not scale. To alleviate excessive costs and analysts’ burdens, automated static analysis techniques have been proposed by both the research and practitioner communities, making static analysis a central topic in software engineering. In the meantime, mobile apps have considerably grown in importance. Today, most humans carry software in their pockets, with the Android operating system leading the market. Millions of apps have been proposed to the public so far, targeting a wide range of activities such as games, health, banking, GPS, etc. Hence, Android apps collect and manipulate a considerable amount of sensitive information, which puts users’ security and privacy at risk. Consequently, it is paramount to ensure that apps distributed through public channels (e.g., Google Play) are free from malicious code. Hence, the research and practitioner communities have put much effort over the last decade into devising new automated techniques to vet Android apps against malicious activities. Analyzing Android apps is, however, challenging. On the one hand, the Android framework offers constructs that can be used to evade dynamic analysis by triggering the malicious code only under certain circumstances, e.g., if the device is not an emulator and is currently connected to power. Hence, dynamic analyses can easily be fooled by malicious developers who make some code fragments difficult to reach. On the other hand, static analyses are challenged by Android-specific constructs that limit the coverage of off-the-shelf static analyzers. The research community has already addressed some of these constructs, including inter-component communication and lifecycle methods.
However, other constructs, such as implicit calls (i.e., when the Android framework asynchronously triggers a method in the app code), make some app code fragments unreachable to static analyzers, even though these fragments are executed when the app runs. Altogether, many parts of apps’ code are unanalyzable: they are either not reachable by dynamic analyses or not covered by static analyzers. In this manuscript, we describe our contributions to the research effort from two angles: ① statically detecting malicious code that is difficult for dynamic analyzers to reach because it is triggered only under specific circumstances; and ② statically analyzing code not accessible to existing static analyzers to improve the comprehensiveness of app analyses. More precisely, in Part I, we first present a replication study of a state-of-the-art static logic bomb detector to expose its limitations. We then introduce a novel hybrid approach for detecting suspicious hidden sensitive operations with a view to triaging logic bombs. We finally detail the construction of a dataset of Android apps automatically infected with logic bombs. In Part II, we present our work to improve the comprehensiveness of static analysis of Android apps. More specifically, we first show how we accounted for atypical inter-component communication in Android apps. Then, we present a novel approach to unify bytecode and native code in Android apps, accounting for the multi-language trend in app development. Finally, we present our work on resolving conditional implicit calls in Android apps to improve static and dynamic analyzers.
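The unreachability problem caused by implicit calls can be illustrated with a toy call-graph traversal. The method names below are hypothetical, not taken from the thesis; the point is that a framework-invoked callback has no explicit incoming edge, so a naive static analysis that follows only explicit call edges never reaches it, even though the framework executes it at runtime.

```python
from collections import deque

# Toy explicit call graph: caller -> list of callees. The listener
# callback is invoked asynchronously by the framework, so nothing in
# the app code calls it explicitly.
explicit_calls = {
    "MainActivity.onCreate": ["Helper.init"],
    "Helper.init": [],
    "LocationListener.onLocationChanged": ["Leak.sendData"],  # framework-invoked
    "Leak.sendData": [],
}

def reachable(entry: str) -> set:
    """Breadth-first traversal over explicit call edges only."""
    seen, queue = set(), deque([entry])
    while queue:
        method = queue.popleft()
        if method in seen:
            continue
        seen.add(method)
        queue.extend(explicit_calls.get(method, []))
    return seen

covered = reachable("MainActivity.onCreate")
# Methods the naive analysis never visits, although the framework
# triggers the listener (and hence the leak) at runtime:
missing = set(explicit_calls) - covered
```

Resolving implicit calls amounts to adding the missing edge from the framework’s dispatch point to the callback, which brings `Leak.sendData` back into the analyzed portion of the app.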
