References of "Doctoral thesis"
CRISPR/CAS9 AND PIGGYBAC MEDIATED LRRK2-G2019S IN VITRO PARKINSON’S DISEASE MODELING
Qing, Xiaobing UL

Doctoral thesis (2017)

Parkinson’s disease (PD), the second most common neurodegenerative disorder, is characterized by the progressive loss of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc) area of the human midbrain with an unclear cause. Mutations revealed by whole genome sequencing (WGS) from familial PD cases may explain how cell loss occurs. Confirmation of this hypothesis has been hampered by the lack of available cell types from affected patients. Transgenic animal models have been used, but differences between these animals and humans have greatly impacted their usefulness for studying human diseases. Additionally, because PD is regarded as affecting only humans, reliable experimental models based on human material are urgently needed. Human induced pluripotent stem cell (hiPSC)-derived DA neurons provide an opportunity to establish in vitro mutation-related PD models of disease-relevant cells that represent replacement alternatives to in vivo animal experiments. However, these hiPSC-based PD models have limitations regarding the genetic background differences between patients and healthy controls. Genomic editing of hiPSCs allows for the generation of isogenic cellular models that differ only in the disease-specific mutations of interest. Currently, the biggest concern regarding nuclease-mediated genomic editing is the potential for undesirable alterations associated with remnant sequences, off-target effects and random integration, which may result in cell lines not being truly isogenic. To avoid potential confounding effects and establish a causal link between genotype and phenotype, robust isogenic cell lines free of unwanted mutagenesis are absolutely required for the study of PD. To better understand the pathogenesis of the most prevalent leucine-rich repeat kinase 2 (LRRK2) mutation, G2019S, which causes both familial and sporadic PD, patient hiPSCs have been corrected using the Cre/LoxP recombination system. However, the LoxP site inevitably remaining after excision of the selection cassette can influence gene expression. In this thesis, a “footprint-free” LRRK2-G2019S isogenic model was created using the clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) system and a piggyBac transposon that can remove selection cassettes without leaving remnants. In LRRK2-G2019S DA neurons, the percentage of tyrosine hydroxylase (TH)-positive neurons with a total neurite length greater than 2 cm was significantly reduced, and the average branch number was also decreased. These PD-like phenotypes could be rescued by administration of the specific LRRK2 inhibitor LRRK2-IN-1 and by the compound BRF110, which activates the Nurr1:RXRα heterodimer to replenish the DA shortage. Our data suggest that the “footprint-free” LRRK2-G2019S isogenic cell lines allow standardized, genetic background-independent, in vitro PD modeling and are suitable for screening novel drugs that have clinical applications. In addition, we have shown that in vitro TH-positive neurons with a total neurite length greater than 2 cm were positive for serine 129 phosphorylated (S129P) α-synuclein, and we hypothesize that S129P α-synuclein plays a role in the maintenance or formation of long neurites. Thus, we have also provided new insights into the roles of LRRK2-G2019S and S129P α-synuclein in PD pathogenesis.
Furthermore, we have optimized CRISPR/Cas9-mediated genomic editing in hiPSCs by establishing a FACS-assisted CRISPR/Cas9 editing (FACE) strategy that uses three fluorescent proteins to isolate biallelically edited cells with no random integration, and by using an Exonuclease III (ExoIII)-facilitated long single-stranded DNA (ssDNA) donor to reduce random integration.

Parkinson's disease: Evaluation of a neuroprotective target and identification of candidate biomarker signatures using murine models
Ashrafi, Amer UL

Doctoral thesis (2017)

Parkinson's disease (PD) is one of the most common age-related neurologic diseases. While existing therapeutic approaches, focusing on dopamine replacement, can alleviate some of the cardinal symptoms, they are associated with severe adverse effects in the long term. Therefore, identification of new therapeutic interventions to reverse, stop or slow down the progression of Parkinson’s disease is a major focus of PD research. Similarly, identifying reliable biomarkers that would enable early therapeutic intervention is another key area of current research. Here, we evaluated a recently proposed non-dopaminergic protein drug target for PD, Regulator of G-Protein Signaling 4 (RGS4), and performed preliminary studies aimed at the identification of novel biomarker signatures using two murine models of Parkinson’s disease. Recent research on new non-dopaminergic PD drug targets has indicated that inhibition of RGS4, a member of the RGS family of proteins that inactivate G-proteins, could be an effective adjuvant treatment option. However, the effectiveness of RGS4 inhibition for an array of PD-linked functional and structural neuroprotection endpoints had not yet been demonstrated. Here, we used the 6-hydroxydopamine (6-OHDA) lesioning mouse model to address this question. We observed, using a battery of behavioral and pathological measures, that mice deficient for RGS4 are not protected from 6-OHDA-induced injury, and showed enhanced susceptibility in some measures of motor function. Our results suggest that inhibition of RGS4 as a non-dopaminergic target for PD should be approached with caution. In the second part of this study, two alpha-synuclein-based PD mouse models, a human E46K-mutated alpha-synuclein overexpression model and an alpha-synuclein fibril spreading model, were used to investigate early pathological events in PD and identify novel candidate biomarker signatures for subsequent validation. Two different time points, before disease onset and at peak disease manifestation, were analyzed in the two models. Using multiple histopathology and molecular biology techniques, we were able to identify complex changes in patterns of gene expression at early stages of the disease, well before neurodegeneration is detectable. These findings might open avenues for new therapeutic strategies and provide insights into the molecular perturbations occurring during the earliest stages of the disease, paving the way for the development of biomarker signatures for early diagnosis of Parkinson’s disease.

Development and analysis of individual-based gut microbiome metabolic models
Magnusdottir, Stefania UL

Doctoral thesis (2017)

The human gut microbiota plays a large role in the metabolism of our diet. These microorganisms can break down indigestible materials such as polysaccharides and convert them into metabolites that the human body can take up and utilize (e.g., vitamins, essential amino acids, and short-chain fatty acids). Imbalances in the gut microbiome have been associated with several diseases, including diabetes and obesity. However, little is known about the detailed metabolic crosstalk that occurs between individual organisms within the microbiome and between the microbiome and the human intestinal cells. Because of the complexity of the intestinal ecosystem, these interactions are difficult to determine using existing experimental methods. Constraint-based reconstruction and analysis (COBRA) can help identify the possible metabolic mechanisms at play in the human gut. By combining mathematical, computational, and experimental methods, we can generate hypotheses and design targeted experiments to elucidate the metabolic mechanisms in the gut microbiome. In this thesis, I first applied comparative genomics to analyze the biosynthesis pathways of eight B-vitamins in hundreds of human gut microbial species. The results suggested that many gut microbes do not synthesize any B-vitamins, that is, they depend on the host’s diet and neighboring bacteria for these essential nutrients. Second, I developed a semi-automatic reconstruction refinement pipeline that quickly generates biologically relevant genome-scale metabolic reconstructions (GENREs) of human gut microbes based on automatically generated metabolic reconstructions, comparative genomics data, and data extracted from biochemical experiments on the relevant organisms. The pipeline generated metabolically diverse reconstructions that maintain high accuracy with known biochemical data. Finally, the refined GENREs were combined with metagenomic data from individual stool samples to build personalized human gut microbiome metabolic reconstructions. The resulting large-scale microbiome models were both taxonomically and functionally diverse. The work presented in this thesis has enabled the generation of biologically relevant human gut microbiome metabolic reconstructions. Metabolic models resulting from such reconstructions can be applied to study metabolism within the human gut microbiome and between the gut microbiome and the human host. Additionally, they can be used to study the effects of different dietary components on the metabolic exchanges in the gut microbiome and the metabolic differences between healthy and diseased microbiomes.
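
As a rough illustration of the constraint-based (COBRA) idea mentioned above, flux balance analysis reduces to a linear program: maximise a target flux subject to steady-state mass balance and flux bounds. The following minimal Python sketch uses a hypothetical three-reaction toy network, not one of the genome-scale reconstructions (GENREs) built in the thesis:

# Minimal sketch of flux balance analysis (FBA): maximise a "biomass" flux subject to
# steady-state mass balance S v = 0 and flux bounds. Toy network, hypothetical values.
import numpy as np
from scipy.optimize import linprog

# Columns: uptake (-> A), conversion (A -> 2 B), biomass (B ->); rows: metabolites A, B.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  2.0, -1.0]])
bounds = [(0.0, 10.0),    # uptake limited to 10 (e.g. mmol/gDW/h)
          (0.0, 1000.0),  # conversion essentially unbounded
          (0.0, 1000.0)]  # biomass flux, the quantity to maximise

# linprog minimises, so maximise the biomass flux v[2] by minimising -v[2].
result = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", result.x[2])  # -> 20.0 with these hypothetical bounds

A genome-scale model differs mainly in scale, with the stoichiometric matrix S containing thousands of metabolites and reactions.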

Computational Aspects of Classical and Hilbert Modular Forms
van Hirtum, Jasper UL

Doctoral thesis (2017)

The main topic of this thesis is the study of classical and Hilbert modular forms and computational aspects of their q-expansions. The coefficients of q-expansions of eigenforms are particularly interesting because of their arithmetic significance. Most notably, modular forms are an essential ingredient in Andrew Wiles’s proof of Fermat’s last theorem. This thesis consists of two parts: the first part concerns the distribution of the coefficients of a given classical eigenform; the second part studies computational aspects of the adelic q-expansion of Hilbert modular forms of weight 1. Part I of this thesis is an adapted version of the article On the Distribution of Frobenius of Weight 2 Eigenforms with Quadratic Coefficient Field published in Experimental Mathematics [38]. It presents a heuristic model that settles the following question related to the Sato-Tate and Lang-Trotter conjectures: given a normalised eigenform of weight 2 with quadratic coefficient field, what is the asymptotic behaviour of the number of primes p such that the p-th coefficient of this eigenform is a rational integer? Our work contributes to this problem in two ways. First, we provide an explicit heuristic model that describes the asymptotic behaviour in terms of the associated Galois representation. Secondly, we show that this model holds under reasonable assumptions and present numerical evidence that supports these assumptions. Part II concerns the study of (adelic) q-expansions of Hilbert modular forms. Our main achievements are the design, proof and implementation of several algorithms that compute the adelic q-expansions of Hilbert modular forms of weight 1 over C and over finite fields. One reason we are studying such q-expansions is that their coefficients (conjecturally) describe the arithmetic of Galois extensions of a totally real number field with Galois group in GL_2(F_p) that are unramified at p. Using the adelic q-expansions of Hilbert modular forms of higher weight, these algorithms enable the explicit computation of Hilbert modular forms of any weight over C and the computation of Hilbert modular forms of parallel weight both over C and in positive characteristics. The main improvement to existing methods is that this algorithm can be applied in (partial) weight 1, which fills the gap left by standard computational methods. Moreover, the algorithm computes in all characteristics simultaneously. More precisely, we prove that, under certain conditions in higher weight, the output of the algorithm for given level N and quadratic character E includes a finite set of primes L such that all Hilbert modular forms of given parallel weight, level N and quadratic character E over F_p are liftable for all primes p outside the set L. In particular, testing primes in the set L enabled the computation of examples of non-liftable Hilbert modular forms of weight 1.
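
For readers less familiar with the objects involved, the q-expansion referred to throughout is the Fourier expansion of a modular form; as a minimal, standard reminder (not specific to this thesis), for a classical modular form f one writes in LaTeX:

\[
  f(z) \;=\; \sum_{n \ge 0} a_n q^n, \qquad q = e^{2\pi i z},
\]

where, for a normalised eigenform, a_1 = 1 and the coefficients a_p at primes p are the Hecke eigenvalues, which generate the coefficient field discussed in Part I.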

Aspekte der luxemburgischen Syntax
Döhmer, Caroline Elisabeth UL

Doctoral thesis (2017)

The aim of this work is an empirical and systematic description of selected syntactic phenomena in Luxembourgish. The descriptive analysis focuses on four topic areas: case syntax and case functions (genitive, possession, partitive), pronominal syntax (syntax and semantics of personal pronouns), verb clusters (two-, three- and four-verb clusters in subordinate clauses), and the syntactic properties of subordinate-clause introducers (agreement and doubling). The work is intended, on the one hand, to contribute to a better understanding of the structural properties of the Luxembourgish language and, on the other hand, to further advance the (syntactic) exploration of Continental West Germanic. This dissertation thus makes an important contribution to the currently flourishing research field of Luxembourgish linguistics while at the same time situating itself within general West Germanic syntax research.

VERHALTEN AUSGEWÄHLTER NANOPARTIKEL IN KOMMUNALEN KLÄRANLAGEN UNTER BESONDERER BERÜCKSICHTIGUNG DER ANAEROBEN SCHLAMMBEHANDLUNG
Rahimi, Anahita Bahareh UL

Doctoral thesis (2017)

The increasing use of nanoparticles and the market to commercialize this innovative technology require a deeper understanding of the behavior of nanoparticles (NPs) and the resulting consequences in an environmentally relevant matrix. However, since the fate and behavior of nanoparticles in the environment are largely unknown, this study investigates the key properties of nanoparticles and the parameters used to describe them, and discusses how these parameters can influence their fate and behavior in the natural environment. The core goal of this discussion is to relate sludge treatment to relevant properties of the nanoparticles, which may interact with a range of substances naturally present in wastewater treatment plants (WWTPs) and activated sludge. Understanding these properties is necessary for interpreting the fate of nanoparticles and predicting their effects in the actual environment. Environmental matrices such as sludge and anaerobic sludge stabilization are challenging to investigate. According to the literature, it can be assumed that about 95% of the NPs entering a WWTP end up in the sludge. This work reports on the investigation and observation of NP behavior in the different stages, using different experiments and metering devices. The experiments include (i) characterization of NPs before their application (NanoSight), (ii) behavior analysis in terms of kinetic transformations (Turbiscan), and (iii) long- and short-term analysis of NP effects on anaerobic sludge stabilization (laboratory fermentation plants). The characterization represents the current size distribution of the NPs in a controlled environment (H2O). Size is one of the defining properties of NPs and causes changes in their physical and chemical properties compared to the original materials. The results show that the distribution of NPs depends on the type of particle, surface coating and dispersion concentration. These parameters are subject to constant change in sludge, which is why an exact attribution of effects seems almost impossible. Characterization is indispensable for the subsequent comprehension of the effects and for the understanding of findings resulting from the behavioral analysis as well as from the effect analysis. The behavioral analysis was carried out with Turbiscan technology to relate relevant sludge parameters and substances (chemical oxygen demand (COD), polymers, humic acids, digestion process) to the behavior of NPs in sludge. The aim was to progressively add representative substances to approximate the experiment to natural conditions. The phase separation detection identified a clear sedimentation behavior in all samples. To explain this and the kinetic modification as well as the resulting kinetic instability, the NPs’ diameters (particle size detection) were studied. The more unstable samples often had smaller diameters. The substances within the sludge seem to influence the kinetic stability of NPs. This could be caused by the formation of a dynamic corona (active interaction with the environment) and a hard corona (strongly bound and restrained interaction with the environment), as well as by agglomeration or modification of surface charges.
Changes in kinetic stability mean changes in the behavior and fate of NPs in an environmental medium such as sludge. However, since the sludge is filtered for the behavioral analysis, the role of the microorganisms is neglected. In order to complete the behavioral analysis by considering the contribution of the microorganisms, the consequences of the NP behavior are measured as effects under various realistic conditions. The effects of NP behavior on the anaerobic microorganisms were investigated based on their potential and efficiency in anaerobic sludge stabilization to gain additional insights. The long-term effects of NPs in sludge treatment plants (STP) were investigated using four parallel pilot reactors operated under identical conditions, with daily feeding (substrate and two different NP concentrations). There was a temporary decrease in biogas production, especially in the third phase (second concentration), except for the ZnO reactor, while the proportion of methane remained stable. Based on the knowledge obtained from the characterization and behavioral analysis, this might be due to the kinetic transformation of NPs. When the particle size for instability is exceeded as a consequence of surface changes and agglomeration, rapid sedimentation occurs. Once NP reactivity is eliminated and NP agglomerates have formed, the NPs are removed from the digestion system (sedimentation). As a result, the microorganisms are able to operate at their full potential and efficiency, which indicates extracellular inhibition and probably precludes intracellular inhibition caused by ion release. The short-term effects of ENPs in batch experiments were investigated under identical conditions. Furthermore, oxygen was used as an inhibitor to intensify the NPs’ effects on the anaerobic microorganisms and the digestion process. Oxygen is toxic for anaerobic microorganisms and enhances the release of ions. NP effects were evaluated in terms of methane yield. It can be assumed that the corona formations created by the sludge substances reduce the release of ions and the NP toxicity, even after a decrease in biomass. In addition, the NPs are able to eliminate O2 as an inhibitor, as well as to positively influence the production dynamics and methane yield. To sum up, the growing interest in nanoparticles and their potential has presented many challenges for science, in toxicology and innovative technologies. These challenges bring new questions related to understanding NPs and their interplay with certain circumstances. A considerable contribution has been made towards understanding the behavior of nanoparticles in environmentally relevant media. However, only a partial understanding of the fate of nanoparticles in complex matrices has been gained. As nanoparticles themselves are such a complex matter, more research is required to develop a more complete understanding of the way they function. This knowledge would make it possible to progressively explore their behavior and fate in complex matrices.

Second language learners’ self-initiated topic changes during book-related activities in preschool and their impact on Luxembourgish proficiency
Wirtz, Delia UL

Doctoral thesis (2017)

The present research traces the second language learning process in Luxembourgish during book-related activities by 4- to 5-year-old pre-schoolers of Portuguese, Cape Verdean and Brazilian origin. With 47.2% of the preschool population being of foreign origin, the Lusophone community forms the largest group at 24.1%. This salient, fast-growing multilingual and multicultural population learns Luxembourgish for integration and everyday interaction and hence challenges public education with its diverse and changing demands. The present study extends second language research in the Luxembourgish context and links to previous investigations of topics, while taking a pragmatic stance towards them. By foregrounding local topic management as well as its impact on activities that are less teacher-controlled, the study portrays second language learning as a product of co-constructed interaction. The focus lies on the negotiation of story meaning through self-initiated topic changes during three book-related activities: joint reading, storytelling and play. The data consist of video-recorded lessons and stimulated recall interviews with the teachers. A multi-method framework is used to investigate pupils’ interaction and language learning processes. From a quantitative point of view, the study analyses how pupils’ utterance length varies according to the openness of the lesson to self-initiated topic changes, as well as to the design of the book activity, led either (1) by the teachers or (2) by the pupils. From a qualitative stance, a sequence-by-sequence analysis of the jointly constructed narrative identifies the interactional dynamics of the collaborative storytelling activities and the use of self-initiated topic changes, which children draw upon to express themselves more freely. The results show that children’s utterances vary according to the activity type. Pupils produce longer utterances when they can self-initiate a topic, thereby boosting their second language proficiency – either because the teacher withdraws or because the participation framework is open enough for them to make creative use of the language. The children also show that they are capable of successfully managing topic changes without the presence of the teacher, while at the same time co-constructing the meaning of the story and paying attention to lexical details. The interviews reveal the teachers’ astonishment at the degree of pupil participation, as well as their pedagogical practices. Implications from the analysis are gathered in a theoretical model that links opportunities for self-initiated topic changes to language proficiency. Recommendations for more active pupil participation during book-related activities point to sense-making, joint topic negotiation and story enactment.

Cryptanalysis, Reverse-Engineering and Design of Symmetric Cryptographic Algorithms
Perrin, Léo Paul UL

Doctoral thesis (2017)

In this thesis, I present the research I did with my co-authors on several aspects of symmetric cryptography from May 2013 to December 2016, that is, when I was a PhD student at the University of Luxembourg under the supervision of Alex Biryukov. My research has spanned three different areas of symmetric cryptography. In Part I of this thesis, I present my work on lightweight cryptography. This field of study investigates the cryptographic algorithms that are suitable for very constrained devices with little computing power, such as RFID tags and small embedded processors like those used in sensor networks. Many such algorithms have been proposed recently, as evidenced by the survey I co-authored on this topic. I present this survey along with attacks against three of those algorithms, namely GLUON, PRINCE and TWINE. I also introduce a new lightweight block cipher called SPARX which was designed using a new method to justify its security: the Long Trail Strategy. Part II is devoted to S-Box reverse-engineering, a field of study investigating methods for recovering the hidden structure or the design criteria used to build an S-Box. I co-invented several such methods: a statistical analysis of the differential and linear properties which was applied successfully to the S-Box of the NSA block cipher Skipjack, a structural attack against Feistel networks called the yoyo game, and the TU-decomposition. This last technique allowed us to decompose the S-Box of the latest Russian standard block cipher and hash function as well as the only known solution to the APN problem, a long-standing open question in mathematics. Finally, Part III presents a unifying view of several fields of symmetric cryptography by interpreting them as purposefully hard. Indeed, several cryptographic algorithms are designed so as to maximize the code size, RAM consumption or time taken by their implementations. By providing a unified framework describing all such design goals, we could design modes of operation for building any symmetric primitive with any form of hardness by combining secure cryptographic building blocks with simple functions, called plugs, that have the desired form of hardness. Alex Biryukov and I also showed that it is possible to build plugs with an asymmetric hardness whereby the knowledge of a secret key allows the privileged user to bypass the hardness of the primitive.
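
To make the structural setting of the yoyo game concrete, the sketch below shows a generic balanced Feistel network in Python; the toy round function and subkeys are hypothetical and are not SPARX, Skipjack or any cipher studied in the thesis, only the generic structure that such structural attacks target:

# Minimal sketch of a balanced Feistel network with 32-bit halves; toy round function.
MASK32 = 0xFFFFFFFF

def toy_round_function(half, subkey):
    # Placeholder mixing; a real cipher would use S-Boxes and stronger diffusion.
    x = (half + subkey) & MASK32
    return (((x << 7) | (x >> 25)) ^ x) & MASK32  # rotate-left by 7, XOR with input

def feistel_encrypt(left, right, subkeys):
    for k in subkeys:
        left, right = right, left ^ toy_round_function(right, k)
    return left, right

def feistel_decrypt(left, right, subkeys):
    for k in reversed(subkeys):
        left, right = right ^ toy_round_function(left, k), left
    return left, right

subkeys = [0x9E3779B9, 0x3C6EF372, 0x78DDE6E4]  # hypothetical key schedule
ciphertext = feistel_encrypt(0x01234567, 0x89ABCDEF, subkeys)
assert feistel_decrypt(*ciphertext, subkeys) == (0x01234567, 0x89ABCDEF)

The structurally relevant property is visible in the code: each round only swaps the halves and XORs in a function of one half, so decryption reuses the same round function with the subkeys in reverse order.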

Dynamic change of the human gastrointestinal microbiome in relation to mucosal barrier effects during chemotherapy and immune ablative intervention
Kaysen, Anne UL

Doctoral thesis (2017)

Numerous studies have demonstrated that the gastrointestinal tract (GIT) microbiota plays important roles for the human host. Since the GIT microbiota interfaces with the immune system and represents a first line of defense against infectious agents, interest has grown in whether the GIT microbiota may influence the outcome of different anticancer treatments. In this study, the GIT of pediatric patients with different cancer types as well as of adult patients with hematologic malignancies undergoing an allogeneic hematopoietic stem cell transplantation was sampled throughout their treatment. In order to deeply profile not only the composition of the community, but also the functional capacity and expression, recently developed wet- and dry-lab methodologies for integrated multi-omic analyses were applied. The trajectories of the prokaryotic and microeukaryotic GIT communities of the patients were described in detail using 16S and 18S rRNA gene amplicon sequencing, as well as metagenomic and metatranscriptomic shotgun sequencing. Indeed, changes in the GIT microbiome in response to treatment were detected. Some changes that are generally thought to be detrimental for human health were detected during treatment, such as a decrease in alpha-diversity, a decrease in relative abundance of bacteria associated with health-promoting properties (such as Blautia spp., Roseburia spp. and Faecalibacterium spp.), as well as an increase in the relative abundance of antibiotic resistance genes. These changes were more pronounced in the adult hematology patients than in the pediatric patients, which is likely due to the more intensive treatment. Some observations need further investigation in order to explain their implications for human health. For example, in the pediatric patients, lower relative abundance of Akkermansia muciniphila was associated with mucositis, and functional gene categories that are linked to bacteriophages or to bacterial defense mechanisms against bacteriophages were associated with the overall status of the patient and mucositis development. Importantly, in both cohorts, high inter-individual but also high intra-individual variation in the prokaryotic communities was detected, while the microeukaryotic community did not exhibit drastic changes. In conclusion, the employed integrated multi-omics analysis allowed detailed profiling of the GIT community, including archaea, bacteria, eukaryotes and viruses, as well as of the functional potential, including antibiotic resistance genes. In the future, analysis of the individual-specific processes within the GIT microbial community of patients throughout treatment might make it possible to adjust therapy regimens accordingly and improve the overall outcome of the therapy.
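
As a concrete illustration of the alpha-diversity decrease mentioned above, one commonly used measure is the Shannon index; the minimal Python sketch below uses hypothetical relative abundances rather than patient data:

# Minimal sketch of one alpha-diversity measure (the Shannon index) for a community.
from math import log

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero relative abundances."""
    total = sum(abundances)
    proportions = [a / total for a in abundances if a > 0]
    return -sum(p * log(p) for p in proportions)

# Example: an even community is more diverse than one dominated by a single taxon.
print(shannon_index([25, 25, 25, 25]))  # ~1.386 (ln 4)
print(shannon_index([97, 1, 1, 1]))     # ~0.168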

What Dignity Demands: From Political to Poetical Liberalism
Mailey, Richard Samuel David UL

Doctoral thesis (2017)

The thesis attempts to measure the disconnect between the promise of human dignity that appears at the heart of Western law (e.g. in national constitutions and international human rights instruments) and the experiences of exclusion and frustration that, in 2017, have seen many Westerners turn to anti-liberal, populist demagogues for relief. In measuring this disconnect, the thesis looks to the work of liberal and anti-liberal theorists alike, including John Rawls, Bruce Ackerman, Carl Schmitt and Jacques Derrida. It then uses the insights gained to construct a liberal theory that can overcome the key problems identified, before using this theory to critically engage with the constitutional jurisprudence of three very different states: Canada, South Africa and the United States.

Investigation of the Nuclear Function of Tes, an Actin-Binding LIM Protein and Potential Tumor Suppressor
Vaccaroli, Raffaella UL

Doctoral thesis (2017)

The nucleus and the actin cytoskeleton are two distinct components of the eukaryotic cell. While the actin cytoskeleton is a dynamic structure that confers a high degree of cellular mobility as well as a connection to the extracellular environment, the nucleus contains the genome and is strongly associated with gene expression, from the steps of signal transduction to the actual transcription and the resulting translation. Although both components were initially studied separately, the ever-growing body of knowledge has lately established that they are tightly connected. In line with this, a multitude of actin-binding proteins were found to have roles in the nucleus, in addition to their well-described roles in the cytoplasm. This research work focuses on the actin cytoskeleton protein Tes and its role in the nucleus. Tes is considered a tumor suppressor protein, since its expression is reduced in different types of cancer cell lines and primary tumors, while its re-expression inhibits various aspects of cancer progression such as invasiveness and metastasis. As a cytoskeletal protein, Tes has been associated with actin polymerization as well as with cell migration and cell spreading. Our results show that, in addition to its cytoplasmic localization, Tes is localized in the nucleus. Artificially increasing the size of Tes, using a tag based on multiple GFP proteins, allowed us to assign its nuclear import to an active mechanism. Through fluorescence microscopy analysis of different modular fragments of Tes, we demonstrated that its PET and LIM1/2 domains are required for its nuclear localization. We also identified a classical monopartite nuclear localization signal (NLS) harbored within its PET domain that is required for Tes import into the nucleus. In addition to Tes nuclear import, we also examined its nuclear export. Using the drug Leptomycin B, we demonstrated that Tes nuclear export is active and dependent on the CRM1 export mechanism. By combining protein sequence analysis with point mutations, we identified and characterized the nuclear export signal (NES) at the N-terminus of Tes. Using a photoconvertible probe, we demonstrated the presence of a slower photoconverted fraction of Tes in the nucleus, compared to the more dynamic cytoplasmic fraction. This prompted us to investigate further the existence of potential nuclear partners of Tes. Quantitative mass spectrometry analysis provided evidence for a potential involvement of Tes in nuclear pathways mediating the remodeling of actin, thus suggesting a novel role for Tes in the reorganization of the nuclear actin cytoskeleton. Furthermore, the proteomics studies associated the nuclear localization of Tes with signaling pathways promoting inflammation and cellular death. These results raised the hypothesis that Tes might promote tumor suppression not only through its cytoplasmic functions, but also through these newly described pathways connected to its nuclear localization. Further investigation will be necessary to fully elucidate the function of Tes in the nuclear compartment.

Spin texture of two-dimensional topological insulators
Rod, Alexia Nibal UL

Doctoral thesis (2017)

Since the discovery of two-dimensional topological insulators a decade ago, their one-dimensional edge states have attracted significant attention due to their unique properties. For example, due to time-reversal symmetry, they are protected against elastic backscattering, and they propagate such that electrons with opposite spins move in opposite directions. In fact, the only symmetry necessary to sustain the edge states is time-reversal symmetry. Moreover, in experimental setups, axial spin symmetry seems to be absent. This absence allows new processes, such as inelastic backscattering, to appear. However, these consequences were neglected in most theoretical works, where the spins are considered to be polarized in the z direction. The aim of this thesis is to provide a more realistic model taking into account a broken axial spin symmetry. In this scheme, we show that a rotation of the spin quantization axis as a function of momentum always appears. This observation leads us to develop a deeper understanding of how the size of the rotation relates to the material parameters and material models, also using realistic values. It also leads us to understand the implications in real space in cases where translation invariance is lost and how to quantify the rotation in such systems. The new processes which arise when the axial spin symmetry is broken have important consequences for transport in real materials. To see this, we consider a Hall bar with a hole in its middle, i.e. an antidot. This enables us to create two tunneling regions in order to probe the effect of this generic model. We also consider the effect of Coulomb interactions around the hole, as they can be important in such a geometry. We discover that it is possible to probe directly the absence of axial spin symmetry. As experimental evidence is important to investigate our theoretical findings, we propose spectroscopic means to probe the spin texture. Finally, we also consider one of the experimentally known candidate materials, namely InAs/GaSb heterostructures. From the k·p Hamiltonian, it is possible to show that their band structure exhibits some anisotropies, which are also reflected in the spin texture of their edge states.

Cross-border evidence gathering: equality of arms within the EU?
van Wijk, Marloes Chantal UL

Doctoral thesis (2017)

The European Union (EU) has set the objective to develop an Area of Freedom, Security and Justice, in which on the one hand freedom of movement is promoted and on the other hand a high level of security is ensured. The EU is therefore adopting measures to enhance international cooperation in criminal matters among the police and judicial authorities of its Member States. The adopted instruments concerning evidentiary matters, such as the gathering, freezing and/or confiscation of information and materials in another EU Member State, seem to serve the main purpose of assisting the authorities in investigating and prosecuting (cross-border) crime. This raises the question to what extent the defence is also given the possibility to gather – or to have gathered – information and materials in another EU Member State with the aim of preparing and presenting its case at trial and, in particular, whether the current (EU) legal framework on cross-border evidence gathering meets the requirements of the principle of equality of arms. This thesis addresses this question by, first of all, discussing the concept of equality of arms, as enshrined in both Article 6 ECHR and Article 47 CFR. It explains to what extent this principle is applicable to cross-border or transnational criminal proceedings and whether it has an autonomous meaning within the EU. In addition, it discusses which requirements can be deduced from the principle in relation to the possibilities of the defence to gather evidence in another EU Member State to prepare and present its case. Subsequently, the focus is on the development of the European legislation – from both the Council of Europe and the EU – regulating the procedure of cross-border evidence gathering over the last decades. The aim is to explain the position of the defence in this development and to what extent the European legislation gives opportunities to the defence to request the assistance of foreign authorities in obtaining specific information and materials in another EU Member State. In order to understand how the European legislation is applied in practice by the EU Member States, this thesis includes a comparative study of three national jurisdictions: the Netherlands, England and Wales, and Italy. These three jurisdictions each represent a different criminal justice system, either more inquisitorial or adversarial in nature. The comparative study describes how a chosen jurisdiction interprets the principle of equality of arms. Furthermore, it examines to what extent the national jurisdiction allows the defence to carry out independent investigations abroad and how it gives the defence the opportunity to trigger the mechanism of international cooperation and to participate in the requested investigation. Finally, this thesis also includes an analysis of the criminal justice system of the International Criminal Court. In this system, evidence gathering depends most of the time on State cooperation, and both the Prosecutor and the defence are allowed to conduct independent investigations and seek the assistance of States. It is therefore used as a source of inspiration for potential changes to the EU legislation on cross-border evidence gathering.

Integration of the analysis of non-functional properties in Model-Driven Engineering for embedded systems
Brau, Guillaume Sylvain Denis UL

Doctoral thesis (2017)

The engineering of embedded systems relies on two complementary activities: modeling, on the one hand, enables the system to be represented; analysis, on the other hand, makes it possible to evaluate its various non-functional properties (for example, temporal properties with real-time scheduling analysis). This thesis deals with the integration between these models and analyses: how to apply an analysis to a model? How to manage the analysis process? The first part of this thesis presents a comprehensive approach to answer these questions. This approach is based on four application layers: (1) models to represent the system, (2) accessors to extract data from a model, (3) analyses to compute output data and/or properties from input data, and (4) contracts to represent the analysis interfaces and orchestrate the analysis process. The second part of this thesis deals with experiments applying this approach to concrete systems from the aerospace domain: a drone, an exploratory robot and a flight management system. We demonstrate that the accessors make it possible to apply various real-time scheduling analyses to heterogeneous architectural models, written for example with the industry standard AADL (Architecture Analysis and Design Language) or the new time-triggered language CPAL (Cyber-Physical Action Language). In addition, contracts make it possible to automate complex analysis procedures: which analysis can be applied to a given model? Which analyses meet a given goal? Are there analyses to be combined? Are there interferences between analyses? Etc.
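
As an illustration of the kind of real-time scheduling analysis that the accessor and contract layers would feed with data extracted from an AADL or CPAL model, the sketch below implements the classical Liu and Layland utilization bound for rate-monotonic scheduling in Python; the task set is hypothetical, and this test is only one of many possible analyses, not necessarily one used in the thesis:

# Minimal sketch of a sufficient schedulability test for rate-monotonic scheduling:
# the task set is schedulable if total utilization U <= n * (2^(1/n) - 1).
def rm_utilization_test(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# Example: three hypothetical periodic tasks (C, T) in milliseconds.
u, b, schedulable = rm_utilization_test([(1, 4), (2, 8), (3, 20)])
print(f"U = {u:.3f}, bound = {b:.3f}, schedulable (sufficient test): {schedulable}")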

Hospice. Lieux et expériences de vieillesses. Bruxelles 1830-1914
Richelle, Sophie Marthe UL

Doctoral thesis (2017)

A history of old age and of the experiences of ageing people in hospices in Brussels between 1830 and 1914.

Entwicklung eines EDV-basierten Frühwarnsystems für die Blankaalabwanderung an der Mosel
Wendling, David UL

Doctoral thesis (2017)

The eel (Anguilla anguilla L.) is a fish that is mainly found in European waters. The River Moselle is among the bodies of water inhabited by this species. During the downstream migration to their Atlantic spawning ground, silver eels often experience severe to fatal injuries while passing through the turbines at the barrages. This annual migration takes place in a relatively narrow timeframe. Therefore, if the trigger or onset of this migration were known, the mortality rate of the eels could be reduced by fish-adapted turbine control or comparable protective measures. This thesis introduces an early warning system, which predicts the periods of silver eel emigration by means of certain abiotic factors. On the basis of the information gleaned from different studies and the experience gained from many years of professional fishing, the environmental factors connected with the migration of the silver eel were identified. Extensive data analyses were used to substantiate these findings. The water flow, the flow differences and the lunar phase were particularly relevant. Furthermore, the season and the water temperature were taken into account. In view of the different sources of information (experience and expert knowledge, data sets and the findings derived from them), a hybrid structure for the early warning system was realized. After examining different methods from the fields of soft computing, mathematics and statistics, fuzzy logic (knowledge-based), case-based reasoning (case-based) and artificial neural networks (data-based) were selected. With each of these methods, an independent prediction model was designed, tested and optimized. Special characteristics were found during the data analysis and were taken into account by the use of adequate modifiers. The models were tested on the basis of the available data sets for the Moselle. It was shown that it is possible to correctly predict most of the situations with increased catches (suggesting a migration). Threshold values for a migration were defined based on the catches. The same was done for the forecast values. Thus, for the 1963 to 1973 data record, a total of 63% (artificial neural networks), 74% (fuzzy logic), and 83% (case-based reasoning) of the events with increased catches could be detected. Since not every situation with a favorable constellation of abiotic factors also led to a migration or higher catches, a lot of "false" forecasts (up to 50%) were made as well. Good results were also achieved when using data from recent years, and most events were identified. A stand-alone program was developed for the practical application of the prognosis models. This early warning system is a software application that contains a user interface for reading data and displaying prognosis values and in which the developed prognosis models are implemented. In addition, recommendations for use were compiled and presented.
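
As a minimal illustration of the knowledge-based (fuzzy logic) component described above, a single abiotic factor can be mapped to a degree of "favourable for migration" membership with a triangular membership function; the Python sketch below uses hypothetical breakpoints, not the thresholds derived from the Moselle data:

# Minimal sketch of a triangular fuzzy membership function for one abiotic factor.
def triangular_membership(x, a, b, c):
    """Degree of membership in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: degree to which a 35 % discharge increase counts as a "strong rise".
print(triangular_membership(35.0, a=10.0, b=40.0, c=80.0))  # -> 0.833...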

MicroRNA regulation of hypoxia-induced tumorigenicity and metastatic potential of colon tumor-initiating cells
Ullmann, Pit UL

Doctoral thesis (2017)

The initiation and progression of colorectal cancer (CRC), which is the second most common cause of cancer mortality in Western countries, are driven by a subpopulation of highly tumorigenic cells, known as cancer stem cells or tumor-initiating cells (TICs). These self-renewing TICs are, to a large extent, responsible for therapy resistance, cancer recurrence, and metastasis formation. TICs are known to extensively interact with their microenvironment and can be influenced by various extrinsic factors, such as inflammatory signaling or tumor hypoxia. Previous expression profiling studies have shown that microRNAs (miRNAs) are involved in the regulation of CRC initiation and metastatic progression. Moreover, specific miRNAs have been identified as potential mediators of the cellular response to hypoxia. On the other hand, the molecular mechanisms that link hypoxia, miRNA expression, colon TIC regulation, and CRC progression remain poorly understood. Thus, the main objectives of this work were to analyze the effects of hypoxia on the miRNA expression of colon TICs and to identify miRNAs that regulate metastasis initiation. In a first phase, we generated and thoroughly characterized different stable TIC-enriched spheroid cultures (SCs), both from CRC cell lines and from primary patient material. Each established SC was thereby shown to display key TIC properties, including substantial plasticity, in vitro and in vivo self-renewal capacity and, most importantly, extensive tumorigenic potential. Moreover, the individual SCs displayed increased chemoresistance capacity, compared to adherent counterpart cultures. Taken together, we could demonstrate that the spheroid system is a suitable model to study colon TICs, thereby laying the methodological foundation for the following subparts of this project. In a second step, we studied the influence of hypoxia on the miRNA expression profile of our established SCs. MiR-210-3p was thereby identified as the miRNA with the strongest response to hypoxia. Importantly, both hypoxic culture conditions and stable overexpression of miR-210 were shown to promote in vitro and in vivo self-renewal capacity of our colon TIC-enriched cultures. Moreover, by promoting lactate production and by repressing mitochondrial respiration, miR-210 was found to trigger the metabolic reprogramming of colon TICs towards a glycolytic and aggressive phenotype. Finally, we studied the role of miRNAs in the context of TIC-driven metastasis formation. By comparing primary tumor- and lymph node metastasis-derived SCs, we were able to identify the miR-371~373 cluster as an important regulator of tumorigenic and metastatic potential. Stable overexpression of the entire miR-371~373 cluster, followed by gene and protein expression analysis, enabled us to uncover the transforming growth factor beta receptor II (TGF-βRII) and the inhibitor of DNA binding 1 (Id1) as miR-371~373 cluster-responsive proteins. Most importantly, different sphere, tumor, and metastasis formation assays revealed that the miR-371~373/TGF-βRII/Id1 signaling axis regulates the self-renewal capacity and metastatic colonization potential of colon TICs. Taken together, our findings emphasize the strong plasticity of colon TICs and clearly illustrate that miRNAs can act as potent modulators of essential TIC properties. Accordingly, we could show that miR-210 and the miR-371~373 cluster are involved in metabolic reprogramming of TICs and in the regulation of metastasis formation, respectively.
Altogether, our study contributes to a better understanding of the molecular mechanisms that drive TIC-induced tumor progression and may provide indications for interesting miRNA biomarker candidates and target molecules for future TIC-specific therapies.

Scarring effects across the life course and the transition to retirement
Ponomarenko, Valentina UL

Doctoral thesis (2017)

This thesis investigates the long-term negative effects of unemployment, labour market inactivity and atypical employment. Within the theoretical framework of cumulative advantages and disadvantages, it is outlined how life-course differentiation creates gaps between age peers and cohorts and how this leads to social inequality in old age. In three separate but linked studies, disadvantages across the career and their associations with retirement are analysed. The analyses focus on the outcomes of career disadvantages in the form of subjective and financial well-being. The three studies all use the Survey of Health, Ageing and Retirement in Europe. This large and multidimensional panel study provides not only prospective, but also retrospective data on European countries. The database is used in different combinations in the studies. In the first and second studies, the retrospective wave SHARELIFE provides information on employment biography and is related to well-being indicators of the regular waves. In the third study, the persistence of disadvantages upon retirement is observed with a causal model. The first study investigates how disadvantages affect the careers and subjective well-being of older Europeans. In two complementary analyses, first the employment history of older Europeans is studied with sequence analysis methods to show how non-employment and part-time work shape careers and to illustrate gender differences. In a second step, indicators of timing and duration, exemplifying the accumulation mechanisms, are related to subjective well-being in old age. The results indicate that women experience more turbulent careers with more periods of non-employment and part-time employment. However, this is not reflected in lower subjective well-being in old age. Accumulation of non-employment disadvantages is far more comprehensive for men than for women. Part-time employment has an ambiguous effect for women, but is not relevant for men. In the second study, the household level is added, and the analysis examines how an adverse employment history is related to wealth accumulation. The results show that cumulative non-employment and employment in lower occupations have significant disadvantages for wealth accumulation in old age. However, large differences between men and women remain. In particular, household composition and household factors are decisive for whether these disadvantages take effect. The third study addresses the scarring question, that is, whether career disadvantages continue beyond working life. The study examines whether non-employment disadvantages are still found in retirement and the extent to which well-being levels change in the transition to retirement. Well-being scores before and after retirement are obtained and unbiased effects of the retirement transition are identified. Results indicate that being unemployed before retirement is associated with an increase in life satisfaction, but this mainly represents a catching-up effect compared to employed persons transitioning to retirement. Findings are robust to selection into unemployment and country differences.

New models to study the cross-talk between the protein repair L-isoaspartyl methyltransferase and cell signalling
Soliman, Remon UL

Doctoral thesis (2017)

Isomerization of L-aspartyl and L-asparaginyl residues to form L-isoaspartyl residues in proteins is one type of protein damage that can occur under physiological conditions and can potentially lead to conformational change, loss of function and enhanced protein degradation. Protein L-isoaspartyl methyltransferase (PCMT or PIMT) is a repair enzyme that allows the reconversion of L-isoaspartyl residues to L-aspartyl residues in proteins. Although the catalytic function of PCMT is known, its physiological roles remain less well understood. Pcmt1 gene knockout in mice leads, for example, via molecular mechanisms that remain mostly obscure, to activation of insulin/IGF-1 and MAPK signalling pathways in the brain and to premature death due to massive epileptic seizure events. In this doctoral research project, we have used both mammalian cells and zebrafish models to investigate the impact of PCMT deficiency on insulin/IGF-1, MAPK and calcium signalling, as well as how PCMT may be involved in epilepsy. In mammalian cells we used shRNA and CRISPR/Cas9 technology to reduce or completely silence PCMT expression, with the main objective being to mimic, in cell culture, the activation of the IGF-1 and MAPK signalling pathways observed in Pcmt1 knockout mice, in the hope of thereby increasing the chances of elucidating the underlying molecular mechanisms. In zebrafish we used an antisense morpholino-based strategy to knock down both PCMT homologs and thereby establish a new whole-organism model to further study the physiological functions of PCMT, more particularly in the brain. Our results indicate that insulin/IGF-1 signalling is not affected by PCMT knockdown or knockout in mammalian cells, whereas a time-dependent MAPK pathway activation could be detected in a Pcmt1 knockout mouse hippocampal cell line. In zebrafish, we showed that the two PCMT homologs Pcmt and Pcmtl (Pcmt/l) possess isoaspartyl methyltransferase activity. In pcmt/l knockdown (or morphant) zebrafish larvae we did not detect abnormal electrical activity in the brain, but we identified movement impairment and strongly perturbed brain calcium fluxes. Abnormal calcium responses were also observed in the Pcmt1 knockout mouse hippocampal cell line. We concluded that the interplay between PCMT and growth signalling pathways is highly dependent on the experimental model and may not be amenable to investigation in cell culture. Importantly, our results clearly show that PCMT plays a pivotal role in calcium signalling and suggest that PCMT-dependent repair mechanisms may be important to prevent calcium-related neurological disorders.

Study of Metabolite Repair in Eukaryotic Cells: Metabolic origin and fate of D-2-hydroxyglutarate in yeast and effect of NAD(P)HX repair deficiency on yeast and human cells
Becker-Kettern, Julia UL

Doctoral thesis (2017)

Abnormal metabolites, which are useless and can even be toxic, are constantly generated inside the cell by unwanted chemical reactions or by enzymatic side reactions. Metabolite repair enzymes clean the metabolite pool of these molecules. The proportion of proteins annotated as metabolite repair enzymes is currently very small, but accumulating evidence suggests that a bigger part might be hidden among proteins of unknown function. The aim of this thesis was to study two of these metabolite repair systems and their physiological relevance in more detail, as their importance is well illustrated by their implication in disease processes. D-2-hydroxyglutaric aciduria, a severe human neurometabolic disorder, can be caused by a deficiency in the metabolite repair enzyme D-2-hydroxyglutarate (D-2HG) dehydrogenase. Higher levels of D-2HG have also been observed in cancerous cells with a mutated form of isocitrate dehydrogenase. Strikingly, in the model organism Saccharomyces cerevisiae, 2-hydroxyglutarate metabolism had remained completely unexplored. We elucidated the metabolic pathways involved in D-2HG formation and degradation in yeast using bioinformatics, metabolomics, yeast genetics, and classical biochemical tools. We discovered that Dld3, currently annotated as a D-lactate dehydrogenase, actually degrades D-2HG to α-ketoglutarate while reducing pyruvate to D-lactate, thereby acting as a transhydrogenase. We also demonstrated that the yeast phosphoglycerate dehydrogenases Ser3 and Ser33 are major sources of D-2HG formation. These findings paved the way to integrating 2HG and its associated genes into the yeast metabolic network and might help, in the long term, to better understand the underlying mechanisms in human disease as well. Other recently identified metabolite repair enzymes, NAD(P)HX dehydratase and NAD(P)HX epimerase (encoded in yeast by the YKL151C and YNL200C genes, respectively), specifically act on NADHX and NADPHX, hydrated and inactive forms of the central NADH and NADPH cofactors. Although extensively characterized biochemically, the physiological importance of these two enzymes still remains largely unclear. Only very recently, case reports were published indicating a correlation between NAD(P)HX repair deficiency and severe neuropathological symptoms starting in early childhood upon episodes of febrile illness and rapidly leading to a fatal outcome. We systematically analyzed extracts of NAD(P)HX repair deficient yeast and human cells using HPLC and LC-MS/MS methods. This enabled us to demonstrate that NADHX and NADPHX can be formed intracellularly. In the yeast system, NADHX accumulation, which could be modulated by the cultivation temperature, was accompanied by a decrease in intracellular NAD+ levels. Furthermore, we showed that NADHX interferes with serine metabolism by inhibiting the first step of the main synthesis pathway of this amino acid. In the human cell system, NAD(P)HX dehydratase deficiency led, as in yeast, to intracellular NADHX accumulation, but also to a marked decrease in cell viability after prolonged cultivation times. This is, to our knowledge, the first report on the effect of NADHX accumulation on cellular metabolism. Expanding our experimental strategy of combined transcriptomics and metabolomics approaches to the human cell model might ultimately lead to the discovery of the disease-causing cellular process.
The findings in both projects led to an unexpected connection between NAD(P)HX and 2HG metabolism via the yeast homologues of 3-phosphoglycerate dehydrogenase, Ser3 and Ser33. Both proteins catalyze the oxidation of 3-phosphoglycerate to 3-phosphohydroxypyruvate in the initial step of de novo serine biosynthesis with a concomitant reduction of α-ketoglutarate to D-2-hydroxyglutarate. By acting as transhydrogenases, they substantially, even though not exclusively, contribute to D-2HG formation in yeast. The very same enzymes were strongly inhibited in vitro and, as suggested by our findings, also in vivo by the presence of NADHX, leading to serine depletion in NAD(P)HX repair deficient cells.
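As a compact summary of the transhydrogenase activity just described, the two half-reactions and their net effect can be written as follows (a schematic rendering of what the abstract states; the explicit NAD+/NADH cycling of the enzyme-bound cofactor is written out only for illustration):

\[ \text{3-phosphoglycerate} + \mathrm{NAD^{+}} \longrightarrow \text{3-phosphohydroxypyruvate} + \mathrm{NADH} + \mathrm{H^{+}} \]
\[ \alpha\text{-ketoglutarate} + \mathrm{NADH} + \mathrm{H^{+}} \longrightarrow \text{D-2-hydroxyglutarate} + \mathrm{NAD^{+}} \]
\[ \text{net:}\quad \text{3-phosphoglycerate} + \alpha\text{-ketoglutarate} \longrightarrow \text{3-phosphohydroxypyruvate} + \text{D-2-hydroxyglutarate} \]

In this schematic the cofactor is regenerated within the catalytic cycle, so net D-2HG formation by Ser3/Ser33 does not, by itself, change the cellular NAD+/NADH balance.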

ESSAYS IN PRICE DISCOVERY
Wells, René Joseph Guy UL

Doctoral thesis (2017)

I claim that uninformed traders prefer ending the size of their orders with a zero (e.g. 110 shares), whereas this is not the case for informed traders, creating an information channel and providing a signal. I propose the Last Digit Hypothesis (LDH): i) some traders exhibit a last digit preference for the digit 0 and other traders do not, while ii) the latter are better able to trade on information than the former. The LDH predicts that a trade arising from a marketable order with a size ending with a 0 on average contributes less to price discovery than other trades. My empirical findings support the LDH. However, the LDH is not an equilibrium, since informed traders have an incentive to mimic the preferences of uninformed traders to avoid detection and face few constraints or costs in doing so. It is puzzling that I find no evidence of such mimicking. I offer plausible explanations for this finding. I carefully test the Stealth Trading Hypothesis (STH) using comprehensive datasets for the three largest European equity markets over 2002 to 2015, a period that saw trading move into a new era. I find little support for the STH and, in fact, the commonality between these three distinct markets is the convergence over time of price discovery by trade size. This could be explained by informed traders, once facing fewer frictions, being better able to mimic the trade size choices of uninformed traders and/or by more price discovery now going through resting limit orders.
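To make the empirical test concrete, here is a minimal sketch of the sorting step (illustrative Python only, not the dissertation's code; the field names and the per-trade price-discovery measure are assumptions):

from statistics import mean

def ends_in_zero(size: int) -> bool:
    """True if the order size has 0 as its last digit (e.g. 110 shares)."""
    return size % 10 == 0

def compare_contributions(trades):
    """Mean price-discovery contribution of round-size vs. other trades."""
    round_sized = [t["contribution"] for t in trades if ends_in_zero(t["size"])]
    other_sized = [t["contribution"] for t in trades if not ends_in_zero(t["size"])]
    return mean(round_sized), mean(other_sized)

# toy data; "contribution" stands in for a per-trade price-discovery measure
trades = [
    {"size": 110, "contribution": 0.02},
    {"size": 500, "contribution": 0.01},
    {"size": 237, "contribution": 0.09},
    {"size": 391, "contribution": 0.07},
]
print(compare_contributions(trades))   # the LDH predicts the first mean is the smaller one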

SLA Violation Detection Model and SLA Assured Service Brokering (SLaB) in Multi-Cloud Architecture
Wagle, Shyam Sharan UL

Doctoral thesis (2017)

Cloud brokering helps Cloud Service Users (CSUs) find cloud services according to their requirements. In current practice, CSUs or Cloud Service Brokers (CSBs) select cloud services according to the Service Level Agreement (SLA) committed to by Cloud Service Providers (CSPs) on their websites. In our observation, most CSPs do not fulfil the service commitments stated in their SLAs. Verifying cloud service performance against the SLA commitments of CSPs gives CSBs additional trust when recommending services to CSUs. In this thesis work, we propose an SLA-assured service-brokering framework which considers both the committed and the delivered SLA of CSPs in cloud service recommendation to users. For the evaluation of CSP performance, two evaluation techniques are proposed, Heat Map and Intuitionistic Fuzzy Logic (IFL), which include both directly measurable and non-measurable parameters in the performance evaluation of CSPs. These two techniques are implemented using real data measured from CSPs. Both performance evaluation techniques rank/sort CSPs according to their service performance. The results show that the Heat Map technique is more transparent and consistent in CSP performance evaluation than the IFL technique. As cloud computing is a location-independent technology, CSPs should respect the current regulatory framework when delivering services to users. In this work, the regulatory compliance status of the CSPs is also analyzed and visualized in the performance heat map table to indicate the legal status of CSPs. Moreover, missing points in their terms of service and SLA documents are analyzed and recommended for addition to the contract documents. Under the revised European data protection regulation (GDPR), a data protection impact assessment (DPIA) is going to be mandatory for all organizations/tools. The decision recommendation tool developed using the above-mentioned evaluation techniques may pose potential harm to individuals when assessing data from multiple CSPs. Therefore, a DPIA is carried out to assess the potential harm/risks to individuals arising from the decision recommendation tool and the necessary precautions to be taken in the tool to minimize possible data privacy risks. To help CSUs make decisions easily when selecting cloud services from a multi-cloud environment, service pattern analysis techniques and prediction of the future performance behaviour of CSPs are also proposed in this thesis work. Prediction patterns and error measurements show that automatic prediction methods can be implemented for short as well as longer time periods.
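As an illustration of the heat-map idea, the sketch below computes, for each CSP, the ratio of delivered to committed values per SLA parameter and ranks providers by their mean compliance (hypothetical Python with made-up provider names and parameters; the thesis's actual scoring and visualisation are richer):

# Hypothetical committed vs. delivered SLA values for two made-up providers.
committed = {
    "CSP-A": {"availability": 99.95, "latency_ms": 100},
    "CSP-B": {"availability": 99.90, "latency_ms": 80},
}
delivered = {
    "CSP-A": {"availability": 99.80, "latency_ms": 120},
    "CSP-B": {"availability": 99.92, "latency_ms": 75},
}

def compliance(csp):
    """Per-parameter compliance ratio; a value >= 1.0 means the commitment was met."""
    scores = {}
    for param, target in committed[csp].items():
        actual = delivered[csp][param]
        # "higher is better" parameters use actual/target, "lower is better" ones target/actual
        scores[param] = actual / target if param == "availability" else target / actual
    return scores

def mean_compliance(csp):
    scores = compliance(csp)
    return sum(scores.values()) / len(scores)

# one row per CSP, one cell per parameter: the raw material of a performance heat map
for csp in sorted(committed, key=mean_compliance, reverse=True):
    print(csp, compliance(csp), round(mean_compliance(csp), 3))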

Key-Recovery Attacks Against Somewhat Homomorphic Encryption Schemes
Chenal, Massimo UL

Doctoral thesis (2017)

In 1978, Rivest, Adleman and Dertouzos introduced the concept of privacy homomorphism and asked whether it is possible to perform arbitrary operations on encrypted ciphertexts. Thirty years later, Gentry gave a positive answer in his seminal paper at STOC 2009, by proposing an ingenious approach to construct fully homomorphic encryption (FHE) schemes. With this approach, one starts with a somewhat homomorphic encryption (SHE) scheme that can perform only a limited number of operations on ciphertexts (i.e. it can evaluate only low-degree polynomials). Then, through the so-called bootstrapping step, it is possible to turn this SHE scheme into an FHE scheme. After Gentry's work, many SHE and FHE schemes have been proposed; in total, they can be divided into four categories, according to the hardness assumptions underlying each SHE (and hence, FHE) scheme: hard problems on lattices, the approximate common divisor problem, the (ring) learning with errors problem, and the NTRU encryption scheme. Even though SHE schemes are less powerful than FHE schemes, they can already be used in many useful real-world applications, such as medical and financial applications. It is therefore of primary concern to understand what level of security these SHE schemes provide. By default, all the SHE schemes developed so far offer IND-CPA security, i.e. resistance against a chosen-plaintext attack, but nothing is said about their IND-CCA1 security, i.e. security against an adversary who is able to perform a non-adaptive chosen-ciphertext attack. Considering such an adversary is in fact a more realistic scenario. Gentry highlighted the investigation of SHE schemes with IND-CCA1 security as future work, and the task of bringing some clarity to this question was initiated by Loftus, May, Smart and Vercauteren: at SAC 2011 they showed how one family of SHE schemes is not IND-CCA1 secure, opening the door to an interesting investigation of the IND-CCA1 security of the existing schemes in the other three families. In this work we therefore continue this line of research and show that most existing somewhat homomorphic encryption schemes are not IND-CCA1 secure. In fact, we show that these schemes suffer from key recovery attacks (stronger than a typical IND-CCA1 attack), which allow an adversary to completely recover the private keys through a number of decryption oracle queries. As a result, this dissertation shows that all known SHE schemes fail to provide IND-CCA1 security. While it is true that IND-CPA security may be enough to construct cryptographic protocols in the presence of semi-honest attackers, key recovery attacks pose serious threats to the practical usage of SHE and FHE schemes: if a malicious attacker (or a compromised honest party) submits manipulated ciphertexts and observes the behavior (side channel leakage) of the decryptor, then it may be able to recover all plaintexts in the system. Therefore, it is very desirable to design SHE and FHE schemes with IND-CCA1 security, or at least to design them to prevent key recovery attacks. This raises the interesting question of whether or not it is possible to develop such an IND-CCA1 secure SHE scheme. To date, the only positive result in this direction is an SHE scheme proposed by Loftus et al. at SAC 2011 (in fact, a modification of an existing, IND-CCA1-insecure SHE scheme). However, this IND-CCA1 secure SHE scheme makes use of a non-standard knowledge assumption, while it would be more interesting to rely only on standard assumptions.
We then propose a variant of the SHE scheme proposed by Lopez-Alt, Tromer, and Vaikuntanathan at STOC 2012, which shows good indications of possible IND-CCA1 security.
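To illustrate the flavour of such attacks, here is a self-contained toy example (not one of the attacks from the dissertation, and with parameters far too small to be meaningful cryptographically): a simple LWE-style encryption of single bits is defined, and the secret key is then recovered coordinate by coordinate purely from decryption-oracle answers on crafted, malformed ciphertexts.

import random

q, n = 257, 4   # toy parameters, far too small to mean anything cryptographically

def centered(x):
    """Representative of x mod q in the interval (-q/2, q/2]."""
    x %= q
    return x if x <= q // 2 else x - q

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, m):
    """Toy LWE-style encryption of a bit m: b = <a, s> + 2e + m (mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-2, 3)                      # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + 2 * e + m) % q
    return a, b

def decrypt(s, ct):
    a, b = ct
    return centered(b - sum(ai * si for ai, si in zip(a, s))) % 2

def recover_key(dec_oracle):
    """Recover s coordinate by coordinate using only decryption-oracle answers."""
    recovered = []
    for i in range(n):
        unit = [1 if j == i else 0 for j in range(n)]
        answers = [dec_oracle((unit, t)) for t in range(q)]     # crafted, malformed ciphertexts
        for guess in range(q):
            if all(centered(t - guess) % 2 == answers[t] for t in range(q)):
                recovered.append(guess)
                break
    return recovered

s = keygen()
assert decrypt(s, encrypt(s, 1)) == 1                 # the toy scheme decrypts correctly
assert recover_key(lambda ct: decrypt(s, ct)) == s    # the oracle leaks the whole key

The point of the toy is structural: each oracle answer leaks one parity of an expression involving the secret, and enough crafted queries pin the key down completely, which is exactly the kind of leakage IND-CPA security says nothing about.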

Development of an integrated omics in silico workflow and its application for studying bacteria-phage interactions in a model microbial community
Narayanasamy, Shaman UL

Doctoral thesis (2017)

Microbial communities are ubiquitous and dynamic systems that inhabit a multitude of environments. They underpin natural as well as biotechnological processes, and are also implicated in human health. The elucidation and understanding of these structurally and functionally complex microbial systems, using a broad spectrum of toolkits ranging from in situ sampling, high-throughput data generation ("omics"), bioinformatic analyses and computational modelling to laboratory experiments, is the aim of the emerging discipline of Eco-Systems Biology. Integrated workflows which allow the systematic investigation of microbial consortia are being developed. However, in silico methods for analysing multi-omic data sets are so far typically lab-specific, applied ad hoc, limited in terms of their reproducibility by different research groups and suboptimal in the amount of data actually being exploited. To address these limitations, the present work initially focused on the development of the Integrated Meta-omic Pipeline (IMP), a large-scale reference-independent bioinformatic analysis pipeline for the integrated analysis of coupled metagenomic and metatranscriptomic data. IMP is an elaborate pipeline that incorporates robust read preprocessing, iterative co-assembly, analyses of microbial community structure and function, automated binning as well as genomic signature-based visualizations. The IMP-based data integration strategy greatly enhances overall data usage, output volume and quality, as demonstrated using relevant use cases. Finally, IMP is encapsulated within a user-friendly implementation using Python while relying on Docker for reproducibility. The IMP pipeline was then applied to a longitudinal multi-omic dataset derived from a model microbial community from an activated sludge biological wastewater treatment plant, with the explicit aim of following bacteria-phage interaction dynamics using information from the CRISPR-Cas system. This work provides a multi-omic perspective on community-level CRISPR dynamics, namely changes in CRISPR repeat and spacer complements over time, demonstrating that these are heterogeneous, dynamic and transcribed genomic regions. Population-level analysis of two lipid-accumulating bacterial species associated with 158 putative bacteriophage sequences enabled the observation of phage-host population dynamics. Several putatively identified bacteriophages were found to occur at much higher abundances than other phages, and their abundance peaks usually do not overlap with those of other putative phages. In addition, several RNA-based CRISPR targets were found to occur at high abundance. In summary, the present work describes the development of a new bioinformatic pipeline for the analysis of coupled metagenomic and metatranscriptomic datasets derived from microbial communities and its application to a study focused on the dynamics of bacteria-virus interactions. Finally, this work demonstrates the power of integrated multi-omic investigation of microbial consortia towards the conversion of high-throughput next-generation sequencing data into new insights.

Die Rolle kleiner Städte und zentraler Orte im mittelalterlichen Herzogtum Luxemburg
Platt, Michèle Dorothy UL

Doctoral thesis (2017)

This research examines various aspects of the urban development of several smaller settlements from the 13th to the 16th century and aims to demonstrate their significance on several levels. Specifically, the study focuses on the centres Arrancy, Arlon, Bastogne, Bitburg, Damvillers, Diekirch, Durbuy, Chiny, Echternach, Grevenmacher, Houffalize, Ivoix, La Roche, Larochette, Marche, Marville, Remich, St. Vith, Thionville, Vianden and Virton, all located in the late medieval Duchy of Luxembourg. It traces not only the urban development of the individual towns up to 1600 and their central functions in political-administrative, legal, economic, social and ecclesiastical-cultural terms, but also analyses the formation of the Luxembourgish network of towns as a whole. The study investigates to what extent and at what point in time urban criteria and functions can be identified in the various centres, and establishes whether, and from when, the towns under investigation can be characterized as small towns or central places. The thesis shows when urban characteristics and central functions accumulated in the different towns and how far the sphere of influence of these centres extended into the surrounding area. The importance of these centres is assessed at the local level as well as for the territory as a whole and for its rulers. On the one hand, the role of the towns and centres for their inhabitants and the surrounding areas is highlighted and the significance of urban development for the residents of these places is analysed; particular attention is also paid to the function and importance of these centres for the territorial lord and his territorial policy.

Financial Intermediation and Macroeconomic Fluctuations
Chevallier, Claire Océane UL

Doctoral thesis (2017)

Private Functional Encryption – Hiding What Cannot Be Learned Through Function Evaluation
Delerue Arriaga, Afonso UL

Doctoral thesis (2017)

Functional encryption (FE) is a generalization of many commonly employed cryptographic primitives, such as keyword search encryption (KS), identity-based encryption (IBE), inner-product encryption (IPE) and attribute-based encryption (ABE). In an FE scheme, the holder of a master secret key can issue tokens associated with functions of its choice. Possessing a token for f allows one to recover f(m), given an encryption of m. As it is important that ciphertexts preserve data privacy, in various scenarios it is also important that tokens do not expose their associated function. A notable example is the use of FE to search over encrypted data without revealing the search query. Function privacy is an emerging new notion that aims to address this problem. The difficulty of formalizing it lies in the verification functionality, as the holder of a token for function f may encrypt arbitrary messages using the public key, and obtain a large number of evaluations of f. Prior privacy models in the literature were fine-tuned for specific functionalities, did not model correlations between ciphertexts and decryption tokens, or fell under strong uninstantiability results. Our first contribution is a new indistinguishability-based privacy notion that overcomes these limitations and is flexible enough to capture all previously proposed indistinguishability-based definitions as particular cases. The second contribution of this thesis is five constructions of private functional encryption supporting different classes of functions and meeting varying degrees of security: (1) a white-box construction of an Anonymous IBE scheme based on composite-order groups, shown to be secure in the absence of correlated messages; (2) a simple and functionality-agnostic black-box construction from obfuscation, also shown to be secure in the absence of correlated messages; (3) a more evolved and still functionality-agnostic construction that achieves a form of function privacy that tolerates limited correlations between messages and functions; (4) a KS scheme achieving privacy in the presence of correlated messages beyond all previously proposed indistinguishability-based security definitions; (5) a KS construction that achieves our strongest notion of privacy (but relies on a more expressive form of obfuscation than the previous construction). The standard approach in FE is to model complex functions as circuits, which yields inefficient evaluations over large inputs. As our third contribution, we propose a new primitive that we call “updatable functional encryption” (UFE), where instead of circuits we deal with RAM programs, which are closer to how programs are expressed in von Neumann architecture. We impose strict efficiency constraints and we envision tokens that are capable of updating the ciphertext, over which other tokens can be subsequently executed. We define a security notion for our primitive and propose a candidate construction from obfuscation, which serves as a starting point towards the realization of other schemes and contributes to the study of how to compute RAM programs over public-key encrypted data.

On Composability and Security of Game-based Password-Authenticated Key Exchange
Skrobot, Marjan UL

Doctoral thesis (2017)

The main purpose of Password-Authenticated Key Exchange (PAKE) is to allow secure authenticated communication over insecure networks between two or more parties who only share a low-entropy password. It is common practice that the secret key derived from a PAKE execution is used to authenticate and encrypt some data payload using symmetric key protocols. Unfortunately, most PAKEs of practical interest, including three protocols considered in this thesis, are studied using so-called game-based models, which, unlike simulation models, do not guarantee secure composition per se. However, Brzuska et al. (CCS 2011) have shown that a middle ground is possible in the case of authenticated key exchange that relies on Public-Key Infrastructure (PKI): the game-based models do provide secure composition guarantees when the class of higher-level applications is restricted to symmetric-key protocols. The question that we pose in this thesis is whether or not a similar result can be exhibited for PAKE. Our work answers this question positively. More specifically, we show that PAKE protocols secure according to the game-based Real-or-Random (RoR) definition of Abdalla et al. (PKC 2005) allow for automatic, secure composition with arbitrary, higher-level symmetric key protocols. Since there is evidence that most PAKEs secure in the Find-then-Guess (FtG) model of Bellare et al. (EUROCRYPT 2000) are in fact secure according to the RoR definition, we can conclude that nearly all provably secure PAKEs enjoy a certain degree of composition, one that at least covers the case of implementing secure channels. Although many different protocols that accomplish PAKE have been proposed over the last two decades, only a few newcomers have managed to find their way into real-world applications, albeit without intense and prolonged public scrutiny. As a step towards providing such scrutiny, this dissertation considers the security and efficiency of two relatively recently proposed PAKE protocols, Dragonfly and J-PAKE. In particular, we prove the security of a very close variant of Dragonfly in the standard FtG model, which incorporates forward secrecy. Thus, our work confirms that Dragonfly's main flows are sound. Furthermore, we contribute to the discussion by proposing and examining (in the RoR model of security) two variants of J-PAKE, which we call RO-J-PAKE and CRS-J-PAKE, that each use two fewer zero-knowledge proofs than the original protocol, at the cost of an additional security assumption. Our work reveals that CRS-J-PAKE has an edge in terms of efficiency over J-PAKE for both standard group choices: subgroups of finite fields and elliptic curves. The same is true for RO-J-PAKE, but only when instantiated with elliptic curves.

La fonction juridictionnelle au service de l’intégration sud-américaine. Regards croisés sur la contribution des juges régionaux à la construction d’un espace intégré : Europe et Amérique du Sud
Pena-Pinon, Mariana UL

Doctoral thesis (2017)

Regional integration is a global phenomenon understood in this book as “the association between sovereign States of a given geographical region, by international agreements with the aim of approximating national laws through a binding regional law”. In the context of South America, this definition has led to the study of six regional organizations, examined in a comparative manner to establish the degree of integration accomplished in this region. In a second step, the role of law in an integration organization and its control by a court are examined. Parallels are drawn between the roles of the European Court of Justice and the EFTA Court and those of the regional jurisdictions in South America. Finally, the creation of a regional court through a treaty common to the six organizations compared is advocated. The political and legal feasibility, the precise characteristics, the remedies and the character of the decisions of said regional court are discussed in depth.

MULTI-OBJECTIVE CLOUD BROKERING OPTIMIZATION TAKING INTO ACCOUNT UNCERTAINTY AND LOAD PREDICTION
Nguyen, Anh Quan UL

Doctoral thesis (2017)

Cloud brokering optimization for energy awareness in multi-cloud systems applies metaheuristic methods to a multi-objective optimization problem that focuses on reducing cost as well as improving energy efficiency. This broad topic is motivated by the energy-awareness challenge at the level of the cloud brokerage service. Cloud brokering based on multi-objective optimization is characterized by tightly coupled constraints, a dynamic environment, and changing objectives and priorities. This leads to the investigation of a specific aspect of the cloud brokerage service: the virtual machine placement problem.

A Combined Unsupervised Technique for Automatic Classification in Electronic Discovery
Ayetiran, Eniafe Festus UL

Doctoral thesis (2017)

Electronic data discovery (EDD), e-discovery or eDiscovery is any process by which electronically stored information (ESI) is sought, identified, collected, preserved, secured, processed and searched for items relevant to civil and/or criminal litigation or regulatory matters, with the intention of using them as evidence. Searching electronic document collections for relevant documents is the part of eDiscovery that poses serious problems for lawyers and their clients alike. Obtaining efficient and effective techniques for search in eDiscovery is an interesting and still open problem in the field of legal information systems. Researchers are shifting away from traditional keyword search towards more intelligent approaches such as machine learning (ML) techniques. State-of-the-art algorithms for search in eDiscovery focus mainly on supervised approaches, namely supervised learning and interactive approaches. The former uses labelled examples for training systems, while the latter uses human assistance in the search process to help retrieve relevant documents. Techniques in the latter approach include interactive query expansion, among others. Both approaches are supervised forms of technology assisted review (TAR). Technology assisted review is the use of technology to assist or completely automate the process of searching and retrieving relevant documents from electronically stored information (ESI). In text retrieval/classification, supervised systems are known for their superior performance over unsupervised systems. However, two serious issues limit their application in electronic discovery search and information retrieval (IR) in general. First, they have a high associated cost in terms of finance and human effort. This is particularly responsible for the huge amount of money expended on eDiscovery on an annual basis. Secondly, their case/project-specific nature does not allow for reuse, thereby contributing further to organizations' expenses when they have two or more cases involving eDiscovery. Unsupervised systems, on the other hand, are cost-effective in terms of finance and human effort. A major challenge in unsupervised ad hoc information retrieval is the vocabulary problem, which causes term mismatch between queries and documents. While topic modelling techniques try to tackle this from the thematic point of view, in the sense that queries and documents are likely to match if they discuss the same topic, natural language processing (NLP) approaches view it from the semantic perspective. Scalable topic modelling algorithms, just like the traditional bag-of-words technique, suffer from polysemy and synonymy problems. Natural language processing techniques, on the other hand, while able to considerably resolve the polysemy and synonymy problems, are computationally expensive and not suitable for large collections, as is the case in eDiscovery. In this thesis, we exploit the peculiarity of eDiscovery collections being composed mainly of e-mail communications and their attachments; mining topics of discourse from e-mails and disambiguating these topics and queries for term matching proves effective for retrieving relevant documents when compared to traditional stem-based retrieval. In this work, we present an automated unsupervised approach for retrieval/classification in eDiscovery.
This approach is an ad hoc retrieval method which creates a representative for each original document in the collection using a latent Dirichlet allocation (LDA) model with Gibbs sampling and explores word sense disambiguation (WSD) to give these representative documents and queries deeper meaning for distributional semantic similarity. The word sense disambiguation technique itself is a hybrid algorithm derived from a modified version of the original Lesk algorithm and the Jiang & Conrath similarity measure. Evaluation was carried out using the TREC legal track. Results and observations are discussed in chapter 8. We conclude that WSD can improve ad hoc retrieval effectiveness. Finally, we suggest further work focusing on efficient algorithms for word sense disambiguation, which could further improve retrieval effectiveness if applied to original document collections in contrast to using representative collections.
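A highly simplified sketch of the matching idea follows (illustrative Python only; the toy sense inventory, the overlap-based disambiguation and the Jaccard similarity are stand-ins for the thesis's modified Lesk / Jiang & Conrath hybrid, and the document "representatives" would in practice come from the LDA step):

# Toy sense inventory: each word maps to candidate senses with a short gloss.
SENSES = {
    "bank": {"bank#finance": "money deposit account loan institution",
             "bank#river": "river side slope water edge"},
    "deposit": {"deposit#finance": "money bank account payment",
                "deposit#geology": "sediment mineral layer river"},
}

def lesk_sense(word, context_words):
    """Pick the sense whose gloss overlaps most with the context (Lesk-style)."""
    best, best_overlap = None, -1
    for sense, gloss in SENSES.get(word, {}).items():
        overlap = len(set(gloss.split()) & set(context_words))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best or word            # fall back to the raw word if no sense is known

def disambiguate(text):
    words = text.lower().split()
    return {lesk_sense(w, words) for w in words}

def similarity(a, b):
    """Jaccard overlap of disambiguated term sets (stand-in for a semantic measure)."""
    return len(a & b) / len(a | b)

doc_representative = disambiguate("bank loan deposit account statement")
query = disambiguate("deposit money in a bank account")
print(similarity(doc_representative, query))   # higher when senses, not just stems, agree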

‘WHAT DO YOU MEAN YOU LOST THE PAST?’ AGENCY, EXPRESSION AND SPECTACLE IN AMATEUR FILMMAKING
Wecker, Danièle UL

Doctoral thesis (2017)

The following thesis presents an examination of privately produced amateur films taken from the Amateur Film Archive in the Centre National d’Audiovisuel in Luxembourg. It analyzes how amateur films present a filmic world and examines specific notions of meaning generation in the absence of meta-data and original context. Rather than take amateur film as a homogenous genre or practice, this study concentrates on film language. The first part of the following two-fold engagement with these filmic worlds thus identifies the highly differentiated filmic modes that can be read from the images. A filmic mode is understood as a concomitance of style and choice of subject matter. Without their original context, these films lose their most important means of meaning generation, namely the recollective narratives that are constructed by the intended audience in the viewing situation. This work operates from a basis of analysis that takes these images as remnants of a visual narration rather than in terms of recollective narratives. It starts from the very simple premise that what was filmed had significance for these filmmakers, and that how the camera was used can serve as an illustration of underlying intentions and motivations, both intended and inadvertent. The first part of this study then focuses on the diversification within the images and reads the concomitant cultural codifications that structure representational productions in the private sphere, and also analyzes film language as a means of self-inscription and self-narration. The second part of this two-fold engagement explores filmic language in terms of a visualization of primordial signifying expression coming-into-being. It relates to amateur film and practice from a basis of primary Becoming rather than a fixed Being. This engagement extends to include the researcher and his/her own background as a co-constitutive part of this process of primordial meaning coming-into-being. Film is related to as the opening of a filmic universe that presents its own structures and engagements, and not as a visualization of a profilmic world from the past.

Tax havens under international pressure: a game theoretical approach
Pulina, Giuseppe UL

Doctoral thesis (2017)

Une étude acoustique et comparative sur les voyelles du luxembourgeois
Thill, Tina UL

Doctoral thesis (2017)

This thesis is a descriptive work in acoustic phonetics, with the aim of studying the production of Luxembourgish vowels in native and non-native speech. Its objective is to reconcile the variation of Luxembourgish, mainly a spoken language composed of many regional varieties and evolving in a multilingual context, with the learning of Luxembourgish as a foreign language in the Grand-Duchy of Luxembourg. As we assume that language acquisition implies knowledge of sound contrasts in speech, we investigate the productions of speakers whose mother tongues, such as French, have different features than Luxembourgish, to see whether the contrasts are reproduced in non-native speech. Productions of French speakers are compared to those of native speakers from the region around the capital city of the Grand-Duchy of Luxembourg, whose variety serves as a reference for the teaching of Luxembourgish as a foreign language. The purpose of the study is the following: to extend the descriptions of the acoustic properties of vowels produced in a regional variety of the Grand-Duchy of Luxembourg, to highlight the specific production difficulties of French learners of Luxembourgish, and to interpret the results with regard to the teaching of Luxembourgish as a foreign language. Fieldwork and the creation of a corpus through recordings of 10 Luxembourgish speakers and 10 French speakers form an important part of the empirical work. We obtained a corpus of twelve and a half hours of read and spontaneous speech, including native and non-native speech in Luxembourgish as well as native speech in French. This corpus is a first corpus containing native and non-native speech of Luxembourgish and enables different comparative studies to be conducted. In this thesis, we carried out comparative analyses of the data in read speech. The methodology we used made it possible to compare data from native and non-native speech as well as data from the L1 and L2 of the French speakers. The results provide information about native and non-native productions of vowels. They show, on the one hand, that vowel productions vary among speakers, even if these speak the same regional variety, and, on the other hand, that French speakers who learn Luxembourgish at B1/B2 level have difficulties producing contrasts in Luxembourgish. This concerns: the quantity of the long vowels [iː], [eː], [aː], [oː], [uː] and short vowels [i], [e], [ɑ], [ɔ], [u]; the quality of the long vowel [aː] and the two short vowels [æ] and [ɑ]; and the quality of the beginning of the diphthongs [æi], [æu], [ɑi], [ɑu]. These results, as well as thorough descriptions of the vowels in native speech, extend knowledge not only of Luxembourgish, but also of the variety which serves as the reference for Luxembourgish as a foreign language. In addition, they open up prospects for studying Luxembourgish by problematizing the introduction of rules for this type of teaching, despite the absence of instruction in this language in schools and the evolution of regional varieties in a concentrated geographical area.

Stochastic Thermodynamics for Underdamped Brownian Particles: Equivalent Measures, Reversed Stochastic Processes and Feynman-Kac Techniques
Shayanfard, Kamran UL

Doctoral thesis (2017)

Underdamped stochastic thermodynamics provides a handy tool to study a large class of stochastic processes operating out of equilibrium. Colloidal particles in a laser trap, molecular motors and feedback processes are some of the prominent examples. In the present work we give a mathematical framework for the study of the thermodynamic properties of these phenomena. We focus on Markovian stochastic processes in continuous time and space, and show how the techniques of equivalent measures, combined with stochastic solutions of partial differential equations obtained through the Feynman-Kac formula, can be used to derive exact relations between forward and backward diffusion processes. We prove a theorem which allows us to derive the time evolution of an arbitrary path quantity in a simple and systematic way. We further consider a fairly general underdamped stochastic model, and study its nonequilibrium thermodynamic properties at both the single-trajectory and average levels. For this model, we establish several integral and detailed fluctuation theorems for thermodynamic quantities such as work and entropy production, amongst others. Some of these theorems directly parallel those already obtained in the context of overdamped and master equations, while others are novel. We also discuss some special cases of our model which are directly related to physical systems such as active Brownian particles, feedback processes and isoenergetic stochastic processes. The formalism we develop, and the general model considered here, constitute a unified and extended framework for the study of the thermodynamics of underdamped processes, encompassing several physical systems and applications.
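For reference, one textbook form of the Feynman-Kac representation reads as follows (a standard statement, not necessarily the exact variant developed in the thesis): if \(X_s\) solves \(\mathrm{d}X_s = \mu(X_s,s)\,\mathrm{d}s + \sigma(X_s,s)\,\mathrm{d}W_s\), then

\[ u(x,t) \;=\; \mathbb{E}\!\left[\, e^{-\int_t^T V(X_s)\,\mathrm{d}s}\, f(X_T) \,\Big|\, X_t = x \right] \]

solves the terminal-value problem

\[ \partial_t u + \mu\,\partial_x u + \tfrac{1}{2}\,\sigma^{2}\,\partial_x^{2} u - V\,u = 0, \qquad u(x,T) = f(x), \]

which is the kind of link between expectations over stochastic paths and partial differential equations that the framework exploits.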

Optical Characterization of Cu2ZnSnSe4 Thin Films
Sendler, Jan Michael UL

Doctoral thesis (2017)

Images of Galois representations and p-adic models of Shimura curves
Amoros Carafi, Laia UL

Doctoral thesis (2016)

The thesis treats two questions situated in the Langlands program, which is one of the most active and important areas in current number theory and arithmetic geometry. The first question concerns the study of images of Galois representations into Hecke algebras coming from modular forms over finite fields, and the second deals with p-adic models of Shimura curves and their bad reduction. Consequently, the thesis is divided into two parts. The first part is concerned with the study of images of Galois representations that take values in Hecke algebras of modular forms over finite fields. The main result of this part is a complete classification of the possible images of 2-dimensional Galois representations with coefficients in local algebras over finite fields under the hypotheses that (i) the square of the maximal ideal is zero, (ii) the residual image is big (in a precise sense), and (iii) the coefficient ring is generated by the traces. In odd characteristic, the image is completely determined by these conditions; in even characteristic the classification is much richer. In this case, the image is uniquely determined by the number of different traces of the representation, a number which is given by an easy formula. As an application of these results, the existence of certain p-elementary abelian extensions of big non-solvable number fields can be deduced. Whereas some aspects of class field theory are accessible through this approach, it can be applied to huge fields for which standard techniques totally fail. The second part of the thesis consists of an approach to p-adic uniformisations of Shimura curves X(Dp,N) through a combination of different techniques from rigid analytic geometry and the arithmetic of quaternion orders. The results in this direction lean on two methods: one is based on the information provided by certain Mumford curves covering Shimura curves, and the second on the study of Eichler orders of level N in the definite quaternion algebra of discriminant D. Combining these methods, an explicit description of fundamental domains associated to the p-adic uniformisation of families of Shimura curves of discriminant Dp and level N ≥ 1, for which the one-sided ideal class number h(D,N) is 1, is given. The method presented in this thesis enables one to find Mumford curves covering Shimura curves, together with a free system of generators for the associated Schottky groups, p-adic good fundamental domains and their stable reduction-graphs. As an application, general formulas for the reduction-graphs with lengths at p of the considered families of Shimura curves can be computed.

Household Nonemployment, Social Risks and Inequality in Europe
Hubl, Vanessa Julia UL

Doctoral thesis (2016)

The dissertation explores interactions between households, states and markets and their relation to socio-economic inequalities among working-age households. The focus lies on three aspects: the importance of the welfare state, economic risks and opportunities within households, and the link between these two aspects and broader patterns of inequality at the societal level. These are analysed in three empirical studies, using a range of statistical methods (multilevel analysis, event history models and counterfactual analyses of income distributions). In addition, an extensive framework paper provides a background to the analyses, clarifies their relation in theoretical terms, and discusses the results. The first empirical study explores the relation between the regulation of social benefits, social risks, and household nonemployment in 20 European countries using internationally comparative institutional and survey data. The study reveals that eligibility conditions and activation policy vary systematically with the effect of social risks on the probability of household nonemployment. The strength and direction of influence depend on the specific policy area and risk factor. The second study analyses the duration of household nonemployment for British and German couples from the early 1990s to the mid-2000s. Spells of dual joblessness have become longer over time, which is related to changes in the household composition of nonemployed couples. The third analysis evaluates the consequences of welfare shifts between households for changing patterns of inequality between 2005 and 2010. Changes in the distribution of household employment, benefit transfers, and family types in Germany, the United Kingdom, Poland, and Spain are analysed in terms of their contribution to developments in income inequality between households. The analysis of income distributions suggests that changes in socio-demographic and economic household characteristics in a population can have a substantial impact on different income groups. The overarching conclusion of the dissertation is that certain aspects of household composition enhance the risk of lower economic activity and welfare, but that the impact of these factors varies strongly according to the broader context in which the households are situated. Social policies that have the potential to reduce inequalities between households need to consider possible adverse effects on economic risk structures and spill-over effects on other areas of social protection. Future research should continue studying the household's role in relation to the market, the state, and individual needs and resources; incorporate additional economic and welfare regime aspects into the analyses; and explore further statistical tools to do so.

Cohomologies and derived brackets of Leibniz algebras
Cai, Xiongwei UL

Doctoral thesis (2016)

In this thesis, we work on the structure of Leibniz algebras and develop cohomology theories for them. The motivation comes from: • Roytenberg, Stienon-Xu and Ginot-Grutzmann's work on standard and naive cohomology of Courant algebroids (Courant-Dorfman algebras). • Kosmann-Schwarzbach, Roytenberg and Alekseev-Xu's constructions of derived brackets for Courant algebroids. • The classical equivariant cohomology theory and the generalized geometry theory. This thesis consists of three parts: 1. We introduce standard cohomology and naive cohomology for a Leibniz algebra. We discuss their properties and show that they are isomorphic. By similar methods, we prove a generalization of Ginot-Grutzmann's theorem on transitive Courant algebroids, which was conjectured by Stienon-Xu. The relation between standard complexes of a Leibniz algebra and its corresponding crossed product is also discussed. 2. We observe a canonical 3-cochain in the standard complex of a Leibniz algebra. We construct a bracket on the subspace consisting of so-called representable cochains, and prove that the subspace becomes a graded Poisson algebra. Finally we show that for a fat Leibniz algebra, the Leibniz bracket can be represented as a derived bracket. 3. Inspired by the notion of a Lie algebra action and the idea of generalized geometry, we introduce the notion of a generalized action of a Lie algebra g on a smooth manifold M, to be a homomorphism of Leibniz algebras from g to the generalized tangent bundle TM+T*M. We define the interior product and Lie derivative so that the standard complex of TM+T*M becomes a g-differential algebra, and then we discuss its equivariant cohomology. We also study the equivariant cohomology for a subcomplex of a Leibniz complex.
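For orientation, the two background notions used throughout can be written in their standard form (textbook conventions; signs and gradings may differ from the ones chosen in the thesis). A (left) Leibniz bracket satisfies

\[ [x,[y,z]] \;=\; [[x,y],z] + [y,[x,z]], \]

and, given a graded Lie algebra equipped with a square-zero derivation \(d\), the derived bracket

\[ [a,b]_{d} \;:=\; [d a,\, b] \]

again satisfies the Leibniz identity (in shifted degree), although it is in general not skew-symmetric, which is precisely why Leibniz rather than Lie algebras appear in this context.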

Essays on Inequality, Public Policy, and Banking
Mavridis, Dimitrios UL

Doctoral thesis (2016)

COLLABORATIVE RULE-BASED PROACTIVE SYSTEMS: MODEL, INFORMATION SHARING STRATEGY AND CASE STUDIES
Dobrican, Remus-Alexandru UL

Doctoral thesis (2016)

The Proactive Computing paradigm provides us with a new way to make the multitude of computing systems, devices and sensors spread through our modern environment work for (pro) human beings and be active on our behalf. In this paradigm, users are put on top of the interactive loop and the underlying IT systems are automated to perform even the most complex tasks in a more autonomous way. This dissertation focuses on providing further means, at both the theoretical and applied levels, to design and implement Proactive Systems. It is shown how smart mobile, wearable and/or server applications can be developed with the proposed Rule-Based Middleware Model for computing proactively and for operating on multiple platforms. In order to represent and to reason about the information that the proactive system needs to know about the environment in which it performs its computations, a new technique called Proactive Scenario is proposed. As an extension of its scope and properties, and for achieving global reasoning over inter-connected proactive systems, a new collaborative technique called Global Proactive Scenario is then proposed. Furthermore, to show their potential, three real-world case studies of (collaborative) proactive systems have been explored to validate the proposed development methodology and its related technological framework in various domains like e-Learning, e-Business and e-Health. Results from these experiments confirm that software applications designed along the lines of the proposed rule-based proactive system model, together with the concepts of local and global proactive scenarios, are capable of actively searching for the information they need, of automating tasks and procedures that do not require the user’s input, of detecting various changes in their context and of taking measures to adapt to it for addressing the needs of the people who use these systems, and of performing collaboration and global reasoning over multiple proactive engines spread across different networks.
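As a rough intuition for what a rule-based proactive engine does, the sketch below runs condition/action rules against a context that changes over time (purely illustrative Python; the rule format, the context fields and the engine loop are assumptions, not the middleware model defined in the thesis):

from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, Any]], bool]   # when to fire
    action: Callable[[Dict[str, Any]], None]      # what to do on the user's behalf

def run_engine(rules: List[Rule], context: Dict[str, Any]) -> None:
    """One evaluation cycle: fire every rule whose condition holds in the current context."""
    for rule in rules:
        if rule.condition(context):
            rule.action(context)

# hypothetical e-Health style scenario: remind the user when daily activity is low
rules = [
    Rule(
        name="low-activity-reminder",
        condition=lambda ctx: ctx["steps_today"] < 2000 and ctx["hour"] >= 18,
        action=lambda ctx: print("Reminder sent: take a short walk"),
    ),
]
run_engine(rules, {"steps_today": 1500, "hour": 19})   # fires the reminder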

Synchronisation of Model Visualisation and Code Generation Based on Model Transformation
Gottmann, Susann UL

Doctoral thesis (2016)

The development, maintenance and documentation of complex systems is commonly supported by model-driven approaches where system properties are captured by visual models at different layers of abstraction and from different perspectives, as proposed by the Object Management Group (OMG) and its model-driven architecture. Generally, a model is a concrete view on the system from a specific perspective in a particular domain. We focus on visual models in the form of diagrams whose syntax is defined by domain-specific modelling languages (DSLs). Different models may represent different views on a system, i.e., they may be linked to each other by sharing a common set of information. Therefore, models that are expressed in one DSL may be transformed to interlinked models in other DSLs and, furthermore, model updates may be synchronised between different domains. Concretely, this thesis presents the transformation and synchronisation of source code (abstract syntax trees, ASTs) written in the Satellite-Procedure & Execution Language (SPELL) to flow charts (code visualisation) and vice versa (code generation) as the result of an industrial case study. The transformation and synchronisation are performed based on existing approaches for model transformations and synchronisations between two domains in the theoretic framework of graph transformation, where models are represented by graphs. Furthermore, extensions to existing approaches are presented for treating non-determinism in concurrent model synchronisations. Finally, the existing results for model transformations and synchronisations between two domains are lifted to the more general case of an arbitrary number of domains or models containing views, i.e., a model in one domain may be transformed to models in several domains or to all other views, and model updates in one domain may be synchronised to several other domains or to all other views.

Depression and ostracism: the role of attachment, self-esteem and rejection sensitivity for treatment success and depressive symptom deterioration
Borlinghaus, Jannika UL

Doctoral thesis (2016)

The current research programme is based on three studies investigating ramifications of ostracism in inpatients diagnosed with depression. It aims at understanding responses to ostracism in depressed patients, and the implications for psychotherapy and symptom deterioration, using experimental (study 1) and longitudinal (studies 2 and 3) research designs. Investigating psychological factors such as attachment, self-esteem and Rejection Sensitivity, we found that attachment affects the immediate physiological reactions to ostracism (study 1), that state self-esteem after an ostracism experience impacts therapy outcome (study 2) and that Rejection Sensitivity, the cognitive-affective disposition to anxiously expect and overreact to rejection, predicts deterioration of depressive symptoms 6 months after treatment (study 3). These results highlight the salience of attachment when investigating reactions to ostracism, and the importance of Rejection Sensitivity over the course of therapy as an indicator of therapy outcome and risk of relapse.

Detailed reference viewed: 44 (8 UL)
Full Text
Analysis of the impact of ROS in networks describing neurodegenerative diseases
Ignatenko, Andrew UL

Doctoral thesis (2016)

In this thesis, a model of the ROS management network is built using the domino principle. The model offers insight into the design principles underlying the ROS management network and clarifies its functionality in diseases such as cancer and Parkinson’s disease (PD). It is validated using experimental data. The model is used for an in silico study of ROS management dynamics under stress conditions (oxidative stress). This highlights the phenomena of both adaptation to stress and stress accumulation in the case of repeated stress. The study also helps to identify potential routes towards personalized treatment of insufficient ROS management. Different ways of controlling the ROS management network are shown using an optimal control approach. The results obtained could be used to seek treatment strategies that correct ROS management failures caused by oxidative stress, neurodegenerative diseases, etc., or, vice versa, to develop ways of inducing controllable cell death that might be used in cancer research. [less ▲]
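A minimal sketch of the kind of in silico stress experiment described above, assuming a deliberately simplified one-variable ODE with hypothetical parameters (this is not the thesis's domino-based model):

    # Illustrative only: basal ROS production, stress pulses and saturable
    # scavenging, integrated with explicit Euler to show accumulation under
    # repeated stress. All parameter names and values are invented.

    def simulate_ros(t_end=200.0, dt=0.01, pulses=((20, 30), (60, 70), (100, 110))):
        k_prod, k_stress, v_max, k_m = 0.1, 0.6, 1.0, 0.5   # hypothetical constants
        ros, trace = 0.2, []
        for i in range(int(t_end / dt)):
            t = i * dt
            stress = k_stress if any(a <= t < b for a, b in pulses) else 0.0
            scavenging = v_max * ros / (k_m + ros)           # Michaelis-Menten-like removal
            ros += dt * (k_prod + stress - scavenging)
            trace.append((t, ros))
        return trace

    if __name__ == "__main__":
        trace = simulate_ros()
        print("final ROS level:", round(trace[-1][1], 3))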

Detailed reference viewed: 38 (9 UL)
La conversion de CO2, NOx et/ou SO2 en utilisant la technologie du charbon actif [Conversion of CO2, NOx and/or SO2 using activated carbon technology]
Chaouni, Wafaâ

Doctoral thesis (2016)

Detailed reference viewed: 27 (6 UL)
Full Text
Étude sociolinguistique sur les pratiques linguistiques au sein de familles plurilingues vivant au Grand-Duché de Luxembourg [A sociolinguistic study of language practices within multilingual families living in the Grand Duchy of Luxembourg]
Made Mbe, Annie Flore UL

Doctoral thesis (2016)

The importance of investigating the family language policies of multilingual families living in Luxembourg is primarily based on the trilingualism that characterizes Luxembourg, the heterogeneity of its population, the problems faced by immigrant children attending Luxembourg’s schools, and individuals’ personal experience with everyday language use. Hence, this thesis’s aim is to investigate how parents from different linguistic backgrounds or having the same language of origin communicated with each other prior to the birth of their children and how the birth of these children reshapes the family language environment. Specifically, we aim to understand the parents’ motivations with regard to their language choices and the communication strategies they implement in order to establish a family communication environment. In addition, considering the effects of language contact, we focus on the school languages and their influence on the children’s language at home. To achieve this, from a methodological point of view, we combined ethnographic interviews with recordings of a family conversation and thereby gained access to the declared and real linguistic practices of ten families with highly diverse linguistic profiles. These families have resided in Luxembourg for between seven and forty-two years. Further, content analysis was used to examine the migratory experience of each parent. Some of the major reasons why parents adopted a positive attitude towards multilingualism were (a) the language learning and use opportunities offered by Luxembourg and (b) the desire to develop the linguistic capital of their children. Our results further suggest that although children do not participate actively in the language-use decision-making process, they actively influence the family language environment, because the languages they learn at school affect the ways in which they speak at home. Moreover, we discovered that once these children have contact with the officially recognised languages in Luxembourg, which might be different from those of the family, they tend to shift their preference towards these dominant languages. In addition, we discovered that there is no standard parental communication strategy for passing the family languages on to the children. Rather, depending on the parents' objectives, they can adopt different strategies. Overall, this thesis opens new perspectives for research that investigates the family language policies of multilingual families by highlighting the relevance of educational dimensions of children with immigrant backgrounds. [less ▲]

Detailed reference viewed: 23 (4 UL)
Full Text
Boosting Static Security Analysis of Android Apps through Code Instrumentation
Li, Li UL

Doctoral thesis (2016)

Within a few years, Android has been established as a leading platform in the mobile market with over one billion monthly active Android users. To serve these users, the official market, Google Play, hosts around 2 million apps which have penetrated into a variety of user activities and have played an essential role in their daily life. However, this penetration has also opened doors for malicious apps, presenting serious threats that can lead to severe damage. To alleviate the security threats posed by Android apps, the literature offers a large body of work proposing static and dynamic approaches for identifying and managing security issues in the mobile ecosystem. Static analysis in particular, which does not require actually executing the code of Android apps, has been used extensively for market-scale analysis. In order to have a better understanding of how static analysis is applied, we conduct a systematic literature review (SLR) of related research on Android. We studied influential research papers published in the last five years (from 2011 to 2015). Our in-depth examination of those papers reveals, among other findings, that static analysis is largely performed to uncover security and privacy issues. The SLR also highlights that no single work has been proposed to tackle all the challenges for static analysis of Android apps. Existing approaches indeed fail to yield sound results in various analysis cases, given the different specificities of Android programming. Our objective is thus to reduce the analysis complexity of Android apps in a way that existing approaches can also succeed on their failed cases. To this end, we propose to instrument the app code for transforming a given hard problem into an easily resolvable one (e.g., reducing an inter-app analysis problem to an intra-app analysis problem). As a result, our code instrumentation boosts existing static analyzers in a non-invasive manner (i.e., no need to modify those analyzers). In this dissertation, we apply code instrumentation to solve three well-known challenges of static analysis of Android apps, allowing existing static security analyses to 1) be inter-component communication (ICC) aware; 2) be reflection aware; and 3) cut out common libraries. ICC is a challenge for static analysis. Indeed, the ICC mechanism is driven at the framework level rather than the app level, leaving it invisible to app-targeted static analyzers. As a consequence, static analyzers can only build an incomplete control-flow graph (CFG) which prevents a sound analysis. To support ICC-aware analysis, we devise an approach called IccTA, which instruments app code by adding glue code that directly connects components using the traditional Java class access mechanism (e.g., explicit new instantiation of target components). Reflection is a challenge for static analysis as well because it also confuses the analysis context. To support reflection-aware analysis, we provide DroidRA, a tool-based approach, which instruments Android apps to explicitly replace reflective calls with their corresponding traditional Java calls. The mapping from reflective calls to traditional Java calls is inferred through a solver, where the resolution of reflective calls is reduced to a composite constant propagation problem. Libraries are pervasively used in Android apps. On the one hand, their presence increases time/memory consumption of static analysis. On the other hand, they may lead to false positives and false negatives for static approaches (e.g., clone detection and machine learning-based malware detection). To mitigate this, we propose to instrument Android apps to cut out a set of automatically identified common libraries from the app code, so as to improve static analyzers’ performance in terms of time/memory as well as accuracy. To sum up, in this dissertation, we leverage code instrumentation to boost existing static analyzers, allowing them to yield more sound results and to perform quicker analyses. Thanks to the aforementioned approaches, we are now able to automatically identify malicious apps. However, it is still unknown how malicious payloads are introduced into those malicious apps. As a perspective for our future research, we conduct a thorough dissection of piggybacked apps (whose malicious payloads are easily identifiable) at the end of this dissertation, in an attempt to understand how malicious apps are actually built. [less ▲]
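The instrumentation idea can be illustrated with a small sketch: given a toy call graph and a set of resolved inter-component links, "glue" edges are added so that a naive reachability analysis becomes ICC-aware. This is only an illustration in Python over invented names, not code from IccTA or DroidRA:

    # Illustrative sketch only; component and method names are invented.

    def add_icc_glue(call_graph, icc_links):
        """call_graph: dict mapping a method to the set of methods it calls.
        icc_links: list of (call_site_method, target_component) pairs."""
        for call_site, target in icc_links:
            entry = target + ".onCreate"          # assumed entry point of the target
            call_graph.setdefault(call_site, set()).add(entry)
        return call_graph

    def reachable(call_graph, start):
        """Simple depth-first reachability over the (instrumented) call graph."""
        seen, stack = set(), [start]
        while stack:
            method = stack.pop()
            if method in seen:
                continue
            seen.add(method)
            stack.extend(call_graph.get(method, ()))
        return seen

    if __name__ == "__main__":
        cg = {"MainActivity.onClick": {"Intent.startActivity"}}
        cg = add_icc_glue(cg, [("MainActivity.onClick", "LeakActivity")])
        print("LeakActivity.onCreate" in reachable(cg, "MainActivity.onClick"))  # True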

Detailed reference viewed: 138 (22 UL)
Astrocyte phenotype during differentiation: implication of the NFkB pathway
Birck, Cindy UL

Doctoral thesis (2016)

Detailed reference viewed: 48 (9 UL)
Full Text
ELECTRONIC AND STRUCTURAL PROPERTIES OF BISMUTH- AND RARE-EARTH-FERRITES
Weber, Mads Christof UL

Doctoral thesis (2016)

This thesis contributes to the understanding of multiferroics and light-induced effects in these materials, more specifically in rare-earth and bismuth ferrites. Iron-based materials offer the advantage of a high magnetic-ordering temperature, commonly well above room temperature. To understand the coupling between magnetism and the crystal lattice and the interaction of a material with light, knowledge about the crystal structure and electronic band structure, respectively, is crucial. In the first part of this work, the structural properties of six rare-earth orthoferrites RFeO3 (R = La, Sm, Eu, Gd, Tb, Dy) are analyzed by Raman scattering (RS). Polarization dependent RS of SmFeO3 and the comparison with first-principles calculations enable the assignment of the measured phonon modes to vibrational symmetries and atomic displacements. This allows correlating the phonon modes with the orthorhombic structural distortions of RFeO3 perovskites. In particular, the positions of two specific Ag modes scale linearly with the two FeO6 octahedra tilt angles, allowing the distortion to be tracked throughout the series. At variance with the literature, we find that the two octahedra tilt angles scale differently with the vibration frequencies of their respective Ag modes. This behavior, as well as the general relations between the tilt angles, the frequencies of the associated modes, and the ionic radii, is rationalized in a simple Landau model. The precise knowledge about the lattice vibrations is used in the second part of the work to investigate the impact of magnetic transitions on the crystal lattice of SmFeO3. SmFeO3 stands out among the rare-earth orthoferrites for its comparably high magnetic transition temperatures. While tuning the temperature through magnetic transitions, the structural properties are probed by RS complemented by resonant ultrasound spectroscopy and linear birefringence measurements. During the Fe3+-spin reorientation phase, we find an important softening of the elastic constants in the resonant-ultrasound spectra. Towards lower temperatures the Sm3+-spins order; this ordering is clearly represented in the Raman spectra in the form of changes in the evolution of certain vibrational bands, and additional bands appear in the spectra. The knowledge about the vibrational displacements of the Raman bands allows an investigation of the anomalies related to the Sm3+-spin ordering. Bismuth ferrite can be seen as the model multiferroic material since it is one of the few room temperature multiferroics with a strong electric polarization. The ferroelectric and magnetic properties have been studied in great detail. In addition to the multiferroic properties, photo-induced phenomena have renewed the interest in BiFeO3. However, for the understanding and tuning of photo-induced effects a profound knowledge of the electronic band structure is important. Despite the extensive study of BiFeO3, the understanding of the electronic transitions remains very limited. In the third part of the thesis, the electronic band structure of BiFeO3 is investigated using RS with twelve different excitation wavelengths ranging from the blue to the near infrared. Resonant Raman signatures (RRS) can be assigned to direct and indirect electronic transitions, as well as in-gap electronic levels, most likely associated with oxygen vacancies. RRS allows distinguishing between direct and indirect transitions even at higher temperatures. Thus, it is found that the remarkable and intriguing variation of the optical band gap with temperature can be related to the shrinking of an indirect electronic band gap, while the energies for direct electronic transitions remain nearly temperature independent. [less ▲]
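As a rough illustration of the reported linear scaling (with placeholder symbols rather than fitted values from the thesis), the relation can be written as

    \omega_{A_g}^{(i)} \;\approx\; \omega_{0}^{(i)} + k_i\,\theta_i, \qquad i = 1, 2, \qquad k_1 \neq k_2,

where \theta_1 and \theta_2 denote the two FeO6 octahedra tilt angles, \omega_{A_g}^{(i)} the frequencies of the two associated Ag modes, and k_1, k_2 the two different slopes observed across the RFeO3 series.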

Detailed reference viewed: 115 (18 UL)
Full Text
THREE-DIMENSIONAL MICROFLUIDIC CELL CULTURE OF STEM CELL-DERIVED NEURONAL MODELS OF PARKINSON'S DISEASE
Lucumi-Moreno, Edinson UL

Doctoral thesis (2016)

Cell culture models in 3D have become an essential tool for the implementation of cellular models of neurodegenerative diseases. Parkinson’s disease (PD) is characterized by the loss of dopaminergic neurons from the substantia nigra. The study of PD at the cellular level requires a cellular model that recapitulates the complexity of those neurons affected in PD. Induced Pluripotent Stem Cell (iPSC) technology is an efficient method for the derivation of dopaminergic neurons from human neuroepithelial stem cells (hNESC), hence proving to be a suitable tool to develop cellular models of PD. To obtain DA neurons from hNESC in a 3D culture, a protocol based on the use of small molecules and growth factors was implemented in a microfluidic device (OrganoPlate). This non-PDMS device is based on the use of phaseguide (capillary pressure barriers that guide the liquid-air interface) technology and the hydrogel Matrigel as an extracellular matrix surrogate. To compare the morphological features and electrophysiological activity of the neuronal population differentiated from wild-type hNESCs with those of differentiated neurons carrying the LRRK2 mutation G2019S, a calcium imaging assay based on a calcium-sensitive dye (Fluo-4) and image analysis methods was implemented. Additionally, several aspects of fluid flow dynamics, rheological properties of Matrigel and its use as a surrogate extracellular matrix were investigated. Final characterization of the differentiated neuronal population was done using an immunostaining assay and microscopy techniques. The yields of differentiated dopaminergic neurons in the 2-lane OrganoPlate were in the range of 13% to 27%. Morphological (length of processes) and electrophysiological (firing patterns) characteristics of wild-type differentiated neurons and those carrying the LRRK2 mutation G2019S were determined by applying an image analysis pipeline. Velocity profiles and shear stress of fluorescent beads in Matrigel flowing in culture lanes of the 2-lane OrganoPlate were estimated using particle image velocimetry techniques. In this thesis, we integrate two new technologies to establish a new in vitro 3D cell-based model to study several aspects of PD at the cellular level, aiming to establish a microfluidic cell culture experimental platform to study PD, using a systems biology approach. [less ▲]
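A minimal sketch of a typical calcium-imaging readout, assuming a simple dF/F0 normalisation and threshold-based event counting on synthetic data; this is illustrative only and not the image analysis pipeline developed in the thesis:

    # Illustrative only: dF/F0 of one region-of-interest trace plus naive event counting.
    import numpy as np

    def df_over_f0(trace, baseline_frames=50):
        f0 = np.median(trace[:baseline_frames])      # baseline fluorescence estimate
        return (trace - f0) / f0

    def count_events(dff, threshold=0.2):
        above = dff > threshold
        # an "event" starts where the signal crosses the threshold upwards
        return int(np.sum(above[1:] & ~above[:-1]))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        trace = 100 + rng.normal(0, 1, 600)
        trace[200:220] += 30                          # synthetic calcium transient
        trace[400:430] += 25
        print("events detected:", count_events(df_over_f0(trace)))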

Detailed reference viewed: 38 (10 UL)
Full Text
To Share or not to Share: Access Control and Information Inference in Social Networks
Zhang, Yang UL

Doctoral thesis (2016)

Online social networks (OSNs) have been the most successful online applications during the past decade. Leading players in the business, including Facebook, Twitter and Instagram, attract a huge number of users. Nowadays, OSNs have become a primary way for people to connect, communicate and share life moments. Although OSNs have brought a lot of convenience to our lives, users' privacy, on the other hand, has become a major concern due to the large amount of personal data shared online. In this thesis, we study users' privacy in social networks from two aspects, namely access control and information inference. Access control is a mechanism, provided by OSNs, for users themselves to regulate who can view their resources. Access control schemes in OSNs are relationship-based, i.e., a user can define access control policies to allow others who are in a certain relationship with him to access his resources. Current OSNs have deployed multiple access control schemes; however, most of these schemes do not satisfy users' expectations due to limited expressiveness and usability. There are mainly two types of information that users share in OSNs, namely their activities and social relations. This information has provided an unprecedented chance for academia to understand human society and for industry to build appealing applications, such as personalized recommendation. However, the large quantity of data can also be used to infer a user's personal information, even when it is not shared by the user in OSNs. This thesis concentrates on users' privacy in online social networks from these two aspects, i.e., access control and information inference, and it is organized into two parts accordingly. The first part of this thesis addresses access control in social networks from three perspectives. First, we propose a formal framework based on a hybrid logic to model users' access control policies. This framework incorporates the notion of public information and provides users with a fine-grained way to control who can view their resources. Second, we design cryptographic protocols to enforce access control policies in OSNs. Under these protocols, a user can allow others to view his resources without leaking private information. Third, major OSN companies have deployed blacklists for users to enforce extra access control besides the normal access control policies. We formally model blacklists with the help of a hybrid logic and propose efficient algorithms to implement them in OSNs. The second part of this thesis concentrates on the inference of users' information in OSNs, using machine learning techniques. The targets of our inference are users' activities, represented by mobility, and social relations. First, we propose a method which uses a user's social relations to predict his locations. This method adopts a user's social community information to construct the location predictor, and performs the inference with machine learning techniques. Second, we focus on inferring the friendship between two users based on the common locations they have been to. We propose a notion, namely location sociality, that characterizes the extent to which a location is suitable for conducting social activities, and use this notion for friendship prediction. Experiments on real-life social network datasets have demonstrated the effectiveness of both inference methods. [less ▲]
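A minimal sketch of friendship inference from common locations, assuming invented features and data and a plain logistic regression; the thesis's approach additionally relies on a learned location sociality notion, so this is an illustration rather than the actual method:

    # Illustrative only: features and data are invented for the example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def pair_features(locs_u, locs_v, sociality):
        common = set(locs_u) & set(locs_v)
        mean_sociality = np.mean([sociality[l] for l in common]) if common else 0.0
        return [len(common), mean_sociality]

    if __name__ == "__main__":
        sociality = {"cafe": 0.9, "office": 0.3, "gym": 0.6, "home": 0.1}
        pairs = [
            ((["cafe", "gym"], ["cafe", "gym", "office"]), 1),   # friends
            ((["office"], ["office"]), 0),                        # colleagues only
            ((["cafe", "home"], ["cafe"]), 1),
            ((["gym"], ["office"]), 0),
        ]
        X = np.array([pair_features(u, v, sociality) for (u, v), _ in pairs])
        y = np.array([label for _, label in pairs])
        clf = LogisticRegression().fit(X, y)
        print(clf.predict([pair_features(["cafe", "gym"], ["cafe"], sociality)]))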

Detailed reference viewed: 52 (13 UL)
Mitarbeiterführung und Social-Media-Nutzung im Führungsalltag von Generation-Y-Führungskräften - Eine explorative Analyse mittels Mixed-Methods-Ansatz [Employee leadership and social media use in the day-to-day management of Generation Y leaders - an exploratory analysis using a mixed-methods approach]
Feltes, Florian UL

Doctoral thesis (2016)

The topic of this thesis is the qualitative and quantitative evaluation of leadership behaviour, and therefore the leadership style, of Generation Y (GenY), considering the use of social media in day-to-day management. It examines the question of how GenY leaders lead and how they use social media in this context. It explores the topic based on a sequential mixed methods approach of qualitative interviews and a quantitative online questionnaire. Using qualitative content analysis, it examines 25 qualitative interviews concerning the following aspects: leadership behaviour of Generation Y, generation-based differences in leadership and the varying strength of leadership styles, the influence of contextual factors like hierarchies, sector and company size on leadership style and use of social media, use of social media in day-to-day management, and, finally, connections between applied leadership styles and social media usage of GenY leaders. The findings and tendencies were then verified in an online questionnaire. The results of the online questionnaire [self-evaluation of leaders (N=406), bottom-up evaluation by employees (N=622)] show a significant discrepancy between the leaders’ statements and those of the employees. However, there are clear results and tendencies that confirm the findings of the qualitative study. It was established that GenY leaders show characteristics of task-oriented, person-oriented, transactional and transformational leadership. GenY leadership is characterised by clear outcome orientation, flat hierarchies and feedback. The use of social media varies considerably, depending, for example, on the context in which the leader works, such as sector and level of management. In summary, it can be stated that there is a connection between the strength of the leadership style and the usage of social media in day-to-day management. [less ▲]

Detailed reference viewed: 41 (7 UL)
Full Text
Dynamic Vehicular Routing in Urban Environments
Codeca, Lara UL

Doctoral thesis (2016)

Traffic congestion is a persistent issue that most of the people living in a city have to face every day. Traffic density is constantly increasing and, in many metropolitan areas, the road network has reached its limits and cannot easily be extended to meet the growing traffic demand. Intelligent Transportation Systems (ITS) are a worldwide trend in traffic monitoring that uses technology and infrastructure improvements in advanced communication and sensors to tackle transportation issues such as mobility efficiency, safety, and traffic congestion. The purpose of ITS is to take advantage of all available technologies to improve every aspect of mobility and traffic. Our focus in this thesis is to use these advancements in technology and infrastructure to mitigate traffic congestion. We discuss the state of the art in traffic flow optimization methods, their limitations, and the benefits of a new point of view. The traffic monitoring mechanism that we propose uses vehicular telecommunication to gather the traffic information that is fundamental to the creation of a consistent overview of the traffic situation, to provide real-time information to drivers, and to optimize their routes. In order to study the impact of dynamic rerouting on the traffic congestion experienced in the urban environment, we need a reliable representation of the traffic situation. In this thesis, traffic flow theory, together with mobility models and propagation models, forms the basis of a simulation environment capable of providing realistic and interactive urban mobility, which is used to test and validate our solution for mitigating traffic congestion. The topology of the urban environment plays a fundamental role in traffic optimization, not only in terms of mobility patterns, but also in the connectivity and infrastructure available. Given the complexity of the problem, we start by defining the main parameters we want to optimize, and the user interaction required, in order to achieve the goal. We aim to optimize the travel time from origin to destination with a selfish approach, focusing on each driver. We then evaluate the constraints and added value of the proposed optimization, providing a preliminary study of its impact on a simple scenario. Our evaluation is made in a best-case scenario using complete information, then in a more realistic scenario with partial information on the global traffic situation, where connectivity and coverage play a major role. The lack of a general-purpose, freely available, realistic and dependable scenario for Vehicular Ad Hoc Networks (VANETs) creates many problems for the research community in providing and comparing realistic results. To address these issues, we implemented a synthetic traffic scenario, based on a real city, to evaluate dynamic routing in a realistic urban environment. The Luxembourg SUMO Traffic (LuST) Scenario is based on the mobility derived from the City of Luxembourg. The scenario is built for Simulation of Urban MObility (SUMO) and it is compatible with Vehicles in Network Simulation (VEINS) and Objective Modular Network Testbed in C++ (OMNeT++), allowing it to be used in VANET simulations. In this thesis we present a selfish traffic optimization approach based on dynamic rerouting, able to mitigate the impact of traffic congestion in urban environments on a global scale. The general-purpose traffic scenario built to validate our results is already being used by the research community, is freely available under the MIT licence, and is hosted on GitHub. [less ▲]
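A minimal sketch of periodic dynamic rerouting with SUMO's TraCI Python API, in the spirit of the evaluation on the LuST scenario; the configuration file name and rerouting period are placeholders and this is not the thesis's implementation:

    # Illustrative only: reroute every vehicle periodically using current travel times.
    import traci

    REROUTE_PERIOD = 60  # simulation steps between rerouting decisions (assumed)

    def run(sumo_cfg="scenario.sumocfg"):
        traci.start(["sumo", "-c", sumo_cfg])
        step = 0
        while traci.simulation.getMinExpectedNumber() > 0:
            traci.simulationStep()
            if step % REROUTE_PERIOD == 0:
                for veh_id in traci.vehicle.getIDList():
                    # recompute the vehicle's route based on current edge travel times
                    traci.vehicle.rerouteTraveltime(veh_id)
            step += 1
        traci.close()

    if __name__ == "__main__":
        run()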

Detailed reference viewed: 133 (28 UL)
Full Text
Novel Methods for Multi-Shape Analysis
Bernard, Florian UL

Doctoral thesis (2016)

Multi-shape analysis aims to recognise, classify, or quantify morphological patterns or regularities within a set of shapes of a particular object class in order to better understand the object class of interest. One important aspect of multi-shape analysis is Statistical Shape Models (SSMs), where a collection of shapes is analysed and modelled within a statistical framework. SSMs can be used as a (statistical) prior that describes which shapes are more likely and which shapes are less likely to be plausible instances of the object class of interest. Assuming that the object class of interest is known, such a prior can for example be used in order to reconstruct a three-dimensional surface from only a few known surface points. One relevant application of this surface reconstruction is 3D image segmentation in medical imaging, where the anatomical structure of interest is known a priori and the surface points are obtained (either automatically or manually) from images. Frequently, Point Distribution Models (PDMs) are used to represent the distribution of shapes, where each shape is discretised and represented as a labelled point set. With that, a shape can be interpreted as an element of a vector space, the so-called shape space, and the shape distribution in shape space can be estimated from a collection of given shape samples. One crucial aspect for the creation of PDMs that is tackled in this thesis is how to establish (bijective) correspondences across the collection of training shapes. Evaluated on brain shapes, the proposed method results in an improved model quality compared to existing approaches whilst at the same time being superior with respect to runtime. The second aspect considered in this work is how to learn a low-dimensional subspace of the shape space that is close to the training shapes, where all factors spanning this subspace have local support. Compared to previous work, the proposed method models the local support regions implicitly, such that no initialisation of the size and location of these regions is necessary, which is advantageous in scenarios where this information is not available. The third topic covered in this thesis is how to use an SSM in order to reconstruct a surface from only a few surface points. By using a Gaussian Mixture Model (GMM) with anisotropic covariance matrices, which are oriented according to the surface normals, a more surface-oriented fitting is achieved compared to a purely point-based fitting when using the common Iterative Closest Point (ICP) algorithm. In comparison to ICP, we find that the GMM-based approach gives superior accuracy and robustness on sparse data. Furthermore, this work covers the transformation synchronisation method, which is a procedure for removing noise that accounts for transitive inconsistency in the set of pairwise linear transformations. One interesting application of this methodology that is relevant in the context of multi-shape analysis is to solve the multi-alignment problem in an unbiased/reference-free manner. Moreover, by introducing an improvement of the numerical stability, the methodology can be used to solve the (affine) multi-image registration problem from pairwise registrations. Compared to reference-based multi-image registration, the proposed approach leads to an improved registration accuracy and is unbiased/reference-free, which makes it ideal for statistical analyses. [less ▲]
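A minimal sketch of building a Point Distribution Model by PCA over shapes that are already in correspondence, on synthetic data; it illustrates the PDM idea only and none of the correspondence, local-support or GMM-fitting contributions of the thesis:

    # Illustrative only: mean shape plus principal modes of variation via SVD.
    import numpy as np

    def build_pdm(shapes):
        """shapes: array of shape (n_shapes, n_points * dim), correspondences given."""
        mean = shapes.mean(axis=0)
        centred = shapes - mean
        _, singular_values, vt = np.linalg.svd(centred, full_matrices=False)
        return mean, vt, singular_values

    def reconstruct(mean, modes, coefficients):
        """Generate a new shape from the mean and a few mode coefficients."""
        return mean + coefficients @ modes[: len(coefficients)]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        base = rng.normal(size=30)                    # 10 points in 3D, flattened
        shapes = np.stack([base + 0.1 * rng.normal(size=30) for _ in range(20)])
        mean, modes, sv = build_pdm(shapes)
        new_shape = reconstruct(mean, modes, np.array([0.5, -0.2]))
        print(new_shape.shape)  # (30,)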

Detailed reference viewed: 70 (15 UL)
Full Text
Emotion Regulation and Job Burnout: Investigating the relationship between emotion regulation knowledge, abilities and dispositions and their role in the prediction of Job Burnout
Seixas, Rita UL

Doctoral thesis (2016)

The present thesis has two goals: 1) to understand the relationship between three levels of emotion regulation - knowledge, abilities and dispositions - as proposed by the Three-level model of emotional competences (Mikolajczak, 2009) and 2) to investigate the role of these three levels in the prediction of job burnout, while accounting for the moderator role of the emotional labor of the job, and by distinguishing these effects in two professional sectors (finance and health-care sectors). Methodologically, besides emotion regulation knowledge, specific emotion regulation strategies - reappraisal, suppression, enhancement and expressive flexibility - are considered and assessed both as abilities and as dispositions. Results from goal 1 indicate that: a) knowledge, abilities and dispositions are not hierarchically structured; b) different strategies are independent of each other (both in terms of ability and in terms of disposition); c) the dispositions to reappraise and to enhance do not depend on a priori knowledge or ability, while the disposition to suppress decreases as the emotion regulation knowledge and the ability to enhance increase. Results from goal 2 indicate that emotion regulation knowledge, abilities and dispositions are incremental predictors of job burnout. Specifically: a) emotion regulation knowledge decreases emotional exhaustion, and reappraisal ability increases the sense of professional efficacy; b) expressive flexibility increases professional efficacy for workers in high emotional labor jobs, while its effect is detrimental for workers in low emotional labor jobs; c) suppression disposition protects individuals from professional inefficacy while suppression ability is detrimental in this regard. Finally, the results point out that different strategies have different impacts in different professional sectors, notably suppression, which appears as a detrimental strategy for finance workers and as a protective strategy for health-care workers. Overall, these results point out that several dimensions of emotion regulation are relevant in the prediction of job burnout. Specifically, knowledge, as well as abilities and dispositions, seems to play an incremental role in explaining variability in job burnout symptoms. The effects of the specific strategies should not be analyzed in a simplistic way but are instead better understood when taking into account the specificities of the job and the professional context. [less ▲]

Detailed reference viewed: 97 (14 UL)
Full Text
Enabling Model-Driven Live Analytics For Cyber-Physical Systems: The Case of Smart Grids
Hartmann, Thomas UL

Doctoral thesis (2016)

Advances in software, embedded computing, sensors, and networking technologies will lead to a new generation of smart cyber-physical systems that will far exceed the capabilities of today’s embedded systems. They will be entrusted with increasingly complex tasks like controlling electric grids or autonomously driving cars. These systems have the potential to lay the foundations for tomorrow’s critical infrastructures, to form the basis of emerging and future smart services, and to improve the quality of our everyday lives in many areas. In order to solve their tasks, they have to continuously monitor and collect data from physical processes, analyse this data, and make decisions based on it. Making smart decisions requires a deep understanding of the environment, internal state, and the impacts of actions. Such deep understanding relies on efficient data models to organise the sensed data and on advanced analytics. Considering that cyber-physical systems are controlling physical processes, decisions need to be taken very fast. This makes it necessary to analyse data in live, as opposed to conventional batch analytics. However, the complex nature combined with the massive amount of data generated by such systems imposes fundamental challenges. While data in the context of cyber-physical systems has some characteristics similar to big data, it holds a particular complexity. This complexity results from the complicated physical phenomena described by this data, which makes it difficult to extract a model able to explain such data and its various multi-layered relationships. Existing solutions fail to provide sustainable mechanisms to analyse such data in live. This dissertation presents a novel approach, named model-driven live analytics. The main contribution of this thesis is a multi-dimensional graph data model that brings raw data, domain knowledge, and machine learning together in a single model, which can drive live analytic processes. This model is continuously updated with the sensed data and can be leveraged by live analytic processes to support decision-making of cyber-physical systems. The presented approach has been developed in collaboration with an industrial partner and, in the form of a prototype, applied to the domain of smart grids. The addressed challenges are derived from this collaboration as a response to shortcomings in the current state of the art. More specifically, this dissertation provides solutions for the following challenges: First, data handled by cyber-physical systems is usually dynamic—data in motion as opposed to traditional data at rest—and changes frequently and at different paces. Analysing such data is challenging since data models usually can only represent a snapshot of a system at one specific point in time. A common approach consists in a discretisation, which regularly samples and stores such snapshots at specific timestamps to keep track of the history. Continuously changing data is then represented as a finite sequence of such snapshots. Such data representations would be very inefficient to analyse, since it would require mining the snapshots, extracting a relevant dataset, and finally analysing it. For this problem, this thesis presents a temporal graph data model and storage system, which consider time as a first-class property. A time-relative navigation concept enables frequently changing data to be analysed very efficiently. Secondly, making sustainable decisions requires anticipating what impact certain actions would have. In complex cyber-physical systems, situations can arise where hundreds or thousands of such hypothetical actions must be explored before a solid decision can be made. Every action leads to an independent alternative from which a set of other actions can be applied, and so forth. Finding the sequence of actions that leads to the desired alternative requires efficiently creating, representing, and analysing many different alternatives. Given that every alternative has its own history, this creates a very high combinatorial complexity of alternatives and histories, which is hard to analyse. To tackle this problem, this dissertation introduces a multi-dimensional graph data model (as an extension of the temporal graph data model) that enables many different alternatives to be efficiently represented, stored, and analysed in live. Thirdly, complex cyber-physical systems are often distributed, but to fulfil their tasks these systems typically need to share context information between computational entities. This requires analytic algorithms to reason over distributed data, which is a complex task since it relies on the aggregation and processing of various distributed and constantly changing data. To address this challenge, this dissertation proposes an approach to transparently distribute the presented multi-dimensional graph data model in a peer-to-peer manner and defines a stream processing concept to efficiently handle frequent changes. Fourthly, to meet future needs, cyber-physical systems need to become increasingly intelligent. To make smart decisions, these systems have to continuously refine behavioural models that are known at design time, with what can only be learned from live data. Machine learning algorithms can help to solve this unknown behaviour by extracting commonalities over massive datasets. Nevertheless, searching for a coarse-grained common behaviour model can be very inaccurate for cyber-physical systems, which are composed of completely different entities with very different behaviour. For these systems, fine-grained learning can be significantly more accurate. However, modelling, structuring, and synchronising many fine-grained learning units is challenging. To tackle this, this thesis presents an approach to define reusable, chainable, and independently computable fine-grained learning units, which can be modelled together with and on the same level as domain data. This allows machine learning to be woven directly into the presented multi-dimensional graph data model. In summary, this thesis provides an efficient multi-dimensional graph data model to enable live analytics of complex, frequently changing, and distributed data of cyber-physical systems. This model can significantly improve data analytics for such systems and empower cyber-physical systems to make smart decisions in live. The presented solutions combine and extend methods from model-driven engineering, models@run.time, data analytics, database systems, and machine learning. [less ▲]
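A minimal sketch of treating time as a first-class property, assuming a toy node whose attribute history is stored with validity timestamps so that the value at any time can be resolved without replaying snapshots; this is an illustration, not the storage system developed in the thesis:

    # Illustrative only: time-indexed attribute values with bisect-based lookup.
    # Timestamps are assumed to be inserted in increasing order.
    import bisect

    class TemporalNode:
        def __init__(self):
            self._times = {}    # attribute -> sorted list of timestamps
            self._values = {}   # attribute -> list of values aligned with timestamps

        def set(self, attr, t, value):
            self._times.setdefault(attr, []).append(t)
            self._values.setdefault(attr, []).append(value)

        def get(self, attr, t):
            """Return the latest value of `attr` set at or before time t."""
            times = self._times.get(attr, [])
            i = bisect.bisect_right(times, t)
            return self._values[attr][i - 1] if i else None

    if __name__ == "__main__":
        meter = TemporalNode()
        for t, load in [(0, 1.2), (10, 3.4), (20, 2.8)]:
            meter.set("load_kw", t, load)
        print(meter.get("load_kw", 15))   # 3.4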

Detailed reference viewed: 197 (45 UL)
La sanction de l'obligation légale d'information en droit des contrats de consommation : Étude de droit français et luxembourgeois. [The sanctioning of the legal duty to disclose information in consumer contract law: a study of French and Luxembourgish law]
Pitzalis Épouse Welch, Cécile Elise UL

Doctoral thesis (2016)

Numerous legal duties to disclose information are promulgated in consumer contract law by the legislative bodies of the European Union and are thus common to French and Luxembourgish law. In this context, the legal duty to disclose information pursues a double objective: protecting the consumer by informing their consent, and regulating the market by favoring fair competition. A breach of obligatory information disclosures by a professional must be sanctioned to ensure the effectiveness of the obligation. The penalty for breaching the legal obligation to disclose information in consumer contract law must therefore be analyzed from the angle of efficiency, that is, the capacity of its effects to reach the assigned goals. Analyzing French and Luxembourgish consumer contract law, which are similar but have their own specificities, provides a perspective on legislative choices in terms of sanctioning the legal duties to disclose information, and also informs proposals to improve the current systems of sanctions. [less ▲]

Detailed reference viewed: 28 (4 UL)
Full Text
Energy-efficient Communications in Cloud, Mobile Cloud and Fog Computing
Fiandrino, Claudio UL

Doctoral thesis (2016)

This thesis studies the problem of energy efficiency of communications in distributed computing paradigms, including cloud computing, mobile cloud computing and fog/edge computing. Distributed computing paradigms have significantly changed the way of doing business. With cloud computing, companies and end users can access the vast majority of services online through a virtualized environment on a pay-as-you-go basis. The main services typically consumed by cloud users are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Mobile cloud and fog/edge computing are the natural extension of the cloud computing paradigm for mobile and Internet of Things (IoT) devices. Based on offloading, the process of outsourcing computing tasks from mobile devices to the cloud, mobile cloud and fog/edge computing paradigms have become popular techniques to augment the capabilities of the mobile devices and to reduce their battery drain. Because mobile and IoT devices are equipped with a number of sensors, their proliferation has given rise to a new cloud-based paradigm for collecting data, called mobile crowdsensing, as it requires a large number of participants for proper operation. A plethora of communication technologies is applicable to distributed computing paradigms. For example, cloud data centers typically implement wired technologies while mobile cloud and fog/edge environments exploit wireless technologies such as 3G/4G, WiFi and Bluetooth. Communication technologies directly impact the performance and the energy drain of the system. This Ph.D. thesis analyzes, from a global perspective, the energy efficiency of communication systems in distributed computing paradigms. In particular, the following contributions are proposed: - A new framework of performance metrics for communication systems of cloud computing data centers. The proposed framework allows a fine-grain analysis and comparison of communication systems, processes, and protocols, defining their influence on the performance of cloud applications. - A novel model for the problem of computation offloading, which describes the workflow of mobile applications through a new Directed Acyclic Graph (DAG) technique. This methodology is suitable for IoT devices working in fog computing environments and was used to design an Android application, called TreeGlass, which performs recognition of trees using Google Glass. TreeGlass is evaluated experimentally in different offloading scenarios by measuring battery drain and time of execution as key performance indicators. - For mobile crowdsensing systems, novel performance metrics and a new framework for data acquisition, which exploits a new policy for user recruitment. The performance of the framework is validated through CrowdSenSim, which is a new simulator designed for mobile crowdsensing activities in large-scale urban scenarios. [less ▲]
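A minimal sketch of a DAG-based offloading decision, assuming invented per-task energy estimates and a fixed transmission cost; it illustrates the general idea only and is unrelated to the actual model or to TreeGlass:

    # Illustrative only: per-task local-vs-offload decision over a tiny workflow DAG.
    TASKS = {
        # task: (local_energy_mJ, data_to_send_kB, depends_on)
        "capture":  (50,  800, []),
        "detect":   (400, 300, ["capture"]),
        "classify": (700, 50,  ["detect"]),
        "display":  (30,  100, ["classify"]),
    }
    TX_ENERGY_PER_KB = 1.5   # hypothetical radio cost per kilobyte sent

    def topological_order(tasks):
        order, done = [], set()
        while len(order) < len(tasks):
            for name, (_, _, deps) in tasks.items():
                if name not in done and all(d in done for d in deps):
                    order.append(name)
                    done.add(name)
        return order

    def plan_offloading(tasks):
        plan = {}
        for name in topological_order(tasks):
            local_mj, data_kb, _ = tasks[name]
            offload_mj = data_kb * TX_ENERGY_PER_KB   # remote CPU energy not billed to the device
            plan[name] = "offload" if offload_mj < local_mj else "local"
        return plan

    if __name__ == "__main__":
        print(plan_offloading(TASKS))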

Detailed reference viewed: 157 (16 UL)
Full Text
GAMES AND STRATEGIES IN ANALYSIS OF SECURITY PROPERTIES
Tabatabaei, Masoud UL

Doctoral thesis (2016)

Information security problems typically involve decision makers who choose and adjust their behaviors in the interaction with each other in order to achieve their goals. Consequently, game theoretic models can potentially be a suitable tool for better understanding the challenges that the interaction of participants in information security scenarios brings about. In this dissertation, we employ models and concepts of game theory to study a number of subjects in the field of information security. In the first part, we take a game-theoretic approach to the matter of preventing coercion in elections. Our game models for the election involve an honest election authority that chooses between various protection methods with different levels of resistance and different implementation costs. By analysing these games, we find that society is better off if the security policy is publicly announced, and the authorities commit to it. Our focus in the second part is on the property of noninterference in information flow security. Noninterference is a property that captures confidentiality of actions executed by a given process. However, the property is hard to guarantee in realistic scenarios. We show that the security of a system can be seen as an interplay between functionality requirements and the strategies adopted by users, and based on this we propose a weaker notion of noninterference, which we call strategic noninterference. We also give a characterisation of strategic noninterference through unwinding relations for specific subclasses of goals and for the simplified setting where a strategy is given as a parameter. In the third part, we study the security of information flow based on the consequences of information leakage to the adversary. Models of information flow security commonly prevent any information leakage, regardless of how grave or harmless the consequences of the leakage can be. Even in models where each piece of information is classified as either sensitive or insensitive, the classification is “hardwired” and given as a parameter of the analysis, rather than derived from more fundamental features of the system. We suggest that information security is not a goal in itself, but rather a means of preventing potential attackers from compromising the correct behavior of the system. To formalize this, we first show how two information flows can be compared by looking at the adversary’s ability to harm the system. Then, we propose that the information flow in a system is effectively secure if it is as good as its idealized variant based on the classical notion of noninterference. Finally, we shift our focus to the strategic aspect of information security in voting procedures. We argue that the notions of receipt-freeness and coercion resistance are underpinned by the existence (or nonexistence) of a suitable strategy for some participants of the voting process. In order to back the argument formally, we provide logical “transcriptions” of the informal intuitions behind coercion-related properties that can be found in the existing literature. The transcriptions are formulated in the modal game logic ATL*, well known in the area of multi-agent systems. [less ▲]

Detailed reference viewed: 51 (15 UL)
Full Text
A Model-Driven Approach to Offline Trace Checking of Temporal Properties
Dou, Wei UL

Doctoral thesis (2016)

Offline trace checking is a procedure for evaluating requirements over a log of events produced by a system. The goal of this thesis is to present a practical and scalable solution for the offline checking of the temporal requirements of a system, which can be used in contexts where model-driven engineering is already a practice, where temporal specifications should be written in a domain-specific language not requiring a strong mathematical background, and where relying on standards and industry-strength tools for property checking is a fundamental prerequisite. The main contributions of this thesis are: i) the TemPsy (Temporal Properties made easy) language, a pattern-based domain-specific language for the specification of temporal properties; ii) a model-driven trace checking procedure, which relies on an optimized mapping of temporal requirements written in TemPsy into Object Constraint Language (OCL) constraints on a conceptual model of execution traces; iii) a model-driven approach to violation information collection, which relies on the evaluation of OCL queries on an instance of the trace model; iv) three publicly-available tools: 1) TemPsy-Check and 2) TemPsy-Report, implementing, respectively, the trace checking and violation information collection procedures; 3) an interactive visualization tool for navigating and analyzing the violation information collected by TemPsy-Report; v) an evaluation of the scalability of TemPsy-Check and TemPsy-Report, when applied to the verification of real properties. The proposed approaches have been applied to and evaluated on a case study developed in collaboration with a public service organization, active in the domain of business process modeling for eGovernment. The experimental results show that TemPsy-Check is able to analyze traces with one million events in about two seconds, and TemPsy-Report can collect violation information from such large traces in less than ten seconds; both tools scale linearly with respect to the length of the trace. [less ▲]
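A minimal sketch of offline trace checking for one simple response pattern ("every request is eventually followed by a response"), written directly in Python rather than in TemPsy/OCL; event names are invented:

    # Illustrative only: report the positions of unanswered occurrences of P.
    def check_response(trace, p="request", q="response"):
        """Return indices of P events with no later Q event (violations)."""
        violations = []
        for i, event in enumerate(trace):
            if event == p and q not in trace[i + 1:]:
                violations.append(i)
        return violations

    if __name__ == "__main__":
        trace = ["start", "request", "response", "request", "end"]
        print(check_response(trace))   # [3]: the second request is never answered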

Detailed reference viewed: 63 (28 UL)
Full Text
Mining Software Artefact Variants for Product Line Migration and Analysis
Martinez, Jabier UL

Doctoral thesis (2016)

Software Product Lines (SPLs) enable the derivation of a family of products based on variability management techniques. Inspired by the manufacturing industry, SPLs use feature configurations to satisfy different customer needs, along with reusable assets associated with the features, to allow systematic and planned reuse. SPLs are reported to have numerous benefits such as time-to-market reduction, productivity increase or product quality improvement. However, the barriers to adopting an SPL are equally numerous, requiring a high up-front investment in domain analysis and implementation. In this context, to create variants, companies more commonly rely on ad-hoc reuse techniques such as copy-paste-modify. Capitalizing on existing variants by extracting the common and varying elements is referred to as the extractive approach for SPL adoption. Extractive SPL adoption allows the migration from a single-system development mentality to SPL practices. Several activities are involved in achieving this goal. Due to the complexity of artefact variants, feature identification is needed to analyse the domain variability. The locations of the implementation elements associated with the features also need to be identified. In addition, feature constraints should be identified to guarantee that customers are not able to select invalid feature combinations (e.g., one feature requires or excludes another). Then, the reusable assets associated with the features should be constructed. Finally, to facilitate communication among stakeholders, a comprehensive feature model needs to be synthesized. While several approaches have been proposed for the above-mentioned activities, extractive SPL adoption remains challenging. A recurring barrier is that existing techniques are limited to the specific types of artefacts that they initially targeted, requiring inputs and providing outputs at different granularity levels and with different representations. Seamlessly addressing these activities within the same environment is a challenge in itself. This dissertation presents a unified, generic and extensible framework for mining software artefact variants in the context of extractive SPL adoption. We describe both its principles and its realization in Bottom-Up Technologies for Reuse (BUT4Reuse). Special attention is paid to model-driven development scenarios. A unified process and representation would enable practitioners and researchers to empirically analyse and compare different techniques. Therefore, we also focus on benchmarks and on the analysis of variants, in particular on benchmarking feature location techniques and on identifying families of variants in the wild for experimenting with feature identification techniques. We also present visualisation paradigms to support domain experts in feature naming during feature identification and to support feature constraints discovery. Finally, we investigate and discuss the mining of artefact variants for SPL analysis once the SPL is already operational. Concretely, we present an approach to find relevant variants within the SPL configuration space guided by end-user assessments. [less ▲]
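A minimal sketch of mining commonality across variants, assuming each variant is reduced to a flat set of implementation elements; it illustrates the block identification idea only and is not BUT4Reuse code:

    # Illustrative only: common block vs. variant-specific elements via set intersection.
    def mine_blocks(variants):
        """variants: dict mapping a variant name to its set of implementation elements."""
        common = set.intersection(*variants.values())
        specific = {name: elems - common for name, elems in variants.items()}
        return common, specific

    if __name__ == "__main__":
        variants = {
            "BankA": {"core", "login", "transfer", "sms_alert"},
            "BankB": {"core", "login", "transfer", "loans"},
            "BankC": {"core", "login", "loans"},
        }
        common, specific = mine_blocks(variants)
        print("common:", sorted(common))
        print("BankC-specific:", sorted(specific["BankC"]))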

Detailed reference viewed: 174 (28 UL)
Full Text
Essays on Financial Markets and Banking Regulation.
El Joueidi, Sarah UL

Doctoral thesis (2016)

Detailed reference viewed: 45 (16 UL)
Full Text
COMPLEX PROBLEM SOLVING IN UNIVERSITY SELECTION
Stadler, Matthias Johannes UL

Doctoral thesis (2016)

Detailed reference viewed: 24 (7 UL)
Full Text
AUTOMATED ANALYSIS OF NATURAL-LANGUAGE REQUIREMENTS USING NATURAL LANGUAGE PROCESSING
Arora, Chetan UL

Doctoral thesis (2016)

Natural Language (NL) is arguably the most common vehicle for specifying requirements. This dissertation devises automated assistance for some important tasks that requirements engineers need to perform in order to structure, manage, and elaborate NL requirements in a sound and effective manner. The key enabling technology underlying the work in this dissertation is Natural Language Processing (NLP). All the solutions presented herein have been developed and empirically evaluated in close collaboration with industrial partners. The dissertation addresses four different facets of requirements analysis: • Checking conformance to templates. Requirements templates are an effective tool for improving the structure and quality of NL requirements statements. When templates are used for specifying the requirements, an important quality assurance task is to ensure that the requirements conform to the intended templates. We develop an automated solution for checking the conformance of requirements to templates. • Extraction of glossary terms. Requirements glossaries (dictionaries) improve the understandability of requirements, and mitigate vagueness and ambiguity. We develop an automated solution for supporting requirements analysts in the selection of glossary terms and their related terms. • Extraction of domain models. By providing a precise representation of the main concepts in a software project and the relationships between these concepts, a domain model serves as an important artifact for systematic requirements elaboration. We propose an automated approach for domain model extraction from requirements. The extraction rules in our approach encompass both the rules already described in the literature and a number of important extensions developed in this dissertation. • Identifying the impact of requirements changes. Uncontrolled change in requirements presents a major risk to the success of software projects. We address two different dimensions of requirements change analysis in this dissertation: First, we develop an automated approach for predicting how a change to one requirement impacts other requirements. Next, we consider the propagation of change from requirements to design. To this end, we develop an automated approach for predicting how the design of a system is impacted by changes made to the requirements. [less ▲]
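A minimal sketch of template conformance checking and naive glossary-candidate extraction, assuming a single simplistic regular-expression template; the dissertation's solutions rely on full NLP pipelines, so this is only an illustration:

    # Illustrative only: a toy "<system> shall <verb> <object>" template and a
    # crude candidate extractor based on capitalised multi-word phrases.
    import re

    TEMPLATE = re.compile(r"^The \w+( \w+)? shall \w+ .+\.$")

    def conforms(requirement):
        return bool(TEMPLATE.match(requirement))

    def glossary_candidates(requirement):
        return re.findall(r"(?:[A-Z][a-z]+ )+[A-Z][a-z]+", requirement)

    if __name__ == "__main__":
        reqs = [
            "The simulator shall provide the operator with the Telemetry Display Panel.",
            "Telemetry should probably be shown somehow.",
        ]
        for r in reqs:
            print(conforms(r), glossary_candidates(r))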

Detailed reference viewed: 272 (62 UL)
HIGHER MOMENT ASSET PRICING: RISK PREMIUMS, METHODOLOGY AND ANOMALIES
Lin, Yuehao UL

Doctoral thesis (2016)

Detailed reference viewed: 41 (8 UL)
Full Text
Spatial modelling of feedback effects between urban structure and traffic-induced air pollution - Insights from quantitative geography and urban economics
Schindler, Mirjam UL

Doctoral thesis (2016)

Urban air pollution is among the largest environmental health risks, and its major source is traffic, which is also the main cause of the spatial variation of pollution concerns within cities. Spatial responses by residents to such a risk factor have important consequences for urban structures and, in turn, for the spatial distribution of air pollution and population exposure. These spatial interactions and feedbacks need to be understood comprehensively in order to design spatial planning policies that mitigate local health effects. This dissertation focusses on how residents make their location decisions when they are concerned about health effects associated with traffic-induced air pollution, and how these decisions shape future cities. Theoretical analytical and simulation models integrating urban economics and quantitative geography are developed to analyse and simulate the feedback effect between urban structure and population exposure to traffic-induced air pollution. Based on these, the spatial impacts of policy, socio-economic and technological frameworks are analysed. Building upon an empirical exploratory analysis, a chain of theoretical models simulates in 2D how the preference of households for green amenities, as an indirect appraisal of local air quality, and local neighbourhood design impact the environment, residents' health and well-being. In order to study the feedback effect of households' aversion to traffic-induced pollution exposure on urban structure, a 1D theoretical urban economics model is developed. Feedback effects on pollution and exposure distributions and on intra-urban equity are analysed. Equilibrium, first-best and second-best outcomes are compared and discussed with respect to their population distributions, spatial extents and environmental and health implications. Finally, a dynamic agent-based simulation model in 2D further integrates geographical elements into the urban economics framework. It thus enhances the representation of the spatial interactions between the location of households and traffic-induced air pollution within cities. Simulations contrast neighbourhood and distance effects of the pollution externality and emphasise the role of local urban characteristics in mitigating population exposure and consolidating health and environmental effects. The dissertation argues that the consideration of local health concerns due to traffic-induced air pollution in policy design challenges the concept of high urban densification, both locally and with respect to distance, and advises spatial differentiation.
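To make the residential trade-off analysed here concrete, a stylized monocentric-city utility (an illustrative textbook form, not the specification actually used in the thesis) for a household living at distance x from the employment centre could be written as

\[
U(x) \;=\; u\bigl(w - t\,x - R(x)\bigr) \;-\; \delta\, E(x),
\]

where w is income, t x the commuting cost, R(x) the land rent and E(x) the traffic-induced pollution exposure at x, with \delta capturing the households' aversion to exposure. In equilibrium, R(x) adjusts so that identical households are indifferent across locations; a larger \delta pushes population away from the most exposed locations, which in turn changes traffic flows and hence the exposure profile E(x), the feedback loop studied in the dissertation.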

Detailed reference viewed: 144 (17 UL)
Full Text
New approaches to understand conductive and polar domain walls by Raman spectroscopy and low energy electron microscopy
Nataf, Guillaume UL

Doctoral thesis (2016)

We investigate the structural and electronic properties of domain walls to achieve a better understanding of the conduction mechanisms in domain walls of lithium niobate and of the polarity of domain walls in calcium titanate. In the first part, we discuss the interaction between defects and domain walls in lithium niobate. A dielectric resonance with a low activation energy is observed, which vanishes under thermal annealing in monodomain samples while it remains stable in periodically poled samples. We therefore propose that domain walls stabilize polaronic states. We also report the evolution of Raman modes with increasing amounts of magnesium in congruent lithium niobate. We identify specific frequency shifts of the modes at the domain walls. The domain walls thus appear as regions where polar defects are stabilized. In a second step, we use mirror electron microscopy (MEM) and low energy electron microscopy (LEEM) to characterize the domains and domain walls at the surface of magnesium-doped lithium niobate. We demonstrate that out-of-focus settings can be used to determine the domain polarization. At domain walls, a local stray lateral electric field arising from different surface charge states is observed. In the second part, we investigate the polarity of domain walls in calcium titanate. We use resonant piezoelectric spectroscopy to detect elastic resonances induced by an electric field, which are interpreted as a piezoelectric response of the walls. A direct image of the domain walls in calcium titanate is also obtained by LEEM, showing a clear contrast in surface potential between domains and walls. This contrast is observed to change reversibly upon electron irradiation due to the screening of polarization charges at domain walls.

Detailed reference viewed: 64 (12 UL)
Regulating Hedge Funds in the EU
Seretakis, Alexandros UL

Doctoral thesis (2016)

Praised for enhancing the liquidity of the markets in which they trade and for improving the corporate governance of the companies they target, yet criticized for contributing to the instability of the financial system, hedge funds remain the most controversial vehicles of the modern financial system. Unconstrained until recently by regulation, operating under the radar of securities laws and with highly incentivized managers, hedge funds have managed to attract ever-increasing amounts of capital from sophisticated investors and have attracted the attention of the public, regulators and politicians. The financial crisis of 2007-2008, the most severe financial crisis since the Great Depression, prompted politicians and regulators in both the U.S. and Europe to redesign the financial system. The unregulated hedge fund industry, heavily criticized for contributing to or even causing the financial crisis, was one of the first to come under the regulator's ambit. The result was the adoption of the Dodd-Frank Act in the U.S. and the Alternative Investment Fund Managers Directive in the European Union. These two pieces of legislation are the first-ever attempt to directly regulate the hedge fund industry. Taking into account the exponential growth of the hedge fund industry, its beneficial effects and its importance for certain countries such as the U.S. and Luxembourg, one can easily understand the considerable impact of these regulations. A comparative and critical examination of these major pieces of regulation and of their potential impact on the hedge fund industry in Europe and the U.S. is absent from the academic literature, which is understandable considering that the Dodd-Frank Act was adopted in 2010 and the AIFM Directive in 2009. This PhD thesis attempts to fill this gap and to offer a critical assessment of both the Dodd-Frank Act and the AIFM Directive and of their impact on the hedge fund industry across the Atlantic. Furthermore, the thesis seeks to offer concrete proposals for the amelioration of the current EU regime with respect to hedge funds, building upon US regulations.

Detailed reference viewed: 62 (6 UL)
Full Text
Metalorganic chemical vapour deposition of p-type delafossite CuCrO2 semiconductor thin films: characterization and application to transparent p-n junction
Crêpellière, Jonathan Charles UL

Doctoral thesis (2016)

Transparent conducting oxides, such as ITO, FTO or AZO, are currently used in a number of commercial applications, such as transparent electrodes for flat panel displays, light-emitting diodes and solar cells. These applications rely essentially on n-type conductive materials. The development of electronic devices based on transparent p-n junctions has triggered intense research into the synthesis of p-type transparent conductors of sufficiently high quality. Copper-based delafossite materials are thought to hold some of the highest potential, and among them CuCrO2 has exhibited strong potential in terms of the trade-off between electrical conductivity and optical transparency. In this work, we report for the first time on CuCrO2 thin films grown using pulsed-injection MOCVD. We particularly highlight the influence of the growth temperature, the volume precursor ratio and the oxygen partial pressure on the chemical, morphological, structural, electrical and optical properties of the films. Delafossite CuCrO2 thin films are synthesized at temperatures as low as 310 °C on glass substrates, which is, to our knowledge, the lowest growth temperature reported. The films exhibit carbon contamination below 1%, an excess of chromium and p-type conductivity. The electrical conductivity at room temperature is measured to be as high as 17 S.cm-1, with a moderate visible transparency of 50%. This is the best trade-off between electrical conductivity and visible transparency reported for CuCrO2 thin films. We investigate the conduction mechanism with simultaneous electrical and thermoelectrical measurements, and band-conduction and small-polaron models are critically discussed. A functional transparent CuCrO2/ZnO p-n junction, based on only two layers, is synthesized with a visible transparency of 45-50%. The junction shows the typical current-voltage characteristic of a diode, with a high series resistance. The device acts efficiently as a UV detector.

Detailed reference viewed: 24 (2 UL)
Full Text
The C*-algebras of certain Lie groups
Günther, Janne-Kathrin UL

Doctoral thesis (2016)

In this doctoral thesis, the C*-algebras of the connected real two-step nilpotent Lie groups and of the Lie group SL(2,R) are characterized. Furthermore, as a preparation for an analysis of its C*-algebra, the topology of the spectrum of the semidirect product U(n) x H_n is described, where H_n denotes the Heisenberg Lie group and U(n) the unitary group acting by automorphisms on H_n. For the determination of the group C*-algebras, the operator-valued Fourier transform is used in order to map the respective C*-algebra into the algebra of all bounded operator fields over its spectrum. One has to find the conditions that are satisfied by the image of this C*-algebra under the Fourier transform, and the aim is to characterize it through these conditions. In the present thesis, it is proved that both the C*-algebras of the connected real two-step nilpotent Lie groups and the C*-algebra of SL(2,R) fulfill the same conditions, namely the “norm controlled dual limit” conditions. These C*-algebras are thereby described in this work, and the “norm controlled dual limit” conditions are explicitly computed in both cases. The methods used for the two-step nilpotent Lie groups and for the group SL(2,R) are completely different from each other. For the two-step nilpotent Lie groups, one considers their coadjoint orbits and uses Kirillov theory, while for the group SL(2,R) the calculations can be carried out more directly.
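For orientation (standard background on the tool named above, not a new result of the thesis), the operator-valued Fourier transform sends an element of the group C*-algebra to a field of bounded operators indexed by the spectrum: for an irreducible unitary representation (\pi, \mathcal{H}_\pi) of G and an integrable function a on G,

\[
\mathcal{F}(a)(\pi) \;=\; \pi(a) \;=\; \int_G a(g)\,\pi(g)\,\mathrm{d}g \;\in\; \mathcal{B}(\mathcal{H}_\pi),
\]

and this extends by continuity to all of C*(G). The characterization problem described in the abstract is then to single out, among all bounded operator fields over the spectrum, exactly those fields that arise as such images; the “norm controlled dual limit” conditions are the answer established in both cases.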

Detailed reference viewed: 44 (11 UL)
Full Text
Torsion and purity on non-integral schemes and singular sheaves in the fine Simpson moduli spaces of one-dimensional sheaves on the projective plane
Leytem, Alain UL

Doctoral thesis (2016)

This thesis consists of two individual parts, each one having an interest in itself, but which are also related to each other. In Part I we analyze the general notions of the torsion of a module over a non-integral ring and the torsion of a sheaf on a non-integral scheme. We give an explicit definition of the torsion subsheaf of a quasi-coherent O_X-module and prove a condition under which it is also quasi-coherent. Using the associated primes of a module and the primary decomposition of ideals in Noetherian rings, we review the main criteria for torsion-freeness and purity of a sheaf that have been established by Grothendieck and Huybrechts-Lehn. These allow us to study the relations between the two concepts. It turns out that they are equivalent in "nice" situations, but they can be quite different as soon as the scheme does not have equidimensional components. We illustrate the main differences on various examples. We also discuss some properties of the restriction of a coherent sheaf to its annihilator and to its Fitting support, and finally prove that sheaves of pure dimension are torsion-free on their support, no matter which closed subscheme structure it is given. Part II deals with the problem of determining "how many" sheaves in the fine Simpson moduli spaces M = M_{dm-1}(P2) of stable sheaves on the projective plane P2 with linear Hilbert polynomial dm-1, for d ≥ 4, are not locally free on their support. Such sheaves are called singular and form a closed subvariety M' in M. Using results of Maican and Drézet, the open subset M0 of sheaves in M without global sections may be identified with an open subvariety of a projective bundle over a variety of Kronecker modules N. By the Hilbert-Burch theorem we can describe sheaves in an open subvariety of M0 as twisted ideal sheaves of curves of degree d. In order to determine the singular ones, we look at ideals of points on planar curves. In the case of simple and fat curvilinear points, we characterize free ideals in terms of the absence of two coefficients in the polynomial defining the curve. This allows us to show that a generic fiber of M0 ∩ M' over N is a union of projective subspaces of codimension 2, and finally that M' is singular of codimension 2.
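As standard background for Part I (the textbook definition, with the thesis refining its sheaf-theoretic behaviour on non-integral schemes), over a commutative ring R the torsion submodule of an R-module M is taken with respect to the regular elements (non-zerodivisors) of R:

\[
T(M) \;=\; \{\, m \in M \;\mid\; r\,m = 0 \ \text{for some regular } r \in R \,\},
\]

and M is called torsion-free when T(M) = 0. On a scheme X, the torsion subsheaf of a quasi-coherent O_X-module is built from this notion; as the abstract points out, quasi-coherence of the result and its relation to purity become delicate precisely when X is non-integral or has components of different dimensions.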

Detailed reference viewed: 168 (56 UL)
Full Text
Entre régions : Le Maroc et le Mexique face aux migrations, dans les contextes d'intégration régionale
Nanga, Emeline Modeste UL

Doctoral thesis (2016)

This thesis analyses, from a comparative perspective, the close links between the phenomenon of (irregular) immigration, the regional integration processes currently under way in the EuroMed space and the Americas, and (national/human) security. In parallel, it examines the impact of these considerations on the fundamental rights of migrants in transit or in an irregular situation in these spaces, as well as on the role and responsibilities traditionally attributed to the State. The focus is on the EU and the United States as host countries, and on Mexico and Morocco as simultaneously countries of emigration, immigration and transit.

Detailed reference viewed: 81 (5 UL)
Full Text
Berezin-Toeplitz Quantization on K3 Surfaces and Hyperkähler Berezin-Toeplitz Quantization
Castejon-Diaz, Hector UL

Doctoral thesis (2016)

Given a quantizable Kähler manifold, the Berezin-Toeplitz quantization scheme constructs a quantization in a canonical way. In their seminal paper, Martin Bordemann, Eckhard Meinrenken and Martin Schlichenmaier proved that for a compact Kähler manifold this scheme is a well-defined quantization with the correct semiclassical limit. However, there are some manifolds which admit more than one (non-equivalent) Kähler structure. The question then arises whether the choice of a different Kähler structure gives rise to a completely different quantization or whether the resulting quantizations are related. An example of such objects are the so-called K3 surfaces, which have some extra relations between the different Kähler structures. In this work, we consider the family of K3 surfaces which admit more than one quantizable Kähler structure, and we use the relations between the different Kähler structures to study whether the corresponding quantizations are related or not. In particular, we prove that such K3 surfaces always have Picard number 20, which implies that their moduli space is discrete, and that the resulting quantum Hilbert spaces are always isomorphic, although not always in a canonical way. However, there exists an infinite subfamily of K3 surfaces for which the isomorphism is canonical. We also define new quantization operators on the product of the different quantum Hilbert spaces, and we call this process Hyperkähler quantization. We prove that these new operators have the correct semiclassical limit, as well as new properties inherited from the quaternionic numbers.
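For context, the operators behind the scheme named above take the following standard form (background, not one of the thesis's new results): for a compact quantizable Kähler manifold with quantum line bundle L and a smooth function f, the level-m Berezin-Toeplitz operator acts on the space of holomorphic sections H^0(M, L^{\otimes m}) by

\[
T_f^{(m)} \;=\; \Pi^{(m)} \, M_f \, \Pi^{(m)},
\]

where M_f is multiplication by f and \Pi^{(m)} is the orthogonal projection onto H^0(M, L^{\otimes m}). The Bordemann-Meinrenken-Schlichenmaier results include, in particular, \lim_{m\to\infty} \lVert T_f^{(m)} \rVert = \lVert f \rVert_\infty and, up to sign conventions for the Poisson bracket, \lVert m\,i\,[T_f^{(m)}, T_g^{(m)}] - T_{\{f,g\}}^{(m)} \rVert = O(1/m); this is the "correct semiclassical limit" referred to in the abstract.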

Detailed reference viewed: 94 (10 UL)
Full Text
Understanding the internationalization of higher education as a policy process. The case of Romania
Deca, Ligia UL

Doctoral thesis (2016)

This doctoral thesis analyzes the internationalization of higher education in Romania as both an international norm diffusion process and a discrete policy process, in the wider context of post-communist transition. It is conceived as a study of policy for policy, with the explicit aim of contributing to better decision-making at the national and institutional levels. As such, it is intended to facilitate a strategic pursuit of internationalization strategies in Romania, which may further inform our understanding of other similar (post-communist transition) national cases. The research objective is to understand the internationalization of higher education as a distinct policy process at the national and university level, by using a five-point star model of the policy field, which highlights the multiplicity of actors involved and acts as a ‘cat’s cradle’. A multi-theory approach to higher education governance is used to unpack the complexity of this policy field. Stakeholder and resource dependency theories are employed for understanding the articulation of the interests, capacities and interactions between the actors, while discursive institutionalism is used to look at the role of ideas (norms) mobilized by actors to influence policy change and to construct policy frames. In terms of scope, the thesis addresses the rationales, drivers and impacts of the internationalization of higher education, as well as its strategic use by relevant actors. The analysis concludes that internationalization in Romania, especially at the national level, is more a fruit of the existing context – the overall globalization trends, the Bologna Process and the EU pre- and post-accession policy processes – than a deliberate strategic pursuit based on either foresight or long-term planning. Political and economic rationales are predominant, to the detriment of those linked to social and cultural considerations, given the competing pressures linked to the demographic downturn, reduced public funding for universities, the perceived need to ‘catch up with Europe’ and the global competitiveness imperative. Another finding is that the internationalization of higher education has never reached the stage of policy formulation at the national level and in most Romanian universities; it was used as a legitimating discourse within higher education reform, but a genuine commitment to comprehensive internationalization policies was lacking, leading to an over-reliance on European programs and a narrow focus on mobility and research partnerships. When looking at the agents of change, it can be inferred that success in pursuing internationalization activities was mostly influenced by policy entrepreneurs and by leadership commitment and continuity, regardless of the institutional profile. At the same time, Romania has proven to be an exceptional laboratory for understanding internationalization as a distinctive public policy process within the higher education sector. This is due to the double centralization legacy of the higher education system (caused by its Napoleonic model of higher education and the communist influence) and the outsized influence of international actors in policy reform (e.g. UNESCO CEPES and the World Bank).
A number of the overall conclusions, mainly aimed at improving decision-making at the national level, are also potentially relevant for a wider regional audience: the need to minimize the over-reliance on international funds and on the technical assistance of international organizations; limiting over-regulation based on international norms; and improving the national role in the global discussions on internationalization and fighting double discourse. This latter aspect points to the difficulties of replicating policy concepts across borders in a non-contextualized form, especially when domestic contexts differ significantly from the pioneering setting of a given policy.

Detailed reference viewed: 36 (4 UL)
Full Text
FULL 3D RECONSTRUCTION OF DYNAMIC NON-RIGID SCENES: ACQUISITION AND ENHANCEMENT
Afzal, Hassan UL

Doctoral thesis (2016)

Recent advances in commodity depth or 3D sensing technologies have enabled us to move closer to the goal of accurately sensing and modeling the 3D representations of complex dynamic scenes. Indeed, in domains such as virtual reality, security, surveillance and e-health, there is now a greater demand for affordable and flexible vision systems which are capable of acquiring high-quality 3D reconstructions. Available commodity RGB-D cameras, though easily accessible, have a limited field of view and acquire noisy and low-resolution measurements, which restricts their direct usage in building such vision systems. This thesis targets these limitations and builds approaches around commodity 3D sensing technologies to acquire noise-free and feature-preserving full 3D reconstructions of dynamic scenes containing static or moving, rigid or non-rigid objects. A mono-view system based on a single RGB-D camera is incapable of acquiring a full 360-degree 3D reconstruction of a dynamic scene instantaneously. For this purpose, a multi-view system composed of several RGB-D cameras covering the whole scene is used. In the first part of this thesis, the problem of correctly aligning the information acquired from RGB-D cameras in a multi-view system to provide full and textured 3D reconstructions of dynamic scenes, instantaneously, is explored. This is achieved by solving the extrinsic calibration problem. This thesis proposes an extrinsic calibration framework which uses the 2D photometric and 3D geometric information acquired with RGB-D cameras, according to their relative (in)accuracies as affected by the presence of noise, in a single weighted bi-objective optimization. An iterative scheme is also proposed, which estimates the parameters of the noise model affecting both 2D and 3D measurements and solves the extrinsic calibration problem simultaneously. Results show an improvement in calibration accuracy compared to state-of-the-art methods. In the second part of this thesis, the enhancement of noisy and low-resolution 3D data acquired with commodity RGB-D cameras in both mono-view and multi-view systems is explored. This thesis extends the state of the art in mono-view, template-free, recursive 3D data enhancement, which targets dynamic scenes containing rigid objects and thus requires tracking only the global motions of those objects for view-dependent surface representation and filtering. This thesis proposes to target dynamic scenes containing non-rigid objects, which introduces the complex requirements of tracking relatively large local motions and maintaining data organization for view-dependent surface representation. The proposed method is shown to be effective in handling non-rigid objects of changing topologies. Building upon the previous work, this thesis overcomes the requirement of data organization by proposing an approach based on view-independent surface representation. View-independence decreases the complexity of the proposed algorithm and allows it the flexibility to process and enhance noisy data, acquired with multiple cameras in a multi-view system, simultaneously. Moreover, qualitative and quantitative experimental analysis shows this method to be more accurate in removing noise to produce enhanced 3D reconstructions of non-rigid objects. Although extending this method to a multi-view system would allow for obtaining instantaneous enhanced full 360-degree 3D reconstructions of non-rigid objects, it still lacks the ability to explicitly handle low-resolution data. Therefore, this thesis proposes a novel recursive dynamic multi-frame 3D super-resolution algorithm together with a novel 3D bilateral total variation regularization to filter out the noise, recover details and enhance the resolution of data acquired from commodity cameras in a multi-view system. Results show that this method is able to build accurate, smooth and feature-preserving full 360-degree 3D reconstructions of dynamic scenes containing non-rigid objects.
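As a loose illustration of what a single weighted bi-objective calibration cost can look like (a minimal sketch, not the dissertation's actual formulation; the pinhole intrinsics, correspondences and weights below are placeholders), the snippet combines a 2D reprojection term and a 3D alignment term for a candidate rigid transform between two cameras:

```python
import numpy as np

def bi_objective_cost(R, t, K, pts3d_src, pts2d_ref, pts3d_ref, w2d=1.0, w3d=1.0):
    """Weighted sum of a 2D reprojection error and a 3D alignment error.

    R, t      : candidate rotation (3x3) and translation (3,) between the cameras
    K         : 3x3 pinhole intrinsics of the reference camera
    pts3d_src : Nx3 points observed by the source camera
    pts2d_ref : Nx2 corresponding pixel observations in the reference camera
    pts3d_ref : Nx3 corresponding 3D measurements of the reference camera
    """
    transformed = pts3d_src @ R.T + t                # bring points into the reference frame
    proj = transformed @ K.T                         # project with the pinhole model
    proj = proj[:, :2] / proj[:, 2:3]
    err_2d = np.sum((proj - pts2d_ref) ** 2)         # photometric/reprojection objective
    err_3d = np.sum((transformed - pts3d_ref) ** 2)  # geometric 3D objective
    return w2d * err_2d + w3d * err_3d

# The weights w2d and w3d would typically be derived from the (estimated) relative
# noise levels of the 2D and 3D measurements; the cost is then minimised over R and t.
```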

Detailed reference viewed: 122 (14 UL)
Full Text
Fast reconstruction of compact context-specific network models
Pacheco, Maria Irene UL

Doctoral thesis (2016)

Recent progress in high-throughput data acquisition has shifted the focus from data generation to the processing and understanding of now easily collected patient-specific information. Metabolic models, which have already proven to be very powerful for the integration and analysis of such data sets, might be successfully applied in precision medicine in the near future. Context-specific reconstructions extracted from generic genome-scale models like Reconstruction X (ReconX) (Duarte et al., 2007; Thiele et al., 2013) or the Human Metabolic Reconstruction (HMR) (Agren et al., 2012; Mardinoglu et al., 2014a) thereby have the potential to become a diagnostic and treatment tool tailored to the analysis of specific groups of individuals. The use of computational algorithms as a tool for the routine diagnosis and analysis of metabolic diseases requires a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms have been published in the last ten years, only a fraction of them are suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. The aim of this thesis was to create a family of robust and fast algorithms for building context-specific models that could be used for the integration of different types of omics data and that are sensitive enough to be used in the framework of precision medicine. FASTCORE (Vlassis et al., 2014), which was developed in the frame of this thesis, is among the first context-specific model-building algorithms that do not optimize for a biological function and that run within seconds. Furthermore, FASTCORE is devoid of heuristic parameter settings. FASTCORE requires as input a set of reactions that are known to be active in the context of interest (core reactions) and a genome-scale reconstruction. FASTCORE uses an approximation of the cardinality function to force the core set of reactions to carry a flux above a threshold. Then an L1-minimization is applied to penalize the activation of reactions with a low confidence level while still constraining the set of core reactions to carry a flux. The rationale behind FASTCORE is to reconstruct a compact, consistent output model (one in which all reactions have the potential to carry non-zero flux) that contains all the core reactions and a small number of non-core reactions. Then, in order to cope with the non-negligible amount of noise that impedes direct comparison between genes, FASTCORE was extended to the FASTCORMICS workflow (Pires Pacheco and Sauter, 2014; Pires Pacheco et al., 2015a) for building models via the integration of microarray data. FASTCORMICS was applied to reveal control points regulated by genes under high regulatory load in the metabolic network of monocyte-derived macrophages (Pires Pacheco et al., 2015a) and to investigate the effect of the TRIM32 mutation on the metabolism of brain cells of mice (Hillje et al., 2013). The use of metabolic modelling in the frame of personalized medicine, high-throughput data analysis and the integration of omics data calls for a significant improvement in the quality of existing algorithms and of the generic metabolic reconstructions used as their input. To this aim, and to initiate a discussion in the community on how to improve the quality of context-specific reconstructions, benchmarking procedures were proposed and applied to seven recent context-specific algorithms, including FASTCORE and FASTCORMICS (Pires Pacheco et al., 2015a). Further, the problems arising from a lack of standardization of building and annotation pipelines and from the use of non-specific identifiers were discussed in the frame of a review. In this review, we also advocated a switch from gene-centred protein rules (GPR rules) to transcript-centred protein rules (Pfau et al., 2015).
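To give a flavour of the L1-minimization step described above (a simplified sketch, not the published FASTCORE algorithm or its actual LP formulations; the stoichiometric matrix, bounds and reaction indices are toy placeholders), the snippet below forces a set of core reactions to carry at least a small flux while minimizing the summed absolute flux through the remaining, penalized reactions, subject to steady-state constraints:

```python
import numpy as np
from scipy.optimize import linprog

def l1_support_minimisation(S, core, lb, ub, eps=1e-3):
    """Minimise sum |v_i| over non-core reactions s.t. S v = 0 and v_core >= eps.

    Variables are x = [v, t] with t_j >= |v_i| for every non-core reaction i.
    """
    m, n = S.shape
    non_core = [i for i in range(n) if i not in core]
    k = len(non_core)

    c = np.concatenate([np.zeros(n), np.ones(k)])       # minimise the sum of t

    A_eq = np.hstack([S, np.zeros((m, k))])              # steady state: S v = 0
    b_eq = np.zeros(m)

    A_ub, b_ub = [], []
    for j, i in enumerate(non_core):                     # enforce |v_i| <= t_j
        row = np.zeros(n + k); row[i] = 1.0; row[n + j] = -1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n + k); row[i] = -1.0; row[n + j] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for i in core:                                       # core reactions stay active: v_i >= eps
        row = np.zeros(n + k); row[i] = -1.0
        A_ub.append(row); b_ub.append(-eps)

    bounds = [(lb[i], ub[i]) for i in range(n)] + [(0, None)] * k
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    v = res.x[:n]
    return [i for i in range(n) if i in core or abs(v[i]) > 1e-9]  # supported reactions

# Toy network: 4 metabolites x 6 reactions; reaction 2 is the single core reaction.
S = np.array([[ 1, -1,  0,  0,  0,  0],
              [ 0,  1, -1,  0, -1,  0],
              [ 0,  0,  1, -1,  0,  0],
              [ 0,  0,  0,  0,  1, -1]], dtype=float)
keep = l1_support_minimisation(S, core={2}, lb=[-10.0] * 6, ub=[10.0] * 6)
print("reactions kept in the context-specific model:", keep)
```

In this toy example the solver keeps only the reactions needed to sustain a flux through the core reaction, which is the compactness idea behind the approach.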

Detailed reference viewed: 51 (22 UL)
Full Text
Essays on the macro-analysis of international migration
Delogu, Marco UL

Doctoral thesis (2016)

This dissertation consists of three chapters, all of which are self-contained works. The first chapter, “Globalizing labor and the world economy: the role of human capital”, is joint work with Prof. Dr. Frédéric Docquier and Dr. Joël Machado. We develop a microfounded model of the world economy aiming to compare the short- and long-run effects of migration restrictions on the world distribution of income. We find that a complete removal of migration barriers would increase the world average level of GDP per worker by 13% in the short run and by about 54% after one century. These results are very robust to our identification strategy and technological assumptions. The second chapter, titled “Infrastructure Policy: the role of informality and brain drain”, analyses the effectiveness of infrastructure policy in developing countries. I show that, at low levels of development, the possibility to work informally has a detrimental impact on infrastructure accumulation. I find that increasing the tax rate or enlarging the tax base can reduce macroeconomic performance in the short run while inducing long-run gains. These effects are amplified when brain drain is endogenous. The last chapter, titled “The role of fees in foreign education: evidence from Italy and the UK”, is mainly empirical. Relying upon a discrete choice model, together with Prof. Dr. Michel Beine and Prof. Dr. Lionel Ragot, I assess the determinants of international student mobility, exploiting, for the first time in the literature, data at the university level. We focus on student inflows to Italy and the UK, countries in which tuition fees vary across universities. We obtain evidence for a clear negative impact of tuition fees on international student inflows and confirm the positive impact of the quality of education. The estimations also find support for an important role of additional destination-specific variables, such as host capacity, the expected return to education and the cost of living in the vicinity of the university.
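One common way to take the kind of student-flow model described in the last chapter to data is a Poisson (PPML-style) count regression of flows on university-level covariates; the sketch below, run on synthetic data, is only meant to illustrate the setup, not the chapter's actual specification or variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_univ = 200

# Synthetic university-level covariates (placeholders, not the thesis data).
log_fees = rng.normal(9.0, 0.5, n_univ)        # log tuition fees
quality = rng.normal(0.0, 1.0, n_univ)         # standardised quality index
log_capacity = rng.normal(8.0, 0.7, n_univ)    # log host capacity

# Flows generated with a negative fee elasticity and a positive quality effect.
mu = np.exp(1.0 - 0.8 * (log_fees - 9.0) + 0.5 * quality + 0.6 * (log_capacity - 8.0))
flows = rng.poisson(mu)

X = sm.add_constant(np.column_stack([log_fees, quality, log_capacity]))
model = sm.GLM(flows, X, family=sm.families.Poisson()).fit()
print(model.params)   # recovered coefficients: fee effect negative, quality positive
```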

Detailed reference viewed: 60 (11 UL)
Full Text
AUTOMATED TESTING OF SIMULINK/STATEFLOW MODELS IN THE AUTOMOTIVE DOMAIN
Matinnejad, Reza UL

Doctoral thesis (2016)

Context. Simulink/Stateflow is an advanced system modeling platform which is prevalently used in the Cyber-Physical Systems domain, e.g., the automotive industry, to implement software controllers. Testing Simulink models is complex and poses several challenges to research and practice. Simulink models often have mixed discrete-continuous behaviors, and their correct behavior crucially depends on time. Inputs and outputs of Simulink models are signals, i.e., values evolving over time, rather than discrete values. Further, Simulink models are required to operate satisfactorily for a large variety of hardware configurations. Finally, developing test oracles for Simulink models is challenging, particularly for requirements capturing their continuous aspects. In this dissertation, we focus on testing the mixed discrete-continuous aspects of Simulink models, an important, yet not well-studied, problem. The existing Simulink testing techniques are more amenable to testing and verification of logical and state-based properties. Further, they are mostly incompatible with Simulink models containing time-continuous blocks, and floating-point and non-linear computations. In addition, they often rely on the presence of formal specifications, which are expensive and rare in practice, to automate test oracles. Approach. In this dissertation, we propose a set of approaches based on meta-heuristic search and machine learning techniques to automate the testing of software controllers implemented in Simulink. The work presented in this dissertation is motivated by Simulink testing needs at Delphi Automotive Systems, a world-leading parts supplier to the automotive industry. To address the above-mentioned challenges, we rely on the discrete-continuous output signals of Simulink models and provide output-based black-box test generation techniques to produce test cases with high fault-revealing ability. Our algorithms are black-box and hence compatible with Simulink/Stateflow models in their entirety. Further, we do not rely on the presence of formal specifications to automate test oracles. Specifically, we propose two sets of test generation algorithms for closed-loop and open-loop controllers implemented in Simulink: (1) For closed-loop controllers, test oracles can be formalized and automated relying on the feedback received from the controlled system. We characterize the desired behavior of closed-loop controllers in a set of common requirements, and then use search to identify the worst-case test scenarios of the controller with respect to each requirement. (2) For open-loop controllers, we cannot automate test oracles since the feedback is not available, and test oracles are manual. Hence, we focus on providing test generation algorithms that develop small, effective test suites with high fault-revealing ability. We further provide a test case prioritization algorithm to rank the generated test cases based on their fault-revealing ability and lower the manual oracle cost. Our test generation and prioritization algorithms are evaluated with several industrial and publicly available Simulink models. Specifically, we showed that the fault-revealing ability of our approach outperforms that of Simulink Design Verifier (SLDV), the only test generation toolbox of Simulink, and that of a well-known commercial Simulink testing tool. In addition, using our approach, we were able to detect several real faults in Simulink models from our industry partner, Delphi, which had not previously been found by manual testing based on domain expertise and existing Simulink testing tools.
Contributions. The main research contributions in this dissertation are:
1. An automated approach for testing closed-loop controllers that characterizes the desired behavior of such controllers in a set of common requirements, and combines random exploration and search to effectively identify the worst-case test scenarios of the controller with respect to each requirement.
2. An automated approach for testing highly configurable closed-loop controllers by accounting for all their feasible configurations and providing strategies to scale the search to large multi-dimensional spaces, relying on dimensionality reduction and surrogate modelling.
3. A black-box output-based test generation algorithm for open-loop Simulink models which uses search to maximize the likelihood of the presence of specific failure patterns (i.e., anti-patterns) in Simulink output signals.
4. A black-box output-based test generation algorithm for open-loop Simulink models that maximizes output diversity to develop small test suites with diverse output signal shapes and, hence, high fault-revealing ability.
5. A test case prioritization algorithm which relies on the output diversity of the generated test suites, in addition to the dynamic structural coverage achieved by individual tests, to rank test cases and help engineers identify faults faster by inspecting a few test cases.
6. Two test generation tools, namely CoCoTest and SimCoTest, that respectively implement our test generation approaches for closed-loop and open-loop controllers.
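As a highly simplified sketch of the output-diversity idea described above (illustrative only; the dissertation's algorithms, the stand-in simulation of the model under test and the distance measure are all assumptions made for the example), the snippet below greedily selects, from a pool of candidate input signals, a small test suite whose simulated output signals are as different from each other as possible:

```python
import numpy as np

def simulate(input_signal):
    """Placeholder for running the model under test; returns an output signal."""
    # Stand-in dynamics: a leaky integrator of the input, just to produce an output.
    out = np.zeros_like(input_signal)
    for k in range(1, len(input_signal)):
        out[k] = 0.9 * out[k - 1] + 0.1 * input_signal[k]
    return out

def select_diverse_tests(candidate_inputs, suite_size):
    """Greedy selection maximising the minimum pairwise distance between outputs."""
    outputs = [simulate(u) for u in candidate_inputs]
    selected = [0]                                   # seed with the first candidate
    while len(selected) < suite_size:
        best, best_score = None, -1.0
        for i in range(len(outputs)):
            if i in selected:
                continue
            score = min(np.linalg.norm(outputs[i] - outputs[j]) for j in selected)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
# Candidate inputs: random step signals over 100 time steps.
candidates = [np.repeat(rng.uniform(-1, 1, 5), 20) for _ in range(50)]
suite = select_diverse_tests(candidates, suite_size=5)
print("indices of selected test inputs:", suite)
```

The intuition is that test cases producing very different output signal shapes are more likely to exercise different behaviours and hence to reveal different faults, which is why diversity can substitute for a formal oracle when outputs must be inspected manually.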

Detailed reference viewed: 105 (22 UL)
Full Text
Domain Completeness of Model Transformations and Synchronisations
Nachtigall, Nico UL

Doctoral thesis (2016)

The intrinsic question of most activities in information science, in practice or in science, is “Does a given system satisfy the requirements regarding its application?” Commonly, requirements are expressed and made accessible by means of models, mostly in a diagrammatic representation as visual models. The requirements may change over time and are often defined from different perspectives and within different domains. This implies that models may be transformed either within the same domain-specific visual modelling language or into models in another language. Furthermore, model updates may be synchronised between different models. Most types of visual models can be represented by graphs, where model transformations and synchronisations are performed by graph transformations. The theory of graph transformations emerged from its origins in the late 1960s and early 1970s as a generalisation of term and tree rewriting systems and has grown into an important field of (theoretical) computer science, with applications particularly in visual modelling techniques, model transformations, synchronisations and behavioural specifications of models. Its formal foundations and, likewise, its visual notation enable both precise definitions and proofs of important properties of model transformations and synchronisations from a theoretical point of view, and an intuitive approach to specifying transformations and model updates from an engineer’s point of view. The recent results were presented in the EATCS monographs “Fundamentals of Algebraic Graph Transformation” (FAGT) in 2006 and its sequel “Graph and Model Transformation: General Framework and Applications” (GraMoT) in 2015. This thesis concentrates on one important property of model transformations and synchronisations, namely syntactical completeness. Syntactical completeness of model transformations means that, given a specification for transforming models from a source modelling language into models in a target language, all source models can be completely transformed into corresponding target models. In the same context, syntactical completeness of model synchronisations means that all source model updates can be completely synchronised, resulting in corresponding target model updates. This work is essentially based on the GraMoT book and mainly extends its results for model transformations and synchronisations based on triple graph grammars with a new, more general notion of syntactical completeness, namely domain completeness, together with corresponding verification techniques. Furthermore, the results are instantiated to the verification of the syntactical completeness of software transformations and synchronisations. The well-known transformation of UML class diagrams into relational database models and the transformation of programs of a small object-oriented programming language into class diagrams serve as running examples. The existing AGG tool is used to support the verification of the given examples in practice.
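The formal notion above is defined and verified on (triple) graph grammars; as a deliberately naive illustration of the underlying intuition only (not the verification technique developed in the thesis; all rule and type names below are invented), the toy check asks whether a rule set covers every construct type of a source language, so that no source model is left untransformable:

```python
# Toy illustration only: real domain completeness is defined and verified on
# (triple) graph grammars, not on flat sets of construct types as done here.
def covered_constructs(rules):
    """Union of the source construct types handled by the given rules."""
    return set().union(*(rule["source_types"] for rule in rules))

def is_syntactically_complete(source_language_types, rules):
    """Naive check: every construct of the source language is matched by some rule."""
    missing = set(source_language_types) - covered_constructs(rules)
    return not missing, missing

# Hypothetical class-diagram-to-relational-schema rules (names are made up).
rules = [
    {"name": "Class2Table",      "source_types": {"Class"}},
    {"name": "Attribute2Column", "source_types": {"Attribute"}},
    {"name": "Association2FKey", "source_types": {"Association"}},
]
ok, missing = is_syntactically_complete({"Class", "Attribute", "Association", "Inheritance"}, rules)
print(ok, missing)   # False, {'Inheritance'}: this rule set cannot transform all source models
```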

Detailed reference viewed: 72 (14 UL)