Location privacy-preserving mechanisms (LPPMs) use different computational approaches to protect personal data in queries to location-based services. The resulting privacy claims differ between mechanisms and are not directly comparable. We propose the σ-calculus, a process calculus for modelling LPPMs and their privacy guarantees. Furthermore, we present LP3Verif, a model-checking tool that verifies location privacy properties of LPPMs using the σ-calculus.
We present an abstract domain for integer value analysis that is especially suited for the analysis of low-level code, where precision is of particular interest. Traditional value analysis domains trade precision for efficiency by approximating value sets with convex shapes. Our value sets are based on modified binary decision diagrams (BDDs), which enable size-efficient storage of integer sets. The associated transfer functions are defined on the structure of the BDDs, making them efficient even for very large sets. We provide the domain in the form of a library, which we use to implement a plug-in for the binary analysis framework Jakstab. We evaluate the library by comparing it to the set representations used in traditional value analyses, and the plug-in by applying it to the CPU2006 benchmarks.
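To illustrate the idea of representing integer sets by decisions over their bits, here is a minimal sketch (not the thesis library): a set of 8-bit integers stored as a binary decision tree over the bits, most-significant first, with set union defined structurally on the trees. A real BDD-based domain additionally shares isomorphic subgraphs (hash-consing), which is what makes large, regularly structured sets cheap to store.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>

// A set of W-bit integers as a binary decision tree over the bits.
// Leaves record membership; inner nodes branch on one bit.
struct Node {
    bool terminal;                 // at a leaf: member (true) or not (false)
    std::shared_ptr<Node> lo, hi;  // children for bit = 0 / bit = 1
};
using Ref = std::shared_ptr<Node>;

Ref leaf(bool v) { return std::make_shared<Node>(Node{v, nullptr, nullptr}); }

constexpr int W = 8;  // bit width of the stored integers

// Insert one value by rebuilding the spine along its bit path.
Ref insert(const Ref& n, uint8_t value, int bit = W - 1) {
    if (bit < 0) return leaf(true);
    Ref lo = (n && n->lo) ? n->lo : leaf(false);
    Ref hi = (n && n->hi) ? n->hi : leaf(false);
    if ((value >> bit) & 1) hi = insert(hi, value, bit - 1);
    else                    lo = insert(lo, value, bit - 1);
    return std::make_shared<Node>(Node{false, lo, hi});
}

bool contains(const Ref& n, uint8_t value, int bit = W - 1) {
    if (!n) return false;
    if (bit < 0) return n->terminal;
    const Ref& child = ((value >> bit) & 1) ? n->hi : n->lo;
    return contains(child, value, bit - 1);
}

// Union is defined on the structure of the two trees -- the shape of a
// transfer function in a BDD-based domain, minus node sharing.
Ref unite(const Ref& a, const Ref& b, int bit = W - 1) {
    if (!a) return b;
    if (!b) return a;
    if (bit < 0) return leaf(a->terminal || b->terminal);
    return std::make_shared<Node>(
        Node{false, unite(a->lo, b->lo, bit - 1), unite(a->hi, b->hi, bit - 1)});
}
```

Transfer functions such as addition or bitwise operations can be defined by similar structural recursions over the decision levels, rather than by enumerating the set's elements.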
We present a structural analysis (SA) that identifies key positions for program comprehension of executables, and a differential analysis (DA) that identifies similarities between pairs of executables. Our SA is based on a new way of abstracting statement-level control-flow information into a representation with a graphical view that allows intuitive identification of a set of provided patterns. Our DA is based on sequence alignments and needs only the position and size information of the functions. We manually validate the SA on a set of diverse executables and in one detailed case study, and automatically validate the DA on three sets of executables, namely sets of similar and dissimilar executables as well as a set of academic executables obtained from a third party.
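The alignment idea behind such a DA can be sketched as follows (an illustrative reconstruction, not the dissertation's actual algorithm or scoring): each executable is reduced to the sequence of its function sizes, and two executables are compared with a Needleman-Wunsch-style global alignment, where equal sizes count as matches and gaps model inserted or deleted functions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Global alignment score over two function-size sequences.
// Hypothetical scoring: match = 2, mismatch = -1, gap = -1.
int align_score(const std::vector<int>& a, const std::vector<int>& b,
                int match = 2, int mismatch = -1, int gap = -1) {
    const size_t n = a.size(), m = b.size();
    std::vector<std::vector<int>> dp(n + 1, std::vector<int>(m + 1, 0));
    for (size_t i = 1; i <= n; ++i) dp[i][0] = dp[i - 1][0] + gap;
    for (size_t j = 1; j <= m; ++j) dp[0][j] = dp[0][j - 1] + gap;
    for (size_t i = 1; i <= n; ++i)
        for (size_t j = 1; j <= m; ++j) {
            int s = (a[i - 1] == b[j - 1]) ? match : mismatch;
            dp[i][j] = std::max({dp[i - 1][j - 1] + s,   // align both
                                 dp[i - 1][j] + gap,     // gap in b
                                 dp[i][j - 1] + gap});   // gap in a
        }
    return dp[n][m];
}
```

A high score indicates that the two executables share long runs of functions with identical sizes at corresponding positions, which is exactly the information available without disassembling function bodies.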
This dissertation explores online model checking, a dynamic variant of model checking that can be used to prove properties of a system even when no long-term model of the system is available. A framework is presented that provides an integrated approach to building such online model-checking applications, with data acquisition, data processing, model simulation, model verification, and simulation visualization features. The correctness of two transformation reductions used in the framework is shown. Furthermore, two case studies on online model checking in the medical domain are presented.
This thesis offers a new component-based approach to programming language implementation with the specific purpose of experimentally studying language characteristics (CBM). We ship the first set of reusable syntax, semantics, and analysis components for a selection of lazy languages. We define and formulate the Expression Compatibility Problem and present two solutions for it, based on additive component composition and feature-oriented programming, respectively. Inspired by lightweight family polymorphism, we introduce the first formal model (syntax, static semantics, and dynamic semantics) for CBM. We employ this model as a means for high-level description of our components and their use.
This thesis introduces a method based on program transformation for developing generic software libraries. More precisely, the signatures of generic functions, i.e., functions parameterised on types, are transformed from a functional language to an object-oriented language. Type-level functional constructs, such as higher-order functions and type constructors, are mapped to type parameters specified using a mechanism called concepts. The transformation is independent of particular languages and can be formally described.
Moore’s law has correctly predicted that the number of transistors in integrated circuits doubles approximately every two years. This exponential growth has been made possible by the continued shrinking of individual transistors and has allowed processing speed and memory sizes to grow at similar rates. However, smaller components become increasingly vulnerable to environmental fluctuations, making software systems especially vulnerable to a new class of soft errors originating from voltage spikes produced by cosmic radiation.
To achieve a dependable system, redundant hardware or software is needed to detect and recover from these errors. Traditionally, implementing such fault tolerance has been a hardware-only concern, but software-implemented hardware fault tolerance is now increasingly advocated. Software enables a configurable solution where protection can be applied only to those regions of an application that actually need it, thereby avoiding protection overhead for the non-critical regions.
However, software-implemented hardware fault tolerance has yet to deliver truly configurable solutions. The hardening techniques presented so far have mostly been introduced in isolation, making the integration of different mechanisms unnecessarily difficult. We propose a type-based approach and demonstrate a C++ type-library implementation that can be used to express existing hardening techniques in a way that allows straightforward integration.
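The core of the type-based idea can be sketched as follows (hypothetical names, not the thesis library's actual API): wrapping a value in a template type attaches a hardening technique to it, here dual modular redundancy with a consistency check on every read. Further techniques become further such wrapper types, which is what makes per-variable, per-region protection composable.

```cpp
#include <stdexcept>

// Hypothetical hardened type: stores two replicas of the value and checks
// them against each other on every read. A mismatch indicates a soft error.
template <typename T>
class Redundant {
    T value_;
    T copy_;  // second replica
public:
    Redundant(T v) : value_(v), copy_(v) {}

    T get() const {
        if (value_ != copy_) throw std::runtime_error("soft error detected");
        return value_;
    }

    Redundant& operator=(T v) {
        value_ = v;
        copy_ = v;
        return *this;
    }
};
```

Only variables declared with such a type pay the redundancy cost; plain `T` variables elsewhere in the program remain unprotected and overhead-free, which is the configurability the text argues for.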
To analyze the connection between variable types and soft-error impact, we design and implement a type-based program analysis to infer low-level types in assembly listings. We apply the analysis to an existing fault-injection experiment and conclude that the soft-error impact varies greatly with type.
We leverage such type-specific behavior to design type-specific fault tolerance that provides better error coverage and better performance compared to generic fault-tolerance mechanisms. Specifically, we target one of the simplest and most common data types—Booleans. Although Booleans are often critical, their traditional representation is ill-protected against soft errors. We propose a simple modification to the standard Boolean that improves fault tolerance with no performance overhead. Additionally, we propose four efficient alternative designs, based on bit counting, rotation, and partitioning.