Miscellaneous

Cunha A, Kindler E.  2015.  Editorship of Proceedings of the 4th International Workshop on Bidirectional Transformations, co-located with Software Technologies: Applications and Foundations (STAF 2015).
[Anonymous].  2015.  Editorship of Proceedings of the First Workshop on Principles and Practice of Consistency for Distributed Data. Abstract

Consistency is one of the fundamental issues of distributed computing. There are many competing consistency models, with subtly different power in principle. In practice, the well-known Consistency-Availability-Partition Tolerance trade-off translates to difficult choices between fault tolerance, performance, and programmability. The issues and trade-offs are particularly vexing at scale, with a large number of processes or a large shared database, and in the presence of high latency and failure-prone networks. It is clear that there is no one universally best solution. Possible approaches cover the whole spectrum between strong and eventual consistency. Strong consistency (total ordering via, for example, linearizability or serializability) provides familiar and intuitive semantics but requires slower and, in some contexts, fragile coordination. The unlimited parallelism allowed by weaker models such as eventual consistency promises high performance, but divergence and conflicts make it difficult to ensure useful application invariants, and meta-data is hard to keep in check. The research and development communities are actively exploring intermediate models (replicated data types, monotonic programming, CRDTs, LVars, causal consistency, red-blue consistency, invariant- and proof-based systems, etc.), designed to improve efficiency, programmability, and overall operation without negatively impacting scalability.

This workshop aims to investigate the principles and practice of weak consistency models for large-scale, distributed shared data systems. It brings together theoreticians and practitioners from different backgrounds: system development, distributed algorithms, concurrency, fault tolerance, databases, languages, and verification, spanning both academia and industry.

Zhang Y, Jones P, Masci P.  2015.  Model Based Design and Safety Analysis of Medical Device User Interfaces. Abstract

Plain Language Synopsis: This research applies model-based engineering techniques to develop novel verification and hazard analysis methods that help manufacturers establish the quality and safety of medical device user interface designs. Artifacts produced by these methods provide evidence that regulators can use to quickly and objectively assess the safety of a device's interaction with users.

Abstract: Designs of medical device Human Computer Interfaces (HCI) need to be robust and appropriately reactive to user actions. There is evidence that the HCI design of some devices on the market can cause use errors and erroneously process user input, which may subsequently lead to serious patient harm. Model-based engineering (MBE) technology can be used to model HCI design decisions with mathematical precision. This technology can facilitate the development of HCI models that clearly define the device's interaction behavior with users, offering a formal (mathematical) basis for reasoning about and verifying the safety of the design. Automatic tool support is available to facilitate such reasoning and verification activities. Tool artifacts provide manufacturers and regulators an objective and scientific basis for assessing the safety of medical device user interfaces. The authors have successfully applied MBE techniques to the analysis of medical device user interfaces in two studies. In the first study, automatic model extraction was applied to the user interface software of a marketed infusion pump to produce a model that captures the pump's user interaction behavior. Automated formal verification of the model uncovered several design flaws in the pump's user interface that could lead to severe consequences, including the pump ignoring key presses, which might cause patient overdose. In the second study, the authors captured the user interface software design common in medical devices with a generic user interface model. Based on this generic model, a hazard analysis technique was proposed that integrates models of human cognitive processes and general interaction design principles to guide a more comprehensive and systematic identification of design flaws in user interfaces. Preliminary experiments showed that this hazard analysis technique can identify three times as many software-related hazards in user interface designs as standard hazard analysis techniques.

Oliveira JN, Miraldo VC.  2015.  Keep definition, change category: a practical approach to state-based system calculi. Journal of Logical and Algebraic Methods in Programming. Abstract

Faced with the need to quantify software (un)reliability in the presence of faults, the semantics of state-based systems is urged to evolve towards quantified (e.g. probabilistic) nondeterminism. When one is approaching such semantics from a categorical perspective, this inevitably calls for some technical elaboration, in a monadic setting.
This paper proposes that such an evolution be undertaken without sacrificing the simplicity of the original (qualitative) definitions, by keeping quantification implicit rather than explicit. The approach is a monad lifting strategy whereby, under some conditions, definitions can be preserved provided the semantics moves to another category.
The technique is illustrated by showing how to introduce probabilism in an existing software component calculus, by moving to a suitable category of matrices and using linear algebra in the reasoning.
The paper also addresses the problem of preserving monadic strength in the move from the original to the target (Kleisli) categories, a topic which bears a relationship to recent studies in categorical physics.
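
To make the contrast concrete, the following is a minimal Haskell sketch of quantified nondeterminism made explicit through a finite distribution monad; the names and the faulty-component example are illustrative assumptions, not the paper's matrix-based construction, which keeps this quantification implicit.

```haskell
-- A finite probability distribution as a weighted list of outcomes
-- (an assumed, simplified representation for illustration only).
newtype Dist a = Dist { runDist :: [(a, Double)] } deriving Show

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- A hypothetical faulty component: behaves correctly with probability 0.95,
-- silently drops the increment otherwise.
faultyIncrement :: Int -> Dist Int
faultyIncrement n = Dist [ (n + 1, 0.95), (n, 0.05) ]
```

Sequencing two such faulty steps with >>= multiplies probabilities along each execution path, which is precisely the explicit bookkeeping the paper proposes to avoid by keeping the original definitions and changing the underlying category instead.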

Masci P.  2014.  A preliminary hazard analysis for the GIP number entry software. Abstract

We present the results of a preliminary hazard analysis that identifies common design errors in infusion pump software which may cause use hazards. Many of the identified problems also apply to other types of interactive medical devices, including ventilators and radiotherapy machines. The identified issues can be used as a basis for defining safety requirements that, if satisfied by user interface software, can substantially improve the quality and safety of broad classes of medical devices.

Gonçalves R, Almeida PS, Moreno CB, Fonte V, Preguiça N.  2012.  Dotted Version Vectors: Efficient Causality Tracking for Distributed Key-Value Stores. DAIS - IFIP International Conference on Distributed Applications and Interoperable Systems.
Barbosa MB, Moss A, Page D, Rodrigues N, Silva P.  2011.  Type Checking Cryptography Implementations. Abstract

Cryptographic software development is a challenging field: high performance must be achieved, while ensuring correctness and compliance with low-level security policies. CAO is a domain specific language designed to assist development of cryptographic software. An important feature of this language is the design of a novel type system introducing native types such as predefined sized vectors, matrices and bit strings, residue classes modulo an integer, finite fields and finite field extensions, allowing for extensive static validation of source code. We present the formalisation, validation and implementation of this type system.
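
As an illustration of the kind of static checking such native types enable, here is a minimal Haskell sketch of a residue-class type indexed by its modulus; the names are hypothetical and this is only an analogue of the idea, not CAO's actual type system.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}

import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- A residue class modulo a statically known natural number n.
newtype Mod (n :: Nat) = Mod Integer deriving (Eq, Show)

-- Smart constructor: always reduce modulo n, read off the type.
mkMod :: forall n. KnownNat n => Integer -> Mod n
mkMod x = Mod (x `mod` natVal (Proxy :: Proxy n))

instance KnownNat n => Num (Mod n) where
  Mod a + Mod b = mkMod (a + b)
  Mod a - Mod b = mkMod (a - b)
  Mod a * Mod b = mkMod (a * b)
  abs           = id
  signum (Mod a) = mkMod (signum a)
  fromInteger   = mkMod
```

Because the modulus is part of the type, an expression that mixes a value of type Mod 7 with one of type Mod 11 is rejected at compile time, mirroring the kind of static validation the CAO type system performs on cryptographic code.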

Gonzalez-Sanchez A, Abreu R, Gross H-G, van Gemund A.  2010.  A diagnostic approach to test prioritization. Abstract

In development processes with high code production rates, testing typically triggers fault diagnosis to localize the detected failures. However, current test prioritization algorithms are tuned for failure detection rate rather than diagnostic information. Consequently, unnecessary diagnostic effort might be spent localizing the faults. We present a dynamic test prioritization algorithm that trades fault detection rate for diagnostic performance, minimizing overall testing and diagnosis cost. The algorithm exploits pass/fail information from each test to select the next test, optimizing the diagnostic information produced per test. Experimental results from synthetic test suites, and from suites taken from the Software-artifact Infrastructure Repository, show possible diagnostic cost reductions of up to 10 and 19 percent, respectively, compared to the best of random selection, FEP, and ART. The cost reduction is sensitive to the quality of the test coverage matrices and to component health, but tends to grow with the number of faults.
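
A very rough Haskell sketch of the idea greedily picks the next test whose coverage splits the remaining fault candidates most evenly; the representation and the even-split heuristic are assumptions standing in for the paper's actual information-based selection criterion.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Component = Int
type Test = (String, [Component])   -- a test's name and the components it covers

-- Greedily choose the test whose coverage splits the current fault candidates
-- most evenly; an even split is the most informative single observation.
nextTest :: [Component] -> [Test] -> Maybe Test
nextTest _ [] = Nothing
nextTest candidates tests = Just (minimumBy (comparing imbalance) tests)
  where
    imbalance (_, covered) =
      let hit = length (filter (`elem` covered) candidates)
      in abs (2 * hit - length candidates)   -- 0 when exactly half the candidates are covered
```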

Bernardeschi C, Cassano L, Domenici A, Masci P.  2010.  Analysis of FPGAs Using the SAN Formalism. Technical Report, University of Pisa - Department of Information Engineering. Abstract

We describe a model of FPGA-based systems realised with the Stochastic Activity Networks (SAN) formalism. The model can be used (i) to debug the FPGA circuit design synthesised from the high-level description of the system, and (ii) to calculate the signal probabilities and transition densities of the FPGA circuit design, which can be used for reliability analysis, power consumption estimation, and pseudo-random testing of digital circuit designs. We validate the model by reproducing results presented in other studies for some representative combinational circuits, and we explore the applicability of the model to the analysis of real-world devices by studying a circuit for the generation of CRC codes.

Matos M, Sousa AL, Pereira JO, Oliveira R.  2009.  CLON: Overlay Networks and Gossip Protocols for Cloud Environments. Abstract

Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination in distributed systems, its inherent redundancy may result in high resource consumption on both links and nodes. This problem is aggravated in settings with costlier or resource-constrained links, as happens in Cloud Computing infrastructures composed of several interconnected large data centers across the globe. The goal of this work is therefore to improve the efficiency of gossip-based reliable multicast used in infrastructure management systems by reducing the load imposed on those constrained links. To this end, the proposed CLON protocol combines an overlay that gives preference to local links with a dissemination strategy that takes locality into account. Extensive experimental evaluation using a very large number of simulated nodes shows that this reduces the traffic on constrained links by an order of magnitude, while preserving the resilience properties that make gossip-based protocols so attractive.
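
A minimal Haskell sketch of locality-biased target selection for gossip follows; the Peer record, the dataCentre tag, and the localBias parameter are illustrative assumptions, not CLON's actual overlay construction or dissemination strategy.

```haskell
import Data.List (partition)
import System.Random (randomRIO)

data Peer = Peer { peerId :: String, dataCentre :: String } deriving (Eq, Show)

-- Choose one gossip target, preferring peers in the same data centre;
-- localBias is the probability of staying on a cheap local link.
pickTarget :: String -> [Peer] -> Double -> IO (Maybe Peer)
pickTarget myDC peers localBias = do
  let (local, remote) = partition ((== myDC) . dataCentre) peers
  r <- randomRIO (0.0, 1.0)
  let pool | r < localBias && not (null local) = local
           | not (null remote)                 = remote
           | otherwise                         = local
  if null pool
    then return Nothing
    else do
      i <- randomRIO (0, length pool - 1)
      return (Just (pool !! i))
```

With a high localBias most rounds stay on intra-datacentre links, while the occasional remote target keeps the dissemination connected across data centres.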

da Cruz D, Oliveira N, Henriques PR.  2009.  GraAL - A Grammar Analyzer.
Silva P, Oliveira JN.  2008.  Report on the Design of a Galculator. Abstract

This report presents the Galculator, a tool aimed at deriving equational proofs in arbitrary domains using Galois connections as the fundamental concept. When combined with the pointfree transform and the indirect equality principle, Galois connections offer a very powerful, generic device to tackle the complexity of proofs in program verification.

We show how rewriting rules derived from the properties of the Galois connections are applied in proofs using a strategic term rewriting system which, in the current prototype, is implemented in Haskell. The prospect of integrating the Galculator with other proof assistants, such as Coq, is also discussed.
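
To give a flavour of the central concept, a minimal Haskell sketch of a Galois connection and the classic whole-division example is given below; the record representation is an assumption for illustration, not the Galculator's internal encoding.

```haskell
-- A Galois connection between two ordered sets, represented by its two adjoints.
data GaloisConnection a b = GC
  { lower :: a -> b   -- lower (left) adjoint
  , upper :: b -> a   -- upper (right) adjoint
  }

-- The defining property: lower x <= y  <=>  x <= upper y.
connected :: (Ord a, Ord b) => GaloisConnection a b -> a -> b -> Bool
connected (GC f g) x y = (f x <= y) == (x <= g y)

-- Classic example over the integers, for c > 0:
-- multiplication by c is left adjoint to whole (floor) division by c,
-- i.e. x * c <= y  <=>  x <= y `div` c.
divGC :: Integer -> GaloisConnection Integer Integer
divGC c = GC { lower = (* c), upper = (`div` c) }
```

Equational properties of whole division then follow by indirect equality from this single connection, which is the style of reasoning the Galculator mechanises.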

Abreu R, Zoeteweij P, Van Gemund AJC.  2008.  Techniques for diagnosing software faults. Abstract

This technical report presents our findings and ideas on spectrum-based fault localization and model-based diagnosis. In the following we introduce and compare model-based diagnosis (MBD), spectrum-based fault localization (SFL), and our contributions, using a 3-inverter circuit as a running example (which is simple, yet sufficiently interesting). The remainder of this paper is organized as follows. The concepts and definitions used in this paper are given in the next section. The combination of model-based diagnosis and Bayesian reasoning, as it is normally applied to, e.g., digital circuits, is discussed in Section 3. Spectrum-based fault localization, including the program transformation used to instrument the system and collect the data needed to reason about failures, is discussed in Section 4.1. In Section 4.2 we investigate several novel approaches for applying model-based diagnosis, and notably Bayesian reasoning, to systems that have been prepared for spectrum-based fault localization.
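
For readers unfamiliar with SFL, the following Haskell sketch computes the Ochiai similarity coefficient over a hit spectrum, ranking components by how strongly their involvement correlates with failing runs; the list-based matrix layout is an assumption for illustration, not the report's notation.

```haskell
type Spectrum = [[Bool]]  -- rows = test runs, columns = components (hit / not hit)
type Errors   = [Bool]    -- per-run outcome (True = run failed)

-- One Ochiai score per component; higher scores indicate more suspicious components.
ochiai :: Spectrum -> Errors -> [Double]
ochiai spectrum errors = map score [0 .. nComponents - 1]
  where
    nComponents = case spectrum of (row : _) -> length row; [] -> 0
    score j =
      let hits  = map (!! j) spectrum
          n11   = count (zipWith (&&) hits errors)                    -- hit in a failing run
          n10   = count (zipWith (\h e -> h && not e) hits errors)    -- hit in a passing run
          n01   = count (zipWith (\h e -> not h && e) hits errors)    -- not hit, run failed
          denom = sqrt (fromIntegral ((n11 + n01) * (n11 + n10)))
      in if denom == 0 then 0 else fromIntegral n11 / denom
    count = length . filter id
```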

Jesus P, Moreno CB, Almeida PS.  2007.  A Study on Aggregation by Averaging Algorithms. Abstract

With the advent of multihop ad-hoc networks, sensor networks and large-scale overlay networks, there is a demand for tools that can abstract meaningful system properties from given assemblies of nodes. Distributed aggregation algorithms allow the evaluation of properties such as network size, total storage capacity, average load, and majorities. A useful class of aggregation algorithms is based on averaging techniques. Such algorithms start from a set of input values spread across the nodes, and iteratively average the values of neighbour nodes that are able to communicate. Eventually all nodes converge to the same value and can estimate some useful metric. For instance, if one node starts with input 1 and all other nodes with input 0, all nodes eventually end up holding the same value a, and the aggregate property network size can be estimated as 1/a.
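
A minimal Haskell sketch of the core averaging step and the resulting size estimate is given below; the Data.Map representation is an assumption for illustration, not a specific algorithm from the study.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Node = Int

-- One averaging exchange: two neighbouring nodes replace their values with
-- their pairwise average, which preserves the global sum (and hence the mean).
exchange :: Node -> Node -> Map Node Double -> Map Node Double
exchange i j values =
  let avg = (values Map.! i + values Map.! j) / 2
  in Map.insert i avg (Map.insert j avg values)

-- After convergence every node holds (roughly) the global average a, so any
-- node can estimate the network size locally as 1 / a.
estimateSize :: Double -> Double
estimateSize a = 1 / a
```

Each exchange preserves the global sum, so repeated exchanges drive all values towards the global average a = 1/n when one node starts at 1 and the rest at 0, from which any node can estimate the network size locally.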

Zoeteweij P, Abreu R, Van Gemund AJC.  2007.  Software fault diagnosis. A tutorial at TESTCOM/FATES. Abstract

This tutorial paper gives an overview of automated diagnosis applied to software faults. The emphasis is on a particular technique called spectrum-based fault localization, which is well-suited for diagnosing software systems, and which can easily be integrated with existing testing schemes. We discuss applications of the technique, including the specific application domain of embedded software, and provide pointers to recent research on factors that influence its diagnostic accuracy. In addition, we give instructions for quickly getting started with applying spectrum-based fault localization to existing projects.

Sousa N, Campos JC.  2006.  IVY Trace Visualizer. Abstract

In the context of the IVY project, a tool for modelling and analysing interactive systems has been under development, aimed at detecting potential usability problems early in the development of any interactive system. When a property under analysis does not hold, the tool attempts to produce a counter-example: a behaviour of the model that demonstrates the falsity of the property in question. Depending on the complexity of the model, these counter-examples can, however, become quite large, which makes their analysis difficult. To ease that analysis, the architecture of the IVY tool includes an analysis support component. Through visual representations and analysis mechanisms, this component aims to make the counter-examples easier to understand, clarifying which problem is being pointed out and what its possible solutions are. This article presents the analysis component of the IVY tool. The architecture of the component, the representations implemented, and the analysis mechanisms made available are described.

Sousa N, Campos JC.  2006.  Um visualizador de traços de comportamento para a ferramenta IVY [A behaviour trace visualiser for the IVY tool]. Abstract

This report describes the development of a component (Trace Visualiser) for a modular tool named IVY (Interactors VerYfier).

Harrison M, Doherty G, Campos JC.  2005.  Is there a role for rigorous system analysis in experience centred design? Abstract

This chapter explores the role that formal modelling may play in aiding the visualization and implementation of usability, with a particular emphasis on experience requirements in an ambient and mobile system. Mechanisms for requirements elicitation and evaluation are discussed, as well as the role of scenarios and their limitations in capturing experience requirements. The chapter then discusses the role of formal modelling by revisiting an analysis based on an exploration of traditional usability requirements before moving on to consider requirements more appropriate to a built environment. The role of modelling within the development process is re-examined by looking at how models may incorporate knowledge relating to user experience, and how the results of the analysis of such models may be exploited by human factors and domain experts in their consideration of user experience issues.