Recent Publications

Madeira A, Martins MA, Barbosa LS.  2016.  A logic for n-dimensional hierarchical refinement. Proceedings 17th International Workshop on Refinement, Refine@FM 2015, Oslo, Norway, 22nd June 2015. 209:40–56.
Madeira A, Barbosa LS, Hennicker R, Martins MA.  2016.  Dynamic Logic with Binders and Its Application to the Development of Reactive Systems. Theoretical Aspects of Computing - ICTAC 2016 - 13th International Colloquium, Taipei, Taiwan, ROC, October 24-31, 2016, Proceedings. 9965:422–440.

This paper introduces a logic to support the specification and development of reactive systems on various levels of abstraction, from property specifications, concerning e.g. safety and liveness requirements, to constructive specifications representing concrete processes. This is achieved by combining binders of hybrid logic with regular modalities of dynamic logics in the same formalism, which we call D↓-logic. The semantics of our logic focuses on effective processes and is therefore given in terms of reachable transition systems with initial states. The second part of the paper resorts to this logic to frame stepwise development of reactive systems within the software development methodology proposed by Sannella and Tarlecki. In particular, we instantiate the generic concepts of constructor and abstractor implementations by using standard operators on reactive components, like relabelling and parallel composition, as constructors, and bisimulation for abstraction. We also study vertical composition of implementations, which relies on the preservation of bisimilarity by the constructions on labelled transition systems.
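
To give a flavour of the formalism, here is a hypothetical D↓-style property in our own rendering (the concrete syntax is not quoted from the paper): a binder names the current state while a regular modality constrains the runs from it,

    \downarrow x \,.\; \big(\, [a^{*}]\langle b\rangle \mathit{true} \;\wedge\; \langle a\rangle x \,\big)

read: calling the current state x, a b-step remains possible after any finite sequence of a-steps, and some a-transition loops back to x. The binder contributes the state-naming power of hybrid logic; the starred modality, the regular programs of dynamic logic.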

Neves R, Barbosa LS.  2016.  Hybrid Automata as Coalgebras. Theoretical Aspects of Computing - ICTAC 2016 - 13th International Colloquium, Taipei, Taiwan, ROC, October 24-31, 2016, Proceedings. 9965:385–402.

Able to simultaneously encode discrete transitions and continuous behaviour, hybrid automata are the de facto framework for the formal specification and analysis of hybrid systems. The current paper revisits hybrid automata from a coalgebraic point of view. This allows them to be interpreted as state-based components, and provides a uniform theory to address variability in their definition, as well as the corresponding notions of behaviour, bisimulation, and observational semantics.
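
As a minimal sketch of the coalgebraic reading (our own rendering in Haskell, not the paper's exact functor), a hybrid component can map each state to a continuous evolution together with its guarded discrete jumps:

    type Time      = Double
    type Valuation = [(String, Double)]        -- values of the continuous variables

    data Hybrid s = Hybrid
      { flow :: s -> Time -> Valuation         -- continuous behaviour at a state
      , jump :: s -> [(Valuation -> Bool, s)]  -- guarded discrete transitions
      }

    -- unfold one observation: evolve continuously, jumping when a guard fires
    step :: Hybrid s -> s -> Time -> Either Valuation s
    step h s t =
      let v = flow h s t
      in case [ s' | (g, s') <- jump h s, g v ] of
           s' : _ -> Right s'                  -- a discrete jump is enabled
           []     -> Left v                    -- keep evolving continuously

Varying the functor (e.g. adding inputs, outputs, or nondeterminism) is what yields the uniform treatment of variability, behaviour and bisimulation mentioned above.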

Fernandes S, Barbosa LS.  2016.  Applying the 3C Model to FLOSS Communities. Collaboration and Technology - 22nd International Conference, CRIWG 2016, Kanazawa, Japan, September 14-16, 2016, Proceedings. 9848:139–150.

How learning occurs within Free/Libre Open Source (FLOSS) communities, and what the dynamics of such projects are (e.g. their life cycle), are very relevant questions when considering the use of FLOSS projects in a formal education setting. This paper introduces an approach based on the 3C collaboration model (communication, coordination and cooperation) to represent the collaborative learning dynamics within FLOSS communities. To explore the collaborative learning potential of FLOSS communities, a number of questionnaires and interviews were conducted with selected FLOSS contributors. From this study, a 3C collaboration model applicable to FLOSS communities was designed and discussed.

Veiga L, Janowski T, Barbosa LS.  2016.  Digital Government and Administrative Burden Reduction. Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance, ICEGOV 2016, Montevideo, Uruguay, March 1-3, 2016. :323–326.

Administrative burden represents the costs to businesses, citizens and the administration itself of complying with government regulations and procedures. The burden tends to increase with new forms of public governance that rely less on direct decisions and actions undertaken by traditional government bureaucracies, and more on government creating and regulating the environment for other, non-state actors to jointly address public needs. Based on the reviews of research and policy literature, this paper explores administrative burden as a policy problem, presents how Digital Government (DG) could be applied to address this problem, and identifies societal adoption, organizational readiness and other conditions under which DG can be an effective tool for Administrative Burden Reduction (ABR). Finally, the paper tracks ABR to the latest Contextualization stage in the DG evolution, and discusses possible development approaches and technological potential of pursuing ABR through DG.

Oliveira N, Barbosa LS.  2016.  An Enhanced Model for Stochastic Coordination. Proceedings of the First International Workshop on Formal Methods for and on the Cloud, iFMCloud@IFM 2016, Reykjavik, Iceland, 4th June 2016. 228:35–45.

Applications developed over the cloud coordinate several, often anonymous, computational resources, distributed over different execution nodes, within flexible architectures. Coordination models able to represent quantitative data provide a powerful basis for their analysis and validation. This paper extends IMCreo, a semantic model for Stochastic Reo based on interactive Markov chains, to enhance its scalability, by regarding each channel and node, as well as interface components, as independent stochastic processes that may (or may not) synchronise with the rest of the coordination circuit.
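
A minimal sketch of the idea, in Haskell and with our own hypothetical types rather than IMCreo's actual definitions: each channel or node is a small interactive Markov chain whose transitions are either stochastic delays or synchronisations on port names, and composition lets delays interleave while shared ports synchronise,

    type Rate = Double
    data Label = Delay Rate | Sync String      -- hypothetical label type

    type IMC s = s -> [(Label, s)]

    -- parallel composition: delays interleave; actions in `shared` synchronise
    par :: [String] -> IMC a -> IMC b -> IMC (a, b)
    par shared p q (a, b) =
         [ (Delay r, (a', b )) | (Delay r, a') <- p a ]
      ++ [ (Delay r, (a , b')) | (Delay r, b') <- q b ]
      ++ [ (Sync n,  (a', b )) | (Sync n,  a') <- p a, n `notElem` shared ]
      ++ [ (Sync n,  (a , b')) | (Sync n,  b') <- q b, n `notElem` shared ]
      ++ [ (Sync n,  (a', b')) | (Sync n,  a') <- p a
                               , (Sync m,  b') <- q b, n == m, n `elem` shared ]

Treating each component as an independent process of this kind, synchronised only where the circuit demands it, is what underlies the scalability gain claimed above.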

Silva JMC, Carvalho P, Bispo KA, Lima SR.  2016.  Lightweight Multivariate Sensing in WSNs. Ubiquitous Computing and Ambient Intelligence: 10th International Conference, UCAmI 2016, San Bartolomé de Tirajana, Gran Canaria, Spain, November 29 – December 2, 2016, Part II. :205–211.

Pereira R, Couto M, Saraiva J, Cunha J, Fernandes JP.  2016.  The Influence of the Java Collection Framework on Overall Energy Consumption. Proceedings of the 5th International Workshop on Green and Sustainable Software. :15–21.

Pereira R, Saraiva J, Cunha J, Fernandes JP.  2016.  User-friendly Spreadsheet Querying: An Empirical Study. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :202–204.

Bahsoun JP, Guerraoui R, Shoker A.  2015.  Making BFT Protocols Really Adaptive. Proceedings of the 29th IEEE International Parallel & Distributed Processing Symposium.

Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol is unlikely to be a good fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads.
In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of the individual subsets of these protocols. At any time, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an “evaluation process” that takes into consideration both the characteristics of the protocols and their performance. This is achieved by applying mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time, using Machine Learning prediction mechanisms to obtain accurate evaluations. We compare ADAPT with well-known BFT protocols, showing that it outperforms them as system conditions change and under dynamic workloads.
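
The selection step can be pictured with a small sketch (our own rendering in Haskell; ADAPT's actual formulas and predictors are given in the paper): score each candidate protocol by weighting its predicted metrics with the user's preferences, and switch to the best-scoring one when conditions change,

    import Data.List (maximumBy)
    import Data.Ord  (comparing)

    type Metric = String

    -- weighted match of predicted metrics against user preferences
    score :: [(Metric, Double)] -> [(Metric, Double)] -> Double
    score prefs predicted =
      sum [ w * v | (m, w) <- prefs, (m', v) <- predicted, m == m' ]

    -- pick the protocol whose predicted profile best fits the preferences
    select :: [(Metric, Double)] -> [(String, [(Metric, Double)])] -> String
    select prefs = fst . maximumBy (comparing (score prefs . snd))

Here the predicted profiles would come from the run-time Machine Learning predictions mentioned in the abstract.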

Almeida PS, Moreno CB, Fonte V, Gonçalves R.  2015.  Concise Server-Wide Causality Management for Eventually Consistent Data Stores. DAIS - IFIP International Conference on Distributed Applications and Interoperable Systems.

Large scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
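
As a minimal sketch of the causality check such per-key clocks support (plain version vectors here, in Haskell; the paper's Bitmapped Version Vectors and Dotted Causal Containers are considerably more compact), two versions conflict exactly when neither clock dominates the other:

    import qualified Data.Map as Map

    type NodeId        = String
    type VersionVector = Map.Map NodeId Int

    -- a dominates b iff a has seen at least every event b has seen
    dominates :: VersionVector -> VersionVector -> Bool
    dominates a b = and [ Map.findWithDefault 0 n a >= c | (n, c) <- Map.toList b ]

    -- concurrent writes: neither version causally precedes the other
    concurrent :: VersionVector -> VersionVector -> Bool
    concurrent a b = not (dominates a b) && not (dominates b a)

The framework above keeps this discipline while shrinking the per-key metadata that a naive version vector per key would accumulate.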

Abade T, Campos JC, Moreira R, Silva CC, Silva JL.  2015.  Immersiveness of Ubiquitous Computing Environments Prototypes: A case study. Lecture Notes in Computer Science. 9189.

The development of ubiquitous computing (ubicomp) environments raises several challenges in terms of their evaluation. Ubicomp virtual reality prototyping tools enable users to experience the system to be developed and are of great help to face those challenges, as they support developers in assessing the consequences of a design decision in the early phases of development. Given the situated nature of ubicomp environments, a particular issue to consider is the level of realism provided by the prototypes. This work presents a case study where two ubicomp prototypes, featuring different levels of immersion (desktop-based versus CAVE-based), were developed and compared. The goal was to determine the cost/benefit relation of both solutions, which one provided better user experience results, and whether or not simpler solutions provide the same user experience results as more elaborate ones.

Neves DT, Gonçalves RC.  2015.  On the Synthesis and Reconfiguration of Pipelines. ARCS '15: Proceedings of the 28th International Conference on Architecture of Computing Systems.

In recent years we have observed great advances in parallel platforms and the exponential growth of datasets in several domains. Undoubtedly, parallel programming is crucial to harness the performance potential of such platforms and to cope with very large datasets. However, quite often one has to deal with legacy software systems that may use third-party frameworks, libraries, or tools, and that may be executed in different multicore architectures. Managing different software configurations and adapting them for different needs is an arduous task, particularly when it has to be carried out by scientists or when dealing with irregular applications. In this paper, we present an approach to abstract legacy software systems using workflow modeling tools. We show how a basic pipeline is modeled and adapted, using model transformations, to different application scenarios, either to obtain better performance or more reliable results. Moreover, we explain how the system we provide to support the approach is easily extensible to accommodate new tools and algorithms. We show how a pipeline of three irregular applications, all from phylogenetics, is mapped to parallel implementations. Our studies show that the derived programs neither downgrade performance nor sacrifice scalability, even in the presence of a set of asymmetric tasks and when using third-party tools.
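
As a minimal sketch of the underlying idea (our own rendering in Haskell, not the paper's modeling language), a pipeline can be a list of named stages composed in order, and a reconfiguration a transformation that swaps one stage for a variant:

    type Stage a = a -> a

    -- run the stages left to right over the input data
    runPipeline :: [(String, Stage a)] -> a -> a
    runPipeline stages x = foldl (\v (_, f) -> f v) x stages

    -- hypothetical reconfiguration: replace the stage with the given name
    reconfigure :: String -> Stage a -> [(String, Stage a)] -> [(String, Stage a)]
    reconfigure name new = map swap
      where swap (k, s) | k == name = (k, new)
                        | otherwise = (k, s)

In the paper the transformations act on workflow models rather than code, but the shape of the operation, rewriting a pipeline description before execution, is the same.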

Jorge T, Maia F, Matos M, Pereira JO, Oliveira R.  2015.  Practical Evaluation of Large Scale Applications. Proceedings of the 15th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS). :124–137.

Designing and implementing distributed systems is a hard endeavor, both at an abstract level when designing the system, and at a concrete level when implementing, debugging and evaluating it. This stems not only from the inherent complexity of writing and reasoning about distributed software, but also from the lack of tools for testing and evaluating it under realistic conditions. Moreover, the gap between the protocols’ specifications found in research papers and their implementations in real code is huge, leading to inconsistencies that often result in the implementation no longer following the specification. As an example, the specification of the popular Chord DHT comprises a few dozen lines, while its Java implementation, OpenChord, is close to twenty thousand lines, excluding libraries. This makes it hard and error-prone to change the implementation to reflect changes in the specification, regardless of programmers’ skill. Besides, critical behavior due to the unpredictable interleaving of operations and network uncertainty can only be observed in a realistic setting, limiting the usefulness of simulation tools. We believe that being able to write an algorithm implementation very close to its specification, and evaluating it in a real environment, is a big step in the direction of building better distributed systems. Our approach leverages the MINHA platform to offer a set of built-in primitives that allow one to program very close to pseudo-code. This high-level implementation can interact with off-the-shelf existing middleware and can be gradually replaced by a production-ready Java implementation. In this paper, we present the system design and showcase it using a well-known algorithm from the literature.

Belo O, Faria J, Ribeiro AN, Oliveira B, Santos V.  2015.  Modeling E-Government Processes Using YAWL: Half-Way Towards Their Effective Real Implementation. Proceedings of the 8th International Conference on Theory and Practice of Electronic Governance, ICEGOV. :288–291.

Today E-Government institutions face a lot of challenges related to the quality and effectiveness of the services they provide. In most cases, their users are more demanding, imposing new ways of acting and dealing with their needs, often requesting expeditious and effective attendance. Independently of their nature, we believe that such pertinent characteristics begin to be sustained as soon as we start to study and model E-Government processes. Modeling and simulation are useful tools for assuring the availability of E-Government services in many aspects, contributing significantly to improve process implementation, ranging from inception to final software application roll-out and maintenance. In this paper we study the use of YAWL, a workflow language, for modeling E-Government processes, showing through a real-world application case how it can help us in the construction of effective models that may be used as a basis for understanding and building the corresponding software applications.