Recent Publications

Fernandes S, Barbosa LS. 2016. Applying the 3C Model to FLOSS Communities. Collaboration and Technology - 22nd International Conference, CRIWG 2016, Kanazawa, Japan, September 14-16, 2016, Proceedings. 9848:139–150. fb16a.pdf

How learning occurs within Free/Libre Open Source (FLOSS) communities, and what the dynamics of such projects are (e.g., their life cycle), are highly relevant questions when considering the use of FLOSS projects in a formal education setting. This paper introduces an approach based on the 3C collaboration model (communication, coordination and cooperation) to represent the collaborative learning dynamics within FLOSS communities. To explore the collaborative learning potential of FLOSS communities, a number of questionnaires and interviews with selected FLOSS contributors were conducted. From this study, a 3C collaboration model applicable to FLOSS communities was derived and discussed.

Fernandes S, Barbosa LS. 2016. Electronic governance in Portugal: a silent pioneer. Proceedings of the International Conference on Electronic Governance and Open Society - Challenges in Eurasia, EGOSE 2016, St. Petersburg, Russia, November 22-23, 2016. :77–82. fb2016b.pdf

This paper sums up the Portuguese experience in launching and implementing a number of successful EGOV policies at different levels. This success was based on a holistic understanding of EGOV, at the confluence of technology, administrative reform and innovation processes, and was enforced through a number of well-defined, consistent policies carried out over time. The paper also stresses the role of national agencies, independent of local political instances and private interests, in the implementation of an effective EGOV strategy.

Veiga L, Janowski T, Barbosa LS. 2016. Digital Government and Administrative Burden Reduction. Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance, ICEGOV 2016, Montevideo, Uruguay, March 1-3, 2016. :323–326. vjb16.pdf

Administrative burden represents the costs to businesses, citizens and the administration itself of complying with government regulations and procedures. The burden tends to increase with new forms of public governance that rely less on direct decisions and actions undertaken by traditional government bureaucracies, and more on government creating and regulating the environment for other, non-state actors to jointly address public needs. Based on a review of the research and policy literature, this paper explores administrative burden as a policy problem, presents how Digital Government (DG) could be applied to address this problem, and identifies societal adoption, organizational readiness and other conditions under which DG can be an effective tool for Administrative Burden Reduction (ABR). Finally, the paper tracks ABR to the latest Contextualization stage in the DG evolution, and discusses possible development approaches and the technological potential of pursuing ABR through DG.

Oliveira N, Barbosa LS. 2016. An Enhanced Model for Stochastic Coordination. Proceedings of the First International Workshop on Formal Methods for and on the Cloud, iFMCloud@IFM 2016, Reykjavik, Iceland, June 4, 2016. 228:35–45. ob16.pdf

Applications developed over the cloud coordinate several, often anonymous, computational resources, distributed over different execution nodes, within flexible architectures. Coordination models able to represent quantitative data provide a powerful basis for their analysis and validation. This paper extends IMCreo, a semantic model for Stochastic Reo based on interactive Markov chains, to enhance its scalability by regarding each channel and node, as well as interface components, as independent stochastic processes that may (or may not) synchronise with the rest of the coordination circuit.
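
As a rough illustration of this view of channels and nodes as independent stochastic processes, the Java sketch below simulates two channels with exponentially distributed delays synchronising at a node. The class name and rate values are illustrative assumptions only; the actual IMCreo semantics is defined over interactive Markov chains, not by simulation.

import java.util.Random;

// A minimal sketch (not the IMCreo semantics itself): each Reo channel is
// modelled as an independent stochastic process with an exponentially
// distributed delay, and a node synchronises the processes that meet there.
public class StochasticReoSketch {
    static final Random RNG = new Random(42);

    // Sample an exponentially distributed delay with the given rate.
    static double expDelay(double rate) {
        return -Math.log(1.0 - RNG.nextDouble()) / rate;
    }

    public static void main(String[] args) {
        double rateA = 2.0; // e.g., one channel processing 2 requests/s
        double rateB = 1.5; // e.g., another channel processing 1.5 requests/s

        // Monte Carlo estimate of the latency at a node where both
        // channels must synchronise (the node fires when both are ready).
        int runs = 100_000;
        double total = 0.0;
        for (int i = 0; i < runs; i++) {
            total += Math.max(expDelay(rateA), expDelay(rateB)); // wait for the slower one
        }
        System.out.printf("estimated mean sync latency: %.3f s%n", total / runs);
    }
}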

Silva JMC, Carvalho P, Bispo KA, Lima SR. 2016. Lightweight Multivariate Sensing in WSNs. Ubiquitous Computing and Ambient Intelligence: 10th International Conference, UCAmI 2016, San Bartolomé de Tirajana, Gran Canaria, Spain, November 29 – December 2, 2016, Part II. :205–211.

Pereira R, Couto M, Saraiva J, Cunha J, Fernandes JP. 2016. The Influence of the Java Collection Framework on Overall Energy Consumption. Proceedings of the 5th International Workshop on Green and Sustainable Software. :15–21.

Pereira R, Saraiva J, Cunha J, Fernandes JP. 2016. User-friendly Spreadsheet Querying: An Empirical Study. Proceedings of the 31st Annual ACM Symposium on Applied Computing. :202–204.

Pontes R, Maia F, Paulo J, Vilaça RMP. 2016. SafeRegions: Performance Evaluation of Multi-party Protocols on HBase. 35th IEEE Symposium on Reliable Distributed Systems Workshops, SRDS 2016 Workshop, Budapest, Hungary, September 26, 2016. :31–36.

Burihabwa D, Pontes R, Felber P, Maia F, Mercier H, Oliveira R, Paulo J, Schiavoni V. 2016. On the Cost of Safe Storage for Public Clouds: An Experimental Evaluation. 35th IEEE Symposium on Reliable Distributed Systems, SRDS 2016, Budapest, Hungary, September 26-29, 2016. :157–166.

Bahsoun JP, Guerraoui R, Shoker A. 2015. Making BFT Protocols Really Adaptive. Proceedings of the 29th IEEE International Parallel & Distributed Processing Symposium. adapt.pdf

Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far, each addressing a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be the best fit. This yields robustness and performance issues, which may be even more severe in services that exhibit fluctuating conditions and workloads.
In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming to cover a larger set of conditions and use-cases, ideally the union of the individual subsets covered by these protocols. At any time, a launched protocol in ADAPT can be aborted and replaced by another protocol in response to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an "evaluation process" that takes into consideration both protocol characteristics and performance. This is achieved by applying mathematical formulae that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to obtain accurate evaluations. We compare ADAPT with well-known BFT protocols, showing that it outperforms them as system conditions change and under dynamic workloads.
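
To make the idea of such an evaluation process concrete, here is a minimal Java sketch of preference-weighted protocol selection. The Profile fields, weights and per-protocol figures are hypothetical; ADAPT's actual formulae and learned predictors are those described in the paper.

import java.util.List;
import java.util.Map;

// A sketch of the kind of "evaluation process" the abstract describes, not
// ADAPT's actual formulas: each BFT protocol has a profile of predicted
// metrics (e.g., from a learned model), and the system picks the protocol
// maximising a preference-weighted score.
public class ProtocolSelectorSketch {

    record Profile(String name, double throughput, double latencyMs, double faultScale) {}

    // Weighted linear score; higher is better, latency is penalised.
    static double score(Profile p, Map<String, Double> prefs) {
        return prefs.getOrDefault("throughput", 0.0) * p.throughput()
             - prefs.getOrDefault("latency", 0.0) * p.latencyMs()
             + prefs.getOrDefault("faultScale", 0.0) * p.faultScale();
    }

    static Profile select(List<Profile> candidates, Map<String, Double> prefs) {
        return candidates.stream()
                .max((a, b) -> Double.compare(score(a, prefs), score(b, prefs)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Hypothetical run-time predictions for three well-known protocols.
        List<Profile> profiles = List.of(
                new Profile("PBFT",    30_000, 4.0, 1.0),
                new Profile("Zyzzyva", 55_000, 2.5, 0.6),
                new Profile("Chain",   70_000, 6.0, 0.8));
        // Service-owner preferences: favour throughput, mildly penalise latency.
        Map<String, Double> prefs =
                Map.of("throughput", 1.0, "latency", 2_000.0, "faultScale", 5_000.0);
        System.out.println("selected: " + select(profiles, prefs).name());
    }
}

Re-running the selection whenever monitored conditions change (an "event") is what would drive the abort-and-replace step described above.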

Almeida PS, Shoker A, Moreno CB. 2015. Efficient State-based CRDTs by Delta-Mutation. Proceedings of the International Conference on NETworked sYStems. 9466. deltacrdt.pdf

CRDTs are distributed data types that make eventual consistency of a distributed object possible and non ad hoc. Specifically, state-based CRDTs ensure convergence by disseminating the entire state, which may be large, and merging it into other replicas, whereas operation-based CRDTs disseminate operations (i.e., small states) assuming an exactly-once reliable dissemination layer. We introduce Delta State Conflict-Free Replicated Datatypes (δ-CRDTs), which can achieve the best of both worlds: small messages with an incremental nature, as in operation-based CRDTs, disseminated over unreliable communication channels, as in traditional state-based CRDTs. This is achieved by defining δ-mutators that return a delta-state, typically much smaller than the full state, which is joined into both local and remote states. We introduce the δ-CRDT framework and explain it by establishing a correspondence with current state-based CRDTs. In addition, we present an anti-entropy algorithm that ensures causal consistency, and we introduce two δ-CRDT specifications of well-known replicated datatypes.
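
The delta-state idea is easy to see on a grow-only counter (GCounter), a classic state-based CRDT. The Java sketch below illustrates the technique under that assumption; it is not the paper's δ-CRDT framework itself.

import java.util.HashMap;
import java.util.Map;

// A grow-only counter with a δ-mutator: increments return only the changed
// entry as a delta, instead of shipping the whole state.
public class DeltaGCounter {
    // replica id -> number of increments observed at that replica
    private final Map<String, Long> state = new HashMap<>();

    // δ-mutator: performs the increment locally and returns the delta.
    public Map<String, Long> increment(String replicaId) {
        long next = state.getOrDefault(replicaId, 0L) + 1;
        state.put(replicaId, next);
        return Map.of(replicaId, next); // the (small) delta to disseminate
    }

    // Join: pointwise max. The same operation merges a tiny delta or a full
    // state, so duplicated or reordered deltas are harmless (idempotent,
    // commutative, associative), matching unreliable channels.
    public void join(Map<String, Long> delta) {
        delta.forEach((id, n) -> state.merge(id, n, Math::max));
    }

    public long value() {
        return state.values().stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        DeltaGCounter a = new DeltaGCounter(), b = new DeltaGCounter();
        Map<String, Long> d1 = a.increment("A");
        Map<String, Long> d2 = a.increment("A");
        b.join(d2); // deltas may arrive out of order...
        b.join(d1);
        b.join(d2); // ...and duplicates change nothing
        System.out.println(a.value() + " == " + b.value()); // 2 == 2
    }
}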

Almeida PS, Moreno CB, Fonte V, Gonçalves R. 2015. Concise Server-Wide Causality Management for Eventually Consistent Data Stores. DAIS - IFIP International Conference on Distributed Applications and Interoperable Systems. global_logical_clocks.pdf

Large scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportional to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
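
As background for the kind of check this framework makes cheaper, the Java sketch below detects concurrent writes using plain version vectors; the paper's Bitmapped Version Vectors and Dotted Causal Containers are more compact, specialised structures that answer the same causality queries.

import java.util.Map;

// Plain version-vector causality: a write's clock maps node ids to counters.
public class CausalitySketch {

    // True iff every counter in a is <= the matching counter in b.
    static boolean dominatedBy(Map<String, Integer> a, Map<String, Integer> b) {
        return a.entrySet().stream()
                .allMatch(e -> b.getOrDefault(e.getKey(), 0) >= e.getValue());
    }

    // Two writes on the same key are concurrent (a conflict) when neither
    // clock dominates the other.
    static boolean concurrent(Map<String, Integer> a, Map<String, Integer> b) {
        return !dominatedBy(a, b) && !dominatedBy(b, a);
    }

    public static void main(String[] args) {
        Map<String, Integer> w1 = Map.of("nodeA", 2, "nodeB", 1);
        Map<String, Integer> w2 = Map.of("nodeA", 1, "nodeB", 3);
        System.out.println("concurrent writes? " + concurrent(w1, w2)); // true
    }
}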

Abade T, Campos JC, Moreira R, Silva CC, Silva JL. 2015. Immersiveness of Ubiquitous Computing Environments Prototypes: A case study. Lecture Notes in Computer Science. 9189. 15-dapi-abadecmss-sarch.pdf

The development of ubiquitous computing (ubicomp) environments raises several challenges in terms of their evaluation. Ubicomp virtual reality prototyping tools enable users to experience the system to be developed and are of great help in facing those challenges, as they support developers in assessing the consequences of a design decision in the early phases of development. Given the situated nature of ubicomp environments, a particular issue to consider is the level of realism provided by the prototypes. This work presents a case study in which two ubicomp prototypes, featuring different levels of immersion (desktop-based versus CAVE-based), were developed and compared. The goal was to determine the cost/benefit relation of both solutions, which provided better user experience results, and whether or not simpler solutions provide the same user experience results as more elaborate ones.

Neves DT, Gonçalves RC. 2015. On the Synthesis and Reconfiguration of Pipelines. ARCS '15: Proceedings of the 28th International Conference on Architecture of Computing Systems. neves-2015.pdf

In recent years we have observed great advances in parallel platforms and the exponential growth of datasets in several domains. Undoubtedly, parallel programming is crucial to harness the performance potential of such platforms and to cope with very large datasets. However, quite often one has to deal with legacy software systems that may use third-party frameworks, libraries, or tools, and that may be executed on different multicore architectures. Managing different software configurations and adapting them to different needs is an arduous task, particularly when it has to be carried out by scientists or when dealing with irregular applications. In this paper, we present an approach to abstract legacy software systems using workflow modeling tools. We show how a basic pipeline is modeled and adapted, using model transformations, to different application scenarios, either to obtain better performance or more reliable results. Moreover, we explain how the system we provide to support the approach is easily extensible to accommodate new tools and algorithms. We show how a pipeline of three irregular applications, all from phylogenetics, is mapped to parallel implementations. Our studies show that the derived programs neither downgrade performance nor sacrifice scalability, even in the presence of a set of asymmetric tasks and when using third-party tools.
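
The Java sketch below illustrates the general idea of modelling a pipeline abstractly and adapting it by a transformation that swaps one stage for a variant. The stage names and the transformation are illustrative assumptions, not the paper's phylogenetics workflows or its modeling tools.

import java.util.List;
import java.util.function.UnaryOperator;

// A pipeline as a list of named stages over some data representation.
public class PipelineSketch {

    record Stage(String name, UnaryOperator<String> op) {}

    static String run(List<Stage> pipeline, String input) {
        String data = input;
        for (Stage s : pipeline) data = s.op().apply(data);
        return data;
    }

    // A model transformation: replace a stage by a variant (e.g., a parallel
    // or more precise implementation), leaving the rest untouched.
    static List<Stage> replaceStage(List<Stage> pipeline, String name, Stage variant) {
        return pipeline.stream()
                .map(s -> s.name().equals(name) ? variant : s)
                .toList();
    }

    public static void main(String[] args) {
        List<Stage> basic = List.of(
                new Stage("align", in -> in + " -> aligned"),
                new Stage("infer", in -> in + " -> tree"));
        List<Stage> tuned = replaceStage(basic, "infer",
                new Stage("infer", in -> in + " -> tree (parallel search)"));
        System.out.println(run(basic, "sequences"));
        System.out.println(run(tuned, "sequences"));
    }
}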

Jorge T, Maia F, Matos M, Pereira JO, Oliveira R. 2015. Practical Evaluation of Large Scale Applications. Proceedings of the 15th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS). :124–137. dais15minhajsr.pdf

Designing and implementing distributed systems is a hard endeavor, both at an abstract level when designing the system, and at a concrete level when implementing, debugging and evaluating it. This stems not only from the inherent complexity of writing and reasoning about distributed software, but also from the lack of tools for testing and evaluating it under realistic conditions. Moreover, the gap between the protocols' specifications found in research papers and their implementations in real code is huge, leading to inconsistencies that often result in the implementation no longer following the specification. As an example, the specification of the popular Chord DHT comprises a few dozen lines, while its Java implementation, OpenChord, is close to twenty thousand lines, excluding libraries. This makes it hard and error-prone to change the implementation to reflect changes in the specification, regardless of programmers' skill. Besides, critical behavior due to the unpredictable interleaving of operations and network uncertainty can only be observed in a realistic setting, limiting the usefulness of simulation tools. We believe that being able to write an algorithm implementation very close to its specification, and to evaluate it in a real environment, is a big step towards building better distributed systems. Our approach leverages the MINHA platform to offer a set of built-in primitives that allows one to program very close to pseudo-code. This high-level implementation can interact with off-the-shelf existing middleware and can be gradually replaced by a production-ready Java implementation. In this paper, we present the system design and showcase it using a well-known algorithm from the literature.
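
As a flavour of what "programming close to pseudo-code" can look like, here is a hypothetical Java sketch of a push-gossip protocol that spreads the maximum of the nodes' values; the method names and in-process "send" are illustrative stand-ins, not MINHA's actual primitives.

import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// Each node keeps a value; rounds push it to a random peer, mirroring the
// two pseudo-code rules noted in the comments below.
public class GossipSketch {
    final String id;
    final AtomicInteger value = new AtomicInteger();
    static final Random RNG = new Random(7);

    GossipSketch(String id, int v) { this.id = id; value.set(v); }

    // "upon receive(v): value := max(value, v)" in the pseudo-code.
    void receive(int v) { value.accumulateAndGet(v, Math::max); }

    // "every round: send value to a random peer" in the pseudo-code.
    void round(List<GossipSketch> peers) {
        GossipSketch peer = peers.get(RNG.nextInt(peers.size()));
        peer.receive(value.get()); // stands in for a network send
    }

    public static void main(String[] args) {
        List<GossipSketch> nodes = List.of(
                new GossipSketch("a", 1),
                new GossipSketch("b", 5),
                new GossipSketch("c", 3));
        for (int r = 0; r < 20; r++) for (GossipSketch n : nodes) n.round(nodes);
        nodes.forEach(n -> System.out.println(n.id + " -> " + n.value.get()));
    }
}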