Conference Papers

Paulo J, Reis P, Pereira JO, Sousa AL.  2012.  DEDISbench: A Benchmark for Deduplicated Storage Systems. Proceedings of the International Symposium on Secure Virtual Infrastructures - DOA-SVI. 7566

Deduplication is widely accepted as an effective technique for eliminating duplicated data in backup and archival systems. Nowadays, deduplication is also becoming appealing in cloud computing, where large-scale virtualized storage infrastructures hold huge data volumes with a significant share of duplicated content. There have thus been several proposals for embedding deduplication in storage appliances and file systems, providing different performance trade-offs while targeting both user and application data, as well as virtual machine images.

It is, however, hard to determine to what extent deduplication is useful in a particular setting and which technique will provide the best results. In fact, existing disk I/O micro-benchmarks are not designed for evaluating deduplication systems, following simplistic approaches for generating written data that lead to unrealistic amounts of duplicates.

We address this with DEDISbench, a novel micro-benchmark for evaluating the disk I/O performance of block-based deduplication systems. As the main contribution, we introduce the generation of a realistic duplicate distribution based on real datasets. Moreover, DEDISbench also allows simulating access hotspots and different load intensities for I/O operations. The usefulness of DEDISbench is shown by comparing it with the Bonnie++ and IOzone open-source disk I/O micro-benchmarks in assessing two open-source deduplication systems, Opendedup and Lessfs, using Ext4 as a baseline. As a secondary contribution, our results provide novel insight into the performance of these file systems.

Matos M, Sousa AL, Pereira JO, Oliveira R, Deliot E, Murray P.  2009.  CLON: Overlay networks and gossip protocols for cloud environments. Proceedings of On the Move to Meaningful Internet Systems - OTM. 5870

Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings with costlier or resource-constrained links, as happens in Cloud Computing infrastructures composed of several interconnected data centers across the globe. The goal of this work is therefore to improve the efficiency of gossip-based reliable multicast by reducing the load imposed on those constrained links. The proposed CLON protocol combines an overlay that gives preference to local links with a dissemination strategy that takes locality into account. Extensive experimental evaluation using a very large number of simulated nodes shows that this reduces traffic on constrained links by an order of magnitude, while preserving the resilience properties that make gossip-based protocols so attractive.

Costa B, Matos M, Sousa AL.  2009.  CAPI: Cloud Computing API. Simpósio de Informática - INForum. :499–502.

Cloud Computing is an emerging business model that provides access to IT resources in a pay-per-use fashion. These resources range from low-level virtual machines, through application platforms, to ready-to-use software delivered over the Internet, creating a layered stack of differentiated services. Different cloud vendors tend to specialize in different levels, each offering different services with its own proprietary API. This clearly leads to provider lock-in and hinders the portability of a given application between providers. The goal of this paper is therefore to provide an abstraction of the different levels of service in order to reduce vendor lock-in and improve portability, two issues that we believe impair the adoption of the Cloud Computing model.

Matos M, Sousa AL, Pereira JO, Oliveira R.  2009.  A Distributed Bootstrapping Protocol for Overlay Networks. Simpósio de Informática - INForum. :357–368.

Peer-to-peer overlay networks have become popular due to the inherent scalability and resilience that come naturally from their decentralized nature. Unfortunately, despite this decentralized approach, one important piece remains centralized: a set of servers that provide identifiers to peers joining the system. This initial step is necessary, as new peers need to obtain some contact points in order to establish links to them and join the overlay. It places a crucial mechanism outside the protocol model, introducing an external centralized entity to manage the process and leading to problems of scale and fault tolerance. In scenarios where the churn rate is considerable, the cost of maintaining the list of known peers at the well-known servers may be unbearable. Furthermore, if the peer identifiers provided are not evenly distributed across the universe, the overlay will inevitably become clustered, making it more prone to partitions. In this paper we propose a fully decentralized protocol to obtain that set of initial contact peers using the already deployed infrastructure. Through an extensive experimental evaluation we show the effectiveness of the protocol and discuss how the ideal number of contact peers may be provided in a fully decentralized fashion.

Matos M, Sousa AL, Pereira JO, Oliveira R.  2009.  CLON: overlay network for clouds. Proceedings of the 3rd Workshop on Dependable Distributed Data Management - WDDM. :14–17.

Gossip-based protocols have been gaining increasing interest from the research community due to their high resilience to node churn and high scalability, making them suitable for modern large-scale dynamic systems. Unfortunately, these properties come at the cost of redundant message transmissions to ensure bimodal delivery to all interested peers. In systems with high message throughput, those additional messages can impose a significant burden in terms of required bandwidth. Furthermore, the overlays upon which message dissemination takes place are oblivious to the underlying network, or rely on posterior optimizations that bias the overlay to mimic the network topology. This further increases the required bandwidth, as 'undesirable' paths are chosen with the same probability as desirable ones. In a Cloud Computing scenario, nodes tend to be aggregated in subnets inside a data center or across multiple data centers, which are connected by costlier, long-distance links. The goal of this work is, therefore, to build an overlay that approximates the structure of the physical network, while ensuring the connectivity properties required for reliable dissemination. By having each node judiciously choose which nodes are in its dissemination list at construction time, i.e., by giving preference to local nodes, we are able to significantly reduce the number of messages traversing the long-distance links. In a later stage, this overlay shall be presented as a service upon which data dissemination and management protocols can run.

Matos M, Sousa AL.  2008.  Dependable distributed OSGi environment. Proceedings of the 3rd workshop on Middleware for service oriented computing - MW4SOC. :1–6.

As the concept of Service-Oriented Computing (SOC) matures, well-defined architectures and protocols are essential if IT is to properly embrace this trend. The SOC paradigm has several requirements to work properly, such as service composition and cooperation in a loosely coupled fashion, the ability to adapt autonomously to environmental and business changes, and concerns such as modularity, dynamicity and proper integration between services. The popularization of the OSGi platform is another effort towards the SOC paradigm, addressing key aspects such as modularity and dynamicity in its service-oriented design. However, there is much room for improvement, namely in the creation of architectures and mechanisms that improve the dependability of the overall solution by strengthening key properties such as the availability, reliability, integrity, safety and maintainability of the platform. In this work we propose a middleware layer that offers the strong modular and dynamic properties required in an SOC environment by relying on OSGi, while addressing dependability concerns. The starting point is to instrument an OSGi implementation and provide the means to monitor and manage it according to business and environmental requirements. By relying on group communication facilities and some properties of the OSGi specification, we are able to migrate OSGi environments between nodes, thus minimizing service delivery disruption in the presence of faults while, at the same time, addressing SLA properties by migrating (or shutting down) services that consume more resources than agreed or expected.

Sousa AL, Correia A, Moura F, Pereira JO, Oliveira R.  2006.  Evaluating certification protocols in the partial database state machine. Proceedings of 1st International Conference on Availability, Reliability and Security (ARES).

Partial replication is an alluring technique to ensure the reliability of very large and geographically distributed databases while, at the same time, offering good performance. By correctly exploiting access locality, most transactions become confined to a small subset of the database replicas, thus reducing the processing, storage access and communication overhead associated with replication. The advantages of partial replication have, however, to be weighed against the added complexity required to manage it. In fact, if the chosen replica configuration prevents the local execution of transactions, or if the overhead of consistency protocols offsets the savings of locality, potential gains cannot be realized. These issues are heavily dependent on the application used for evaluation and render simplistic benchmarks useless. In this paper, we present a detailed analysis of partial database state machine (PDBSM) replication by comparing alternative partial replication protocols with full replication. This is done using a realistic scenario based on a detailed network simulator and access patterns from an industry-standard database benchmark. The results obtained allow us to identify the best configuration for typical online transaction processing applications.

Sousa AL, Pereira JO, Soares L, Correia A, Rocha L, Oliveira R, Moura F.  2005.  Testing the dependability and performance of group communication based database replication protocols. Proceedings of 35th IEEE/IFIP International Conference on Dependable Systems and Networks (DSN).

Database replication based on group communication systems has recently been proposed as an efficient and resilient solution for large-scale data management. However, its evaluation has been conducted either on simplistic simulation models, which fail to assess concrete implementations, or on complete system implementations which are costly to test with realistic large-scale scenarios. This paper presents a tool that combines implementations of replication and communication protocols under study with simulated network, database engine, and traffic generator models. Replication components can therefore be subjected to realistic large scale loads in a variety of scenarios, including fault-injection, while at the same time providing global observation and control. The paper shows first how the model is configured and validated to closely reproduce the behavior of a real system, and then how it is applied, allowing us to derive interesting conclusions both on replication and communication protocols and on their implementations.

Correia A, Sousa AL, Soares L, Pereira JO, Moura F, Oliveira R.  2005.  Group-based replication of on-line transaction processing servers. Proceedings of 2nd Latin-American Symposium on Dependable Computing (LADC). 3747

Several techniques for database replication using group communication have recently been proposed, namely the Database State Machine, Postgres-R, and the NODO protocol. Although all rely on a totally ordered multicast for consistency, they differ substantially in how multicast is used. This results in different performance trade-offs which are hard to compare, as each protocol is presented using a different load scenario and evaluation method. In this paper we evaluate the suitability of such protocols for replication of On-Line Transaction Processing (OLTP) applications in clusters of servers and over wide area networks. This is achieved by implementing them on a common infrastructure and using a standard workload. The results allow us to select the best protocol regarding performance and scalability in a demanding but realistic usage scenario.

Correia A, Sousa AL, Soares L, Moura F, Oliveira R.  2004.  Revisiting epsilon serializability to improve the database state machine. Proceedings of the Workshop on Dependable Distributed Data Management (with SRDS 2004).

Recently, a large body of research has been exploiting group communication techniques to improve the dependability and performance of synchronously replicated database systems.

Sousa AL, Pereira JO, Oliveira R, Moura F.  2004.  Semantic reliability on the Database State Machine. JISBD - Jornadas de Ingenieria del Software y Bases de Datos.

Database replication protocols based on group communication primitives have recently been the subject of a considerable body of research [1, 11, 13, 6, 8, 4]. The reason for this stems from the adequacy of the order and atomicity properties of group communication primitives for implementing synchronous (i.e., strongly consistent) replication strategies. Unlike database replication schemes based on traditional transactional.

Soares L, Sousa AL, Pereira JO, Oliveira R, Rocha L, Moura F, Correia A.  2003.  Avaliação de um SGBD replicado usando simulação de redes. Actas da 6ª Conferência sobre Redes de Computadores.

The replication of database management systems (DBMS) is a fundamental mechanism for the reliability of information systems. In geographically distributed systems, it is also useful for disaster recovery and ubiquitous data availability. A recently proposed replication technique is the Database State Machine (DBSM), which promises to combine reliability with high performance by taking advantage of group communication systems. The performance evaluation of this technique has, however, been carried out with overly simple or unrealistic communication networks and a non-representative workload. This paper proposes a rigorous evaluation of an implementation of this replication technique, combining a realistic simulation model of communication networks with workload generation following the standards of the Transaction Processing Council (TPC). The results obtained confirm the interest of this technique in local area networks, but show that its performance is constrained by the characteristics of the network and of the workload.

Sousa AL, Pereira JO, Moura F, Oliveira R.  2002.  Optimistic total order in wide area networks. Proceedings of 21st IEEE International Symposium on Reliable Distributed Systems (SRDS).

Total order multicast greatly simplifies the implementation of fault-tolerant services using the replicated state machine approach. The additional latency of total ordering can be masked by taking advantage of spontaneous ordering observed in LANs: A tentative delivery allows the application to proceed in parallel with the ordering protocol. The effectiveness of the technique rests on the optimistic assumption that a large share of correctly ordered tentative deliveries offsets the cost of undoing the effect of mistakes. This paper proposes a simple technique which enables the usage of optimistic delivery also in WANs with much larger transmission delays, where the optimistic assumption does not normally hold. Our proposal exploits local clocks and the stability of network delays to reduce the mistakes in the ordering of tentative deliveries. An experimental evaluation of a modified sequencer-based protocol is presented, illustrating the usefulness of the approach in fault-tolerant database management.

Sousa AL, Pedone F, Oliveira R, Moura F.  2001.  Partial replication in the database state machine. Proceedings of the IEEE International Symposium on Network Computing and Applications (NCA).

This paper investigates the use of partial replication in the Database State Machine approach introduced earlier for fully replicated databases. It builds on the order and atomicity properties of group communication primitives to achieve strong consistency and proposes two new abstractions: Resilient Atomic Commit and Fast Atomic Broadcast. Even with atomic broadcast, partial replication requires a termination protocol such as atomic commit to ensure transaction atomicity. With Resilient Atomic Commit, our termination protocol allows the commit of a transaction despite the failure of some of the participants. Preliminary performance studies suggest that the additional cost of supporting partial replication can be mitigated through the use of Fast Atomic Broadcast.

Pereira JO, Oliveira R, Sousa AL, Moura F.  1999.  Ardina – Difusão de Informação em Computação Móvel. Actas do Encontro Português de Computação Móvel.

This paper presents Ardina, a modular infrastructure to support information dissemination via palmtops. Information dissemination in Ardina relies on exploiting the usual point-to-point synchronization operation between a palmtop and a support station, or between two palmtops.

Sousa AL, Moreno CB, Pereira JO, Oliveira R, Moura F.  1996.  A human centered perspective for mobile information sharing and delivery. ECOOP - Workshop Reader of the 10th European Conference on Object-Oriented Programming. :412.

This paper focuses on the design and implementation of a suitable communication layer for groups of mobile systems, targeted at supporting personal information management and exchange applications. A PIM was used as a case study, focusing on the problems raised by the process of scheduling meetings. As a first step, the environment is evaluated in order to establish some facts and assumptions that should shape the system. We also think that the observation of human behavior, in particular the study of human information interchange techniques.