Publications

Nunes A, Couto R, Pacheco H, Bessa R, Gouveia C, Seca L, Moreira J, Nunes P, Matos P, Oliveira A.  2017.  Towards new data management platforms for a DSO as market enabler – UPGRID Portugal demo.
Nunes A, Pereira JO.  2013.  Improving transaction abort rates without compromising throughput through judicious scheduling. Proceedings of the 28th Annual ACM Symposium on Applied Computing. :493–494.

Although optimistic concurrency control protocols have increasingly been used in distributed database management systems, they imply a trade-off between the number of transactions that can be executed concurrently, and hence the peak throughput, and the number of transactions aborted due to conflicts.
We propose a novel optimistic concurrency control mechanism that controls the transaction abort rate by minimizing the time during which transactions are vulnerable to abort, without compromising throughput. Briefly, we throttle transaction execution with an adaptive mechanism based on the state of the transaction queues, while allowing out-of-order execution based on expected transaction latency. Preliminary evaluation shows that this provides a substantial improvement in committed transaction throughput.
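
A minimal Python sketch of the throttling idea, under assumed names and timing constants that are not taken from the paper: a transaction's execution is started only as late as the current certification queue suggests it needs to be, so that it does not sit executed but uncertified while concurrent transactions commit.

    import time
    from collections import deque

    class ThrottledScheduler:
        """Illustrative throttle (hypothetical names and constants): delay the
        start of a transaction so that it finishes close to its certification
        turn, shrinking the window in which it is vulnerable to abort."""

        def __init__(self, expected_exec_time=0.010):
            self.cert_queue = deque()        # transactions awaiting certification
            self.expected_exec_time = expected_exec_time
            self.avg_certify_time = 0.005    # running estimate, adapted below

        def submit(self, txn):
            # Estimated wait until this transaction's certification slot.
            wait = len(self.cert_queue) * self.avg_certify_time
            # Hold the transaction back instead of letting it sit executed but
            # uncertified, where concurrent commits could abort it.
            time.sleep(max(0.0, wait - self.expected_exec_time))
            result = txn()                   # execute the transaction body
            self.cert_queue.append(result)   # queue its write set for certification
            return result

        def certified(self, elapsed):
            # Feedback: refine the per-transaction certification estimate.
            self.cert_queue.popleft()
            self.avg_certify_time = 0.9 * self.avg_certify_time + 0.1 * elapsed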

Nunes A, Oliveira R, Pereira JO.  2013.  Conflict Classes for Replicated Databases: A Case-Study. Workshop on Planetary-Scale Distributed Systems - W-PSDS.

The major challenge in fault-tolerant replicated transactional databases is providing efficient distributed concurrency control that allows non-conflicting transactions to execute concurrently. A common approach is to partition the data according to the data access patterns of the workload, assuming that this will allow operations in each partition to be scheduled independently and run in parallel.
The effectiveness of this approach hinges on the characteristics of the workload: (i) the ability to identify such partitions and (ii) the actual number of such partitions that arises. Performance results that have been presented to support such proposals are thus tightly linked to the simplistic synthetic benchmarks that have been used. This is worrisome, since these benchmarks have not been conceived for this purpose and the resulting definition of partitions might not be representative of real applications. In this paper we contrast a more complex synthetic benchmark (TPC-E) with a real application in the same area (financial brokerage), concluding that the real setting makes it much harder to determine a correct partition of the data and that sub-optimal partitioning severely constrains the performance of replication.
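
As a rough illustration of the partitioning approach, assuming hypothetical partition names and a simplified queue model rather than anything from the paper: single-partition transactions can be scheduled independently, whereas transactions that span partitions must be ordered against every class they touch.

    from collections import defaultdict

    class ConflictClassRouter:
        """Illustrative router (hypothetical names): single-partition transactions
        go to one queue and can run in parallel across partitions; transactions
        spanning several partitions must be ordered against every class they
        touch, which erodes the benefit when no clean partitioning exists."""

        def __init__(self):
            self.queues = defaultdict(list)

        def route(self, txn_id, accessed_partitions):
            for partition in accessed_partitions:
                self.queues[partition].append(txn_id)
            return len(accessed_partitions) == 1   # True if fully parallelizable

    router = ConflictClassRouter()
    router.route("t1", {"customers"})              # single class: parallelizable
    router.route("t2", {"customers", "trades"})    # cross-class: limits parallelism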

Nunes A, Oliveira R, Pereira JO.  2013.  AJITTS: Adaptive just-in-time transaction scheduling. Distributed Applications and Interoperable Systems - DAIS. :57–70.

Distributed transaction processing has benefited greatly from optimistic concurrency control protocols, thus avoiding costly fine-grained synchronization. However, the performance of these protocols degrades significantly when the workload increases, namely by leading to a substantial number of transactions aborted due to concurrency conflicts. Our approach stems from the observation that the abort rate increases with the load, as already-executed transactions queue for longer periods of time waiting for their turn to be certified and committed. We thus propose an adaptive algorithm for judiciously scheduling transactions to minimize the time during which these are vulnerable to being aborted by concurrent transactions, thereby reducing the overall abort rate. We do so by throttling transaction execution using an adaptive mechanism based on the locally known state of globally executing transactions, which includes out-of-order execution.
Our evaluation using traces from the industry standard TPC-E workload shows that the amount of aborted transactions can be kept bounded as system load increases, while at the same time fully utilizing system resources and thus scaling transaction processing throughput.
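
The adaptive part can be pictured as a small feedback controller; this sketch uses hypothetical step sizes and a simplified rule rather than the adjustment actually derived in the paper from the locally known state of in-flight transactions.

    class AdaptiveDelayController:
        """Illustrative feedback rule (not the paper's exact adjustment): back off
        when recently observed transactions abort, release the throttle when they
        commit, keeping execution 'just in time' for certification under load."""

        def __init__(self, step=0.001, max_delay=0.100):
            self.delay = 0.0
            self.step = step
            self.max_delay = max_delay

        def on_outcome(self, aborted):
            if aborted:
                # Executed too early and sat vulnerable for too long: delay more.
                self.delay = min(self.max_delay, self.delay + self.step)
            else:
                # Committed cleanly: schedule slightly earlier next time.
                self.delay = max(0.0, self.delay - self.step / 2)
            return self.delay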

Matos M, Nunes A, Oliveira R, Pereira JO.  2010.  StAN: exploiting shared interests without disclosing them in gossip-based publish/subscribe. 9th International Workshop on Peer-to-Peer Systems - IPTPS. :9.

Publish/subscribe mechanisms for scalable event dissemination are a core component of many distributed systems ranging from EAI middleware to news dissemination in the Internet. Hence, a lot of research has been done on overlay networks for efficient decentralized topic-based routing. Specifically, in gossip-based dissemination, approximating nodes with shared interests in the overlay makes dissemination more efficient. Unfortunately, this usually requires fully disclosing interests to nearby nodes and impacts reliability due to clustering.
In this paper we address this by starting with multiple overlays, one per subscribed topic, which then separately self-organize to share a large number of physical connections, thereby reducing message traffic and maintenance overhead. This is achieved without a node ever disclosing an interest to another node that does not share it, and without impacting the robustness of the overlay. Besides presenting the overlay maintenance protocol, we evaluate it using simulation to validate our results.
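
A simplified sketch of the neighbor-selection bias, with illustrative node identifiers: for each topic, a node prefers candidates to which it already holds a physical connection for some other topic, without either side revealing which topics that connection serves.

    import random

    def choose_neighbors(topic_view, connected_peers, fanout):
        """Simplified StAN-style bias: when selecting overlay neighbors for one
        topic, prefer candidates to which a physical connection already exists
        for some other topic. Nothing about why a peer is already connected is
        exchanged, so no interest is disclosed."""
        candidates = list(topic_view)
        random.shuffle(candidates)
        # Stable sort: already-connected peers first, others keep random order.
        candidates.sort(key=lambda peer: peer not in connected_peers)
        return candidates[:fanout]

    view = {"node3", "node7", "node9", "node12"}     # peers known for this topic
    connections = {"node7", "node12"}                # links reused from other topics
    print(choose_neighbors(view, connections, fanout=3))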

Nunes A, Marques J, Pereira JO.  2009.  Seeds: The social internet feed caching and dissemination architecture. INForum - Simpósio de Informática. :25.

Syndicated content in the Internet has been a huge success ever since the early days of RSS 0.9 and MyNetscape. Currently, it is the cornerstone of content push, ranging from podcasts to emerging Web 2.0 sites such as FriendFeed and Plexus. Unfortunately, the simple technology that makes publication and subscription easy and flexible, which partly explains its success, also limits its usefulness in more demanding applications.
This paper proposes a novel distributed architecture for feed caching and dissemination. It leverages social networks as promoters of discovery and aggregation, and peer-to-peer protocols for content distribution, while providing an evolutionary upgrade path that does not disrupt current infrastructure or require changes to publishers’ or consumers’ habits.
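
One way to picture the lookup path is the sketch below, with assumed interfaces (a cache mapping, peer objects offering a get method, and an http_get fallback) that do not come from the paper: the local cache is tried first, then peers reached through social links, and only then the origin publisher.

    def fetch_feed(url, cache, peers, http_get):
        """Illustrative lookup order (assumed interfaces, not the Seeds API):
        serve from the local cache when fresh, otherwise ask peers that follow
        the same feed before falling back to the origin publisher."""
        entry = cache.get(url)
        if entry is not None and entry["fresh"]:
            return entry["body"]
        for peer in peers:                   # peers discovered via social links
            body = peer.get(url)             # assumed peer lookup method
            if body is not None:
                cache[url] = {"body": body, "fresh": True}
                return body
        body = http_get(url)                 # last resort: the publisher itself
        cache[url] = {"body": body, "fresh": True}
        return body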

Nunes A.  2009.  P2P content-push in the Internet.

Syndicated content-push in the Internet is a huge success, and web feeds are being used to convey various types of content: from news headlines, to podcasts and video podcasts, to being a feature in Web 2.0 websites. This diversity has led to the appearance of several frameworks, each tailored to a specific content type. At the same time, interest in social networking has exploded, as more and more websites for this purpose were launched. Syndicated content and social networking websites are now intimately connected.
In this work, a generic, modular, p2p content-push architecture is proposed. It provides an evolutionary upgrade path based on the technologies already in use in the Internet for interoperability, thus without disrupting current infrastructure or changing participants’ habits. It also leverages social networks for content discovery and aggregation, using peer-to-peer protocols for distribution. A partial implementation of this architecture, dubbed SEEDS, is also presented.