Recent Publications

Belo O, Faria J, Ribeiro AN, Oliveira B, Santos V.  2015.  Modeling E-Government Processes Using YAWL: Half-Way Towards Their Effective Real Implementation. Proceedings of the 8th International Conference on Theory and Practice of Electronic Governance - ICEGOV. :288–291.

Today E-Government institutions face a lot of challenges related to the quality and effectiveness of the services they provide. In most cases, their users are more demanding, imposing new ways of acting and dealing with their needs, often requesting expeditious and effective attendance. Independently of their nature, we believe that such pertinent characteristics begin to be sustained as soon as we start to study and model E-Government processes. Modeling and simulation are useful tools for assuring the availability of E-Government services in many aspects, contributing significantly to improving process implementation, ranging from inception to final software application roll-out and maintenance. In this paper we study the use of YAWL – a workflow language – for modeling E-Government processes, showing through a real-world application case how it can help us construct effective models that may be used as a basis for understanding and building the corresponding software applications.

Daniels W, Proença J, Clarke D, Joosen W, Hughes D.  2015.  Refraction: Low-Cost Management of Reflective Meta-Data in Pervasive Component-Based Applications. Proceedings of the 18th International Symposium on Component-Based Software Engineering - CBSE. :27–36.

This paper proposes the concept of refraction, a principled means to lower the cost of managing reflective meta-data for pervasive systems. While prior work has demonstrated the benefits of reflective component-based middleware for building open and reconfigurable applications, the cost of using remote reflective operations remains high. Refractive components address this problem by selectively augmenting application data flows with their reflective meta-data, which travels at low cost to refractive pools serving as loci of inspection and control for the distributed application. Additionally, reactive policies are introduced, providing a mechanism to trigger reconfigurations based on incoming reflective meta-data. We evaluate the performance of refraction in a case study of automatic configuration repair for a real-world pervasive application. We show that refraction reduces network overhead in comparison to the direct use of reflective operations while not increasing development overhead. To enable further experimentation with the concept of refraction, we provide RxCom, an open-source refractive component model and supporting runtime environment.
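
The core idea – a component piggybacking its reflective meta-data on the data messages it already emits, and a refractive pool collecting it – can be conveyed with the small Python sketch below. This is an illustration only, not the RxCom implementation; all names and meta-data fields (RefractivePool, RefractiveComponent, bindings) are hypothetical.

    # Illustrative sketch of refraction: reflective meta-data rides along with
    # ordinary application data instead of being fetched via remote reflective
    # operations. All names are hypothetical placeholders.

    class RefractivePool:
        """Locus of inspection/control: accumulates meta-data seen in data flows."""
        def __init__(self):
            self.meta = {}                      # component id -> last known meta-data

        def observe(self, message):
            if "meta" in message:               # meta-data refracted into the flow
                self.meta[message["source"]] = message["meta"]

    class RefractiveComponent:
        def __init__(self, cid, bindings):
            self.cid = cid
            self.bindings = bindings            # reflective state: current wiring

        def send(self, payload, attach_meta=False):
            msg = {"source": self.cid, "data": payload}
            if attach_meta:                     # selective augmentation keeps cost low
                msg["meta"] = {"bindings": list(self.bindings)}
            return msg

    pool = RefractivePool()
    comp = RefractiveComponent("sensor-1", ["filter-2"])
    pool.observe(comp.send({"temp": 21.5}, attach_meta=True))
    print(pool.meta)                            # {'sensor-1': {'bindings': ['filter-2']}}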

Ramachandran GS, Daniels W, Proença J, Michiels S, Joosen W, Hughes D, Porter B.  2015.  Hitch Hiker: A Remote Binding Model with Priority Based Data Aggregation for Wireless Sensor Networks. Proceedings of the 18th International Symposium on Component-Based Software Engineering, CBSE. :43–48.

The aggregation of network traffic has been shown to enhance the performance of wireless sensor networks. By reducing the number of packets that are transmitted, energy consumption, collisions and congestion are minimised. However, current data aggregation schemes restrict developers to a specific network structure or cannot handle multi-hop data aggregation. In this paper, we propose Hitch Hiker, a remote component binding model that provides support for multi-hop data aggregation. Hitch Hiker uses component meta-data to discover remote component bindings and to construct a multi-hop overlay network within the free payload space of existing traffic flows. This overlay network provides end-to-end routing of low-priority traffic while using only a small fraction of the energy of standard communication. We have developed a prototype implementation of Hitch Hiker for the LooCI component model. Our evaluation shows that Hitch Hiker consumes minimal resources and that using Hitch Hiker to deliver low-priority traffic reduces energy consumption by up to 15%.
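
As a rough illustration of riding in the free payload space of existing traffic, the Python sketch below fills the unused bytes of an outgoing packet with queued low-priority messages. The payload size, queue and framing are assumptions made for the example; the actual Hitch Hiker binding model and its LooCI integration are more involved.

    # Sketch: piggyback low-priority messages in the spare payload bytes of
    # packets that are being sent anyway (hypothetical sizes and message format).

    from collections import deque

    MAX_PAYLOAD = 28                 # bytes available in a frame (assumed value)

    low_priority = deque()           # queue of small, delay-tolerant messages (bytes)

    def build_packet(primary: bytes) -> bytes:
        """Return the primary payload plus as many hitch-hiking messages as fit."""
        packet = bytearray(primary)
        free = MAX_PAYLOAD - len(packet)
        while low_priority and len(low_priority[0]) + 1 <= free:  # +1 for a length byte
            msg = low_priority.popleft()
            packet.append(len(msg))  # simple framing so the receiver can split messages
            packet.extend(msg)
            free = MAX_PAYLOAD - len(packet)
        return bytes(packet)

    low_priority.extend([b"\x01\x02", b"\x7f"])
    print(build_packet(b"temperature=21"))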

Backes M, Barbosa MB, Fiore D, Reischuk RM.  2015.  ADSNARK: Nearly Practical and Privacy-Preserving Proofs on Authenticated Data. 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015. :271–286.

We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof computation time and a 20× reduction in prover storage space.

Guimarães P, Pereira JO.  2015.  X-Ray: Monitoring and analysis of distributed database queries. Distributed Applications and Interoperable Systems (DAIS, with DisCoTec).

The integration of multiple database technologies, including both SQL and NoSQL, allows using the best tool for each aspect of a complex problem and is increasingly sought in practice. Unfortunately, this makes it difficult for database developers and administrators to obtain a clear view of the resulting composite data processing paths, as they combine operations chosen by different query optimisers, implemented by different software packages, and partitioned across distributed systems.

This work addresses this challenge with the X-Ray framework, which allows monitoring code to be added to a Java-based distributed system by manipulating its bytecode at runtime. The resulting information is collected in a NoSQL database and then processed to visualise data processing paths as required for optimising integrated database systems. This proposal is demonstrated with a distributed query over a federation of Apache Derby database servers and its performance is evaluated with the standard TPC-C benchmark workload.

Macedo N, Cunha A, Guimarães T.  2015.  Exploring Scenario Exploration. Proceedings of the 18th International Conference on Fundamental Approaches to Software Engineering.

Model finders are very popular for exploring scenarios, helping users validate specifications by navigating through conforming model instances. To be practical, the semantics of such scenario exploration operations should be formally defined and, ideally, controlled by the users, so that they are able to quickly reach interesting scenarios.
This paper explores the landscape of scenario exploration operations, by formalizing them with a relational model finder. Several scenario exploration operations provided by existing tools are formalized, and new ones are proposed, namely to allow the user to easily explore very similar (or different) scenarios, by attaching preferences to model elements. As a proof-of-concept, such operations were implemented in the popular Alloy Analyzer, further increasing its usefulness for (user-guided) scenario exploration.
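
One way to read the "explore a very similar scenario" operation is as a minimisation problem over candidate instances, with user preferences weighting how costly it is to change each model element. The brute-force Python sketch below conveys only that reading; the paper realises such operations symbolically with a relational model finder inside the Alloy Analyzer, and the weighting scheme here is an assumption for illustration.

    # Sketch: pick the next scenario that is as similar as possible to the current
    # one, where preferences weight how much changing each element "costs".
    # Brute-force over an explicit list of valid scenarios; a model finder would
    # search the instance space symbolically instead.

    def distance(current: frozenset, candidate: frozenset, weight) -> float:
        """Weighted symmetric difference between two scenarios (sets of facts)."""
        return sum(weight(e) for e in current ^ candidate)

    def next_similar(current, valid_scenarios, weight=lambda e: 1.0):
        others = [s for s in valid_scenarios if s != current]
        return min(others, key=lambda s: distance(current, s, weight)) if others else None

    current = frozenset({("File", "f1"), ("Trash", "f1")})
    valid = [frozenset({("File", "f1")}),
             frozenset({("File", "f1"), ("File", "f2"), ("Trash", "f1")}),
             current]

    # Preference: keep Trash facts unchanged by making them expensive to flip.
    prefer_trash = lambda e: 10.0 if e[0] == "Trash" else 1.0
    print(next_similar(current, valid, prefer_trash))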

Lamas J, Silva CC, Silva M, Mouta S, Campos JC, Santos J.  2015.  Measuring end-to-end delay in real-time auralisation systems. Euronoise – 10th European Congress and Exposition on Noise Control Engineering.

One of the major challenges in the development of an immersive system is handling the delay between the tracking of the user’s head position and the updated projection of a 3D image or auralised sound, also called end-to-end delay. Excessive end-to-end delay can result in a general decrement of the “feeling of presence”, the occurrence of motion sickness and poor performance in perception-action tasks. These latencies must be known in order to provide insights on the technological (hardware/software optimization) or psychophysical (recalibration sessions) strategies to deal with them. Our goal was to develop a new measurement method of end-to-end delay that is both precise and easily replicated. We used a Head and Torso Simulator (HATS) as an auditory signal sensor, a fast-response photo-sensor to detect a visual stimulus response from a Motion Capture System, and a voltage input trigger as a real-time event. The HATS was mounted on a turntable, which allowed us to precisely change the 3D sound relative to the head position. When the virtual sound source was at 90º azimuth, the corresponding HRTF would set all the intensity values to zero, while at the same time a trigger would register the real-time event of turning the HATS to 90º azimuth. Furthermore, with the HATS turned 90º to the left, the motion capture marker visualization would fall exactly on the photo-sensor receptor. This method allowed us to precisely measure the delay from tracking to displaying. Moreover, our results show that the method of tracking, its tracking frequency, and the rendering of the sound reflections are the main predictors of end-to-end delay.
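
In essence, the measurement reduces to pairing each trigger timestamp with the corresponding detection of the updated output and taking the difference; the short sketch below shows that arithmetic on hypothetical timestamp logs (in the real setup the timestamps come from the voltage trigger and the photo-sensor/HATS recordings on a common clock).

    # Sketch: end-to-end delay = time the rendered output changes minus the time
    # the real-world event (the 90-degree turn) was triggered. The values below
    # are invented for illustration.

    from statistics import mean, stdev

    trigger_times  = [0.000, 2.000, 4.000]   # real-time events (s), assumed
    response_times = [0.062, 2.058, 4.071]   # detected display/auralisation change (s)

    delays_ms = [(r - t) * 1000.0 for t, r in zip(trigger_times, response_times)]
    print(f"mean end-to-end delay: {mean(delays_ms):.1f} ms "
          f"(sd {stdev(delays_ms):.1f} ms, n={len(delays_ms)})")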

Moreno CB, Lima R, Miranda H.  2015.  Adaptive Broadcast Cancellation Query Mechanism for Unstructured Networks. 9th International Conference on Next Generation Mobile Applications, Services and Technologies.

The availability of cheap embedded sensors in mobile devices boosted the emergence of unstructured networks using wireless technologies without centralised administration. However, a simple task such as collecting temperature readings needs a discovery service to find a thermometer. Usually, resource discovery relies on flooding mechanisms, wasting energy and compromising system availability. On the other hand, energy-efficient solutions based on broadcast cancellation mechanisms have a significant impact on latency. The paper proposes ABC (Adaptive Broadcast Cancellation), a new algorithm that uses the knowledge acquired in the past to accelerate queries towards the resource. Each node listens to its neighbours and the acquired context is stored in a variation of Bloom filters.

Lima R, Moreno CB, Miranda H.  2015.  FBL - Filtro Bloom Linear (Linear Bloom Filter). Infórum 2015.

Data structures that store information probabilistically (in particular, Bloom filters) are characterised by allowing the trade-off between storage-space efficiency and answer precision to be tuned. This possibility has motivated their use in adverse scenarios, for example in sensor networks, where devices have limited resources (memory, CPU, energy). This paper presents a Linear Bloom Filter (FBL) mechanism, which allows each element to be associated with a probability quantised in the real interval [0,1], overcoming the limitations imposed by the binary nature of Bloom filters. The results show that it is possible to parameterise an FBL so as to keep the error acceptable as a function of the number of bits used for quantisation and the number of hash functions used for indexing. The paper discusses the application of FBLs to dissemination and resource discovery mechanisms in sensor networks, showing how they help keep the size of the messages exchanged by the sensors constant, regardless of the size of the network.
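
One plausible reading of how an FBL generalises the binary Bloom filter is sketched below: each of the k addressed cells stores a quantised value in [0,1] rather than a single bit, and a query combines (here, takes the minimum of) those cells. The cell-update rule and quantisation parameters are assumptions made for illustration; the paper's exact definition may differ.

    # Sketch of a "linear" Bloom filter: cells hold quantised values in [0,1]
    # instead of bits. Update keeps the maximum seen per cell; query returns the
    # minimum over the k cells (assumed semantics, for illustration only).

    import hashlib

    class LinearBloomFilter:
        def __init__(self, m=64, k=3, bits=4):
            self.m, self.k = m, k
            self.levels = (1 << bits) - 1        # quantisation levels per cell
            self.cells = [0] * m                 # quantised values, 0 .. levels

        def _positions(self, item: str):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:4], "big") % self.m

        def add(self, item: str, p: float):
            q = round(min(max(p, 0.0), 1.0) * self.levels)
            for pos in self._positions(item):
                self.cells[pos] = max(self.cells[pos], q)

        def query(self, item: str) -> float:
            return min(self.cells[pos] for pos in self._positions(item)) / self.levels

    f = LinearBloomFilter()
    f.add("thermometer", 0.8)
    print(f.query("thermometer"))   # ~0.8 (may be overestimated, never underestimated)
    print(f.query("barometer"))     # most likely 0.0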

Oliveira R, Matos M, Felber P, Sutra P, Rivière E, Schiavoni V.  2015.  TOPiCo: detecting most frequent items from multiple high-rate event streams. DEBS 2015.

Systems such as social networks, search engines or trading platforms operate geographically distant sites that continuously generate streams of events at high rate. Such events can be access logs to web servers, feeds of messages from participants of a social network, or financial data, among others. The ability to timely detect trends and popularity variations is of paramount importance in such systems. In particular, determining the most popular events across all sites makes it possible to capture the most relevant information in near real-time and quickly adapt the system to the load. This paper presents TOPiCo, a protocol that computes the most popular events across geo-distributed sites in a low-cost, bandwidth-efficient and timely manner. TOPiCo starts by building the set of most popular events locally at each site. Then, it disseminates only events that have a chance to be among the most popular ones across all sites, significantly reducing the required bandwidth. We give a correctness proof of our algorithm and evaluate TOPiCo using a real-world trace of more than 240 million events spread across 32 sites. Our empirical results show that (i) TOPiCo is timely and cost-efficient for detecting popular events in a large-scale setting, (ii) it adapts dynamically to the distribution of the events, and (iii) our protocol is particularly efficient for skewed distributions.
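
The first step – computing a local top-k at each site and only considering those candidates for the global answer – can be conveyed with the short Python sketch below. The merge shown is a naive simplification that just sums the local top-k counts; TOPiCo's actual contribution is deciding which of these candidates are worth disseminating across sites, and the site names and data here are assumptions.

    # Sketch: local top-k per site, then a naive merge of only those candidates.

    from collections import Counter

    K = 2
    site_streams = {
        "eu":   ["a", "a", "b", "c", "a", "b"],
        "us":   ["b", "b", "d", "b", "a"],
        "asia": ["c", "c", "b", "c"],
    }

    # 1. Each site builds its local top-K from its own high-rate stream.
    local_topk = {s: Counter(ev).most_common(K) for s, ev in site_streams.items()}

    # 2. Only locally popular events become candidates for the global top-K.
    merged = Counter()
    for site, topk in local_topk.items():
        for event, count in topk:
            merged[event] += count

    print(local_topk)
    print(merged.most_common(K))    # approximate global top-K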

Almeida D, Campos JC, Saraiva JA, Silva JC.  2015.  Towards a catalog of usability smells. SAC - Proceedings of the 30th Annual ACM Symposium on Applied Computing.

This paper presents a catalog of smells in the context of interactive applications. These so-called usability smells are indicators of poor design in an application's user interface, with the potential to hinder not only its usability but also its maintenance and evolution. To eliminate such usability smells, we discuss a set of program/usability refactorings. In order to validate the presented usability smells catalog and the associated refactorings, we present a preliminary empirical study with software developers in the context of a real open-source hospital management application. Moreover, a tool that computes graphical user interface behavior models, given the applications' source code, is used to automatically detect usability smells at the model level.

Abreu R, Passos LS, Rossetti RJF.  2015.  Spectrum-Based Fault Localisation for Multi-Agent Systems. IJCAI - 24th International Joint Conference on Artificial Intelligence.

Diagnosing unwanted behaviour in Multi-Agent Systems (MASs) is crucial to ascertain agents’ correct operation. However, generation of MAS models is both error-prone and time-intensive, as it increases exponentially with the number of agents and their interactions. In this paper, we propose a lightweight, automatic debugging-based technique, coined ESFL-MAS, which shortens the diagnostic process, while only relying on minimal information about the system. ESFL-MAS uses a heuristic that quantifies the suspiciousness of an agent being faulty; therefore, different heuristics may have a different impact on the diagnostic quality. Our experimental evaluation shows that 10 out of 42 heuristics yield the best diagnostic accuracy (96.26% on average).
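
For readers unfamiliar with spectrum-based fault localisation, the Python sketch below shows the underlying idea applied to agents: record which agents were active in passed and failed runs, then rank agents by a suspiciousness heuristic. Ochiai is used here only as a well-known example of such a heuristic, and the spectra are invented; the paper compares 42 heuristics over real MAS data.

    # Sketch of spectrum-based fault localisation over agents: rank agents by how
    # strongly their activity correlates with failed runs.

    from math import sqrt

    # Activity spectra: which agents were active in each run, and the run verdict.
    runs = [
        ({"a1", "a2"},       "fail"),
        ({"a1", "a3"},       "pass"),
        ({"a2", "a3"},       "fail"),
        ({"a1", "a2", "a3"}, "pass"),
    ]

    def ochiai(agent):
        ef = sum(1 for active, v in runs if agent in active and v == "fail")
        ep = sum(1 for active, v in runs if agent in active and v == "pass")
        nf = sum(1 for active, v in runs if agent not in active and v == "fail")
        denom = sqrt((ef + nf) * (ef + ep))
        return ef / denom if denom else 0.0

    agents = {a for active, _ in runs for a in active}
    for agent, s in sorted(((a, ochiai(a)) for a in agents), key=lambda x: -x[1]):
        print(f"{agent}: suspiciousness {s:.2f}")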

Masci P, Oladimeji P, Zhang Y, Jones P, Curzon P, Thimbleby H.  2015.  PVSio-web 2.0: Joining PVS to Human-Computer Interaction. 27th International Conference on Computer Aided Verification (CAV2015).

PVSio-web is a graphical environment for facilitating the design and evaluation of interactive (human-computer) systems. Using PVSio-web, one can generate and evaluate realistic interactive prototypes from formal models. PVSio-web has been successfully used over the last two years for analyzing commercial, safety-critical medical devices. It has been used to create training material for device developers and device users. It has also been used for medical device design, by both formal methods experts and non-technical end users.

This paper presents the latest release of PVSio-web 2.0, which will be part of the next PVS distribution. The new tool architecture is discussed, and the rationale behind its design choices is presented.

Masci P, Couto LD, Larsen PG, Curzon P.  2015.  Integrating the PVSio-web modelling and prototyping environment with Overture. 13th Overture Workshop, satellite event of FM2015.

Tools are needed that overcome the barriers preventing development teams from using formal verification technologies. We present our work integrating PVSio-web with the Overture development and analysis environment for VDM. PVSio-web is a graphical environment for modelling and prototyping interactive systems. Prototypes developed within PVSio-web can closely resemble the visual appearance and behaviour of a real system. The behaviour of the prototypes is entirely driven by executable formal models. These formal models can be generated automatically from Emucharts, graphical diagrams based on the Statechart notation. Emucharts conveniently hides aspects of the formal syntax that create barriers for developers and domain experts who are new to formal methods. Here, we present the implementation of a VDM-SL model generator for Emucharts. An example is presented based on a medical device. It demonstrates the benefits of using Emucharts to develop a formal model, how PVSio-web can be used to perform lightweight formal analysis, and how the developed VDM-SL model generator can be used to produce a model that can be further analysed within Overture.

Masci P, Mallozzi P, DeAngelis FL, Serugendo GDM, Curzon P.  2015.  Using PVSio-web and SAPERE for rapid prototyping of user interfaces in Integrated Clinical Environments. Verisure2015, Workshop on Verification and Assurance, co-located with CAV2015.

Integrated clinical environments (ICEs) consist of interoperable medical devices that seamlessly exchange data and commands to create safety interlocks and closed loop controls to improve the quality of care delivered to the patient. Currently at the prototype stage, they promise to form the basis of a new generation of healthcare systems for high acuity patients. Regulators such as the US Food and Drug Administration are promoting the development of tools and techniques for verification and validation of essential safety requirements for ICEs. To date, little research has focused on the certification and assurance of their user interfaces with respect to use errors. In this work, we demonstrate how the PVSio-web prototyping tool can be conveniently used in combination with the communication framework SAPERE to generate realistic ICE systems prototypes from formal models. This approach is particularly suitable for exploring requirements, design, and regulatory issues of usability and safety of the user interfaces of ICE systems. An example ICE system prototype is presented, along with an example analysis demonstrating how the prototype can be used to explore subtle user interface design issues that could lead to usability and safety hazards in clinical scenarios.