Journal Articles

Almeida JB, Pinto JS, Vilaça M.  2008.  A Tool for Programming with Interaction Nets. Electronic Notes in Theoretical Computer Science. 219:83-96.

This paper introduces INblobs, a visual tool developed at Minho for integrated development with Interaction Nets. Most of the existing tools take as input interaction nets and interaction rules represented in a textual format. INblobs is first of all a visual editor that allows users to edit interaction systems (both interaction nets and interaction rules) graphically, and to convert them to textual notation. This can then be used as input to other tools that implement reduction of nets. INblobs also allows the user to reduce nets within the tool, and includes a mechanism that automatically selects the next active pair to be reduced, following one of the given reduction strategies. The paper also describes other features of the tool, such as the creation of rules from pre-defined templates.

Saraiva JA, Bigonha R, Tirelo F.  2008.  Disentangling denotational semantics specifications. JUCS - Journal of Universal Computer Science. 14(21):3592-3607.

Denotational semantics is a powerful technique for formally defining programming languages. However, language constructs are not always orthogonal, so many semantic equations in a definition may have to be aware of the semantics of unrelated constructs. Current approaches to modularity in this formalism do not address this problem, and for this reason produce tangled semantic definitions. This paper proposes an incremental approach to denotational semantics specifications, in which each step can either add new features or adapt existing equations, by means of a formal language based on function transformation and aspect weaving.

Wang S, Barbosa LS, Ribeiro P.  2008.  An Exercise on Transition Systems. Electronic Notes in Theoretical Computer Science. 207:89-106.

Labelled transition systems admit different but equivalent characterizations, either as relational structures or as coalgebras for the powerset functor, each with its own merits. Notions of simulation and bisimulation, for example, are expressed in the pointfree relational calculus in a very concise and precise way. On the other hand, the coalgebraic perspective regards processes as inhabitants of a final universe and allows for an intuitive definition of the semantics of process combinators. This paper is an exercise on such a dual characterisation. In particular, it discusses how a notion of weak bisimilarity can be lifted from the relational to the coalgebraic level, to become an effective reasoning tool on coinductively defined process algebras.

Koehler C, Costa D, Proença J, Arbab F.  2008.  Reconfiguration of Reo Connectors Triggered by Dataflow. ECEASST - Electronic Communications of the EASST. 10

Reo is a language for coordinating autonomous components in distributed environments. Coordination in Reo is performed by circuit-like connectors, which are constructed from primitive, mobile channels with well-defined behaviour. While the structure of a connector can be modeled as a graph, its behaviour is compositionally defined using that of its primitive constituents. In previous work, we showed that graph transformation techniques are well suited to model reconfigurations of connectors. In this paper, we investigate how the connector colouring semantics can be used to perform dynamic reconfigurations. Dynamic reconfigurations are triggered by dataflow in the connector at runtime, when certain structural patterns enriched with dataflow annotations occur. For instance, we are able to elegantly model dynamic Reo circuits, such as the just-in-time augmentation of single-buffered channels to a circuit that models a channel with an unbounded buffer. Additionally, we extend Reo's visual notation and the Reo animation language to describe and animate dynamically reconfiguring connectors.

Proença J, Clarke D.  2008.  Coordination Models Orc and Reo Compared. Electronic Notes in Theoretical Computer Science. 194(4):57–76.

Orc and Reo are two complementary approaches to the problem of coordinating components or services. On one hand, Orc is highly asynchronous, naturally dynamic, and based on ephemeral connections to services. On the other hand, Reo is based on the interplay between synchronization and mutual exclusion, is more static, and establishes more continuous connections between components (services). The question of how Orc and Reo relate to each other naturally arises. In this paper, we present a detailed comparison between the two models. We demonstrate that embedding non-recursive Orc expressions into Reo connectors is straightforward, whereas recursive Orc expressions require an extension to the Reo model. For the other direction, we argue that embedding Reo into Orc would require significantly more effort. We conclude with some general observations and comparisons between the two approaches.

Bernardeschi C, Francesco ND, Lettieri G, Martini L, Masci P.  2008.  Decomposing bytecode verification by abstract interpretation. ACM Transactions on Programming Languages and Systems. 31(1):1-63.

Bytecode verification is a key point in the security chain of the Java platform. This feature is only optional in many embedded devices since the memory requirements of the verification process are too high. In this article we propose an approach that significantly reduces the use of memory by a serial/parallel decomposition of the verification into multiple specialized passes. The algorithm reduces the type encoding space by operating on different abstractions of the domain of types. The results of our evaluation show that this bytecode verification can be performed directly on small memory systems. The method is formalized in the framework of abstract interpretation.

Almeida PS, Moreno CB, Preguiça N, Hutchison D.  2007.  Scalable bloom filters. Information Processing Letters. 101:255–261.

Bloom filters provide space-efficient storage of sets at the cost of a probability of false positives on membership queries. The size of the filter must be defined a priori, based on the number of elements to store and the desired false positive probability, making it impossible to store extra elements without increasing the false positive probability. This typically leads to a conservative assumption regarding maximum set size, possibly off by orders of magnitude, and a consequent waste of space. This paper proposes Scalable Bloom Filters, a variant of Bloom filters that can adapt dynamically to the number of elements stored, while assuring a maximum false positive probability.
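
The core idea can be sketched as follows (an illustrative Python reconstruction, not the authors' implementation or parameter choices): a scalable filter chains plain Bloom filters whose error rates shrink geometrically, so the compound false-positive probability stays below a fixed bound while capacity grows on demand.

```python
import hashlib
import math

class BloomFilter:
    """Fixed-size Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, capacity, error_rate):
        # Standard sizing: m = -n*ln(p)/ln(2)^2, k = (m/n)*ln(2).
        self.capacity = capacity
        self.num_bits = max(1, int(-capacity * math.log(error_rate) / math.log(2) ** 2))
        self.num_hashes = max(1, round(self.num_bits / capacity * math.log(2)))
        self.bits = bytearray((self.num_bits + 7) // 8)
        self.count = 0

    def _positions(self, item):
        # Derive k indices from one SHA-256 digest via double hashing.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
        self.count += 1

    def __contains__(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(item))

class ScalableBloomFilter:
    """Chains filters with geometrically tightening error rates; the error
    series then sums below the requested overall bound."""
    def __init__(self, initial_capacity=128, error_rate=0.01, growth=2, tightening=0.5):
        self.growth, self.tightening = growth, tightening
        self._next_capacity = initial_capacity
        self._next_error = error_rate * (1 - tightening)
        self.filters = []
        self._grow()

    def _grow(self):
        self.filters.append(BloomFilter(self._next_capacity, self._next_error))
        self._next_capacity *= self.growth
        self._next_error *= self.tightening

    def add(self, item):
        if self.filters[-1].count >= self.filters[-1].capacity:
            self._grow()  # current stage full: open a larger, stricter stage
        self.filters[-1].add(item)

    def __contains__(self, item):
        return any(item in f for f in self.filters)
```

Membership queries consult every stage, so there are no false negatives; the growth factor and tightening ratio trade query cost against space, a trade-off the paper analyses in detail.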

Barbosa LS, Campos JC.  2007.  Towards a coordination model for interactive systems. Electronic Notes in Theoretical Computer Science. :73-88.

When modelling complex interactive systems, traditional interactor-based approaches suffer from a lack of expressiveness regarding the composition of the different interactors present in the user interface model into a coherent system. In this paper we investigate an alternative approach to the composition of interactors for the specification of complex interactive systems, based on the coordination paradigm. We lay out the foundations for the work and present an illustrative example. Lines for future work are identified.

Carvalho N, Correia A, Pereira JO, Rodrigues L, Oliveira R, Guedes S.  2007.  On the use of a reflective architecture to augment database management systems. Journal Of Universal Computer Science. 13:1110-1135.

The Database Management System (DBMS) used to be a commodity software component, with well known standard interfaces and semantics. However, the performance and reliability expectations being placed on DBMSs have increased the demand for a variety of add-ons that augment the functionality of the database in a wide range of deployment scenarios, offering support for features such as clustering, replication, and self-management, among others. The effectiveness of such extensions largely rests on closely matching the actual needs of applications, hence on a wide range of tradeoffs and configuration options out of the scope of traditional client interfaces. A well known software engineering approach to systems with such requirements is reflection. Unfortunately, standard reflective interfaces in DBMSs are very limited (for instance, they often do not support the desired range of atomicity guarantees in a distributed setting). Some of these limitations may be circumvented by implementing reflective features as a wrapper to the DBMS server. Unfortunately, this solution comes at the expense of a large development effort and a significant performance penalty. In this paper we propose a general purpose DBMS reflection architecture and interface that supports multiple extensions while, at the same time, admitting efficient implementations. We illustrate the usefulness of our proposal with concrete examples, and evaluate its cost and performance under different implementation strategies.

Rivière E, Baldoni R, Li H, Pereira JO.  2007.  Compositional gossip: a conceptual architecture for designing gossip-based applications. Operating Systems Review. 41:43–50.

Most proposed gossip-based systems use an ad hoc design, and we observe a low degree of reuse among these proposals. We show how this limits both the systematic development of gossip-based applications and the number of applications that can benefit from gossip-based construction, and we posit that these reinvent-the-wheel approaches pose a significant barrier to the spread and usability of gossip protocols. This paper advocates a conceptual design framework based upon aggregating basic, predefined building blocks. We show how to compose building blocks within our framework to construct more complex blocks to be used in gossip-based applications. The concept is further illustrated with two gossip-based applications described using our building blocks.

Almeida JB, Pinto JS, Vilaça M.  2007.  A Local Graph-rewriting System for Deciding Equality in Sum-product Theories. Electronic Notes in Theoretical Computer Science. 176(1):139-163.

In this paper we give a graph-based decision procedure for a calculus with sum and product types. Although our motivation comes from the Bird-Meertens approach to reasoning algebraically about functional programs, the language used here can be seen as the internal language of a category with binary products and coproducts. As such, the decision procedure presented has independent interest.
A standard approach based on term rewriting would work modulo a set of equations; the present work proposes a simpler approach, based on graph-rewriting. We show in turn how the system covers reflection equational laws, fusion laws, and cancellation laws.

Barbosa LS.  2007.  Configurations of Web Services. Electronic Notes in Theoretical Computer Science. 175:39-57.

The quest for sound foundations for the orchestration of web services is still open. To a great extent its relevance comes from the possibility of defining formal semantics for new language standards (like BPEL4WS or WS-CDL) in this emerging and challenging technology. As a step in that direction, this paper resorts to a notion of configuration, developed by the authors in the context of a Reo-like exogenous coordination model for software components, to formally express service orchestration. The latter is regarded as involving both the architectural assembly of independent services and the description of their interactions.

Barbosa LS.  2007.  An Orchestrator for Dynamic Interconnection of Software Components. Electronic Notes in Theoretical Computer Science. 181:49-61.

Composing and orchestrating software components is a fundamental concern in modern software engineering. This paper addresses the possibility of such orchestration being dynamic, in the sense that the structure of the components' interconnection patterns can change at run-time. The envisaged approach extends previous work by the authors on the use of coalgebraic models for the specification of software connectors.

Rodrigues N, Barbosa LS.  2007.  Higher-Order Lazy Functional Slicing. Journal of Universal Computer Science. 13:854–873.

Program slicing is a well known family of techniques intended to identify and isolate code fragments which depend on, or are depended upon, specific program entities. This is particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, and corresponding tools, target either the imperative or the object oriented paradigms, where program slices are computed with respect to a variable or a program statement. Taking a complementary point of view, this paper focuses on the slicing of higher-order functional programs under a lazy evaluation strategy. A prototype of a Haskell slicer, built as proof-of-concept for these ideas, is also introduced.

Avvenuti M, Corsini P, Masci P, Vecchio A.  2007.  An application adaptation layer for wireless sensor networks. Pervasive and Mobile Computing. 3:413–438.

In wireless sensor networks, poor performance or unexpected behavior may be experienced for several reasons, such as trivial deterioration of sensing hardware, unsatisfactory implementation of application logic, or mutated network conditions. This leads to the necessity of changing the application behavior after the network has been deployed. Such flexibility is still an open issue, as it can be achieved only at the expense of significant energy consumption or software complexity. This paper describes an approach to adapt the behavior of running applications by intercepting the calls made to the operating system services and changing their effects at run-time. Customization is obtained through small fragments of interpreted bytecode, called adaptlets, injected into the network by the base station. Differently from other approaches, where the entire application is interpreted, adaptlets are tied only to specific services, while the bulk of the application is still written in native code. This makes our system able to preserve the compactness and efficiency of native code and to have little impact on the overall application performance. Also, applications need not be rewritten, since the operating system interfaces are unaffected. The adaptation layer has been implemented in the context of TinyOS using an instruction set inspired by the Java bytecode. Examples that illustrate the programming of the adaptation layer are presented together with their experimental validation.

Avvenuti M, Corsini P, Masci P, Vecchio A.  2007.  Energy-efficient reception of large preambles in MAC protocols for wireless sensor networks. Electronics Letters. 43:300–301.

A technique able to significantly reduce the energy consumption of contention-based MAC protocols for wireless sensor networks is presented. Address and timing information is embedded into the packet preamble, allowing receivers to power off their radio during part of the transmission. Analytical and experimental evaluations show a significant extension of the network lifetime.

Mano A, Campos JC.  2006.  Usabilidade em interfaces para crianças. Jornal de Circunstâncias Cognitivas.

The study of the interaction between children and computers has received growing attention in recent years (see, for example, the Interaction Design and Children conference series). The use of computing technology, both for educational and for leisure activities (as well as their combination), has a generally recognised potential. However, it is necessary to study how children interpret and interact with interfaces if we intend to develop successful systems.
The goal of the study presented in this paper was precisely to examine how children between 5 and 7 years of age interact with computers. As mentioned, research in this area already exists, and some sets of design guidelines have been published (for example, Gilutz and Nielsen, 2002). The difference between the present study and other publications lies in how the data will be obtained and in the goals of the study itself. The main aim of the method used is not to discover what children can or cannot do with an interface, but rather the reasons why they succeed or fail in using it.

Barbosa LS, Oliveira JN.  2006.  Transposing Partial Components: an Exercise on Coalgebraic Refinement. Theoretical Computer Science. 365:2-22.

A partial component is a process which fails or dies at some stage, thus exhibiting a finite, more ephemeral behaviour than expected (e.g., an operating system crash). Partiality --- which is the rule rather than the exception in formal modelling --- can be treated mathematically via totalization techniques. In the case of partial functions, totalization involves error values and exceptions. In the context of a coalgebraic approach to component semantics, this paper argues that the behavioural counterpart to such functional techniques should extend behaviour with try-again cycles preventing component collapse, thus extending totalization or transposition from the algebraic to the coalgebraic context. We show that a refinement relationship holds between the original and totalized components, which is reasoned about in a coalgebraic approach to component refinement expressed in the pointfree binary relation calculus. As part of the pragmatic aims of this research, we also address the factorization of every such totalized coalgebra into two coalgebraic components --- the original one and an added front-end --- which cooperate in a client-server style.

Rodrigues N, Barbosa LS.  2006.  Component Identification Through Program Slicing. Electronic Notes in Theoretical Computer Science. 160:291-304.

This paper reports on the development of specific slicing techniques for functional programs and their use for the identification of possible coherent components from monolithic code. An associated tool is also introduced. This piece of research is part of a broader project on program understanding and re-engineering of legacy code supported by formal methods.

Ribeiro P, Barbosa LS.  2006.  Generic process algebra: A programming challenge. Journal of Universal Computer Science. 12:922–937.

Emerging interaction paradigms, such as service-oriented computing, and new technological challenges, such as exogenous component coordination, suggest new roles and application areas for process algebras. This, however, entails the need for more generic and adaptable approaches to their design. For example, some applications may require similar programming constructs coexisting with different interaction disciplines. In such a context, this paper pursues a research programme on a coinductive rephrasing of classic process algebra, proposing a clear separation between structural aspects and interaction disciplines. A particular emphasis is put on the study of interruption combinators defined by natural co-recursion. The paper also illustrates the verification of their properties in an equational and pointfree reasoning style as well as their direct encoding in Haskell.

Rodrigues N, Barbosa LS.  2006.  Program Slicing by Calculation. Journal of Universal Computer Science. 12:828–848.

Program slicing is a well known family of techniques used to identify code fragments which depend on, or are depended upon by, specific program entities. They are particularly useful in the areas of reverse engineering, program understanding, testing and software maintenance. Most slicing methods, usually oriented towards the imperative or object-oriented paradigms, are based on some sort of graph structure representing program dependencies. Slicing techniques amount, therefore, to (sophisticated) graph traversal algorithms. This paper proposes a completely different approach to the slicing problem for functional programs. Instead of extracting program information to build an underlying dependency structure, we resort to standard program calculation strategies, based on the so-called Bird-Meertens formalism. The slicing criterion is specified either as a projection or a hiding function which, once composed with the original program, leads to the identification of the intended slice. Going through a number of examples, the paper suggests this approach may be an interesting, even if not completely general, alternative to slicing functional programs.
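
The "slice as a composition" idea can be illustrated in miniature (a Python sketch with hypothetical names, standing in for the paper's Bird-Meertens-style Haskell calculus): a program producing a pair of results is composed with a projection, and calculation would then simplify that composition until the code for the discarded output disappears.

```python
def compose(f, g):
    """Point-free style composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def stats(xs):
    """A toy 'program' returning two results at once: (total, maximum)."""
    total, biggest = 0, xs[0]
    for x in xs:
        total += x
        biggest = max(biggest, x)
    return (total, biggest)

# The slicing criterion: keep only the first output.
fst = lambda pair: pair[0]

# By definition, the slice w.r.t. the first output is fst . stats.
slice_total = compose(fst, stats)

# Program calculation would simplify this composition until the code
# computing `biggest` vanishes, yielding an equivalent residual program:
def stats_sliced(xs):
    total = 0
    for x in xs:
        total += x
    return total
```

Here the specification `compose(fst, stats)` and the residual `stats_sliced` agree on every input; the paper's contribution is deriving the latter from the former by equational calculation rather than by graph traversal.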

Bernardeschi C, Lettieri G, Martini L, Masci P.  2006.  Using postdomination to reduce space requirements of data flow analysis. Information Processing Letters. 98:11–18.

Bernardeschi C, Lettieri G, Martini L, Masci P.  2006.  Using Control Dependencies for Space-Aware Bytecode Verification. Computer Journal. 49:234–248.

Java applets run on a Virtual Machine that checks code integrity and correctness before execution using a module called the Bytecode Verifier. Java Card technology allows Java applets to run on smart cards. The large memory requirements of the verification process do not allow the implementation of an embedded Bytecode Verifier in the Java Card Virtual Machine. To address this problem, we propose a verification algorithm that optimizes the use of system memory by imposing an ordering on the verification of the instructions. This algorithm is based on control flow dependencies and immediate postdominators in control flow graphs.
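
The postdominator information the ordering relies on can be computed by a standard iterative dataflow fixpoint (a generic textbook sketch, not the authors' verifier): a node n is postdominated by itself plus whatever postdominates all of its successors.

```python
def postdominators(cfg, exit_node):
    """Fixpoint of pdom(n) = {n} ∪ ⋂_{s in succ(n)} pdom(s),
    with the exit node postdominating only itself.
    `cfg` maps each node to its list of successors; every node is
    assumed to reach `exit_node`."""
    nodes = set(cfg)
    pdom = {n: set(nodes) for n in nodes}   # start from the top element
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in cfg[n]))
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

# Diamond-shaped control flow graph: 0 branches to 1 and 2, which
# rejoin at 3 (the exit). The join node 3 postdominates everything;
# neither branch arm postdominates the entry.
cfg = {0: [1, 2], 1: [3], 2: [3], 3: []}
pd = postdominators(cfg, exit_node=3)
```

The immediate postdominator of each node is the unique closest element of its set; ordering verification so that a join is processed once its branches are done is what lets the verifier discard intermediate type states early.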

Barbosa MB, Cunha A, Pinto JS.  2005.  Recursion Patterns and Time-analysis. SIGPLAN Notices. 40(5):45–54.

This paper explores some ideas concerning the time-analysis of functional programs defined by instantiating typical recursion patterns such as folds, unfolds, and hylomorphisms. The concepts in this paper are illustrated through a rich set of examples in the Haskell programming language. We concentrate on unfolds and folds (also known as anamorphisms and catamorphisms respectively) of recursively defined types, as well as the more general hylomorphism pattern. For the latter, we use as case-studies two famous sorting algorithms, mergesort and quicksort. Even though time analysis is not compositional, we argue that splitting functions to expose the explicit construction of the recursion tree and its later consumption helps with this analysis.
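
The mergesort case-study can be sketched as follows (a Python rendering of the hylomorphism structure, rather than the paper's Haskell): the unfold makes the recursion tree explicit, the fold consumes it, and the composition is the sorting algorithm, which is what makes the cost analysis readable off the tree's depth and width.

```python
def unfold_tree(xs):
    """Anamorphism: split the list until singletons, recording the
    recursion tree explicitly as nested tuples."""
    if len(xs) <= 1:
        return ("leaf", xs)
    mid = len(xs) // 2
    return ("node", unfold_tree(xs[:mid]), unfold_tree(xs[mid:]))

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

def fold_tree(tree):
    """Catamorphism: collapse the recursion tree bottom-up by merging."""
    if tree[0] == "leaf":
        return list(tree[1])
    return merge(fold_tree(tree[1]), fold_tree(tree[2]))

def mergesort(xs):
    """Hylomorphism: fold after unfold. The tree has O(log n) depth and
    O(n) work per level, exposing the O(n log n) cost structure."""
    return fold_tree(unfold_tree(list(xs)))
```

Splitting the algorithm this way is exactly the move the paper advocates: even though time analysis is not compositional, analysing the tree construction and its consumption separately makes the overall cost argument tractable.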

Cunha A, Pinto JS.  2005.  Point-free Program Transformation. Fundamenta Informaticae. 66(4):315–352.

The subject of this paper is functional program transformation in the so-called point-free style. By this we mean first translating programs to a form consisting only of categorically-inspired combinators, algebraic data types defined as fixed points of functors, and implicit recursion through the use of type-parameterized recursion patterns. This form is appropriate for reasoning about programs equationally, but difficult to actually use in practice for programming. In this paper we present a collection of libraries and tools developed at Minho with the aim of supporting the automatic conversion of programs to point-free (embedded in Haskell), their manipulation and rule-driven simplification, and the (limited) automatic application of fusion for program transformation.